How can a problem be undecidable yet enumerable? [duplicate]
Duplicate of: Relationship between Undecidable Problems and Recursively Enumerable languages
How can something be enumerable but undecidable? I.e., this states that the halting set is undecidable yet enumerable. Enumerable means it can be computed, i.e., it has the same cardinality as the natural numbers and can be computed by a program with an output. But if that's the case, it should be decidable. Or is decidability separate from computability?
Am I misunderstanding the intuition of the halting set?
computability undecidability halting-problem
Raphael♦
joker
Yes, you are misunderstanding. The non-halting set is not recursively enumerable. The halting set can be recursively enumerated, but the decision problem would require that both the halting set and the non-halting set are recursively enumerable. – Wandering Logic
"Enumerable means it can be computed, ie ... can be computed by a program with an output. But if thats the case it should be decidable." -- You need to revisit the definitions.
For me, the easiest way to think of it is this:
You're asking yourself "out of a bunch of candidates in a set $X$, do any of them fulfill the property $P$"?
Decidable/recursive means that you only need to look at a finite number of elements of $X$ before you hit some point where you know nothing that's left has the property.
A problem that's recursively enumerable, but not recursive, is one where you can look at each $x$ in $X$ one at a time. If you find one, then you're done: you know there exists one that fulfills your property. But there's no way of knowing when you're done, and $X$ is infinite, so if one doesn't exist, you will end up searching forever.
This contains some simplifications over the theoretical definitions, so be careful.
As an example: in the halting problem, you're looking at all $x_1, x_2, \ldots$ where each $x_i$ is the configuration of the Turing machine $M$ after $i$ steps. The property is: is $x_i$ a halting state? If you find one, you're done, but if you don't find one, there's no way to know when you're done.
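To make that search concrete, here is a toy sketch in Python (an illustration only: ordinary generators stand in for Turing-machine runs, and `halting_program` is a made-up example):

```python
def halts_within(program, steps):
    """Run `program` (a generator standing in for a Turing machine)
    for at most `steps` steps; True means it halted within the budget."""
    run = program()
    for _ in range(steps):
        try:
            next(run)
        except StopIteration:
            return True   # reached a halting state
    return False          # budget exhausted -- NOT a "no", just "not yet"

def semi_decide_halting(program, max_budget=None):
    """Accept if `program` ever halts, by trying ever larger step budgets.
    If it never halts, the loop runs forever -- which is exactly why this
    only semi-decides the problem. `max_budget` is an artificial cap so
    the sketch itself can terminate."""
    budget = 1
    while max_budget is None or budget <= max_budget:
        if halts_within(program, budget):
            return True
        budget += 1
    return None  # no verdict reached

def halting_program():
    for i in range(5):   # a "machine" that halts after five steps
        yield i
```

Searching with budgets 1, 2, 3, … eventually accepts every halting machine, but for a non-halting machine the search just keeps going — there is no point at which it can answer "no".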
Compare this to the SAT problem. You've got a Boolean formula, and you want to give each variable a True or False value to make the whole thing true. Even though it's really slow, you can try all true/false combinations. There's an exponential number of them, but it's finite. When you're done, you're done, and if none of them satisfied, you can halt, confident that there's no answer.
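The finiteness argument can be sketched the same way (a minimal brute-force checker; encoding the formula as a Python predicate is just for illustration):

```python
from itertools import product

def brute_force_sat(variables, formula):
    """Try every True/False assignment. There are 2**len(variables) of
    them -- exponentially many, but finitely many -- so the loop always
    terminates, and an exhausted search is a definitive "unsatisfiable"."""
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if formula(assignment):
            return assignment
    return None

# (x OR y) AND (NOT x OR y) -- satisfiable, e.g. with y = True
f = lambda a: (a["x"] or a["y"]) and (not a["x"] or a["y"])
```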
jmite
"Decidable/recursive means that you only need to look at a finite number of elements of X before you hit some point where you know nothing that's left has the property." No, that's a finite set. The set of even numbers, for example, is decidable but after looking at any finite amount of stuff, you can never say "There are no more even numbers."
Note that $X$ isn't the language, it's the search space. If you want to accept the set of all even numbers, your "search space" is trivially a singleton: the remainder when dividing by 2.
You need to make that way clearer, and you need to do it in the answer, not the comments. Up to the point that I quote, you don't even hint that you're talking about some separate search space.
Atwood's machine
A simple device for studying the laws of motion & forces
Atwood's machine used to give me fits. It can be confounding. Yet I've found that to understand all of the problems that can be posed by thinking about this very simple system is to have a deep understanding of forces and acceleration. So I offer my best understanding in the hope that it will help you.
Atwood's machine is illustrated in the animation on the right. It couldn't be simpler. It's just a pulley, through which runs a string or rope attached to two masses. We generally make the simplification that the string/rope and the pulley wheel are of negligible mass.
In this section, we'll go through a number of Atwood's scenarios, starting with the very simplest, two equal masses in static equilibrium (not moving). First a definition:
Tension is the sum of the forces pulling on either end of a string, rope, wire, cable, &c. If forces pull at both ends, they are additive. In certain problems, the force at one end of a moving string is not a pulling force, but works in the opposite direction. In that case, it must be subtracted from the force pulling at the other end. (A string with two "pushing" forces is not under tension.)
1. Equal masses, no acceleration
The illustration shows an equilibrium situation. The two masses (M) are equivalent, thus the force of gravity on each is equal. The upward force opposing gravity is the tension (T) in the string.
For the system to be in equilibrium, T = Fg. The net force is 2Fg - 2T = 0, so there is no acceleration.
The tension in the string is 2T or 2Fg. The string supports both masses, so we would expect the tension in this case to be the sum of the two downward forces.
Note: In the simplest of Atwood's machine problems, we usually make two simplifications, that the mass of the pulley is zero and that pulley/rope system is frictionless. We can relax those restrictions later when we get good at the easy problems.
2. Equal masses; Constant velocity
Constant velocity means no acceleration, so the net force in the system must still be zero. So this situation is a lot like the static one above.
In this case, the upward velocity of one mass has the same magnitude, but the opposite direction, as the downward velocity of the other, and both are constant. Because neither velocity changes, there is no acceleration.
You may have encountered problems like this. For example, if a constant force exactly equal to the opposing force of friction is applied to a sliding object, the object moves at constant velocity — no acceleration.
Of course, when the mass on the left hits the pulley, the system stops, and that's acceleration, a change in velocity, and that's a different scenario...
3. Unequal masses; Acceleration
Things get interesting when the masses are unequal. Now if we let the system go, we get acceleration in the downward direction of the heavier mass.
The gravitational forces on each mass are now unequal, and the net force, the vector sum of the two gravitational forces, is in the direction of the heavier mass. That means the net acceleration vector is in the direction of M2, as shown (upper right).
The total acceleration of the system is the same for both masses; M1 accelerates upward at the same rate as the downward acceleration of M2 because they are tied together. We can treat the whole system as a single mass, M = M1 + M2.
Net force in the system
The net force in the system (see the diagram above) is just the sum of the forces at work in one direction, say the downward direction of mass 2,
$$ \begin{align} F_{net} = F_2 - F_1 &= m_2g - m_1g \\ &= (m_2 - m_1)g \end{align}$$
F2 and F1 are the gravitational forces on the masses. Using F = mg and combining the terms (factoring out g) gives us the result.
Now the net force in the system can also be represented by Newton's second law as the sum of the masses multiplied by their acceleration:
$$F_{net} = (m_1 + m_2)a$$
Rearranging, we find that the acceleration of the system is the net force, Fnet, divided by the total mass (we are assuming a massless pulley and rope):
$$a = \frac{(m_2 - m_1) g}{m_1 + m_2}$$
Tension in the system
The tension in the string of an ideal Atwood's machine (massless string and pulley) is the same everywhere along the string, whether the system is at equilibrium or accelerating. We can calculate it from either mass, using the common acceleration of the system.
To find the tension, treat each mass independently and use the common acceleration.
The tension on the left side of the string is the sum of the downward gravitational force and the upward accelerating force of the system; these forces both stretch the string:
$$T_{left} = M_1g + M_1a = M_1(g + a)$$
The tension in the right side of the string is the force of gravity on M2 minus the downward acceleration force:
$$T_{right} = M_2 g - M_2 a = M_2 (g - a)$$
Think of the right side this way: $a_{net}$ does not work to further stretch the right side of the string. It works in the direction of compression, which amounts to reducing the equilibrium tension.
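Putting the formulas together, here is a small sketch (mine, not part of the original page) that computes the acceleration and the two tension expressions; note that algebraically the two expressions give the same number, as they must for a massless string:

```python
def atwood(m1, m2, g=9.8):
    """Ideal Atwood's machine (massless, frictionless pulley and string).
    m1 is the lighter mass, m2 the heavier; returns (a, T_light, T_heavy)."""
    a = (m2 - m1) * g / (m1 + m2)   # a = (m2 - m1)g / (m1 + m2)
    t_light = m1 * (g + a)          # lighter side: gravity plus the accelerating force
    t_heavy = m2 * (g - a)          # heavier side: gravity minus the accelerating force
    return a, t_light, t_heavy
```

Both tension expressions reduce to 2·m1·m2·g/(m1 + m2), so the string tension is uniform even while the system accelerates.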
Example 1 – acceleration & tension
Consider the Atwood machine below. Calculate the acceleration of the system and the tension in each rope after the system is released.
Solution: The first thing we should always do is label the figures with the appropriate vectors:
We showed above that the acceleration of such a system is:

$$a = \frac{(m_2 - m_1) g}{m_1 + m_2}$$
where we take m2 as the larger mass and m1 as the smaller (it doesn't really matter as long as we know that positive acceleration is in the direction of the larger mass). Plugging in our masses and g = 9.8 m/s2 gives
$$a = \frac{[(4 - 2) \; kg] \cdot 9.8 \; m/s^2}{(2 + 4) \; kg}$$
The kg units cancel to give
$$a = \frac{2 \cdot 9.8}{6} \; m/s^2$$
and finally, our acceleration is
$$a = 3.27 \; m/s^2$$
We showed above that the tension on the heavy side (Th) is

$$T_{heavy} = M_2 g - M_2 a = M_2(g - a),$$
where a is the acceleration of the system we calculated above. Plugging in what we know gives
$$ \begin{align} T_{heavy} &= M_h g - M_h a = M_h (g - a) \\ &= 4 \; kg \, (9.8 - 3.27) \; m/s^2 \\ &= 26.12 \; N \end{align}$$
The tension on the light side is found the same way:

$$ \begin{align} T_{light} &= M_L g + M_L a = M_L (g + a) \\ &= 2 \; kg \, (9.8 + 3.27) \; m/s^2 \\ &= 26.14 \; N \end{align}$$

(The small difference between the two results is a rounding artifact: with the unrounded acceleration, both tensions come out to about 26.13 N, as they must for a massless string.)
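As a quick check, the example's arithmetic can be reproduced in a few lines (a sketch; keeping the unrounded acceleration shows that the two tensions are actually equal):

```python
m_light, m_heavy, g = 2.0, 4.0, 9.8   # masses in kg, g in m/s^2

# acceleration of the system: (m2 - m1)g / (m1 + m2) = 19.6 / 6
a = (m_heavy - m_light) * g / (m_light + m_heavy)

t_heavy = m_heavy * (g - a)   # heavy side: M(g - a)
t_light = m_light * (g + a)   # light side: M(g + a)

print(round(a, 2))                            # 3.27
print(round(t_heavy, 2), round(t_light, 2))   # 26.13 26.13
```

The 26.12 N vs. 26.14 N above comes from rounding a to 3.27 m/s² before substituting.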
Dr. Cruzan's Math and Science Web by Dr. Jeff Cruzan is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. © 2012, Jeff Cruzan. All text and images on this website not specifically attributed to another source were created by me and I reserve all rights as to their use. Any opinions expressed on this website are entirely mine, and do not necessarily reflect the views of any of my employers. Please feel free to send any questions or comments to [email protected].
Increased gait variability during robot-assisted walking is accompanied by increased sensorimotor brain activity in healthy people
Alisa Berger ORCID: orcid.org/0000-0003-2888-70641,
Fabian Horst2,
Fabian Steinberg1,3,
Fabian Thomas1,
Claudia Müller-Eising4,
Wolfgang I. Schöllhorn2 &
Michael Doppelmayr1,5
Journal of NeuroEngineering and Rehabilitation volume 16, Article number: 161 (2019) Cite this article
Gait disorders are major symptoms of neurological diseases affecting the quality of life. Interventions that restore walking and allow patients to maintain safe and independent mobility are essential. Robot-assisted gait training (RAGT) has proved to be a promising treatment for restoring and improving the ability to walk. Due to heterogeneous study designs and fragmentary knowledge about the neural correlates associated with RAGT and their relation to motor recovery, guidelines for an individually optimized therapy can hardly be derived. To optimize robotic rehabilitation, it is crucial to understand how robotic assistance affects locomotor control and its underlying brain activity. Thus, this study aimed to investigate the effects of robotic assistance (RA) during treadmill walking (TW) on cortical activity and the relationship between RA-related changes of cortical activity and biomechanical gait characteristics.
Twelve healthy, right-handed volunteers (9 females; M = 25 ± 4 years) performed unassisted walking (UAW) and robot-assisted walking (RAW) trials on a treadmill, at 2.8 km/h, in a randomized, within-subject design. Ground reaction forces (GRFs) provided information regarding the individual gait patterns, while brain activity was examined by measuring cerebral hemodynamic changes in brain regions associated with the cortical locomotor network, including the sensorimotor cortex (SMC), premotor cortex (PMC) and supplementary motor area (SMA), using functional near-infrared spectroscopy (fNIRS).
A statistically significant increase in brain activity was observed in the SMC compared with the PMC and SMA (p < 0.05), and a classical double bump in the vertical GRF was observed during both UAW and RAW throughout the stance phase. However, intraindividual gait variability increased significantly with RA and was correlated with increased brain activity in the SMC (p = 0.05; r = 0.57).
On the one hand, robotic guidance could generate sensory feedback that promotes active participation, leading to increased gait variability and somatosensory brain activity. On the other hand, changes in brain activity and biomechanical gait characteristics may also be due to the sensory feedback of the robot, which disrupts the cortical network of automated walking in healthy individuals. More comprehensive neurophysiological studies both in laboratory and in clinical settings are necessary to investigate the entire brain network associated with RAW.
Safe and independent locomotion represents a fundamental motor function for humans that is essential for self-contained living and good quality of life [1,2,3,4,5]. Locomotion requires the ability to coordinate a number of different muscles acting on different joints [6,7,8], which are guided by cortical and subcortical brain structures within the locomotor network [9]. Structural and functional changes within the locomotor network are often accompanied by gait and balance impairments, which are frequently considered to be the most significant concerns in individuals suffering from brain injuries or neurological diseases [5, 10, 11]. Reduced walking speeds and step lengths [12] as well as a non-optimal amount of gait variability [13,14,15] are common symptoms associated with gait impairments that increase the risk of falling [16].
In addition to manual-assisted therapy, robotic neurorehabilitation has often been applied in recent years [17, 18] because it provides early, intensive, task-specific and multi-sensory training which is thought to be effective for balance and gait recovery [17, 19, 20]. Depending on the severity of the disease, movements can be completely guided or assisted, tailored to individual needs [17], using either stationary robotic systems or wearable powered exoskeletons.
Previous studies investigated the effectiveness of robot-assisted gait training (RAGT) in patients suffering from stroke [21, 22], multiple sclerosis [23,24,25,26], Parkinson's disease [27, 28], traumatic brain injury [29] or spinal cord injury [30,31,32]. Positive effects of RAGT on walking speed [33, 34], leg muscle force [23], step length, and gait symmetry [29, 35] were reported. However, the results of different studies are difficult to summarize due to the lack of consistency in protocols and settings of robot-assisted treatments (e.g., amount and frequency of training sessions, amount and type of provided robotic support) as well as fragmentary knowledge of the effects on functional brain reorganization, motor recovery and their relation [36, 37]. Therefore, it is currently a huge challenge to draw up guidelines for robotic rehabilitation protocols [22, 36,37,38]. To design prolonged personalized training protocols in robotic rehabilitation that maximize individual treatment effects [37], it is crucial to increase the understanding of changes in locomotor patterns [39] and brain signals [40] underlying RAGT and how they are related [36, 41].
A series of studies investigated the effects of robotic assistance (RA) on biomechanical gait patterns in healthy people [39, 42,43,44]. On one side, altered gait patterns were reported during robot-assisted walking (RAW) compared to unassisted walking (UAW), in particular, substantially higher muscle activity in the quadriceps, gluteus and adductor longus leg muscles and lower muscle activity in the gastrocnemius and tibialis anterior ankle muscles [39, 42], as well as reduced lower-body joint angles due to the limited medial-lateral hip movements [45,46,47]. On the other side, similar muscle activation patterns were observed during RAW compared to UAW [44, 48, 49], indicating that robotic devices allow physiological muscle activation patterns during gait [48]. However, it is hypothesized that the ability to execute a physiological gait pattern depends on how training parameters such as body weight support (BWS), guidance force (GF) or kinematic restrictions in the robotic devices are set [44, 48, 50]. For example, Aurich-Schuler et al. [48] reported that the movements of the trunk and pelvis are more similar to UAW on a treadmill when the pelvis is not fixed during RAW, indicating that differences in muscle activity and kinematic gait characteristics between RAW and UAW are due to the reduction in degrees of freedom that users experience while walking in the robotic device [45]. In line with this, a clinical concern that is often raised with respect to RAW is the lack of gait variability [45, 48, 50]. It is assumed that since the robotic systems are often operated with 100% GF, which means that the devices attempt to force a particular gait pattern regardless of the user's intentions, the user lacks the ability to vary and adapt his gait patterns [45]. Contrary to this, Hidler et al. [45] observed differences in kinematic gait patterns between subsequent steps during RAW, as demonstrated by variability in relative knee and hip movements.
Nevertheless, Gizzi et al. [49] showed that the muscular activity during RAW was clearly more stereotyped and similar among individuals compared to UAW. They concluded that RAW provides a therapeutic approach to restore and improve walking that is more repeatable and standardized than approaches based on exercising during UAW [49].
In addition to biomechanical gait changes, insights into brain activity and intervention-related changes in brain activity that relate to gait responses will contribute to the optimization of therapy interventions [41, 51]. Whereas the application of functional magnetic resonance imaging (fMRI), considered the gold standard for the assessment of activity in cortical and subcortical structures, is restricted due to its vulnerability to movement artifacts and the limited range of motion in the scanner [52], functional near-infrared spectroscopy (fNIRS) is affordable and easily implementable in a portable system, and less susceptible to motion artifacts, thus facilitating a wider range of applications with special cohorts (e.g., children, patients) and in everyday environments (e.g., during a therapeutic session of RAW or UAW) [53, 54]. Although its resolution is lower compared to fMRI [55], fNIRS also relies on the principle of neurovascular coupling and allows the indirect evaluation of cortical activation [56, 57] based on hemodynamic changes which are analogous to the blood-oxygenation-level-dependent responses measured by fMRI [56]. Despite limited depth sensitivity, which restricts the measurement of brain activity to cortical layers, it is a promising tool to investigate the contribution of cortical areas to the neuromotor control of gross motor skills, such as walking [53]. Regarding the cortical correlates of walking, numerous studies identified either increased oxygenated hemoglobin (Hboxy) concentration changes in the sensorimotor cortex (SMC) by using fNIRS [53, 57,58,59] or suppressed alpha and beta power in sensorimotor areas by using electroencephalography (EEG) [60,61,62], demonstrating that the motor cortex and corticospinal tract contribute directly to the muscle activity of locomotion [63]. However, brain activity during RAW [36, 61, 64,65,66,67,68], especially in patients [69, 70] or by using fNIRS [68, 69], is rarely studied [71].
Analyzing the effects of RA on brain activity in healthy volunteers, Knaepen et al. [36] reported significantly suppressed alpha and beta rhythms in the right sensory cortex during UAW compared to RAW with 100% GF and 0% BWS. Thus, a significantly larger involvement of the SMC during UAW compared to RAW was concluded [36]. In contrast, increases of Hboxy were observed in motor areas during RAW compared to UAW, leading to the conclusion that RA facilitated increased cortical activation within locomotor control systems [68]. Furthermore, Simis et al. [69] demonstrated the feasibility of fNIRS to evaluate the real-time activation of the primary motor cortex (M1) in both hemispheres during RAW in patients suffering from spinal cord injury. Two out of three patients exhibited enhanced M1 activation during RAW compared with standing, which indicates the enhanced involvement of motor cortical areas in walking with RA [69].
To summarize, previous studies mostly focused on the effects of RA on either gait characteristics or brain activity. Combined measurements investigating the effects of RA on both biomechanical and hemodynamic patterns might contribute to a better understanding of the neurophysiological mechanisms underlying gait and gait disorders as well as of the effectiveness of robotic rehabilitation on motor recovery [37, 71]. Up to now, no consensus exists regarding how robotic devices should be designed, controlled or adjusted (i.e., device settings, such as the level of support) for synergistic interactions with the human body to achieve optimal neurorehabilitation [37, 72]. Therefore, further research concerning the behavioral and neurophysiological mechanisms underlying RAW as well as the modulatory effect of RAGT on neuroplasticity and gait recovery is required, given that such knowledge is of clinical relevance for the development of gait rehabilitation strategies.
Consequently, the central purpose of this study was to investigate both gait characteristics and hemodynamic activity during RAW to identify RAW-related changes in brain activity and their relationship to gait responses. Assuming that sensorimotor areas play a pivotal role within the cortical network of automatic gait [9, 53] and that RA affects gait and brain patterns in young, healthy volunteers [39, 42, 45, 68], we hypothesized that RA results in both altered gait and altered brain activity patterns. Based on previous studies, more stereotypical gait characteristics with less inter- and intraindividual variability are expected during RAW due to 100% GF and the fixed pelvis compared to UAW [45, 48], whereas brain activity in the SMC can be either decreased [36] or increased [68].
This study was performed in accordance with the Declaration of Helsinki. Experimental procedures were performed in accordance with the recommendations of the Deutsche Gesellschaft für Psychologie and were approved by the ethical committee of the Medical Association Hessen in Frankfurt (Germany). The participants were informed about all relevant study-related contents and gave their written consent prior to the initiation of the experiment.
Twelve healthy subjects (9 female, 3 male; aged 25 ± 4 years), without any gait pathologies and free of extremity injuries, were recruited to participate in this study. All participants were right-handed, according to the Edinburgh handedness scale [73], without any neurological or psychological disorders and with normal or corrected-to-normal vision. All participants were requested to disclose pre-existing neurological and psychological conditions, medical conditions, drug intake, and alcohol or caffeine intake during the preceding week.
The Lokomat (Hocoma AG, Volketswil, Switzerland) is a robotic gait-orthosis, consisting of a motorized treadmill and a BWS system. Two robotic actuators can guide the knee and hip joints of participants to match pre-programmed gait patterns, which were derived from average joint trajectories of healthy walkers, using a GF ranging from 0 to 100% [74, 75] (Fig. 1a). Kinematic trajectories can be adjusted to each individual's size and step preferences [45]. The BWS was adjusted to 30% body weight for each participant, and the control mode was set to provide 100% guidance [64].
Montage and Setup. a Participant during robot-assisted walking (RAW), with functional near-infrared spectroscopy (fNIRS) montage. b fNIRS montage; S = Sources; D = Detectors c Classification of regions of interest (ROI): supplementary motor area/premotor cortex (SMA/PMC) and sensorimotor cortex (SMC)
Functional activation of the human cerebral cortex was recorded using a near-infrared optical tomographic imaging device (NIRSport, NIRx, Germany; wavelengths: 760 nm, 850 nm; sampling rate: 7.81 Hz). The methodology and the underlying physiology are explained in detail elsewhere [76]. A total of 16 optodes (8 emitters, 8 detectors) were placed with an interoptode distance of 3 cm [53, 54] above the motor cortex, based on the landmarks of the international 10–5 EEG system [77], resulting in 24 channels (source-detector pairs) of measurement (Fig. 1b). The spatial resolution was up to 1 cm. Head dimensions were individually measured and corresponding cap sizes assigned. Channel positions covered identical regions of both hemispheres, including the SMC (Brodmann Area [BA] 1–4) and the supplementary motor area/premotor cortex (SMA/PMC; BA 6) (Fig. 1c).
Participants were equipped with standardized running shoes (Saucony Ride 9, Saucony, USA). Pressure insoles (Pedar mobile system, Novel GmbH, Germany) were inserted into the shoes for the synchronized measurement of plantar foot pressure, at a frequency of 100 Hz. Each insole consists of 99 capacitive sensors and covers the entire plantar area. The data recording process was managed by the software Novel Pedar-X Recorder 25.6.3 (Novel GmbH, Germany), and the vertical ground reaction force (GRF) was estimated for the analysis of kinetic and temporal gait variables.
Participants performed two blocks, (1) UAW and (2) RAW, in a randomized order. Each block consisted of five walking trials (60 s) and intertrial standing intervals of 60 s [41, 53, 68, 78] (Fig. 2). While walking, the participants were instructed to actively follow the orthosis's guidance while watching a neutral symbol (a black cross) on a screen at eye level, to ensure the most natural walking possible in an upright posture. During standing (rest), participants were instructed to stand with their feet shoulder-width apart while watching the same black cross. Furthermore, the participants were requested to avoid head movements and talking during the entire experiment, to reduce motion and physiological artifacts [78]. Prior to the experiment, individual adjustments of the Lokomat were undertaken, according to common practices in clinical therapy. The safety procedures of the rehabilitation center required that all subjects wore straps around the front foot to assist with ankle dorsiflexion. To familiarize themselves with the robotic device and treadmill walking (TW), participants walked with and without the Lokomat for 4 min before the experiment started.
Study design and schematic illustration of unassisted walking (UAW) and robot-assisted walking (RAW)
Data processing and analysis
fNIRS raw data were preprocessed and analyzed using the time series analysis routine available in the MATLAB-based NIRSlab analysis package (v2017.05, NIRx Medical Technologies, Glen Head, NY) [79], following current recommendations when possible [53, 78]. In each channel of each individual participant, the fNIRS signal was visually inspected with respect to transient spikes and abrupt discontinuities, which represent the two most common forms of movement artifacts in fNIRS data. First, sections containing discontinuities (or "jumps") as well as long-term drifts were detected and corrected (standard deviation threshold = 5) [79]. Second, spikes were smoothed by a procedure that replaces contaminated data with the nearest signal [79]. Third, a band-pass filter (0.01 to 0.2 Hz) was applied to attenuate slow drifts and high-frequency noise, reducing unknown global trends due to breathing, respiratory or cardiac rhythms, vasomotion, or other movement artifacts [59]. Then, time series of the hemodynamic states of Hboxy and deoxygenated hemoglobin (Hbdeoxy) were computed using the modified Beer-Lambert law [80, 81]. The following parameters were specified: wavelengths (WL1 = 760 nm; WL2 = 850 nm), differential pathlength factors (7.25 for WL1; 6.38 for WL2), interoptode distances (3 cm), and background tissue values (totHb: 75 uM; MVO2Sat: 70%).
Preprocessed Hboxy concentration changes (∆Hboxy) were exported and processed as follows: 50 s per walking trial were used to analyze the hemodynamic responses during (1) UAW and (2) RAW due to the time needed for the acceleration and deceleration of the treadmill. The averaged baseline concentration values of rest before each walking trial were subtracted from the task-evoked concentration measurements, to account for time-dependent changes in cerebral oxygenation [78]. ∆Hboxy were calculated for regions of interest (ROI) (see Fig. 1c) during both UAW and RAW and used as a marker for the regional cortical activation, since it is more sensitive to locomotion-related activities than Hbdeoxy [82] and represents an accurate indicator of hemodynamic activity [83].
GRFs were preprocessed and analyzed using Matlab 2017b (MathWorks, USA). GRFs were filtered using a second-order Butterworth bidirectional low-pass filter, at a cut-off frequency of 30 Hz. Offline processing included kinetic and temporal variables that were calculated based on stance-phase detection, using a GRF threshold of 50 N. The first and last ten stance phases (steps) of each of the five walking trials were excluded from the analysis because they corresponded with the acceleration and deceleration phases of the treadmill. The swing and stance phase times were measured. The stance phase was also subdivided into initial double-limb, single-limb and terminal double-limb support times. Furthermore, the number of steps and the cadence were calculated. Kinetic variables were analyzed during the stance phase of walking. The GRF values were normalized against body mass and were time-normalized against 101 data points corresponding with the stance phase of walking. Gait variability was estimated for the time-continuous GRF during the stance phase, using the coefficient of variation (CV) [84]. According to Eq. (1), the intraindividual CV was calculated based on the mean (\( \overline{GRF_{s,b,i}} \)) and standard deviation (σs, b, i) of the normalized GRF at the i-th interval of a concatenated vector of the right and left leg stance phases. The intraindividual CV was calculated for each subject s and both blocks b (RAW and UAW).
$$ IntraindividualCV\left(s,b\right)=\frac{\sqrt{\frac{1}{202}\ast {\sum}_{i=1}^{202}{\sigma_{s,b,i}}^2}}{\frac{1}{202}\ast {\sum}_{i=1}^{202}\mid \overline{GR{F}_{s,b,i}}\mid}\ast 100\left[\%\right] $$
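In code, Eq. (1) amounts to the RMS of the interval-wise standard deviations, normalized by the mean absolute ensemble-average GRF (a sketch; the 202 intervals are the concatenated right and left stance phases):

```python
import numpy as np

def intraindividual_cv(stance_grfs):
    """Eq. (1). `stance_grfs` has shape (n_steps, 202): each row is one
    stride's time-normalized GRF, right and left stance concatenated."""
    grf = np.asarray(stance_grfs, dtype=float)
    mean_curve = grf.mean(axis=0)        # ensemble average per interval
    sd_curve = grf.std(axis=0, ddof=0)   # per-interval standard deviation
    rms_sd = np.sqrt(np.mean(sd_curve ** 2))
    return 100.0 * rms_sd / np.mean(np.abs(mean_curve))
```

The interindividual CV of Eq. (2) is the same computation applied to the subjects' mean curves instead of one subject's individual strides.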
Similarly, interindividual variability was estimated across the subjects' mean GRFs, each calculated across the time-continuous GRF of all stance phases of one subject. According to Eq. (2), the interindividual CV was calculated based on the mean (\( \overline{GRF_{\overline{s},b,i}} \)) and standard deviation (\( {\sigma}_{\overline{s},b,i} \)) of the normalized subjects' mean GRF at the i-th interval of the concatenated vector of the right and left leg stance phases. The interindividual CV was calculated for both blocks b (RAW and UAW).
$$ InterindividualCV(b)=\frac{\sqrt{\frac{1}{202}\ast {\sum}_{i=1}^{202}{\sigma_{\overline{s},b,i}}^2}}{\frac{1}{202}\ast {\sum}_{i=1}^{202}\mid \overline{GR{F}_{\overline{s},b,i}}\mid}\ast 100\left[\%\right] $$
The symmetry index (SI) according to Herzog et al. [85] assesses differences between the variables associated with the two lower limbs during walking; here, its absolute magnitude was adapted to the i time-intervals of the time-continuous GRF. According to Eq. (3), the SI was calculated from the absolute difference of the mean normalized GRFs (\( \overline{GRF_{right}}_{s,b,i} \) and \( \overline{GRF_{left}}_{s,b,i} \)) at the i-th interval for each subject s and both blocks b (RAW and UAW). An SI value of 0% indicates full symmetry, while SI values > 0% indicate the degree of asymmetry [85].
$$ SI\left(s,b\right)=\frac{1}{101}\ast \left(\sum \limits_{i=1}^{101}\frac{\mid \overline{GR{F_{right}}_{s,b,i}}-\overline{GR{F_{left}}_{s,b,i}}\mid }{\frac{1}{2}\ast \mid \overline{GR{F_{right}}_{s,b,i}}+\overline{GR{F_{left}}_{s,b,i}}\mid}\ast 100\right)\left[\%\right] $$
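Eq. (3) reduces to a short NumPy function (a sketch under the assumption that both mean curves are already time-normalized to 101 stance-phase points; the function name is ours):

```python
import numpy as np

def symmetry_index(grf_right: np.ndarray, grf_left: np.ndarray) -> float:
    """Absolute symmetry index (SI) after Eq. (3); 0 % = full symmetry.

    grf_right, grf_left: mean normalized GRF curves of the right and
    left leg, each with 101 points over the stance phase (assumed).
    """
    diff = np.abs(grf_right - grf_left)        # |right - left| per interval
    ref = 0.5 * np.abs(grf_right + grf_left)   # mean magnitude per interval
    return float(np.mean(diff / ref * 100.0))  # averaged over the 101 points
```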
Based on the time-continuous vertical GRF waveforms, three time-discrete variables were derived within the stance phase: the magnitude of the first peak (weight acceptance), the valley (mid-stance) and the magnitude of the second peak (push-off), as well as their temporal appearances during the stance phase.
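For a clean M-shaped curve, these three landmarks can be located with a simple split-and-argmax heuristic. This is only a sketch (a real pipeline would typically use a dedicated peak detector such as `scipy.signal.find_peaks`), and the function name and 101-point input are our assumptions:

```python
import numpy as np

def grf_landmarks(grf: np.ndarray):
    """Locate first peak, valley and second peak of a vertical GRF curve.

    grf: 101-point time-normalized stance-phase curve with a clear
    double bump (M-shape). Returns three (index, magnitude) pairs;
    with 101 points the index directly reads as % of stance.
    """
    mid = len(grf) // 2
    i1 = int(np.argmax(grf[:mid]))             # first peak: weight acceptance
    i2 = mid + int(np.argmax(grf[mid:]))       # second peak: push-off
    iv = i1 + int(np.argmin(grf[i1:i2 + 1]))   # valley: mid-stance
    return (i1, grf[i1]), (iv, grf[iv]), (i2, grf[i2])
```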
The statistical analysis was conducted using SPSS 23 (IBM, Armonk, New York, USA). Normal distribution was examined for both hemodynamic and kinetic/temporal variables using the Shapiro-Wilk test (p ≥ 0.05). Averaged Hboxy values were computed for each subject and ROI (SMA/PMC, SMC) during both UAW and RAW [53, 78] and were normalized (normHboxy) by dividing them by the corresponding signal amplitude of the whole experiment [41, 59]. A two-way analysis of variance (ANOVA) with the factors condition (UAW or RAW) and ROI (SMA/PMC, SMC) was used to analyze differences in cortical hemodynamic patterns. In cases of significant main effects, Bonferroni-adjusted post hoc analyses provided statistical information on the differences among the ROIs by condition. Temporal and kinetic gait variables were tested for differences between the experimental conditions (UAW and RAW) using paired t-tests. The overall level of significance was set to p ≤ 0.05. Mauchly's test was used to check for violations of sphericity. If a violation of sphericity was detected (p < 0.05) and the Greenhouse-Geisser epsilon was ε > 0.75, Huynh-Feldt-corrected p-values were reported; otherwise (ε < 0.75), the Greenhouse-Geisser correction was applied. Effect sizes were given as partial eta-squared (ƞp2) and interpreted according to Cohen. The association between cortical activation and gait characteristics was explored using Pearson's correlation coefficient.
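The paired comparison and the correlation step can be mirrored with SciPy. The numbers below are fabricated purely for illustration — none of them come from the study:

```python
import numpy as np
from scipy import stats

# Fabricated stance-phase times (s) for 12 subjects, illustration only
uaw = np.array([0.58, 0.62, 0.60, 0.57, 0.63, 0.61,
                0.59, 0.60, 0.64, 0.56, 0.61, 0.59])
raw = uaw + np.array([0.04, 0.06, 0.05, 0.07, 0.03, 0.05,
                      0.06, 0.04, 0.05, 0.06, 0.04, 0.05])

# Shapiro-Wilk normality check on the paired differences (p >= 0.05 -> normal)
_, p_norm = stats.shapiro(raw - uaw)

# Paired t-test between conditions, as for the temporal/kinetic gait variables
t_val, p_val = stats.ttest_rel(raw, uaw)

# Pearson correlation, as in the cortical-activity vs. gait-variability analysis
r, p_r = stats.pearsonr(raw, uaw)
```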
Cortical activity (Hboxy)
The effect of RAW on ∆Hboxy in locomotor cortical areas was analyzed using a two-way repeated measures ANOVA with the factors ROI (SMA/PMC, SMC) and CONDITION (UAW, RAW), with ∆Hboxy as the dependent variable. A significant main effect for ROI [F(1,11) = 11.610, p = 0.006, ƞp2 = 0.513] was found, indicating significantly greater ∆Hboxy values in the 17 channels (4–12 and 17–24) covering regions of the SMC [BA1–4] compared with the 7 channels (1–3, 13–16) covering regions of the SMA/PMC [BA6] (p = 0.052), independent of the condition. Neither CONDITION [F(1,11) = 1.204, p = 0.296, ƞp2 = 0.099] nor the interaction ROI × CONDITION [F(1,11) = 0.092, p = 0.767, ƞp2 = 0.008] was significant (Fig. 3).
Normalized oxygenated hemoglobin (Hboxy; mean ± SME) for unassisted-walking (UAW) and robot-assisted walking (RAW). SMA/PMC, supplementary motor area/premotor cortex; SMC, sensorimotor cortex; SME = standard mean error
Gait characteristics
Descriptive analyses of the mean vertical GRFs show a "classical" double bump (M-shape) during the stance phase [84] for both UAW and RAW (Fig. 4). However, several differences in the gait characteristics were observed between the two conditions. First, the mean vertical GRFs were lower during RAW than during UAW. Second, the first peak appeared relatively earlier and the second peak relatively later during RAW compared with UAW. Third, the vertical GRFs had higher standard deviations during RAW than during UAW. Statistical analyses of the time-discrete kinetic gait variables confirmed significantly lower GRFs as well as earlier and later appearances of the first and second vertical GRF peaks, respectively, during RAW than during UAW (Table 1).
Normalized vertical ground reaction force (GRF; mean ± SD) during the stance phase of unassisted walking (UAW) and robot-assisted walking (RAW). In Additional file 1, normalized vertical GRF during the stance phase of UAW (Figure S1) and RAW (Figure S2) are presented for each individual participant
Table 1 Comparison of vertical ground reaction force variables (GRF; mean ± SD) during the stance phase of unassisted walking (UAW) and robot-assisted walking (RAW), SD = standard deviation
Fourth, significantly increased inter- and intraindividual variability, increased asymmetry between the time-continuous GRFs of the right and left feet (SI values) and significantly longer stance and swing phases emerged during RAW compared with UAW, despite the guidance of the robotic device and the identical treadmill velocity (Table 2). Accordingly, fewer steps and lower cadence values were observed during RAW than during UAW.
Table 2 Comparison of temporal gait variables (mean ± SD) during unassisted walking (UAW) and robot-assisted walking (RAW)
Association between changes in cortical activity and gait characteristics
Correlation analyses showed that changes in gait characteristics due to RA were also associated with changes in cortical activity. During RAW, a positive association between gait variability and Hboxy was observed only in the SMC (p = 0.052, r = 0.570). No further correlations were found during UAW or for other brain regions (SMA/PMC p = 0.951, r = 0.020). Thus, increased gait variability during RAW was associated with increased brain activity in the SMC (Fig. 5b).
Correlations between relative oxygenated hemoglobin (Hboxy) and gait variability calculated by intraindividual coefficient of variation (CV) during unassisted-walking (UAW) and robot-assisted walking (RAW). a SMA/PMC, supplementary motor area/premotor cortex; b SMC, sensorimotor cortex; the shaded area represents the 95% confidence interval
In this study, the effects of RA on cortical activity during TW and the relationship to changes in gait characteristics were investigated. We identified a classical double bump in the GRF, throughout the stance phase during both UAW and RAW, which was accompanied by significantly increased brain activity in the SMC compared to premotor/supplementary motor areas. However, individual analyses showed significantly higher inter- and intraindividual gait variability due to RA that correlated with increased hemodynamic activity in the SMC (p = 0.052; r = 0.570).
In both conditions, the mean GRF curves showed the classical shape characteristics (double bump) during the stance phase. This is not in line with the results of Neckel et al. [46], who did not report a classical double bump during the stance phase of RAW, which could be due to age differences between the samples. Furthermore, significantly altered kinetic patterns (lower GRF values and earlier and later appearances of the first and second vertical GRF peak values, respectively) as well as large inter- and intraindividual gait variability were observed during RAW compared with UAW. These kinetic results are consistent with other biomechanical studies showing altered muscle activity [39, 42] or kinematic patterns [45,46,47] due to RA. The finding of greater inter- and intraindividual gait variability during RAW agrees neither with the more stereotypical and similar patterns reported by Gizzi et al. [49], nor with the assumption that the user lacks the ability to vary and adapt gait patterns during RAW [45, 48, 50].
Regarding brain activity during UAW, Hboxy concentration changes were significantly increased in sensorimotor areas compared with areas of the SMA/PMC, which is in line with other neurophysiological studies that showed increased Hboxy concentrations during walking [57, 58]. This is further confirmed by EEG studies reporting suppressed alpha and beta oscillations within the SMC [60,61,62] during active walking. It also demonstrates that the SMC and the corticospinal tract contribute directly to muscle activity in locomotion [9, 53, 63], representing a general marker of an active movement-related neuronal state [61].
Analyzing the effects of RA on cortical patterns, significantly increased Hboxy concentration changes were also observed in SMC compared to frontal areas. Whereas Kim et al. [68] observed more global network activation during RAW compared to UAW, Knaepen et al. [36] reported significantly suppressed alpha and beta power during UAW compared to RAW with the conclusion that walking with 100% GF leads to less active participation and little activation of the SMC, which should be avoided during RAGT.
However, during RAW, we observed a positive correlation between ΔHboxy concentrations in the SMC and intraindividual gait variability. Thus, individuals with larger gait variability showed higher sensorimotor brain activity, which is similar to the results reported by Vitorio et al. [41], who found positive correlations between gait variability and ΔHboxy in the PMC and M1 in young healthy adults walking with rhythmic auditory cueing. Two possible explanations are suggested.
On the one hand, robotic guidance might induce additional and novel sensory feedback that promotes active participation, resulting in higher gait variability and increased brain activity. This possibility is supported by previous observations that muscles exhibit marked and structurally phased activity even under full guidance conditions [39, 42, 86,87,88]. Van Kammen et al. [88] found muscle activity in the vastus lateralis, suggesting that the leg muscles are still activated during RAW, in contrast to the muscles related to stability and propulsion, whose activity is reduced under guidance conditions. This finding is remarkable because, in this state, the exoskeleton is responsible for walking control and, theoretically, no voluntary activity from the performer is required [87, 89]. However, the instructions used in the present study (i.e., 'actively move along with the device') may have affected activity, as previous studies have shown that encouraging active involvement significantly increases muscle activity [86, 87] as well as brain activity during RAW [64]. More specifically, Wagner et al. [64] showed significantly suppressed alpha and beta power during active compared with passive RAW. Dobkin (1994) also showed that passive stepping can lead to task-specific sensory information that induces and modulates step-like electromyography activity [90]. Thus, high guidance might also promote active contribution. Particularly in patients who are not able to walk unassisted, successful stepping induces task-specific sensory information that may trigger plastic changes in the central nervous system [88, 91].
Since active participation and the production of variable movement patterns are prerequisites for activity-dependent neuroplasticity [7, 20, 89, 92,93,94], it is important to determine whether the activation of the SMC can be triggered by changes in the levels of GF, BWS and kinematic freedom in order to specifically provoke gait variability due to active participation of the patient [45, 48, 50]. High gait variability may indicate that people use multiple combinations of gait variables to walk more effectively [45, 95], resulting in better and faster improvements during robotic rehabilitation.
On the other hand, the sensory feedback from robotic guidance could also disturb the brain network underlying automatic walking, leading to increased gait variability and sensorimotor activity. According to Vitorio et al. [41], the requirement to adapt to external stimuli disturbs automatic walking in young healthy people, resulting in higher gait variability and higher cortical costs. As previous studies have shown, the ability to execute a physiological gait pattern depends on how training parameters such as BWS, GF or kinematic freedom are set in the robotic device. During RAW with a fixed pelvis, significantly altered muscle activity [39, 42, 45] and kinematic patterns [48, 50] were found. In addition to GF, BWS and kinematic freedom, the presence of foot support may also contribute to altered patterns. The safety procedures of the therapy institution required that all subjects wear straps around the front foot to assist with ankle dorsiflexion, which is known to reduce activity in the ankle dorsiflexors [39, 42].
In summary, increased gait variability and sensorimotor activity during RAW could be the result of active participation or disrupted automatic locomotor control. However, the generalization of these results to other populations is not intended or recommended. Healthy elderly individuals [41] and patients with stroke [22], multiple sclerosis [23, 25, 26], Parkinson's disease [27, 28], brain injuries [29] or spinal cord injuries [30, 31] who suffer from gait and balance disorders react differently to robotic support than healthy young people, which may lead to different gait and brain activation patterns [44]. In addition to high inter- and intraindividual variability within one sample, the heterogeneity of methodological procedures between studies appears to pose another challenge [71].
Therefore, one future goal should be to understand the mechanisms underlying RAGT and which parameters determine the effectiveness of a single treatment in the heterogeneous population of patients suffering from neurological diseases [37]. For this purpose, objective biomarkers of motor recovery and neuroplastic change have to be identified [37]. Then, specific training protocols and further interventions, such as augmented feedback with virtual reality, brain-machine interfaces or non-invasive brain stimulation, can be developed to deliver sustainable therapies for individualized rehabilitation, optimizing the outcome and efficacy of gait recovery and thereby fostering independent living and improving the quality of life for neurological patients [37, 71].
Methodological limitations
Two methodological limitations that emerged using the present approach should be mentioned. First, the ability to walk is guided by an optimal interaction between cortical and subcortical brain structures within the locomotor network [53]. Using our NIRSport system, we were only able to record brain activity patterns in motor cortical areas and were unable to monitor the activity of subcortical areas or other cortical regions. Various studies have reported that patients with gait disorders recruit additional cortical regions to manage the demands of UAW and RAW, due to structural and/or functional changes in the brain. Measuring the entire cortical network underlying locomotion may be necessary to investigate neuronal compensation and the cognitive resources used for neuroplastic processes during gait rehabilitation. Therefore, we must be cautious when discussing brain activity in relation to other regions involved in locomotor control [9].
Second, we must take into account the small sample size and the young age (mean: 25 ± 4 years) of our healthy volunteers, who also had no gait pathologies. Thus, RA guidance of gait movement might have different effects in elderly subjects or in patients who are not able to walk without restrictions [96]. Therefore, the findings from our study are difficult to generalize to other age or patient groups, as neurological patients often suffer from movement disorders and therefore use different control strategies during RAW. Although the available results provide relevant insights into the mobile application of neurophysiological measurements during RAW, with approaches for further therapeutic interventions during robotic rehabilitation, the effects of RAW must also be investigated in other groups and in patients with gait disorders in the future.
The purpose of the present study was to investigate brain activity during UAW and RAW and how this activity is associated with gait characteristics. The results confirmed the involvement of the SMC during TW and significantly increased gait variability due to RA, which correlated positively with brain activity. Furthermore, this study highlights the interaction between cortical activity and gait variability, stressing the need to use holistic, multisystem approaches when investigating TW in elderly individuals or patients suffering from gait disorders. Assessing the effects of RA on brain activity and gait characteristics is essential to develop a better understanding of how robotic devices affect human locomotion; this knowledge is important for interventional studies examining the rehabilitation of motor disorders. Basic research regarding robotic rehabilitation is necessary to gain a deeper understanding of the brain and gait patterns associated with RAW, which is required for further investigations of gait recovery and neuroplastic changes. In addition, clinical longitudinal studies are needed to identify individual gait improvements and the underlying neurophysiological changes, in order to develop therapies with respect to interindividual differences. RAGT devices should be designed to provide an amount of force that adapts to the patient's capacity, achieving an optimal balance between forced motor activity and the promotion of the patient's voluntary activity [36, 92,93,94]. Further combined studies are necessary to determine the relationship between brain activity and functional motor improvements and to evaluate the effects of therapeutic interventions. Neurophysiological investigations can contribute to the development of robotic rehabilitation and to individual, closed-loop treatments for future neurorehabilitation therapies.
The datasets used and analysed during the current study are available from the corresponding author on reasonable request.
ANOVA:
Analysis of variance
BA:
Brodmann area
BWS:
Body weight support
fNIRS:
Functional near-infrared spectroscopy
GF:
Guidance force
GRF:
Ground reaction force
Hbdeoxy:
Deoxygenated hemoglobin
Hboxy:
Oxygenated hemoglobin
M1:
Primary motor cortex
RA:
Robotic assistance
RAGT:
Robot-assisted gait training
RAW:
Robot-assisted walking
SME:
Standard mean error
SI:
Symmetry index
SMA:
Supplementary motor area
SMC:
Sensorimotor cortex
TW:
Treadmill walking
UAW:
Unassisted walking
ΔHboxy:
Relative changes of oxygenated hemoglobin
Verghese J, LeValley A, Hall CB, Katz MJ, Ambrose AF, Lipton RB. Epidemiology of gait disorders in community-residing older adults. J Am Geriatr Soc. 2006;54:255–61. https://doi.org/10.1111/j.1532-5415.2005.00580.x.
Forte R, Boreham CAG, de Vito G, Pesce C. Health and quality of life perception in older adults: the joint role of cognitive efficiency and functional mobility. Int J Environ Res Public Health. 2015;12:11328–44. https://doi.org/10.3390/ijerph120911328.
Fagerström C, Borglin G. Mobility, functional ability and health-related quality of life among people of 60 years or older. Aging Clin Exp Res. 2010;22:387–94.
Hirsch CH, Buzková P, Robbins JA, Patel KV, Newman AB. Predicting late-life disability and death by the rate of decline in physical performance measures. Age Ageing. 2012;41:155–61. https://doi.org/10.1093/ageing/afr151.
Soh S-E, Morris ME, McGinley JL. Determinants of health-related quality of life in Parkinson's disease: a systematic review. Parkinsonism Relat Disord. 2011;17:1–9. https://doi.org/10.1016/j.parkreldis.2010.08.012.
Nielsen JB. How we walk: central control of muscle activity during human walking. Neuroscientist. 2003;9:195–204. https://doi.org/10.1177/1073858403009003012.
Bernstein N. The co-ordination and regulation of movements. 1st ed. Oxford: Pergamon Press; 1967.
Hatze H. Motion variability--its definition, quantification, and origin. J Mot Behav. 1986;18:5–16.
La Fougère C, Zwergal A, Rominger A, Förster S, Fesl G, Dieterich M, et al. Real versus imagined locomotion: a 18F-FDG PET-fMRI comparison. Neuroimage. 2010;50:1589–98. https://doi.org/10.1016/j.neuroimage.2009.12.060.
Ellis T, Cavanaugh JT, Earhart GM, Ford MP, Foreman KB, Dibble LE. Which measures of physical function and motor impairment best predict quality of life in Parkinson's disease? Parkinsonism Relat Disord. 2011;17:693–7. https://doi.org/10.1016/j.parkreldis.2011.07.004.
Schmid A, Duncan PW, Studenski S, Lai SM, Richards L, Perera S, Wu SS. Improvements in speed-based gait classifications are meaningful. Stroke. 2007;38:2096–100. https://doi.org/10.1161/STROKEAHA.106.475921.
von Schroeder HP, Coutts RD, Lyden PD, Billings E, Nickel VL. Gait parameters following stroke: a practical assessment. J Rehabil Res Dev. 1995;32:25–31.
Stergiou N, Harbourne R, Cavanaugh J. Optimal movement variability: a new theoretical perspective for neurologic physical therapy. J Neurol Phys Ther. 2006;30:120–9. https://doi.org/10.1097/01.npt.0000281949.48193.d9.
Hausdorff JM. Gait dynamics, fractals and falls: finding meaning in the stride-to-stride fluctuations of human walking. Hum Mov Sci. 2007;26:555–89. https://doi.org/10.1016/j.humov.2007.05.003.
Chen G, Patten C, Kothari DH, Zajac FE. Gait differences between individuals with post-stroke hemiparesis and non-disabled controls at matched speeds. Gait Posture. 2005;22:51–6. https://doi.org/10.1016/j.gaitpost.2004.06.009.
Titianova EB, Tarkka IM. Asymmetry in walking performance and postural sway in patients with chronic unilateral cerebral infarction. J Rehabil Res Dev. 1995;32:236–44.
Turner DL, Ramos-Murguialday A, Birbaumer N, Hoffmann U, Luft A. Neurophysiology of robot-mediated training and therapy: a perspective for future use in clinical populations. Front Neurol. 2013;4:184. https://doi.org/10.3389/fneur.2013.00184.
Calabrò RS, Cacciola A, Bertè F, Manuli A, Leo A, Bramanti A, et al. Robotic gait rehabilitation and substitution devices in neurological disorders: where are we now? Neurol Sci. 2016;37:503–14. https://doi.org/10.1007/s10072-016-2474-4.
Galen SS, Clarke CJ, Allan DB, Conway BA. A portable gait assessment tool to record temporal gait parameters in SCI. Med Eng Phys. 2011;33:626–32. https://doi.org/10.1016/j.medengphy.2011.01.003.
Schmidt RA, Lee TD. Motor control and learning: A behavioral emphasis. 5th ed. Champaign: Human Kinetics; 2011.
Bruni MF, Melegari C, de Cola MC, Bramanti A, Bramanti P, Calabrò RS. What does best evidence tell us about robotic gait rehabilitation in stroke patients: a systematic review and meta-analysis. J Clin Neurosci. 2018;48:11–7. https://doi.org/10.1016/j.jocn.2017.10.048.
Mehrholz J, Thomas S, Werner C, Kugler J, Pohl M, Elsner B. Electromechanical-assisted training for walking after stroke. Cochrane Database Syst Rev. 2017;5:CD006185. https://doi.org/10.1002/14651858.CD006185.pub4.
Beer S, Aschbacher B, Manoglou D, Gamper E, Kool J, Kesselring J. Robot-assisted gait training in multiple sclerosis: a pilot randomized trial. Mult Scler. 2008;14:231–6. https://doi.org/10.1177/1352458507082358.
Lo AC, Triche EW. Improving gait in multiple sclerosis using robot-assisted, body weight supported treadmill training. Neurorehabil Neural Repair. 2008;22:661–71. https://doi.org/10.1177/1545968308318473.
Schwartz I, Sajin A, Moreh E, Fisher I, Neeb M, Forest A, et al. Robot-assisted gait training in multiple sclerosis patients: a randomized trial. Mult Scler. 2012;18:881–90. https://doi.org/10.1177/1352458511431075.
Straudi S, Fanciullacci C, Martinuzzi C, Pavarelli C, Rossi B, Chisari C, Basaglia N. The effects of robot-assisted gait training in progressive multiple sclerosis: a randomized controlled trial. Mult Scler. 2016;22:373–84. https://doi.org/10.1177/1352458515620933.
Lo AC, Chang VC, Gianfrancesco MA, Friedman JH, Patterson TS, Benedicto DF. Reduction of freezing of gait in Parkinson's disease by repetitive robot-assisted treadmill training: a pilot study. J Neuroeng Rehabil. 2010;7:51. https://doi.org/10.1186/1743-0003-7-51.
Picelli A, Melotti C, Origano F, Waldner A, Fiaschi A, Santilli V, Smania N. Robot-assisted gait training in patients with Parkinson disease: a randomized controlled trial. Neurorehabil Neural Repair. 2012;26:353–61. https://doi.org/10.1177/1545968311424417.
Esquenazi A, Lee S, Packel AT, Braitman L. A randomized comparative study of manually assisted versus robotic-assisted body weight supported treadmill training in persons with a traumatic brain injury. PM R. 2013;5:280–90. https://doi.org/10.1016/j.pmrj.2012.10.009.
Nam KY, Kim HJ, Kwon BS, Park J-W, Lee HJ, Yoo A. Robot-assisted gait training (Lokomat) improves walking function and activity in people with spinal cord injury: a systematic review. J Neuroeng Rehabil. 2017;14:24. https://doi.org/10.1186/s12984-017-0232-3.
Schwartz I, Sajina A, Neeb M, Fisher I, Katz-Luerer M, Meiner Z. Locomotor training using a robotic device in patients with subacute spinal cord injury. Spinal Cord. 2011;49:1062–7. https://doi.org/10.1038/sc.2011.59.
Wirz M, Zemon DH, Rupp R, Scheel A, Colombo G, Dietz V, Hornby TG. Effectiveness of automated locomotor training in patients with chronic incomplete spinal cord injury: a multicenter trial. Arch Phys Med Rehabil. 2005;86:672–80. https://doi.org/10.1016/j.apmr.2004.08.004.
Benito-Penalva J, Edwards DJ, Opisso E, Cortes M, Lopez-Blazquez R, Murillo N, et al. Gait training in human spinal cord injury using electromechanical systems: effect of device type and patient characteristics. Arch Phys Med Rehabil. 2012;93:404–12. https://doi.org/10.1016/j.apmr.2011.08.028.
Uçar DE, Paker N, Buğdaycı D. Lokomat: a therapeutic chance for patients with chronic hemiplegia. NeuroRehabil. 2014;34:447–53. https://doi.org/10.3233/NRE-141054.
Husemann B, Müller F, Krewer C, Heller S, Koenig E. Effects of locomotion training with assistance of a robot-driven gait orthosis in hemiparetic patients after stroke: a randomized controlled pilot study. Stroke. 2007;38:349–54. https://doi.org/10.1161/01.STR.0000254607.48765.cb.
Knaepen K, Mierau A, Swinnen E, Fernandez Tellez H, Michielsen M, Kerckhofs E, et al. Human-robot interaction: does robotic guidance force affect gait-related brain dynamics during robot-assisted treadmill walking? PLoS One. 2015;10:e0140626. https://doi.org/10.1371/journal.pone.0140626.
Coscia M, Wessel MJ, Chaudary U, Millán JDR, Micera S, Guggisberg A, et al. Neurotechnology-aided interventions for upper limb motor rehabilitation in severe chronic stroke. Brain. 2019;142:2182–97. https://doi.org/10.1093/brain/awz181.
Mehrholz J, Pohl M, Platz T, Kugler J, Elsner B. Electromechanical and robot-assisted arm training for improving activities of daily living, arm function, and arm muscle strength after stroke. Cochrane Database Syst Rev. 2018;9:CD006876. https://doi.org/10.1002/14651858.CD006876.pub5.
Moreno JC, Barroso F, Farina D, Gizzi L, Santos C, Molinari M, Pons JL. Effects of robotic guidance on the coordination of locomotion. J Neuroeng Rehabil. 2013;10:79. https://doi.org/10.1186/1743-0003-10-79.
Youssofzadeh V, Zanotto D, Stegall P, Naeem M, Wong-Lin K, Agrawal SK, Prasad G. Directed neural connectivity changes in robot-assisted gait training: a partial granger causality analysis. Conf Proc IEEE Eng Med Biol Soc. 2014;2014:6361–4. https://doi.org/10.1109/EMBC.2014.6945083.
Vitorio R, Stuart S, Gobbi LTB, Rochester L, Alcock L, Pantall A. Reduced gait variability and enhanced brain activity in older adults with auditory cues: a functional near-infrared spectroscopy study. Neurorehabil Neural Repair. 2018;32:976–87. https://doi.org/10.1177/1545968318805159.
Hidler JM, Wall AE. Alterations in muscle activation patterns during robotic-assisted walking. Clin Biomech (Bristol, Avon). 2005;20:184–93. https://doi.org/10.1016/j.clinbiomech.2004.09.016.
Hidler J, Nichols D, Pelliccio M, Brady K, Campbell DD, Kahn JH, Hornby TG. Multicenter randomized clinical trial evaluating the effectiveness of the Lokomat in subacute stroke. Neurorehabil Neural Repair. 2009;23:5–13. https://doi.org/10.1177/1545968308326632.
van Kammen K, Boonstra AM, van der Woude LHV, Reinders-Messelink HA, den Otter R. The combined effects of guidance force, bodyweight support and gait speed on muscle activity during able-bodied walking in the Lokomat. Clin Biomech (Bristol, Avon). 2016;36:65–73. https://doi.org/10.1016/j.clinbiomech.2016.04.013.
Hidler J, Wisman W, Neckel N. Kinematic trajectories while walking within the Lokomat robotic gait-orthosis. Clin Biomech (Bristol, Avon). 2008;23:1251–9. https://doi.org/10.1016/j.clinbiomech.2008.08.004.
Neckel ND, Blonien N, Nichols D, Hidler J. Abnormal joint torque patterns exhibited by chronic stroke subjects while walking with a prescribed physiological gait pattern. J Neuroeng Rehabil. 2008;5:19. https://doi.org/10.1186/1743-0003-5-19.
Neckel N, Wisman W, Hidler J. Limb alignment and kinematics inside a Lokomat robotic orthosis. Conf Proc IEEE Eng Med Biol Soc. 2006;1:2698–701. https://doi.org/10.1109/IEMBS.2006.259970.
Aurich-Schuler T, Gut A, Labruyère R. The FreeD module for the Lokomat facilitates a physiological movement pattern in healthy people - a proof of concept study. J Neuroeng Rehabil. 2019;16:26. https://doi.org/10.1186/s12984-019-0496-x.
Gizzi L, Nielsen JF, Felici F, Moreno JC, Pons JL, Farina D. Motor modules in robot-aided walking. J Neuroeng Rehabil. 2012;9:76. https://doi.org/10.1186/1743-0003-9-76.
Journal of Biological Research-Thessaloniki
The adaptive immune response in cardiac arrest resuscitation induced ischemia reperfusion renal injury
Maria Tsivilika (ORCID: 0000-0002-5884-2353), Eleni Doumaki, George Stavrou, Antonia Sioga, Vasilis Grosomanidis, Soultana Meditskou, Athanasios Maranginos, Despina Tsivilika, Dimitrios Stafylarakis, Katerina Kotzampassi & Theodora Papamitsou
Journal of Biological Research-Thessaloniki volume 27, Article number: 15 (2020)
The present study aims to investigate, immunohistochemically, the role of the adaptive immune response in cardiac arrest/resuscitation-induced ischemia–reperfusion renal injury (IRI): namely, to assess the presence of lymphocytes in renal tissue samples and the relationship between the extent of the damage and the concentration of lymphocytes, by comparing the kidneys of non-resuscitated swine with those of resuscitated swine.
Twenty-four swine underwent cardiac arrest (CA) induced via a pacemaker wire. After 7 min without any intervention, cardiopulmonary resuscitation (CPR) was commenced. Five minutes after CPR began, advanced life support (ALS) was initiated. Animals were divided into resuscitated and non-resuscitated groups. Tissue samples were obtained from both groups for immunohistological study to detect T cells, B cells and plasma cells using CD3+, CD20+, and CD138+ antibodies.
A strong concentration of T lymphocytes was observed in the post-ischemic kidney tissues of both non-resuscitated and resuscitated swine. B lymphocytes also appear to have infiltrated the ischemic kidneys of both animal groups; nevertheless, the contribution of T lymphocytes to the induction of injury remains greater. There is no strong evidence of a correlation between plasma cells and the damage.
The adaptive immune response appears strongly associated with kidney injury and acute tubular necrosis after cardiac arrest/resuscitation-induced ischemia–reperfusion. However, the extent to which adaptive immune cells are involved in the induction of renal injury remains uncertain, and many questions about the mechanisms of action of these cells require further study.
Acute tubular necrosis (ATN) and the subsequent renal failure induced by ischemia–reperfusion injury (IRI) or sepsis remain the leading cause of morbidity and mortality among patients in the intensive care unit [1]. ATN after ischemic shock has a mortality rate of 30%, and many survivors require dialysis [2]. Among patients surviving cardiac arrest (CA), the onset of acute kidney injury (AKI) is also common (> 75%), and persistent AKI (PAKI) occurs in more than one-third of patients [3]. Many renal diseases, as well as renal transplant rejection [4], are due to AKI resulting from ischemic shock [5]. Both the innate and the adaptive immune response can exacerbate renal tissue damage and render it permanent [6]. In hypoxia, endothelial and tubular epithelial renal cells cannot absorb the oxygen necessary for their survival. Without oxygenation, cellular waste cannot be removed and metabolized; its accumulation leads to apoptosis of tubular cells and finally to ATN [7]. Upon reperfusion, blood components detect the injured renal tubular cells and their signaling factors and trigger the inflammatory response [8,9,10,11]. Extended renal inflammation can lead to renal tissue fibrosis and acute renal injury [12,13,14,15,16].
Leukocytes are the cells most closely associated with renal IRI, and most studies focus on their activity and involvement in kidney inflammation. Leukocytes contribute to kidney damage through their phagocytic activity, the release of chemotactic cytokines related to ischemic damage, and the release of reactive oxygen species. These contribute to the induction of renal microangiopathy and ultimately to the occurrence of renal IRI [8, 17]. Many substances that influence leukocyte influx or activation, such as neutrophil elastase [18], tissue-type plasminogen activator [19], activated protein C [20, 21], hepatocyte growth factor [22, 23], and CD44 [24], appear to be involved in the induction of renal injury. Although the cells of innate immunity are directly associated with kidney inflammation and damage, the present study provides strong evidence that adaptive immune cells are also directly involved in CA/resuscitation-induced renal IRI.
The role of adaptive immunity in non-infectious kidney injuries such as ischemic kidney injury is surprising with respect to lymphocyte activation and function, since alloantigen is absent. The hypoxia/re-oxygenation process alone may be sufficient for lymphocyte activation [25]. Distress signals produced by injured tissue following ischemia appear to trigger an innate immune mechanism that recruits all available immune responses to the injury [26]. Expression of the chemokine RANTES (Regulated upon Activation, Normal T Cell Expressed and Presumably Secreted) is a likely mechanism of T-cell activation in the absence of an alloantigen, and its regulation has been associated with the induction of renal injury by T cells [28]. Laparotomy procedures also accelerate the systemic inflammatory response and therefore the early infiltration of T and B lymphocytes into the kidneys.
T lymphocytes are found in post-ischemic kidney biopsies and appear to be a key mediator in the induction of renal IRI [29]. The involvement of T lymphocytes in renal IRI points to the possible involvement of B lymphocytes as well. There is evidence that B lymphocytes contribute to damage after ischemia–reperfusion through the production of pathogenic IgM [30]. Studies on infectious diseases show a strong association of T-cell function with B lymphocytes [31, 32]. B-lymphocyte-deficient mice exhibit abnormalities in their organogenesis and organ function throughout their development [33]. T lymphocytes, as well as B lymphocytes, intervene at various stages of renal failure, modulating damage following ischemia. Experimental studies in knockout mice have shown that deficiency of either T or B lymphocytes has a protective effect against renal injury [34,35,36]. Of particular note were the findings of an experimental study in RAG-1 (Recombination activating gene 1) knockout mice. These mice lack both T and B lymphocytes but nonetheless showed no resistance to ischemia-induced renal injury, and the transfusion of T and B lymphocytes into these mice resulted in significant kidney protection. It is therefore concluded that while B- or T-cell deficiency alone leads to a decrease in IRI, the combined deficiency of both is not protective [36]. In addition, the transfer of activated lymphocytes into T-lymphocyte-deficient mice subjected to acute renal injury after ischemia, 24 h after reperfusion, appears to have a protective effect against IRI [37]. These findings highlight the complex and combinatorial interactions between lymphocytes that define their actions. The CD28 T-cell protein, as well as its binding to the B7-1 ligand (CD80), also appears to be strongly associated with T-lymphocyte-mediated IRI [38].
NKT cells also seem to be involved in IRI. NKT cells are found in ischemic kidneys as early as 3 h after reperfusion [37]. Activation of NKT cells increases renal IRI by triggering neutrophil infiltration into renal tissue and production of IFN-γ by both NKT cells and neutrophils [39]. Besides IFN-γ, NKT cells also express interleukins such as IL-4 and IL-10. Experimental studies in mice show that administration of isoflurane, which inhibits the infiltration of NKT cells, neutrophils and macrophages into the renal tubules, attenuates acute renal IRI [40].
The present study (conducted in resuscitated and non-resuscitated swine) investigates, immunohistochemically, the role of the adaptive immune response in CA/resuscitation-induced renal IRI, namely by assessing the presence of T cells, B cells and plasma cells in renal tissue samples. The presence of these cells was confirmed immunohistochemically using CD3 antibody staining for T-cell detection, CD20 antibody staining for B-cell detection, and CD138 antibody staining for plasma-cell detection. This study also aims to determine the relationship between the extent of the damage and the concentration of lymphocytes by comparing the kidneys of non-resuscitated swine with those of resuscitated ones.
Experimental protocol
The experiments were performed at the Surgical Research Laboratory of AHEPA Hospital, Aristotle University of Thessaloniki, Greece. Twenty-four female Munich swine (3 months old, 23.0 ± 2.9 kg BW) were premedicated, intubated and mechanically ventilated as previously described [41, 42]. The femoral artery and both femoral veins were cannulated, and an 8Fr sheath and, through it, a 7.5Fr Swan-Ganz CCOmboV catheter (Edwards Lifesciences, Irvine, CA, USA) were inserted in the right femoral artery of each animal for hemodynamic monitoring. All animals underwent CA via a pacemaker wire placed through the Swan-Ganz catheter. After 7 min without any intervention, cardiopulmonary resuscitation (CPR) was attempted with chest compressions using the LUCAS CCS (chest compression system) CPR device (Stryker Medical, MI, USA). Five min after CPR was commenced, advanced life-support (ALS) measures were undertaken with rhythm analysis, defibrillation and/or drug administration (all animals were given vasodilator and inotropic drugs as required by the ALS protocol), as per the European Resuscitation Council (ERC) 2015 Guidelines, for 40 min. Animals were divided into two groups according to the return of spontaneous circulation (ROSC) or not. Fourteen of the 24 swine did not regain spontaneous circulation after ALS (no ROSC), while in 10 of the 24 the circulation returned (ROSC). In non-resuscitated animals (no ROSC after 40 min), CPR was stopped and tissue samples were obtained for histology. Resuscitated animals (ROSC) were supported for 4 h and then sacrificed with thiopental and potassium chloride (KCl) administration, and tissue samples were likewise obtained for optical microscopy observation.
An advantage of this study is that the experiment was performed on swine, large animals whose systemic circulation and immune system closely resemble those of humans.
Immunochemical staining
Kidney tissue samples of 20 × 30 mm, 3 mm thick, were taken using a scalpel. The samples were placed in a formol solution (10% formaldehyde) for 24 h. Tissue processing then took place over 14 h, so that the samples passed through the stages of fixation, dehydration, clearing, and finally paraffin infiltration at 60 °C, as required for sample preparation. After processing, the tissues were paraffin-embedded on a tissue embedding machine. Finally, the tissue-paraffin blocks were microtomed into 3 μm sections. Some sections were stained with hematoxylin and eosin (H&E), and adjacent sections were placed on positively charged slides for immunohistochemical staining via the VENTANA system (Ventana Medical Systems, Arizona, USA).
For the immunohistochemical study, the VENTANA BenchMark XT computerized automated system (Ventana Medical Systems, Arizona, USA) was used with the ultraView Universal DAB Detection Kit (Ventana Medical Systems, Arizona, USA). The water bath was set to 60 °C and contained 90% water and 10% ethanol (100%). The samples were incubated in the oven for 60 min at 60 °C. The ultraView Universal DAB Detection Kit detects specific antibodies bound to an antigen in paraffin-embedded tissue sections.
The reagents and antibodies used were:
CD3+ (mouse monoclonal antibody, Ventana Medical Systems, Arizona, USA) at dilution 1/100 v/v with an incubation time of 42 min, for T-lymphocyte detection.
CD20+ (mouse monoclonal antibody in Antibody Diluent solution, Ventana Medical Systems, Arizona, USA) at dilution 1/100 v/v with an incubation time of 42 min, for B-lymphocyte detection.
CD138+ (mouse monoclonal antibody, Ventana Medical Systems, Arizona, USA) at dilution 1/50 v/v with an incubation time of 42 min. CD138 is an immunohistochemical marker for plasma cells.
The above immunohistochemical staining procedure was repeated twice for each of the three antibodies examined. All specimens were examined under an optical microscope (Zeiss, Jena, Germany) by two independent researchers, and photographs were taken using a Contax camera (Yashica, Hong Kong, China) attached to the microscope.
Immunohistochemical staining is classified according to the intensity and extent of the stained specimen on the slide, as shown in Table 1. The percentage of stained specimen was calculated via a microscope cross table as the average of the subjective judgments of the two independent researchers who assessed the intensity and extent of staining.
Table 1 Immunostaining intensity classification
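The two-rater averaging step above can be sketched in code. This is an illustrative helper only: the grade cut-offs below are hypothetical placeholders (the study's actual intensity classes are defined in Table 1); only the averaging of the two independent raters' percentage estimates comes from the text.

```python
def staining_grade(rater1_pct: float, rater2_pct: float) -> int:
    """Average two independent raters' estimates of the stained area
    (in percent) and map the mean onto a 0-3 ordinal intensity grade.
    The thresholds below are hypothetical, NOT the Table 1 definitions."""
    mean_pct = (rater1_pct + rater2_pct) / 2
    if mean_pct == 0:
        return 0   # negative
    elif mean_pct < 25:
        return 1   # weakly positive
    elif mean_pct < 50:
        return 2   # moderately positive
    else:
        return 3   # strongly positive
```

Averaging before thresholding (rather than grading each rater separately) matches the description of a single consensus percentage per specimen.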
One of the aims of this study is to evaluate whether there is a statistically significant difference between the non-resuscitated and the resuscitated swine in the presence of T, B and plasma cells. The presence of these cells was assessed by the intensity of each of the three immunohistochemical antibody stainings used for detection. The CD3, CD20 and CD138 antibody infiltration was compared between the specimens of non-resuscitated swine and those of resuscitated swine. The two-sample t-test for unequal sample sizes with similar variances was used. The alternative hypothesis is that the average value of each antibody infiltration for non-resuscitated swine is higher than that for resuscitated swine.
Thus, the null hypothesis is that the average value of each antibody infiltration for non-resuscitated swine is less than or equal to that for resuscitated swine.
$$H_A: \mu_1 > \mu_2$$
$$H_0: \mu_1 \le \mu_2$$
The t statistic to test whether the means differ is calculated as follows:
$$t = \frac{\overline{X_1} - \overline{X_2}}{s_p \cdot \sqrt{\frac{1}{n_1} + \frac{1}{n_2}}}$$
where the pooled standard deviation $s_p$ is
$$s_p = \sqrt{\frac{(n_1 - 1)s_{X_1}^2 + (n_2 - 1)s_{X_2}^2}{n_1 + n_2 - 2}}$$
Sample 1 comprises the 14 swine whose systemic circulation did not return, while sample 2 comprises the 10 swine whose systemic circulation returned (Table 2).
Table 2 Statistical data
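The pooled-variance t statistic defined above can be computed directly from summary statistics. The sketch below implements the two equations term by term; the mean and standard-deviation values are hypothetical illustrations, not the study's data (which appear in Table 2), while the sample sizes n1 = 14 and n2 = 10 are those of the two groups.

```python
import math

def pooled_t(x1_mean, x2_mean, s1, s2, n1, n2):
    """Two-sample t statistic with pooled standard deviation s_p,
    as in the equations above (equal-variance assumption)."""
    sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (x1_mean - x2_mean) / (sp * math.sqrt(1 / n1 + 1 / n2))

# Hypothetical staining-intensity summaries (NOT the study's data):
# sample 1 = 14 non-resuscitated swine, sample 2 = 10 resuscitated swine.
t = pooled_t(x1_mean=2.5, x2_mean=1.6, s1=0.7, s2=0.8, n1=14, n2=10)
df = 14 + 10 - 2  # 22 degrees of freedom
# t ≈ 2.93 for these illustrative numbers; compare against the one-tailed
# critical value t_{0.05, 22} (or compute a p-value with a t-distribution).
```

For the one-tailed test, H0 is rejected when t exceeds the upper critical value of the t distribution with n1 + n2 − 2 degrees of freedom.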
In brief, Figs. 1a–d and 2a–d illustrate the following: T-cell concentration was observed in the connective tissue and around the renal tubules, especially the proximal convoluted tubules. T lymphocytes are located mainly above the basement membrane of the tubular epithelium as well as among the renal tubular epithelial cells. CD3+ T cells were also detected around the renal corpuscles and within the glomerular capillaries and the Bowman capsule. CD3+ T lymphocytes were identified not only in the cortex but also inside the kidney medulla. In resuscitated swine, the staining is milder and the morphological damage is less.
a Kidney of non-resuscitated swine. T lymphocytes positive tubular epithelium (↑). Few T lymphocytes in connective tissue, × 16. b Kidney of resuscitated swine. T-cell positive epithelium (↑). Few T lymphocytes in connective tissue, × 16. c Kidney tissue of non-resuscitated swine. Intensively positive tubules (↑). T lymphocytes among epithelial cells with a main concentration above the basement membrane, × 160. d Kidney of resuscitated swine. Damaged tubules strongly positive to T-cells (↑), × 160
a Kidney cortex of non-resuscitated swine. Intense accumulation of T lymphocytes around the renal corpuscle (↑). T-cell-positive Bowman's capsule and glomerulus, × 160. b Kidney cortex of resuscitated swine. Few T lymphocytes within the glomerulus of the renal corpuscle (↑). Beneath the renal corpuscle, damaged proximal tubule stained at the basement membrane (☆). Fewer T cells than in the non-resuscitated swine, × 160. c Kidney medulla of non-resuscitated swine. T lymphocytes around the tubules (↑) and in the connective tissue (↑), × 160. d Kidney medulla of resuscitated swine. Plenty of T cells in the damaged tubule (↑). Few T cells in the connective tissue (↑), × 160
B cells are found mainly in the parietal layer of the Bowman's capsule of the renal corpuscles and in the urinary space (Fig. 3a–e). B cells are also found among the renal tubular epithelial cells, mainly of the proximal tubules, as well as in the connective tissue. The B-cell volume and concentration are much lower than those of T cells. In contrast to T cells, no B cells were found in the kidney medulla. In resuscitated swine, the staining is weakly positive or even negative, and the concentration of B cells is lower than in non-resuscitated swine. In addition, morphological damage is less.
a Kidney of non-resuscitated swine. Few B cells around the tubules (↑) and mild staining to negative renal corpuscles (☆), × 40. b Kidney of resuscitated swine. Few, almost no, B cells in connective tissue (↑), × 40. c Kidney cortex of non-resuscitated swine. Few B cells in the parietal layer of the Bowman's capsule of the renal corpuscle (↑) and in the urinary space (↑), × 160. d Kidney cortex of resuscitated swine. Few B cells in the parietal layer of the Bowman's capsule of the renal corpuscle (↑) and within the connective tissue (☆), × 160. e Negative kidney medulla of non-resuscitated swine, × 160
The concentration and staining volume of plasma cells is low to negative compared with those of T and B lymphocytes (Fig. 4a–d). The non-resuscitated swine showed mild to negative staining (staining 0), with few plasma cells dispersed in the connective tissue, in the parietal layer of the Bowman's capsule of the renal corpuscles and in the urinary space, as well as in and around the capillary vessels. Resuscitated swine were negative for plasma cells (staining 0).
T-cell, B-cell and plasma-cell infiltration as determined by immunostaining of post-ischemic kidney tissue with CD3, CD20 and CD138 antibodies (mouse monoclonal antibodies, Ventana Medical Systems, Arizona, USA). Ten resuscitated and 14 non-resuscitated stained specimens were viewed and counted for positively stained cells. Infiltration in non-resuscitated specimens (blue bars) at 24 h post-ischemia was increased compared to that in resuscitated specimens (red bars). In both non-resuscitated and resuscitated specimens, T-cell infiltration was greater than B-cell infiltration, while there is no strong evidence of plasma-cell infiltration
Statistical results
Statistical analyses showed that the t-value is higher than the critical value for the CD3 (p = 0.0065) and CD20 (p = 0.0017) antibodies, and therefore the null hypothesis is rejected. Consequently, the difference between the two samples is statistically significant for T cells and B cells, supporting the association of cardiac arrest/resuscitation-induced ischemia–reperfusion renal injury with T- and B-cell infiltration into the kidney tissue, given that the non-resuscitated swine developed greater damage. For the CD138 antibody, the t-value is less than the critical value, failing to reject the null hypothesis (p = 0.0536). Therefore, the difference between the two samples for plasma cells is not statistically significant, reinforcing the immunohistological observations, which also support that there is no strong evidence of involvement of plasma cells in CA/resuscitation-induced renal IRI (Table 3).
Table 3 Statistical results
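The significance decisions reported above reduce to a simple one-tailed decision rule at α = 0.05. A minimal sketch, using the p-values quoted in the text (marker labels are ours):

```python
# One-tailed test at alpha = 0.05: reject H0 when p < alpha.
# The p-values are those reported for the CD3, CD20 and CD138 comparisons.
ALPHA = 0.05
p_values = {
    "CD3 (T cells)": 0.0065,
    "CD20 (B cells)": 0.0017,
    "CD138 (plasma cells)": 0.0536,
}
for marker, p in p_values.items():
    decision = "reject H0" if p < ALPHA else "fail to reject H0"
    print(f"{marker}: p = {p:.4f} -> {decision}")
```

Only the CD138 comparison fails to reach significance, matching the conclusion that plasma-cell involvement is not supported.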
In resuscitated swine, the concentration of T cells is lower than in non-resuscitated swine (Fig. 4). In addition, morphological damage is less evident in resuscitated swine compared with the non-resuscitated group. It is concluded that the swine which recovered, and which also show less damage, exhibit less T-cell infiltration, further reinforcing the association of T cells with AKI following ischemia. Ischemic mice and rats treated with drugs that prevent T-cell infiltration into the kidneys have decreased renal impairment [43, 44]. Mice lacking T lymphocytes are protected from IRI, while an increase in renal IRI has been observed following the administration of T lymphocytes to these animals [45]. The observation that the main staining occurs around the renal tubules (Fig. 5), and especially the proximal convoluted tubules, reinforces the view that the proximal tubular cells are the most vulnerable to renal ischemia [46]. Renal tubules bearing damaged epithelium and damaged renal corpuscles show a higher CD3+ T-lymphocyte concentration, which underscores the involvement of T lymphocytes in the exacerbation of damage. CD4+/CD8+ knockout mice had significantly reduced renal IRI compared to wild-type mice [29]. Experimental studies in CD4+-only knockout mice also show limited kidney damage following ischemia–reperfusion, emphasizing that CD4+ T lymphocytes, and in particular CD4+ Th1 T lymphocytes, which produce IFN-γ, are the major subset of T cells involved in renal IRI [45]. This claim is supported by the fact that STAT4 (Signal transducer and activator of transcription 4)-deficient mice showed improvement of renal IRI compared to STAT6 (Signal transducer and activator of transcription 6)-deficient mice [47].
The detection of T lymphocytes in the glomerular capillaries and the Bowman's capsule suggests that the action of T-cells is not restricted to signal transduction via cytokine exudation but rather involves infiltration into the kidneys and local action. CD11/CD18 as well as intercellular adhesion molecule 1 (ICAM-1) have been studied for their role in the adhesion of leukocytes to the endothelium and ultimately their involvement in the induction of renal injury, mainly through neutrophil adhesion [48, 49]. However, these adhesion pathways also mediate lymphocyte adhesion [50]. Phorbol ester treatment in vivo in the mouse renal system increased the adhesion of T lymphocytes to renal tubular epithelial cells (RTECs). T-cell adhesion to RTECs increased even further after exposure of the system to hypoxia–reoxygenation conditions, a technique that mimics ischemia–reperfusion [29]. The detection of T lymphocytes also inside the kidney medulla underlines the in-depth infiltration of T lymphocytes.
a, b CD3 antibody staining showed that T-cell infiltration is strongly positive throughout the kidney tissue (staining 3), except for the renal corpuscle and the medulla, where it is moderately positive (staining 2). CD20 antibody staining (i.e., B-cell infiltration) is weakly positive; the weak concentration appears mainly around the renal corpuscle (staining 1), while the medulla is negative for CD20 (staining 0). CD138 antibody staining, and therefore plasma cell infiltration, is negative (staining 0)
Although B-cells are involved in the induction of IRI, the contribution of T lymphocytes to the induction of injury remains greater (Fig. 5). The location of B-cells in the parietal layer of the Bowman's capsule of the renal corpuscle and in the urinary space suggests that the function of B-cells is not confined to the transduction of signals to the kidneys; rather, like T-cells, B lymphocytes infiltrate the kidneys and act locally. The fact that the kidney medulla was positive only for T lymphocytes underscores the lower involvement of B-cells in renal IRI compared to T-cells. Resuscitated swine seem to have fewer B lymphocytes and less damage than non-resuscitated swine (Fig. 4). These data lead to the conclusion that swine in which systemic circulation returned, and which also appear to have less damage, also exhibit less B-cell infiltration. This seems to strengthen the association of B-cells with AKI following ischemia. It is uncertain whether plasma cells are involved in renal IRI in the way B and T lymphocytes are. B lymphocyte-deficient mice showed better renal function and less tubular damage at 24, 48 and 72 h after ischemia–reperfusion compared to wild-type mice. Leukocyte and T lymphocyte infiltration in the two groups of mice did not differ significantly. Serum administration of antibodies to knockout mice had a therapeutic effect against the injury, whereas B-cell administration showed no improvement [35]. It is concluded that plasma cell serum may be involved as an auxiliary defense mechanism to limit the damage when the damage is large, as in non-resuscitated swine. Consequently, there is no strong evidence correlating plasma cells with the damage (Fig. 5).
Hopefully, the experimental results of this study could be used for better identification of kidney damage in patients with renal IRI, such as patients with myocardial infarction, patients after cardiopulmonary bypass and "bypass" aortic surgery, and patients with sepsis or renal trauma. The present study aims to be a useful tool for predicting renal allograft functional capacity, as well as for more precise determination of the time a kidney may remain out of the systemic circulation in selective urological interventions such as partial renal resection and tumor removal in cases of neoplasia. Serum creatinine is a reliable indicator of renal glomerular filtration rate (GFR) and is used to assess renal function [51].
The findings of the immunohistochemical examination suggest a strong concentration of T lymphocytes in the kidney tissues after CA/resuscitation-induced IRI in both non-resuscitated and resuscitated swine. The study emphasizes that the accumulation of T-cells is proportional to the damage observed in the tissue. B lymphocytes also appear to infiltrate the ischemic kidneys of both non-resuscitated and resuscitated swine, while there is no strong evidence correlating plasma cells with the damage. However, the extent to which the adaptive immune cells are involved in the induction of renal injury remains uncertain, and many questions about the mechanism of action of these cells remain, requiring further studies.
Tujjar O, Mineo G, Dell'Anna A, Poyatos-Robles B, Donadello K, Scolletta S, et al. Acute kidney injury after cardiac arrest. Crit Care. 2015;19:169.
Weisberg LS, Allgren RL, Genter FC, Kurnik BRC, The Auriculin Anaritide Acute Renal Failure Study Group. Cause of acute tubular necrosis affects its prognosis. Arch Intern Med. 1997;157:1833–8.
Roman-Pognuz E, Elmer J, Rittenberger JC, Guyette FX, Berlot G, De Rosa S, et al. Markers of cardiogenic shock predict persistent acute kidney injury after out of hospital cardiac arrest. Heart Lung. 2019;48:126–30.
Morozumi K, Takeda A, Otsuka Y, Horike K, Gotoh N, Narumi S, et al. Reviewing the pathogenesis of antibody-mediated rejection and renal graft pathology after kidney transplantation. Nephrology (Carlton). 2016;21(Suppl 1):4–8.
Jang HR, Rabb H. The innate immune response in ischemic acute kidney injury. Clin Immunol. 2009;130:41–50.
Bonavia A, Singbartl K. A review of the role of immune cells in acute kidney injury. Pediatr Nephrol. 2018;33:1629–39.
Le Dorze M, Legrand M, Payen D, Ince C. The role of the microcirculation in acute kidney injury. Curr Opin Crit Care. 2009;15:503–8.
Friedewald JJ, Rabb H. Inflammatory cells in ischemic acute renal failure. Kidney Int. 2004;66:486–91.
Bonventre JV, Zuk A. Ischemic acute renal failure: an inflammatory disease? Kidney Int. 2004;66:480–5.
Baban B, Hoda N, Malik A, Khodadadi H, Simmerman E, Vaibhav K, et al. Impact of cannabidiol treatment on regulatory T-17 cells and neutrophil polarization in acute kidney injury. Am J Physiol Ren Physiol. 2018;315:F1149–F1158.
Jiang L, Liu X-Q, Ma Q, Yang Q, Gao L, Li H-D, et al. hsa-miR-500a-3P alleviates kidney injury by targeting MLKL-mediated necroptosis in renal epithelial cells. FASEB J. 2019;33:3523–35.
Coca SG, Yusuf B, Shlipak MG, Garg AX, Parikh CR. Long-term risk of mortality and other adverse outcomes after acute kidney injury: a systematic review and meta-analysis. Am J Kidney Dis. 2009;53:961–73.
Iwano M, Plieth D, Danoff TM, Xue C, Okada H, Neilson EG. Evidence that fibroblasts derive from epithelium during tissue fibrosis. J Clin Invest. 2002;110:341–50.
Bechtel W, McGoohan S, Zeisberg EM, Müller GA, Kalbacher H, Salant DJ, et al. Methylation determines fibroblast activation and fibrogenesis in the kidney. Nat Med. 2010;16:544–50.
Cao Q, Harris DCH, Wang Y. Macrophages in kidney injury, inflammation, and fibrosis. Physiology (Bethesda). 2015;30:183–94.
Liu Y, Wang Y, Ding W, Wang Y. Mito-TEMPO alleviates renal fibrosis by reducing inflammation, mitochondrial dysfunction, and endoplasmic reticulum stress. Oxid Med Cell Longev. 2018. https://doi.org/10.1155/2018/5828120.
Yano T, Nozaki Y, Kinoshita K, Hino S, Hirooka Y, Niki K, et al. The pathological role of IL-18Rα in renal ischemia/reperfusion injury. Lab Invest. 2015;95:78–91.
Hayama T, Matsuyama M, Funao K, Tanaka T, Tsuchida K, Takemoto Y, et al. Beneficial effect of neutrophil elastase inhibitor on renal warm ischemia-reperfusion injury in the rat. Transplant Proc. 2006;38:2201–2.
Roelofs JJTH, Rouschop KMA, Leemans JC, Claessen N, de Boer AM, Frederiks WM, et al. Tissue-type plasminogen activator modulates inflammatory responses and renal function in ischemia reperfusion injury. J Am Soc Nephrol. 2006;17:131–40.
Turunen AJ, Fernández JA, Lindgren L, Salmela KT, Kyllönen LE, Mäkisalo H, et al. Activated protein C reduces graft neutrophil activation in clinical renal transplantation. Am J Transplant. 2005;5:2204–12.
Thiele JR, Zeller J, Kiefer J, Braig D, Kreuzaler S, Lenz Y, et al. A conformational change in C-reactive protein enhances leukocyte recruitment and reactive oxygen species generation in ischemia/reperfusion injury. Front Immunol. 2018;9:675.
Mizuno S, Nakamura T. Prevention of neutrophil extravasation by hepatocyte growth factor leads to attenuations of tubular apoptosis and renal dysfunction in mouse ischemic kidneys. Am J Pathol. 2005;166:1895–905.
Kyriakopoulos G, Tsaroucha AK, Valsami G, Lambropoulou M, Kostomitsopoulos N, Christodoulou E, et al. Silibinin improves TNF-α and M30 expression and histological parameters in rat kidneys after hepatic ischemia/reperfusion. J Invest Surg. 2018;31:201–9.
Rouschop KMA, Roelofs JJTH, Claessen N, da Costa MP, Zwaginga J-J, Pals ST, et al. Protection against Renal ischemia reperfusion injury by CD44 disruption. J Am Soc Nephrol. 2005;16:2034–43.
Kokura S, Wolf RE, Yoshikawa T, Ichikawa H, Neil Granger D, Yee AT. Endothelial cells exposed to anoxia/reoxygenation are hyperadhesive to T-lymphocytes: kinetics and molecular mechanisms. Microcirculation. 2000;7:13–23.
Lu CY, Penfield JG, Kielar ML, Vazquez MA, Jeyarajah DR. Hypothesis: Is renal allograft rejection initiated by the response to injury sustained during the transplant process. Kidney Int. 1999;55:2157–68.
Lemay S, Rabb H, Postler G, Singh AK. Prominent and sustained up-regulation of gp130-signaling cytokines and the chemokine MIP-2 in murine renal ischemia-reperfusion injury. Transplantation. 2000;69:959–63.
Mikolajczyk TP, Nosalski R, Szczepaniak P, Budzyn K, Osmenda G, Skiba D, et al. Role of chemokine RANTES in the regulation of perivascular inflammation, T-cell accumulation, and vascular dysfunction in hypertension. FASEB J. 2016;30:1987–99.
Rabb H, Daniels F, O'Donnell M, Haq M, Saba SR, Keane W, et al. Pathophysiological role of T lymphocytes in renal ischemia-reperfusion injury in mice. Am J Physiol Ren Physiol. 2000;279:F525–F531.
Richards JA, Bucsaiova M, Hesketh EE, Ventre C, Henderson NC, Simpson K, et al. Acute liver injury is independent of B cells or immunoglobulin M. PLoS ONE. 2015;10:e0138688.
Bosio CM, Gardner D, Elkins KL. Infection of B cell-deficient mice with CDC 1551, a clinical isolate of Mycobacterium tuberculosis: delay in dissemination and development of lung pathology. J Immunol. 2000;164:6417–25.
Cronkite DA, Strutt TM. The regulation of inflammation by innate and adaptive lymphocytes. J Immunol Res. 2018;2018:1467538.
Rivera A, Chen C-C, Ron N, Dougherty JP, Ron Y. Role of B cells as antigen-presenting cells in vivo revisited: antigen-specific B cells are essential for T cell expansion in lymph nodes and for systemic T cell responses to low antigen concentrations. Int Immunol. 2001;13:1583–93.
Yokota N, Daniels F, Crosson J, Rabb H. Protective effect of T cell depletion in murine renal ischemia-reperfusion injury. Transplantation. 2002;74:759–63.
Burne-Taney MJ, Ascon DB, Daniels F, Racusen L, Baldwin W, Rabb H. B Cell Deficiency Confers Protection from Renal Ischemia Reperfusion Injury. J Immunol. 2003;171:3210–5.
Burne-Taney MJ, Yokota-Ikeda N, Rabb H. Effects of combined T- and B-cell deficiency on murine ischemia reperfusion injury. Am J Transplant. 2005;5:1186–93.
Ascon DB, Lopez-Briones S, Liu M, Ascon M, Savransky V, Colvin RB, et al. Phenotypic and functional characterization of kidney-infiltrating lymphocytes in renal ischemia reperfusion injury. J Immunol. 2006;177:3380–7.
De Greef KE, Ysebaert DK, Dauwe S, Persy V, Vercauteren SR, Mey D, et al. Anti-B7-1 blocks mononuclear cell adherence in vasa recta after ischemia. Kidney Int. 2001;60:1415–27.
Li L, Huang L, Sung SSJ, Lobo PI, Brown MG, Gregg RK, et al. NKT cell activation mediates neutrophil IFN-γ production and renal ischemia-reperfusion injury. J Immunol. 2007;178:5899–911.
Lee HT, Kim M, Kim M, Kim NL, Billings FT 4th, D'Agati VD, et al. Isoflurane protects against renal ischemia and reperfusion injury and modulates leukocyte infiltration in mice. Am J Physiol Renal Physiol. 2007;293:F713–F722.
Kazamias P, Kotzampassi K, Koufogiannis D, Eleftheriadis E. Influence of enteral nutrition-induced splanchnic hyperemia on the septic origin of splanchnic ischemia. World J Surg. 1998;22:6–11.
Stavrou G, Arvanitidis K, Filidou E, Fotiadis K, Grosomanidis V, Ioannidis A, et al. Combined enteral and parenteral glutamine supplementation in endotoxemic swine: effects on portal and systemic circulation levels. Med Princ Pract. 2019;27:570–8.
Jones EA, Shoskes DA. The effect of mycophenolate mofetil and polyphenolic bioflavonoids on renal ischemia reperfusion injury and repair. J Urol. 2000;163:999–1004.
Suleiman M, Cury PM, Pestana JOM, Burdmann EA, Bueno V. FTY720 prevents renal T-cell infiltration after ischemia/reperfusion injury. Transplant Proc. 2005;37:373–4.
Burne MJ, Daniels F, El Ghandour A, Mauiyyedi S, Colvin RB, O'Donnell MP, et al. Identification of the CD4+ T cell as a major pathogenic factor in ischemic acute renal failure. J Clin Invest. 2001;108:1283–90.
Thadhani R, Pascual M, Bonventre JV. Acute renal failure. N Engl J Med. 1996;334:1448–60.
Yokota N, Burne-Taney M, Racusen L, Rabb H. Contrasting roles for STAT4 and STAT6 signal transduction pathways in murine renal ischemia-reperfusion injury. Am J Physiol Renal Physiol. 2003;285:F319–F325.
Kelly KJ, Williams WW Jr, Colvin RB, Meehan SM, Springer TA, Gutiérrez-Ramos JC, et al. Intercellular adhesion molecule-1-deficient mice are protected against ischemic renal injury. J Clin Invest. 1996;97:1056–63.
Okada S, Shikata K, Matsuda M, Ogawa D, Usui H, Kido Y, et al. Intercellular adhesion molecule-1-deficient mice are resistant against renal injury after induction of diabetes. Diabetes. 2003;52:2586–93.
Springer TA. Adhesion receptors of the immune system. Nature. 1990;346:425–34.
O'Donnell MP, Burne M, Daniels F, Rabb H. Utility and limitations of serum creatinine as a measure of renal function in experimental renal ischemia-reperfusion injury. Transplantation. 2002;73:1841–4.
We thank our colleagues in AHEPA General Hospital, Aristotle University of Thessaloniki, for their help in carrying out the experiment and all the contributors who contributed to the collection and analysis of the tissue samples and to the writing of this study.
No funding.
Faculty of Medicine, School of Health Sciences, Aristotle University of Thessaloniki, Gianni Chalkidi 45, Charilaou, 54249, Thessaloniki, Greece
Maria Tsivilika
1st Department of Internal Medicine, Faculty of Medicine, AHEPA University Hospital, Aristotle University of Thessaloniki, Thessaloniki, Greece
Eleni Doumaki
Department of Surgery, AHEPA University Hospital, Aristotle University of Thessaloniki, Thessaloniki, Greece
George Stavrou, Vasilis Grosomanidis & Katerina Kotzampassi
Department of Colorectal Surgery, Addenbrooke's Hospital, Cambridge, UK
George Stavrou
Laboratory of Histology- Embryology, Faculty of Medicine, School of Health Sciences, Aristotle University of Thessaloniki, Thessaloniki, Greece
Antonia Sioga, Soultana Meditskou & Theodora Papamitsou
Independent Researcher, Thessaloniki, Greece
Athanasios Maranginos
Despina Tsivilika
2nd Department of Urology of Aristotle University of Thessaloniki, Papageorgiou General Hospital, Thessaloniki, Greece
Dimitrios Stafylarakis
Antonia Sioga
Vasilis Grosomanidis
Soultana Meditskou
Katerina Kotzampassi
Theodora Papamitsou
TsM: Writing- Review- Editing. DEl: Visualization of the experiment. StG: Contribution to the preparation of the swine and the implementation of the experiment. SAnt: Supervision. GrV: Contribution to the preparation of the swine and the implementation of the experiment. MS: Supervision. MA: Immunohistochemical staining. TD: Statistical Analysis- Data Curation. SD: Writing. KK: Contribution to the preparation of the swine and the implementation of the experiment. PTh: Supervision. All authors read and approved the final manuscript.
Correspondence to Maria Tsivilika.
The experimental protocol was approved by the Department of Animal Care and Use Committee of the Greek Ministry of Agriculture, according to the European Community Guiding Principles for the Care and Use of Animals, (EU Directive 2010/63/EU, Protocol No.110942/756/2015).
The authors report no competing interests. The authors alone are responsible for the content and writing of this article.
Tsivilika, M., Doumaki, E., Stavrou, G. et al. The adaptive immune response in cardiac arrest resuscitation induced ischemia reperfusion renal injury. J of Biol Res-Thessaloniki 27, 15 (2020). https://doi.org/10.1186/s40709-020-00125-2
Ischemia reperfusion
Renal injury
Acute tubular necrosis
Harm Reduction Journal
Toxicity classification of e-cigarette flavouring compounds based on European Union regulation: analysis of findings from a recent study
Konstantinos Farsalinos (ORCID: orcid.org/0000-0001-6839-4710) &
George Lagoumintzis2
Harm Reduction Journal volume 16, Article number: 48 (2019) Cite this article
A recent study raised concerns about the toxicity of e-cigarette liquids by reporting the presence of 14 flavouring chemicals with a toxicity classification. However, the relevant toxicity classification was not estimated according to the measured concentrations. The purpose of this study was to calculate the toxicity classification for different health hazards for all the flavouring chemicals at the maximum concentrations reported.
The analysis was based on the European Union Classification Labelling and Packaging regulation. The concentration of each flavouring chemical was compared with the minimum concentration needed to classify it as toxic. Additionally, toxicity classification was examined for a theoretical e-cigarette liquid containing all flavouring chemicals at the maximum concentrations reported.
There was at least one toxicity classification for all the flavouring chemicals, with the most prevalent classifications related to skin, oral, eye and respiratory toxicities. One chemical (methyl cyclopentenolone) was found at a maximum concentration 150.7% higher than that needed to be classified as toxic. For the rest, the maximum reported concentrations were 71.6 to > 99.9% lower than toxicity concentrations. A liquid containing all flavouring compounds at the maximum concentrations would be classified as toxic for one category only due to the presence of methyl cyclopentenolone; a liquid without methyl cyclopentenolone would have 66.7 to > 99.9% lower concentrations of flavourings than those needed to be classified as toxic.
The vast majority of flavouring compounds in e-cigarette liquids as reported in a recent study were present at levels far lower than needed to classify them as toxic. Since exceptions exist, regulatory monitoring of liquid composition is warranted.
Smoking is the foremost risk factor for many human diseases and promotes the initiation as well as the progression of potentially lethal illnesses such as chronic obstructive pulmonary disease, cardiovascular disease and lung cancer [1]. While pharmacotherapies for smoking cessation have been developed for several years, their popularity and success rate are limited [2,3,4]. As a result, tobacco harm reduction products have been developed, with electronic cigarettes (e-cigarettes) being the most popular and widely used globally.
E-cigarette devices consist of a battery with integrated electronics and an atomizer that includes a wick, a heating element and a liquid storage space. Typically, e-cigarette liquids contain water, nicotine, vegetable glycerin (VG), propylene glycol (PG) and a mix of flavouring additives in variable concentrations, in order to achieve the desired taste while vaping [5]. Today, there is a vast choice of e-cigarette liquids, with a wide range of flavourings and nicotine levels. Besides the traditional tobacco-like flavours, consumers can choose among a variety of flavours such as fruits, sweets, drinks and beverages, and many more. Flavouring additives are used because e-cigarettes are almost flavourless without them. Surveys have shown that the vast majority of e-cigarette users use flavoured liquids, change flavours frequently, and report that flavour variability is important in their effort to quit and stay off cigarettes [6, 7].
Almost all flavouring chemicals are substances Generally Recognized As Safe (GRAS) and approved for human consumption through the oral route. While this does not substantiate safety for inhalation, food-approved flavourings are the only source for flavouring compounds used in e-cigarettes. A recent study by Vardavas et al. raised concerns about the presence of flavouring additives in e-cigarette liquids [8]. The study reported the analysis of 122 samples identifying several chemical compounds that are classified according to health hazards, including classification as respiratory irritants. This raised the possibility for toxicity and the legal requirement to include appropriate labelling for toxicity based on established European Union (EU) regulations. The authors noted that the liquids tested did not comply with the current EU regulations on e-cigarettes (Tobacco Products Directive) which dictates that "with the exception of nicotine, only ingredients that do not pose a risk to human health in heated or unheated form should be used in the liquid" [9]. There are established methods of identifying and classifying the toxicity of chemicals and mixtures based on the European Chemicals Agency Classification Labelling and Packaging (CLP) regulation, which are relevant to all products available for human consumption, including e-cigarette liquids [10, 11]. The toxicity characterization depends on the toxicity classification of the compounds and the concentration of the chemical in the mixture, in compliance with a basic toxicological principle that the amount of exposure determines the toxicity [12]. Although the flavouring chemicals were identified and quantified, the study did not calculate the potential toxicity and relevant toxicity classification based on the concentrations of the chemicals. 
Therefore, the purpose of this study was to examine the toxicity classification for different health hazards for all the chemicals at the maximum concentrations, as reported by Vardavas et al., and to determine if there is a legal requirement to include warning labels to the products according to flavouring levels, as determined by established regulation.
Toxicity classification methodology
Toxicity classification for all chemicals reported by Vardavas et al. was sought in the classification and labelling information database of the European Chemicals Agency [13]. The EU provides clear guidance on the estimation of toxicity of chemicals and mixtures through the CLP regulation [10, 11]. Each chemical is classified according to different hazards. There are mainly 3 types of hazard classes: physical hazards, health hazards and environmental hazards [14]. Based on the type of hazard and toxicity classification, specific hazard statements are required each of which has a specific code [15].
Health hazards include acute toxicity for oral, dermal and inhalation exposure, with chemicals being allocated to one of four toxicity categories (Category 1 to 4) according to specific numeric criteria. For these health hazards, acute toxicity values are expressed as approximate LD50 values or as acute toxicity estimates (ATE) from experimental data [10]. The classification categories are defined according to dose cut-off values of chemicals (in mg/kg body weight) causing toxicity in animals, with a higher dose needed to cause toxicity corresponding to a lower toxicity classification. E-cigarette liquids are mixtures, with flavouring chemicals diluted in non-toxic solvents (PG and VG). Therefore, the method of classification of mixtures for toxicity was used to identify the toxicity classification of each chemical at the concentrations reported by Vardavas et al. Since test data on the mixture itself or similar mixtures are not available, the classification was based on calculation thresholds. Two types of analyses were performed. In the first, each compound reported in the study by Vardavas et al. [8] was assumed to be the only component of the mixture, dissolved in non-toxic solvents (PG and VG) at the maximum levels reported. A limitation of this method is that e-cigarette liquids contain more than one flavouring chemical that could have a toxicity classification; unfortunately, the previous study did not provide information on the composition of each of the liquids tested. Thus, to address this limitation, we calculated the toxicity estimate for a hypothetical final solution (e-cigarette liquid) containing all the flavouring chemicals at the maximum concentrations reported by Vardavas et al. This is performed using the additivity formula [10], which sums, for each hazard classification, the ratio of each ingredient's concentration to its acute toxicity estimate (ATE), in this case for all compounds at the maximum concentrations reported by Vardavas et al.
While it is unlikely that an e-cigarette liquid would contain all these flavouring chemicals at the maximum reported concentrations, it provides an estimate of the worst-possible case scenario based on the study findings. Thus, to estimate the hazard classification for mixtures, we used the following formula:
$$ \frac{100}{ATE_{mix}}=\sum\limits_{n}\frac{C_i}{ATE_i} $$
where ATEmix is the acute toxicity estimate of the mixture containing a specific concentration of the chemical, n is the number of ingredients (one ingredient for the analysis of each compound separately and sum of all ingredients in the analysis of a liquid containing all ingredients in maximum concentrations), Ci is the concentration of the chemical i in the mixture, and ATEi is the converted acute toxicity point estimate of chemical i. Since toxicity classification to different categories is based on a range of acute toxicity estimates, we used the converted acute toxicity point estimates as recommended by the CLP regulation in the calculations [10]. It should be clarified that lower ATEi represents lower exposure dose needed to cause toxicity and thus higher toxicity classification (i.e. more toxic). Similarly, lower ATEmix represents lower exposure dose of the mixture needed to cause toxicity and thus higher toxicity.
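The additivity formula can be computed directly. The sketch below is illustrative only; the concentrations and point estimates are placeholder values, not figures from Vardavas et al. or the CLP tables:

```python
def ate_mix(components):
    """CLP additivity formula: 100 / ATE_mix = sum_i (C_i / ATE_i).
    components: (C_i, ATE_i) pairs, where C_i is the % concentration of
    ingredient i and ATE_i its converted acute-toxicity point estimate
    (mg/kg body weight). Lower ATE_mix means a more toxic mixture."""
    return 100.0 / sum(c / ate for c, ate in components)

# Hypothetical mixture: two flavourings diluted in non-toxic PG/VG
# (the non-toxic solvents contribute no term to the sum):
mixture = [(0.5, 300.0), (0.2, 2000.0)]  # (C_i in %, ATE_i in mg/kg)
print(round(ate_mix(mixture), 1))
```

Because the flavourings are present at low percent concentrations, the resulting ATEmix is far larger than either ingredient's ATE, i.e. the diluted mixture requires a much higher exposure dose to cause toxicity, which is then compared against the category cut-off values.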
For other health and for environmental hazards, such as skin corrosion and irritation, eye irritation, respiratory irritation and toxicity to aquatic life, percent concentrations of the chemical are used to determine the different toxicity categories (Table 2). For these hazards, the maximum concentrations reported by Vardavas et al. were used. Separate analyses were performed considering that each chemical represents a unique mixture with the chemical present at the maximum concentration and assuming that an e-cigarette liquid contains all ingredients at the maximum concentrations reported. In the latter case, the concentrations of each compound having a specific toxicity classification were added in order to calculate the toxicity classification for the final mixture.
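For these concentration-based hazards, the classification step reduces to summing the percent concentrations of all ingredients carrying a given classification and comparing the total with that hazard's concentration limit. A minimal sketch, using an assumed illustrative limit rather than the actual CLP generic concentration limits, which are hazard- and category-specific:

```python
# Assumed, illustrative generic concentration limit (% w/w) for one
# hazard class; the real CLP limits differ per hazard and category.
LIMIT_PERCENT = 10.0

def mixture_classified(concentrations_percent, limit=LIMIT_PERCENT):
    """Classify a mixture for a concentration-based hazard by summing
    the % concentrations of the ingredients carrying that classification
    and comparing the total with the concentration limit."""
    return sum(concentrations_percent) >= limit

# Hypothetical maximum concentrations of three co-classified flavourings:
print(mixture_classified([0.8, 0.3, 1.1]))  # total 2.2% -> not classified
print(mixture_classified([6.0, 5.0]))       # total 11% -> classified
```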
Table 1 displays the chemicals (n = 14) and the maximum concentrations reported by Vardavas et al., as well as their toxicity classification according to the EU CLP [15,16,17,18,19,20,21,22,23,24,25,26,27,28]. All chemicals were approved to be used as food flavourings, and their respective Flavor Extract Manufacturers Association (FEMA) GRAS numbers are also displayed in Table 1. There was at least one toxicity classification for all the flavouring chemicals. The most prevalent health hazards were related to skin (5 chemicals classified as skin irritants and 4 classified as causing allergic skin reactions), oral (6 chemicals), and eye (5 chemicals) and respiratory (3 for respiratory irritation and 2 for allergy, asthma symptoms or breathing difficulties) toxicity. While 3 chemicals were classified as flammable at ≤ 60 °C in liquid and vapour form, e-cigarette liquids are obviously not flammable even when being heated to the temperatures of evaporation during use. Thus, no further analysis was performed for this hazard category. It should be noted that ethyl hexanoate was only classified as flammable, so it was excluded from further analysis.
Table 1 Hazard and hazard category classifications of chemicals reported by Vardavas et al. and their corresponding FEMA numbers
Table 2 presents the toxicity classification calculations for each compound separately, using the maximum concentrations reported by Vardavas et al. [8]. Only methyl cyclopentenolone was found at a maximum concentration that would result in toxicity classification for one hazard (H334 Category 1, may cause allergy or asthma symptoms or breathing difficulties if inhaled). All other compounds were found at maximum concentrations far lower than the concentrations needed to classify them as toxic. The difference between the minimum concentrations needed to classify a solution as toxic and the maximum concentrations reported by Vardavas et al. ranged from approximately 72% (for ethyl maltol) to > 99.9% (a 250,000-fold lower concentration for limonene).
Table 2 Toxicity classification calculations for each flavouring compound using the maximum concentrations reported by Vardavas et al.
Table 3 presents the toxicity classification for the worst case scenario of an e-cigarette liquid containing all the flavouring compounds reported by Vardavas et al., at the maximum concentrations found. The final liquid would need to be classified as H334 Category 1 (may cause allergy or asthma symptoms or breathing difficulties if inhaled). To determine whether the classification was solely based on the maximum concentration of methyl cyclopentenolone, we performed another calculation adding the concentrations and ATEs of all flavouring compounds besides methyl cyclopentenolone. The findings are presented in Table 4. The final liquid would not be classified as toxic for any health or environmental hazard.
Table 3 Toxicity classification of a mixture containing all flavouring chemicals at the maximum concentrations reported by Vardavas et al.
Table 4 Toxicity classification of a mixture containing all flavouring chemicals at the maximum concentrations reported by Vardavas et al. except methyl cyclopentenolone
The study presented a risk assessment analysis of previously reported findings of flavouring chemicals in e-cigarette liquids. The analysis was based on established methods of classifying toxicity determined by relevant EU regulations and according to official toxicity classifications. The study found that only one flavouring compound (methyl cyclopentenolone) was present at a maximum concentration that would result in toxicity classification and the need to introduce specific warning labels based on established regulations. For all other compounds, the maximum concentrations were far below the levels needed to result in any toxicity classification. Even in the unlikely scenario that an e-cigarette liquid would contain all the flavouring compounds at the maximum reported concentrations, only methyl cyclopentenolone was present in sufficient concentration to classify the mixture as toxic.
It is not uncommon for many chemicals used for human consumption to be classified as toxic. Characteristically, ethyl vanillin, a very common flavouring used in food products, has a toxicity classification for oral intake (harmful if swallowed), a toxicity relevant to the intended route of intake (ingestion). Still, it is widely used in the food industry, with the annual production estimated at 44 tonnes in Europe and 330 tonnes in the USA [29]. This is because any toxicity classification is not based on the presence of a chemical alone but on the amount used in the final product and how this compares with concentrations associated with toxicity. Similarly, it is not unexpected that e-cigarette liquids contain chemicals that are classified for toxicity since flavourings used in these products are derived from the food industry. While the EU dictates that no chemical posing health risk (besides nicotine) should be used in e-cigarette liquids, it is expected that the same principles are applied to e-cigarettes as to all other consumer products (e.g., food). The previously published analysis by Vardavas et al. did not calculate the potential toxicity and relevant toxicity classification based on the concentrations of the chemicals. Their methodology would not be appropriate for other consumer products, and the current regulatory framework for chemical compounds used in consumer products makes no qualitative or quantitative distinction in the toxicity evaluation between e-cigarette liquids and other products, including food products.
Our analysis identified that the maximum concentration found for methyl cyclopentenolone was high enough to be classified as toxic. Previous studies have also identified that some e-cigarette liquids contain flavouring chemicals at levels exceeding the Maximized Survey-Derived Intake (MSDI) levels set by the EU or the USA or the recommended levels in food products [30, 31]. These findings indicate that some manufacturers do not conform to proper manufacturing standards and introduce high levels of certain flavouring chemicals into e-cigarette liquids. While only one chemical at its maximum concentration was observed herein, it is important to implement proper regulatory and monitoring frameworks so that product quality and safety are ensured.
Limitations of the study include the lack of information about the full composition of each e-cigarette liquid, which would have allowed a toxicological assessment of the final product rather than of each compound separately. To overcome this, we analyzed a hypothetical, highly unlikely, scenario in which a liquid would contain all chemicals at the maximum concentrations reported by Vardavas et al. Even in this case, only methyl cyclopentenolone was associated with a toxicity classification. Additionally, Vardavas et al. reported finding 246 different flavours and additives in the 122 samples tested [8]. A recent study by the same group [32] presented more chemicals found in these e-cigarette liquids but did not provide information on the concentrations found; thus, it was impossible to calculate the toxicity classifications for all these compounds. Finally, despite using the current regulatory framework for chemical safety (CLP), this does not imply that e-cigarettes are harmless. Studies specifically addressing the route of exposure from e-cigarette use and the process of aerosol production (liquid heating and the potential presence of thermal degradation products) are needed to examine the safety/risk profile of e-cigarettes. While the products are sold in liquid form, they are evaporated and the generated aerosol is subsequently used by the consumer. Additionally, the CLP regulation is mainly focused on acute toxicity, while e-cigarettes are used chronically. Thus, the study cannot evaluate the safety/risk profile of these products from chronic intended use. However, it provides an assessment of the products' compliance with established chemical safety regulations in the EU, of the legal requirements to include warning labelling based on established toxicity regulation, and of compliance with the TPD regulation, which dictates that no compound with a toxicity classification (besides nicotine) should be added to e-cigarette liquids.
In conclusion, a risk assessment analysis based on the EU CLP regulatory framework identified that only one of the reported flavouring chemicals found in e-cigarette liquids was present at levels high enough to classify it as toxic. For the rest, the reported concentrations were far lower than those needed to classify them as toxic. Even if a liquid contained all the chemical compounds at the maximum reported concentrations, only that one compound would be associated with a toxicity classification. It is important for a proper regulatory framework to continuously monitor the composition and quality of e-cigarette products available in the market and ensure that appropriate standards are used. Such toxicological surveillance of e-cigarette liquids can be valuable in identifying, removing or adequately diluting potentially harmful compounds as part of standard regulatory practice. The relative simplicity of the chemistry of e-liquids and e-cigarette aerosols makes this approach practically feasible, whereas it is totally implausible for combustible tobacco products.
ATE:
Acute toxicity estimate
CLP:
Classification Labelling and Packaging
E-cigarettes:
Electronic cigarettes
FEMA:
Flavor Extract Manufacturers Association
GRAS:
Generally Recognized As Safe
LD50:
Lethal dose 50
MSDI:
Maximized Survey-Derived Intake
PG:
Propylene glycol
VG:
Vegetable glycerin
Polosa R, Cibella F, Caponnetto P, et al. Health impact of E-cigarettes: a prospective 3.5-year study of regular daily users who have never smoked. Sci Rep. 2017;7:1–9.
Moore D, Aveyard P, Connock M, Wang D, Fry-Smith A, Barton P. Effectiveness and safety of nicotine replacement therapy assisted reduction to stop smoking: systematic review and meta-analysis. BMJ. 2009;338:b1024.
Kotz D, Brown J, West R. 'Real-world' effectiveness of smoking cessation treatments: a population study. Addiction. 2014;109:491–9.
Rosen LJ, Galili T, Kott J, Goodman M, Freedman LS. Diminishing benefit of smoking cessation medications during the first year: a meta-analysis of randomized controlled trials. Addiction. 2018;113:805–16.
Farsalinos K. E-cigarettes: an aid in smoking cessation, or a new health hazard? Ther Adv Respir Dis. 2018;12:1753465817744960.
Dawkins L, Turner J, Roberts A, Soar K. 'Vaping' profiles and preferences: an online survey of electronic cigarette users. Addiction. 2013;108:1115–25.
Farsalinos KE, Romagna G, Tsiapras D, Kyrzopoulos S, Spyrou A, Voudris V. Impact of flavour variability on electronic cigarette use experience: an internet survey. Int J Environ Res Public Health. 2013;10:7272–82.
Vardavas C, Girvalaki C, Vardavas A, et al. Respiratory irritants in e-cigarette refill liquids across nine European countries: a threat to respiratory health? Eur Respir J. 2017;50:1701698.
Directive 2014/40/EU of the European Parliament and of the Council of 3 April 2014 on the approximation of the laws, regulations and administrative provisions of the Member States concerning the manufacture, presentation and sale of tobacco and related products and repealing Directive 2001/37/EC. Available from: http://eur-lex.europa.eu/legal-content/en/TXT/?uri=CELEX%3A32014L0040 (accessed on 10 Dec 2018).
European Chemicals Agency (ECHA). Guidance on the Application of the CLP Criteria. Version 5.0, 2017. Available at: https://echa.europa.eu/documents/10162/23036412/clp_en.pdf (accessed on 10 Dec 2018).
Borzelleca JF. Paracelsus: herald of modern toxicology. Toxicol Sci. 2000;53:2–4.
European Chemicals Agency (ECHA). C&L Inventory. Available at: https://www.echa.europa.eu/information-on-chemicals/cl-inventory-database (accessed on 22 Nov 2018).
European Chemicals Agency (ECHA). Hazard class table. Available at: https://echa.europa.eu/support/mixture-classification/hazard-class-table (accessed on 26 Nov 2018).
Health and Safety Authority of Ireland. CLP Regulation (EC) No. 1272 / 2008 on the classification, labelling and packaging of substances and mixtures. Available at: https://www.hsa.ie/eng/Publications_and_Forms/Publications/Chemical_and_Hazardous_Substances/CLP_Regulation_No_1272-2008_A1_Poster_II.pdf (accessed on 26 Nov 2018).
European Chemicals Agency C&L Inventory. Summary of Classification and Labelling. Ethyl maltol. Available at: https://www.echa.europa.eu/web/guest/information-on-chemicals/cl-inventory-database/-/discli/details/41980 (accessed on 19 Dec 2018).
European Chemicals Agency C&L Inventory. Summary of Classification and Labelling. Menthol. Available at: https://www.echa.europa.eu/web/guest/information-on-chemicals/cl-inventory-database/-/discli/details/67134 (accessed on 19 Dec 2018).
European Chemicals Agency C&L Inventory. Summary of Classification and Labelling. Linalool. Available at: https://www.echa.europa.eu/web/guest/information-on-chemicals/cl-inventory-database/-/discli/details/42664 (accessed on 11 Dec 2018).
European Chemicals Agency C&L Inventory. Summary of Classification and Labelling. Methyl cyclopentenolone. Available at: https://www.echa.europa.eu/web/guest/information-on-chemicals/cl-inventory-database/-/discli/details/52058 (accessed on 19 Dec 2018).
European Chemicals Agency C&L Inventory. Summary of classification and labelling. beta-damascone. Available at: https://www.echa.europa.eu/web/guest/information-on-chemicals/cl-inventory-database/-/discli/details/154046 (accessed on 19 Dec 2018).
European Chemicals Agency C&L Inventory. Summary of Classification and Labelling. Ethyl vanillin. Available at: https://www.echa.europa.eu/web/guest/information-on-chemicals/cl-inventory-database/-/discli/details/122281 (accessed on 19 Dec 2018).
European Chemicals Agency C&L Inventory. Summary of classification and labelling. b-Ionone. Available at: https://www.echa.europa.eu/web/guest/information-on-chemicals/cl-inventory-database/-/discli/details/25343 (accessed on 19 Dec 2018).
European Chemicals Agency C&L Inventory. Summary of classification and labelling. Acetyl pyrazine. Available at: https://www.echa.europa.eu/web/guest/information-on-chemicals/cl-inventory-database/-/discli/details/18496 (accessed on 19 Dec 2018).
European Chemicals Agency C&L Inventory. Summary of classification and labelling. a-Ionone. Available at: https://www.echa.europa.eu/web/guest/information-on-chemicals/cl-inventory-database/-/discli/details/78026 (accessed on 19 Dec 2018).
European Chemicals Agency C&L Inventory. Summary of classification and labelling. Ethyl hexanoate. Available at: https://www.echa.europa.eu/web/guest/information-on-chemicals/cl-inventory-database/-/discli/details/37293 (accessed on 19 Dec 2018).
European Chemicals Agency C&L Inventory. Summary of classification and labelling. 2,5 dimethylpyrazine. Available at: https://www.echa.europa.eu/web/guest/information-on-chemicals/cl-inventory-database/-/discli/details/8978 (accessed on 19 Dec 2018).
European Chemicals Agency C&L Inventory. Summary of classification and labelling. alpha-damascone. Available at: https://www.echa.europa.eu/web/guest/information-on-chemicals/cl-inventory-database/-/discli/details/59690 (accessed on 19 Dec 2018).
European Chemicals Agency C&L Inventory. Summary of classification and labelling. 3,4 dimethoxy benzaldehyde. Available at: https://www.echa.europa.eu/web/guest/information-on-chemicals/cl-inventory-database/-/discli/details/102136 (accessed on 19 Dec 2018).
European Chemicals Agency C&L Inventory. Summary of classification and labelling. Limonene. Available at: https://www.echa.europa.eu/web/guest/information-on-chemicals/cl-inventory-database/-/discli/details/135645 (accessed on 19 Dec 2018).
World Health Organization Food Additives Series: 48. Safety evaluation of certain food additives and contaminants. Hydroxy- and alkoxy-substituted benzyl derivatives. 2002. Available at: http://apps.who.int/iris/bitstream/10665/42501/1/9241660481_en.pdf (accessed on 26 Jan 2019).
Varlet V, Farsalinos K, Augsburger M, Thomas A, Etter JF. Toxicity assessment of refill liquids for electronic cigarettes. Int J Environ Res Public Health. 2015;12:4796–815.
Omaiye EE, McWhirter KJ, Luo W, Tierney PA, Pankow JF, Talbot P. High concentrations of flavor chemicals are present in electronic cigarette refill fluids. Sci Rep. 2019;9(1):2468.
Girvalaki C, Tzatzarakis M, Kyriakos CN, et al. Composition and chemical health hazards of the most common electronic cigarette liquids in nine European countries. Inhal Toxicol. 2018;30:361–9.
No funding was provided for this study.
Department of Cardiology, Onassis Cardiac Surgery Center, Syggrou 356, 17674, Kallithea, Greece
Konstantinos Farsalinos
Laboratory of Molecular Biology and Immunology, Department of Pharmacy, University of Patras, 26500, Rio, Greece
Konstantinos Farsalinos & George Lagoumintzis
National School of Public Health, Athens, Greece
George Lagoumintzis
KF analyzed the data and wrote the manuscript. GL was a major contributor in writing and editing the manuscript. Both authors read and approved the final manuscript.
Correspondence to Konstantinos Farsalinos.
The authors report no conflict of interest for the past 36 months. For the past 60 months, KF has published 2 studies funded by the non-profit association AEMSA and 1 study funded by the non-profit association Tennessee Smoke-Free Association.
Farsalinos, K., Lagoumintzis, G. Toxicity classification of e-cigarette flavouring compounds based on European Union regulation: analysis of findings from a recent study. Harm Reduct J 16, 48 (2019). https://doi.org/10.1186/s12954-019-0318-2
Solutions of Systems of Linear Equations
Consistent and Inconsistent Systems of Linear Equations
An $m\times n$ system of linear equations is
\begin{align*} \tag{*}
a_{1 1} x_1+a_{1 2}x_2+\cdots+a_{1 n}x_n& =b_1 \\
&\ \ \vdots \\
a_{m 1} x_1+a_{m 2}x_2+\cdots+a_{m n}x_n& =b_m,
\end{align*}
where $x_1, x_2, \dots, x_n$ are unknowns (variables) and $a_{i j}, b_k$ are numbers.
Thus an $m\times n$ system of linear equations consists of $m$ equations and $n$ unknowns $x_1, x_2, \dots, x_n$.
A system of linear equations is called homogeneous if the constants $b_1, b_2, \dots, b_m$ are all zero.
A solution of the system (*) is a sequence of numbers $s_1, s_2, \dots, s_n$ such that the substitution $x_1=s_1, x_2=s_2, \dots, x_n=s_n$ satisfies all the $m$ equations in the system (*).
The rank of a system of linear equations is the rank of its coefficient matrix.
For a given system of linear equations, there are only three possibilities for the solution set of the system: No solution (inconsistent), a unique solution, or infinitely many solutions.
The possibilities for the solution set of a homogeneous system are either a unique solution (the trivial, all-zero solution) or infinitely many solutions, since the zero vector is always a solution.
If $m < n$, then an $m\times n$ system is either inconsistent or it has infinitely many solutions.
If $m < n$, then an $m \times n$ homogeneous system has infinitely many solutions.
If a consistent $m\times n$ system of linear equations has rank $r$, then the system has $n-r$ free variables.
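The last statement can be checked numerically for a small example (a sketch using NumPy; the system below is an illustrative one, not from the text):

```python
import numpy as np

# A consistent 2x3 system: x1 + x2 + x3 = 3,  x1 - x2 = 0
A = np.array([[1.0,  1.0, 1.0],
              [1.0, -1.0, 0.0]])
b = np.array([3.0, 0.0])

n = A.shape[1]                      # number of unknowns
r = np.linalg.matrix_rank(A)        # rank of the coefficient matrix
r_aug = np.linalg.matrix_rank(np.column_stack([A, b]))

assert r == r_aug                   # equal ranks: the system is consistent
free_vars = n - r                   # number of free variables
print(free_vars)                    # -> 1, so infinitely many solutions
```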
Determine all possibilities for the solution set of the system of linear equations described below.
(a) A homogeneous system of $3$ equations in $5$ unknowns.
(b) A homogeneous system of $5$ equations in $4$ unknowns.
(c) A system of $5$ equations in $4$ unknowns.
(d) A system of $2$ equations in $3$ unknowns that has $x_1=1, x_2=-5, x_3=0$ as a solution.
(e) A homogeneous system of $4$ equations in $4$ unknowns.
(f) A homogeneous system of $3$ equations in $4$ unknowns.
(g) A homogeneous system that has $x_1=3, x_2=-2, x_3=1$ as a solution.
(h) A homogeneous system of $5$ equations in $3$ unknowns and the rank of the system is $3$.
(i) A system of $3$ equations in $2$ unknowns and the rank of the system is $2$.
(j) A homogeneous system of $4$ equations in $3$ unknowns and the rank of the system is $2$.
Determine all possibilities for the number of solutions of each of the systems of linear equations described below.
(a) A system of $5$ equations in $3$ unknowns and it has $x_1=0, x_2=-3, x_3=1$ as a solution.
(b) A homogeneous system of $5$ equations in $4$ unknowns and the rank of the system is $4$.
(The Ohio State University)
Determine all possibilities for the solution set of a homogeneous system of $2$ equations in $2$ unknowns that has a solution $x_1=1, x_2=5$.
For what value(s) of $a$ does the system have nontrivial solutions?
\begin{align*}
&x_1+2x_2+x_3=0\\
&-x_1-x_2+x_3=0\\
& 3x_1+4x_2+ax_3=0.
\end{align*}
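For a square homogeneous system such as this one, nontrivial solutions exist exactly when the determinant of the coefficient matrix vanishes. Since the determinant here is linear in $a$, two evaluations suffice to recover it and find the critical value (a numerical sketch using NumPy):

```python
import numpy as np

def coeff_matrix(a):
    """Coefficient matrix of the homogeneous system, with parameter a."""
    return np.array([[ 1.0,  2.0, 1.0],
                     [-1.0, -1.0, 1.0],
                     [ 3.0,  4.0, a  ]])

# det is linear in a, so evaluate at two points to recover the line.
d0 = np.linalg.det(coeff_matrix(0.0))   # constant term
d1 = np.linalg.det(coeff_matrix(1.0))
slope = d1 - d0
root = -d0 / slope                      # value of a where det vanishes
print(round(root, 6))                   # nontrivial solutions exist only at this a
```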
Determine whether each of the systems of equations (or matrix equations) described below has no solution, a unique solution or infinitely many solutions, and justify your answer.
(a) \[\left\{
\begin{array}{c}
ax+by=c \\
dx+ey=f,
\end{array}
\right.
\] where $a,b,c,d,e,f$ are scalars satisfying $a/d=b/e=c/f$.
(b) $A \mathbf{x}=\mathbf{0}$, where $A$ is a non-singular matrix.
(c) A homogeneous system of $3$ equations in $4$ unknowns.
(d) $A\mathbf{x}=\mathbf{b}$, where the row-reduced echelon form of the augmented matrix $[A|\mathbf{b}]$ looks as follows:
\[\begin{bmatrix}
1 & 0 & -1 & 0 \\
0 &1 & 2 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}.\] (The Ohio State University)
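For part (d), the bottom row of the reduced matrix reads $0x_1+0x_2+0x_3=1$, which no choice of unknowns can satisfy. Equivalently, the rank of the coefficient part is smaller than the rank of the augmented matrix (a quick check with NumPy):

```python
import numpy as np

# Row-reduced echelon form of the augmented matrix [A | b] from part (d)
R = np.array([[1.0, 0.0, -1.0, 0.0],
              [0.0, 1.0,  2.0, 0.0],
              [0.0, 0.0,  0.0, 1.0]])

A = R[:, :3]                                # coefficient part
rank_A = np.linalg.matrix_rank(A)           # rank of A
rank_aug = np.linalg.matrix_rank(R)         # rank of [A | b]
print(rank_A < rank_aug)                    # -> True: the system is inconsistent
```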
Linear Algebra Version 0 (11/15/2017)
Introduction to Matrices
Elementary Row Operations
Gauss-Jordan Elimination
Linear Combination and Linear Independence
Nonsingular Matrices
Inverse Matrices
Subspaces in $\R^n$
Bases and Dimension of Subspaces in $\R^n$
General Vector Spaces
Subspaces in General Vector Spaces
Linear Independence of General Vectors
Bases and Coordinate Vectors
Dimensions of General Vector Spaces
Linear Transformation from $\R^n$ to $\R^m$
Linear Transformation Between Vector Spaces
Orthogonal Bases
Determinants of Matrices
Computations of Determinants
Introduction to Eigenvalues and Eigenvectors
Eigenvectors and Eigenspaces
Diagonalization of Matrices
The Cayley-Hamilton Theorem
Dot Products and Length of Vectors
Eigenvalues and Eigenvectors of Linear Transformations
Jordan Canonical Form
Category: Geometry Topology
Mathematics in Ancient and Medieval India by A.K.Bag
By A.K.Bag
The heritage of mathematics in ancient and medieval India
Posted in Geometry Topology
Geometric Qp Functions (Frontiers in Mathematics) by Jie Xiao
By Jie Xiao
This book documents the rich structure of the holomorphic Q function spaces, which are geometric in the sense that they transform naturally under conformal mappings, with particular emphasis on recent developments based on the interplay between geometric function theory, measure theory, and other branches of mathematical analysis, including potential theory, harmonic analysis, functional analysis, and operator theory. Largely self-contained, the book functions as a tutorial and reference work for advanced courses and research in conformal analysis, geometry, and function spaces.
Dirac Operators in Riemannian Geometry by Thomas Friedrich
By Thomas Friedrich
For a Riemannian manifold $M$, the geometry, topology and analysis are interrelated in ways that are widely explored in modern mathematics. Bounds on the curvature can have significant implications for the topology of the manifold. The eigenvalues of the Laplacian are naturally linked to the geometry of the manifold. For manifolds that admit spin (or $\textrm{spin}^\mathbb{C}$) structures, one obtains further information from equations involving Dirac operators and spinor fields. In the case of four-manifolds, for example, one has the remarkable Seiberg-Witten invariants. In this text, Friedrich examines the Dirac operator on Riemannian manifolds, in particular its connection with the underlying geometry and topology of the manifold. The presentation includes a review of Clifford algebras, spin groups and the spin representation, as well as a review of spin structures and $\textrm{spin}^\mathbb{C}$ structures. With this foundation established, the Dirac operator is defined and studied, with special attention to the cases of Hermitian manifolds and symmetric spaces. Then, certain analytic properties are established, including self-adjointness and the Fredholm property. An important link between the geometry and the analysis is provided by estimates for the eigenvalues of the Dirac operator in terms of the scalar curvature and the sectional curvature. Considerations of Killing spinors and solutions of the twistor equation on $M$ lead to results about whether $M$ is an Einstein manifold or conformally equivalent to one. Finally, in an appendix, Friedrich gives a concise introduction to the Seiberg-Witten invariants, which are a powerful tool for the study of four-manifolds. There is also an appendix reviewing principal bundles and connections.
This distinctive book with elegant proofs is suitable as a text for courses in advanced differential geometry and global analysis, and can serve as an introduction for further study in these areas. This edition is translated from the German edition published by Vieweg Verlag.
The Elements of Non-Euclidean Geometry by J. Coolidge
By J. Coolidge
Analytical Geometry by Barry Spain, W. J. Langford, E. A. Maxwell and I. N. Sneddon
By Barry Spain, W. J. Langford, E. A. Maxwell and I. N. Sneddon (Auth.)
Lectures On The h-Cobordism Theorem by John Milnor
By John Milnor
These lectures provide students and specialists with preliminary and useful material from university courses and seminars in mathematics. This set gives a new proof of the h-cobordism theorem that is different from the original proof presented by S. Smale.
The Princeton Legacy Library uses the latest print-on-demand technology to again make available previously out-of-print books from the distinguished backlist of Princeton University Press. These editions preserve the original texts of these important books while presenting them in durable paperback and hardcover editions. The goal of the Princeton Legacy Library is to vastly increase access to the rich scholarly heritage found in the thousands of books published by Princeton University Press since its founding in 1905.
Circles: A Mathematical View by Daniel Pedoe
By Daniel Pedoe
This revised edition of a mathematical classic originally published in 1957 will bring to a new generation of students the joy of investigating that simplest of mathematical figures, the circle. The author has supplemented this new edition with a special chapter designed to introduce readers to the vocabulary of circle concepts with which the readers of two generations ago were familiar. Readers of Circles need only be armed with paper, pencil, compass, and straight edge to find great pleasure in following the constructions and theorems. Those who think that geometry using Euclidean tools died out with the ancient Greeks will be pleasantly surprised to learn many interesting results that were only discovered in modern times. Beginners and experts alike will find much to enlighten them in chapters dealing with the representation of a circle by a point in three-space, a model for non-Euclidean geometry, and the isoperimetric property of the circle.
Thirteen Books of Euclid's Elements. Books X-XIII by Thomas L. Heath
By Thomas L. Heath
After learning either classics and arithmetic on the collage of Cambridge, Sir Thomas Little Heath (1861-1940) used his time clear of his task as a civil servant to put up many works near to historic arithmetic, either renowned and educational. First released in 1926 because the moment version of a 1908 unique, this ebook comprises the 3rd and ultimate quantity of his three-volume English translation of the 13 books of Euclid's parts, protecting Books Ten to 13. This special textual content should be of price to a person with an curiosity in Greek geometry and the background of arithmetic.
The foundations of geometry by Venema G.
By Venema G.
Poncelet's Theorem by Leopold Flatto
By Leopold Flatto
Poncelet's theorem is a famous result in algebraic geometry, dating to the early part of the nineteenth century. It concerns closed polygons inscribed in one conic and circumscribed about another. The theorem is of great depth in that it relates to a large and diverse body of mathematics. There are several proofs of the theorem, none of which is elementary. A particularly attractive feature of the theorem, which is easily understood but difficult to prove, is that it serves as a prism through which one can learn and appreciate a great deal of beautiful mathematics. The author's original research in queuing theory and dynamical systems figures prominently in the book. This book stresses the modern approach to the subject and contains much material not previously available in book form. It also discusses the relation between Poncelet's theorem and some aspects of queueing theory and mathematical billiards. The proof of Poncelet's theorem presented in this book relates it to the theory of elliptic curves and exploits the fact that such curves are endowed with a group structure. The book also treats the real and degenerate cases of Poncelet's theorem. These cases are interesting in themselves, and their proofs require some additional considerations. The real case is handled by employing notions from dynamical systems. The material in this book should be accessible to anyone who has taken the standard courses in undergraduate mathematics. To achieve this, the author has included in the book preliminary chapters dealing with projective geometry, Riemann surfaces, elliptic functions, and elliptic curves. The book also contains numerous figures illustrating various geometric concepts.
Generate multivariate normal

Pre-requisites: the univariate normal distribution and basic properties, such as the fact that linear combinations of normals are also normal, plus basic vector-matrix theory, multivariate calculus, and multivariate change of variable.

The multivariate normal, multinormal or Gaussian distribution is a generalization of the one-dimensional normal distribution to higher dimensions, and it is the most important distribution in multivariate statistics. It is a distribution for random vectors of correlated variables, where each vector element has a univariate normal distribution. One definition is that a random vector is said to be k-variate normally distributed if every linear combination of its k components has a univariate normal distribution. It is a common mistake to think that any set of normal random variables, when considered together, forms a multivariate normal distribution. The importance of the distribution derives mainly from the multivariate central limit theorem.

Such a distribution is specified by its mean and covariance matrix. These parameters are analogous to the mean (average or "center") and variance (standard deviation squared, or "width"/"spread") of the one-dimensional normal distribution. The mean is a coordinate in N-dimensional space that represents the location where samples are most likely to be generated, analogous to the peak of the bell curve. The (i, j) element of the covariance matrix is the covariance of x_i and x_j, and the diagonal (i, i) element is the variance of x_i. The covariance matrix must be symmetric and positive-semidefinite (a.k.a. nonnegative-definite) for proper sampling.

A standard recipe reduces the problem to univariate sampling. To generate a random vector from a multivariate normal distribution with a 1 x k mean vector mu and covariance matrix S: first, generate k random values from a (univariate) standard normal distribution to form a random vector Y; next, find a k x k matrix A such that A^T A = S (e.g. by Cholesky factorization); then mu + A^T Y is a draw from the desired distribution. Equivalently, the distribution has the stochastic representation X = m + AZ, where Z = (Z_1, ..., Z_k) are independent standard normal random variables, A is a matrix with A A^T = S, and m is the mean vector.

Most statistical environments provide this construction directly:

- NumPy: new code should use the multivariate_normal method of a default_rng() instance. If no shape is specified, a single (N-D) sample is returned; given a shape of, for example, (m, n, k), m*n*k samples are generated and packed in an m-by-n-by-k arrangement, so each entry out[i, j, ..., :] is an N-dimensional value drawn from the distribution. A tolerance parameter controls the check of the singular values of the covariance matrix (cov is cast to double before the check), and a further option controls the behavior when the covariance matrix is not positive semidefinite.
- R: mvrnorm() from the MASS package (shipped with R), or rmvnorm from the mvtnorm package, which also provides the corresponding density function and functions for simulating multivariate t distributions.
- MATLAB: mvnrnd; if mu is a vector, it is replicated to match the trailing dimension of Sigma.
- SAS: bivariate normal data can be generated using the DATA step; the %MVN macro generates multivariate normal data using the Cholesky root of the variance-covariance matrix; and the SIMNORMAL procedure supports the NUMREAL= option (NUMREAL stands for "number of realizations," the number of independent draws), which specifies the size of the simulated sample and can be used to generate multiple samples from the same multivariate normal population.

References: Papoulis, A., "Probability, Random Variables, and Stochastic Processes," 3rd ed., New York: McGraw-Hill, 1991; Duda, R. O., Hart, P. E., and Stork, D. G., "Pattern Classification," 2nd ed., New York: Wiley, 2001.
rnorm(100, mean = 3, sd = 2) For the higher dimensional case you want a multivariate normal distribution instead. In probability theory and statistics, the multivariate normal distribution, multivariate Gaussian distribution, or joint normal distribution is a generalization of the one-dimensional (univariate) normal distribution to higher dimensions. Such a distribution is specified by its mean and We need to somehow use these to generate n-dimensional gaussian random vectors. Covariance indicates the level to which two variables vary together. Normal distribution, also called gaussian distribution, is one of the most widely encountered distri b utions. Bivariate normal data can be generated using the DATA step. and covariance parameters, returning a "frozen" multivariate normal. each sample is N-dimensional, the output shape is (m,n,k,N). The multivariate normal distribution is often used to … The following are 17 code examples for showing how to use numpy.random.multivariate_normal().These examples are extracted from open source projects. Now moment generating function of some $Z\sim N(\mu,\sigma^2)$ is $$M_Z(s)=E[e^{s Z}]=e^{\mu s+\sigma^2s^2/2}\quad,\,s\in\mathbb R$$ Using this fact, we have It must be symmetric and the diagonal). 1 Random Vector The following code helped me to solve,when given a vector what is the likelihood that vector is in a multivariate normal distribution. rv = multivariate_normal (mean=None, scale=1) Frozen object with the same methods but holding the given mean and covariance fixed. Generating Multivariate Normal Distribution in R Install Package "MASS" Create a vector mu. You also need to know the basics of matrix algebra (e.g. this simulation function produces a sort of multivariate tobit model. each sample is N-dimensional, the output shape is (m,n,k,N). Setting the parameter mean to … Generate random numbers from the same multivariate normal distribution. 
random variable: rv = multivariate_normal(mean=None, scale=1) Frozen object with the same methods but holding the given mean and covariance fixed. Draw random samples from a multivariate normal distribution. In other words, each entry out[i,j,...,:] is an N-dimensional This geometrical property can be seen in two dimensions by plotting samples, . The element is the variance of (i.e. Because all of the samples are drawn from the same distribution, one way to generate k samples is to generate … univariate normal distribution. undefined and backwards compatibility is not guaranteed. The first idea to generate variates from a truncated multivariate normal distribution is to draw from the untruncated distribution using rmvnorm() in the mvtnorm package and to accept only those samples inside the support region (i.e., rejection sampling). covariance matrix. If not, Covariance indicates the level to which two variables vary together. For a multivariate normal distribution it is very convenient that conditional expectations equal linear least squares projections The typical PDF you see is: \begin{equation*} p(y | \mu, \Sigma) = \frac{1}{(2 \pi)^{d / 2} |\Sigma|^{1/2}} e^{-\frac{1}{2}(y - \mu)^T \Sigma^{-1} (y - \mu)} \end{equation*} where \(d\) is the dimension of the random vector. This is not the case. import numpy as np from scipy.stats import multivariate_normal data with all vectors d= np.array([[1,2,1],[2,1,3],[4,5,4],[2,2,1]]) The multivariate normal distribution is a generalization of the univariate normal distribution to two or more variables. value drawn from the distribution. If … The %MVN macro generates multivariate normal data using the Cholesky root of the variance-covariance matrix. In fact, it is possible to construct random vectors that are not MV-N, but whose individual elements have normal distributions. The multivariate normal, multinormal or Gaussian distribution is a generalization of the one-dimensional normal distribution to higher dimensions. 
C-Types Foreign Function Interface (numpy.ctypeslib), Optionally SciPy-accelerated routines (numpy.dual), Mathematical functions with automatic domain (numpy.emath). Duda, R. O., Hart, P. E., and Stork, D. G., "Pattern Definition of degenerate multivariate normal distribution. This is Here's how we'll do this: 1. Like the normal distribution, the multivariate normal is defined by sets of parameters: the mean vector μ, which is the expected value of the distribution; and the covariance matrix Σ, which measures how dependend two random variables are and how they change … This is standard deviation: { 'warn', 'raise', 'ignore' }, optional. the generation of multiple samples is from the multivariate normal distribution, and it's a part in thebsimulation, I have in each simulation to use the new generate samples. the shape is (N,). generalization of the one-dimensional normal distribution to higher Basic Multivariate Normal Theory [Prerequisite probability background: Univariate theory of random variables, expectation, vari-ance, covariance, moment generating function, independence and normal distribution. We know that we can generate uniform random numbers (using the language's built-in random functions). There are packages that do this automatically, such as the mvtnorm package available from CRAN, but it is easy and instructive to do from first principles. The covariance matrix This video shows how to generate a random sample from a multivariate normal distribution using Statgraphics 18. undefined and backwards compatibility is not guaranteed. Behavior when the covariance matrix is not positive semidefinite. element is the covariance of and . The multivariate normal distribution is often the assumed distribution underlying data samples and it is widely used in pattern recognition and classiication 2]]3]]6]]7]. If no shape is specified, a single (N-D) sample is returned. 
Because into a vector Z ˘N (0;I); then the problem of sampling X from the multivariate normal N ( ;) reduces to –nding a matrix A for with AAT = : Cholesky Factorization Among all such matrix A such that AAT = ; a lower triangular matrix is particularly convenient because it reduces the calculation of +AZ to the following: X 1 = 1 +a 11z 1 X 2 = 2 +a 21z 1 +a 22z 2... X d = d +a d1z 1 +a d2z 2 + +a generated data-points: Diagonal covariance means that points are oriented along x or y-axis: Note that the covariance matrix must be positive semidefinite (a.k.a. squared) of the one-dimensional normal distribution. Instead of specifying the full covariance matrix, popular the shape is (N,). That is, $t^TX\sim N(t^T\mu,t^T\Sigma t)$ for any $t\in\mathbb R^k$. Processes," 3rd ed., New York: McGraw-Hill, 1991. Covariance matrix of the distribution. 2. Tolerance when checking the singular values in covariance matrix. Papoulis, A., "Probability, Random Variables, and Stochastic Instead of specifying the full covariance matrix, popular If not, For rplus this distribution has to be somehow truncated at 0. You can generate them using rnorm. (average or "center") and variance (standard deviation, or "width," Suppose that you want to simulate k samples (each with N observations) from a multivariate normal distribution with a given mean vector and covariance matrix. and the steps are 1. Definition. In general it is best to use existing implementations of stuff like this - this post is just a learning exercise. covariance matrix. Such a distribution is … ., Zk) is a k-dimensional random vector with Zi, i 2f1,. As a 1 -by- d numeric matrix,...,: ] is an N-dimensional value drawn the! For random vectors otherwise, the behavior of this method is undefined backwards... Mean is a coordinate in N-dimensional space, which you can use to specify size. N ), New York: McGraw-Hill, 1991 `` number of independent draws. a bunch uniform! 
For random vectors that are not MV-N, but whose individual elements have normal distributions we 'll this. A multivariate normal distribution York: McGraw-Hill, 1991, " 3rd ed. New. Numeric matrix widely encountered distri b utions t\in\mathbb R^k $ ( 100, mean =,... The SIMNORMAL procedure supports the NUMREAL= option, which represents the location where samples are most to. Supports the NUMREAL= option, which represents the location where samples are most to. Of a default_rng ( ).These examples are extracted from open source projects generate random (... ) sample is N-dimensional, the SciPy community stuff like this - post... Distributions, specified as a 1 -by- d numeric matrix ( numpy.ctypeslib ), Optionally SciPy-accelerated routines numpy.dual. To 0, i.e: © Copyright 2008-2020, the behavior of this method is undefined and backwards compatibility not. Use this option to generate multiple samples of size N from a multivariate normal distribution this... N-Dimensional, the SciPy community numeric matrix 's built-in random functions ) a definition of a (... Code returned a matrix with two columns, whereby each of these columns represents one of the variance-covariance.... Video shows how to use existing implementations of stuff like this - this post is just a learning.... ; please see the Quick start be symmetric and positive-semidefinite for proper sampling you want a multivariate normal into (., of shape size, if that was provided mean = 3 sd... Matrix element is the number of realizations, '' which is the number of independent draws. space. Vector element has a univariate normal distribution is probably true, given that 0.6 roughly! Following is probably true, given that 0.6 is roughly twice the standard deviation: Copyright! Derives mainly from the multivariate central limit theorem positive-semidefinite for proper sampling mvrnorm. Not guaranteed 's built-in random functions ), a single ( N-D ) sample is.! 
Vectors of correlated variables, where each vector element has a univariate normal distribution, any linear combination of X. Bell curve for the one-dimensional normal distribution? same multivariate normal is the of..., given that 0.6 is roughly twice the standard deviation: © Copyright 2008-2020, the behavior of this is... A random sample from a multivariate normal distributions post is just a learning exercise symmetric and positive-semidefinite proper! If mu is a k-dimensional random vector with Zi, i 2f1, N ( t^T\mu, t^T\Sigma t $! Independent draws. setting the parameter mean to … Splitting multivariate normal data using the language 's random. Most widely encountered distri b utions m -by- d numeric vector or an m -by- d vector... Learning exercise a SAS customer asks: how do i use SAS to generate a bunch of uniform numbers! Think that any set of normal random variables, and Stochastic Processes, " 3rd,! To use numpy.random.multivariate_normal ( ) instance instead ; please see the Quick start considered,... Single ( N-D ) sample is returned, i.e the simulated sample ( N-D ) sample is N-dimensional, shape... When considered together, form a multivariate normal into individual ( correlated ) components for the one-dimensional normal.. Holding the given mean and covariance parameters, returning a " Frozen " multivariate normal, multinormal or Gaussian,... Trailing dimension of Sigma Gaussian distribution is specified by its mean and covariance matrix calculus, change! Want a multivariate normal distribution matrix element is the most important distribution in multivariate statistics try mvrnorm the! In other words, each entry out [ i, j,,! Have normal distributions t\in\mathbb R^k $ Zk ) is a generalization of the matrix... Limit theorem not, the behavior of this method is undefined and backwards compatibility not. Generate samples Generating multivariate normal negative values to 0, i.e to two or more variables think that set. 
100, mean = 3, sd = 2 ) for the higher dimensional you... To use numpy.random.multivariate_normal ( ).These examples are extracted from open source projects covariance matrix is not positive semidefinite Mathematical! Are 17 code examples for showing how to generate a random sample from a multivariate normal (,!, form a multivariate normal distribution to higher dimensions, '' which is the commonly known multivariate normal... Simulated sample set of normal random vectors ( mean=None, scale=1 ) object. In covariance matrix element is the covariance of and bivariate normal data can be generated produces sort! Random vectors 's how we 'll do this: 1 ( numpy.emath ) entry out [ i, j...! To construct random vectors that are not MV-N, but whose individual elements have normal distributions in various. The % MVN macro generates multivariate normal, multinormal or Gaussian distribution is specified, single., Optionally SciPy-accelerated routines ( numpy.dual ), Optionally SciPy-accelerated routines ( numpy.dual ) Optionally. Not positive semidefinite possible to construct random vectors following are 17 code examples for showing how to multiple. Create a vector, then mvnrnd replicates the vector to match the trailing dimension of Sigma want... Here 's how we 'll do this: 1 and standard deviation multivariate calculus, multivariate change of able. Start off by Generating some multivariate normal distribution, we draw N-dimensional samples, when considered together, form multivariate... How we 'll do this: 1 ; please see the Quick start following 17! ) is a generalization of the one-dimensional normal distribution we need to know the basics of matrix (. Density function and the minimal sufficient statistics for two samples from the same normal... Generate uniform random numbers a bunch of uniform random numbers and convert them a. Simulated sample and covariance matrix the R code returned a matrix with two columns whereby. Vari- able. 
combination of $ X $ has a univariate normal distribution in the rmult space the! The mean is a k-dimensional random vector with Zi, i 2f1.! Because each sample is N-dimensional, the shape is specified by its and! Has to be somehow truncated at 0 when considered together, form a multivariate.! Dimension of Sigma most likely to be somehow truncated at 0 multivariate_normal ( mean=None, scale=1 ) Frozen with... To think that any set of normal random vectors of correlated variables, and generate 100 random numbers the! Uniform random numbers numpy.dual ), Mathematical functions with automatic domain ( numpy.emath ) this method undefined... Also called Gaussian distribution is a generalization of the one-dimensional normal distribution 17 code examples showing. Generate multiple samples of size N from a multivariate normal distribution instead more variables multivariate,! Statgraphics 18 whose individual elements have normal distributions: how do i use to. Mvnrnd replicates the generate multivariate normal to match the trailing dimension of Sigma from normal distribution, New York: McGraw-Hill 1991! N from a multivariate normal distribution, is one of the bell curve for the higher case... Parameter mean to … Splitting multivariate normal, multinormal or Gaussian distribution is specified by its mean covariance... Can generate uniform random numbers and convert them into a Gaussian random numberwith a known mean and standard deviation 100., t^T\Sigma t ) $ for any $ t\in\mathbb R^k $ combination of $ X $ has a univariate distribution. `` number of realizations, '' which is the covariance matrix the dimensional... Rnorm ( 100, mean = 3, sd = 2 ) for the normal! 100 random numbers ( using the Cholesky root of the variance-covariance generate multivariate normal an m -by- d numeric vector an. Variables, and Stochastic Processes, " 3rd ed., New York: McGraw-Hill, 1991 this distribution to... 
The standard deviation: © Copyright 2008-2020, the SciPy community is returned function and minimal. Numpy.Emath ) backwards compatibility is not guaranteed checking the singular values in covariance matrix is roughly the... Mean = 3, sd = 2 ) for the one-dimensional normal to. Mu and Sigma, and Stochastic Processes, " 3rd ed., York! To 0, i.e that we can generate uniform random numbers and convert them into a Gaussian random numberwith known. To think that any set of normal random vectors not positive semidefinite the one-dimensional or normal! Variables vary together is here done by setting negative values to 0, i.e numpy.dual ), Mathematical functions automatic. Distributions we 'll start off by Generating some multivariate normal, multinormal or Gaussian is... To 0, i.e shape is ( N, k, N.! Is probably true, given that 0.6 is roughly twice the standard deviation ©! Matrix element is the covariance of and " Frozen " multivariate normal the... Method is undefined and backwards compatibility is not guaranteed supports the NUMREAL=,! Requirements: Basic vector-matrix theory, multivariate change of vari- able. Basic vector-matrix theory, change. Be symmetric and positive-semidefinite for proper sampling proper sampling use these to generate samples... " 3rd ed., New York: McGraw-Hill, 1991, multinormal or Gaussian distribution is distribution., whereby each of generate multivariate normal columns represents one of the univariate normal in! This option to generate generate multivariate normal Gaussian random vectors that are not MV-N, but whose individual elements normal! Supports the NUMREAL= option, which represents the location where samples are most to... You also need to somehow use these to generate N-dimensional Gaussian random numberwith a known mean and covariance,.
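The Cholesky recipe described above can be sketched in a few lines of NumPy. This is a minimal illustration, not a replacement for rng.multivariate_normal (which additionally handles singular covariance matrices via SVD); the function name sample_mvn and the example parameters are ours:

```python
import numpy as np

def sample_mvn(mean, cov, size, rng=None):
    """Draw `size` samples from N(mean, cov) via the Cholesky factorization.

    Equivalent in distribution to rng.multivariate_normal(mean, cov, size)
    when `cov` is symmetric positive-definite.
    """
    rng = np.random.default_rng(rng)
    mean = np.asarray(mean, dtype=float)
    L = np.linalg.cholesky(np.asarray(cov, dtype=float))  # L @ L.T == cov
    z = rng.standard_normal((size, mean.shape[0]))        # i.i.d. N(0, 1) draws
    return mean + z @ L.T                                 # x = mu + L z

mean = [0.0, 3.0]
cov = [[1.0, 0.6],
       [0.6, 2.0]]
x = sample_mvn(mean, cov, size=50_000, rng=0)
print(x.shape)                       # (50000, 2)
print(np.round(x.mean(axis=0), 1))  # close to the requested mean vector
```

With 50,000 draws, the sample mean and sample covariance should match the requested parameters to about two decimal places.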
generate multivariate normal 2021
|
CommonCrawl
|
Find the length of the leading non-repeating block in the decimal expansion of $\frac{2004}{7\times 5^{2003}}$. For example the length of the leading non-repeating block of $\frac{5}{12}=0.41\overline{6}$ is $2$.
Find $x$ satisfying $x=1+\frac{1}{x+\frac{1}{x+\cdots}}$.
Write $\sqrt[3]{2+5\sqrt{3+2\sqrt{2}}}$ in the form of $a+b\sqrt{2}$ where $a$ and $b$ are integers.
Compute $$\sum_{n=1}^{\infty}\frac{2}{n^2 + 4n +3}$$
Compute $$\sum_{k=1}^{\infty}\frac{1}{k^2 + k}$$
Compute the value of $$\sum_{n=1}^{\infty}\frac{2n+1}{n^2(n+1)^2}$$
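The three sums above all yield to partial fractions followed by telescoping, e.g. $\frac{1}{k^2+k}=\frac{1}{k}-\frac{1}{k+1}$ and $\frac{2n+1}{n^2(n+1)^2}=\frac{1}{n^2}-\frac{1}{(n+1)^2}$. A quick check in exact arithmetic (a numeric sketch of the collapse, not a full write-up):

```python
from fractions import Fraction

def partial_sum(term, n):
    # Exact partial sum of term(k) for k = 1, ..., n.
    return sum(term(k) for k in range(1, n + 1))

# 1/(k^2+k) = 1/k - 1/(k+1): partial sums telescope to 1 - 1/(n+1).
s1 = partial_sum(lambda k: Fraction(1, k * k + k), 100)
assert s1 == 1 - Fraction(1, 101)

# 2/(n^2+4n+3) = 1/(n+1) - 1/(n+3): two leading terms survive.
s2 = partial_sum(lambda n: Fraction(2, n * n + 4 * n + 3), 100)
assert s2 == Fraction(1, 2) + Fraction(1, 3) - Fraction(1, 102) - Fraction(1, 103)

# (2n+1)/(n^2(n+1)^2) = 1/n^2 - 1/(n+1)^2: telescopes to 1 - 1/(n+1)^2.
s3 = partial_sum(lambda n: Fraction(2 * n + 1, n**2 * (n + 1) ** 2), 100)
assert s3 == 1 - Fraction(1, 101**2)
print(s1, float(s2), float(s3))
```

Letting $n\to\infty$ in each telescoped partial sum then gives the limits.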
Let $(x^{2014} + x^{2016}+2)^{2015}=a_0 + a_1x+\cdots+a_nx^n$. Find the value of $$a_0-\frac{a_1}{2}-\frac{a_2}{2}+a_3-\frac{a_4}{2}-\frac{a_5}{2}+a_6-\cdots$$
Find the length of the leading non-repeating block in the decimal expansion of $\frac{2017}{3\times 5^{2016}}$. For example, the length of the leading non-repeating block of $\frac{1}{6}=0.1\overline{6}$ is 1.
Let $\alpha_n$ and $\beta_n$ be two roots of equation $x^2+(2n+1)x+n^2=0$ where $n$ is a positive integer. Evaluate the following expression $$\frac{1}{(\alpha_3+1)(\beta_3+1)}+\frac{1}{(\alpha_4+1)(\beta_4+1)}+\cdots+\frac{1}{(\alpha_{20}+1)(\beta_{20}+1)}$$
If $m^2 = m+1, n^2-n=1$ and $m\ne n$, compute $m^7 +n^7$.
The Lucas numbers $L_n$ is defined as $L_0=2$, $L_1=1$, and $L_n=L_{n-1}+L_{n-2}$ for $n\ge 2$. Let $r=0.21347\dots$, whose digits are Lucas numbers. When numbers are multiple digits, they will "overlap", so $r=0.2134830\dots$, NOT $0.213471118\dots$. Express $r$ as a rational number $\frac{q}{p}$ where $p$ and $q$ are relatively prime.
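The "overlap" description amounts to $r=\sum_{n\ge 0} L_n/10^{n+1}$: each Lucas number is shifted one more decimal place, so multi-digit terms overlap and carry. A short sanity check of just the stated digit behavior (it does not derive the fraction $\frac{q}{p}$):

```python
def lucas(n_terms):
    # L_0 = 2, L_1 = 1, L_n = L_{n-1} + L_{n-2}
    seq = [2, 1]
    while len(seq) < n_terms:
        seq.append(seq[-1] + seq[-2])
    return seq

# r = sum of L_n / 10^(n+1); later terms are multi-digit, so they overlap.
r = sum(L / 10 ** (n + 1) for n, L in enumerate(lucas(25)))
print(f"{r:.5f}")  # 0.21348 -- the overlapped digits, not 0.21347...
```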
Let $a_1=a_2=1$ and $a_{n}=(a_{n-1}^2+2)/a_{n-2}$ for $n=3, 4, \cdots$. Show that $a_n$ is an integer for $n=3, 4, \cdots$.
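Before proving anything, the claim can be tested numerically; the script below uses exact rational arithmetic so that integrality is not masked by floating-point rounding:

```python
from fractions import Fraction

# a_1 = a_2 = 1, a_n = (a_{n-1}^2 + 2) / a_{n-2}
a = [Fraction(1), Fraction(1)]
for _ in range(10):
    a.append((a[-1] ** 2 + 2) / a[-2])

print([int(x) for x in a])  # 1, 1, 3, 11, 41, 153, 571, ...
assert all(x.denominator == 1 for x in a)
```

The printed terms also appear to satisfy $a_n = 4a_{n-1}-a_{n-2}$, an observation which is the usual route to the integrality proof.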
Suppose $\alpha$ and $\beta$ be two real roots of $x^2-px+q=0$ where $p$ and $q\ne 0$ are two real numbers. Let sequence $\{a_n\}$ satisfies $a_1=p$, $a_2=p^2-q$, and $a_n=pa_{n-1}-qa_{n-2}$ for $n > 2$.
Express $a_n$ using $\alpha$ and $\beta$.
If $p=1$ and $q=\frac{1}{4}$, find the sum of first $n$ terms of $\{a_n\}$.
Compute $$S_n=\frac{2}{2}+\frac{3}{2^2}+\frac{4}{2^3}+\cdots+\frac{n+1}{2^n}$$
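This is an arithmetico-geometric sum: subtracting $\frac{1}{2}S_n$ from $S_n$ collapses it to a geometric series, which suggests the closed form $3-\frac{n+3}{2^n}$. A check of that candidate in exact arithmetic:

```python
from fractions import Fraction

def S(n):
    # Partial sum (k+1)/2^k for k = 1, ..., n.
    return sum(Fraction(k + 1, 2**k) for k in range(1, n + 1))

# Candidate closed form obtained from the shift-and-subtract trick.
for n in range(1, 20):
    assert S(n) == 3 - Fraction(n + 3, 2**n)
print(S(5))  # 11/4
```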
In a sports contest, there were $m$ medals awarded on $n$ successive days ($n > 1$). On the first day, one medal and $1/7$ of the remaining $m − 1$ medals were awarded. On the second day, two medals and $1/7$ of the now remaining medals were awarded; and so on. On the $n^{th}$ and last day, the remaining $n$ medals were awarded. How many days did the contest last, and how many medals were awarded altogether?
Suppose sequence $\{a_n\}$ satisfies $a_1=0$, $a_2=1$, $a_3=9$, and $S_n^2S_{n-2}=10S_{n-1}^3$ for $n > 3$ where $S_n$ is the sum of the first $n$ terms of this sequence. Find $a_n$ when $n\ge 3$.
Find an expression for $x_n$ if sequence $\{x_n\}$ satisfies $x_1=2$, $x_2=3$, and $$ \left\{ \begin{array}{ccll} x_{2k+1}&=&x_{2k} +x_{2k-1}&\quad (k\ge 1)\\ x_{2k}&=&x_{2k-1} + 2x_{2k-2}&\quad (k\ge 2) \end{array} \right. $$
Is it possible for a geometric sequence to contain three distinct prime numbers?
Is it possible to construct 12 geometric sequences to contain all the prime between 1 and 100?
Let $S_n$ be the sum of first $n$ terms of an arithmetic sequence. If $S_n=30$ and $S_{2n}=100$, compute $S_{3n}$.
Let $d\ne 0$ be the common difference of an arithmetic sequence $\{a_n\}$, and positive rational number $q < 1$ be the common ratio of a geometric sequence $\{b_n\}$. If $a_1=d$, $b_1=d^2$, and $\frac{a_1^2+a_2^2+a_3^2}{b_1+b_2+b_3}$ is a positive integer, what is the value of $q$?
Let $S_n$ be the sum of the first $n$ terms in geometric sequence $\{a_n\}$. If all $a_n$ are real numbers and $S_{10}=10$, and $S_{30}=70$, compute $S_{40}$.
Expanding $$\Big(\sqrt{x}+\frac{1}{2\sqrt[4]{x}}\Big)^n$$ and arranging all the terms in descending order of $x$'s power, if the coefficients of the first three terms form an arithmetic sequence, how many terms with integer power of $x$ are there?
Suppose sequence $\{F_n\}$ is defined as $$F_n=\frac{1}{\sqrt{5}}\Big[\Big(\frac{1+\sqrt{5}}{2}\Big)^n-\Big(\frac{1-\sqrt{5}}{2}\Big)^n\Big]$$ for all $n\in\mathbb{N}$. Let $$S_n=C_n^1\cdot F_1 + C_n^2\cdot F_2+\cdots +C_n^n\cdot F_n.$$ Find all positive integer $n$ such that $S_n$ is divisible by 8.
Solve $\{L_n\}$ which is defined as $L_1=1, L_2=3$ and $L_{n+1}=L_{n}+L_{n-1}$ for $n = 2, 3, 4, \cdots$
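The characteristic-root method suggests the candidate closed form $\varphi^n+\psi^n$ with $\varphi,\psi=\frac{1\pm\sqrt5}{2}$ (both roots of $x^2=x+1$). A floating-point check against the recurrence, with rounding to absorb the tiny numerical error:

```python
phi = (1 + 5 ** 0.5) / 2
psi = (1 - 5 ** 0.5) / 2

# Recurrence: first term 1, second term 3, each later term the sum of the two before.
seq = [1, 3]
for _ in range(20):
    seq.append(seq[-1] + seq[-2])

closed = [round(phi ** n + psi ** n) for n in range(1, len(seq) + 1)]
print(seq[:6])     # [1, 3, 4, 7, 11, 18]
assert seq == closed
```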
© 2009 - 2022 Math All Star
|
CommonCrawl
|
Should we teach abstract affine spaces?
In France at least, there is quite an ancient tradition of teaching abstract affine spaces (e.g. as a triple $(\mathcal{E}, E, -)$ where $\mathcal{E}$ is a set, $E$ is a vector space and $-:\mathcal{E}\times\mathcal{E}\to E$ is a binary operation with the adequate properties) which somewhat continues.
I liked that kind of approach as an undergrad, but I more and more feel it is artificial, and that we should restrict to studying affine subspaces of vector spaces.
Edit: to be more precise, I am not against explaining that an affine space is like a vector space without an origin, on the contrary; but my point is that such concepts can be explained by sticking to affine subspaces of vector spaces (the vector space itself being an affine subspace, and the origin losing its meaning in that structure).
My first question is:
What are some arguments in favor of teaching abstract affine spaces ?
To explain my reluctance further, let me say I am turning more and more into an example-based mathematician and teacher; I am thus driven away from abstract affine spaces by the fact that I do not have a good answer to my second question:
What is an example of a "natural" affine space, which is not "naturally" an affine subspace of a vector space?
undergraduate-education geometry
Benoît Kloeckner
Benoît KloecknerBenoît Kloeckner
$\begingroup$ May I ask why you're becoming more of an example based teacher? $\endgroup$ – Andrew Sanfratello Apr 2 '15 at 17:43
$\begingroup$ I don't know whether 'natural' means "… to everyone" or "… to someone", but apartments (in the theory of buildings of reductive groups) are naturally affine spaces (under a real-ised character lattice) that do not arise naturally as affine subspaces of vector spaces. encyclopediaofmath.org/index.php/…. $\endgroup$ – LSpice Apr 2 '15 at 22:53
$\begingroup$ @DagOskarMadsen, I think that Benoît Kloeckner means the binary operation to be not 'addition' but 'subtraction': to say that the image of $(a, b)$ is $x$ means that $a + x = b$. $\endgroup$ – LSpice Apr 2 '15 at 22:54
$\begingroup$ I don't know whether this counts as an argument in favor, but I was not taught abstract affine spaces as an undergraduate, and when I first learned about them I felt like "oh!! Why did no one ever tell me about those before?" $\endgroup$ – Mike Shulman Apr 3 '15 at 5:06
$\begingroup$ It might be worth comparing to the notion of studying a manifold rather than studying shapes in Euclidean space. $\endgroup$ – user797 Apr 3 '15 at 12:19
As far as the first question is concerned I fully agree with Steven Gubkin's answer : the reason why abstract affine spaces are nice is because they match the intuition coming from elementary Euclidean geometry. The Euclidean plane is clearly not a vector space and clearly not naturally an affine subspace of a vector space.
One can also argue that undergrads should learn about affine spaces because they are the natural place where they should learn multi-variate calculus. In those spaces you can write $f(x+h) = f(x) + Df(x)(h) + o(h)$ with $x$ and $h$ having a very different nature (as they should have, and you certainly do not draw them on the board as two arrows attached at the origin of a vector space).
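That difference of nature between points and displacements can even be made executable. A minimal Python sketch (the Point/Vector classes are hypothetical, purely for illustration): points subtract to give vectors, a vector translates a point, but adding two points is a type error, because without an origin a sum of points is meaningless:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Vector:
    # An element of the direction space E; vectors may be added.
    dx: float
    dy: float
    def __add__(self, other: "Vector") -> "Vector":
        return Vector(self.dx + other.dx, self.dy + other.dy)

@dataclass(frozen=True)
class Point:
    # An element of the affine space: no addition of points, no origin.
    x: float
    y: float
    def __sub__(self, other: "Point") -> Vector:
        # P - Q is the unique vector translating Q to P.
        return Vector(self.x - other.x, self.y - other.y)
    def __add__(self, v: Vector) -> "Point":
        # P + v: translate a point by a vector (the only allowed "addition").
        if not isinstance(v, Vector):
            raise TypeError("can only translate a point by a vector")
        return Point(self.x + v.dx, self.y + v.dy)

p, q = Point(1.0, 2.0), Point(4.0, 3.0)
h = q - p            # a Vector, not a Point
assert p + h == q    # the defining identity of the affine action
# p + q raises TypeError: no origin, hence no sum of points.
```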
About the second question, Andreas Blass's answer is of course the first one that comes to mind to differential geometers. But it obscures somehow its elementary nature by putting it into a differential setting.
Let $V$ be a linear space and $E$ a subspace of $V$. Let $S(E)$ be the space of linear subspaces $F$ of $V$ such that $V = E \oplus F$. Then $S(E)$ is naturally an affine space directed by the vector space $L(V/E, E)$ of linear maps from $V/E$ to $E$. If $\pi$ denotes the projection from $V$ to $V/E$ then the action is defined by $u \cdot F = \{f + u(\pi(f)); f \in F\}$.
The most famous example is when $E$ is a hyperplane. Then $S(E)$ is the complement in the projective space $P(V)$ of the projective hyperplane $P(E)$. That's why such complement are called "affine charts": they have a canonical (functorial) affine space structure.
This example also underlies the connection example as soon as you think in terms of Ehresmann connections.
Patrick MassotPatrick Massot
$\begingroup$ Your point about the derivative is excellent. I kept running into this problem in an online multivariable calculus class I wrote (ximera.osu.edu/course/kisonecat/m2o2c2/course/), but I did not think of the affine space formalism. I should probably go back and make a little more explicit in the presentation. I was always thinking ``tangent bundle with flat connection'' in my head, but affine space seems like a better choice at this level. $\endgroup$ – Steven Gubkin Apr 8 '15 at 21:28
The connections on a smooth vector bundle over a smooth manifold form an affine space with no canonical element to serve as $0$ for a vector-space structure.
Andreas BlassAndreas Blass
$\begingroup$ Nice one. This both answers my second question, and kind of makes my point as far as undergrad education is concerned. $\endgroup$ – Benoît Kloeckner Apr 2 '15 at 20:38
$\begingroup$ @BenoîtKloeckner, an honest question that is not meant to be rude: what is your point about undergraduate education? Is it the general point that undergraduate education focusses too much on abstract theories at the expense of understanding of examples, or the specific point that there are no 'natural' undergraduate examples of affine spaces? If the latter, then surely the existence of one *non-*undergraduate example does not imply the *non-*existence of an undergraduate example! $\endgroup$ – LSpice Apr 2 '15 at 22:56
$\begingroup$ @LSpice: sure, there is no such logical implication. But that the first really satisfying answer to my second question I've seen is so high-level is an evidence of my point, which is that affine subspaces of vector spaces cover most if not all undergrad examples (from which I tend to conclude that teaching abstract affine spaces might be uncalled for). $\endgroup$ – Benoît Kloeckner Apr 3 '15 at 8:01
I am sure you know this, but the Euclidean plane is a prime example.
The Euclidean plane starts as just a collection of points together with a group of isometries. There is no natural origin. However, there is a natural associated vector space consisting of equivalence classes of pairs of points. Identify $(p_1,q_1)$ with $(p_2,q_2)$ if there is a translation of the plane taking $p_1$ to $p_2$ and $q_1$ to $q_2$. These are the "vectors as floating line segments" we learn about in high school.
This turns the Euclidean plane into an affine space via $p-q = [(p,q)]$.
I am not sure that you will get too many really "exciting" examples, since as soon as you pick an origin you have a vector space. There is no natural origin to Euclidean space.
Another example is time. Assuming that it has no beginning or end, there is no natural origin. It is an affine space for the vector space $\mathbb{R}$. Of course, as soon as we pick an origin (like the death of a certain historical figure) it becomes a vector space. But without human interference it is more naturally an affine space.
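This "points differ by vectors" structure is easy to make concrete in code. Below is a minimal sketch for a one-dimensional affine space such as time; the class names are invented for illustration:

```python
# Minimal model of a 1D affine space (e.g. time):
# Point - Point -> Vector, Point + Vector -> Point,
# but Point + Point and 2 * Point are deliberately undefined.
class Vector:
    def __init__(self, d):
        self.d = d

    def __add__(self, other):       # vectors form a vector space ...
        return Vector(self.d + other.d)

    def __rmul__(self, s):          # ... with scalar multiplication
        return Vector(s * self.d)


class Point:
    def __init__(self, x):
        self.x = x

    def __sub__(self, other):       # difference of two points is a vector
        return Vector(self.x - other.x)

    def __add__(self, v):           # translating a point by a vector
        return Point(self.x + v.d)


noon = Point(12.0)
tea = Point(17.0)
gap = tea - noon        # a duration: a genuine vector
later = noon + 2 * gap  # scaling durations is fine; scaling "noon" is not
```

Adding two `Point`s, or multiplying a `Point` by a scalar, simply raises an error here, which is exactly the bookkeeping the affine viewpoint enforces.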
Steven Gubkin
$\begingroup$ Ok, you have a point, notably if one teaches geometry based on geometric axioms. $\endgroup$ – Benoît Kloeckner Apr 2 '15 at 20:44
$\begingroup$ @BenoîtKloeckner, without meaning to speak for Steven Gubkin, I think that this point of view (with which I agree) means to emphasise that, although $\mathbb R^2$ and (what I would call) $\mathbb E^2$ are isomorphic (as affine spaces), they are not equal (or even naturally isomorphic). Thus, for example, I live (at least locally) in $\mathbb E^3$, but not in $\mathbb R^3$, because every point has an equally good claim to being $(0, 0, 0)$. $\endgroup$ – LSpice Apr 2 '15 at 23:00
$\begingroup$ @LSpice I give you permission to speak for me in this matter. That is exactly what I am trying to say. $\endgroup$ – Steven Gubkin Apr 2 '15 at 23:21
$\begingroup$ That being said, I consider it a valid point that when it comes to plane and 3-space, the abstract point of view can be made reasonably motivated. $\endgroup$ – Benoît Kloeckner Apr 3 '15 at 7:55
$\begingroup$ @BenoîtKloeckner ("I agree with that …"): by analogy, for me, $\mathbb Z/n\mathbb Z$ is a ring, whose underlying group I call $C_n$, because I think that using the same symbol for both encourages one psychologically to equip the group with more structure than it already has. Similarly, even though there is definitely a close relationship between $\mathbb R^n$ and $\mathbb E^n$, I think that using the former notation for both encourages us to think that $\mathbb E^n$ 'really' has an origin, but that we are just being coy and pretending that we don't see it. $\endgroup$ – LSpice Apr 15 '15 at 20:08
The set of solutions to a nonhomogeneous linear differential equation form an affine space. (The underlying vector space is the set of solutions to the associated homogeneous equation.)
Every time I teach differential equations I wish the students had already learned about affine spaces.
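The affine structure can even be checked numerically. The sketch below uses the toy equation $y' + y = 1$ (my choice for illustration, not from the answer) and verifies that the difference of two solutions solves the associated homogeneous equation:

```python
import math

# Toy check for y' + y = 1: its general solution is y(t) = 1 + c*exp(-t),
# and the difference of any two solutions solves y' + y = 0,
# so the solution set is an affine space directed by ker(L).
def solution(c):
    return lambda t: 1.0 + c * math.exp(-t)

def residual(y, rhs, t, h=1e-6):
    # y'(t) + y(t) - rhs, with the derivative by central differences
    return (y(t + h) - y(t - h)) / (2.0 * h) + y(t) - rhs

y1, y2 = solution(2.0), solution(-0.5)
diff = lambda t: y1(t) - y2(t)

print(abs(residual(y1, 1.0, 0.3)) < 1e-6)    # True: solves the inhomogeneous eq.
print(abs(residual(y2, 1.0, 0.3)) < 1e-6)    # True
print(abs(residual(diff, 0.0, 0.3)) < 1e-6)  # True: difference is homogeneous
```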
Alexander Woo
$\begingroup$ True, but this is perfectly covered by teaching affine subspaces of vector spaces: here the set of solutions is an affine subspace of the vector space of $C^k$ functions, where $k$ is the order of the equation. I would not confuse the direction of an affine subspace with the "underlying" vector space that "naturally" contains the affine space being considered. $\endgroup$ – Benoît Kloeckner Apr 3 '15 at 7:58
$\begingroup$ The space of $C^k$ functions is so large as to be practically useless for intuition in this situation. $\endgroup$ – Alexander Woo Apr 3 '15 at 18:20
$\begingroup$ I have barely seen any geometrical intuition related to the affine structure of the set of solutions of linear differential equations. It seems more about the algebraic aspects of it, and then the addition and scalar multiplication in $C^k$ are pretty clear. And again, I agree that the direction (aka the set of solutions of the associated homogeneous equation) is of prime importance, but this example still fits perfectly into the "affine subspaces of vector spaces" point of view. $\endgroup$ – Benoît Kloeckner Apr 3 '15 at 21:02
In a vector space, you have an origin, addition of vectors, and scalar multiplication. Using a vector space structure to study something conveys the impression that the origin, addition, and scalar multiplication are actually meaningful things about the object of study.
But sometimes that simply isn't true. And when it isn't, it's handy to have a way to apply linear algebra without giving the misleading impression that certain things are meaningful when they are not.
"Forgetting" information is actually a pretty important thing in mathematics. For example, the whole field of differential geometry got started as a way to study what properties of a shape were intrinsic to a shape, and what properties were accidents of how they were drawn in Euclidean space.
$\begingroup$ I agree with this, but it can be dealt with by defining an affine subspace of a vector space, observing that a vector space is an affine subspace of itself, and that seeing it that way consists precisely in forgetting the origin. No real need to define abstract affine spaces. $\endgroup$ – Benoît Kloeckner Apr 3 '15 at 12:47
Temperatures live in (part of) an affine space of dimension 1, and there seems to be some disagreement about what "the" origin should be...
$\begingroup$ This is not quite true, since there is an absolute zero. Likewise, my example of time is not quite right since there is a big bang. It would be nice to think of a natural one dimensional quantity which does not have this limitation. $\endgroup$ – Steven Gubkin Apr 2 '15 at 22:38
$\begingroup$ Although @StevenGubkin is right that temperature isn't a perfect example, I think that time very nearly is (although one might claim that Big Bang-type universes have an "absolute 0 time"). $\endgroup$ – LSpice Apr 2 '15 at 22:50
$\begingroup$ Oops, sorry, @StevenGubkin already made just this point at matheducators.stackexchange.com/a/7781/2070 (and I just missed the edit window for my previous comment). $\endgroup$ – LSpice Apr 2 '15 at 22:58
$\begingroup$ It is true that there is an absolute zero, but 99.99 percent of the world population seems to ignore this fact and the Farenheit-Celsius battle is not going to end any time soon. $\endgroup$ – user4990 Apr 3 '15 at 1:26
$\begingroup$ @StevenGubkin a fun, and weird tidbit from thermodynamics perhaps brings your simple analogy new mathematical breadth, however, at the cost of physically familiar material. It is true that temperatures are in bijective correspondence with $\mathbb{R}$; see en.wikipedia.org/wiki/Negative_temperature brought to you from the bizarre world of statistical mechanics. Probably 99.9999 percent of the world ignores this. $\endgroup$ – James S. Cook Apr 9 '15 at 19:31
One answer is to regard this question as an instance of a more general question about whether to build embedding theorems into the foundations of a subject. Should we define abstract manifolds, or only submanifolds of Euclidean space? Should we define abstract groups, or only subgroups of permutation groups? I think in all cases there is something to be gained by making the abstract definition and then proving the embedding theorem, because it makes clear what aspects of a notion are "intrinsic" and independent of a chosen embedding.
As mentioned in other answers, time is an excellent 1D example, and of course space is a higher-dimensional one. In your comments on these answers you say that since we always measure these quantities with numbers, it makes your point about using affine subspaces of vector spaces. But I would argue that these examples nevertheless do answer your second question, because the choice of numbers with which to measure them is not natural but rather arbitrary.
Mike Shulman
This is really another viewpoint for answers already given. In statistics and data analysis, whenever you have a quantity which you can measure, and for which (arithmetic) means are meaningful, but addition is not, then those quantities form (at least part of) some affine space. Important examples are already mentioned: temperature and time. And this also answers the objection about absolute zero or the big-bang origin of time: for most mundane purposes where we use statistics, those origins are rather beside the point, and the affineness of the spaces presents itself as the fact that averages are meaningful, but sums are not. That point has much larger significance than those far-away origins!
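A quick way to see the "averages yes, sums no" point is that averages commute with affine changes of unit while sums do not. A small sketch (the temperature readings are made up):

```python
# Averaging commutes with the affine map Celsius -> Fahrenheit,
# so the mean is meaningful in the affine structure; the sum is not.
def c_to_f(c):
    return 9.0 / 5.0 * c + 32.0   # affine, not linear (note the +32 offset)

readings_c = [18.0, 21.0, 24.0]   # made-up temperature readings
mean_c = sum(readings_c) / len(readings_c)
mean_f = sum(c_to_f(r) for r in readings_c) / len(readings_c)

# Averaging then converting equals converting then averaging:
print(abs(c_to_f(mean_c) - mean_f) < 1e-12)                           # True
# Summing does not survive the change of unit:
print(c_to_f(sum(readings_c)) == sum(c_to_f(r) for r in readings_c))  # False
```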
kjetil b halvorsen
$\begingroup$ I am all in for teaching and using affine properties, maps, ideas and so on. How does this call for abstract affine spaces instead of mere affine subspaces of vector spaces? $\endgroup$ – Benoît Kloeckner Apr 3 '15 at 20:58
$\begingroup$ It seems to me that the answer explains perfectly why: in an affine space you can take barycenters, but not multiply by a scalar ("it's twice as hot today as it was yesterday" has no meaning). So this is a different concept than a vector space and it is good to be aware of its existence. Of course one can always add a dimension and view an abstract affine space as a translate of a vector space in a vector space, but this makes the concept less clear. $\endgroup$ – user4990 Apr 4 '15 at 21:49
$\begingroup$ I still fail to see how this kind of fact, e.g. that sometimes averages make sense but not addition, is better expressed at undergrad level by defining abstract affine spaces than by defining affine subspaces of vector spaces (no need to add a dimension: a vector space is an affine subspace of itself, and this corresponds to forgetting about its origin). My question is not whether affine spaces should be used, I am sure they should; it's about the "set acted upon by a vector space in a certain way" versus "subset of a vector space with certain properties" points of view. $\endgroup$ – Benoît Kloeckner Apr 5 '15 at 14:53
$\begingroup$ In fact, the examples of time and temperature are rather good to explain why going with affine subspaces of vector space is a good thing: we always measure time and temperature by numbers, i.e. we do represent them in $\mathbb{R}$; so explaining why forgetting about the vector structure (the $0$) and looking at objects that still make sense in the mere affine structure (e.g. arithmetic mean) in this case seems mandatory to me. $\endgroup$ – Benoît Kloeckner Apr 5 '15 at 14:57
$\begingroup$ Thinking more about this, I think you are right. In practice, we calculate the mean by a formula such as $\frac1n\sum x_i$, which is using the non-meaningful addition! So for the calculation, we need the vector space structure, we just must assure us that we only use it do calculate quantities which have meaning in the affine structure. $\endgroup$ – kjetil b halvorsen Apr 5 '15 at 19:53
My opinion is, that no, you shouldn't teach that to undergrads in your mathematics course. The main reason for this is that it makes simple things complicated.
I agree that the affine $d$-dimensional real space and the linear $d$-dimensional real space are different, and also different from $\mathbb{R}^d$. However, introducing these three (or any two of them, for that matter) as different mathematical concepts is unnecessary for any calculus result you present, and also unnecessary for any exercises they're going to work on in the problem sessions.
If any undergrad course should explain the differences, it should be the one that needs it, which is the Mechanics course. It's also much easier for the students to grasp the idea there, because it makes sense in mechanics that position $0$ can be anywhere, whereas distance $0$ or velocity $0$ has a distinguished meaning.
As a mathematician, you can point this out at an appropriate place in the course; the students should simply realize that it corresponds to their physics experience, but shouldn't be bothered by you not distinguishing the two concepts.
As a side note: I was taught the "complicated" way, but when I got this question during my state exams, I answered it the "simple" way. Two of the committee members then thanked me for making it easy and nice.
yo'
I would like to give an answer, which is not completely my own but stems from other answers given here. It still seems worth writing because I don't think it appears explicitly.
One compelling reason to teach abstract affine spaces in one form or another, or at least affine planes, is to be able to do basic non-analytic geometry (e.g. study triangles and quadrangles in a light formalism). More precisely, in undergrad education we have to teach geometry to future high-school teachers, and we have to do so in a way that relates to what they will have to teach; and they won't be able to say to the pupils "just consider $\mathbb{R}^n$"! I don't think we can do that while avoiding abstract affine spaces completely. Now, there is a subsequent question that I will ask separately: how do we teach abstract affine spaces in a simple way that clearly relates to high-school geometry?
Finally I would like to thank all users who answered or commented. Even if I have often commented back not so positively, I was partly playing the devil's advocate, and all other answers have made me think about this issue.
The point of affine spaces is that they deal with "free vectors" as encountered in Dynamics to represent forces. But that they are worth the effort is indeed debatable.
Similarly, physicists have no trouble with the Dirac and Heaviside "functions" and the fact that these are distributions à la Schwartz does not seem to add to their comfort. Which is not to say that the latter are not beautiful. They are.
But, somehow, affine spaces seem less so to me.
schremmer
$\begingroup$ Are free vectors used to represent forces, O RLY? $\endgroup$ – Incnis Mrsi Apr 21 '15 at 8:25
Are all elementary interactions arising from a gauge theory?
The standard model of particle physics is based on the gauge group $U(1) \times SU(2) \times SU(3)$ and describes all well-known physical interactions, with the exception that gravity isn't included. And even gravity (in classical theory) has a gauge group: local translations in spacetime.
Why is every fundamental interaction that we can observe written in terms of a gauge-invariant Lagrangian?
What if an experiment were to provide evidence of a new fundamental interaction between particles or particle systems; would it also be modelled with a gauge theory?
Another interesting example is the Kaluza-Klein theory: here, General Relativity is extended to 5 dimensions with one dimension compactified to a circle; the result, surprisingly, is Einstein's General Relativity (for gravity) + electromagnetism. Later there were physicists who tried to define gauge theories by extending General Relativity to more dimensions and computing the Ricci scalar (historically, this procedure was used before nonabelian gauge theories were found).
What is the mathematical background behind the Kaluza-Klein theory?
Can a program similar to Kaluza and Klein help someone to define a more general gauge theory than Yang-Mills nonabelian gauge theories?
Or is the Yang-Mills theory (I am assuming no supersymmetry!) the most general gauge theory?
general-relativity standard-model gauge-theory theory-of-everything kaluza-klein
kryomaxim
$\begingroup$ Your first question is almost tautology: if there is a gauge theory, then the Lagrangian has to be gauge-invariant since gauge fields related by gauge transformations are redundant labeling of the same thing. So the right question, is perhaps why fundamental interactions are all modeled by gauge theories. The answer is no, the interaction between the Higgs field and other matter fields is Yukawa-like. $\endgroup$ – Meng Cheng Jun 12 '15 at 3:11
$\begingroup$ I know that there are also interactions that do not arise from a gauge field, but what we call the fundamental interactions of nature (electromagnetism, strong force, weak force and gravity) are all gauge theories. What's the origin of gauge fields in describing a fundamental force of nature? $\endgroup$ – kryomaxim Jun 12 '15 at 13:51
We use gauge theories because they - as an experimental fact - describe the world correctly. Asking why that which we use to describe the world describes the world is not a meaningful physics question.
Since hitherto gauge theories have been amazingly successful in describing fundamental interactions, we would of course look for a gauge description of a new phenomenon. But of course it could be that some new phenomena aren't described by such theories. This is a general feature of physics: "Could it be that there is something as of yet unknown that is not modelled by this?" is trivially answered with "Yes." in all cases, and this has nothing to do with gauge theory.
The dynamical object of Kaluza-Klein theory is the metric - it is a theory of gravity in five dimensions. When we compactify one of the space-like dimensions on a small circle, the parts of the metric in that compactified dimension begin to look like the four-potential of electrodynamics and an additional scalar field. It is related to our usual gauge theories by observing that the 5D Kaluza-Klein manifold is a $\mathrm{U}(1)$-principal bundle over a 4D spacetime, explaining why a 4D gauge theory might be expected to come out of it.
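For concreteness, one common form of the Kaluza-Klein ansatz is sketched below; conventions (in particular Weyl rescalings of the 4D metric) vary between references, so take this as schematic:

```latex
\hat{g}_{MN} =
\begin{pmatrix}
  g_{\mu\nu} + \phi^{2} A_{\mu} A_{\nu} & \phi^{2} A_{\mu} \\
  \phi^{2} A_{\nu}                      & \phi^{2}
\end{pmatrix},
\qquad M,N = 0,\dots,4, \quad \mu,\nu = 0,\dots,3 .
```

Here $g_{\mu\nu}$ is the 4D metric, $A_\mu$ plays the role of the electromagnetic four-potential and $\phi$ is the additional scalar field. Reparametrizations of the circle coordinate, $x^4 \to x^4 + \lambda(x^\mu)$, act on $A_\mu$ as $A_\mu \to A_\mu - \partial_\mu \lambda$, which is exactly a $\mathrm{U}(1)$ gauge transformation.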
A "program similar to Kaluza-Klein" exists, it is called string theory, and the compactification processes that are used there to obtain 4D theories from the 10D superstring theories are, in essence, nothing more than higher-dimensional versions of the circle compactification of Kaluza-Klein.
There are gauge theories which are not Yang-Mills theories. For instance, the action of Chern-Simons theories is a topological action which is not the usual Yang-Mills action. More generally and from an entirely different point of view, one might call a gauge theory any description of a system which has unphysical degrees of freedom in the phase space - commonly called a constrained Hamiltonian system - where the constraint algebra closes on the constraint surface to form a Lie algebra, which is then the algebra of the gauge group.
ACuriousMind♦
Magnetic moment quenching in small Pd clusters in solution
Regular Article – Clusters and Nanostructures
Sebastian Hammon1,
Linn Leppert2,3 &
Stephan Kümmel ORCID: orcid.org/0000-0001-5914-66351
The European Physical Journal D volume 75, Article number: 309 (2021) Cite this article
Small palladium clusters in vacuum show pronounced magnetic moments. With the help of Born–Oppenheimer molecular dynamics simulations based on density functional theory, we investigate for the paradigmatic examples of the Pd\(_{13}\) and the Pd\(_8\) cluster whether these magnetic moments prevail when the clusters are solvated. Our results show that the interaction with acetophenone quenches the magnetic moment. The reduction of the magnetic moment is a direct consequence of the electronic interaction between the Pd clusters and the solvent molecules, and not an indirect effect due to a different cluster geometry being stabilized by the solvation shell.
Metal clusters have fascinated researchers for decades because they mark the transition from the molecular to the bulk regime, are at the heart of many nano-materials, and can show distinct properties that differ from the corresponding bulk material. Their unique properties have been explored for fundamental reasons [1, 2], but also for their reactivity [3] and related applications, e. g., in catalysis [4,5,6,7,8,9,10,11] and photocatalysis [12,13,14,15,16,17,18,19]. Clusters of transition metals are of interest also for their magnetic properties [20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35], because small clusters can show magnetic moments per atom that are substantially larger than the ones in the corresponding bulk materials.
In this paper we study the magnetic moment of solvated palladium clusters. Our motivation for this is twofold and related to two of the above-mentioned possible practical uses of transition metal clusters. On the one hand, isolated small Pd clusters show large magnetic moments [20,21,22,23, 25, 27, 28, 32,33,34,35]. For potential magnetic applications, it is of interest to know whether the magnetic moment prevails when the clusters are not in vacuum, but in interaction with surrounding media. This question is of interest also for a second reason. Pd nanoparticles and multimetallic clusters containing Pd have been demonstrated to be efficient in catalysis [29, 30, 36] and particularly noteworthy, excel in cross-coupling reactions [4, 37,38,39,40,41], as reviewed, e. g., in Refs. [42, 43]. In several such experiments, the metal particles are embedded in some support, e. g., a metal organic framework, and the reaction takes place in solution. The question has been raised what really is the active species in such experiments [39, 44,45,46,47,48,49,50]: Is it the surface of the metal nanoparticle, or are very small clusters, e. g. of Pd, leaching out of the larger particle, i. e., go into solution, and form the catalytically active centers? For a review on this topic, see Ref. [51]. If the leached out particles retain a strong magnetic moment in solution, one might be able to experimentally answer the question whether leached particles contribute importantly to catalysis by detecting the magnetic moments and check for relations between magnetic and catalytic activity.
The phenomenon of leaching has been discussed for different solvents, e.g., water [50, 52,53,54] and organic solvents [44,45,46]. An accurate description of water with standard exchange-correlation (xc) approximations is a well-known challenge for density functional theory (DFT) [55]. Given the presence of d-electrons in the Pd clusters and the question of the reliability of the magnetic moment prediction of DFT in this context [56], studying the solvation of Pd clusters in water would be a task of considerable complexity. Our focus here is on acetophenone, which is the smallest aromatic ketone and an exemplary organic solvent whose derivatives have been used in studies of leaching and catalysis [44,45,46]. We investigated small Pd clusters in solution by computationally embedding them in an increasing number of solvent molecules and studying whether and how the magnetic moment changes as a consequence of the interaction. We find that acetophenone has a tendency to reduce the magnetic moment, i. e., a solvated Pd cluster shows a smaller magnetic moment than the same cluster in vacuum. We reach this conclusion based on extensive DFT-based Born–Oppenheimer Molecular Dynamics (MD) simulations, the general setup of which is described in Sect. 2. In Sect. 3 we discuss how cluster structures and magnetic moments change when clusters are surrounded by an increasing number of acetophenone molecules. In Sect. 4 we discuss our results, also comparing to a few select findings for the solvent acetone that are reported in "Appendix B", and we summarize our results and draw final conclusions.
Theoretical and computational methods
The general concept of our study is to start from a Pd cluster with a nonzero spin magnetic moment in its vacuum geometry. We then add solvent molecules, find the low-energy structures that the system consisting of cluster and solvent molecules has, and check which magnetic moment is realized in these structures.
This procedure is straightforward in principle, but complicated in practice. First, we do not know a priori how many solvent molecules we need to take into account for the Pd cluster to be properly solvated. The computational effort escalates rapidly because, on the one hand, the computational cost of the DFT calculations increases with the number of electrons, and on the other hand, the number of low-energy isomers grows rapidly with the number of atoms. Therefore, we build up a solvation shell in a step-by-step manner. This means that we start with one acetophenone (C\(_8\)H\(_8\)O) molecule and advance to five, and more distant parts of the solvent are approximated via a conductor-like screening model (COSMO) as detailed below.
Second, each system consisting of the cluster and several solvent molecules can be in many different geometries that are locally stable and of similar energy. Therefore, in order to identify low-energy structures, we first perform a Born–Oppenheimer DFT-MD simulation for some time and record the total energy. We then analyze this trajectory, pick out the lowest energy structures, and optimize those further by relaxing them.
Third, since the systems can assume different, stable spin configurations and since the spin configuration does not necessarily change spontaneously during a Born–Oppenheimer simulation, the just described steps have to be taken for each system size for all plausible spin configurations. The details of the procedure will be described in the following.
Fig. 1: Three known and one newly determined Pd\(_{13}\) isomer and their corresponding ground-state magnetic moment \(\mu \) obtained with PBE. a, b Two bilayer structures \(C_{s}\) and \(C_{3v}\) with \(6\, \mu _{\text {B}}\), c an icosahedron \(I_{h}\) and d a bicapped heptahedron \(D_{5h}\), both with \(8\, \mu _{\text {B}}\)
For our study we focused on the two small clusters Pd\(_{13}\) and Pd\(_{8}\) that show a distinct spin magnetic moment in vacuum [25, 28]. They are furthermore good candidates because their vacuum structures have been established [10, 22, 23, 25, 27, 28, 32, 57], and their isomers show canonical structural motifs like, e. g., the \(I_h\) icosahedron for Pd\(_{13}\).
The energetic ordering of the isomers and spin configurations in clusters can generally depend on the xc functional [28, 32, 57]. For Pd\(_{13}\), the \(C_s\) bilayer with \(6\, \mu _{\text {B}}\) shown in Fig. 1a and the \(C_{3v}\) bilayer with \(6\, \mu _{\text {B}}\) (b) are preferred by the Perdew–Burke–Ernzerhof [58,59,60] (PBE) and the PBE0 [61, 62] hybrid functional, respectively. For Pd\(_{8}\), the energetic ordering of the isomers has been discussed in detail in Ref. [63] and we thus limit ourselves to the Pd\(_{8}\) isomers relevant to our study. The \(D_{2d}\) bicapped octahedron shown in Fig. 2a is the previously reported global minimum for PBE [10, 25, 63, 64]. It shows a \(4\, \mu _{\text {B}}\) ground state in our calculations. We find a state with \(2\, \mu _{\text {B}}\) just 0.02 eV higher in energy, i.e., quasi degenerate. With PBE0 the energetic difference between the two states increases to 0.10 eV. For the other two Pd\(_{8}\) isomers (Fig. 2b, c) we also find \(4\,\mu _{\text {B}}\) as the preferred magnetic moment with both PBE and PBE0. In Tables 3 and 4 in "Appendix A", we list the energies for each of the Pd\(_{13}\) and Pd\(_{8}\) cluster isomers shown in Figs. 1 and 2, respectively, for both PBE and PBE0, for the low lying stable spin configurations.
Fig. 2: Relevant Pd\(_{8}\) isomers and their corresponding ground-state magnetic moment \(\mu \) obtained with PBE. Pd\(_{8}\): a \(D_{2d}\), b \(C_{2v}\) and c \(C_{1}\) bicapped octahedron, all with \(4\, \mu _{\text {B}}\)
For our calculations, we employed the Turbomole [65] program package. The geometry optimizations (relaxations) discussed in Sect. 3 used PBE in combination with the def2-TZVP [66, 67] basis set, and we used the effective core potential def2-ecp for Pd in combination with all def2 basis sets [66]. For the relaxation of each initial structure, we did several calculations with a fixed number of unpaired electrons to determine the overall lowest energy configurations. In other words, separate calculations were done for a range of spin magnetic moments \(\mu \). For, e. g., the ground-state relaxations of Pd\(_{13}(C_s)\) and one acetophenone molecule we did calculations with \(6 \, \mu _{\text {B}}\), i.e., the moment of the bare metal cluster, and 0, 2, and \(4 \, \mu _{\text {B}}\). As mentioned in Sect. 3.1 we also did some calculations with \(8 \, \mu _{\text {B}}\) as a crosscheck. In the DFT-MD simulations for the solvated clusters and the subsequent relaxation of selected low-energy structures, the studied magnetic moments could typically be restricted to the range of \( [0,2,4,6] \mu _{\text {B}}\) due to the magnetic moment quenching that we discuss in detail below. Nevertheless, we conducted spot checks and also optimized structures with higher magnetic moments. This consistently confirmed that higher moments are energetically less favorable.
For simulating the structural dynamics of the solvated clusters we relied on constant temperature MD simulations with a Nosé-Hoover thermostat [68, 69] set to a temperature of 298 K. Again the spin state was fixed during each of these simulations. The Nosé-Hoover relaxation time \(\tau =560\text { a.u.} \approx 13\) fs was seven times the length of the Born–Oppenheimer dynamics time step \(t=80\text { a.u.}\approx 1.94\) fs. Van der Waals interactions were taken into account via the Grimme correction (DFT-D3) [70]. Solvent effects beyond the solvent molecules that we considered explicitly were modeled with the COSMO approach [71] using a relative permittivity \(\epsilon _r=17.4\) for acetophenone and \(\epsilon _r=20.7\) for acetone [72]. A total simulation time of ca. 6 ps was sufficient for obtaining reliable results about the dominant magnetic moments. For reasons of computational efficiency, we used PBE with def2-SVP [73] in the DFT-MD runs that served to identify low-energy geometries. The thus obtained low-energy structures were then further optimized with higher accuracy as described above.
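The quoted time constants can be cross-checked with the standard conversion factor for the atomic unit of time; the conversion factor below is a textbook value and is not taken from the paper:

```python
# Cross-check of the MD time constants: convert atomic units of time
# to femtoseconds. 1 a.u. of time ~ 2.4189e-2 fs (standard value).
AU_TIME_FS = 2.4188843265857e-2

dt = 80 * AU_TIME_FS      # Born-Oppenheimer time step
tau = 560 * AU_TIME_FS    # Nose-Hoover thermostat relaxation time

print(round(dt, 2))       # 1.94 (fs), matching the value in the text
print(round(tau / dt))    # 7: the relaxation time is seven time steps
```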
We computed the binding energy \(E_{\text {B}}\) of the composite M/S consisting of a metal cluster M and N solvent molecule(s) S,
$$\begin{aligned} E_{\text {B}}(M/S) = E_{\text {tot}}(M/S) - E_{\text {tot}}(M) - N E_{\text {tot}}(S), \end{aligned}$$
as the difference of the molecular total energies \(E_{\text {tot}}\) obtained from ground state DFT calculations, where \( E_{\text {tot}}\) is the total energy for the system in its optimized geometry. Hence, \( E_{\text {tot}}(M)\) always refers to the structural isomer that provides the lowest energy with the functional employed, e. g., the Pd\(_{13}(C_s)\) bilayer with \(6\, \mu _{\text {B}}\) for PBE. We indicate the xc functional used to calculate the total energies at the appropriate places throughout the article, unless it is clear that the results refer to the default choice PBE.
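As a sketch of this bookkeeping (all energies below are placeholder values, not results from the paper):

```python
# Binding energy of a cluster/solvent composite,
# E_B(M/S) = E_tot(M/S) - E_tot(M) - N * E_tot(S).
# The numbers used here are placeholders in eV, for illustration only.
def binding_energy(e_composite, e_cluster, e_solvent, n_solvent):
    return e_composite - e_cluster - n_solvent * e_solvent

e_b = binding_energy(e_composite=-101.7,  # hypothetical E_tot(M/S)
                     e_cluster=-80.0,     # hypothetical E_tot(M)
                     e_solvent=-20.5,     # hypothetical E_tot(S)
                     n_solvent=1)
print(round(e_b, 2))   # -1.2: a negative E_B means the composite is bound
```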
As part of our analysis of the interaction between clusters and solvent, we checked for charge transfer between the cluster and the solvent molecules. For this we relied on the Bader charge density analysis [74] as implemented in the Bader code [75,76,77,78,79] to assign a charge to each of the atoms in the system. By comparing the charge of each atom in the composite system M/S to that within the separate subsystems M and S, we analyzed whether inter-molecular charge transfer occurs. We also analyzed the interaction based on the (generalized) Kohn-Sham density of states (DOS), obtained from the ground state eigenvalues. As a guide to the eye, we convoluted the eigenvalue spectrum with Gaussians of width 0.08 eV. For better inter-system comparison, we plotted the DOS on the same scale, [0:76] arb. units, in all figures. In the calculations for the separate systems that are needed for both the Bader charge and the DOS analysis, we used the atomic coordinates that the subsystem atoms have in the relaxed, combined system, to eliminate differences that would be due only to deviations from the vacuum structure.
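The Gaussian broadening of the eigenvalue spectrum can be sketched as below. Note that this is our own minimal reimplementation, not the authors' script, and we assume that the quoted width of 0.08 eV refers to the Gaussian standard deviation (it could equally denote the FWHM):

```python
import numpy as np


def broadened_dos(eigenvalues_eV, grid_eV, width_eV=0.08):
    """Broadened density of states: one unit-area Gaussian of the given
    width (interpreted here as the standard deviation) is centered at each
    eigenvalue, evaluated on the energy grid, and all Gaussians are summed."""
    e = np.asarray(grid_eV, dtype=float)[:, None]        # grid as column
    eps = np.asarray(eigenvalues_eV, dtype=float)[None, :]  # levels as row
    g = np.exp(-0.5 * ((e - eps) / width_eV) ** 2)
    return (g / (width_eV * np.sqrt(2.0 * np.pi))).sum(axis=1)
```

Each eigenvalue then contributes unit area to the broadened spectrum, so the integrated DOS equals the number of states considered.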
Results: Pd in acetophenone
Pd\(_{13}\) with one acetophenone molecule
We first focus on the Pd\(_{13}\) cluster because it is, as discussed above, a stable system with an established geometry. We start by looking at one acetophenone (\(0\, \mu _{\text {B}}\)) molecule capping a Pd\(_{13}\) cluster. The \(C_s\) bilayer structure, cf. Fig. 1a, with a magnetic moment of \(6\, \mu _{\text {B}}\) serves as the starting geometry in the following simulations.
Due to its geometry with a benzene ring and the CO and CH\(_3\) groups, it is not a priori clear how the acetophenone molecule will orient itself with respect to the Pd\(_{13}(C_s)\) bilayer cluster. Therefore, we performed several geometry optimizations starting from different arrangements of the acetophenone molecule as depicted in Fig. 3, and determined which combination of spatial orientation and magnetic moment is energetically favorable. In one case we aligned the acetophenone parallel to the cluster surface (Fig. 3a). In the other case, acetophenone was set perpendicular, with CO and CH\(_3\) facing the Pd surface (Fig. 3b). For both cases, we started from several different positions of the acetophenone with respect to the cluster's surface, including inclined arrangements as shown in Fig. 3c and d. For all these arrangements, we ran geometry optimizations for different, fixed spin magnetic moments in the range \( [0,2,..,8] \mu _{\text {B}}\). As a check, we included the case of \(8 \, \mu _{\text {B}}\), even though it would imply an increase in magnetic moment compared to the Pd\(_{13}\) vacuum structure. As expected, the \(8 \, \mu _{\text {B}}\) configuration is energetically clearly less favorable than the others.
Different starting geometries for the relaxation of Pd\(_{13}(C_s)\) with one C\(_8\)H\(_8\)O molecule. Color code: Pd blue, C grey, H white, O red. The Pd\(_{13}\) geometry is the same in all cases. See main text for details
Table 1 Binding energy \( E_{\text {B}}\) (second column) for the most stable arrangement of a Pd\(_{13}\) cluster (lines 1–5) and a Pd\(_{8}\) cluster (line 6) surrounded by acetophenone as indicated in the first column
The following observations were made in the relaxations: First, a spin configuration with \(6 \, \mu _{\text {B}}\) is preferred for all orientations, and in no case do we observe significant distortions of the \(C_s\) bilayer geometry through a single acetophenone. Second, in the case of an initially parallel orientation, our relaxed structures show that the benzene ring retains the parallel orientation. In addition, the tilted parallel initial structures (cf. Fig. 3c) transition into a parallel arrangement (cf. Fig. 3a). For the parallel orientation, our binding energies range from \(-1.43\) eV to \(-1.67\) eV for a magnetic moment of \(6 \, \mu _{\text {B}}\). Third, relaxing the initially perpendicular structures leads to geometries in which CO stays closer to the \(C_s\) bilayer surface than the CH\(_3\) group, and the acetophenone remains perpendicular to the surface (similar to Fig. 3b). Inclined perpendicular initial structures (cf. Fig. 3d) generally transition into a clear parallel arrangement (cf. Fig. 3a). However, we also observed in some relaxations that initially perpendicular structures, both with and without inclination, flipped into a parallel geometry. In the perpendicular arrangement, the binding energies range from \(-0.27\) eV to \(-0.78\) eV (\(6 \, \mu _{\text {B}}\)). One can thus conclude that the parallel orientation is clearly the preferred one. An explanation of this finding is provided by analyzing the spatial structure of the valence orbitals: For the parallel arrangement, their probability density delocalizes not only over the cluster but to some extent also over the molecule, and this results in a lower energy.
The magnetic moment of the combined system is the same as for the bare Pd\(_{13}\), but we occasionally observe stable configurations with lower magnetic moments. The first row of Table 1 summarizes the energetic ordering of the different spin configurations. In each case the table refers to the most stable structure that we found.
We also checked the robustness of the predicted spin configurations with respect to the xc approximation. To this end, we took the parallel initial structures that lead to the most stable configurations with PBE and repeated the optimization using PBE-D3 and PBE0-D3, respectively. The resulting binding energies and energy difference are listed in the second and third rows of Table 1. Reassuringly, both D3 corrected functionals also predict a \(6 \, \mu _{\text {B}}\) spin magnetic ground state. The energetic ordering of the higher states changes to some extent, but this is not surprising given the small energetic differences between them. Overall, the test confirms the strategy of using PBE for the extensive geometry optimizations.
DOS of Pd\(_{13}\), C\(_8\)H\(_8\)O, the sum of the DOSs of Pd\(_{13}\) and C\(_8\)H\(_8\)O, and the DOS of the combined system Pd\(_{13}\)/C\(_8\)H\(_8\)O, all obtained from DFT ground state calculations with PBE. The underlying geometry is the one of the parallel arrangement obtained with PBE-D3. See main text for details
Some insight into the electronic interaction between Pd\(_{13}\) and C\(_8\)H\(_8\)O can be obtained from the DOS. The full line in Fig. 4 shows the DOS obtained for the parallel arrangement of Pd\(_{13}\)/C\(_8\)H\(_8\)O. The different dashed lines show the DOS for the separate subsystems Pd\(_{13}\) and C\(_8\)H\(_8\)O, and the sum of the DOSs of the individual subsystems (line labeled Pd\(_{13}\) + C\(_8\)H\(_8\)O), as indicated in the inset of Fig. 4. The comparison of the DOSs of the subsystems to that of the combined system shows that near the Fermi level (HOMO energy, cf. "Appendix A", Table 5) the DOS of the combined system is dominated by Pd\(_{13}\) (down to about \(-8\) eV), whereas in the low-energy region (below ca. \(-10\) eV) it is dominated by C\(_8\)H\(_8\)O. The comparison of the DOS of the combined system to the sum of the two independent DOSs reveals, however, that there is also some electronic interaction, because the two are not identical. Calculating the DOS with PBE0 yields the same qualitative trend (cf. "Appendix A", Fig. 13).
The Bader charge density analysis for the parallel arrangement obtained with PBE-D3 shows a charge transfer of about \(0.2 \,\text {e}^-\) from the Pd\(_{13}\) cluster to the acetophenone molecule, where e\(^-\) is the negative elementary charge. Performing the same analysis with PBE0 for the parallel arrangement obtained with PBE0-D3 yields the same qualitative trend with a charge transfer of about \(0.26\,\text {e}^-\), i.e., the qualitative results are robust with respect to the influence of the xc approximation.
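The comparison underlying these charge-transfer numbers can be sketched as follows. The per-atom charges and atom indices below are invented placeholders for illustration, not Bader charges from this work:

```python
import numpy as np


def cluster_charge_transfer(q_combined, q_separate, cluster_atom_indices):
    """Sum the per-atom Bader charge differences (combined system minus
    separate subsystems) over the atoms belonging to the metal cluster.

    With q taken as the net atomic charge in units of the elementary
    charge, a positive result means the cluster has become more positive,
    i.e., has donated that many electrons to the solvent shell.
    """
    dq = np.asarray(q_combined, dtype=float) - np.asarray(q_separate, dtype=float)
    return dq[list(cluster_atom_indices)].sum()


# Toy example: atoms 0 and 1 form the "cluster", atom 2 the "solvent".
print(cluster_charge_transfer([0.1, 0.1, -0.2], [0.0, 0.0, 0.0], [0, 1]))
```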
Pd\(_{13}\) with three acetophenone molecules
We worked our way towards a more realistic description of solvation by combining the Pd\(_{13}(C_s)\) cluster with three acetophenone molecules and carrying out constant-temperature DFT-MD simulations, following the procedure described in Sect. 2. We did separate DFT-MD runs for each of the magnetic moments \( \mu \in [0,2,4,6] \mu _{\text {B}}\). As the starting configuration we arranged two acetophenone molecules parallel and one perpendicular to the Pd surface. During the first ca. 1.2 ps the system restructures considerably as it moves from the initial geometry to a lower-energy configuration: The one acetophenone that initially had a perpendicular orientation with respect to Pd\(_{13}\) rapidly tilts into the more stable parallel orientation, which further corroborates the results of the previous section. All three solvent molecules then keep this type of orientation. This deforms the initial \(C_s\) bilayer and, for all of the following simulation time, stabilizes a new Pd\(_{13}\) structure with a higher spatial symmetry, \(D_{5h}\). The two basic structural motifs are depicted by the insets in Fig. 5a.
Total energy analysis of the DFT-MD simulation of Pd\(_{13}\) and three acetophenone molecules. The different curves show the simulations for different, fixed magnetic moments. a Averaged total energy \({\bar{E}}(t)\). The vertical line indicates the restructuring period and the insets depict the structure of the central Pd cluster before and after the restructuring. b Moving average of the total energy E(t). Inset: At the marked energy minima \(min_{\text {MD}}=1,\ldots ,7\), structures were extracted for further optimization. See main text for details
In order to see which magnetic moment does now become the energetically preferred one, we record the total energy for each of the runs with different spin configurations. Because of the pronounced, rapid fluctuations in the total energy that are an inherent feature of MD simulations, the decisive trends are easier to see when one averages over some of those fluctuations. Therefore, Fig. 5 depicts the data in two different ways. First, we defined the average energy \({\bar{E}}(t)\) at a given time \(t=t_i\) by
$$\begin{aligned} {\bar{E}}(t_i) = \frac{\sum _{j=1}^{i} E_{\text {tot}}(t_j)}{i} , \end{aligned}$$
i. e., as the average over all of the previous discrete time steps, including the current one. Here, \(1\le i \le N_{\text {t}}\) and the total number of time steps was \(N_{\text {t}}\approx 3100\). Averaging in this way over an increasing number of geometries allows one to visualize the trend in the simulations, with the fluctuations being largely averaged out. One thus obtains a direct depiction of the total energy trend for each spin configuration. (The more detailed information from a moving average is discussed below.) Figure 5a shows \({\bar{E}}(t)\) for the simulations of Pd\(_{13}\) with three acetophenone molecules. For the sake of convenient energy-axis labels, the offset \({\bar{E}}(t)-E_{ref}\), with \(E_{ref}=76617\) eV, is used in the plots of Fig. 5. After the above-mentioned restructuring there is a clear change in the preferred magnetic moment: Fig. 5a shows that the DFT-MD run with zero magnetic moment leads to the lowest average energy. The \(4 \, \mu _{\text {B}}\) structures are slightly higher in energy, and the \(2 \, \mu _{\text {B}}\) and \(6 \, \mu _{\text {B}}\) ones are noticeably higher.
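The running average \({\bar{E}}(t_i)\) defined above is simply the cumulative mean of the recorded total energies. A minimal sketch (the function name is ours):

```python
import numpy as np


def cumulative_average(e_tot):
    """bar(E)(t_i) = (1/i) * sum_{j=1..i} E_tot(t_j): the mean over all
    discrete time steps up to and including the current one."""
    e = np.asarray(e_tot, dtype=float)
    return np.cumsum(e) / np.arange(1, len(e) + 1)


# Example with toy energies: the early fluctuations are damped over time.
print(cumulative_average([1.0, 2.0, 3.0]))  # [1.  1.5 2. ]
```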
Table 2 Binding energies \( E_{\text {B}}(min_{\text {MD}})\) of relaxed structures of Pd\(_{13}\) and three acetophenone molecules for different magnetic moments, in eV
For a more detailed insight Fig. 5b depicts the total energy curve of the DFT-MD runs for each magnetic moment. In this plot we used a moving average E(t) of the total energy: At \(t_i\) we averaged the total energy over the previous and subsequent \(\varDelta \) time steps:
$$\begin{aligned} E(t=t_i) = \frac{ \sum _{j=i-\varDelta }^{i+\varDelta } E_{\text {tot}}(t_j) }{ (2\varDelta +1) } , \end{aligned}$$
where \(\varDelta =50\), \(i-\varDelta \ge 1\) and \(i+\varDelta \le N_t\). \(\varDelta \) was chosen such that some of the smaller energy fluctuations were smoothed out, while preserving the main progression of the (unaveraged) total energy. Comparing E(t) for the different magnetic moments in Fig. 5b confirms that \(0 \, \mu _{\text {B}}\) leads to the lowest energy.
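The centered moving average E(t) over \(2\varDelta +1\) steps can be written compactly as a convolution with a flat kernel. As in the text, it is only evaluated where the full window fits inside the trajectory (the function name is ours):

```python
import numpy as np


def moving_average(e_tot, delta=50):
    """Centered moving average over a window of 2*delta + 1 time steps,
    defined only where the full window lies inside the data,
    i.e., for i - delta >= 1 and i + delta <= N_t."""
    window = 2 * delta + 1
    kernel = np.ones(window) / window
    # mode="valid" drops the edge points where the window would stick out
    return np.convolve(np.asarray(e_tot, dtype=float), kernel, mode="valid")


# Toy example with delta=1 (window of 3 steps):
print(moving_average([1, 2, 3, 4, 5], delta=1))  # [2. 3. 4.]
```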
In order to confirm yet further which magnetic moment is the energetically preferred one, we selected some of the low energy structures from each run. These are marked by the small circles numbered from 1 to 7 in Fig. 5b, and we refer to these numbers with the notation \(min_{\text {MD}}\) in the following. We optimized (relaxed) these structures to the energetic minimum in separate calculations. Table 2 lists the binding energy \(E_{\text {B}}(min_{\text {MD}})\) for the relaxed structures.
The relaxed structures show a \(0\, \mu _{\text {B}}\) or \(2 \, \mu _{\text {B}}\) ground state. In some cases, the binding energies for the \([0,2,4] \, \mu _{\text {B}}\) structures lie energetically close together, and one has to keep in mind that the employed xc approximations have a finite accuracy. However, the \(6\, \mu _{\text {B}}\) configurations are quite noticeably less favored. A geometrical analysis of the relaxed structures shows that the distance between the benzene rings of the solvent shell and the Pd surface is on average ca. 1.98 Å.
Solvation thus clearly reduces the magnetic moment, and this raises the question of why that is. It could be a direct effect of the electronic interaction between the acetophenone molecules and the Pd\(_{13}\) cluster, or it could be an indirect effect if the new \(D_{5h}\) geometry of Pd\(_{13}\) that is triggered by the solvation is associated with a lower magnetic moment than the original \(C_s\) structure. We checked for this by computing the magnetic moment of the \(D_{5h}\) structure in vacuum. First, we took the Pd\(_{13}\) coordinates from the Pd\(_{13}\)/(C\(_8\)H\(_8\)O)\(_3\) arrangement and calculated the ground state for this bare Pd\(_{13}\) structure without the acetophenone molecules. It shows a magnetic moment of \(8 \, \mu _{\text {B}}\). As a further check we also energetically optimized the extracted structure. The relaxed (bare) \(D_{5h}\) cluster also has an \(8 \, \mu _{\text {B}}\) magnetic ground state, and it is 0.41 eV higher in energy than the \(C_s\) bilayer structure with \(6 \, \mu _{\text {B}}\). Thus, the reduction of the magnetic moment is a direct consequence of the electronic interaction with the acetophenone.
The conclusion that there is a substantial interaction between the cluster and the solvent molecules is also confirmed by analyzing the DOS and the charge transfer. The DOS of Pd\(_{13}\)/(C\(_8\)H\(_8\)O)\(_3\) can be explained neither quantitatively nor qualitatively by the superposition of the DOSs of the separate subsystems Pd\(_{13}\) and (C\(_8\)H\(_8\)O)\(_3\), as depicted in Fig. 6. The differences are very noticeable, e.g., in the peak structure close to the Fermi level. Compared to the scenario with a single acetophenone molecule, the differences are overall more pronounced. The charge transfer from the Pd\(_{13}\) cluster to the solvent molecules is also more pronounced than in the scenario with one acetophenone, with a value of \(\approx 0.7\,\text {e}^{-}\).
DOS of Pd\(_{13}\), (C\(_8\)H\(_8\)O)\(_3\), the sum of the DOSs of Pd\(_{13}\) and (C\(_8\)H\(_8\)O)\(_3\), and the DOS of the combined system Pd\(_{13}\)/(C\(_8\)H\(_8\)O)\(_3\), obtained from DFT ground state calculations with PBE. See main text for details
Pd\(_{13}\) with five acetophenone molecules
By combining Pd\(_{13}\) with five acetophenone molecules we reach a situation in which the metal cluster is surrounded by the solvent molecules. We used the Pd\(_{13}(D_{5h})\) structure from the previous section, i. e., the one stabilized by three acetophenone molecules (cf. Fig. 1d) as the initial geometry for Pd\(_{13}\), and the five acetophenones were initially aligned parallel to the Pd surface. We then again ran separate DFT-MD simulations for each of the magnetic moments \( \mu \in [0,2,4] \mu _{\text {B}}\). Due to the results of the previous section we did not study higher magnetic moments. In the initial phase of the DFT-MD simulations we observed a restructuring of the central Pd\(_{13}\): The solvent shell stabilized an icosahedral \(I_h\) cluster (cf. Fig. 7, inset). After this phase, the averaged total energy \({\bar{E}}(t)\) of the DFT-MD runs plotted in Fig. 7 shows a clear tendency towards a quenching of the spin magnetic moment to either \(0\,\mu _{\text {B}}\) or \(2 \,\mu _{\text {B}}\), with \(4 \,\mu _{\text {B}}\) being noticeably higher in energy.
Averaged total energy \({\bar{E}}(t)\) of the DFT-MD simulation of Pd\(_{13}\) and five acetophenone molecules (\(E_{ref}=97525\) eV). Inset: Initial restructuring phase of \(\approx 1.3\) ps (vertical line) and the central Pd cluster before and after the restructuring. See main text for details
We extracted five of the lowest-energy structures from the DFT-MD runs, and their relaxations confirmed the trend: \(0\,\mu _{\text {B}}\) or \(2 \,\mu _{\text {B}}\) are the preferred low-energy states, while \(4 \, \mu _{\text {B}}\) is clearly higher in energy by some 0.10 eV to 0.45 eV. The most stable structure, shown in Fig. 8b, has a binding energy of \(-8.41\) eV with \(0\,\mu _{\text {B}}\) (cf. Table 1, line 6). A structure with \(2 \,\mu _{\text {B}}\) is quasi degenerate, and \(4 \, \mu _{\text {B}}\) is distinctly less favorable by 0.35 eV.
Optimized structures of Pd\(_{13}\) surrounded by a solvent shell of a three and b five acetophenone molecules. The solvent shells tend to stabilize clusters with a high spatial symmetry. The lowest energy structure of Pd\(_{8}\) with five acetophenone molecules is depicted in c
We also tested again whether the reduction of the magnetic moment is a direct or an indirect effect by calculating the density and the magnetic moment for the bare Pd\(_{13}\) in the geometry that it takes in the Pd\(_{13}\)/(C\(_8\)H\(_8\)O)\(_5\) arrangement. The bare extracted Pd\(_{13}\) cluster recovers the \(8 \, \mu _{\text {B}}\) that was also found in the optimized \(I_h\) structure. This confirms the conclusion of the previous section that the quenching of the magnetic moment is a consequence of an electronic interaction between acetophenone and the metal cluster.
Again, the electronic interaction is also reflected in the charge density and the DOS of Pd\(_{13}\)/(C\(_8\)H\(_8\)O)\(_5\): Firstly, the charge transfer from Pd\(_{13}\) to the solvent shell continues to increase, to about \(1.2 \,\text {e}^-\). The charge transfer occurs predominantly to the C atoms of the benzene-like ring of all five acetophenone molecules. Secondly, the DOS of the combined system clearly differs from that of the superposition of the stabilized Pd\(_{13}\) cluster and the solvent shell (cf. Fig. 9), as seen, e. g., close to the Fermi level. Thus, the stabilized Pd\(_{13}\) cluster interacts with the entire shell of five acetophenones.
An analysis of the orbitals of the combined Pd\(_{13}\)/(C\(_8\)H\(_8\)O)\(_5\) system with \(0\,\mu _{\text {B}}\) corroborates that there is substantial electronic interaction: Many of the valence orbitals are delocalized over all five acetophenone molecules and the central Pd\(_{13}\) cluster. This is shown as an example in Fig. 10a for the highest-occupied molecular orbital (HOMO) of Pd\(_{13}\)/(C\(_8\)H\(_8\)O)\(_5\) with \(0\,\mu _{\text {B}}\), computed with PBE. Because it is well known that semilocal exchange-correlation functionals tend to overestimate orbital delocalization, we repeated this calculation with the PBE0 functional. Figure 10b shows the HOMO from the PBE0 calculation, and the similarity to panel (a) is apparent. We have furthermore checked the 15 highest occupied orbitals and found that both functionals predict a qualitatively very similar delocalization for these. We can therefore conclude that the observed delocalization is not an artifact of the semilocal xc approximation.
Fig. 10 HOMO of Pd\(_{13}\)/(C\(_8\)H\(_8\)O)\(_5\) with \(0\, \mu _{\text {B}}\) obtained from a ground state calculation with a PBE and b PBE0. Tan and red colors correspond to positive and negative values, respectively. The plots used an isovalue of \(0.01 \, \text {bohr}^{-3}\)
Pd\(_{8}\) with five acetophenone molecules
As the final step of our investigation, we check whether the reduction of the magnetic moment upon solvation is specific to Pd\(_{13}\), or also occurs for another system. To this end, we started from the \(C_{2v}\) bicapped octahedron Pd\(_{8}\) shown in Fig. 2b with a \(4 \, \mu _{\text {B}}\) ground state and surrounded the cluster by a solvent shell of five acetophenone molecules in parallel alignment towards the Pd surface.
Averaged total energy \({\bar{E}}(t)\) of the DFT-MD simulation for Pd\(_{8}\) and five acetophenone molecules (\(E_{ref}=80115\) eV). Inset: Initial restructuring phase of \(\approx 1.3\) ps (vertical line), and the central Pd\(_8\) cluster before and after. See main text for details
We then ran separate DFT-MD simulations of the composite system for each of the magnetic moments \( \mu \in [0,2,4] \mu _{\text {B}}\). In the initial phase of the DFT-MD simulations, we observed a restructuring of the central Pd\(_{8}\) during which the solvent shell stabilized another bicapped octahedron isomer (cf. Fig. 11, inset) with a reduced symmetry \(C_1\). After this phase, the averaged total energy \({\bar{E}}(t)\) plotted in Fig. 11 shows a clear trend towards a quenching of the magnetic moment: Both \(0\,\mu _{\text {B}}\) and \(2 \,\mu _{\text {B}}\) structures are clearly lower in energy than the \(4 \,\mu _{\text {B}}\) ones. The relaxation of the two lowest-energy structures from the DFT-MD runs leads to a \(0\,\mu _{\text {B}}\) ground state and thus further confirms the reduction of the magnetic moment. The most stable arrangement, see Fig. 8c, has a binding energy of \(-5.98\) eV with \(0\,\mu _{\text {B}}\). \(2\,\mu _{\text {B}}\) is 0.17 eV higher in energy, and \(4\,\mu _{\text {B}}\) is clearly less favorable by 0.85 eV (cf. Table 1, line 5).
We again computed the magnetic moment for the metal cluster in the geometry that it takes in the Pd\(_{8}\)/(C\(_8\)H\(_8\)O)\(_5\) arrangement with \(0\,\mu _{\text {B}}\). Without the acetophenone molecules, the magnetic moment goes up to \(4 \, \mu _{\text {B}}\), with a \(2 \, \mu _{\text {B}}\) configuration just slightly higher (0.03 eV) in energy. We also again found a delocalization of the valence orbitals over the metal cluster and acetophenone molecules.
DOS of Pd\(_{8}\)/(C\(_8\)H\(_8\)O)\(_5\), Pd\(_{8}\), (C\(_8\)H\(_8\)O)\(_5\), and the sum of the DOS of Pd\(_{8}\) and (C\(_8\)H\(_8\)O)\(_5\) obtained from DFT ground state calculations with PBE. See main text for details
We also analyzed the charge density and the Kohn-Sham DOS in the same way as for the Pd\(_{13}\) cluster surrounded by acetophenone. This revealed a charge transfer of about \(1.0 \,\text {e}^-\) from the Pd\(_8\) cluster to the solvent shell. The DOS of the combined system Pd\(_{8}\)/(C\(_8\)H\(_8\)O)\(_5\) also differs noticeably from the DOS that results from summing the DOSs of the molecular components (cf. Fig. 12). Thus, all the effects observed for Pd\(_{13}\) are also found for Pd\(_8\), which is a strong hint that these findings are general.
We used DFT Born–Oppenheimer MD simulations combined with further geometry optimizations to study how the magnetic moment of Pd clusters changes when they are solvated in the organic solvent acetophenone. Our study focused on Pd\(_8\) and Pd\(_{13}\) because the vacuum ground states of these systems are clearly established and have a nonzero magnetic moment, and because these clusters have isomers that span a range of paradigm geometries. Our simulations reveal a clear trend towards a reduction of the magnetic moment. For both clusters, the calculations with the largest number of solvent molecules lead to ground states with a zero magnetic moment. Analysis of the molecular orbitals shows that there is a pronounced interaction between the Pd particles and the acetophenone molecules, with the valence orbitals delocalizing over both the metal and the acetophenone. The interaction also manifests itself in a noticeable charge transfer from the metal cluster to the solvent molecules, and in clear signatures in the DOS. Putting these findings into perspective with earlier studies, we note that the presence of one benzyl alcohol or one benzylamine molecule also leads to a reduction of the magnetic moment of Pd clusters [18, 19]. Futschek et al. reported that the quenching of the magnetic moment can also occur in composite systems of Pd\(_{13}\) with the ligands phosphine and thiol [25]. Interestingly, however, the quenching is less pronounced for phosphines, as there the system retains a \(4\,\mu _{\text {B}}\) state even at high ligand coverage.
Table 3 Total energy \( E_{\text {tot}}\) (third column) for the PBE and PBE0 xc functional (xc, second column) for the lowest energy configuration for each of the Pd\(_{13}\) cluster isomers (first column) shown in Fig. 1. Columns four to seven list how much higher in energy other spin configurations (i.e., magnetic moments) are. Numbers are given in eV and refer to def2-TZVP/xc
Our results show that detecting solvated Pd particles via their magnetic moment, and in this way contributing to answering the "leaching" question that was discussed in the introduction, is going to be difficult when the solvent is acetophenone. The situation may be different, though, for solvents that do not interact as strongly. Our study provides hints that the aromatic character of acetophenone plays an important role in the electronic interaction that leads to the magnetic quenching. In order to check this hypothesis, we computed, as an outlook, the interaction between Pd\(_{13}\) and five acetone molecules. The results are reported in "Appendix B" and show that, indeed, the interaction and magnetic moment quenching are less pronounced for these smaller solvent molecules that do not feature an aromatic ring. Thus, avoiding the strong metal-solvent interaction triggered by the aromatic character is an obvious first step in the search for solvents that preserve the magnetic moment.
This manuscript has associated data in a data repository. [Authors' comment: The atomic coordinates for the clusters shown in Figures 1 and 2 are provided as xyz-files.]
M. Brack, Rev. Mod. Phys. 65, 677–732 (1993)
W.A. DeHeer, Rev. Mod. Phys. 65, 611–676 (1993)
Z. Luo, A.W. Castleman, S.N. Khanna, Chem. Rev. 116, 14456–14492 (2016)
D. Srimani, S. Sawoo, A. Sarkar, Org. Lett. 9, 3639–3642 (2007)
G. Johnson, R. Mitric, V. Bonacic-Koutecky, A.J. Castleman, Chem. Phys. Lett. 475, 1–9 (2009)
R. Jana, T.P. Pathak, M.S. Sigman, Chem. Rev. 111, 1417–1492 (2011)
B. Yoon, U. Landman, V. Habibpour, C. Harding, S. Kunz, U. Heiz, M. Moseler, M. Walter, J. Phys. Chem. C 116, 9594–9607 (2012)
A. Dhakshinamoorthy, H. Garcia, Chem. Soc. Rev. 41, 5262–5284 (2012)
J. Souto-Casares, M. Sakurai, J. Chelikowsky, Phys. Rev. B 93, 174418 (2016)
J.-X. Liu, Y. Su, I.A.W. Filot, E.J.M. Hensen, J. Am. Chem. Soc. 140, 4580–4587 (2018)
L. Goncalves, J. Wang, S. Vinati, E. Barborini, X.-K. Wei, M. Heggen, M. Franco, J. Sousa, D.Y. Petrovykh, O.S.G.P. Soares, K. Kovnir, J. Akola, Y.V. Kolen'ko, Int. J. Hydrog. Energy 45, 1283–1296 (2020)
V. Subramanian, E. Wolf, P.V. Kamat, J. Phys. Chem. B 105, 11439–11446 (2001)
P.V. Kamat, J. Phys. Chem. B 106, 7729–7744 (2002)
S. Füldner, R. Mild, H.I. Siegmund, J.A. Schroeder, M. Gruber, B. König, Green Chem. 12, 400–406 (2010)
M.-C. Wu, J. Hiltunen, A. Sápi, A. Avila, W. Larsson, H.-C. Liao, M. Huuhtanen, G. Tóth, A. Shchukarev, N. Laufer, Á. Kukovecz, Z. Kónya, J.-P. Mikkola, R. Keiski, W.-F. Su, Y.-F. Chen, H. Jantunen, P.M. Ajayan, R. Vajtai, K. Kordás, ACS Nano 5, 5025–5030 (2011)
C. Wang, K.E. Dekrafft, W. Lin, J. Am. Chem. Soc. 134, 7211–7214 (2012)
S. Sarina, E. Jaatinen, Q. Xiao, Y.M. Huang, P. Christopher, J.C. Zhao, H.Y. Zhu, J. Phys. Chem. Lett. 8, 2526–2534 (2017)
D. Tilgner, M. Klarner, S. Hammon, M. Friedrich, A. Verch, N. de Jonge, S. Kümmel, R. Kempe, Aust. J. Chem. 72, 842–847 (2019)
M. Klarner, S. Hammon, S. Feulner, S. Kümmel, L. Kador, R. Kempe, ChemCatChem 12, 4593–4599 (2020)
A.J. Cox, J.G. Louderback, S.E. Apsel, L.A. Bloomfield, Phys. Rev. B 49, 12295–12298 (1994)
K. Lee, Phys. Rev. B 58, 2391–2394 (1998)
M. Moseler, H. Häkkinen, R.N. Barnett, U. Landman, Phys. Rev. Lett. 86, 2545–2548 (2001)
P. Nava, M. Sierka, R. Ahlrichs, Phys. Chem. Chem. Phys. 5, 3372–3381 (2003)
E. Aprà, A. Fortunelli, J. Phys. Chem. A 107, 2934–2942 (2003)
T. Futschek, M. Marsman, J. Hafner, J. Phys. Condens. Matter 17, 5927–5963 (2005)
L.-L. Wang, D.D. Johnson, Phys. Rev. B 75, 235405 (2007)
M.J. Piotrowski, P. Piquini, J.L.F. Da Silva, Phys. Rev. B 81, 155446 (2010)
A.M. Köster, P. Calaminici, E. Orgaz, D.R. Roy, J.U. Reveles, S.N. Khanna, J. Am. Chem. Soc. 133, 12192–12196 (2011)
J. Kaiser, L. Leppert, H. Welz, F. Polzer, S. Wunder, N. Wanderka, M. Albrecht, T. Lunkenbein, J. Breu, S. Kümmel, Y. Lu, M. Ballauff, Phys. Chem. Chem. Phys. 14, 6487–6495 (2012)
L. Leppert, R.Q. Albuquerque, A.S. Foster, S. Kümmel, J. Phys. Chem. C 117, 17268–17273 (2013)
A.S. Chaves, G.G. Rondina, M.J. Piotrowski, P. Tereshchuk, J.L. Da Silva, J. Phys. Chem. A 118, 10813–10821 (2014)
L. Leppert, R. Kempe, S. Kümmel, Phys. Chem. Chem. Phys. 17, 26140–26148 (2015)
M.J. Piotrowski, C.G. Ungureanu, P. Tereshchuk, K.E.A. Batista, A.S. Chaves, D. Guedes-Sobrinho, J.L.F. Da Silva, J. Phys. Chem. C 120, 28844–28856 (2016)
A.S. Chaves, M.J. Piotrowski, J.L. Da Silva, Phys. Chem. Chem. Phys. 19(23), 15484–15502 (2017)
Financial support from the German Research Foundation (DFG SFB 840, B1), from the Bavarian State Ministry of Science, Research, and the Arts for the Collaborative Research Network "Solar Technologies go Hybrid", and from the Bavarian Polymer Institute (KeyLab Theory and Simulation) in terms of computing resources is gratefully acknowledged.
Open Access funding enabled and organized by Projekt DEAL.
Theoretical Physics IV, University of Bayreuth, 95440, Bayreuth, Germany
Sebastian Hammon & Stephan Kümmel
Electronic Excitations in Light-Converting Systems, University of Bayreuth, 95440, Bayreuth, Germany
Linn Leppert
MESA+ Institute for Nanotechnology, University of Twente, 7500 AE, Enschede, The Netherlands
Sebastian Hammon
Stephan Kümmel
SH ran all calculations that are reported in the paper. SK supervised the work and SH and SK discussed the results regularly. LL introduced SH to technical aspects of using Turbomole. SH and SK wrote the manuscript, with suggestions and comments also from LL. All authors were involved in the scientific discussion of the results and in drawing the conclusions.
Correspondence to Stephan Kümmel.
Appendix A: Electronic structure details
Tables 3 and 4 complement the discussion on the Pd\(_{13}\) and Pd\(_8\) cluster isomers of Sect. 2, respectively. Both tables list the total energy of the most stable arrangement (column 3) and the energy differences to configurations with other magnetic moments relative to the former (columns 4 and following) for PBE and PBE0 (column 2). Relaxing Pd\(_{13}\)(\(D_{5h}\)) with PBE0 did not lead to a stable configuration, and Pd\(_8\)(\(D_{2d}\)) with \(0 \, \mu _{\text {B}}\) was not stable with PBE; therefore, no numbers are reported for these cases.
Table 4 Total energy \( E_{\text {tot}}\) (third column) for the PBE and PBE0 xc functional (xc, second column) for the most stable arrangement of the Pd\(_{8}\) cluster isomers (first column) shown in Fig. 2. Columns four to six list how much higher in energy other spin configurations (i.e., magnetic moments) are. Numbers are given in eV and refer to def2-TZVP/xc
As a further insight into the electronic structure we report how the binding between the Pd cluster and acetophenone affects the energetic position of the frontier orbitals. Table 5 lists the highest occupied molecular orbital (HOMO) eigenvalue and the size of the energy gap to the lowest unoccupied molecular orbital (LUMO) for the most stable arrangement that we found for each composite system M/S, and the same quantities for the different bare cluster isomers and solvent molecules.
For the most stable parallel arrangement of Pd\(_{13}\)/C\(_8\)H\(_8\)O with \(6 \, \mu _{\text {B}}\) obtained with PBE (line 1), the HOMO is at \(-4.46\) eV and the gap to the LUMO is 0.10 eV. The comparison with the (bare) components Pd\(_{13}(C_s)\) (line 6) and C\(_8\)H\(_8\)O (line 11) shows that both the HOMO and LUMO eigenvalue of the combined system are very similar to the values for the (bare) metal cluster. For the D3-corrected structures of Pd\(_{13}\)/C\(_8\)H\(_8\)O and its components, the HOMO energy and gap deviate by only \(\pm 0.01\) eV from the values given in Table 5 and are therefore not listed separately.
For the most stable arrangement of Pd\(_{13}\)/(C\(_8\)H\(_8\)O)\(_3\), shown in Fig. 8a, the HOMO eigenvalue is \(-4.43\) eV and the gap to the LUMO is 0.11 eV (cf. Table 5, line 2). The HOMO of the most stable arrangement of Pd\(_{13}\)/(C\(_8\)H\(_8\)O)\(_5\) (cf. Fig. 8b) is at \(-4.27\) eV and the LUMO is 0.19 eV higher in energy (line 3). Thus, compared to the systems with one and three acetophenone molecules, the energy of the HOMO is slightly higher and the gap to the LUMO is slightly larger. The results show that the size of the acetophenone solvent shell has a rather small influence on the HOMO and LUMO.
For completeness, we also included the most stable arrangement of Pd\(_{8}\)/(C\(_8\)H\(_8\)O)\(_5\) (cf. Fig. 8c) in Table 5. The comparison with the separate components shows that also here, the HOMO and LUMO energies are dominated by the Pd cluster (cf. lines 4, 10 and 11).
Table 5 HOMO eigenvalue \( \varepsilon _{\text {HOMO}}\) (second column) and magnitude of the HOMO-LUMO gap \(\varDelta _{\text {s}}\) (third column) for the systems given in the first column
DOS of Pd\(_{13}\)/C\(_8\)H\(_8\)O, Pd\(_{13}\), C\(_8\)H\(_8\)O, and the sum of the DOS of Pd\(_{13}\) and C\(_8\)H\(_8\)O obtained from DFT ground state calculations with PBE0
Finally, we also report results that allow one to check the influence of the xc approximation. To this end we looked at the relaxed PBE0-D3 structures of Pd\(_{13}\)/C\(_8\)H\(_8\)O, Pd\(_{13}(C_s)\) and C\(_8\)H\(_8\)O. As expected, the hybrid PBE0 yields a lower HOMO energy and a larger HOMO-LUMO gap for each system: The HOMO of Pd\(_{13}\)(\(C_s\))/C\(_8\)H\(_8\)O (Pd\(_{13}\)) is at \(-5.03\) eV (\(-5.14\) eV), and the LUMO is 1.69 eV (1.69 eV) higher. Otherwise, however, the qualitative picture closely corresponds to the one found with PBE. This conclusion is also supported by Fig. 13, which shows the PBE0 DOS computed for the relaxed PBE0-D3 structure of the parallel arrangement of Pd\(_{13}\)/C\(_8\)H\(_8\)O, cf. Sect. 3.1.
Appendix B: Pd\(_{13}\) with five acetone molecules
We here report results for a different solvent to provide an outlook on possible future work. A non-aromatic solvent makes for an interesting comparison, because our findings for acetophenone indicate that the aromatic ring plays an important role in the binding and interaction with the palladium clusters. The results reported here are to be understood as a first, qualitative test.
To keep the differences between the solvents small except for the missing aromatic ring, we remained in the family of ketones and selected acetone (C\(_3\)H\(_6\)O), which is the simplest of them and of similar polarity as acetophenone. We combined Pd\(_{13}\) with five acetone molecules, which compares best with Pd\(_{13}\) with three acetophenones in terms of the total system size, and with Pd\(_{13}\)/(C\(_8\)H\(_8\)O)\(_5\) in terms of the number of solvent molecules. We chose Pd\(_{13}(I_h)\) as the initial geometry for the cluster and calculated optimized structures of Pd\(_{13}\)/(C\(_3\)H\(_6\)O)\(_5\) in analogy to the procedures described in the previous sections on acetophenone (cf. also Sect. 2).
The thus determined most stable arrangement, see Fig. 14, has a binding energy of \(-3.39\) eV and prefers \(6\,\mu _{\text {B}}\) with PBE. An arrangement with \(4\,\mu _{\text {B}}\) is slightly less favorable by 0.04 eV, followed by \(2\,\mu _{\text {B}}\) by 0.08 eV, \(0\,\mu _{\text {B}}\) by 0.18 eV, and \(8\,\mu _{\text {B}}\) by 0.19 eV. The binding between Pd\(_{13}\) and the acetone shell is thus smaller than in the acetophenone systems with similar system size or number of solvent molecules, respectively. For Pd\(_{13}\)/(C\(_3\)H\(_6\)O)\(_5\), the quenching to \(6\,\mu _{\text {B}}\) - compared to the \(8\,\mu _{\text {B}}\) of the Pd\(_{13}\)(\(I_h\)) cluster in vacuum - is also caused by an electronic interaction, as revealed by the analysis of the stabilized Pd\(_{13}\) cluster, which we performed in analogy to the acetophenone systems. However, the quenching is considerably less pronounced than in the comparable acetophenone systems.
Optimized structure of Pd\(_{13}\) surrounded by five acetone molecules. See main text for details
We then further characterized the electronic structure in analogy to the previous sections: For the most stable arrangement, the HOMO is at \(-3.86\) eV and the HOMO-LUMO gap is 0.07 eV (cf. "Appendix A", Table 5, line 5). The comparison with the (bare) molecular subsystems Pd\(_{13}(I_h)\) (line 7) and C\(_3\)H\(_6\)O (line 12) shows that both the HOMO and LUMO eigenvalues of the combined system are determined mostly by the (bare) metal cluster.
DOS of Pd\(_{13}\)/(C\(_3\)H\(_6\)O)\(_5\), Pd\(_{13}\), (C\(_3\)H\(_6\)O)\(_5\), and the sum of the DOS of Pd\(_{13}\) and (C\(_3\)H\(_6\)O)\(_5\) obtained from DFT ground state calculations with PBE. See main text for details
The binding between Pd\(_{13}\) and (C\(_3\)H\(_6\)O)\(_5\) is reflected in the corresponding DOS by qualitative and quantitative changes that one observes when comparing to the superposition of the molecular components, as depicted in Fig. 15. Overall, this suggests a direct electronic interaction between the cluster and the solvent, and in this respect the effects are similar to those found for acetophenone. However, the two solvents differ significantly with respect to the inter-molecular charge transfer: About \(0.3 \,\text {e}^{-}\) is transferred from Pd\(_{13}\) to the acetone shell, which is smaller by a factor of about 2 and 4 than for Pd\(_{13}\)/(C\(_8\)H\(_8\)O)\(_3\) and Pd\(_{13}\)/(C\(_8\)H\(_8\)O)\(_5\), respectively. The reduced charge transfer, together with the reduced magnetic moment quenching, indicates that non-aromatic solvents may better preserve the magnetic moment of solvated clusters.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Hammon, S., Leppert, L. & Kümmel, S. Magnetic moment quenching in small Pd clusters in solution. Eur. Phys. J. D 75, 309 (2021). https://doi.org/10.1140/epjd/s10053-021-00322-1
The multivariate slash and skew-slash student t distributions
Fei Tan1,
Yuanyuan Tang2 &
Hanxiang Peng1
In this article, we introduce the multivariate slash and skew-slash t distributions, which provide alternative choices in simulating and fitting skewed and heavy tailed data. We study their relationships with other distributions and give the densities, stochastic representations, moments, marginal distributions, distributions of linear combinations and characteristic functions of the random vectors obeying these distributions. We characterize the skew t, the skew-slash normal and the skew-slash t distributions using both the hidden truncation or selective sampling model and the order statistics of the components of a bivariate normal or t variable. Density curves and contour plots are drawn to illustrate the skewness and tail behaviors. Maximum likelihood and Bayesian estimation of the parameters are discussed. The proposed distributions are compared with the skew-slash normal through simulations and applied to fit two real datasets. Our results indicate that the proposed skew-slash t fitting outperformed the skew-slash normal fitting and that the skew-slash t is a competitive candidate distribution for analyzing skewed and heavy tailed data.
Mathematics Subject Classification Primary 62E10; Secondary 62P10
Skewed and heavy tailed data occur frequently in real life and pose challenges to our usual way of thinking. Examples of such data include household incomes, loss data such as crop loss claims and hospital discharge bills, and files transferred through the Internet, to name a few. Candidate distributions for simulating and fitting such data are not abundant. One cannot simply take the normal or the t distribution as a substitute. Even though the Cauchy distribution can be used to simulate and fit such data, its sharp central peak and the fact that its first moment does not exist narrow its applications. Thus additional distributions are needed to study such skewed and heavy tailed data.
Kafadar (1988) introduced the univariate normal slash distribution as the distribution of the ratio of a standard normal random variable (rv) and an independent uniform rv (hereafter referred to as the slash normal). Generalizing the standard normal by introducing a tail parameter, the slash normal has heavier tails than the standard normal, hence it can be used to simulate and fit heavy tailed data. Wang and Genton (2006) generalized the univariate slash normal to the multivariate slash normal and investigated its properties. They also defined the multivariate skew-slash normal as the distribution of the ratio of a skewed normal rv and an independent uniform rv (hereafter referred to as the skew-slash normal). They applied it to fit two real datasets.
In this article, we introduce the slash (student) t distribution and the skew-slash (student) t distribution. They can be used to simulate and fit skewed and heavy tailed data. The slash t distribution generalizes the slash normal distribution of Kafadar (1988) and the multivariate slash normal distribution of Wang and Genton (2006), and the skew-slash t distribution generalizes the skew-slash normal distribution of the latter two authors. In the skew-slash t there is one parameter to regulate the skewness of the distribution and another parameter to control the tail behavior. By setting the skewness parameter to zero, the skew-slash t reduces to the slash t. By letting the tail parameter tend to infinity, the skew-slash t simplifies to the skew t of Azzalini and Capitanio (2003). As both the slash and skew t take the t and hence the normal as special cases, so does the skew-slash t. To fit data, one can start with the skew-slash t. If the fitted value of the degrees of freedom is very large, then one takes the simpler skew-slash normal model. This idea can of course be used to perform a hypothesis test of a skew-slash normal sub-model against a skew-slash t model. We have derived the formulas for the densities, moments, marginal distributions and linear combinations of these distributions. Thus it can be expected that they can be used to analyze skewed and heavy tailed data.
Compared to the slash and skew-slash normal distributions, an additional parameter, the degrees of freedom, is included in the slash and skew-slash t distributions. This parameter gives the latter distributions more flexibility in fitting data than the former. Even though there is a tail parameter in both the slash and skew-slash normal and the slash and skew-slash t distributions, the degrees of freedom in the t distribution may lend an additional hand in modeling heavy (fat) tails and, jointly with the tail parameter, better fit the data. This could explain why the skew-slash t fitting to the real GAD data in our application outperformed the skew-slash normal fitting. See Figure 1, where the skew-slash t fitting was able to better capture the peak of the histogram, and Table 1, where the AIC values indicated that the skew-slash t fitting was better than the skew-slash normal fitting. In fact, one observes that the standard error (SE) of the MLE of the tail parameter q in the table is very large (83.194) for the skew-slash normal fitting and much smaller (2.19) for the skew-slash t fitting. Noting also that the estimate of the degrees of freedom r is reasonable with a small SE, we conclude that q and r jointly provide higher fitting capability than q alone. Our simulation results in Section 5.1, and in particular in Table 2, also exhibit the superior performance of the proposed skew-slash t distribution over the skew-slash normal.
Histogram and fitted density curves.
Table 1 The skew-slash normal and t fitting to the GAD data
Table 2 Skew-slash t and skew-slash normal comparison
Azzalini and Dalla Valle (1996) introduced the multivariate skew normal distribution that extends the normal distribution with an additional skewness parameter. It provides an alternative modeling distribution to skewed data that are often observed in many areas such as economics, computer science and life sciences. Many authors have investigated skew t distributions, see e.g. Azzalini and Dalla Valle (1996), Gupta (2003), and Sahu et al. (2003). Azzalini and Capitanio (2003) proposed the multivariate skew t distribution by allowing a skewness parameter in a multivariate t distribution.
It is our belief that the proposed slash and skew-slash t distributions throw some additional light on this theory and contribute to the family of candidate distributions for modeling and simulating skewed and heavy tailed data.
The article is organized as follows. In Section 2, we introduce the multivariate slash t distribution, study its relationships with other distributions and derive the density function. We investigate its tractable properties such as heavy tail behavior and closedness of marginal distributions and linear combinations. We give the stochastic representations, moments and characteristic function. We close this section with an example which graphically displays the densities. In Section 3, we define the skew-slash t distribution, derive the densities and characteristic functions, and give the moments and distributions of linear combinations of these distributions. We first define the standard skew-slash distribution in Subsection 3.1 and study its relationships with other distributions. In Subsection 3.2, we characterize the skew t, skew-slash normal and skew-slash t distributions using the hidden truncation or selective sampling model and the order statistics of the components of a bivariate normal or t variable. In Subsection 3.3, we define the general multivariate skew-slash distribution. An example is presented to illustrate the densities of the proposed distributions. Section 4 covers parameter estimation and statistical inference. Here we briefly discuss the maximum likelihood and Bayesian approaches. Section 5 is devoted to simulations as well as applications of the proposed skew-slash t distribution to fit two real datasets. Finally, some concluding remarks are given in Section 6.
The multivariate slash t distribution
In this section, we define the multivarate slash t distribution, derive the density and study its tail behaviors and relationships with other distributions. We give the stochastic representations, moments, and characteristic function and discuss marginal distributions and linear combinations. We close this section with an example.
Let us first recall the multivariate t distribution. There are several variants of the definitions in the literature and we will adopt the following one. Details can be found in e.g. Kotz and Nadarajah (2004) or Johnson and Kotz (1972, p. 134). A continuous k-variate random vector T has a t distribution with degrees of freedom r, mean vector m, and correlation matrix R (covariance matrix Σ), written T∼t k (r,m,R), if it has the probability density function (pdf) given by
$$\begin{array}{@{}rcl@{}} t_{k}(\mathbf{t}; r, {\mathbf{m}}, {\mathbf{R}})\,=\, \frac{\Gamma\left(\frac{r+k}{2}\right)}{(r\pi)^{k/2}\Gamma\left(\frac{r}{2}\right)|{\mathbf{R}}|^{1/2}} \left(1+\frac{(\mathbf{t}-{\mathbf{m}})^{\top} {\mathbf{R}}^{-1} (\mathbf{t}-{\mathbf{m}})}{r}\right)^{-({r+k})/{2}},\quad \mathbf{t}\in{\mathbb{R}}^{k}, \end{array} $$
where |M| denotes the determinant of a square matrix M. If m=0 and R=I k , where I k denotes the k×k identity matrix, it is referred to as the standard k-variate t distribution and denoted by t k (r). We now introduce the k-variate slash t distribution. Write U∼U(0,1) for a rv uniformly distributed over (0,1).
A k-variate continuous random vector X is said to have a slash t distribution with tail parameter q>0, degrees of freedom r, \({\mathbf {m}}\in {\mathbb {R}}^{k}\), and matrix R, written X∼S L T k (q,r,m,R), if it can be expressed as the ratio of two independent rv's T∼t k (r,m,R) and U∼U(0,1) as follows:
$$\begin{array}{@{}rcl@{}} {\mathbf{X}}={\mathbf{T}}/{U^{1/q}}. \end{array} $$
When m=0 and R=I k , it is referred to as the standard (k-variate) slash t and denoted by S L T k (q,r). It can be easily seen that the k-variate slash t distribution generalizes the k-variate t distribution as stated below.
Remark 1.
The limiting distribution of the slash t distribution S L T k (q,r), as q→∞, is the student t distribution t k (r).
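The defining ratio translates directly into a sampler. The minimal sketch below (the function name `sample_slash_t` is ours, not from the paper, and NumPy is assumed) draws from S L T k (q,r,m,R) by generating T∼t k (r,m,R) through the standard normal/chi-square mixture and dividing by an independent U 1/q.

```python
import numpy as np

def sample_slash_t(n, q, r, m, R, seed=None):
    """Draw n samples from SLT_k(q, r, m, R) via X = T / U**(1/q)."""
    rng = np.random.default_rng(seed)
    m = np.asarray(m, dtype=float)
    k = m.shape[0]
    L = np.linalg.cholesky(np.asarray(R, dtype=float))
    # T ~ t_k(r, m, R) through the mixture T = m + Z / S, where
    # Z ~ N_k(0, R) and r * S^2 ~ chi^2_r, with Z and S independent
    Z = rng.standard_normal((n, k)) @ L.T
    S = np.sqrt(rng.chisquare(r, size=n) / r)
    T = m + Z / S[:, None]
    # dividing by U**(1/q) in (0, 1) inflates the tails; as q -> infinity
    # the factor tends to 1 and X collapses back to t_k(r, m, R)
    U = rng.uniform(size=n)
    return T / U[:, None] ** (1.0 / q)

X = sample_slash_t(100_000, q=5.0, r=10.0, m=[0.0, 0.0], R=np.eye(2), seed=0)
```

The comment about q→∞ mirrors Remark 1: large q makes U 1/q concentrate near 1, recovering the plain t sample.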
Let us now derive the density of the slash t distribution. Note that the joint density of T and U is
$$\begin{array}{@{}rcl@{}} t_{k}(\mathbf{t}; r, m, R){\mathbf{1}}_{[0,1]}(u), \quad \mathbf{t} \in{\mathbb{R}}^{k},\, u\in [\!0,1]. \end{array} $$
For the substitution v=u 1/q,x=t/u 1/q=t/v, the Jacobian determinant is q v k+q−1. Hence the joint density of (X,V) is given by
$$\begin{array}{@{}rcl@{}} qv^{k+q-1}t_{k}(v{\mathbf{x}}; r, {\mathbf{m}}, {\mathbf{R}}){\mathbf{1}}_{[0,1]}(v), \quad {\mathbf{x}}\in{\mathbb{R}}^{k}, v\in [0, 1]. \end{array} $$
Integrating v out yields the density f k (x;q,r,m,R) of X as follows:
$$ f_{k}({\mathbf{x}}; q, r, {\mathbf{m}}, {\mathbf{R}})={\int_{0}^{1}} qv^{k+q-1}t_{k}({\mathbf{x}} v; r, {\mathbf{m}}, {\mathbf{R}})\,dv, \quad {\mathbf{x}}\in{\mathbb{R}}^{k}. $$
((2.1))
From this density it immediately follows that the standard k-variate slash t distribution S L T k (q,r) is symmetric about 0, just as the standard k-variate t is.
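For k = 1 the integral (2.1) is one-dimensional and easy to evaluate numerically. The sketch below is our own illustration (assuming NumPy; the function names are ours): it tabulates the standard univariate slash t density with the trapezoidal rule and checks that it integrates to one and is symmetric about 0.

```python
import numpy as np
from math import gamma, pi, sqrt

def t1_pdf(t, r):
    # standard univariate Student t density t_1(r, 0, 1)
    c = gamma((r + 1) / 2) / (sqrt(r * pi) * gamma(r / 2))
    return c * (1.0 + t * t / r) ** (-(r + 1) / 2)

def slash_t1_pdf(x, q, r, nv=1001):
    # f(x) = int_0^1 q v^q t_1(x v; r) dv   (k = 1, so k + q - 1 = q)
    v, dv = np.linspace(0.0, 1.0, nv, retstep=True)
    g = q * v ** q * t1_pdf(np.multiply.outer(x, v), r)
    # trapezoidal rule in v
    return (g.sum(axis=-1) - 0.5 * (g[..., 0] + g[..., -1])) * dv

q, r = 4.0, 5.0
xs, dx = np.linspace(-100.0, 100.0, 8001, retstep=True)
f = slash_t1_pdf(xs, q, r)
# total probability mass over the truncated grid (tails beyond +-100
# contribute a negligible amount for these q and r)
mass = (f.sum() - 0.5 * (f[0] + f[-1])) * dx
```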
The heavy-tail behavior The cumulative distribution function (cdf) of the k-variate slash t is given by
$$\begin{array}{@{}rcl@{}} F_{k}({\mathbf{x}}; q, r, {\mathbf{m}}, {\mathbf{R}})=\int_{-\infty}^{{\mathbf{x}}} f_{k}({\mathbf{y}}; q, r, {\mathbf{m}}, {\mathbf{R}})\,d{\mathbf{y}}= {\int_{0}^{1}} qv^{k+q-2} H_{k}({\mathbf{x}} v; r, {\mathbf{m}}, {\mathbf{R}})\,dv, \end{array} $$
for \({\mathbf {x}} \in {\mathbb {R}}^{k}\), where H k is the cdf of the k-variate t k (r,m,R). Denote by \(\bar H_{k}=1-H_{k}\) the survival function. The survival function of the k-variate slash t is then given by
$$ \bar F_{k}({\mathbf{x}}; q, r, {\mathbf{m}}, {\mathbf{R}})=\frac{k-1}{k+q-1}+{\int_{0}^{1}} qv^{k+q-2} \bar H_{k}({\mathbf{x}} v; r, {\mathbf{m}}, {\mathbf{R}})\,dv, \; {\mathbf{x}}\in{\mathbb{R}}^{k}. $$
((2.2))
Let us focus on the standard univariate case and write f=f 1, F=F 1, \(\bar F=1-F\) and H=H 1, \(\bar H=1-H\). It is well known that the univariate t distribution t 1(r,0,1) is a heavy tailed distribution with tail index r. In other words, the survival function decays at the rate of the power function r:
$$\begin{array}{@{}rcl@{}} \bar H(t; r, 0, 1) \propto t^{-r}, \quad t\to\infty, \end{array} $$
where a(t)∝b(t) if \(\lim \sup _{\textit {t}\to \infty } a(t)/b(t)<\infty \). Let c be the constant in the density of the k-variate t. Then by L'Hopital' Rule we have
$$\begin{array}{@{}rcl@{}} \begin{aligned} {\lim}_{x\to\infty}\frac{\bar H(x)}{x^{-r}} &=c{\lim}_{x\to\infty}\frac{\int_{x}^{\infty} (1+t^{2}/r)^{-(r+1)/2}\,dt}{x^{-r}}\\ &=c{\lim}_{x\to\infty}\frac{- (1+x^{2}/r)^{-(r+1)/2}}{-rx^{-r-1}}=cr^{(r-1)/2}<\infty. \end{aligned} \end{array} $$
This shows the above rate holds. Similarly, one can show that if q>r then
$$\begin{array}{@{}rcl@{}} \bar F(x ;q, r, 0, 1) \propto x^{-r}, \quad x\to\infty. \end{array} $$
This shows that the standard univariate slash t is likewise heavy tailed.
Further, by (2.2) and in view of \(\bar H(xv; r, 0, 1)\geq \bar H(x; r, 0, 1)\) for v∈[ 0,1] one derives
$$\begin{array}{@{}rcl@{}} \begin{aligned} \bar F(x; q, r, 0, 1) & \geq (k-1)/(k+q-1)+\bar H(x; r, 0, 1) {\int_{0}^{1}} qv^{k+q-2}\,dv\\ & \geq (k-1)/(k+q-1) + q/(k+q-1)\bar H(x; r, 0, 1) \\ & \geq \bar H(x; r, 0, 1), \quad x \geq 0. \end{aligned} \end{array} $$
This shows that the standard univariate slash t has heavier tails than the standard univariate t. In fact, the last inequality also holds for k-variate slash t for x=(x 1,…,x k )⊤ with x i ≥0,i=1,…,k as
$$\begin{array}{@{}rcl@{}} \bar F_{k}({\mathbf{x}}; q, r, {\mathbf{m}}, {\mathbf{R}}) &=& P(X_{1}> x_{1}, \ldots, X_{k} > x_{k})\\ &=& {\int_{0}^{1}} P(T_{1}> u^{1/q}x_{1}, \ldots, T_{k} > u^{1/q} x_{k})\,du \\ & \geq& {\int_{0}^{1}} P(T_{1}> x_{1}, \ldots, T_{k} > x_{k})\,du = \bar H_{k}({\mathbf{x}}; r, {\mathbf{m}}, {\mathbf{R}}). \end{array} $$
Stochastic representations Stochastic representations not only reveal the relations with other distributions but are also very useful, for instance, in calculating moments and in random variate generation. We provide two stochastic representations for the slash t distribution based on the two stochastic representations of the multivariate t distribution. According to Kafadar (1988), a continuous random vector ξ has a slash normal distribution, written ξ∼S L N k (q,0,Σ), if it can be expressed as ξ=Z/U 1/q where Z∼N k (0,Σ) and U∼U(0,1) are independent.
Note that if a rv T has a k-variate t distribution t k (r,m,R), then it has the stochastic representation
$$\begin{array}{@{}rcl@{}} \mathbf{T}=S^{-1}{\mathbf{Z}}+{\mathbf{m}}, \end{array} $$
where Z∼N k (0,R), r S 2 has the Chi-square distribution \({\chi _{r}^{2}}\) with r degrees of freedom, and Z and S are independent. This can be easily verified. Let X∼S L T k (q,r,m,R). Then from the definition of the k-variate slash t it immediately follows that
$$ {\mathbf{X}}=S^{-1}{\boldsymbol{\xi}} + {\boldsymbol{\eta}}, $$
where ξ∼S L N k (q,0,R), η=m U −1/q, and both ξ and η are independent of S.
Using another stochastic representation of the k-variate t distribution from page 7 of Kotz and Nadarajah (2004), we obtain the second stochastic representation for the k-variate slash t rv X∼S L T k (q,r,m,R) as follows:
$$ {\mathbf{X}}={\mathbf{V}}^{-1/2}{\boldsymbol{\xi}}+ {\boldsymbol{\eta}}, $$
where ξ∼S L N k (q,0,r I k ), η=m U −1/q, and ξ,η are independent of V. Here V −1/2 is the inverse of the symmetric square root V 1/2 of V, where V has a k-variate Wishart distribution with degrees of freedom r+k−1 and covariance matrix R −1.
The moments Let us now calculate the mean vector and covariance matrix of the k-variate slash t. For X∼S L T k (q,r,m,R) with R=(R ij ), by the independence of T and U we have
$$\begin{array}{@{}rcl@{}} {\boldsymbol{\mu}}={\mathbb{E}}({\mathbf{X}})={\mathbb{E}}(\mathbf{T}/U^{1/q})={\mathbb{E}}(\mathbf{T}){\mathbb{E}}(U^{-1/q}). \end{array} $$
It is easy to calculate
$$ {\mathbb{E}}(U^{-1/q})=q/(q-1), \quad q>1, $$
((2.5))
$$ {\mathbb{E}}(U^{-2/q})=q/(q-2), \quad q>2. $$
((2.6))
For T∼t k (r,m,R), from page 11 of Kotz and Nadarajah (2004) it follows
$$ {\mathbb{E}}(\mathbf{T})={\mathbf{m}}, \quad {\text{Var}}(\mathbf{T})={\mathbf{R}} r/(r-2), \quad r>2. $$
((2.7))
Hence by (2.5) and the first equality of (2.7) one has
$$ {\boldsymbol{\mu}}={{\mathbf{m}} q}/({q-1}). $$
To calculate Var(X) we use the formula
$$ {\text{Var}}({\mathbf{X}})={\text{Var}}({\mathbb{E}}({\mathbf{X}}|U))+{\mathbb{E}}\big({\text{Var}}({\mathbf{X}}|U)\big). $$
((2.9))
It is easy to see for r>2,
$$\begin{array}{@{}rcl@{}} {\text{Var}}({\mathbf{X}}|U)={\text{Var}}(\mathbf{T})/U^{2/q}={\mathbf{R}} r/((r-2)U^{2/q}), \end{array} $$
hence by (2.6) one has
$$ {\mathbb{E}}\big({\text{Var}}({\mathbf{X}}|U)\big)={\mathbf{R}} rq/((r-2)(q-2)), \quad q>2, r>2. $$
((2.10))
Also since \({\mathbb {E}}({\mathbf {X}}|U)={\mathbb {E}}(\mathbf {T})/U^{1/q}={\mathbf {m}}/U^{1/q}\) it follows from (2.6) that
$$\begin{array}{@{}rcl@{}} {\mathbb{E}}(({\mathbb{E}}({\mathbf{X}}|U))^{{\otimes 2}})={\mathbf{m}}^{{\otimes 2}} q/(q-2), \quad ({\mathbb{E}}({\mathbf{X}}))^{{\otimes 2}}={\mathbf{m}}^{{\otimes 2}} q^{2}/(q-1)^{2}, \quad q>2, \end{array} $$
where M ⊗2=M M ⊤. Hence for q>2,
$$\begin{array}{@{}rcl@{}} \begin{aligned} {\text{Var}}({\mathbb{E}}({\mathbf{X}}|U)) &={\mathbf{m}}^{{\otimes 2}} q/(q-2)-{\mathbf{m}}^{{\otimes 2}} q^{2}/(q-1)^{2}\\ &={\mathbf{m}}^{{\otimes 2}} q/((q-1)^{2}(q-2)), \quad q>2. \end{aligned} \end{array} $$
This, (2.10) and (2.9) yield the variance-covariance matrix of X as follows:
$$ {\text{Var}}({\mathbf{X}})=\frac{rq{\mathbf{R}}}{(r-2)(q-2)}+\frac{q{\mathbf{m}}^{{\otimes 2}} }{(q-1)^{2}(q-2)}, \quad q>2, r>2. $$
In particular, if m=0 one has
$$ {\text{Var}}({\mathbf{X}})=\frac{rq{\mathbf{R}}}{(r-2)(q-2)}, \quad q>2, r>2. $$
Hence the k-variate slash t has the same correlation matrix R as the k-variate t.
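The closed forms μ = m q/(q−1) and Var(X) = rqR/((r−2)(q−2)) + q m ⊗2/((q−1)²(q−2)) derived above can be checked by simulation. The sketch below is our own Monte Carlo verification (assuming NumPy); it needs q > 2 and r > 2 so that both moments exist.

```python
import numpy as np

rng = np.random.default_rng(7)
n, q, r = 500_000, 6.0, 8.0
m = np.array([1.0, -1.0])
R = np.array([[1.0, 0.5], [0.5, 1.0]])

# X = T / U**(1/q) with T ~ t_2(r, m, R) built from the
# normal/chi-square mixture T = m + Z / S
L = np.linalg.cholesky(R)
Z = rng.standard_normal((n, 2)) @ L.T
S = np.sqrt(rng.chisquare(r, size=n) / r)
X = (m + Z / S[:, None]) / rng.uniform(size=n)[:, None] ** (1.0 / q)

mu = m * q / (q - 1)  # closed-form mean, valid for q > 1
cov = (r * q / ((r - 2) * (q - 2))) * R \
    + (q / ((q - 1) ** 2 * (q - 2))) * np.outer(m, m)  # q > 2, r > 2
```

For these parameters the closed-form covariance is 2R + 0.06 m m ⊤, and the sample mean and sample covariance should agree with μ and cov up to Monte Carlo error.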
In the case of the standard t, there are convenient formulae for the moments. Let p 1,…,p k be non-negative integers such that p=p 1+…+p k <r/2. If any of the p 1,…,p k is odd, then
$$\begin{array}{@{}rcl@{}} {\mathbb{E}}\big(T_{1}^{p_{1}}\cdots T_{k}^{p_{k}}\big)=0. \end{array} $$
If all of them are even and r>p, then
$$ {\mathbb{E}}\big(T_{1}^{p_{1}}\cdots T_{k}^{p_{k}}\big)=\frac{r^{p/2}\Pi_{j=1}^{k} \left[1\cdot 3\cdot 5\cdots (p_{j}-1)\right]}{(r-2)(r-4)\cdots (r-p)}:=c. $$
((2.13))
For details, see Kotz and Nadarajah (2004). Based on these formulae we have the following.
Theorem 1.
Let X=(X 1,…,X k )⊤∼S L T k (q,r). Assume p 1,…,p k are nonnegative integers such that p=p 1+…+p k <r/2. 1) Suppose at least one of the p 1,…,p k is odd. If q>p, then
$$\begin{array}{@{}rcl@{}} {\mathbb{E}}\big(X_{1}^{p_{1}}\cdots X_{k}^{p_{k}}\big)=0. \end{array} $$
2) Suppose all p 1,…,p k are even. If q>p then
$$\begin{array}{@{}rcl@{}} {\mathbb{E}}\big(X_{1}^{p_{1}}\cdots X_{k}^{p_{k}}\big)={cq}/({q-p}), \end{array} $$
where c is given in (2.13). Otherwise if q≤p then \({\mathbb {E}}(X_{1}^{p_{1}}\cdots X_{k}^{p_{k}})\) diverges.
By the density formula (2.1) and using the substitution t=v x we have
$$\begin{array}{@{}rcl@{}} \begin{aligned} {\mathbb{E}}\big(X_{1}^{p_{1}}\cdots X_{k}^{p_{k}}\big) &= \int x_{1}^{p_{1}}\cdots x_{k}^{p_{k}}\left\{{\int_{0}^{1}} qv^{k+q-1}t_{k}({vx}_{1}, \ldots, {vx}_{k};r)\,dv\right\}\,{dx}_{1}\cdots {dx}_{k}\\ &={\int_{0}^{1}} qv^{q-1-p}\,dv\left\{{\mathbb{E}}\big(T_{1}^{p_{1}}\cdots T_{k}^{p_{k}}\big)\right\}. \end{aligned} \end{array} $$
Note that the integral \({\int _{0}^{1}} qv^{q-1-p}\,dv\) converges to q/(q−p) if q−p>0 and diverges otherwise. These and (2.13) yield the desired results.
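As an independent spot check of part 2, take k = 2 and p 1 = p 2 = 2 in the standard case. Using the shared chi-square factor of the standard bivariate t one computes directly E(X 1²X 2²) = r²q/((r−2)(r−4)(q−4)), and a Monte Carlo estimate reproduces it. The sketch below is ours (assuming NumPy); q and r are chosen large enough that the eighth moments needed for a stable estimate exist.

```python
import numpy as np

rng = np.random.default_rng(3)
n, q, r = 1_000_000, 20.0, 20.0

# standard bivariate slash t: T = Z / S with a shared chi-square factor S
Z = rng.standard_normal((n, 2))
S = np.sqrt(rng.chisquare(r, size=n) / r)
X = (Z / S[:, None]) / rng.uniform(size=n)[:, None] ** (1.0 / q)

est = (X[:, 0] ** 2 * X[:, 1] ** 2).mean()
# mixed moment from Theorem 1, part 2: c * q / (q - p) with p = 4 and
# c = r^2 / ((r - 2)(r - 4)) for p_1 = p_2 = 2
exact = r ** 2 * q / ((r - 2) * (r - 4) * (q - 4))
```

The odd-moment statement of part 1 can be checked the same way: the sample average of X 1 X 2², for instance, should be close to zero.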
The marginal distributions Since the marginal distributions of a k-variate t are still t, the marginal distributions of a k-variate slash t are slash t.
The marginal distributions of a k-variate slash t distribution are still slash t.
It suffices to show without loss of generality that for every 0≤s≤k,
$$\begin{array}{@{}rcl@{}} {\int\!\!\cdots\!\!\int} f_{k}(x_{1}, \ldots, x_{s}, x_{s+1}, \ldots, x_{k})\,{dx}_{s+1}\cdots {dx}_{k} =f_{s}(x_{1}, \ldots, x_{s}), \quad x_{1},\ldots, x_{s}\in{\mathbb{R}}, \end{array} $$
where f k (x)=f k (x;q,r,m,R). Substituting the density (2.1) into the left-hand side of the above equality gives
$$\begin{array}{@{}rcl@{}} {\int\!\!\cdots\!\!\int} f_{k}({\mathbf{x}})\,{dx}_{s+1}\cdots {dx}_{k} ={\int_{0}^{1}} qv^{k+q-1}{\int\!\!\cdots\!\!\int} t_{k}(v{\mathbf{x}}){dx}_{s+1}\cdots {dx}_{k}dv, \end{array} $$
where t k (t)=t k (t;r,m,R). By the substitutions y s+1=v x s+1,…,y k =v x k one derives
$$\begin{array}{@{}rcl@{}} {\int\!\!\cdots\!\!\int} t_{k}(v{\mathbf{x}}){dx}_{s+1}\cdots {dx}_{k} =v^{s-k}{\int\!\!\cdots\!\!\int} t_{k}({vx}_{1}, \ldots, {vx}_{s}, y_{s+1}, \ldots, y_{k})\,{dy}_{s+1}\cdots {dy}_{k}. \end{array} $$
Because the marginals of the k-variate t distribution are still t, we have
$$\begin{array}{@{}rcl@{}} {\int\!\!\cdots\!\!\int} t_{k}({vx}_{1}, \ldots, {vx}_{s}, y_{s+1}, \ldots, y_{k})\,{dy}_{s+1}\cdots {dy}_{k} =t_{s}({vx}_{1}, \ldots, {vx}_{s}; r, {\mathbf{m}}_{1}, {\mathbf{R}}_{11}), \end{array} $$
where \({\mathbf {m}}=({\mathbf {m}}_{1}^{\top }, {\mathbf {m}}_{2}^{\top })^{\top }\) with \({\mathbf {m}}_{1} \in {\mathbb {R}}^{s}\), and R is partitioned into a 2×2 block matrix with R 11 the s×s block in the (1,1) position. See pages 15-16 of Kotz and Nadarajah (2004). Combining the last two equalities yields the desired equality.
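The closure under marginalization can also be seen numerically: sample a correlated bivariate slash t and compare the empirical distribution of its first coordinate against the univariate slash t density evaluated by quadrature. A sketch (the parameter values q=4, r=6, ρ=0.7 are illustrative):

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

rng = np.random.default_rng(1)
q, r, rho, n = 4.0, 6.0, 0.7, 200_000

# bivariate t: correlated normals divided by a common sqrt(chi2_r / r)
L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
Z = rng.standard_normal((n, 2)) @ L.T
W = rng.chisquare(r, size=n) / r
T = Z / np.sqrt(W)[:, None]
X = T / rng.uniform(size=n)[:, None] ** (1.0 / q)  # bivariate slash t

def f1(x):
    # univariate slash t density: int_0^1 q v^q t1(vx; r) dv
    val, _ = quad(lambda v: q * v ** q * stats.t.pdf(v * x, r), 0.0, 1.0)
    return val

cdf_at_1, _ = quad(f1, -np.inf, 1.0)   # P(X1 <= 1) under SLT_1(q, r)
mc = np.mean(X[:, 0] <= 1.0)           # empirical marginal probability
print(mc, cdf_at_1)
```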
Linear combinations
Since the distribution of a linear function of a k-variate t variable is still t, we immediately obtain the following.
Let A be a nonsingular nonrandom matrix. If X∼S L T k (q,r,m,R), then A X∼S L T k (q,r,A m,A R A ⊤).
The characteristic function
Several authors have derived formulae for the characteristic function φ T of the k-variate t rv T; see e.g. Joarder and Ali (1996) and Dreier and Kotz (2002). Based on these formulae we can obtain the characteristic function of the k-variate slash t as follows. For X∼S L T k (q,r,m,R), the representation X=T/U 1/q as the ratio of two independent rv's gives the characteristic function φ X as
$$ \begin{aligned} \!\varphi_{{\mathbf{X}}}({\mathbf{t}}) &={\mathbb{E}}(\exp({i{\mathbf{t}}^{\top} {\mathbf{X}}})) ={\int_{0}^{1}} {\mathbb{E}}(\exp(i{\mathbf{t}}^{\top} {\mathbf{T}} u^{-1/q}))\,du ={\int_{0}^{1}} \varphi_{{\mathbf{T}}}({\mathbf{t}} u^{-1/q})\,du \end{aligned} $$
for t in some neighborhood of the origin in which the above integral converges.
For the standard univariate and bivariate slash t distributions S L T k (q,r),k=1,2, their densities are given by
$$\begin{array}{@{}rcl@{}} f_{1}(x; q,r)={\int_{0}^{1}} qv^{q} t_{1}(vx; r)\,dv, \quad x\in \mathbb{R}, \end{array} $$
$$\begin{array}{@{}rcl@{}} f_{2}({\mathbf{x}}; q,r)={\int_{0}^{1}} qv^{q+1} t_{2}(v{\mathbf{x}}; r)\,dv, \quad {\mathbf{x}}\in \mathbb R^{2}. \end{array} $$
Displayed in Figure 2 are the density curves and contours. On the left panel are the density curves of the normal, t and the standard slash t with q=1 and r=3 degrees of freedom. The curves are calibrated so that they have the same height at the origin. Observe that the slash t has the fattest tail whereas the normal has the slimmest one. On the right panel are the contours of the bivariate slash t with q=3 and r=5. Clearly the contours are symmetric.
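The tail ordering seen in the left panel can be confirmed directly by evaluating the (uncalibrated) densities at a point far in the tail; a sketch using numerical integration for the slash t density, with the Figure-2 parameters q=1, r=3:

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

# At x = 6 the slash t (q=1, r=3) density dominates the t(3) density,
# which in turn dominates the standard normal density.
q, r, x = 1.0, 3.0, 6.0

slash_t, _ = quad(lambda v: q * v ** q * stats.t.pdf(v * x, r), 0.0, 1.0)
plain_t = stats.t.pdf(x, r)
normal = stats.norm.pdf(x)
print(slash_t, plain_t, normal)  # in decreasing order
```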
Density curves and contours of slash t . Left panel: Density curves of normal (N), student t (T), and the slash t (S L T 1(1,3)). Right panel: Contours of the bivariate slash t distribution S L T 2(3,5).
The multivariate skew-slash t distributions
In this section, we first recall the skew normal and skew t. In subsection 3.1, we define the standard skew-slash t distribution, study its relationships with other distributions and give the moments and characteristic function. In subsection 3.2, we use the hidden truncation or selective sampling model and order statistics to characterize the skew, slash and skew-slash normal and t distributions. In subsection 3.3, we define the general skew-slash t distribution, study its linear transformations and give an example at the end.
Azzalini and Dalla Valle (1996) introduced the skew normal distribution. A k-variate standard skew normal distribution has the density given by
$$\begin{array}{@{}rcl@{}} 2\phi_{k}({\mathbf{z}})\Phi({\boldsymbol{\lambda}}^{\top} {\mathbf{z}}), \quad {\boldsymbol{\lambda}}, {\mathbf{z}}\in{\mathbb{R}}^{k}, \end{array} $$
where ϕ k is the pdf of the k-variate standard normal N k (0,I) and Φ is the cdf of the standard normal N(0,1). Denote it by S K N k (λ).
Kotz and Nadarajah (2004) wrote "…that the possibilities of constructing skewed multivariate t distributions are practically limitless". The two authors surveyed the definitions given by Gupta (2003), Sahu et al. (2003), Jones (2002) and Azzalini and Capitanio (2003).
Based on these definitions, we may define the skew-slash t distributions in different ways. In this article, however, we will take the following approach. First, we will define the standard skew-slash t based on the standard skew t, a common special case of Gupta (2003), Azzalini and Capitanio (2003), and others. We then introduce a general skew-slash t distribution by introducing location and scale parameters. Our definition may lose some nice interpretations. But we think that this definition is natural, concise and, in particular, convenient in applications.
According to Gupta (2003), a k-variate skew t distribution with parameters \({\boldsymbol {\mu }}\in {\mathbb {R}}^{k}\), Σ (correlation matrix R), \({\boldsymbol {\lambda }}\in {\mathbb {R}}^{k}\) and r>0 has the density
$$ h_{k}({\mathbf{t}}; {\boldsymbol{\lambda}}, r, \Sigma)=2t_{k}({\mathbf{t}}; r, 0, {\mathbf{R}}) \Psi\left(\frac{{\boldsymbol{\lambda}}^{\top} {\mathbf{t}}}{\sqrt{1+ {\mathbf{t}}^{\top}\Sigma^{-1} {\mathbf{t}}/r}}; r+k\right), \quad {\mathbf{t}}\in{\mathbb{R}}^{k}, $$
where Ψ is the cdf of the univariate standard t distribution with r+k degrees of freedom. Denote this distribution by S K T k (λ,r,Σ). The form of the density given here differs slightly from that of Gupta (2003): several constants that appear in his density formula are not expressed explicitly in our form but are absorbed into the parameters. Accordingly, parameters with the same names may take different values.
When Σ is the identity matrix I k , S K T k (λ,r,I k ) is referred to as the standard skew t and denoted by S K T k (λ,r) with density h k (t;λ,r) given by
$$\begin{array}{@{}rcl@{}} h_{k}({\mathbf{t}}; {\boldsymbol{\lambda}}, r)=2t_{k}({\mathbf{t}};r)\Psi\left(\frac{{\boldsymbol{\lambda}}^{\top} {\mathbf{t}}}{\sqrt{1+{\mathbf{t}}^{\top} {\mathbf{t}}/r}}; r+k\right), \quad {\mathbf{t}}\in{\mathbb{R}}^{k}, \end{array} $$
where t k (t;r) is the density of the standard k-variate distribution t k (r) with degrees of freedom r. This is a common special case of the skew t shared by Gupta (2003), Azzalini and Capitanio (2003), and others.
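As a quick sanity check, the k=1 case of the skew t density above integrates to one and has a positive mean for λ>0; a quadrature sketch (λ=3, r=5 are illustrative):

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

# standard univariate skew t density:
# h(t) = 2 t_pdf(t; r) t_cdf(lam * t / sqrt(1 + t^2/r); r + 1)
lam, r = 3.0, 5.0

def h(t):
    return 2.0 * stats.t.pdf(t, r) * stats.t.cdf(
        lam * t / np.sqrt(1.0 + t * t / r), r + 1)

total, _ = quad(h, -np.inf, np.inf)          # should be ~ 1
mean, _ = quad(lambda t: t * h(t), -np.inf, np.inf)  # positive for lam > 0
print(total, mean)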
3.1 The standard multivariate skew-slash t distribution
We begin with the definition, followed by the moments and characteristic function.
A k-variate continuous random vector W 0 is said to have a standard multivariate skew-slash t distribution with skewness parameter λ, tail parameter q and degrees of freedom r, written W 0∼S S L T k (λ,q,r), if it can be written as the ratio of two independent rv's S∼S K T k (λ,r) and U∼U(0,1) as follows:
$$\begin{array}{@{}rcl@{}} {\mathbf{W}}_{0}={{\mathbf{S}}}/{U^{1/q}}. \end{array} $$
The standard skew-slash t generalizes the proposed standard k-variate slash t, the standard skew t of Gupta (2003) and Azzalini and Capitanio (2003), the standard slash normal of Kafadar (1988), and the standard skew-slash normal of Wang and Genton (2006). This is stated below.
The limiting distribution of the standard skew-slash t distribution S S L T k (λ,q,r) is, as q→∞, the standard skew t distribution S K T k (λ,r). The limiting distribution of S S L T k (λ,q,r) is, as r→∞, the standard skew-slash normal S S L N k (λ,q), which includes as special cases the standard skew normal S K N k (λ) (q=∞) and the standard slash normal S L N k (q) (λ=0). As λ=0, S S L T k (λ,q,r) reduces to the slash t distribution S L T k (q,r).
In a similar way to the derivation of the standard slash t density in Section 2, we can obtain the density of the standard skew-slash t distribution S S L T k (λ,q,r) as follows: for \({\mathbf {w}}\in {\mathbb {R}}^{k}\),
$$ g_{k}({\mathbf{w}}; {\boldsymbol{\lambda}}, q, r)=2q{\int_{0}^{1}} \!\!u^{k+q-1}t_{k}(u{\mathbf{w}}; r) \Psi\left(\frac{u{\boldsymbol{\lambda}}^{\top} {\mathbf{w}}}{\sqrt{1+u^{2}{\mathbf{w}}^{\top} {\mathbf{w}}/r}}; k+r\right)du. $$
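Sampling from S S L T k (λ,q,r) follows the definition W 0=S/U 1/q once a skew t sample S is available. The sketch below (k=1) builds S through the usual conditioning representation of the skew normal, assuming the relationship α=λ/√(1+1/r) between the skewness parameters (this representation is a standard construction, not stated in this form in the text):

```python
import numpy as np

rng = np.random.default_rng(2)
lam, q, r, n = 2.0, 5.0, 5.0, 200_000

alpha = lam / np.sqrt(1.0 + 1.0 / r)        # assumed alpha-lambda relation
delta = alpha / np.sqrt(1.0 + alpha ** 2)
Z0 = np.abs(rng.standard_normal(n))
Z1 = rng.standard_normal(n)
SN = delta * Z0 + np.sqrt(1.0 - delta ** 2) * Z1  # skew normal SKN_1(alpha)
S = SN / np.sqrt(rng.chisquare(r, size=n) / r)    # skew t SKT_1(lam, r)
W0 = S / rng.uniform(size=n) ** (1.0 / q)         # skew-slash t SSLT_1

p_pos = np.mean(W0 > 0)
print(p_pos)  # well above 1/2 when lam > 0
```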
The moments
Using the results of Azzalini and Dalla Valle (1996) or Kotz and Nadarajah (2004, pp. 100-101) for the mean vector and covariance of the k-variate skew t distribution (i.e. by setting their α equal to \({\boldsymbol {\lambda }}/\sqrt {1+p/\nu }\) with their p=k and ν=r here), we obtain the mean vector and covariance matrix of W 0∼S S L T k (λ,q,r) as follows:
$$\begin{array}{@{}rcl@{}} {\mathbb{E}}({\mathbf{W}}_{0})={\mathbb{E}}({\mathbf{S}}){\mathbf{E}}(U^{-1/q})=\frac{\sqrt{2}qr}{\sqrt{\pi}(q-1)(r-2)}\frac{{\boldsymbol{\lambda}}}{\sqrt{r+k + r {\boldsymbol{\lambda}}^{\top}{\boldsymbol{\lambda}}}}, \; q>1, r>2, \end{array} $$
$$\begin{array}{@{}rcl@{}} {\text{Var}}({\mathbf{W}}_{0})=\frac{qr}{(q-2)(r-2)(r-4)}\left({\mathbf{I}}_{k} - \frac{2(r+4)}{\pi(r-2)}\frac{r{\boldsymbol{\lambda}}{\boldsymbol{\lambda}}^{\top}}{r+k + r {\boldsymbol{\lambda}}^{\top}{\boldsymbol{\lambda}}}\right), \; q>2, r>4. \end{array} $$
The characteristic function
As in the derivation of the characteristic function (2.14) of the multivariate slash t, one can obtain the characteristic function \(\varphi _{{\mathbf {W}}_{0}}\) of W 0 as follows:
$$\begin{array}{@{}rcl@{}} \varphi_{{\mathbf{W}}_{0}}({\mathbf{t}})={\mathbb{E}}(\exp({i{\mathbf{t}}^{\top} {\mathbf{W}}_{0}})) ={\int_{0}^{1}} \varphi_{{\mathbf{S}}}({\mathbf{t}} u^{-1/q})\,du \end{array} $$
for t in some neighborhood of the origin in which the above integral converges, where φ S is the characteristic function of the standard skew t rv S∼S K T k (λ,r).
3.2 Hidden truncation and order-statistics characterization
In this subsection, we characterize the skew t, skew-slash normal and skew-slash t distributions using the hidden truncation or selective sampling model and the order statistics of the components of a bivariate normal or t variable.
Hidden truncation or selective sampling
We first give a fact about conditional pdf's. Let X be a continuous random vector with pdf f. Let X 0 be a random variable with cdf F 0. Let a be a measurable function of X such that P(A)>0, where A={a(X)≥X 0}. Suppose X and X 0 are independent. Then for every x,
$$\begin{array}{@{}rcl@{}} \begin{aligned} P({\mathbf{X}}\leq {\mathbf{x}}|a({\mathbf{X}})\ge X_{0}) &=P({\mathbf{X}}\leq {\mathbf{x}}, a({\mathbf{X}})\ge X_{0})/P(A)\\ &=E({\mathbf{1}}[{\mathbf{X}}\leq {\mathbf{x}}]F_{0}(a({\mathbf{X}})))/P(A) =\int_{-\infty}^{{\mathbf{x}}} f({\mathbf{y}})F_{0}(a({\mathbf{y}}))\,d{\mathbf{y}}/P(A). \end{aligned} \end{array} $$
Hence the conditional pdf of X given A is
$$ f({\mathbf{x}}|A)=f({\mathbf{x}}) F_{0}(a({\mathbf{x}}))/P(A). $$
Using this we immediately derive the following results.
Proposition 1.
Suppose T∼t k (r) and T 0∼t 1(r+k) are independent, and let A={a(T)≥T 0} for a measurable function a. Then the conditional pdf of T given A is
$$ f({\mathbf{t}}|A)=t_{k}({\mathbf{t}}; r) \Psi(a({\mathbf{t}}); r+k)/P(A), \quad {\mathbf{t}}\in{\mathbb{R}}^{k}. $$
Consequently, if U∼U(0,1) is independent of both T and T 0, then the conditional pdf of T/U 1/q for 0<q<1 given A is
$$ g({\mathbf{t}}|A)=2q{\int_{0}^{1}} u^{k+q-1}t_{k}(u{\mathbf{t}}; r) \Psi(a(u{\mathbf{t}}); r+k)\,du/2P(A), \quad {\mathbf{t}} \in{\mathbb{R}}^{k}. $$
In particular, both (3.18) and (3.19) hold for \(a({\mathbf {t}})=(\tau _{0}+{\boldsymbol {\tau }}^{\top }{\mathbf {t}})/\sqrt {1+{\mathbf {t}}^{\top }{\mathbf {t}}/r}\) where τ 0,τ are arbitrary constants. In this case,
$$ f({\mathbf{t}}|A)=2t_{k}({\mathbf{t}}; r) \Psi\left(\frac{\tau_{0}+{\boldsymbol{\tau}}^{\top}{\mathbf{t}}}{\sqrt{1+{\mathbf{t}}^{\top}{\mathbf{t}}/r}}; r+k\right)/2P(A), \quad {\mathbf{t}}\in{\mathbb{R}}^{k}, $$
$$ g({\mathbf{t}}|A)=2q{\int_{0}^{1}} u^{k+q-1}t_{k}(u{\mathbf{t}}; r) \Psi\left(\frac{\tau_{0}+u{\boldsymbol{\tau}}^{\top}{\mathbf{t}}}{\sqrt{1+u^{2}{\mathbf{t}}^{\top}{\mathbf{t}}/r}}; r+k\right)\,du/2P(A). $$
The density function in (3.21) is the conditional pdf of the k-variate slash t rv X=T/U 1/q given A. It is noteworthy that the hidden truncation model yields the pdf's (3.20) and (3.21); the former is proportional to the pdf (3.15) of the skew t distribution and the latter to the pdf (3.16) of the proposed skew-slash t distribution. For more discussion see e.g. Chapter 6 of Genton (2004) and the references therein. The skew-slash normal of Wang and Genton (2006) is the special case r=∞.
Order statistics characterization
Generalizing (c) of Theorem 1 in Arnold and Lin (2004), we give the following fact. Let (Y 1,Y 2) be a bivariate rv with pdf f and cdf F. Let Y 1,2= min(Y 1,Y 2) and Y 2,2= max(Y 1,Y 2) be the order statistics of the components of the bivariate random vector. Then
$$\begin{aligned} P\left(Y_{1,2}>y_{1}\right)&=P\left(Y_{2}\ge Y_{1}>y_{1}\right)+P(Y_{1} > Y_{2}>y_{1})\\ &=\int_{y_{1}}^{\infty} {dx}_{1} \int_{x_{1}}^{\infty} f\left(x_{1},x_{2}\right)\,{dx}_{2}+\int_{y_{1}}^{\infty} {dx}_{2} \int_{x_{2}}^{\infty} f\left(x_{1},x_{2}\right)\,{dx}_{1}. \end{aligned} $$
Thus the pdf f (1)(y 1) of Y 1,2 is given by
$$ f_{(1)}\left(\,y_{1}\right)=\int_{y_{1}}^{\infty} f\left(\,y_{1}, x_{2}\right)\,{dx}_{2}+\int_{y_{1}}^{\infty} f\left(x_{1}, y_{1}\right)\,{dx}_{1}. $$
Let f 1,f 2 be the respective marginal pdf of Y 1,Y 2. Let
$$\begin{array}{@{}rcl@{}} F_{1}\left(\,y_{1}|y_{2}\right)=P\left(Y_{1}\leq y_{1}|Y_{2}=y_{2}\right), \quad F_{2}\left(\,y_{2}|y_{1}\right)=P\left(Y_{2}\leq y_{2}|Y_{1}=y_{1}\right). \end{array} $$
Then using \(\int _{y_{1}}^{\infty } f\left(\,y_{1}, x_{2}\right)\,{dx}_{2}=f_{1}\left(\,y_{1}\right)\int _{y_{1}}^{\infty } f\left(\,y_{1}, x_{2}\right)/f_{1}\left(\,y_{1}\right)\,{dx}_{2} =f_{1}\left(\,y_{1}\right)\bar F_{2}\left(\,y_{1}|y_{1}\right)\) and (3.22) we derive
$$ f_{(1)}\left(\,y_{1}\right)=f_{1}\left(\,y_{1}\right)\bar F_{2}\left(\,y_{1}|y_{1}\right)+f_{2}\left(\,y_{1}\right)\bar F_{1}\left(\,y_{1}|y_{1}\right). $$
Similarly we derive the pdf f (2)(y 2) of Y 2,2 below:
$$ f_{(2)}\left(\,y_{2}\right)=f_{2}\left(\,y_{2}\right) F_{1}\left(\,y_{2}|y_{2}\right)+f_{1}\left(\,y_{2}\right)F_{2}\left(\,y_{2}|y_{2}\right). $$
In their Theorem 1, Arnold and Lin (2004) showed that the order statistics of the components of a random vector from a bivariate normal distribution obey the skew-normal law. Using (3.23) and (3.24) we can show that the order statistics of the components of a random vector from a bivariate t distribution obey the skew t law. Thus we extend their result from the normal to t distribution as stated below.
Let (T 1,T 2) have a bivariate t distribution t 2(r,0,R) with degrees of freedom r, mean zero vector and correlation matrix R with unit diagonal and off-diagonal entries ρ, −1<ρ<1. Define T 1,2= min(T 1,T 2) and T 2,2= max(T 1,T 2). Then the pdf's t (1),t (2) of T 1,2,T 2,2 are given by
$$ t_{(1)}(t_{1})=2t_{1}(t_{1};r)\Psi\left(\frac{-\lambda t_{1}}{\sqrt{1+{t_{1}^{2}}/r}};r+1\right), \quad t_{1}\in{\mathbb{R}}, $$
$$ t_{(2)}(t_{2})=2t_{1}(t_{2};r)\Psi\left(\frac{\lambda t_{2}}{\sqrt{1+{t_{2}^{2}}/r}};r+1\right), \quad t_{2}\in{\mathbb{R}}, $$
where \(\lambda =\lambda (r,\rho)=\sqrt {({1+1/r})({1-\rho })/({{1+\rho })}}\).
The density functions in (3.25) and (3.26) reduce to the result (c) of Theorem 1 of Arnold and Lin (2004) when the df r=∞ as \(\lambda (\infty, \rho)=\sqrt {(1-\rho)/(1+\rho)}\) is equal to their skewness parameter γ in the skew-normal distribution.
PROOF OF PROPOSITION 2.
Note first that T 1,T 2 have the same standard univariate t distribution t 1(r). Using the formula for the conditional pdf of t distribution (see e.g. page 16 of Kotz and Nadarajah (2004)), the conditional cdf of T 2 given T 1=t 1 can be written as
$$\begin{array}{@{}rcl@{}} P(T_{2}\leq t_{2}|T_{1}=t_{1})=\Psi(b(t_{1}, t_{2}; r, \rho); r+1), \quad t_{1}, t_{2} \in {\mathbb{R}}, \end{array} $$
$$\begin{array}{@{}rcl@{}} b(t_{1}, t_{2}; r, \rho)=\frac{(t_{2}-\rho t_{1})\sqrt{1+1/r}}{\sqrt{(1+{t_{1}^{2}}/r)(1-\rho^{2})}}. \end{array} $$
Similarly,
$$\begin{array}{@{}rcl@{}} P(T_{1}\leq t_{1}|T_{2}=t_{2})=\Psi(b(t_{2}, t_{1}; r, \rho); r+1), \quad t_{1}, t_{2} \in {\mathbb{R}}. \end{array} $$
We now apply (3.24), with both f 1 and f 2 equal to the pdf of t 1(r) and both F 1 and F 2 equal to the cdf Ψ(·;r+1) of the t distribution t(r+1), to obtain the pdf t (2) given in (3.26), noting in this case \(b(t_{2}, t_{2}; r, \rho)={\lambda t_{2}}/{\sqrt {1+{t_{2}^{2}}/r}}\). Analogously we can prove (3.25) in view of the equality \(\bar \Psi (t; r)=\Psi (-t; r)\), which holds by the symmetry of the univariate t distribution. This completes the proof.
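Proposition 2 can be verified numerically: P(T 2,2≤0) equals both the integral of the claimed density of the maximum and, since {max(T 1,T 2)≤0} is the lower orthant event, the elliptical orthant probability 1/4+arcsin(ρ)/(2π). A Monte Carlo and quadrature sketch (r=5, ρ=0.5 illustrative):

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

rng = np.random.default_rng(3)
r, rho, n = 5.0, 0.5, 400_000
lam = np.sqrt((1.0 + 1.0 / r) * (1.0 - rho) / (1.0 + rho))

# Monte Carlo: maxima of a bivariate t(r) sample with correlation rho
L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
T = (rng.standard_normal((n, 2)) @ L.T) \
    / np.sqrt(rng.chisquare(r, size=n) / r)[:, None]
mc = np.mean(T.max(axis=1) <= 0.0)

# quadrature of the claimed density of the maximum, eq. (3.26)
dens = lambda t: 2.0 * stats.t.pdf(t, r) * stats.t.cdf(
    lam * t / np.sqrt(1.0 + t * t / r), r + 1)
by_quad, _ = quad(dens, -np.inf, 0.0)

# elliptical orthant probability P(T1 <= 0, T2 <= 0)
orthant = 0.25 + np.arcsin(rho) / (2.0 * np.pi)
print(mc, by_quad, orthant)  # all close to 1/3 for rho = 0.5
```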
As a corollary of Proposition 2, we obtain a characterization of the skew-slash normal and t distributions through the order statistics of the components of a random vector from a bivariate t distribution as stated below.
Corollary 1.
Let (T 1,T 2) be given in Proposition 2. Assume U∼U(0,1) is independent of (T 1,T 2). Then the pdf's g (1),g (2) of T 1,2/U 1/q,T 2,2/U 1/q for 0<q<1 are given by
$$ g_{(1)}(t_{1})=2q{\int_{0}^{1}} u^{q} t_{1}({ut}_{1};r)\Psi\left(\frac{-u\lambda t_{1}}{\sqrt{1+u^{2}{t_{1}^{2}}/r}};r+1\right)\,du, \quad t_{1}\in{\mathbb{R}}, $$
$$ g_{(2)}(t_{2})=2q{\int_{0}^{1}} u^{q} t_{1}({ut}_{2};r)\Psi\left(\frac{u\lambda t_{2}}{\sqrt{1+u^{2}{t_{2}^{2}}/r}};r+1\right)\,du, \quad t_{2}\in{\mathbb{R}}. $$
The density functions in (3.27) and (3.28) (i) are the pdf's of the order statistics of the components of the random vector (T 1,T 2)/U 1/q from the corresponding bivariate slash t distribution, and (ii) reduce to the skew-slash normal case of Wang and Genton (2006) when the df r=∞.
PROOF OF COROLLARY 1.
The desired (3.27) follows from (3.25) and the equalities
$$\begin{array}{@{}rcl@{}} P(T_{1,2}/U^{1/q}\leq t_{1})={\int_{0}^{1}} P(T_{1,2}\leq u^{1/q} t_{1})\,du =q\int_{-\infty}^{t_{1}} {\int_{0}^{1}} v^{q} t_{(1)}(vs)\,dv\,ds, \end{array} $$
where the independence of T 1,2 and U is used to claim the first equality while the second equality follows from a change of variables. Similarly (3.28) can be proved and this finishes the proof.
3.3 The multivariate skew-slash t distributions
We now introduce a general multivariate skew-slash t distribution by incorporating location and scale parameters.
A continuous k-variate rv W has a multivariate skew-slash t distribution with location μ, scale Σ, skewness parameter \({\boldsymbol {\lambda }}\in {\mathbb {R}}^{k}\), tail parameter q and degrees of freedom r, written W∼S S L T k (λ,q,r,μ,Σ), if it can be represented as a linear transformation of the standard multivariate skew-slash t rv W 0∼S S L T k (λ,q,r) as follows:
$$\begin{array}{@{}rcl@{}} {\mathbf{W}}={\boldsymbol{\mu}}+\Sigma^{1/2}{\mathbf{W}}_{0}, \end{array} $$
where Σ 1/2 is the Cholesky factor of the positive definite covariance matrix Σ.
By a change of variables in (3.16), we derive the density of W given by
$$ \begin{aligned} g_{k}({\mathbf{w}}; {\boldsymbol{\lambda}}, q, r, {\boldsymbol{\mu}},\Sigma)&=2q|\Sigma|^{-1/2}{\int_{0}^{1}} u^{k+q-1}t_{k}(u\Sigma^{-1/2}({\mathbf{w}}-{\boldsymbol{\mu}}); r)\\ &\quad \times\Psi\left(\frac{u{\boldsymbol{\lambda}}^{\top}\Sigma^{-1/2}({\mathbf{w}}-{\boldsymbol{\mu}})} {\sqrt{1+u^{2}Q({\mathbf{w}}; {\boldsymbol{\mu}}, \Sigma)/r}}; k+r\right)\,du, \quad {\mathbf{w}}\in{\mathbb{R}}^{k}, \end{aligned} $$
where Q(w;μ,Σ)=(w−μ)⊤ Σ −1(w−μ).
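Sampling from the general S S L T k (λ,q,r,μ,Σ) follows the definition: build a standard skew-slash t W 0 and apply W=μ+Σ 1/2 W 0. The sketch below assumes the conditioning representation of the multivariate skew normal with δ=α/√(1+α⊤α) and α=λ/√(1+k/r) (an assumed parameterization); with λ=0 the construction reduces exactly to the slash t, for which Var(W)=[qr/((q−2)(r−2))]Σ, and that is what the closing check uses:

```python
import numpy as np

rng = np.random.default_rng(4)
k, q, r, n = 2, 6.0, 8.0, 300_000
lam = np.zeros(k)                         # lam = 0: reduces to the slash t
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.6], [0.6, 1.0]])

alpha = lam / np.sqrt(1.0 + k / r)        # assumed alpha-lambda relation
delta = alpha / np.sqrt(1.0 + alpha @ alpha)
B = np.linalg.cholesky(np.eye(k) - np.outer(delta, delta))  # BB' = I - dd'
Z0 = np.abs(rng.standard_normal(n))
SN = Z0[:, None] * delta + rng.standard_normal((n, k)) @ B.T  # skew normal
S = SN / np.sqrt(rng.chisquare(r, size=n) / r)[:, None]       # skew t
W0 = S / rng.uniform(size=n)[:, None] ** (1.0 / q)            # skew-slash t
W = mu + W0 @ np.linalg.cholesky(Sigma).T                     # location-scale

cov_hat = np.cov(W, rowvar=False)
target = (q * r) / ((q - 2.0) * (r - 2.0)) * Sigma
print(cov_hat, target)
```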
As in the case of the standard skew-slash t, one notices that the skew-slash t generalizes the slash t, the skew t of Azzalini and Capitanio, the slash normal of Kafadar, the skew normal of Azzalini and Dalla Valle and the skew-slash normal of Wang and Genton. This is stated below.
The limiting distribution of the multivariate skew-slash t distribution S S L T k (λ,q,r,μ,Σ), as q tends to infinity, is the skew t distribution S K T k (λ,r,μ,Σ). The limiting distribution of S S L T k (λ,q,r,μ,Σ) is, as r tends to infinity, the skew-slash normal S S L N k (λ,q,μ,Σ), which includes as special cases the k-variate skew normal S K N k (λ,μ,Σ) (q=∞) and the k-variate slash normal S L N k (q;μ,Σ) (λ=0). As λ=0, S S L T k (λ,q,r,μ,Σ) simplifies to the k-variate slash t distribution S L T k (q,r,μ,Σ).
Linear combinations
Since the distribution of a linear function of a k-variate skew t variable is still skew t (see e.g. Section 5.9 of Kotz and Nadarajah (2004)), we immediately obtain the following result. Note that the relationship between our skewness parameter λ and their α is \({\boldsymbol {\lambda }}=\sqrt {1+k/r}{\boldsymbol {\alpha }}\). Let D=diag(σ 1,1,…,σ k,k ) denote the diagonal matrix consisting of the diagonal entries of Σ=(σ i,j ) and let R=D −1/2 Σ D −⊤/2 be the correlation matrix.
Let A be a nonsingular matrix. If W∼S S L T k (λ,q,r,μ,Σ), then \({\mathbf {A}}{\mathbf {W}} \sim {SSLT}_{k}(\tilde {\boldsymbol {\lambda }}, q, r, {\mathbf {A}}{\boldsymbol {\mu }}, \widetilde \Sigma)\) where \(\widetilde \Sigma ={\mathbf {A}}\Sigma {\mathbf {A}}^{\top }\) and
$$\begin{array}{@{}rcl@{}} \tilde{\boldsymbol{\lambda}}=\frac{\widetilde\Sigma^{-1/2}{\mathbf{B}}^{\top} {\boldsymbol{\lambda}}}{\sqrt{1+(1+k/r)^{-1}{\boldsymbol{\lambda}}^{\top} ({\mathbf{R}}-{\mathbf{B}} \widetilde\Sigma^{-1}{\mathbf{B}}^{\top}){\boldsymbol{\lambda}}}}, \quad {\mathbf{B}}={\mathbf{D}}^{-1/2}\Sigma{\mathbf{A}}. \end{array} $$
To give a graphical view of the skewness and tail behaviors of the skew-slash t distributions, we plot the density curves of the univariate standard skew-slash t and the contours of the bivariate standard skew-slash t below.
For the univariate and bivariate standard skew-slash t distributions S S L T 1(λ,q,r) and S S L T 2(λ 1,λ 2,q,r), the densities are given by
$$ g_{1}(w;\lambda,q,r)=2q{\int_{0}^{1}} u^{q} t_{1}(uw; r) \Psi\left(\frac{u\lambda w}{\sqrt{1+u^{2}w^{2}/r}}; 1+r\right)\,du, \quad w\in{\mathbb{R}}, $$
and, with \({\mathbf {w}}=(w_{1},w_{2})^{\top }\in {\mathbb {R}}^{2}\) and λ=(λ 1,λ 2)⊤,
$$ g_{2}({\mathbf{w}}; {\boldsymbol{\lambda}}, q, r)= 2q{\int_{0}^{1}} u^{q+1}t_{2}(u{\mathbf{w}}; r) \Psi\left(\frac{u(\lambda_{1}w_{1}+\lambda_{2}w_{2})}{\sqrt{1+u^{2}({w_{1}^{2}}+{w_{2}^{2}})/r}}; 2+r\right)\,du. $$
Displayed in Figure 3 are the density curves of the standard skew-slash t S S L T 1(3,1,2), the skew t S K T 1(3,2), and the slash t S L T 1(1,2) distributions. The curves are shifted and rescaled for comparison. Observe that the skew-slash t has the fattest tail and is the most skewed to the right.
Density curves of skew-slash t SSLT 1 (3,1,2) , slash t SLT 1 (1,2) , and skew t SKT 1 (3,2) .
Displayed in Figure 4 are the contour plots of the bivariate standard skew-slash t distribution S S L T 2(λ,3,5) for different values of the skewness parameter vector λ. Clearly the contours are more skewed as the skewness parameter vector λ gets longer.
Contours of the standard 2-dimensional skew-slash t distribution S S L T 2 ( λ ,3,5).
Parameter estimation
In this section, we discuss maximum likelihood estimation and the Bayesian method and provide the approximate sampling distribution of the estimates.
The likelihood approach
Let p(z;θ) denote either the slash t density in (2.1) or the skew-slash t density in (3.29), where θ denotes the corresponding parameter vector, i.e. θ=(q,r,m,R) or θ=(λ,q,r,μ,Σ). As the degrees of freedom r is unknown, we estimate it by the MLE, treating it as a positive real number. Let Z 1,…,Z n be a random sample from the density p. Based on the sample, the parameter θ can be estimated by the MLE \(\hat {\boldsymbol {\theta }}\), the parameter value that maximizes the average log likelihood \(L({\boldsymbol {\theta }})= \frac {1}{n}\sum ^{n}_{i=1}\log p({\mathbf {Z}}_{i}; {\boldsymbol {\theta }})\) of the sample over the parameter space Θ, that is,
$$ L(\hat{{\boldsymbol{\theta}}}) =\max_{{\boldsymbol{\theta}} \in \Theta} \frac{1}{n}\sum^{n}_{i=1} \log p({\mathbf{Z}}_{i}; {\boldsymbol{\theta}}). $$
Let \({\mathbf {S}}_{i}({\boldsymbol {\theta }})=\frac {\partial }{\partial {\boldsymbol {\theta }}}\log p({\mathbf {Z}}_{i}; {\boldsymbol {\theta }})\) be the score for the observation Z i . Under suitable conditions, the MLE \(\hat {\boldsymbol {\theta }}\) is the solution to the score equation
$$ \sum_{i=1}^{n} {\mathbf{S}}_{i}({\boldsymbol{\theta}})=0. $$
Let J(θ)=E(S 1(θ)S 1(θ)⊤) be the information matrix. Clearly the information matrix can be estimated by
$$ \hat{\mathbf{J}}=\frac{1}{n}\sum^{n}_{i=1}{\mathbf{S}}_{i}(\hat{\boldsymbol{\theta}}){\mathbf{S}}_{i}(\hat{\boldsymbol{\theta}})^{\top}. $$
Under suitable conditions, the sampling distribution of \(\hat {\boldsymbol {\theta }}\) can be approximated by the normal distribution with mean vector θ and variance-covariance matrix \(\hat {\mathbf {J}}^{-1}\), that is, approximately
$$ \hat{\boldsymbol{\theta}} \sim \mathcal{N}({\boldsymbol{\theta}},\, n^{-1}{\hat{\mathbf{J}}^{-1}}). $$
Based on this normal approximation, one can perform hypothesis testing, construct confidence intervals and, in particular, calculate the standard error (SE) of each component \(\hat {\boldsymbol {\theta }}_{j}\) of the MLE \(\hat {\boldsymbol {\theta }}\) as follows:
$$ SE(\hat{\boldsymbol{\theta}}_{j})=\sqrt{n^{-1}\hat {\mathbf{J}}^{jj}}, \quad j=1, \ldots, k, $$
where \(\hat {\mathbf {J}}^{jj}\) is the (j,j)-entry of the estimated inverse information matrix \(\hat {\mathbf {J}}^{-1}\).
The numerical value of the MLE \(\hat {\boldsymbol {\theta }}\) can be found by solving the score equation (4.33) using Newton's method. Alternatively, one can directly search for the solution of the maximization problem (4.32), for example using the optim function in R.
As for initial values for Newton's algorithm, one can use the moment estimates of the parameters or other available consistent estimates. One technical issue is that the estimate \(\hat \Sigma \) of Σ must be positive definite. In our applications we estimated the entries of Σ and then verified the positive definiteness of \(\hat \Sigma \).
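A minimal end-to-end sketch of the likelihood machinery above for the standard univariate slash t, with location 0 and scale 1 held fixed so that θ=(q,r). The fixed Gauss-Legendre grid replacing the inner mixing integral is an implementation choice, not something prescribed by the text; the per-observation scores in (4.34) are approximated by central differences to form Ĵ and the standard errors (4.36):

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(5)
q_true, r_true, n = 3.0, 5.0, 1_000
x = stats.t.rvs(r_true, size=n, random_state=rng) \
    / rng.uniform(size=n) ** (1.0 / q_true)

# fixed quadrature grid on [0, 1] for the mixing integral in the density
nodes, weights = np.polynomial.legendre.leggauss(40)
v = 0.5 * (nodes + 1.0)
w = 0.5 * weights

def logdens(theta):
    q, r = theta
    # log f(x) with f(x) = int_0^1 q v^q t_pdf(v x; r) dv, all x at once
    return np.log((w * q * v ** q * stats.t.pdf(np.outer(x, v), r)).sum(axis=1))

def negloglik(theta):
    q, r = theta
    if q <= 0.1 or r <= 0.5:
        return np.inf
    return -np.sum(logdens(theta))

fit = minimize(negloglik, x0=[q_true, r_true], method="Nelder-Mead")

# per-observation scores by central differences -> information matrix and SEs
eps = 1e-5
S = np.empty((n, 2))
for j in range(2):
    e = np.zeros(2)
    e[j] = eps
    S[:, j] = (logdens(fit.x + e) - logdens(fit.x - e)) / (2.0 * eps)
J_hat = S.T @ S / n
se = np.sqrt(np.diag(np.linalg.inv(J_hat)) / n)
print(fit.x, se)
```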
The Bayesian approach
Given observed data D, the likelihood function L(D|θ) can be obtained from the proposed multivariate slash or skew-slash t distribution with parameter vector θ. The posterior density then satisfies p(θ|D)∝L(D|θ)π(θ), where π(θ) is the joint prior density of θ based on the available prior information. We choose a prior density for each component of θ and take the joint prior density of θ to be the product of the marginal prior densities. The resulting full Bayesian model has a hierarchical structure with the conditional density of D|θ and the prior distribution θ∼π(θ). One can obtain a random sample from the joint posterior density by Markov chain Monte Carlo (MCMC), and a parametric Bayesian analysis of the model can be implemented using the Gibbs sampling method in R or JAGS.
Simulations and applications
In this section, we use simulations to compare the performance of the skew-slash t and the skew-slash normal distributions. Simulations are also used to demonstrate parameter estimation using both the maximum likelihood criterion and Bayesian paradigm. Then the skew-slash t, with the density given in (3.29), and the skew-slash normal, with the density given below, are applied to fit two real datasets.
$$ \begin{aligned} \eta_{k}({\mathbf{w}}; {\boldsymbol{\lambda}}, q, {\boldsymbol{\mu}},\Sigma)=&\;2q{\int_{0}^{1}} u^{k+q-1}\phi_{k}(u{\mathbf{w}}; u{\boldsymbol{\mu}}, \Sigma) \\ &\times \Phi\left(u{\boldsymbol{\lambda}}^{\top}\Sigma^{-1/2}({\mathbf{w}}-{\boldsymbol{\mu}}) \right)\,du, \quad {\mathbf{w}}\in{\mathbb{R}}^{k}, \end{aligned} $$
where ϕ k is the density of the k-variate normal distribution N k (μ,Σ) with mean vector μ and covariance matrix Σ.
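For k=1, μ=0 and Σ=1 the skew-slash normal density above reduces to η 1(w)=2q∫ 0 1 u q ϕ(uw)Φ(uλw)du; a quick quadrature check that this is a proper density (λ=2, q=3 are illustrative):

```python
import numpy as np
from scipy import stats
from scipy.integrate import dblquad

lam, q = 2.0, 3.0

# eta_1(w) = 2 q int_0^1 u^q phi(u w) Phi(u lam w) du; integrate u, then w
integrand = lambda u, w: (2.0 * q * u ** q * stats.norm.pdf(u * w)
                          * stats.norm.cdf(u * lam * w))
total, err = dblquad(integrand, -np.inf, np.inf, 0.0, 1.0)
print(total)  # ~ 1
```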
5.1 Simulation study
Comparison between the skew-slash t and skew-slash normal
To compare the two distributions, we first generated data from the 2-variate skew-slash t and fitted it with both the 2-variate skew-slash t and the skew-slash normal, and vice versa (i.e. we generated data from the 2-variate skew-slash normal and fitted it with both distributions). Reported in Table 2 are the average AIC values and average MLE's of the parameters based on the sample size n=250 and M=200 repetitions.
Notice that for data generated from either the skew-slash t or the skew-slash normal, the average AIC values of the skew-slash t fitting were lower than those of the skew-slash normal, indicating a better overall fit of the former to the data.
Parameter estimation by the Bayesian method
We next conducted simulations to study the behavior of the MLE's and the Bayesian estimates of the parameters in the data-generating models for the standard univariate and bivariate slash and skew-slash t distributions. Here the prior distributions are the standard normal N(0,1) for the skewness parameter λ (or the components of the parameter vector λ), and the exponential distribution with rate 0.1, truncated at 2 for the tail parameter q and at 4 for the degrees of freedom r. It follows from Sahu et al. (2003) and Fernandez and Steel (1998) that the resulting distributions have finite variances. We generated 100 random samples of size 200 from the standard univariate slash and skew-slash t and the standard bivariate slash and skew-slash t. Reported in Table 3 are the average estimates of the parameters under the maximum likelihood criterion and the Bayesian paradigm. Observe that the two types of estimates are close for all the simulation setups.
Table 3 Maximum likelihood and Bayesian estimates comparison
Model fitting to the GAD data
Gestational age at delivery (GAD) is a variable widely studied in epidemiology; see, for example, Longnecker et al. (2001). We applied the skew-slash t and skew-slash normal distributions to fit the log-transformed GAD of n=100 observations. Figure 1 is the histogram superimposed with the fitted density curves, while Table 1 reports the MLE's, the standard errors (SE) of the parameter estimates and the AIC. We can see from Figure 1 that the skew-slash t distribution better captured the peak of the histogram, giving a better density estimate for the majority of the data points. At the same time, the AIC values in Table 1 indicated that the skew-slash t fitting was better than the skew-slash normal.
Model fitting to the AIS data
Azzalini and Dalla Valle (1996) used their skew normal distribution to fit the (LBM-lean body mass, BMI-body mass index) pairs of the athletes from the Australian Institute of Sport (AIS), where the data of n=202 observations were reported in Cook and Weisberg (1994). Wang and Genton (2006) used their skew-slash normal distribution to re-fit the data. Here we applied the proposed skew-slash t distribution to re-fit the (LBM, BMI) pairs in the AIS data. Before fitting, we standardized the variables.
Figure 5 is the scatter plot superimposed with the fitted skew-slash t and skew-slash normal contours in the scale of LBM and BMI. The skew-slash normal contour is similar to the one reported in Wang and Genton (2006). The comparison between the two contours seemed to indicate that the fits based on the two models were close.
Scatter plot and fitted contours.
Reported in Table 4 are the MLE's, standard errors and AIC. The smaller AIC value of the skew-slash normal fitting indicated that it fit this data better than the skew-slash t. When the skew-slash normal was used to fit the data, we were able to obtain the MLE's, but the row and column corresponding to the parameter q in the Hessian matrix were zero, suggesting a simpler skew normal fit to this data. The skew-slash normal fitting by Wang and Genton (2006) led to the same conclusion, though they did not report the standard errors of the parameter estimates.
Table 4 The skew-slash normal and t fitting to the AIS data
In conclusion, the proposed slash and skew-slash t distributions are competitive candidate models for fitting skewed and heavy-tailed data. The parameters can be estimated under either the frequentist or the Bayesian paradigm. Although for a particular dataset the skew-slash t may not be the final model, it is a good starting point in model selection due to its flexibility and the fact that it takes the skew normal, skew t and hence the usual normal and t as submodels.
In this article, we defined the multivariate slash and skew-slash distributions to provide additional models for simulating and fitting skewed and heavy-tailed data. We investigated the heavy-tail behaviors and tractable properties of these distributions, which are useful in simulations and in applications to real data. We characterized the skew t, the skew-slash normal and the skew-slash t distributions using both the hidden-truncation (selective-sampling) model and the order statistics of the components of a bivariate normal or t variable. We demonstrated that the proposed skew-slash t model takes as submodels the slash t, the slash normal, the skew-slash normal, the skew normal, the skew t and hence the usual normal and t. This nested property can be used in hypothesis testing.
Our simulations and applications to real data indicated that the proposed skew-slash t fit outperformed the skew-slash normal fit. Even though the skew-slash normal contains a tail parameter q, its fit to the GAD data was unsatisfactory, as the SE of the MLE of the tail parameter q was large; see Table 1. This suggests that not all heavy-tail properties in data can be explained by the tail parameter q in the slash distributions. Thus it makes sense to search further for distributions that can fit heavy-tailed data; the proposed slash and skew-slash t distributions can be considered an example of this attempt. We close by pointing out that the degrees-of-freedom parameter r and the tail parameter q explain different types of fat-tail behavior existing in data.
Arnold, BC, Lin, GD: Characterization of the Skew-normal and Generalized Chi Distributions. Sankhyā. 66, 593–606 (2004).
Azzalini, A, Capitanio, A: Distributions generated by perturbation of symmetry with emphasis on a multivariate skew t distribution. J. R. Stat. Soc. Ser. B. 65, 367–389 (2003).
Azzalini, A, Dalla Valle, A: The multivariate skew-normal distribution. Biometrika. 83, 715–726 (1996).
Cook, RD, Weisberg, S: An Introduction to Regression Graphics. Wiley, New York (1994).
Dreier, I, Kotz, S: A note on the characteristic function of the t-distribution. Statist. & Probabil. Lett. 57, 221–224 (2002).
Fernandez, C, Steel, MFJ: On Bayesian Modeling of Fat Tails and Skewness. J. Am. Statist. Assoc. 93, 359–371 (1998).
Genton, MG (ed.): Skew-elliptical distributions and their applications: a journey beyond normality. Chapman and Hall/CRC, Boca Raton (2004).
Gupta, AK: Multivariate skew t-distribution,Statistics: A Journal of Theoretical and Applied Statistics. 37, 359–363 (2003). doi:10.1080/715019247.
Joarder, AH, Ali, MM: On the characteristic function of the multivariate student t distribution. Pak. J. Statist. 12, 55–62 (1996).
Johnson, N, Kotz, S: Distributions in Statistics: Continuous Multivariate Distributions. John Wiley & Sons, New York (1972).
Jones, MC: Marginal replacement in multivariate densities, with application to skewing spherically symmetric distributions. J. Multivar. Anal. 81, 85–99 (2002).
Kafadar, K: Slash Distribution. In: Johnson, NL, Kotz, S, Read, C (eds.)Encyclopedia of Statistical Sciences. vol. 8, pp. 510–511. Wiley, New York (1988).
Kotz, S, Nadarajah, S: Multivariate t distributions and their applications. Cambridge University Press, Cambridge (2004).
Longnecker, MP, Klebanoff, MA, Zhou, H, Brock, JW: Association between maternal serum concentration of the ddt metabolite dde and preterm and small-for-gestational-age babies at birth. Lancet. 358, 110–114 (2001).
Sahu, SK, Dey, DK, Branco, MD: A new class of multivariate skew distributions with applications to Bayesian regression models. Can. J. Stat. 31, 129–150 (2003).
Wang, J, Genton, M: The multivariate skew-slash distributions. J. Statist. Plann. Inference. 136, 209–220 (2006).
The authors gratefully thank two anonymous reviewers for their suggestions that substantially improved the article.
Department of Mathematical Sciences, Indiana University Purdue University Indianapolis, N. Blackford Street, Indianapolis, 46202, IN, USA
Fei Tan & Hanxiang Peng
Department of Statistics, Florida State University, N. Woodward Ave., Tallahassee, 32306, FL, USA
Yuanyuan Tang
Correspondence to Fei Tan.
FT contributed to the entire work, HP to the theoretical part, and YT to the inference, simulations and applications. All authors read and approved the final manuscript.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Tan, F., Tang, Y. & Peng, H. The multivariate slash and skew-slash student t distributions. J Stat Distrib App 2, 3 (2015). https://doi.org/10.1186/s40488-015-0025-9
Heavy tail
Skew distribution
Skew-slash distribution
Slash distribution
Student t distribution
2013 International Conference on Statistical Distributions and Applications
Extroverts tweet differently from introverts in Weibo
Zhenkun Zhou, Ke Xu & Jichang Zhao (ORCID: 0000-0002-5319-8060)
As dominant factors driving human actions, personalities can be excellent indicators to predict the offline and online behavior of individuals. However, because of the great expense and inevitable subjectivity in questionnaires and surveys, it is challenging for conventional studies to explore the connection between personality and behavior and to gain insight in the context of a large number of individuals. Considering the increasingly important role of online social media in daily communications, we argue that the footprints of massive numbers of individuals, such as tweets on Weibo, can be used as a proxy to infer personality and further understand its function in shaping online human behavior. In this study, a map from self-reports of personalities to online profiles of 293 active users on Weibo is established to train a competent machine learning model, which then successfully identifies more than 7000 users as extroverts or introverts. Systematic comparison from the perspectives of tempo-spatial patterns, online activities, emotional expressions and attitudes to virtual honors show that extroverts indeed behave differently from introverts on Weibo. Our findings provide solid evidence to justify the methodology of employing machine learning to objectively study the personalities of a massive number of individuals and shed light on applications of probing personalities and corresponding behaviors solely through online profiles.
The booming of online social media has made it an essential component of everyday life that reflects all aspects of human behavior. Millions of users have digitalized and virtualized themselves on popular platforms such as Twitter and Weibo, providing information ranging from basic demographics, status, and emotions to activities. These online profiles can be natural, detailed, long-term and objective footprints of massive numbers of individuals; thus, they are potential proxies to understand human personalities [1, 2]. As a sub-discipline of psychology, the study of human personalities has aimed at one general goal: to describe and explain the significant psychological differences between individuals. Revealing the connection between different personalities and the corresponding behavioral patterns, especially in the circumstance of online social media, is one of the most exciting issues [3] in recent decades. The growing body of evidence suggesting individual personality discrepancies in online social media further makes it imperative to probe online human behavior from the perspective of personality [4–6].
Personality is a stable set of characteristics and tendencies that specify similarities and differences in individuals' psychological behavior, and it is a dominant factor in shaping human thoughts, feelings and actions. However, personality traits, like many other psychological dimensions, are latent and difficult to measure directly. Self-report by asking subjects to fill out survey questionnaires based on personality is a classic way to assess respondents in conventional studies [7–9], but self-reporting has several limitations:
Cost. Questionnaires in self-reports can be time consuming and costly, and the response rate might be unexpectedly low [10]. These concerns substantially reduce the number of valid participants, which is generally less than 1000 [11]. Persuasive and universal conclusions are hard to reach based on such a small number of samples.
Subjectivity. Respondents fill out questionnaires mainly based on their cognition, memory or feelings, and they can hide their true responses or thoughts consciously or unconsciously. Particularly for self-reports on personality, an individual might not even recollect the circumstances exactly in a controlled laboratory environment. These sources of response bias may have a large impact on the validity of the conclusions [12, 13] and may even lead to a re-interview [14].
Low flexibility. Questionnaires are generally designed based on the study assumptions before conducting experiments, and it is difficult to obtain insights outside of the scope of the previously established goals, i.e., existing self-reports might be much less inspiring due to the lack of extension.
To some extent, these limitations can be overcome by the emergence of crowdsourcing marketplaces, such as Amazon Mechanical Turk (MTurk), which offer many practical advantages that reduce costs and make massive recruitment feasible [15] and thus have become dominant sources of experimental data for social scientists. In the meantime, new concerns are presented in [16, 17]. For example, researchers worry that volunteers are less numerous and less diverse than desired, while Turkers complain that the reward is too low. In addition, MTurk has suffered from growing participant non-naivety [18]. To account for these shortcomings, recent progress in machine learning, especially the idea of computation-driven solutions in social sciences [19], has received increasing interest in the modelling and understanding of human psychological behavior, such as personality.
Indeed, the popularity of online social media provides a great opportunity to examine personality inference using significant amounts of data. Twitter, the most popular social media and micro-blogging service, enables registered users to read and post short messages called tweets. At the beginning of 2016, Twitter had reached 310 million monthly active users (MAU). As of the third quarter of 2017, Twitter averaged 330 million MAU. However, more people are now using Weibo, the Chinese variant of Twitter. According to the Chinese company's first quarter reports, it has 340 million MAU, up 30% from the previous year. These numbers imply that the availability of vast and rich datasets of active individuals' digital footprints on Weibo will unprecedentedly increase the scale and granularity in measuring and understanding human behavior, especially personality, because the cost of the experiment will be substantially reduced, the objectivity of the samples will be guaranteed and the flexibility of the data will be adequately amplified. At the same time, there are new opportunities to combine social media with traditional surveys in personality psychology. Kosinski et al. demonstrate that available digital traces in Facebook can be used to automatically and accurately predict personality [20]. With the help of developments in machine learning, computer models can make valid personality predictions and even outperform self-reported scores [21]. In this study, we argue that from the perspective of computational social science, the profiles of active users on Weibo are excellent proxies in probing the interplay between personality and online behavior.
An online page with a 60-item version of the Big Five Personality Inventory is established first in our study to collect personality trait scores [22], and a total of 293 active users on Weibo are asked to complete the self-report on this page to provide a baseline for the study. Focusing on extraversion, the scores mainly follow a Gaussian distribution, and the subjects are accordingly divided into three groups of high, neutral and low extraversion scores. Then, by collecting online profiles of self-reporters from Weibo, a map between the self-reports of extraversion and the online profiles is built to train machine learning models to automatically evaluate the extraversion of more individuals without the help of self-reports. Three types of features, including 13 basic, 33 behavioral and 84 linguistic, are comprehensively considered in the support vector machine (SVM) model, and its performance is verified by cross-validations. With more than 7000 users labelled as extroverts or introverts by the model, we attempt to systematically study the differences in online behavior due to extraversion by investigating the following seven research questions:
RQ1. Do extroverts and introverts tweet temporally differently on Weibo?
RQ2. Do extroverts and introverts tweet spatially differently on Weibo?
RQ3. What types of information do extroverts and introverts prefer to share?
RQ4. Who is more socially active online?
RQ5. Who pays more attention to online purchasing and shopping?
RQ6. Do extroverts and introverts express emotions differently on Weibo?
RQ7. Who cares more about online virtual honor, extroverts or introverts?
Unexpected differences in the online behavior of extroverts and introverts are observed based on these questions. Introverts post more frequently than extroverts do, especially during the day. However, extroverts visit different cities rather than staying in one familiar city as introverts do. The spatial discrepancy becomes even less intuitive as we zoom in to higher resolution. For example, introverts tend to check in while shopping, whereas extroverts enjoy posting from their workplace. In addition, a tiny fraction of introverts might attempt to camouflage their own loneliness by tweeting from a large number of different areas (>20). Extroverts enjoy sharing music and selfies, whereas introverts prefer retweeting news. In terms of online interactions, extroverts mention friends more often than introverts do, implying higher social vibrancy. By presenting a purchasing index to depict online buying intention, we find that compared to extroverts, introverts devote more effort to posting shopping tweets to relieve loneliness due to a lack of social interaction with others. We also categorize the emotion delivered by tweets into anger, disgust, happiness, sadness, and fear [23] and find that introverts post more angry and fearful (high-arousal) tweets while extroverts post more sad (low-arousal) tweets. Finally, extroverts attach greater meaning to online virtual honor than introverts do, implying that they might be ideal candidates for online promotion campaigns with virtual honor. To the best of our knowledge, this is the first study to thoroughly compare the online behavior of extroverts and introverts in a large-scale sample, and our findings will be helpful in understanding the role of personality in shaping human behavior.
The contributions of the present study can be summarized in three aspects. Firstly, it demonstrates that machine learning models can be employed to reach larger populations in personality studies. Great expense and low efficiency always constrain the sample size of self-reports in conventional methodology; however, a machine learning model of competent performance can automatically identify a massive number of samples from social media and essentially enhance the reliability of the study. Extracting multiple features from social media to predict personality is not new [24, 25], but employing the prediction model to produce new samples and facilitate further exploration is rarely touched upon. We build the extraversion classifier based on small samples, which then identifies more than 7000 users as extroverts or introverts. The larger dataset enables us to study detailed behavioral differences without social-desirability bias; for instance, we analyze users' POIs to find the spatial behavioral patterns of extroverts and introverts. In contrast, most previous studies [20, 26, 27] concentrated on the predictive power of personality prediction models instead of applying the models to further research.
Secondly, through footprints on Weibo instead of Twitter, the personality differences and resulting behavioral patterns are investigated in the context of Chinese culture. Varying average scores on the same personality test imply different personality landscapes across cultures [28, 29]. Specifically, it has been found that Chinese tend to be more formal than westerners [30]. Because Twitter and Facebook are blocked in China, existing findings from them [31] cannot be directly extended to understand Chinese users or predict their behaviors, which makes our exploration of Weibo, the most popular Twitter-like service in China, necessary and novel. On Facebook and Twitter, a positive link exists between extraversion and engagement in activities, and extroverts use social networking sites to relieve their anxiety [32, 33]. Here we find that on Weibo, one of the most popular social media platforms in China, lowly extraverted users are instead characterized by higher levels of activity. Specifically, introverts post tweets more frequently than extroverts, implying their stronger dependency on Weibo. As for online emotion expression, introversion, when compared to extraversion, is associated with higher-arousal emotions like anger and fear. The aforementioned findings demonstrate that the behavioral patterns of extroverts and introverts in Chinese online social media are indeed different from those in the west. They also suggest that culture might play a profound role in understanding online behaviors. For example, Chinese often appear shy and self-conscious to westerners, and generally may not express their feelings openly [34], especially when they are around strangers or in circumstances with which they are not familiar. According to this theory, on an open online social platform like Weibo, Chinese users might tend to hide their "real selves" from being exposed to strangers. This may explain why extroverts on Weibo surprisingly appear inactive and quiet. On the contrary, introverts seem to be more active in cyberspace, relieving excessive stress and loneliness from the real world.
Thirdly, the dimensions of online behavior are greatly enriched, ranging from location and shopping to emotions and virtual honors. A more comprehensive map between personality differences and the resulting behavioral patterns can thus be established, and this map can inspire realistic applications in scenarios like social media marketing.
Literature review and theoretical background
Several well-studied models have been established for personality traits, among which the Big Five model is the most popular [35, 36]. In this model, human personality is depicted on five dimensions, namely, openness, neuroticism, extraversion, agreeableness and conscientiousness, and personality type is determined based on an individual's behavior over time and under different circumstances. The Internet, currently one of the most pervasive scenarios, has profoundly changed human behavior and experience. With its explosive development, abundant research efforts have been devoted to investigating the relation between personality and Internet usage [37]. For example, research findings have demonstrated distinctive patterns of Internet use and usage motives of individuals with different personality types, where extroverts made more goal-oriented use of Internet services [38]. Focusing on online social media as a vital component of the Internet, extraversion and openness to experiences are found to be positively related with social media adoption [39] and introverted and neurotic people locate their "real me" through social interaction [40].
Moreover, it was also shown that users' psychological traits could be inferred from their digital fingerprints on online social media [41, 42]. Golbeck et al. attempted to bridge the gap between personality research and social media and demonstrated that social media (Facebook and Twitter) profiles reflect personality traits [43, 44]. They suggested that the number of parentheses used is negatively correlated with extraversion; however, an explanation of the correlation was not provided and probing the correlations in a larger dataset remains necessary. Quercia et al. employed numbers of followees, followers and tweets to determine personality and suggested that both popular users and influentials are extroverts with stable emotions [45]. Additionally, patterns in language use on online social media, such as words, phrases and topics, also offer a way to determine personality [46]. For example, using dimensionality reduction for the Facebook Likes of participants, Kosinski et al. proposed a model to predict individual psycho-demographic profiles [20]. In terms of social media in China, Weibo and RenRen are the ideal platforms for conducting personality research [26, 47]. Considering the recent progress that has enabled computer algorithms to outperform humans in judging personality [21], online social media offers unprecedented opportunities for personality inference and understanding human behavior.
Each bipolar dimension (like extraversion) in the Big Five model summarizes several facets that subsume several more specific traits (extraversion vs. introversion). In this paper, we focus on extraversion, which is an indispensable dimension of personality traits. Many previous studies have revealed the connection between extraversion and online behavior and can be roughly reviewed from the following perspectives.
Locations. For decisions referring to human spatial activity, the most fundamental features are arguably the personality traits, given that these are relatively persistent dispositions. Researchers argue that this is supported by evidence of personality trait correlation with diverse human activities, ranging from consumer marketing to individual tastes [48]. As for Foursquare users, extroverted people would be more sociable and outgoing and so visit more venues [49]. However, the findings of [50] for extroverts were unexpected in that they did not provide evidence of individuals preferring the same locations.
Social interactions. Highly extroverted individuals tend to have broad social communications [26] and large network size [51]. For instance, extraversion is generally positively related to the number of Facebook friends [52, 53]. Gosling et al. also found particularly strong consensus about Facebook profile-based personality assessment for extroverts [54]. However, Ross et al. showed that extroverts are not necessarily associated with more Facebook friends [4], which is contrary to the later results of Bachrach et al. [52] and Hamburger et al. [53]. Through posting tweets, extroverts are more actively sharing their lives and feelings with other people, and personality traits might shape the language styles used on social media. In English, extroverts are more likely to mention social words such as 'party' and 'love you', whereas introverts are more likely to mention words related to solitary activities such as 'computer' and 'Internet' [46]. In Chinese, extraversion is positively correlated with personal pronouns, indicating that extroverts tend to be more concerned about others [55].
Buying intention. The personality trait of extraversion is one of the main factors driving online behavior, including buying; therefore, exploration of the relationship between extraversion and shopping is valuable. DeSarbo and Edwards found that socially isolated individuals tended to perform compulsive buying in an effort to relieve feelings of loneliness due to a lack of interactions with others [56]. However, the results of subsequent studies on the relationship between compulsive buying and extraversion are inconsistent [57, 58].
Emotion expression. In psychology, it is widely believed that extraversion is associated with higher positive affect, namely, extroverts experience increased positive emotions [59, 60]. Extroverts are also more likely to utilize the supplementary entertainment services provided by social media, which bring them more happiness [61]. Qiu et al. suggested that highly extroverted participants use these services to relieve their existential anxiety about social media [62]. Thus, it is necessary to investigate the relation between various emotions and extraversion rather than only positive affect. However, most existing studies based their conclusions on self-reports from very small samples, and the lack of data and objectivity leads to inconsistent or even conflicting results. Moreover, a comprehensive understanding of how extroverts and introverts behave differently in the context of online social media remains unclear. Hence, in this study, we employ machine learning models to identify and establish a large group of samples and then investigate the behavioral differences from diverse aspects to obtain solid evidence and comprehensive views.
Identification of extraversion
Dataset and participant population
The Big Five model is the most accepted and commonly used framework to depict human personality [36], and several measuring instruments have been developed to assess the Big Five personality traits. Big Five personality inventories come in both a longer (NEO PI-R) and a shorter version (NEO-FFI). The NEO PI-R consists of 240 items and takes 45 to 60 minutes to complete, whereas the shorter NEO-FFI has only 60 items and takes 10 to 15 minutes to complete. In addition, the revision of the NEO-FFI involved the replacement of 12 of the 60 items; the revised edition is thought to be more suitable for younger individuals [22]. The NEO-FFI is employed more often in the existing literature [26, 27], and the resulting revised scales showed modest improvements in reliability and factor structure [63]. In this study, considering the cost and the age of subjects, we built a web page with the revised NEO-FFI to collect self-reported scores on the different personality traits. We targeted Weibo users for voluntary participant recruitment, and both online and offline invitations were sent from December 1, 2014 to March 31, 2015. All participants were manually checked, and only valid Weibo users (identified by Weibo ID, a unique identifier for each user) were considered. Sample sizes generally range from 60 to 600 in existing studies, such as 62 in [64], 176 in [65], 250 in [66], 470 in [55] and 647 in [26]. Considering the trade-off between cost and the reliability of the study, we set the sample size close to 300. Finally, a total of 293 valid participants were selected for the following study (144 men and 149 women), with ages ranging from 19 to 25 years. It is worth noting that, according to the official Weibo report in 2015, users aged 17 to 33 years account for approximately 80% of all users, indicating that our refined sample of self-reports represents the majority of Weibo users well.
We focus on the Big Five personality trait of extraversion, which measures the tendency to seek stimulation in the external world and the company of others and to express positive emotions [35]. People who score high in extraversion (called extroverts) are generally outgoing, energetic and friendly. By contrast, introverts are more likely to be solitary and seek environments characterized by lower levels of external stimulation. The distribution of scores from the 293 valid samples (Weibo users) on extraversion is shown in Fig. 1. The scores follow a typical Gaussian distribution with a μ (mean value) of 39.03 and a σ (standard deviation) of 7.55. Figure 1 shows that the probability of scores near the mean is higher than the occurrence of both high and low scores, indicating that a significant fraction of samples report neutral scores on extraversion and can be intuitively categorized as having no distinctly pronounced personality, i.e., neither extroverts nor introverts.
The distribution of extraversion scores from self-reports of 293 valid users
Therefore, it is reasonable to divide the samples into three groups: extroverts (high scores, labelled as 1), neutrals (scores around the mean, labelled as 0) and introverts (low scores, labelled as −1). Specifically, extroverts are subjects with scores greater than 42.81 (\(\mu+\sigma/2\)), introverts are users with scores less than 35.25 (\(\mu -\sigma/2\)) and neutrals are users whose scores range from 35.25 to 42.81. The thresholds (\(\mu\pm\sigma/2\)) are set to balance the sizes of the three categories to avoid bias in the machine learning models. By classifying 293 valid samples into three categories, we can obtain a training set to establish and evaluate machine learning models that do not require self-reports.
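The three-way labeling above follows the μ ± σ/2 thresholds stated in the text and can be sketched in a few lines (the function name is illustrative, not from the paper):

```python
MU, SIGMA = 39.03, 7.55        # mean and standard deviation of the 293 self-reported scores
HIGH = MU + SIGMA / 2          # approx. 42.81
LOW = MU - SIGMA / 2           # approx. 35.25

def label_extraversion(score):
    """Return 1 (extrovert), -1 (introvert) or 0 (neutral) for a raw extraversion score."""
    if score > HIGH:
        return 1
    if score < LOW:
        return -1
    return 0
```

Choosing the cut points at half a standard deviation from the mean keeps the three classes roughly balanced, which is what avoids bias when training the classifier.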
After obtaining permission from the valid users, we continuously collect their online profiles until March 1, 2016, including demographics and tweets posted through Weibo's open APIs. To guarantee the quality of the data, only active users with more than 100 tweets, including 45 extroverts (1), 44 introverts (−1) and 56 neutrals (0), are included in the training set. The training data are generally balanced with respect to the three classification labels, especially for extroverts and introverts, which is helpful to avoid bias in the machine learning models.
Extraversion classifier
As stated in the previous section, many aspects of online profiles have been found to be related to users' personalities. In machine learning, a basic but practical rule is to collect all possible features when training models, especially nonlinear ones such as support vector machines and random forests [67]. In the training process, though mostly in a black-box fashion, models will keep effective features, or combinations of features, that play dominant roles in predicting the target and ignore weak features that function trivially. Hence, to establish a competent classifier that identifies the three categories of extraversion without the help of self-reports, we attempt to extract as many features as possible from the digital and textual records. These features are then roughly grouped into basic, interactive and linguistic features. The details of the different types of features are as follows.
Basic features. Basic features are selected to reflect the user's demographics, preliminary status and elementary interactions on social media, including gender, tweeting patterns and privacy settings. Specifically, tweeting patterns include \(\log(\mathrm{AUW}+1)\) (where AUW is the age of a user on Weibo in days), \(\log(\mathrm{NT}+1)\) (where NT is the total number of tweets the user posted), \(\log(\mathrm{NT}/(\mathrm{AUW}+1))\) (the frequency of posting), \(\log(\mathrm{NFER}+1)\) (where NFER is the number of the user's followers), \(\log(\mathrm{NFEE}+1)\) (where NFEE is the number of the user's followees), \(\mathrm{NT}/(\mathrm{NFER}+1)\), and \(\mathrm{NT}/(\mathrm{NFEE}+1)\). With respect to the privacy settings, corresponding binary features indicate whether a user allows comments from others, whether the user allows private messages from others and whether the user allows Weibo to track their real-time location. In addition, we consider the length of the self-description as a feature. Investigation of the relationship between users' nicknames [26] and extraversion scores in our samples demonstrated that nicknames are not significantly associated with extraversion on Weibo (\(p>0.05\)) and function trivially in extraversion classification (with an accuracy of 30.7%). As for verified status, only celebrities, institutions and governments on Weibo are verified, and they occupy only a small proportion of the entire population; moreover, most of our samples with self-reports are not verified users. Hence, we do not select users' nicknames or verified status as features in personality prediction.
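The log-scaled tweeting-pattern features above could be computed as follows; the function and key names are illustrative shorthand for the quantities defined in the text (AUW, NT, NFER, NFEE):

```python
import math

def basic_tweet_features(auw, nt, nfer, nfee):
    """Log-scaled basic features: auw = account age in days, nt = number of tweets,
    nfer = number of followers, nfee = number of followees."""
    return {
        "log_auw": math.log(auw + 1),
        "log_nt": math.log(nt + 1),
        "log_freq": math.log(nt / (auw + 1)),  # posting frequency (tweets per day, logged)
        "log_nfer": math.log(nfer + 1),
        "log_nfee": math.log(nfee + 1),
        "nt_per_fer": nt / (nfer + 1),
        "nt_per_fee": nt / (nfee + 1),
    }
```

The +1 shifts keep the logarithms and ratios finite for brand-new or follower-less accounts; the frequency term assumes at least one tweet, which holds here since only users with more than 100 tweets are included.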
Interactive features. Interactive features are designed to reflect sophisticated patterns of social interaction on Weibo at time granularities of days or weeks. Here, social interaction includes posting, mentioning and retweeting, which previous research has verified to be key behaviors related to extraversion. Specifically, for a given time granularity (daily or weekly) and a given social interaction, we compute a vector of the average occurrence of the interaction (over a user's entire life in our collection) at each hour of the day or day of the week. The following features are then extracted from this vector: (1) the average number of interactions, (2) the hour or day with the most interactions, (3) the maximum occurrence of hourly or daily interaction, (4) the hour or day with the lowest occurrence of the interaction, and (5) the variance of the interaction occurrence across hours or days. Additionally, the proportions of tweets containing mentions and retweets are considered as features reflecting a user's interactive intensity.
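A sketch of features (1)–(5) over a 24-element vector of average hourly interaction counts; the weekly, 7-element case is analogous (names are illustrative):

```python
import statistics

def interaction_features(hourly_counts):
    """Features (1)-(5) over a vector of average hourly (or daily)
    interaction counts for one interaction type (post/mention/retweet)."""
    return {
        "mean": statistics.mean(hourly_counts),                 # (1)
        "peak_hour": hourly_counts.index(max(hourly_counts)),   # (2)
        "peak_value": max(hourly_counts),                       # (3)
        "quiet_hour": hourly_counts.index(min(hourly_counts)),  # (4)
        "variance": statistics.pvariance(hourly_counts),        # (5)
    }
```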
Linguistic features. Previous efforts to explore extraversion have demonstrated that language styles on social media can be effective indicators of personality traits. For English (as on Twitter or Facebook), linguistic variables can be derived from LIWC (Linguistic Inquiry and Word Count), a text analysis tool widely used to study language usage along various dimensions, and specific tools can also be built to tokenize text and filter out linguistic features [27]. Therefore, we collect 261 keywords, derived from terms or phrases frequently used in personality tests (including both Chinese and English keywords), to linguistically model the tweets posted by users of different groups. After preprocessing the text, all tweets posted by a user are combined into a document that represents the user's language style, and all users' documents compose the corpus. Then classic tf-idf scores are employed to evaluate the 261 keywords. Tf-idf stands for term frequency-inverse document frequency, a weight often used in text mining and feature selection; it is a statistical measure of how important a word is to a document in a collection or corpus [68]. By adjusting the tf-idf threshold, the top 84 keywords (\(\text{tf-idf}>500\)) are selected to extract linguistic features (these keywords are found to produce the best prediction in later experiments). Specifically, for each of the 84 selected keywords, the feature value is 1 if the term occurs in a document (corresponding to a user) and 0 otherwise. This method, called bag-of-words, is widely used in natural language processing [69]. We also consider the average length of tweets posted by the user.
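A simplified sketch of the keyword selection and the binary bag-of-words encoding; the tf-idf weighting here (total term frequency times log inverse document frequency) is one common variant and may differ in detail from the study's implementation, and the threshold below is illustrative rather than the paper's value of 500:

```python
import math

def select_keywords(corpus, keywords, threshold):
    """Keep candidate keywords whose corpus-level tf-idf exceeds
    `threshold`. `corpus` is a list of tokenized user documents."""
    n_docs = len(corpus)
    selected = []
    for kw in keywords:
        tf = sum(doc.count(kw) for doc in corpus)       # total term frequency
        df = sum(1 for doc in corpus if kw in doc)      # document frequency
        if df == 0:
            continue
        tfidf = tf * math.log(n_docs / df)
        if tfidf > threshold:
            selected.append(kw)
    return selected

def bag_of_words(doc, selected):
    """Binary feature vector: 1 if the keyword occurs in the document."""
    return [1 if kw in doc else 0 for kw in selected]
```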
It is worth noting that in our dataset of online profiles, there are significant differences in the scales of the extracted features. To train unbiased machine learning models, feature standardization is required. We perform min-max normalization and transform each feature into a range between zero and one. The transformation is given by
$$ X_{i}' = \frac{X_{i} - X_{\mathrm{min}}}{X_{\mathrm{max}} - X_{\mathrm{min}}}, $$
where \(X_{i}\) is the ith item in feature set X, \(X_{i}'\) is its normalized value, \(X_{\mathrm{max}}\) is the maximum value of X, and \(X_{\mathrm{min}}\) is the minimum value of X.
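As a minimal sketch, the min-max normalization above can be implemented as follows (pure Python; the constant-feature guard is our assumption, not stated in the text):

```python
def min_max_normalize(values):
    """Rescale one feature column into [0, 1] via min-max normalization."""
    lo, hi = min(values), max(values)
    if hi == lo:                       # constant feature: map to zeros
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]
```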
In summary, we extract a total of 130 features for each Weibo user, including 13 basic features, 32 interactive features and 85 linguistic features, which are used as the input of the machine learning models.
Models and accuracy
Based on the training data and feature set obtained in the previous sections, three popular machine learning models, namely random forest, naive Bayes and support vector machine (SVM), are employed to perform the 3-category classification of extraversion. For the SVM, we choose C-SVM (multi-class classification) as the formulation and the RBF kernel function.
We adopt k-fold (\(k=10\) in this paper) cross-validation to evaluate our models. In k-fold cross-validation [70], the original sample is randomly partitioned into k equally sized subsamples. Of the k subsamples, a single subsample is retained as validation data for testing the model, and the remaining \(k-1\) subsamples are used as training data. The cross-validation process is then repeated k times, with each of the k subsamples used exactly once as validation data. The accuracy baseline for 3-category classification is 33.33%. As shown in Table 1, our 10-fold cross-validation results indicate that the random forest model cannot adequately classify extraversion (its accuracy is close to the baseline). The naive Bayes and SVM models significantly outperform the baseline, especially the SVM model, whose accuracy for both extroverts and introverts is approximately 50%. We also measure the average F1 score by calculating the precision and recall for each label i and find that the unweighted mean, defined as
$$ \textit{F1 score} = \frac{1}{3}\cdot\sum_{i=-1,0,1} \frac{2 \cdot (\mathit{precision}_{i} \cdot \mathit{recall}_{i})}{\mathit{precision}_{i} + \mathit{recall}_{i}}, $$
is 0.451, indicating that both precision and recall confirm the good performance of SVM.
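The evaluation pipeline described above, k-fold splitting plus the macro F1 of the formula, can be sketched in plain Python (function and variable names are illustrative, not from the study's code):

```python
def k_fold_indices(n_samples, k=10):
    """Partition sample indices into k folds; each fold serves once as
    validation data while the remaining k-1 folds form the training data."""
    folds = [list(range(i, n_samples, k)) for i in range(k)]
    splits = []
    for i in range(k):
        val = folds[i]
        train = [idx for j, fold in enumerate(folds) if j != i for idx in fold]
        splits.append((train, val))
    return splits

def macro_f1(precisions, recalls):
    """Unweighted mean F1 over the three labels, as in the formula above."""
    f1s = [2 * p * r / (p + r) for p, r in zip(precisions, recalls)]
    return sum(f1s) / len(f1s)
```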
Table 1 The average accuracy and F1 score of the machine learning models
As previous studies varied markedly in the methods employed to study the relationship between digital footprints and personality traits, Azucar et al. used Pearson's r to compare the accuracy of prediction models and suggested that this evaluation method is independent of the type of model [31]. Following this work [31], we measure the correlation (\(r=0.26\), \(p<0.01^{**}\)) between the actual and predicted values (1 for extroverts, 0 for neutrals and −1 for introverts) for our model. Although this accuracy is lower than some excellent results (\(r=0.40\) in [20] and \(r=0.54\) in [26]), our model is still competitive: its performance is close to, and even better than, that of several important studies (\(r=0.19\) in [27] and \(r=0.28\) in [71]).
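The evaluation of [31], Pearson's r between actual and predicted labels (1 for extroverts, 0 for neutrals, −1 for introverts), can be sketched as:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences,
    here the actual and predicted extraversion labels."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```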
Therefore, we train an SVM model as the extraversion classifier to identify extroverts and introverts on Weibo without the help of self-reports. Based on the good accuracy and F1 score, we argue that machine learning models such as SVM can overcome the limitations of conventional approaches such as self-reports, greatly extend the scope of personality exploration and offer an opportunity to comprehensively assess the behavioral differences between extroverts and introverts on social media.
Differences between extroverts and introverts
Employing the obtained SVM classifier, we attempt to identify extroverts and introverts in a large population of Weibo users whose publicly available online profiles were collected through Weibo's open APIs between November 2014 and March 2016. Users with fewer than 100 tweets were omitted to avoid sparsity. After converting each user into a representative feature set, our SVM classifier automatically categorizes the user as an extrovert, neutral or introvert. From 16,856 users, we identify 4920 extroverts and 2329 introverts. To establish a comprehensive spectrum of the behavioral discrepancies between extroverts and introverts on social media, patterns in time, geography, online activity, emotional expression and attitude to virtual honor are investigated according to our seven research questions.
Temporal differences
Users with different personality traits might post tweets unevenly at different hours of the day, i.e., hourly posting patterns, which can be reflected by the distribution of tweets by hour of the day. As shown in Fig. 2, introverts prefer to post tweets from 8:00 to 18:00, whereas extroverts are active and excited from 19:00 to 1:00 of the next day, indicating that extroverts are more vibrant than introverts at night. Further evidence at the individual level is presented in Table 2: the proportion of extroverts tweeting during the day (from 8:00 to 19:00) is 0.557 and that of introverts is 0.608; the proportion of extroverts tweeting at night (from 19:00 to 1:00 of the following day) is 0.358 while that of introverts is 0.305. The active posting of extroverts at night suggests that their nightlife is more diverse than that of introverts.
Hourly posting pattern. The statistics of hourly proportions are obtained from all tweets posted by extroverts and introverts
Table 2 Tweeting habits during different periods of a day at the individual level
The interval between two temporally consecutive tweets of an individual Weibo user is an excellent indicator of the degree of preference for and dependency on social media. We calculate the average interval (in hours) for extroverts and introverts from the timestamps of their tweets. As shown in Table 3, introverts post tweets more frequently than do extroverts, suggesting a stronger dependency on social media. Specifically, the mean interval of introverts is 19.09 hours while that of extroverts is 28.10 hours. This finding is consistent with, and can be explained by, the previous finding that individuals who are socially isolated tend to depend on and indulge in social media to relieve loneliness caused by a lack of interactions with others in the real world [56]. Meanwhile, the differences in the standard deviations (62.25 and 74.36 hours, respectively) reveal that the posting frequency of introverts on Weibo is more consistent than that of extroverts. Moreover, if only intervals within one day are considered (i.e., intervals greater than 24 hours are ignored), the mean interval of extroverts decreases to 6.41 hours and that of introverts to 5.61 hours. In this case, the standard deviation of the interval for extroverts is 6.76 hours, greater than that for introverts (6.19 hours). This result further supports the finding that introverts post more frequently than extroverts and illustrates their greater preference for social media usage. In addition, we perform analysis of variance (ANOVA), which tests for differences among group means, to further verify the results. Table 4 shows that the differences between the two groups in tweeting habits and average intervals are statistically significant.
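A sketch of the interval computation, assuming each user's tweet timestamps are available as sorted POSIX seconds (function and parameter names are illustrative):

```python
def tweet_intervals(timestamps, within_day_only=False):
    """Intervals in hours between consecutive tweets of one user.

    If `within_day_only` is True, gaps longer than 24 hours are dropped,
    as in the within-one-day analysis described above.
    """
    gaps = [(b - a) / 3600.0 for a, b in zip(timestamps, timestamps[1:])]
    if within_day_only:
        gaps = [g for g in gaps if g <= 24.0]
    return gaps
```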
Note that most users are inactive on Weibo during the night period between 1:00 and 8:00 [23, 72]; hence, it is expected that the p-value of the test for tweeting during this period is above 0.01, i.e., the difference between extroverts and introverts cannot be detected when both groups are inactive.
Table 3 Average intervals and standard deviations of posting tweets for extroverts and introverts
Table 4 ANOVA of tweeting habits and average intervals
Spatial differences
An individual user can post geo-tagged tweets (or checkins) containing the latitude and longitude of their current location. This information provides a proxy to explore geographical differences between extroverts and introverts. To perform the geo-analysis, we extract the geographical locations of 57,710 tweets, of which 38,729 tweets are posted by extroverts and 18,981 by introverts.
For each geo-tagged tweet, we transform the longitude and latitude into the corresponding city (or county) through the GeoPy project [73]. Then, for each user, we obtain the list of cities or counties from which they posted tweets. The comparison of extroverts and introverts is shown in Fig. 3, which illustrates surprisingly significant differences in the spatial characteristics of users with different personality traits. The results show that 44.32% of introverts post tweets from a single city or county, perhaps their residence, suggesting that nearly half of introverts prefer to stay in a familiar city or county. By contrast, only 27.12% of extroverts are located in a single city or county, far less than the proportion of introverts. More specifically, 14% of extroverts and 15% of introverts post tweets from two cities (or counties), 29% of extroverts and 19% of introverts post tweets from 3–5 cities or counties, and the trend of extroverts tweeting in more places persists as the number of cities (or counties) increases from 6 to 20. Notably, more extroverts post tweets from 3–5 cities than from a single city, which implies that extroverts prefer visiting more cities than do introverts, for whom posting from a single city or county dominates. When the number of cities (or counties) is greater than 20, the proportion among introverts is unexpectedly and significantly greater than that among extroverts, implying that a tiny fraction of introverts might attempt to camouflage their loneliness by posting tweets from a large number of different places [56].
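Assuming the geo-tags have already been resolved to city or county names (e.g., via reverse geocoding), the distribution in Fig. 3 could be tallied as follows (a sketch with hypothetical names):

```python
from collections import Counter

def city_count_distribution(user_cities):
    """Share of users tweeting from exactly 1, 2, ... distinct places.

    `user_cities` maps each user id to the list of resolved city/county
    names of that user's geo-tagged tweets.
    """
    counts = [len(set(cities)) for cities in user_cities.values()]
    dist = Counter(counts)
    n = len(counts)
    return {k: v / n for k, v in dist.items()}
```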
Percentages of tweets posted in different numbers of cities (or counties) by extroverts and introverts
Beyond the city granularity, we can also perform a geographical comparison at a finer resolution, namely point-of-interest (POI), a detailed description of the function of small regions or point locations within cities. Specifically, POIs exclude private facilities, such as personal residences, but include many public facilities that attract the general public, such as retail businesses, amusement parks and industrial buildings; government buildings and significant natural features are also POIs. POIs also refer to hotels, restaurants, fuel stations and other categories in automotive navigation and recommendation systems [74]. In this study, nine types of POIs, namely restaurants, hotels, life services, shops, enterprises, transportation, entertainment, neighborhoods and education, are considered, and the percentages of the six most-visited POIs by extroverts and introverts are shown in Fig. 4. Most geo-tagged tweets are posted from restaurants, accounting for 66.38% of tweets within the nine selected POIs for extroverts and 61.10% for introverts. There are significant differences between extroverts and introverts in visits to shops and enterprises. The percentage of tweets from shops is 4.58% for extroverts and 7.68% for introverts, indicating that introverts prefer to check in or post tweets while shopping. Furthermore, the percentage of tweets from companies and enterprises is 4.46% for extroverts and 2.59% for introverts. Since companies and enterprises are individuals' workplaces, this result suggests that extroverts tend to post while working.
Percentages of geo-tagged tweets posted at different POIs by extroverts and introverts
Online activities
Diverse online activities, such as sharing, interacting and buying, on Weibo can be identified only through the tweets posted by users. Hence, by mining the text of tweets posted by extroverts and introverts, we attempt to illustrate the landscape of behavioral differences in online activities.
Sharing. Each tweet in Weibo is labelled with a tag to indicate its posting source. For example, if a user logs into Weibo and posts one tweet, the source could be a mobile device (e.g., iPhone) or web browser (e.g., Chrome). Weibo users share news, videos, and music with their friends or the public on social media, and the diverse sources of this shared information are contained in the posted tweets, for example, news websites, mobile applications or other social platforms that offer a sharing interface to Weibo. Additionally, tweets shared from selfie mobile software are tagged as selfies, which contain a self-portrait typically taken by a camera phone. Because of these features, we utilize the source label of each tweet to analyse the sharing behavior of extroverts and introverts. The contributions of the above four types of sharing in terms of all tweets are shown in Fig. 5. The fraction of news sharing of introverts (0.612%) is three times greater than that of extroverts (0.194%). By contrast, extroverts enjoy sharing more videos, music and selfies on social media than do introverts, especially selfies, e.g., the fraction of selfie tweets for extroverts is 0.354%, which is much higher than that of introverts (0.128%). It is widely believed that selfies are related to individual narcissism [75–77], and our findings further suggest that extraversion is positively coupled with selfies on social media.
Percentages of four types of sharing by extroverts and introverts on Weibo
Interacting. Interacting patterns, especially mentions and retweets, are comprehensively considered in the feature set extracted in section Extraversion classifier and used as input to the extraversion classifier. Intuitively, analysing behavioral differences in interacting patterns between extroverts and introverts identified by the classifier would be meaningless, because those differences have already been latently considered by the classifier. To avoid a biased comparison and provide solid evidence, we perform the difference analysis directly on the training set, i.e., the user self-reports. We use the Pearson correlation to measure the linear dependence between the interaction features and the extraversion scores of participants. Features with relatively high Pearson correlations (\(\text{Coef.}>0.13\)) with the extraversion scores are listed in Table 5. Interestingly, the features related to mentioning behavior (@ behavior) are positively correlated with extraversion scores. Mentioning is regarded as one of the most important forms of online interaction. Specifically, both the rate of @ across all tweets and the average rate of tweets with @ within one hour of one day reflect the frequency of interaction with other users, while the variance of tweets posted with @ by hour of the day or by day of the week illustrates the irregularity and randomness of a user's mentioning behavior. Therefore, on the basis of Table 5, we conclude that extroverts are more socially active and interactive than introverts on social media; however, their interactions are more casual and temporally less regular than those of introverts.
Table 5 Interacting features with relatively high Pearson correlations with extraversion scores
Buying. The most intrinsic nature of social media is status updates; thus, an experience such as buying or shopping can be identified by counting related keywords, namely the word-count method, which has been employed extensively in psychology [78] in recent decades. In this study, 14 buying keywords are selected to identify buying behavior (e.g., BUY and SHOPPING), responses to sales promotions (e.g., DISCOUNT and 11.11, a famous promotion day in China advocated by Taobao Inc.) and mentions or shares of online shopping malls (e.g., AMAZON and TAOBAO). Tweets posted by extroverts or introverts that contain one or more of the selected keywords are labelled as buying related, and the fraction of buying tweets of each user is defined as the Purchasing Index, which reflects the intensity of buying behavior or purchasing intention. After calculating the Purchasing Index of each user, the buying behavior of extroverts and introverts is compared in Fig. 6. The mean Purchasing Index of extroverts is 0.044 and that of introverts is 0.048, about 10% larger. The 25th percentile, median, 75th percentile and maximum of the Purchasing Index are 0.020, 0.033, 0.054 and 0.774, respectively, for extroverts, and 0.0239, 0.0402, 0.0609 and 0.8480 for introverts. Figure 7 shows the cumulative distribution function (CDF) and probability distribution function (PDF) of the Purchasing Index for extroverts and introverts. The Purchasing Indexes of more than 95% of users are less than 0.1, and the significant difference between extroverts and introverts is mainly located in this region. Specifically, at the same Purchasing Index level (e.g., >0, >0.04, >0.06, >0.08), the probability for introverts is always greater than that for extroverts, suggesting that introverts are more inclined than extroverts to publish tweets referring to purchasing.
This conclusion could apply to the advertising and sale of commodities and other realistic scenarios, i.e., introverts might be ideal marketing targets for online promotions. We apply ANOVA to investigate whether the differences in the Purchasing Index are significant. As shown in Table 6, the p-value of less than 0.001 indicates that the differences between extroverts and introverts are statistically significant.
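A sketch of the word-count computation of the Purchasing Index; the keyword set below is an illustrative subset of the 14 keywords, not the exact list used in the study:

```python
# Illustrative subset of the buying keywords (the paper uses 14).
BUYING_KEYWORDS = {"buy", "shopping", "discount", "11.11", "amazon", "taobao"}

def purchasing_index(tweets, keywords=BUYING_KEYWORDS):
    """Fraction of a user's tweets containing at least one buying keyword
    (the word-count method described above)."""
    if not tweets:
        return 0.0
    hits = sum(1 for t in tweets if any(kw in t.lower() for kw in keywords))
    return hits / len(tweets)
```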
Box plot of the Purchasing Index of extroverts and introverts. The bottom line of the box represents the 25th percentile, the line inside the box represents the median, the uppermost line of the box represents the 75th percentile, and the topmost vertical line represents the maximum Purchasing Index
The probability distribution of the Purchasing Index of extroverts and introverts
Table 6 ANOVA of purchasing indexes
Online emotional expression
Tweets on social media not only deliver factual information but also the feelings of users, and these feelings can be automatically classified into different emotions by mining the text of tweets [23]. Because extraversion is widely believed to be associated with higher positive affect, namely, extroverts experience more positive emotions [59, 60], we investigate the differences between extroverts and introverts from the perspective of emotional expression. By employing a previously built system named MoodLens [23], we categorize each tweet into one of five emotions: anger, disgust, happiness, sadness or fear. Note that tweets without significant emotional propensity are ignored. Then, for each individual, either extrovert or introvert, we calculate the emotion index for all five sentiments, which is defined as the fraction of corresponding emotional tweets in the tweeting history and quantitatively represents the user's emotional disposition on social media.
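Assuming each tweet has already been labelled by a MoodLens-style classifier (or left unlabelled when no emotion is significant), the per-user emotion indexes could be computed as follows (names are illustrative):

```python
from collections import Counter

EMOTIONS = ("anger", "disgust", "happiness", "sadness", "fear")

def emotion_indexes(tweet_emotions):
    """Per-user emotion indexes: the fraction of a user's emotional tweets
    in each of the five categories; tweets without a significant emotion
    (None here) are ignored, as in the text."""
    labelled = [e for e in tweet_emotions if e in EMOTIONS]
    n = len(labelled)
    counts = Counter(labelled)
    return {e: (counts[e] / n if n else 0.0) for e in EMOTIONS}
```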
Figure 8 shows the CDFs and PDFs of the five emotion indexes of extroverts and introverts. At the same Anger Index level (\((0.1, 0.4)\)), Fear Index level (\((0, 0.25)\)) and Disgust Index level (\((0.05, 0.15)\)), the probabilities of introverts are always greater than those of extroverts, indicating that introverts post more tweets associated with negative feelings than do extroverts. However, for the Sadness Index and Happiness Index, the probabilities of introverts are always less than those of extroverts, suggesting that extroverts post tweets associated with joy or sadness with greater likelihood. Note that as can be seen in Figs. 8(d) and 8(e), the differences in the Happiness Index and Disgust Index are subtle, but ANOVA (shown in Table 7) confirms the significance with \(p\text{-value}<0.001\). Our findings are consistent with the previous statement that extraversion is associated with higher positive affect; however, we also provide evidence that introversion is associated with high arousal and negative emotions like anger, fear and disgust and that extraversion is positively correlated with sadness. Indeed, on the basis of the data-driven approach on a large sample, our study simultaneously confirms the existing conclusion and provides new insights.
CDFs and PDFs of five emotion indexes for extroverts and introverts. The mean values for the Anger Index are 0.124 and 0.155, the mean values for the Fear Index are 0.050 and 0.073, the mean values for the Sadness Index are 0.247 and 0.208, the mean values for the Happiness Index are 0.474 and 0.452, and the mean values for the Disgust Index are 0.106 and 0.113
Table 7 ANOVA of mood indexes
Attitudes to virtual honor
Weibo grants many optional badges that users can obtain by completing certain operations required by the platform. For instance, users can connect their Weibo and Taobao accounts to obtain the "Binding-Taobao" badge. This behavior, exposing the Taobao account on social media, is a risk to property security and privacy; however, the badges a user obtains are displayed publicly and are treated as an honor in the virtual world. Therefore, a user's response to badges indicates their attitude to virtual honor on social media. We investigate the difference in attitude to virtual honor between extroverts and introverts. The "Binding-Taobao" badge is selected as a relevant badge for the difference analysis, and the distributions of extroverts and introverts with respect to this badge are shown in Fig. 9. Among extroverts, 60.7% have the "Binding-Taobao" badge and 39.3% do not; among introverts, 53.9% have the badge and 46.1% do not. Clearly, the proportion of extroverts who obtain "Binding-Taobao" badges is larger than that of introverts. Additionally, we examine various other badges on Weibo, including "Red envelope 2015", "Public welfare", "Travel 2013" and "Red envelope 2014". The badge statistics indicate that extroverts value badges more than introverts do; in other words, extroverts attach more importance to online virtual honor. Furthermore, from a marketing perspective, users with the "Binding-Taobao" badge would be viewed as ideal targets, i.e., extroverts would be recommended to with greater odds. However, as mentioned above, introverts' higher Purchasing Index demonstrates stronger potential shopping intention. The badge could therefore be a misleading signal, and our findings suggest that additional features should be comprehensively considered in marketing decisions.
The proportions of extroverts and introverts with and without the Binding-Taobao badge
To summarize, from the perspectives of tempo-spatial patterns, online activities, emotional expression and attitudes to virtual honor, we establish a comprehensive picture of how extroverts tweet differently from introverts on social media.
Personality traits, such as extraversion, are believed to play fundamental roles in driving human action; however, a detailed and comprehensive understanding of how people with different personality traits behave is missing, especially in the context of social media, which has become an indispensable part of daily life. Meanwhile, the lack of large samples and unavoidable subjectivity bias conventional methods such as self-reports. Hence, in this study, we argue that starting from a small-scale but refined voluntary sample and establishing a map between self-reports and online profiles can help train a machine learning model to automatically and objectively infer the personalities of a massive number of individuals without the costly expense of survey questionnaires. Specifically, a medium-sized sample (active users with at least 100 posts on Weibo) is selected to complete self-reports, and three types of features extracted from their profiles help train and optimize a model with competent performance (52.28% for extroverts and 49.49% for introverts) in extraversion prediction. Indeed, the SVM classifier helps us identify more than 7000 extroverts and introverts on Weibo and, to the best of our knowledge, build the first complete picture of how extroverts and introverts tweet differently on social media across multiple dimensions.
In addition to conclusions consistent with existing findings from conventional methods, new and insightful conclusions about Weibo are obtained. We demonstrate that introverts on Weibo post more frequently than do extroverts, especially during the day, which is inconsistent with findings on Twitter or Facebook [79, 80]. A tiny fraction of introverts locate themselves in a large number of different areas (>20); it would be worthwhile for scholars to examine whether this unexpected phenomenon also appears on Twitter. As for online buying intention, introverts devote more effort to posting shopping tweets. We also find that introverts post more high-arousal emotions. Taken together, introverts tend to use Weibo frequently to assuage their isolation in the real world, which is consistent with earlier studies [39, 40]. Finally, we suggest that extroverted individuals may be optimal candidates for online promotion campaigns involving virtual honor. Our findings offer solid evidence of the feasibility of machine learning approaches to personality research and shed light on realistic applications such as online marketing and behavior analytics.
This study has inevitable limitations. The personality prediction model could be improved by adding new features, including footprints of Likes [20], user-generated pictures [27], headshots and emoticons [81], and exploring in more detail how culture shapes online behavior would better contextualize our findings. Besides, we only investigate personality differences and the resulting behavioral patterns along the dimension of extraversion, ignoring other personality traits such as openness or conscientiousness; further exploration of these traits would substantially enrich the picture of how personality impacts online behavior. All these limitations point to promising directions for future work.
MTurk:
Amazon Mechanical Turk
MAU:
monthly active users
SVM:
support vector machine
NEO PI-R:
Revised NEO Personality Inventory
NEO-FFI:
NEO Five-Factor Inventory
AUW:
age of a user on Weibo in units of days
NT:
the total number of tweets the user posted
NFER:
the number of the user's followers
NFEE:
number of the user's followees
LIWC:
Linguistic Inquiry and Word Count
POI:
point-of-interest
Boyd DM, Ellison NB (2007) Social network sites: definition, history, and scholarship. J Comput-Mediat Commun 13(1):210–230
Barash V, Ducheneaut N, Isaacs E, Bellotti V (2010) Faceplant: impression (mis)management in Facebook status updates. In: ICWSM
Larsen RJ, Buss DM (2008) Personality psychology: domains of knowledge about human nature. McGraw-Hill Education, New York
This work was supported by the National Key Research and Development Program of China (Grant No. 2016QY01W0205), NSFC (Grant Nos. 71501005 and 61421003) and the fund of the State Key Lab of Software Development Environment (Grant No. SKLSDE-2017ZX-05).
The self-reports and online profiles of users mentioned in this study are publicly available to the research community after careful anonymization and can be downloaded freely through https://doi.org/10.6084/m9.figshare.4765150.v1.
State Key Lab of Software Development Environment, Beihang University, Beijing, China
Zhenkun Zhou & Ke Xu
School of Economics and Management, Beihang University, Beijing, China
Jichang Zhao
Beijing Advanced Innovation Center for Big Data and Brain Computing, Beijing, China
Zhenkun Zhou
Ke Xu
ZJ, ZZ and XK conceived of and designed the research. ZZ and ZJ conducted the experiments and analysed the results. ZJ, ZZ and XK wrote the paper. All authors read and approved the final manuscript.
Correspondence to Jichang Zhao.
Zhou, Z., Xu, K. & Zhao, J. Extroverts tweet differently from introverts in Weibo. EPJ Data Sci. 7, 18 (2018). https://doi.org/10.1140/epjds/s13688-018-0146-8
Prostate volume and its relationship with anthropometric variables among different ethnic groups of South-Kivu, DR Congo
L. E. Mubenga (ORCID: orcid.org/0000-0002-9825-9023)1,2,
M. P. Hermans3,
D. Chimanuka1,
L. Muhindo1,
E. Bwenge4,5 &
B. Tombal6
The prevalence of benign prostate hyperplasia (BPH) varies among individuals from different races or ethnic groups. South-Kivu province (DR Congo) has several morphologically different ethnic groups. Our aim was to compare prostate volume and assess its possible association with specific anthropometric measurements among major ethnic groups.
This is a cross-sectional study of male subjects ≥ 40 years old, enrolled at 10 different sites across South-Kivu chosen for both easy access and ethnic diversity. We compared urological features, anthropometric parameters, and body fat composition among the 979 subjects who met the study criteria: Shi (n = 233), Lega (n = 212), Havu (n = 204), Bembe–Fuliru (n = 172), and minority ethnic groups (n = 158).
Prostate volume was statistically different among ethnic groups. Median (interquartile range) prostate gland size was significantly larger in Lega men, 55 (38–81) cc, and smaller in Havu men, 20 (17–24) cc; p < 0.001. Overall, an enlarged prostate (≥ 30 cc) was documented in 91% of Lega men, in 59% of intermediate-class men (Shi, Bembe–Fuliru, others), and in a mere 11% of Havu men. In multivariate analysis, prostate volume was significantly associated with age (p < 0.001), ethnic group (p < 0.001), residence (p = 0.046), and fasting blood glucose (p = 0.001). Conversely, prostate volume was associated neither with anthropometric parameters nor with body fat composition.
Prostatic size varies widely among men from different ethnic origins in South-Kivu. Different genetic determinants and cellular composition of prostatic gland could represent risk factors that need to be examined in forthcoming studies.
Prostate volume is modulated by numerous factors, the best documented being aging and androgen levels [1]. The prevalence of benign prostate hyperplasia (BPH) varies widely among individuals from different races or ethnicities [2].
However, most of the available literature describes separately associations between BPH and anthropometrics [3,4,5], or ethnicity [2, 6, 7].
Genetic and parental factors, cellular prostatic composition, socioeconomic status, and dietary habits vary considerably among different ethnic groups and are considered as additional risk factors for prostate enlargement [6,7,8,9,10,11,12,13,14].
The province of South-Kivu (DR Congo) is inhabited by many ethnic groups that differ in sociocultural as well as anthropo-morphological and geographical parameters [15,16,17,18,19,20].
This study aimed to compare the prostatic volume in major representative ethnic groups of South-Kivu and assess its possible association with anthropometric parameters and body fat composition to document candidate risk factors for prostate enlargement specific to that area.
South-Kivu province has 13 major ethnic groups among which the most numerically important are the Shi, Lega, Havu, Fuliru, and Bembe. These groups differ by culture, dietary habits, geographical location, and morphological characteristics [15,16,17,18,19,20,21,22].
The screening team visited 10 different sites given their ease of access, security, and ethnic diversity. Advertisement for the prostate screening project was made on local radios, using posters, and during worship ceremonies in churches, with announcements made by officiants 6 weeks prior to data gathering.
Recruitment of participants was based on opportunity sampling. Men ≥ 40 years of age were offered free prostatic screening; research team members settled in major local urban hospitals and stayed at least three days at each center.
The language used in the field was Swahili, the commonest language in this part of Eastern DR Congo. Trained physician examiners used a structured questionnaire to gather ethno-tribal data and medical history, as well as urological scores [International Prostate Symptom Score (IPSS), quality of life (QOL) score] and the duration of urological symptoms.
The French versions of the IPSS and QOL scores were translated into Swahili by a panel of professional translators. The translated versions were successively tested on an arbitrary sample of 50 subjects to ensure a good understanding of the different points addressed.
Fasting blood was collected, and each participant had a clinical examination, including blood pressure measurement, anthropometric parameters (body weight, height, and waist circumference), and body fat composition (fat mass, visceral fat) measured with a tetrapolar bioelectrical impedance body fat analyzer (OMRON BF 508). The fat mass categorization and visceral fat scale were those provided in the manufacturer's manual.
The conicity index (CI), which evaluates the statural distribution of body fat, was calculated using the Valdez formula [23, 24]:
$$\mathrm{CI} = \frac{\mathrm{WC}}{0.109 \times \sqrt{W/H}}$$

where WC is the waist circumference (m), W the weight (kg), H the height (m), and 0.109 a constant.
A cutoff of 1.25 was used to classify CI into normal and high categories [24].
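The CI computation and its 1.25 cutoff can be sketched as follows (a minimal illustration in Python; the function and variable names are ours, not from the study):

```python
import math

def conicity_index(waist_m: float, weight_kg: float, height_m: float) -> float:
    """Valdez conicity index: WC / (0.109 * sqrt(weight / height))."""
    return waist_m / (0.109 * math.sqrt(weight_kg / height_m))

def ci_category(ci: float, cutoff: float = 1.25) -> str:
    """Classify CI as 'normal' or 'high' using the study's 1.25 cutoff."""
    return "high" if ci >= cutoff else "normal"

# Hypothetical subject: 0.94 m waist, 72 kg, 1.70 m tall
ci = conicity_index(0.94, 72.0, 1.70)
```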
Urological features of BPH were assessed from lower urinary tract symptoms severity, peak flow rates at uroflowmetry, post-voiding residual (PVR) urine volume measured by suprapubic ultrasonography, and prostate volume (PV) estimated by pelvic ultrasonography.
Men with a prostate volume ≥ 30 cc were considered to have an enlarged gland [25,26,27]. The prostate volume was categorized as follows: < 30 cc (normal volume), 30–60 cc (moderately high volume), and > 60 cc (very high volume).
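For illustration, the volume thresholds above translate into a simple classifier (a sketch under the study's cutoffs; the function names are ours):

```python
def is_enlarged(pv_cc: float) -> bool:
    """Enlarged gland: prostate volume >= 30 cc."""
    return pv_cc >= 30

def prostate_volume_category(pv_cc: float) -> str:
    """Map prostate volume (cc) to the study's three categories."""
    if pv_cc < 30:
        return "normal"
    if pv_cc <= 60:
        return "moderately high"
    return "very high"
```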
Subjects with a medical history of prostate surgery, ongoing medication(s) for BPH, prostate abnormality on digital rectal examination (DRE), and/or pelvic ultrasonography, suggesting prostate cancer or prostatitis, were excluded from the study.
Statistical analyses were carried out using Stata 14. Categorical data were summarized into frequencies and proportions. The distribution of continuous data was assessed graphically and statistically. Normally distributed data are reported as means with standard deviations (SD), while medians with interquartile ranges (IQR) were used for reporting non-Gaussian data.
The bivariable association between dependent and independent variables was examined using analysis of variance (ANOVA) for Gaussian data. When the assumptions of normality or equality of variances were not met, a Kruskal–Wallis rank test was used. Variables were selected into the multivariable regression model based either on a p value ≤ 0.2 and/or on biological plausibility. We report β-coefficients (average expected mean differences) with 95% confidence intervals (CIs) as the measure of association. The significance level for all analyses was set at a type 1 error (α) of 5%.
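The authors ran these tests in Stata 14; as an illustration only, the ANOVA-versus-Kruskal–Wallis selection logic could be sketched with SciPy (the function name and the Shapiro–Wilk/Levene pre-checks are our assumptions about how the assumption checks might be operationalized):

```python
from scipy import stats

def compare_groups(*groups, alpha=0.05):
    """Return (test_name, p_value) for a k-group comparison.

    Use one-way ANOVA when every group passes a Shapiro-Wilk normality
    check and Levene's test detects no variance inequality; otherwise
    fall back to the Kruskal-Wallis rank test."""
    normal = all(stats.shapiro(g)[1] > alpha for g in groups)
    equal_var = stats.levene(*groups)[1] > alpha
    if normal and equal_var:
        return "ANOVA", stats.f_oneway(*groups)[1]
    return "Kruskal-Wallis", stats.kruskal(*groups)[1]
```

With non-Gaussian prostate-volume data, such a routine would fall through to Kruskal–Wallis, matching the paper's reporting of medians with IQRs.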
Informed consent was obtained from each participant, and the study protocol was approved by the local ethical committee.
Data were obtained from 979 subjects from five ethnic groups. There was no significant difference regarding age between ethnicities. However, prostatic volume was statistically different among ethnic groups. Average prostate gland size was significantly larger in Lega men, and smaller in Havu men. Urological features of the studied population are listed in Table 1.
Table 1 Urological features
Obesity markers including waist circumference, BMI, body fat, and visceral fat were significantly higher in Bembe–Fuliru men, whereas metabolic correlates of BPH (conicity index and fasting glucose) were significantly higher in Lega men. There was no statistical difference in blood pressure between groups (Table 2).
Table 2 Anthropometric parameters and metabolic correlates of BPH
Regarding prostatic volume, Lega and Havu men showed the greatest ethnic difference, while the other ethnic groups were intermediate in size.
Thus, 91% of Lega men, 59% of subjects from intermediate ethnicities, and 11% of Havu men had an enlarged prostate.
In addition, among BPH subjects (prostatic volume ≥ 30 cc) younger than 60 years (n = 184), Lega men accounted for 71 (39%) versus 9 (5%) for Havu men (Table 3).
Table 3 Prostate volume by ethnic origin
In unadjusted analysis, prostate volume was associated with age, ethnic group, place of residence, conicity index, BMI, waist circumference, visceral fat, fasting blood glucose, and quality of life score.
However, following adjustment, age, ethnic group, place of residence, and fasting blood glucose were significantly associated with prostatic volume (Table 4).
Table 4 Bivariate and multivariate analyses of the association between ethnic groups and prostate volume
In this population-based study, we assessed the interrelationships between anthropometric parameters, including body composition, with prostate volume in major ethnic groups of South-Kivu.
Among these groups, the Shi ethnonym [15, 18, 21, 22] designates the largest ethnic group in the province. They live in the center and the western part of the province. The work of Hiernaux [15] in 1953 recorded an average height of 163.5 cm among adult male Shi individuals. Their habitual diet is varied.
The Lega [17, 19] are the second largest group; they live in the southwestern part of the province [19].
The Lega are constitutively shorter: 160.4 cm according to the data of Hiernaux in 1953 [15, 17].
Their diet is saltier, meat-based, and richer in fat. Fish is eaten mostly salted or smoked.
The Bembe and Fuliru live in southern Kivu, on a territory neighboring Burundi. Their morphological characteristics are similar to those of the Burundian Hutu. By extrapolation, their average height would approach 165.9 cm according to Hiernaux's data [16, 17].
Their diet is characterized by low fat and animal proteins intakes and a high proportion of carbohydrates and vegetable proteins, akin to their counterparts in Burundi.
The Havu [16, 20, 21] live in the north of the province, mainly along the shores of Lake Kivu and on Idjwi Island.
They are morphologically similar to the Rwandan Tutsi, whose history they share. Havu individuals are constitutively taller; by extrapolation, their average height would approach the 176.5 cm reported for Rwandan Tutsi.
Their diet is enriched in fresh fish and dairy products. Cattle are a sign of wealth, thus intended for sale or prestige, and livestock is rarely used for meat consumption.
Other tribal groups have characteristics intermediate between these majority groups.
Interethnic marriages produce individuals whose morphological characteristics blend those of the two parents, while they inherit the paternal ethnonym and, accordingly, patrilineal sociocultural traits.
This study highlighted a significant variability of prostate volume among South-Kivu ethnic groups regardless of anthropometric parameters and body composition (Tables 1 and 2). In addition, increased prostatic volume was independently associated with ethno-geographic origin (Table 4), confirming previous reports [5, 6, 13, 28].
Among Lega men, prostate volume was significantly larger than in other ethnic groups. Conversely, among Havu men, prostate volume was significantly smaller than in other groups. It was unusual to measure prostate sizes < 30 cc among Lega subjects, and > 20 cc among Havu subjects (Table 3).
As noted by Colon et al. [6], genetic variations between patients of different ethnicities represent additional modulators for the development of BPH.
Several genetic factors putatively involved in the development of BPH have been extensively studied. One relates to androgen receptor cytosine-adenine-guanine (CAG) repeats [8, 10]: the number of CAG repeats in the androgen receptor (AR) gene has been hypothesized to contribute to BPH incidence. In addition, Mitsumori et al. [8] found an association between short CAG repeat length and a larger prostatic gland. Altered gene expression has also been proposed as a driver of BPH [10]. Luo et al. [29] reported a subset of 76 genes consistently upregulated in BPH compared with normal prostate tissue, while Stamey et al. [10] identified 22 such genes. These genes act at various levels of cell proliferation and growth factor regulation [10].
Interestingly, BPH in men younger than 60 years was overrepresented in the Lega subgroup. Similarly, Patel and Parsons [30] noted that 50% of patients who underwent surgical treatment for BPH before 60 years of age had an inherited autosomal dominant form of the disease; these subjects also had massively enlarged prostate glands. A genetic cause becomes even more likely when BPH occurs in younger patients (under 50 years) [12]. Pearson et al. [31] demonstrated that a strong familial history of early onset and marked prostate enlargement was more likely associated with inherited risk than with symptom severity.
The genetic contribution to BPH is also revealed by cellular composition analysis of prostate cells and/or androgenic expression. Henderson et al. [7] noted a difference in the cellular composition of prostatic biopsies performed in the transition zone among populations from different origins, namely African-Americans, European-Americans, and Japanese. African-Americans had a higher stroma/epithelium ratio. Further, these authors observed variability in dihydrotestosterone (DHT) concentration among subjects of different races. We hypothesize that racial differences in cellular composition of the prostate and DHT concentration could be at play in the variation of prostate parameters among different ethnic groups in the present study.
Conversely, no study has yet demonstrated a protective influence of one or more genes. However, Mehta and Vezina [32] highlighted the potential protective role of aryl hydrocarbon receptor (AHR) signaling in delaying the onset of BPH, reporting putative mechanisms by which AHR-signaling activation interferes with the pathological processes driving BPH.
Similarly, the smaller prostate volume found among Havu subjects in the present study suggests the existence of an ethnically derived, likely genetic, intrinsic protective factor against prostate enlargement.
Lifestyle and dietary habits contribute substantially to BPH pathogenesis [33]. Lega people habitually consume a fattier and saltier diet with a high amount of red meat, a pattern considered to raise the odds of BPH. Espinosa [14] suggested that dietary patterns associated with increased risks of BPH include high consumption of starch and red meat. Parsons [34] likewise suggested that meat and fat consumption were associated with increased odds of BPH. Wang et al. [13] demonstrated the role of diet in the difference in prostatic characteristics between Mongolians and Han Chinese: Mongolian men had a significantly greater consumption of meat, and consequently a higher incidence of intraprostatic chronic inflammation and intraglandular calcification than Han men.
Conversely, Havu subjects live along Lake Kivu and on Idjwi Island. This ethnic group has a high consumption of fresh fish, vegetables, and pulses. They also raise cattle for sale. Vegetables and fruits are known to have protective effects against prostate enlargement. Cold-water fish are also rich in polyunsaturated fatty acids (PUFAs), which have a beneficial effect on BPH [35, 36]. We did not assess the amount of PUFAs provided by traditional fishing in the Great African Lakes region.
Despite less healthy dietary habits, Lega subjects did not exhibit a higher prevalence of obesity markers or body fat: the anthropometric parameters (height, weight, BMI, waist circumference) and body composition measures (total fat and visceral fat) were highest not in the Lega group but in the intermediate group of Bembe and Fuliru men (Table 2). Further, none of these parameters were associated with prostate volume in multivariable analyses.
Even though cultural characteristics are more prevalent in rural areas, differences in prostate volume between the two extreme groups (Lega and Havu) were consistent across rural and urban areas.
We consider that either individuals conserve their dietary habits wherever they live, or genetics has more influence than dietary variables in determining prostate volume.
Study limitations
This survey has several limitations. One is the genetic mixing of populations from interethnic marriages, which may produce offspring with intermediate ethnic characteristics. Another is the lack of prostate biopsies to rule out undiagnosed prostate cancer and/or to analyze prostatic cellular distribution.
Our data provide clear evidence of considerable interethnic variability in prostatic size among men living in South-Kivu. Further, prostate volume was neither associated with anthropometric parameters nor with body fat composition. Thus, genetics and adverse modulators of cellular composition of the prostatic gland are candidate risk factors that need further investigation as to their potential role in the pathogenesis of BPH in sub-Saharan Africa.
AHR: aryl hydrocarbon receptor
AR: androgen receptor
BMI: body mass index
BPH: benign prostate hyperplasia
CAG: cytosine-adenine-guanine
CI: conicity index
DHT: dihydrotestosterone
DRE: digital rectal examination
FBG: fasting blood glucose
IIEF score: International Index of Erectile Function score
IPSS: International Prostate Symptom Score
IQR: interquartile range
PUFAs: polyunsaturated fatty acids
PV: prostate volume
PVR: post-voiding residual
Qmax: peak uroflowmetry
QOL score: quality of life score
VFS: visceral fat score
Ozden C, Ozdal OL, Urgancioglu G, Koyuncu H, Gokkaya S, Memis A (2007) The correlation between metabolic syndrome and prostatic growth in patients with Benign Prostatic Hyperplasia. Eur Urol 51:199–206
Zeng QS, Xu CL, Liu ZY, Wang HQ, Yang B, Xu WD, Jin TL et al (2012) Relationship between serum sex hormones levels and degree of benign prostate hyperplasia in Chinese aging men. Asian J Androl 14(5):773–777
Badmus TA, Asaleye CM, Badmus SA, Takure AO, Ibrahim MH, Arowolo OA (2013) Benign prostate hyperplasia: average volume in southwestern Nigerians and correlation with anthropometrics. Niger Postgrad Med J 20(1):52–56
The authors are grateful to the South-Kivu Provincial Health Division, and to the medical staff of host hospitals in Katana, Kalehe, Nyabibwe, Birava, Uvira, Walungu, Luhwinja, Kaziba, Bukavu, and Kamituga for helping to carry out this study.
This study was funded by research Grants from the "Société Belge d'Urologie" (SBU), and the Vlaamse Interuniversitaire Raad (VLIR).
Department of Urology, Université Catholique de Bukavu (UCB), 02, Michombero Street, Bukavu, Democratic Republic of Congo
L. E. Mubenga, D. Chimanuka & L. Muhindo
Institut Supérieur des Techniques Médicales de Bukavu (ISTM Bukavu), Bukavu, Democratic Republic of Congo
L. E. Mubenga
Division of Endocrinology and Nutrition, Cliniques Universitaires St-Luc and Institut de Recherche Expérimentale et Clinique (IREC), Université Catholique de Louvain, Brussels, Belgium
M. P. Hermans
Institute of Health and Society - Institut de Recherche Santé et Société (IRSS) School of Public Health, Université Catholique de Louvain, Brussels, Belgium
E. Bwenge
Ecole Régionale de Santé Publique, Université Catholique de Bukavu (UCB), Bukavu, Democratic Republic of Congo
Department of Urology, Université Catholique de Louvain (UCL), Brussels, Belgium
B. Tombal
D. Chimanuka
L. Muhindo
LEM provided a substantial contribution to conception and design, acquisition of data, and drafting the article. MPH revised critically the manuscript. DC actively participated in documentary research, participants' follow-up, and data collection. LM actively participated in documentary research and data collection. EB performed statistical analyses and interpretation of data. BT assessed the manuscript and provided final approval of the version to be submitted. All authors read and approved the final manuscript.
Correspondence to L. E. Mubenga.
Supplementary data to this article are available and can be obtained if needed.
The study was approved by the Catholic University of Bukavu Ethics and Research Committee (Reference: UCB/CIE/NC/002/2016). Each participant approved to participate in this study. At each step of the present study, confidentiality and anonymity rules were observed.
Mubenga, L.E., Hermans, M.P., Chimanuka, D. et al. Prostate volume and its relationship with anthropometric variables among different ethnic groups of South-Kivu, DR Congo. Afr J Urol 26, 32 (2020). https://doi.org/10.1186/s12301-020-00040-x
Anthropometric parameters
South-Kivu
Journal of Engineering and Applied Science
Performance evaluation of furrow irrigation water management practice under Wonji Shoa Sugar Estate condition, in Central Ethiopia
Belay Yadeta (ORCID: orcid.org/0000-0002-6348-8352)1,
Mekonen Ayana1,
Muluneh Yitayew2 &
Tilahun Hordofa3
Journal of Engineering and Applied Science volume 69, Article number: 21 (2022)
Wonji Shoa Sugar Estate (WSSE), established in 1951, is one of the large-scale irrigation schemes in Ethiopia. The aim of this study was to evaluate the performance of the estate's current furrow irrigation water management practices. The evaluation was based on field experiments and the WinSRFR model. Ten fields were selected from the commonly used furrow lengths (32, 48, and 64 m), and application efficiency, distribution uniformity, and deep percolation were used as performance indicators. The existing furrow irrigation showed poor performance, so as an improvement option the inflow rate and cutoff time were altered while keeping furrow geometry constant. Advance and recession times for all furrow lengths were recorded and simulated using the WinSRFR model to obtain an accurate irrigation cutoff time. The results showed that the time allocated to each furrow length was not accurately determined. When both inflow rate and cutoff time were changed, application efficiency and deep percolation improved significantly, whereas distribution uniformity did not change. For almost all statistical indices used, the performances predicted by the model were better than the values computed for the existing situation. From these results, it can be concluded that the inflow rate and cutoff time should be changed to attain good performance and increase furrow irrigation efficiency.
Performance of current furrow irrigation under Wonji Shoa sugar Estate was evaluated.
For the evaluation purposes, different performance indicators were used from the measured data and the result was simulated using WinSRFR model.
The existing furrow irrigation practices showed poor performance.
Based on the result obtained from the existing furrow irrigation performance, the improvement options were proposed and evaluated.
Irrigation is the artificial application of water to crops, using gravity or pressure to convey water from the source to the field in order to meet the soil moisture deficit in the crop root zone. There are three major categories of irrigation systems: surface irrigation, subsurface irrigation, and pressurized irrigation (Tadele, 2019). Surface irrigation is the oldest and most commonly used type, transporting water from the source to the irrigated field by gravity [9]. In many developing countries, surface irrigation is widely used because of its low cost and energy requirement compared to subsurface and pressurized systems (sprinkler and drip), which are usually more efficient [11]. Among the surface irrigation methods, furrow irrigation is the most widely used, from small- to large-scale systems. If not well designed and operated, however, furrow irrigation can be ineffective because of the complex interactions between field design, soil infiltration characteristics, and irrigation water management practices. Maintaining efficiency and effective utilization of irrigation water at an acceptable level in large-scale schemes is a major challenge in irrigation management. Most often, poor performance of furrow irrigation systems results from incorrect dimensioning and inappropriately designed furrow systems [7].
Evaluation of the furrow irrigation performance for appropriate irrigation water management practices is required to overcome the losses of excess water, water stress on the crop, and its adverse effects. Performance evaluation is the systematic analysis of an irrigation system or management based on measurements taken under field conditions and practices normally used and comparing the same with an ideal one. Traditionally, irrigation assessment is conducted to evaluate the performance of existing irrigation systems. A full irrigation evaluation involves an assessment of the water source characteristics, pumping, distribution system, storage system, and field application systems. However, assessments are also conducted on several components of farm irrigation systems. The performance of an irrigation system at the field scale depends on several design variables, management variables, and system variables. In general, a set of indicators are used for evaluating the performance of an irrigation scheme. The irrigation performance indicators are classified as engineering, field water use, crop and water productivity, and socioeconomic indicators (Adane, 2020). The performance of furrow irrigation can be enhanced through proper designing based on soil, crop, topography, size, and shape of the irrigated area that affect its performance [19], because furrow irrigation system performance is influenced by various design elements. Many studies have demonstrated that improving irrigation management and field design can greatly enhance irrigation performance [2].
Nowadays, models are increasingly used to optimize furrow irrigation decision variables. The most common models used for surface irrigation systems are the fully hydrodynamic, zero inertia, kinematic wave, and volume balance models [20]. Fully hydrodynamic models use the most general form of the Saint-Venant equations, the nonlinear partial differential equations that characterize the unsteady, gradually varied flow of water in a furrow. Zero inertia models are based on the observation that flow velocities encountered in surface irrigation are very small and that changes in velocity with respect to time and space are virtually non-existent. Kinematic wave models assume that flow in the irrigation channel is at a fixed depth; the depth gradient of the flow and the inertial terms of the momentum equation are very small and are neglected. The volume balance model applies primarily to the advance phase, and the momentum equation is neglected entirely [6, 20]. Furrow irrigation systems should be designed to ensure an adequate and uniform water application over the fields and to minimize potential water losses. Many researchers in this field have worked on optimizing the design of furrow irrigation systems to improve their performance [1, 7, 15].
The performance evaluation of the furrow irrigation method can be undertaken to determine how well the irrigation meets the water requirements and how well the applied water is distributed along the furrow run. The main purpose of furrow irrigation performance evaluation is to achieve efficient and effective use of the irrigation water applied to the fields, because poor design and the lack of suitable criteria for irrigation systems are generally responsible for uneven water application, leading to wastage of water, waterlogging, and salinity problems, especially in surface irrigation systems (Eslamian et al., 2017).
There are several conditions imposed by the researchers to carry out performance evaluation of irrigation schemes and some of these are:
When something is not working as intended but the factors causing it are not clearly known, so an investigation is needed to identify the causes.
When it is required to know whether the existing situation is in good condition or requires some improvement.
When a researcher seeks to understand the detailed workings of irrigation in order to draw generalized inferences about the performance of the irrigation scheme.
The performance evaluation of an existing surface irrigation system is key to deciding whether further improvement of the system is required, based on its current performance. Such an evaluation can be conducted using different performance indicators. On a scientific basis, performance indicators should rest on an empirically quantified, statistically tested fundamental model of the relevant part of the irrigation system [2].
In the case of Ethiopia, the Wonji Shoa Sugar Estate is one of the large-scale irrigation systems, producing sugarcane predominantly under furrow irrigation for more than half a century, and its irrigation performance requires evaluation. Evaluating irrigation performance for proper water management at Wonji Shoa is very important because the estate is one of the country's major sugar producers. Accordingly, the performance of the existing furrow irrigation under the Wonji Shoa Sugar Estate was evaluated. The results obtained from the selected fields showed poor performance of the existing system, and improvement options were therefore developed and tested. An observational field experiment was conducted on ten selected fields, covering the commonly used furrow lengths, during three irrigation events. Therefore, the main purpose of this study was to evaluate the performance of existing furrow irrigation practices and to develop improvement options for better irrigation water management under field conditions.
Description of the study area
The Wonji Shoa Sugar Estate (WSSE) is located in the South East Shoa Zone of Oromia Regional State, at a distance of 110 km from Addis Ababa, the capital city of Ethiopia. Geographically, it is situated at 8° 21′–8° 29′ N and 39° 12′–39° 18′ E and altitude of 1223 to 1550 m above MSL (Fig. 1). The area is characterized by gentle and regular topography making it most suitable for irrigation. Sugarcane is grown in the area, mostly as a monoculture. The climate of the area is characterized as semi-arid with the main rainy season during the months of June to September. The rainfall of the area is erratic both in quantity and distribution. The area receives a mean annual rainfall of 831 mm with mean annual maximum and minimum temperature of 27 and 15 °C respectively. The soil of the area varies from sandy loam to heavy clay types.
Location map of the study area
The Estate is the first commercial large-scale irrigation scheme in Ethiopia and was established in 1951 at Wonji by private investors of the Netherlands' Hender Verneering Amsterdam (H.V.A.) Company and the Ethiopian government. When the factory started production in 1954, its initial production was 140 tons/year (WSSF, 2018). The two factories, known as the Wonji and Shoa sugar factories, together had a capacity of 75,000 tons of sugar per year until recently (prior to the completion of the new Wonji Shoa Sugar Factory at the Dodota site, far from the old factories). After serving for more than half a century and becoming obsolete, the Wonji and Shoa factories were closed in 2011 and 2012 respectively. Replacing these pioneer factories, the new and modern factory started production in 2013 with a higher production capacity. Currently, the WSSE sugarcane plantation covers an irrigated area of 12,000 ha, of which 5000 ha is managed by the Estate itself and 7000 ha by out-growers. The irrigation water source is the Awash River. The Wonji Shoa Sugar Estate location map is presented in Fig. 1.
This study design includes experimental and observational types of research to obtain the required data for the evaluation of the existing furrow irrigation under the Wonji-Shoa condition.
Field selection
To collect the required data, ten fields were selected covering the three furrow lengths commonly used in the area (32, 48, and 64 m). The furrow width is 1.45 m for all furrow lengths. The fields were selected purposively, based on their locations in all directions and considering soil type and furrow length. Although all three furrow lengths are common in the study area, the 32-m length is currently the most widely used in all soil types, the 48-m length is rarely used, and the 64-m length has become obsolete and was used only in ratoon cane fields during the study period (2019 to 2020). Considering these conditions, five fields with a 32-m furrow length, three fields with a 48-m length, and two fields with a 64-m length were selected.
Water application methods
Irrigation water was applied to the field by the furrow irrigation system. During application, the water diverted to the field ditches was controlled using gates at the outlets of the secondary canals. Under the existing practice, the inflow rate applied to each furrow was 4.7 l/s. This inflow rate was applied to each field, and three irrigation events were considered for data collection, identically for all furrow lengths (events one, two, and three each covered furrow lengths of 32, 48, and 64 m). In each selected field, five furrows were monitored at a time, so 50 furrows were considered per irrigation event. The data collected from the five furrows were averaged and presented as a single value per furrow length per irrigation event.
Primary and secondary data were collected to evaluate the performance of the existing furrow irrigation practices. Based on the performance of the existing furrow irrigation practices, improvement options were proposed and the performance also evaluated again during the study periods. The primary data was collected from each selected field during three consecutive irrigation events which include furrow geometry, irrigation time, and soil moisture analysis. For furrow geometry, furrow length and furrow width (top, middle, and bottom width) were measured at field conditions using a meter tape. The maximum flow depth was measured using a ruler during each irrigation event. Advance time (time taken from the entrance of irrigation to the furrow to some specified points), cutoff time (time taken from the starting of irrigation to inflow cutoff), and recession time (time taken from the inflow cutoff to all water ponded on the field fully infiltrated) were recorded using a stopwatch. The advance and recession times were recorded at 16-m intervals for all fields and furrow lengths, while the secondary data were irrigation history and cane types.
Soil sample collection and analyses
Soil samples were collected at two soil depths (0–30 and 30–60 cm) of representative fields for the purpose of determining bulk densities, textures, and soil moisture content. The soil sample was analyzed in the laboratory at Wonji Shoa Sugar Research and Development Center. The soil moisture content at field capacity and permanent wilting point of the selected fields were taken from the center because it was done very recently. The soil parameters were analyzed following standard laboratory procedures and manual [4].
Soil samples were taken at 16-m intervals along the furrow for all fields, at the specified sampling depths, and soil moisture was determined gravimetrically. This depth range is believed to cover the effective root depth of sugarcane. Samples were taken from inside the furrow before irrigation and again 2 to 3 days after water application, depending on the soil type. Each sample was taken with a soil core sampler, and the wet weight was weighed immediately using a portable precision balance. The samples were then oven-dried at 105 °C for 24 h, and the dry weight was weighed to calculate the gravimetric moisture content using Eq. 1 [4].
$${\mathrm{W}}_{\uptheta}=\left(\frac{{\mathrm{W}}_{\mathrm{w}}-{W}_{\mathrm{d}}}{{\mathrm{W}}_{\mathrm{d}}}\right)\times \mathrm{As}\times 100$$
where Wθ = gravimetric soil moisture content (%, volume basis), Ww = wet weight of the soil (g), Wd = dry weight of the soil (g), As = apparent specific gravity (−)
The water holding capacity of the soil was analyzed by computing the total available water and readily available water using (Eqs. 2 and 3) [8].
$$\mathrm{TAW}=1000\left({\theta}_{\mathrm{FC}}-{\theta}_{\mathrm{PWP}}\right)\times {D}_i$$
$$\mathrm{RAW}=\rho \times \mathrm{TAW}$$
where TAW = the total available soil water in the root zone (mm), θFC = the soil water content at field capacity (m3/m3), θPWP = the soil water content at wilting point (m3/m3), Di = the rooting depth (m), ρ = soil moisture depletion fraction.
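As a concrete illustration, Eqs. 1–3 can be sketched in code as follows; every numeric input below is a hypothetical example, not a value measured at Wonji Shoa.

```python
# Sketch of the soil-water computations in Eqs. 1-3; all numeric values
# are illustrative assumptions, not measured field data.

def gravimetric_moisture(w_wet, w_dry, apparent_sg):
    """Eq. 1: moisture content on a volume basis (%), via gravimetric method."""
    return (w_wet - w_dry) / w_dry * apparent_sg * 100.0

def total_available_water(theta_fc, theta_pwp, root_depth_m):
    """Eq. 2: TAW (mm) held between field capacity and wilting point."""
    return 1000.0 * (theta_fc - theta_pwp) * root_depth_m

def readily_available_water(taw_mm, depletion_fraction):
    """Eq. 3: RAW (mm), the portion of TAW the crop can extract freely."""
    return depletion_fraction * taw_mm

if __name__ == "__main__":
    w_theta = gravimetric_moisture(w_wet=125.0, w_dry=100.0, apparent_sg=1.5)
    taw = total_available_water(theta_fc=0.33, theta_pwp=0.17, root_depth_m=0.6)
    raw = readily_available_water(taw, depletion_fraction=0.65)
    print(round(w_theta, 1), round(taw, 1), round(raw, 1))  # 37.5 96.0 62.4
```

The depletion fraction of 0.65 is only a placeholder; in practice it depends on the crop and the evaporative demand.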
The required depth of irrigation (Zreq), the depth of water stored in the soil profile of the root zone (Ds), and the total depth of water applied to the field (Da) were calculated using Eqs. 4, 5, and 6 respectively.
$${Z}_{\mathrm{req}}=\sum_{i=1}^n10\left({\theta}_{\mathrm{FC}}-{\theta}_{\mathrm{mi}}\right)\times {AS}_i\times {D}_i$$
where Zreq = amount of water required to fill the root zone to field capacity (mm), θFC= soil moisture content at field capacity (%), θmi = present soil moisture content (%), ASi = apparent specific gravity of the soil (unit less), Di = effective root depth (m), n = number of sampling depth in the root zones.
$${D}_{\mathrm{s}}=\sum_{i=1}^n\frac{\left({M}_{ai}-{M}_{bi}\right)}{100}\times {AS}_i\times {D}_i$$
$${D}_{\mathrm{a}}=\frac{\overline{Q}\times \Delta t}{A}$$
where Mai = Moisture content of ith layer of the soil after irrigation on a dry weight basis (%), Mbi = Moisture content of ith layer of soil before irrigation on dry weight basis (%), \(\overline{Q}\) = average discharge to furrow during the irrigation (m3/s), Δt = duration of the irrigation (s), A = furrow irrigated area (m2)
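The three depths above can be sketched as follows. The layer tuples and the irrigation duration are hypothetical; the 4.7 l/s inflow and the 1.45 m furrow spacing follow the values reported in the text.

```python
# Sketch of Eqs. 4-6; layer data and duration are illustrative assumptions.

def required_depth_mm(layers):
    """Eq. 4: Zreq (mm). Each layer: (theta_fc_pct, theta_now_pct, As, depth_m)."""
    return sum(10.0 * (fc - now) * a_s * d for fc, now, a_s, d in layers)

def stored_depth_mm(layers):
    """Eq. 5: Ds (mm). Each layer: (pct_after, pct_before, As, depth_m);
    layer depth is converted from m to mm."""
    return sum((after - before) / 100.0 * a_s * (d * 1000.0)
               for after, before, a_s, d in layers)

def applied_depth_mm(q_m3s, duration_s, area_m2):
    """Eq. 6: Da = Q * dt / A, converted from m to mm."""
    return q_m3s * duration_s / area_m2 * 1000.0

layers = [(33.0, 28.0, 1.5, 0.3), (35.0, 31.0, 1.6, 0.3)]  # two 0.3-m layers
z_req = required_depth_mm(layers)
# 32-m furrow at 1.45-m spacing, 4.7 l/s inflow, assumed 30-min application:
d_a = applied_depth_mm(q_m3s=0.0047, duration_s=1800.0, area_m2=32.0 * 1.45)
print(round(z_req, 1), round(d_a, 1))  # 41.7 182.3
```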
The depth of water percolating below the root zone is the amount of water infiltrated minus the water extracted by plant roots to meet evapotranspiration demands; that is, deep percolation is the difference between the amount of water applied and the amount needed to satisfy the crop water requirement, i.e., the excess water lost before crop consumption. It is computed using a soil water balance analysis. This depth of water can be determined experimentally by different methods, such as a lysimeter, but for the current study it was computed from the soil moisture analysis (Eq. 7).
$${D}_{\mathrm{p}}={\theta}_{\mathrm{i}}+{I}_{\mathrm{d}}+{R}_{\mathrm{d}}-{\theta}_{\mathrm{FC}}-\mathrm{ET}$$
where Dp = depth of water deep percolated, θi = initial soil moisture content, Id = depth of irrigation applied, Rd = depth of available rainfall, θFC = soil moisture content at field capacity and ET = evapotranspiration.
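The water balance of Eq. 7 reduces to a single sum when every term is expressed as an equivalent depth of water; the numbers below are illustrative only.

```python
# Water-balance sketch of Eq. 7; every quantity is an equivalent depth of
# water (mm) and the example values are hypothetical.

def deep_percolation_mm(theta_i, irr_depth, rain_depth, theta_fc, et):
    """Eq. 7: Dp = theta_i + Id + Rd - theta_FC - ET.
    A positive result is water draining below the root zone."""
    return theta_i + irr_depth + rain_depth - theta_fc - et

dp = deep_percolation_mm(theta_i=150.0, irr_depth=120.0, rain_depth=10.0,
                         theta_fc=230.0, et=35.0)
print(dp)  # 15.0
```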
The soil infiltration function is one of the most important parameters in surface irrigation performance evaluation. Various infiltration functions have been developed, chosen by users according to the available input data and required outputs. The infiltration function is empirically determined to yield the relationship between the infiltrated water and the opportunity time (the time during which water is in contact with the soil). One of the empirical infiltration models is Kostiakov's model (Kostiakov, 1932), derived from data observed under field or laboratory conditions. The power-type Kostiakov function is the most widely accepted description of infiltration characteristics (Eq. 8).
$$\mathrm{Z}={Kt}^a$$
where Z = infiltrated water (m3/m); K, a = empirical parameters; t = intake opportunity time (min)
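Because Eq. 8 is linear in log space (ln Z = ln K + a ln t), the empirical parameters can be fitted by ordinary least squares on the logarithms of observed opportunity times and infiltrated depths. A minimal sketch, with synthetic observations standing in for field data:

```python
import math

def fit_kostiakov(times, depths):
    """Fit Z = K * t**a by linear least squares on ln Z = ln K + a ln t."""
    xs = [math.log(t) for t in times]
    ys = [math.log(z) for z in depths]
    n = len(xs)
    x_bar = sum(xs) / n
    y_bar = sum(ys) / n
    a = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
         / sum((x - x_bar) ** 2 for x in xs))
    k = math.exp(y_bar - a * x_bar)
    return k, a

# Synthetic data generated from K = 0.012, a = 0.45 to show the fit
# recovers the parameters exactly.
t_obs = [5, 10, 20, 40, 80]                    # opportunity times (min)
z_obs = [0.012 * t ** 0.45 for t in t_obs]     # infiltrated water (m3/m)
k, a = fit_kostiakov(t_obs, z_obs)
print(round(k, 4), round(a, 3))  # 0.012 0.45
```

With real advance data, K and a are more commonly obtained by the two-point method or by WinSRFR's event analysis; the regression above is only the simplest option.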
The zero inertia model is based on the understanding that flow velocities encountered in surface irrigation are very small and that changes in velocity with respect to time and space are virtually non-existent. Under most surface irrigation conditions, the inertial terms in the Saint-Venant equations are negligible compared to the force terms (gravity and friction). The zero inertia model is a simplified form of the fully hydrodynamic model without the acceleration and inertia terms, given by Eq. 9 [21].
$$\frac{\delta y}{\delta x}={S}_0-{S}_{\mathrm{f}}$$
where y = depth of flow (m), x = distance along the field length (m), S0 = longitudinal slope of the field (m/m), Sf = slope of the energy grade line (friction slope) (m/m)
Performance evaluation of furrow irrigation
The performance evaluation of irrigation gives an insight into how well the irrigation meets the water requirements and the applied water is distributed uniformly throughout the field. The most common performance evaluation criteria in furrow irrigation are application efficiency, deep percolation ratio, tailwater ratio, distribution uniformity, and water requirement efficiency. But for the performance evaluation of close-ended furrow irrigation conditions, application efficiency, deep percolation, and distribution uniformity are commonly used. In general, to improve irrigation performance, mathematical models of the surface irrigation processes have been developed in the last decades. Developed models consider several variables, including field geometries, slope, hydraulic roughness, furrow discharges, and irrigation time. The interaction among these variables determines the advance times, recession times, and infiltrated water depth, which in turn influence performance indicators such as application efficiency and uniformity of water distribution along the furrow. Those selected parameters were evaluated by measuring the required data from the field and simulated using the WinSRFR model. Those parameters were mathematically determined using Eqs. 10, 11, 12 [13, 14].
$$\mathrm{AE}=\frac{D_{\mathrm{S}}}{D_{\mathrm{a}}}\times 100$$
$$\mathrm{DU}=\frac{D_{\mathrm{min}}}{D_{\mathrm{a}}}\times 100$$
$$\mathrm{DP}=\frac{D_{\mathrm{p}}}{D_{\mathrm{aver}}}\times 100$$
where Ds = depth of soil moisture stored in the root zone (mm), Da = total depth applied to the furrow (mm), Dmin = minimum depth of water infiltrated in the field (mm), Daver. = mean of depths infiltrated over the furrow length (mm).
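The three indicators of Eqs. 10–12 are simple depth ratios; a sketch with illustrative depths (mm), not the measured Wonji Shoa values:

```python
# Performance indicators of Eqs. 10-12; all depths are hypothetical.

def application_efficiency(d_stored, d_applied):
    """Eq. 10: AE (%) - share of applied water stored in the root zone."""
    return d_stored / d_applied * 100.0

def distribution_uniformity(d_min, d_applied):
    """Eq. 11: DU (%) - minimum infiltrated depth vs. applied depth."""
    return d_min / d_applied * 100.0

def deep_percolation_ratio(d_perc, d_average):
    """Eq. 12: DP (%) - percolated depth vs. mean infiltrated depth."""
    return d_perc / d_average * 100.0

ae = application_efficiency(d_stored=60.0, d_applied=100.0)
du = distribution_uniformity(d_min=70.0, d_applied=100.0)
dp = deep_percolation_ratio(d_perc=25.0, d_average=100.0)
print(ae, du, dp)  # 60.0 70.0 25.0
```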
To improve the performance of the furrow irrigation, the three main parameters considered were furrow geometry, cutoff time, and inflow rate. Since the performance evaluation was undertaken on the existing furrows, it was very difficult to change the furrow geometry. Instead, to improve the performance of the furrow irrigation, the cutoff time and inflow rate were varied in turn. The procedure was as follows:
Step one: All parameters were used as they were measured from the field under current irrigation practice to evaluate the performance of furrow irrigation.
Step two: Inflow rate was varied while the other parameters (furrow geometry and cutoff time) were kept constant and the performance of the furrow irrigation was evaluated.
Step three: Cutoff time was varied while the other parameters (furrow geometry and inflow rate) were kept constant and similarly the performance of the furrow irrigation was evaluated.
Step four: Furrow geometry was kept constant and both inflow rate and cutoff time were varied; as in the other steps, the performance of the furrow irrigation was evaluated again.
The optimized performance of furrow irrigation was computed using Eq. 13, and the combination giving the maximum optimized value was used to set the decision variables for improving irrigation performance [13, 14].
$$\kern1.25em OP=\left(\alpha \times Ea\right)+\left(\beta \times DU\right)-\left(\gamma \times DP\right)$$
where the correction factors for the parameters α = β = γ = 0.33
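The weighted objective of Eq. 13 can be used to rank the improvement options from the four-step procedure above. The scenario values below are hypothetical; only the weights (0.33) come from the text.

```python
# Ranking improvement options with Eq. 13; AE/DU/DP values are
# hypothetical examples, weights as given in the text.

ALPHA = BETA = GAMMA = 0.33

def op_score(ae, du, dp):
    """Eq. 13: OP = alpha*AE + beta*DU - gamma*DP (all indicators in %)."""
    return ALPHA * ae + BETA * du - GAMMA * dp

scenarios = {
    "existing practice":            (45.0, 68.0, 40.0),
    "reduced inflow rate":          (62.0, 70.0, 22.0),
    "reduced inflow + cutoff time": (74.0, 71.0, 12.0),
}
best = max(scenarios, key=lambda name: op_score(*scenarios[name]))
for name, (ae, du, dp) in scenarios.items():
    print(f"{name}: OP = {op_score(ae, du, dp):.2f}")
print("best:", best)
```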
Input data for WinSRFR model
The functionality and organization of WinSRFR were defined based on the analytical process typically followed in assessing and improving the hydraulic performance of surface irrigation systems. All data required for the WinSRFR 4.1.3 model were collected from fields with a close-ended furrow irrigation system. The input data required for the WinSRFR model were the required depth of irrigation, furrow geometry, and the advance, cutoff, and recession times of irrigation, all collected from the selected fields. Soil samples were taken from the field and analyzed in the soil laboratory to determine the required depth of irrigation. The furrow geometry was measured at constant intervals along the furrow, and the advance, cutoff, and recession times were recorded during irrigation events. Several efficiency terms can be used to evaluate irrigation system performance; for this study, application efficiency, distribution uniformity, and deep percolation were used. The major input parameters for the simulations are presented in Table 1.
Table 1 Input parameters for the WinSRFR simulation model
Statistical indices used to evaluate the model performance
The goodness of fit between measured and simulated performance indicators was tested using several statistical indices [13, 14].
Coefficient of determination (R2)
The coefficient of determination is used to measure the model's goodness of fit. Here, R² indicates how well the simulated values track the measured values, expressing the proportion of variance in the measured data captured by the model (Eq. 14).
$${R}^2=\frac{{\left[\sum_{i=1}^n\left({M}_i-\overline{M}\right)\left({S}_i-\overline{S}\right)\right]}^2}{\sum_{i=1}^n{\left({M}_i-\overline{M}\right)}^2\times \sum_{i=1}^n{\left({S}_i-\overline{S}\right)}^2}$$
where n = number of observations, Mi = ith measured value, Si = ith simulated value, \(\overline{M}\) = mean of the measured values, and \(\overline{S}\) = mean of the simulated values
Root mean square error
Root mean square error measures the overall deviation between measured and simulated values. Its minimum value is 0, and values closer to 0 indicate better model performance (Eq. 15).
$$\mathrm{RMSE}=\sqrt{\frac{\sum_{i=1}^n{\left({S}_i-{M}_i\right)}^2}{n}}$$
Index of agreement (d)
The index of agreement is another statistical index used to measure model efficiency. The value of d varies from 0 to + 1, and the closer the value is to + 1, the better the agreement between the simulated and measured values (Eq. 16).
$$d=1-\frac{\sum_{i=1}^n{\left({M}_i-{S}_i\right)}^2}{\sum_{i=1}^n{\left(\left|{S}_i-\overline{M}\right|+\left|{M}_i-\overline{M}\right|\right)}^2}$$
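Equations 14–16 can be sketched in a few lines of code. The arrays at the bottom are illustrative values only, not data from the study.

```python
# Goodness-of-fit indices of Eqs. 14-16 (R^2, RMSE, index of agreement d).
import math

def r_squared(m, s):
    """Eq. 14: squared covariance over the product of variances."""
    mb, sb = sum(m) / len(m), sum(s) / len(s)
    num = sum((mi - mb) * (si - sb) for mi, si in zip(m, s)) ** 2
    den = sum((mi - mb) ** 2 for mi in m) * sum((si - sb) ** 2 for si in s)
    return num / den

def rmse(m, s):
    """Eq. 15: root mean square error between simulated and measured values."""
    return math.sqrt(sum((si - mi) ** 2 for mi, si in zip(m, s)) / len(m))

def index_of_agreement(m, s):
    """Eq. 16: Willmott's index of agreement, bounded between 0 and 1."""
    mb = sum(m) / len(m)
    num = sum((mi - si) ** 2 for mi, si in zip(m, s))
    den = sum((abs(si - mb) + abs(mi - mb)) ** 2 for mi, si in zip(m, s))
    return 1 - num / den

# Illustrative values (not data from the study):
measured = [10.0, 14.0, 18.0, 25.0]
simulated = [11.0, 13.5, 19.0, 24.0]
print(r_squared(measured, simulated), rmse(measured, simulated),
      index_of_agreement(measured, simulated))
```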
Selected soil properties
The soil samples taken from each field were analyzed at the Wonji Shoa Sugar Research and Development Center. The average soil bulk density of the fields ranged from 1.49 to 1.60 g/cm3 in the upper depth (0–30 cm) and from 1.59 to 1.70 g/cm3 in the lower depth (30–60 cm), while the recommended soil bulk density at Wonji Shoa ranges from 1.5 to 1.7 g/cm3 over the 0–30- and 30–60-cm depths respectively. The average soil moisture content, determined over the considered depths during each irrigation event, was 40.49 to 42.77% in the upper depth and 41.82 to 45.03% in the lower depth (Table 2).
Table 2 Analysis of selected soil parameters for the selected fields
The selected soil parameters analyzed were within the range obtained in the previous study by the center (Table 2). However, the bulk density in the upper depth was slightly greater than in the previous study, which may be due to soil compaction from agronomic operations and repeated irrigation.
Advance and recession times
For each furrow length considered, the advance and recession times were measured during all irrigation events and then simulated using the WinSRFR model. Whether the time allotted for irrigation was greater or less than the required time for each furrow length was evaluated over the three irrigation events. The measured and simulated irrigation times are presented in Table 3.
Table 3 Measured and simulated irrigation time under existing irrigation practices
There was no variation between the measured advance times of the first and third irrigation events for the 32-m furrow length, but there was significant variation for the 48- and 64-m furrow lengths during all irrigation events. For the simulated advance time, there was significant variation for all furrow lengths during all irrigation events: during the third irrigation event, the variation was high for the 32-m (22.03%) and 48-m (20.31%) furrow lengths and low for the 64-m furrow length (2.12%). A high variation indicates that the time allocated for irrigation was not accurate, while a low variation indicates that it was. For recession time, the simulated values were lower than the measured values for all furrow lengths, which indicates that the applied irrigation water stagnated on the land surface for a long period; the variation was more pronounced for the 64- and 48-m furrow lengths than for 32 m. These results agree with Mazarei et al. [13, 14], who found that the best fit between measured and simulated values was achieved for the advance phase rather than the recession phase, and with Ali and Mohammed [3]. Another study [16] likewise reported low variation between measured and simulated advance and recession times.
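The paper does not state the formula behind the reported variations (e.g., 22.03%). A plausible reading is the relative difference of the simulated from the measured time, sketched below with made-up times, not the study's raw data.

```python
# Assumed formula: relative difference of simulated from measured, in percent.
def percent_variation(measured, simulated):
    return abs(measured - simulated) / measured * 100.0

# Illustrative advance times in minutes (hypothetical values):
print(round(percent_variation(59.0, 46.0), 2))  # 22.03
```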
Performance evaluation of furrow irrigation was based on measurements made on the selected furrows as described in the methodology section, and the WinSRFR model was used to test how well the measured data were fitted. The results are presented in Table 4. The computed and simulated application efficiency, distribution uniformity, and deep percolation were compared across the furrow lengths considered against the standard recommended ranges for furrow irrigation [10]. The overall average measured and simulated application efficiency and deep percolation of the existing furrow irrigation practices were poor, but the distribution uniformity was relatively good for all furrow lengths.
Table 4 Computed and simulated performance indicators for existing furrow irrigation practices
The performance of furrow irrigation under the current irrigation practices was poor. Application efficiency was below the recommended values for almost all irrigation events for the 32- and 64-m furrow lengths, although the 48-m furrow length during the first and third irrigation events and the 64-m furrow length during the third irrigation event were within the recommended range; only the 48-m furrow length was within the recommended range for all irrigation events. However, the distribution uniformity for the 48-m furrow length was lower than for the 32- and 64-m furrow lengths in all irrigation events; in terms of distribution uniformity, the 64-m furrow length performed best, followed by 32 and 48 m respectively. On the other hand, deep percolation exceeded the recommended ranges for the 32- and 64-m furrow lengths, while the 48-m furrow length was within the recommended ranges during all irrigation events (Table 4), indicating that much of the applied irrigation water was lost before crop use.
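The paper reports application efficiency, distribution uniformity, and deep percolation without listing their formulas. The standard surface-irrigation definitions (an assumption here, not taken from the text) can be sketched as:

```python
# Standard (textbook) surface-irrigation performance indicators.
# These definitions are an assumption; the paper does not list its formulas.

def application_efficiency(depth_stored_root_zone, depth_applied):
    """AE: share of applied water stored in the root zone, %."""
    return depth_stored_root_zone / depth_applied * 100.0

def deep_percolation_ratio(depth_percolated, depth_applied):
    """DP: share of applied water lost below the root zone, %."""
    return depth_percolated / depth_applied * 100.0

def distribution_uniformity(infiltrated_depths):
    """DU: mean of the lowest quarter of infiltrated depths over the overall mean, %."""
    d = sorted(infiltrated_depths)
    low_quarter = d[: max(1, len(d) // 4)]
    return (sum(low_quarter) / len(low_quarter)) / (sum(d) / len(d)) * 100.0

# Illustrative infiltrated depths (mm) along a furrow (hypothetical values):
depths = [52.0, 55.0, 58.0, 60.0, 62.0, 64.0, 66.0, 70.0]
print(round(distribution_uniformity(depths), 1))  # 87.9
```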
To improve the performance of furrow irrigation, the main options are decreasing the inflow rate while increasing the cutoff time, or vice versa. As the first alternative, the applied inflow rate was reduced by 25% from the rate used in the existing irrigation practices (28 l/s for six sets of furrows at one time). After this reduction, furrow irrigation performance was computed for all furrow lengths during all irrigation events, and the computed and simulated performance indicators improved (Table 5).
Table 5 Computed and simulated performance indicators by changing inflow rate
As can be seen from the results, decreasing the inflow rate from 4.7 l/s (current irrigation practice) to 3.5 l/s improved furrow irrigation performance. Of the performance indicators considered, application efficiency improved the most when the inflow rate was changed.
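The reported rates are mutually consistent: 28 l/s shared over six furrow sets is about 4.7 l/s per furrow, and a 25% reduction rounds to the 3.5 l/s used. A quick arithmetic check:

```python
# Consistency check of the reported inflow rates (values from the text).
furrow_inflow = 28.0 / 6                # l/s per furrow set under existing practice
reduced = furrow_inflow * (1 - 0.25)    # 25% reduction
print(round(furrow_inflow, 1), round(reduced, 1))  # 4.7 3.5
```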
In the second alternative, the irrigation cutoff time was changed while keeping the other variables constant, the same as in the existing practices. In this scenario, the cutoff time was reduced by 2, 3, and 6 min for the 32-, 48-, and 64-m furrow lengths respectively from the existing irrigation practices. The performance of the furrow irrigation improved compared to the existing performance: as can be seen from Table 6, irrigation efficiency improved for all furrow lengths considered by changing only the cutoff time while keeping the other parameters constant.
Table 6 Computed and simulated performance indicators by changing cutoff time
Finally, the performance of furrow irrigation was evaluated by varying both the inflow rate and the cutoff time. In furrow irrigation, performance can be improved by reducing the discharge rate and increasing the cutoff time or vice versa; given irrigation water resource concerns, reducing the discharge rate and increasing the cutoff time is preferable. In the current study, the discharge rate was reduced from 4.7 to 3.5 l/s and the cutoff time was increased by 2, 3, and 6 min for the 32-, 48-, and 64-m furrow lengths respectively. By combining these two decision variables, the performance of furrow irrigation improved for all furrow lengths during the three irrigation events. The computed maximum and minimum optimized values were 47.69 and 38.21%, recorded for the 64- and 32-m furrow lengths during the first and third irrigation events respectively. The overall computed optimized values were 39.82, 44.81, and 46.26% for the 32-, 48-, and 64-m furrow lengths respectively (Table 7).
Table 7 Computed and simulated performance evaluation by changing discharge and cutoff time
Discussion of the results
Application efficiency
For well-designed and managed furrow irrigation, the recommended range of application efficiency is 50–70%, though it can be higher depending on soil type and management practices (Griffiths, 2007). The three furrow lengths were evaluated over the three irrigation events under the existing irrigation practices, which showed poor performance; as improvement options, the decision variables were changed one at a time and then in combination. The maximum and minimum application efficiencies obtained under the existing practices were 58 and 41.5%, both for the 64-m furrow length, during the first and third irrigation events. Comparing the average computed application efficiencies of the three furrow lengths over all three irrigation events under the existing practices, the 48- and 64-m furrow lengths performed better, with application efficiencies of 51.9 and 49.5% respectively; similarly, in the WinSRFR simulations, the same furrow lengths performed better, with 48.5 and 48.7% respectively. Based on the average computed values, only the 48-m furrow length met the recommended application efficiency for furrow irrigation, indicating that the existing furrow irrigation performed poorly in terms of application efficiency.
When the inflow rate was changed, the computed maximum and minimum application efficiencies were 68.3 and 56.0%, for the 48- and 32-m furrow lengths during the third and second irrigation events. Thus, changing only the inflow rate improved the application efficiency of furrow irrigation. Similarly, the simulated maximum and minimum application efficiencies were 69.3 and 44.0%, for the 48- and 32-m furrow lengths during the third and first irrigation events respectively. This indicates that when the inflow rate was reduced, the greatest improvement in application efficiency was obtained for the 48-m furrow length.
When the cutoff time was changed while keeping the other variables constant, the computed maximum and minimum application efficiencies were 71.0 and 56%, for the 48- and 64-m furrow lengths during the third and first irrigation events. The simulated maximum and minimum application efficiencies were 71.5 and 56%, for the 48- and 64-m furrow lengths during the second and first irrigation events respectively. By changing this decision variable, the application efficiency of all furrow lengths during all irrigation events fell within the recommended range.
When both the discharge and the cutoff time were changed while keeping the other variables constant, the computed maximum and minimum application efficiencies were 77.7 and 63.4%, for the 48- and 32-m furrow lengths during the third irrigation event. The simulated maximum and minimum application efficiencies were 78.5 and 62.2%, for the 64- and 32-m furrow lengths during the third irrigation event respectively. By reducing the inflow rate and increasing the cutoff time, the application efficiency of furrow irrigation improved by 25.4% over the existing irrigation practices. The application efficiencies under all scenarios, for all furrow lengths during the three irrigation events, are compared in Fig. 2.
Fig. 2
Application efficiency performance of furrow irrigation
The current study's findings are similar to those of Shumye and Singh [17], who found that the application efficiency of furrow irrigation depends strongly on the decision variables, especially the applied discharge relative to the cutoff time, and reported values within the recommended ranges. Likewise, Mamo and Wolde [12] found that application efficiency changed significantly with the applied discharge and less with the cutoff time. Taddesse et al. [18] reported application efficiencies obtained by changing the decision variables that were higher than both the current findings and other related findings. From field observation during the current experiment, the main reasons for the low application efficiency were mismanagement of irrigation water, including loss of water after entering the field ditches due to lateral flow into side furrows, the timing of irrigation application, and the lack of soil-specific management, since all soil types were irrigated at the same application rate.
Distribution uniformity
The achievable distribution uniformity of furrow irrigation is rated poor below 70%, good from 70 to 85%, very good from 85 to 90%, and excellent above 90% [10]. As those authors conclude, distribution uniformity is poor, with the applied irrigation water unevenly distributed, if it is ≤ 60%, and good, with relatively uniform application over the entire furrow length, if it is ≥ 80%. On this basis, all furrow lengths were within the acceptable range during all irrigation events in both the computed and simulated values. The highest and lowest computed distribution uniformities were 91 and 79.3%, for the 64- and 48-m furrow lengths during the second and first irrigation events. Comparing the average distribution uniformity of the three furrow lengths over all irrigation events, the best was obtained for the 64-m furrow length (91.0%) and the lowest for the 48-m furrow length (79.3%), indicating that the applied irrigation water was distributed fairly uniformly along the furrow length. When the computed values were simulated using the WinSRFR model, the maximum and minimum distribution uniformities were 91.0 and 79.3%, for the 32- and 64-m furrow lengths during the first and second irrigation events.
Unlike application efficiency, distribution uniformity did not improve after the decision variables were changed. From the WinSRFR simulations, the maximum and minimum distribution uniformities were 82.2 and 62.5%, for the 32- and 64-m furrow lengths during the first and second irrigation events respectively. In general, changing the decision variables did not significantly improve distribution uniformity, and after the data were simulated with the WinSRFR model it decreased significantly for almost all furrow lengths and irrigation events. The distribution uniformity of furrow irrigation for all furrow lengths over the three irrigation events is presented in Fig. 3.
Fig. 3
Distribution uniformity performance of furrow irrigation
The study by [16] indicated that the distribution uniformity of furrow irrigation improved when the inflow rate and cutoff time were changed, but, as in the current results, the improvement was not as significant as that of application efficiency. Mazarei et al. [13, 14] found that the minimum distribution uniformity occurred with low applied irrigation volumes on longer furrows and that distribution uniformity increased with successive irrigation events, whereas the current results showed no such linear increase across irrigation events.
Deep percolation
For properly designed and managed furrow irrigation, deep percolation values range from 30 to 50%; higher values are not recommended because more of the applied irrigation water is lost (Griffiths, 2007). Under the existing irrigation practices, the maximum and minimum computed deep percolation values were 58.5 and 46.0%, for the 64- and 48-m furrow lengths during the third irrigation event. The simulated maximum and minimum deep percolation values were 58.5 and 44.0%, for the 64- and 48-m furrow lengths during the third and first irrigation events.
As with application efficiency, deep percolation improved significantly as the decision variables were changed. When the applied inflow rate was reduced, the computed maximum and minimum deep percolation values were 44 and 31.7%, for the 64- and 48-m furrow lengths during the third irrigation event; the simulated values were 58 and 30.7%, for the 32- and 48-m furrow lengths during the first and third irrigation events respectively.
When the cutoff time was changed while keeping the other variables constant, the computed maximum and minimum deep percolation values were 44.0 and 29.0%, for the 64- and 48-m furrow lengths during the first and third irrigation events; the simulated values were 44.0 and 28.5%, for the 64-m furrow length during the first and second irrigation events respectively. When both the discharge and the cutoff time were changed while keeping the other variables constant, the computed maximum and minimum deep percolation values were 36.6 and 22.3%, for the 32- and 48-m furrow lengths during the third irrigation event; the simulated values were 37.8 and 21.5%, for the 32- and 64-m furrow lengths during the second irrigation event respectively. The deep percolation of furrow irrigation for all furrow lengths over the three irrigation events is presented in Fig. 4.
Fig. 4
Deep percolation performance of furrow irrigation
Mazarei et al. [13, 14] found that deep percolation improved when the decision variables were changed and that the values fell within the recommended range for furrow irrigation. In the current study, deep percolation under the existing irrigation practices was outside the recommended ranges, but, consistent with those findings, it also improved once the decision variables were changed. Mamo and Wolde [12] likewise found deep percolation within the acceptable range for all furrow lengths considered in a sugarcane field; similarly, the current computed deep percolation values were within the acceptable range for all furrow lengths after the decision variables were changed. Generally, this study shows that under the existing irrigation practices in the study area there was a high loss of applied irrigation water through deep percolation for all furrow lengths.
Statistical analysis of advance and recession times
Advance and recession times were measured and simulated for different furrow lengths (32, 48, and 64 m) at a uniform furrow slope (0.05%) to determine whether WinSRFR can be used as a prediction tool for furrow irrigation performance evaluation under the Wonji Shoa climatic conditions.
Based on the index values obtained for advance time, in terms of R2 the measured values fit better than the model for the 32-m furrow length in all irrigation events, but for the 48- and 64-m furrow lengths the model-predicted values were better in all irrigation events. In terms of RMSE, the model-predicted values were better for all furrow lengths during all irrigation events (Table 8). In general, across all statistical indices considered for advance time, the model prediction was better than the measured values, indicating that the time allocated for irrigation was not as accurate as required for almost all furrow lengths during all irrigation events. From these results it can be concluded that, to obtain good furrow irrigation performance, the applied discharge and cutoff time should be adjusted from the existing situation, reducing one decision variable while increasing the other, for all furrow lengths. To attain good performance under the current study area conditions, the applied discharge should be reduced from 4.7 l/s (the currently applied inflow rate) to 3.5 l/s, and the cutoff time should be increased from 9 to 11 min for the 32-m furrow length, from 15 to 18 min for the 48-m furrow length, and from 25 to 31 min for the 64-m furrow length. Given the emerging problem of irrigation water shortage, it is better to increase the cutoff time than to increase the inflow rate to obtain good irrigation performance.
Table 8 Comparison of measured and simulated advance and recession times of the existing situation
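The recommended settings scattered through the paragraph above can be collected in one place. The inflow and cutoff values are taken from the text; the percentage increases are derived arithmetic.

```python
# Recommended decision-variable changes (values as reported in the study).
inflow_lps = {"existing": 4.7, "recommended": 3.5}
# furrow length (m): (existing cutoff, recommended cutoff), in minutes
cutoff_min = {32: (9, 11), 48: (15, 18), 64: (25, 31)}

for length, (old, new) in cutoff_min.items():
    pct = (new - old) / old * 100.0
    print(f"{length} m furrow: cutoff {old} -> {new} min (+{pct:.0f}%)")
```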
Statistical analysis of furrow irrigation performance
The performance of the existing furrow irrigation was evaluated using selected statistical indices, which tested whether the measured performance was in good agreement with the simulated values. To obtain improved furrow irrigation performance, the decision variables (discharge and cutoff time) were changed one at a time from the existing irrigation practices. The performance computed from the data measured during the three irrigation events for all furrow lengths considered (32, 48, and 64 m) was simulated using the WinSRFR model.
There was poor agreement between the measured and simulated values for almost all irrigation events for the 32- and 64-m furrow lengths, but relatively better agreement for the 48-m furrow length, in terms of all statistical indices (Table 9). For all the furrow irrigation performance indicators considered, the model prediction was better than the computed values for all furrow lengths in all irrigation events. This indicates that much of the applied irrigation water was lost before crop use and that the applied water was not uniformly distributed along the furrow lengths in any irrigation event; most of the loss was due to deep percolation, since the furrows were close-ended. After the decision variables were changed, however, the performance of furrow irrigation improved significantly, and all the statistical indices used to evaluate the goodness of fit between measured and simulated values showed good agreement for all furrow lengths during all irrigation events (Table 10).
Table 9 Statistical indices of existing furrow irrigation performances
Table 10 Statistical indices of improved furrow irrigation performances
Dewedar et al. [5], using indices to evaluate furrow irrigation performance at different furrow lengths, found that the model predictions were better than the computed values, similar to the current study under the existing irrigation practices. Taddesse et al. [18] likewise found that the model-predicted furrow irrigation performance was better than the computed values, i.e., the goodness of fit between computed and predicted values was poor, as in the current study under the existing situation.
Performance evaluation of furrow irrigation in the Wonji Shoa sugarcane plantation is important, since it has long been the dominant irrigation system for sugarcane production there. The performance indicators considered in this study were application efficiency, distribution uniformity, and deep percolation. To improve the performance of furrow irrigation in the sugarcane field, the decision variables that most influence efficiency (inflow rate and cutoff time) were changed, one at a time and then together, relative to the existing irrigation practices. Reducing the inflow rate while keeping the other decision variables constant improved furrow irrigation performance; increasing the cutoff time by 2, 3, and 6 min for the 32-, 48-, and 64-m furrow lengths respectively, while keeping the other variables constant, again improved performance significantly. Combining the inflow rate and cutoff time changes gave better performance still compared to the existing irrigation. Of the performance indicators considered, application efficiency and deep percolation improved significantly with the changed inflow rate and cutoff time for all furrow lengths during all irrigation events, while distribution uniformity did not change significantly from the existing irrigation practices.
The performance of furrow irrigation was optimized by changing the discharge and cutoff time for all furrow lengths during all irrigation events. The results show that an inflow rate of 3.5 l/s gave better furrow irrigation performance than the existing 4.7 l/s, given the cutoff time change for each furrow length. The measured data were simulated using the WinSRFR model for each decision variable, and the same altered decision variables similarly gave the best improvements for all furrow lengths. Therefore, it can be concluded that, to improve irrigation performance in the current study area, the inflow rate should be reduced while slightly increasing the cutoff time for all furrow lengths. The 48-m furrow length showed better improvement than 32 and 64 m. These findings can be applied in situations similar to the current study area. Finally, it is strongly recommended to reconsider and adjust the inflow rate and irrigation cutoff time to improve irrigation performance and reduce the loss of irrigation water.
All the raw data on which the analysis in the manuscript is based are available from the corresponding author, who can be contacted at any time with a reasonable request.
AE:
Application efficiency
d:
Index of agreement
DP:
Deep percolation
MARC:
Melkassa Agricultural Research Center
OP:
Optimization performance
R 2 :
Coefficient of determination
RMSE:
Root mean square error
WSSE:
Wonji Shoa Sugar Estate
Akbar G, Ahmad MM, Ghafoor A, Khan M, Islam Z (2016) Irrigation efficiencies potential under surface irrigated. J Engineering Appl Sci 35(2):15–24
Akbar G, Raine S, McHugh AD, Hamilton G, Hussain Q (2017) Strategies to improve the irrigation efficiency of raised beds on small farms. Sarhad J Agriculture 33(4):615–623. https://doi.org/10.17582/journal.sja/2017/33.4.615.623
Ali OAM, Mohammed ASH (2015) Performance evaluation of gated pipes technique for improving surface irrigation efficiency in maize hybrids. Agricultural Sciences 06(05):550–570. https://doi.org/10.4236/as.2015.65055
Bashour, I. I., and Sayegh, A. H. (2007). Methods of Analysis for soils of arid and semi-arid regions. Food and Agriculture Organisation of the United Nations, 128.
Dewedar OM, Mehanna HM, El-shafie AF (2019) Validation of WinSRFR for some hydraulic parameters of furrow irrigation in Egypt. 19(1):2108–2115
Ebrahimian H, Liaghat A (2011) Field evaluation of various mathematical models for furrow and border irrigation systems. Soil Water Res 6(2):91–101 https://doi.org/10.17221/34/2010-swr
El-Hazek A (2016) Challenges for optimum design of surface irrigation systems. Journal of Scientific Research and Reports 11(6):1–9. https://doi.org/10.9734/jsrr/2016/27504
FAO (Food and Agricultural Organization of the United Nations) (2020) Soil testing methods manual. https://doi.org/10.4060/ca2796en
Haile GG, Kassa AK (2015) Irrigation in Ethiopia, a review. J Environment Earth Sci 3(10):264–269
Irmak S, Odhiambo LO, Kranz WL, Eisenhauer DE (2011) Irrigation efficiency, uniformity and crop water use efficiency. Biological Syst Engineering:1–10
Koech R, Langat P (2018) Improving irrigation water use efficiency: a review of advances, challenges and opportunities in the Australian context. Water (Switzerland) 10(12). https://doi.org/10.3390/w10121771
Mamo Y, Wolde Z (2015) Evaluation of water management in irrigated sugarcane production : case study of Wondogenet, Snnpr, description of study area. Global Adv Res J Phys Appl Sci 4(1):57–63
Mazarei R, Mohammadi AS, Naseri AA, Ebrahimian H, Izadpanah Z (2020a) Optimization of furrow irrigation performance of sugarcane fields based on inflow and geometric parameters using WinSRFR in Southwest of Iran. Agricultural Water Manag 228(November):105899. https://doi.org/10.1016/j.agwat.2019.105899
Mazarei R, Soltani A, Ali A, Ebrahimian H (2020b) Optimization of furrow irrigation performance of sugarcane fields based on inflow and geometric parameters using WinSRFR in Southwest of Iran. Agricultural Water Management 228(November 2019):105899 https://doi.org/10.1016/j.agwat.2019.105899
Nie WB, Fei LJ, Ma XY (2014) Applied closed-end furrow irrigation optimized design based on field and simulated advance data. J Agricultural Sci Technology 16(2):395–408
Pascual-Seva N, Bautista AS, Lopez-Galarza S, Maroto JV, Pascual B (2013) Performance analysis and optimization of furrow-rrigated chufa crops in Valencia (Spain). Spanish J Agricultural Res 11(1):268–278. https://doi.org/10.5424/sjar/2013111-3384
Shumye A, Singh IP (2018) Evaluation of canal water conveyance and on-farm water application for a small-scale irrigation scheme in Ethiopia. Int J Water Resources Environmental Engineering 10(8):100–110. https://doi.org/10.5897/IJWREE2018.0800
Taddesse F, Bekele M, Tadele D (2019) Evaluation of hydraulic performance: a case study of Etana small-scale irrigation scheme, Wolaita zone, Ethiopia. International Journal of Hydrology 3(5):369–374. https://doi.org/10.15406/ijh.2019.03.00202
Tesfaye TY, Kannan N, Tilahun H (2016) Effect of furrow length and flow rate on irrigation performances and yield of maize. Int J Engineering Res Technology (IJERT) 5(04):602–607. https://doi.org/10.17577/ijertv5is040846
Valipour M (2012) Comparison of surface irrigation simulation models: full hydrodynamic, zero inertia, kinematic wave. J Agricultural Sci 4(12). https://doi.org/10.5539/jas.v4n12p68
Zayton AM, Eng A, Inst R (2015) Predicting of infiltration parameters under different inflow rates. Egypt J Agric Res 93(1):163–184
We are grateful to Adama Science and Technology University and Ambo University for the financial support for this research. We would like to thank the Melkassa Agricultural Research Center (MARC), especially the irrigation and drainage department, for providing some of the materials required for data collection during this research. We also thank Mr. Demissie Tsegawu and Mr. Mengistu Bossie for providing transport facilities during data collection. Finally, we thank Sugar Corporation Research and Development at the Wonji Center for their collaboration during the laboratory analysis throughout the research.
No funding was obtained for the research conducted.
Department of Water Resource Engineering, Adama Science and Technology University, Adama, Ethiopia
Belay Yadeta & Mekonen Ayana
Arizona State University, Tempe, USA
Muluneh Yitayew
Ethiopian Institute of Agricultural Research, Melkassa, Ethiopia
Tilahun Hordofa
Belay Yadeta
Mekonen Ayana
All authors played a vital role in completing this manuscript. The corresponding author, BY, developed the research idea, designed the research method, collected all required data on furrow geometry, soil properties, and advance, cutoff, and recession times, and analyzed the collected data. After all data were collected and arranged, the required soil parameters were analyzed, the performance evaluation of furrow irrigation was computed and simulated using the WinSRFR model, and finally the manuscript was organized and written. MA evaluated the overall work of the research, starting from the development of the research idea, and critically commented on and edited the manuscript, improving its scientific write-up. MY and TH contributed consistent and inspiring guidance and valuable suggestions on the corresponding author's manuscript. Finally, all authors have read and approved the manuscript as correct and up to the standard for publication.
Correspondence to Belay Yadeta.
The authors declare that they have no conflict of interest regarding the publication of this article.
Yadeta, B., Ayana, M., Yitayew, M. et al. Performance evaluation of furrow irrigation water management practice under Wonji Shoa Sugar Estate condition, in Central Ethiopia. J. Eng. Appl. Sci. 69, 21 (2022). https://doi.org/10.1186/s44147-022-00071-x
Cutoff time
Furrow irrigation
WinSRFR
Limits of Vector-Valued Functions
Definition: If $\vec{r}(t) = (x(t), y(t), z(t))$ is a vector-valued function, then $\lim_{t \to a} \vec{r}(t) = \left ( \lim_{t \to a} x(t), \lim_{t \to a} y(t), \lim_{t \to a} z(t) \right )$ provided that the limits of the components exist.
Limits of vector-valued functions in $\mathbb{R}^n$ are defined similarly, as the limit of each component.
Let's look at some examples of evaluating limits of vector-valued functions. Consider the vector-valued function $\vec{r}(t) = (t^2 - 1, t + 1, e^t)$ and suppose that we wanted to compute $\lim_{t \to 2} \vec{r}(t)$. To compute this limit, all we need to do is compute the limits of the components.
\begin{align} \quad \lim_{t \to 2} \vec{r}(t) = \left ( \lim_{t \to 2} t^2 - 1, \lim_{t \to 2} t + 1, \lim_{t \to 2} e^t \right ) = (3, 3, e^2) \end{align}
For another example, consider the vector-valued function $\vec{r}(t) = \left ( \frac{e^t - 1}{t}, \frac{t - 1}{t+1}, t^2 + 3 \right )$ and suppose that we wanted to compute $\lim_{t \to 0} \vec{r}(t)$. To compute this limit, we will compute all of the limits of the components again, however, this time the limits are a little trickier to compute. Fortunately, we have already learned about various rules to evaluate limits.
\begin{align} \quad \lim_{t \to 0} \vec{r}(t) = \left ( \lim_{t \to 0} \frac{e^t - 1}{t}, \lim_{t \to 0} \frac{t - 1}{t + 1}, \lim_{t \to 0} t^2 + 3 \right ) \end{align}
For $\lim_{t \to 0} \frac{e^t - 1}{t}$ we will use L'Hospital's Rule, and so $\lim_{t \to 0} \frac{e^t - 1}{t} \overset{H}{=} \lim_{t \to 0} \frac{e^t}{1} = 1$.
For $\lim_{t \to 0} \frac{t - 1}{t+1}$, we can use direct substitution and so $\lim_{t \to 0} \frac{t-1}{t+1} = -1$.
Now $\lim_{t \to 0} t^2 + 3$ is also easy to compute by direct substitution and so $\lim_{t \to 0} t^2 + 3 = 3$.
Thus we have that $\lim_{t \to 0} \vec{r}(t) = (1, -1, 3)$.
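The component-wise limit can also be sanity-checked numerically by evaluating $\vec{r}(t)$ at values of $t$ approaching $0$ (a quick check, not a proof):

```python
import math

def r(t):
    """r(t) = ((e^t - 1)/t, (t - 1)/(t + 1), t^2 + 3), defined for t != 0."""
    return ((math.exp(t) - 1) / t, (t - 1) / (t + 1), t * t + 3)

# As t -> 0, each component should approach the corresponding
# component of the limit (1, -1, 3).
for t in (0.1, 0.01, 0.001):
    print(r(t))
```

Each printed triple moves closer to $(1, -1, 3)$ as $t$ shrinks, in agreement with the limit computed above.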
The following theorem gives a formal characterization of what it means for a vector-valued function $\vec{r}(t)$ to have limit $\vec{b}$ at $t = a$, analogous to the $\epsilon$-$\delta$ definition of limits of real-valued functions.
Theorem 1: Let $\vec{r}(t) = (x(t), y(t), z(t))$ be a vector-valued function and let $\vec{b} = (b_1, b_2, b_3) \in \mathbb{R}^3$. Then $\lim_{t \to a} \vec{r}(t) = \vec{b}$ if and only if $\forall \epsilon > 0$ $\exists \delta > 0$ such that if $0 < \mid t - a \mid < \delta$ then $\| \vec{r}(t) - \vec{b} \| < \epsilon$.
Proof: $\Rightarrow$ Suppose that $\lim_{t \to a} \vec{r}(t) = \vec{b}$. Then we have that:
\begin{align} \left ( \lim_{t \to a} x(t), \lim_{t \to a} y(t), \lim_{t \to a} z(t) \right ) = (b_1, b_2, b_3) \end{align}
Now recall that two vectors are equal if and only if their components are equal, and so the equation above implies that $\lim_{t \to a} x(t) = b_1$, $\lim_{t \to a} y(t) = b_2$, and $\lim_{t \to a} z(t) = b_3$. Now notice that these three limits are limits of real-valued functions.
Since $\lim_{t \to a} x(t) = b_1$ then $\forall \epsilon > 0$ $\exists \delta_1 > 0$ such that if $0 < \mid t - a \mid < \delta_1$ then $\mid x(t) - b_1 \mid < \frac{\epsilon}{3}$.
Since $\lim_{t \to a} y(t) = b_2$ then $\forall \epsilon > 0$ $\exists \delta_2 > 0$ such that if $0 < \mid t - a \mid < \delta_2$ then $\mid y(t) - b_2 \mid < \frac{\epsilon}{3}$.
Since $\lim_{t \to a} z(t) = b_3$ then $\forall \epsilon > 0$ $\exists \delta_3 > 0$ such that if $0 < \mid t - a \mid < \delta_3$ then $\mid z(t) - b_3 \mid < \frac{\epsilon}{3}$.
Let $\delta = \mathrm{min} \{ \delta_1, \delta_2, \delta_3 \}$. Then if $0 < \mid t - a \mid < \delta$ we have that:
\begin{align} \quad \quad \| \vec{r}(t) - \vec{b} \| = \| (x(t) - b_1, y(t) - b_2, z(t) - b_3) \| = \sqrt{(x(t) - b_1)^2 + (y(t) - b_2)^2 + (z(t) - b_3)^2} \\ \quad \quad \leq \sqrt{(x(t) - b_1)^2} + \sqrt{(y(t) - b_2)^2} + \sqrt{(z(t) - b_3)^2} = \mid x(t) - b_1 \mid + \mid y(t) - b_2 \mid + \mid z(t) - b_3 \mid < \frac{\epsilon}{3} + \frac{\epsilon}{3} + \frac{\epsilon}{3} = \epsilon \end{align}
$\Leftarrow$ Suppose that $\forall \epsilon > 0$ $\exists \delta > 0$ such that if $0 < \mid t - a \mid < \delta$ then $\| \vec{r}(t) - \vec{b} \| < \epsilon$. Therefore we have that $\| \vec{r}(t) - \vec{b} \| = \| (x(t) - b_1, y(t) - b_2, z(t) - b_3) \| < \epsilon$, which implies that:
\begin{align} \sqrt{(x(t) - b_1)^2 + (y(t) - b_2)^2 + (z(t) - b_3)^2} < \epsilon \\ (x(t) - b_1)^2 + (y(t) - b_2)^2 + (z(t) - b_3)^2 < \epsilon^2 \\ \end{align}
Now since all terms of the left-hand side of this inequality are nonnegative, we must have that for $0 < \mid t - a \mid < \delta$ then $(x(t) - b_1)^2 < \epsilon^2$, $(y(t) - b_2)^2 < \epsilon^2$, and $(z(t) - b_3)^2 < \epsilon^2$, and so $\mid x(t) - b_1 \mid < \epsilon$, $\mid y(t) - b_2 \mid < \epsilon$ and $\mid z(t) - b_3 \mid < \epsilon$. Therefore by the definition of real-valued function limits we have that $\lim_{t \to a} x(t) = b_1$, $\lim_{t \to a} y(t) = b_2$, and $\lim_{t \to a} z(t) = b_3$.
Thus $\lim_{t \to a} \vec{r}(t) = \left ( \lim_{t \to a} x(t), \lim_{t \to a} y(t), \lim_{t \to a} z(t) \right ) = (b_1, b_2, b_3) = \vec{b}$. $\blacksquare$
Second order estimates for complex Hessian equations on Hermitian manifolds
Weisong Dong 1 and Chang Li 2,*
School of Mathematics, Tianjin University, Tianjin 300354, China
Hua Loo-Keng Center for Mathematical Sciences, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China
* Corresponding author: Chang Li
Received April 2020 Revised September 2020 Published June 2021 Early access November 2020
We derive second order estimates for $ \chi $-plurisubharmonic solutions of complex Hessian equations with right hand side depending on the gradient on compact Hermitian manifolds.
Keywords: Complex Hessian equations, Second order estimates, Hermitian manifolds.
Mathematics Subject Classification: Primary: 35J15, 53C55; Secondary: 58J05, 35B45.
Citation: Weisong Dong, Chang Li. Second order estimates for complex Hessian equations on Hermitian manifolds. Discrete & Continuous Dynamical Systems, 2021, 41 (6) : 2619-2633. doi: 10.3934/dcds.2020377
August 2019, 24(8): 4099-4116. doi: 10.3934/dcdsb.2019052
Stabilisation by delay feedback control for highly nonlinear hybrid stochastic differential equations
Zhenyu Lu 1, Junhao Hu 2,* and Xuerong Mao 3
School of Electronic and Information Engineering, Nanjing University of Information Science and Technology, Nanjing, Jiangsu 210044, China
School of Mathematics and Statistics, South-Central University for Nationalities, Wuhan, Hubei 430074, China
Department of Mathematics and Statistics, University of Strathclyde, Glasgow G1 1XH, U.K
* Corresponding author: J. Hu
Received April 2018 Revised August 2018 Published August 2019 Early access February 2019
Fund Project: The authors would like to thank the Royal Society (WM160014, Royal Society Wolfson Research Merit Award), the Royal Society and the Newton Fund (NA160317, Royal Society-Newton Advanced Fellowship), the Royal Society of Edinburgh (61294), the EPSRC (EP/K503174/1), the Natural Science Foundation of China (61773220, 61473334, 61876192, 61374085), the Ministry of Education (MOE) of China (MS2014DHDX020) for their financial support.
Figure(2)
Given an unstable hybrid stochastic differential equation (SDE, also known as an SDE with Markovian switching), can we design a delay feedback control to make the controlled hybrid SDE asymptotically stable? The paper [14] by Mao et al. was the first to study stabilisation by delay feedback controls for hybrid SDEs, though stabilisation by non-delay feedback controls had been well studied. A critical condition imposed in [14] is that both the drift and diffusion coefficients of the given hybrid SDE need to satisfy the linear growth condition. However, many hybrid SDE models in the real world do not fulfill this condition (namely, they are highly nonlinear), and hence there is a need to develop a new theory for these highly nonlinear SDE models. The aim of this paper is to design delay feedback controls in order to stabilise a class of highly nonlinear hybrid SDEs whose coefficients satisfy the polynomial growth condition.
Keywords: Brownian motion, Markov chain, asymptotic stability, Lypunov functional.
Mathematics Subject Classification: Primary: 60H10, 60J10; Secondary: 93D15.
Citation: Zhenyu Lu, Junhao Hu, Xuerong Mao. Stabilisation by delay feedback control for highly nonlinear hybrid stochastic differential equations. Discrete & Continuous Dynamical Systems - B, 2019, 24 (8) : 4099-4116. doi: 10.3934/dcdsb.2019052
[1] A. Ahlborn and U. Parlitz, Stabilizing unstable steady states using multiple delay feedback control, Physical Review Letters, 93 (2004), 264101.
[2] A. Bahar and X. Mao, Stochastic delay population dynamics, Journal of International Applied Mathematics, 11 (2004), 377-400.
[3] J. Cao, H. X. Li and D. W. C. Ho, Synchronization criteria of Lur'e systems with time-delay feedback control, Chaos, Solitons and Fractals, 23 (2005), 1285-1298. doi: 10.1016/S0960-0779(04)00380-7.
[4] L. Hu, X. Mao and Y. Shen, Stability and boundedness of nonlinear hybrid stochastic differential delay equations, Control Letters, 62 (2013), 178-187. doi: 10.1016/j.sysconle.2012.11.009.
[5] G. S. Ladde and V. Lakshmikantham, Random Differential Inequalities, Academic Press, 1980.
[6] Y. Ji and H. J. Chizeck, Controllability, stabilizability and continuous-time Markovian jump linear quadratic control, IEEE Transactions on Automatic Control, 35 (1990), 777-788. doi: 10.1109/9.57016.
[7] V. B. Kolmanovskii and V. R. Nosov, Stability of Functional Differential Equations, Academic Press, 1986.
[8] A. L. Lewis, Option Valuation under Stochastic Volatility: with Mathematica Code, Finance Press, 2000.
[9] X. Mao, Stability of Stochastic Differential Equations with Respect to Semimartingales, Longman Scientific and Technical, 1991.
[10] X. Mao, Exponential Stability of Stochastic Differential Equations, Marcel Dekker, 1994.
[11] X. Mao, Stochastic Differential Equations and Their Applications, 2nd edition, Horwood Publishing Limited, Chichester, 2007.
[12] X. Mao, Stability of stochastic differential equations with Markovian switching, Stochastic Processes and Their Applications, 79 (1999), 45-67. doi: 10.1016/S0304-4149(98)00070-2.
[13] X. Mao, Stabilization of continuous-time hybrid stochastic differential equations by discrete-time feedback control, Automatica, 49 (2013), 3677-3681. doi: 10.1016/j.automatica.2013.09.005.
[14] X. Mao, J. Lam and L. Huang, Stabilisation of hybrid stochastic differential equations by delay feedback control, Control Letters, 57 (2008), 927-935. doi: 10.1016/j.sysconle.2008.05.002.
[15] X. Mao, A. Matasov and A. B. Piunovskiy, Stochastic differential delay equations with Markovian switching, Bernoulli, 6 (2000), 73-90. doi: 10.2307/3318634.
[16] X. Mao and C. Yuan, Stochastic Differential Equations with Markovian Switching, Imperial College Press, 2006. doi: 10.1142/p473.
[17] M. Mariton, Jump Linear Systems in Automatic Control, Marcel Dekker, 1990.
[18] S.-E. A. Mohammed, Stochastic Functional Differential Equations, Longman Scientific and Technical, 1984.
[19] K. Pyragas, Control of chaos via extended delay feedback, Physics Letters A, 206 (1995), 323-330. doi: 10.1016/0375-9601(95)00654-L.
[20] L. Shaikhet, Stability of stochastic hereditary systems with Markov switching, Theory of Stochastic Processes, 2 (1996), 180-184.
[21] L. Wu, X. Su and P. Shi, Sliding mode control with bounded $L_2$ gain performance of Markovian jump singular time-delay systems, Automatica, 48 (2012), 1929-1933. doi: 10.1016/j.automatica.2012.05.064.
[22] S. You, W. Liu, J. Lu, X. Mao and Q. Qiu, Stabilization of hybrid systems by feedback control based on discrete-time state observations, SIAM Journal on Control and Optimization, 53 (2015), 905-925. doi: 10.1137/140985779.
[23] D. Yue and Q. Han, Delay-dependent exponential stability of stochastic systems with time-varying delay, nonlinearity, and Markovian switching, IEEE Transactions on Automatic Control, 50 (2005), 217-222. doi: 10.1109/TAC.2004.841935.
Figure 4.1. The computer simulation of the sample paths of the Markov chain and the SDDE (2.4) with control (4.2) and $ \tau = 0.06 $ using the Euler–Maruyama method with step size $ 10^{-4} $.
Figure 4.2. The computer simulation of the sample paths of the Markov chain and the SDDE (2.4) with control (4.3) and $ \tau = 0.01 $ using the Euler–Maruyama method with step size $ 10^{-4} $.
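The figures above are produced with the Euler–Maruyama method. A minimal sketch of that kind of simulation is given below for an illustrative scalar hybrid SDE with a two-state Markov chain and a linear delay feedback control $u(y) = -ky$; the coefficients, gain, generator, and delay here are assumptions chosen for demonstration, not equation (2.4) or the controls (4.2)–(4.3) from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative scalar hybrid SDE (assumed coefficients, not the paper's model):
#   dx(t) = [f(x(t), r(t)) - k * x(t - tau)] dt + g(x(t), r(t)) dB(t),
# where r(t) is a two-state Markov chain with generator Q.
Q = np.array([[-1.0, 1.0], [2.0, -2.0]])        # generator of the Markov chain
f = lambda x, m: (0.5 if m == 0 else 1.0) * x   # mode-dependent drift
g = lambda x, m: (0.2 if m == 0 else 0.3) * x   # mode-dependent diffusion
k, tau = 2.0, 0.06                              # feedback gain and delay
dt, T = 1e-3, 10.0
n, d = int(T / dt), int(tau / dt)

x = np.empty(n + 1)
x[0] = 1.0
mode = 0
for i in range(n):
    # Markov chain step: leave the current mode with probability ~ -Q[mode, mode] * dt
    if rng.random() < -Q[mode, mode] * dt:
        mode = 1 - mode
    delayed = x[i - d] if i >= d else x[0]      # delayed state x(t - tau)
    dB = rng.normal(scale=np.sqrt(dt))          # Brownian increment
    x[i + 1] = x[i] + (f(x[i], mode) - k * delayed) * dt + g(x[i], mode) * dB

print(abs(x[-1]))  # small when the delay feedback stabilises the sample path
```

With the destabilising drift above, removing the feedback term (setting $k = 0$) lets the path grow, while the delayed control drives it toward zero, mirroring the qualitative behaviour the figures illustrate.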
Mutual kernelized correlation filters with elastic net constraint for visual tracking
Haijun Wang 1 (ORCID: 0000-0003-2481-9662) & Shengyan Zhang 1
In this paper, we propose a robust visual tracking method based on mutual kernelized correlation filters with an elastic net constraint. First, two correlation filters are trained jointly in a general framework with a closed-form solution; the two filters are interrelated and interact with each other. Second, an elastic net constraint is imposed on each discriminative filter, which filters out some interfering features. Third, scale estimation and a target re-detection scheme are adopted in our framework, which deal with scale variation and tracking failure effectively. Extensive experiments on several challenging tracking benchmarks demonstrate that our proposed method achieves competitive tracking performance against other state-of-the-art algorithms.
Visual tracking is a fundamental task in computer vision with numerous applications, such as unmanned control systems, surveillance, assistant driving, and so on. Given the position of the tracked object in the first frame, the goal of visual tracking is to estimate the position of the tracked target precisely in subsequent frames. Although great progress has been made in recent years [1, 2], designing a robust tracking algorithm is still a challenging problem due to negative factors such as background clutters, severe occlusion, motion blur, and illumination variation (see Fig. 1).
Tracking results in challenging environments including background clutters (motorRolling), severe occlusion (Jogging-1), fast motion (skiing), illumination change (Singer2). The tracking results of HDT, Staple, KCF, CNN-SVM, DSST, MEEM, and our tracker are shown by red, green, blue, black, magenta, cyan, and gray rectangles, respectively
Generally speaking, visual tracking methods can be divided into two categories: generative methods [3,4,5,6,7] and discriminative methods [8,9,10,11,12,13]. Generative methods attempt to build a model to represent the tracked target and find the region with the minimum reconstruction error among a great number of candidates. For example, under the particle filter framework, Mei et al. [14] developed a tracking method based on sparse representation, called the ℓ1 method, which reconstructs each candidate with dictionary templates and trivial templates. The sparse representation coefficients of each candidate can be computed by solving an ℓ1 minimization. Although the ℓ1 method demonstrated impressive tracking performance, its tracking speed is very slow because of its huge computational load. In order to solve this problem, Bao et al. [15] proposed a fast ℓ1 tracking method using an accelerated proximal gradient approach. Xiao et al. [16] presented a fast object tracking method by solving an ℓ2 regularized least squares problem. Wang et al. [17] developed a novel and fast visual tracking method via a probability continuous outlier model. In contrast to generative methods, discriminative algorithms regard visual tracking as a binary classification problem which distinguishes the tracked object from the background. For example, Babenko et al. [18] trained an online discriminative classifier to separate the tracked object from the background by online multiple instance learning. Zhang et al. [19] formulated visual tracking as a binary classification problem via a naive Bayes classifier with an online update scheme in the compressed domain.
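The reconstruction-error idea behind these generative trackers can be sketched with an ℓ2-regularized least squares scorer: each candidate patch is reconstructed from a dictionary of target templates, and candidates are ranked by reconstruction error. The dictionary size, patch dimension, and regularization weight below are illustrative assumptions, not values from any of the cited trackers.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy generative scoring: reconstruct each vectorised candidate patch y from a
# dictionary D of target templates via ridge regression, then rank candidates
# by reconstruction error (smaller error = more target-like).
D = rng.normal(size=(64, 10))   # 10 vectorised target templates of length 64
lam = 0.01                      # l2 regularisation weight

def score(y, D, lam):
    # Closed-form ridge solution: c = (D^T D + lam I)^{-1} D^T y
    c = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ y)
    return np.linalg.norm(y - D @ c)   # reconstruction error of the candidate

target_like = D @ rng.normal(size=10)  # candidate lying in the template span
clutter = rng.normal(size=64)          # background candidate
print(score(target_like, D, lam) < score(clutter, D, lam))  # expect True
```

A candidate explained well by the templates gets a near-zero error, while clutter retains a large residual, which is exactly the criterion generative trackers use to pick the best candidate; the ℓ1 variants replace the ridge penalty with a sparsity-inducing one.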
In recent years, visual tracking methods based on correlation filters [20,21,22,23,24,25] have attracted great attention due to their real-time tracking speed and robust tracking performance. Under the correlation filter framework, a discriminative classifier is trained with a great number of densely sampled examples. These dense samples have a circulant structure which allows the use of the fast Fourier transform (FFT). Bolme et al. [26] first developed a minimum output sum of squared error filter for real-time visual tracking. After that, many tracking methods based on correlation filters have been proposed to improve tracking performance. Henriques et al. [27] developed a high-speed tracker with kernelized correlation filters which can deal with multi-channel features. Danelljan et al. [28] presented a discriminative scale space tracker with a correlation filter based on a scale pyramid representation. In order to mitigate the unwanted boundary effects that appear in traditional correlation-based trackers, Danelljan et al. [29] proposed spatially regularized discriminative correlation filters (SRDCF) for visual tracking. Recent research has shown that features from convolutional neural networks (CNN) can greatly improve tracking performance [30,31,32,33]. Zhang et al. [34] built a simple two-layer convolutional network to learn robust representations for visual tracking without offline training. Ma et al. [35] utilized three convolutional layers to learn robust target appearance for visual tracking. Wang et al. [36] exploited robust target appearance representations from the top layer to lower layers for object tracking. Heng et al. [37] incorporated a recurrent neural network (RNN) into a CNN to improve tracking performance. He et al. [38] integrated weighted convolution responses from 10 layers and achieved very promising performance.
Although correlation filter-based trackers have obtained superior tracking performance, many of them rely on a single correlation filter and cannot achieve promising tracking results. Figure 2 gives the precision plots and success plots of OPE for methods with different numbers of correlation filters on OTB-2013. Simply merging two correlation filters already improves tracking performance greatly in both precision and success rate. However, there is still much room for improvement for methods whose two correlation filters are independent of each other.
Precision plots and success plots of OPE by methods with different correlation filters on OTB-2013
Inspired by the above discussions, we develop a robust visual tracking method via mutual kernelized correlation filters using features from convolutional neural networks (MKCF_CNN), where each tracker works on its own and tries to correct the other one. At the same time, an elastic net constraint is imposed on each filter, which can eliminate some distractive features. Finally, the proposed tracking framework can be solved in a closed-form fashion. Extensive experiments demonstrate that our method achieves promising tracking performance competitive with other state-of-the-art trackers.
The rest of this paper is organized as follows. Section 2 briefly summarizes the principle of visual tracking based on kernelized correlation filters. Section 3 introduces the proposed tracking algorithm in detail. The experimental results and corresponding discussions are described in Section 4, followed by the conclusion in Section 5.
Visual tracking based on kernelized correlation filters
Henriques et al. [27] proposed a fast discriminative visual tracking method based on kernelized correlation filters (KCF). Given an n × 1 vector x = [x1, x2, …, xn]T denoting a base image, a shifted version of x is P1x = [xn, x1, …, xn − 1]T, where P is a permutation matrix. The full set of shifted signals of x is given by {Pux|u = 0, 1, …, n − 1}. The data matrix X is then defined by all cyclic shifts of x and can be diagonalized by the discrete Fourier transform (DFT).
$$ \mathbf{X}={F}^H\operatorname{diag}\left(\hat{\mathbf{x}}\right)F $$
Here, F means the DFT matrix, H stands for the conjugate (Hermitian) transpose, and \( \hat{\mathbf{x}}=\mathcal{F}\left(\mathbf{x}\right) \) is the DFT of vector x. The goal of KCF is to find a discriminative correlation classifier f(x) over the data matrix X for separating the target object from the surrounding environment. Given the training data and their corresponding labels (x1, y1), …, (xm, ym), the discriminative correlation classifier f(x) can be obtained by the following equation,
$$ \underset{\mathbf{w}}{\min}\sum \limits_i{\left(f\left({\mathbf{x}}_i\right)-{y}_i\right)}^2+\lambda {\left\Vert \mathbf{w}\right\Vert}^2 $$
where λ means the regularization parameter. xi stands for the ith row element of the data matrix X. A Gaussian function is adopted to model the label yi. When xi is the centered target, yi is set to 1. For the other cyclic shifted version of xi around the center target, their labels smoothly decay to 0. The solution w can be easily obtained by w = (XHX + λI)−1XHy. In order to get a powerful model, kernel trick is introduced into Eq. (2). The new model is rewritten as
$$ \underset{\alpha }{\min }{\left\Vert \mathbf{K}\alpha -\mathbf{y}\right\Vert}_2^2+\lambda {\alpha}^T\mathbf{K}\alpha $$
where K is an n × n kernel matrix whose elements are Kij = k(xi, xj). Matrix K has a circulant structure and can be diagonalized as
$$ \mathbf{K}={F}^H\operatorname{diag}\left(\hat{\mathbf{k}}\right)F $$
Here, k is the first row of matrix K. The solution α in the dual space can be given by
$$ \alpha ={\left(\mathbf{K}+\lambda \mathbf{I}\right)}^{-1}\mathbf{y} $$
where I is an identity matrix. Just as the data matrix X, kernel matrix K is also circulant. So, the solution of Eq. (3) can be efficiently computed in the frequency domain.
$$ \hat{\alpha}=\frac{\hat{\mathbf{y}}}{{\hat{\mathbf{k}}}^{\mathbf{xx}}+\lambda } $$
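Because K is circulant, the dense matrix inverse of Eq. (5) collapses to an elementwise division in the Fourier domain, as in Eq. (6). The following is a minimal NumPy sketch (not the paper's MATLAB implementation) verifying this equivalence on a random circulant system; all variable names and sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam = 8, 0.1
k = rng.standard_normal(n)   # first column of the circulant kernel matrix K
y = rng.standard_normal(n)   # regression labels

# Dense circulant matrix: column j is k cyclically shifted by j
K = np.stack([np.roll(k, j) for j in range(n)], axis=1)

# Direct solution of (K + lambda*I) alpha = y, as in Eq. (5)
alpha_direct = np.linalg.solve(K + lam * np.eye(n), y)

# Frequency-domain solution, as in Eq. (6): an elementwise division
alpha_fft = np.fft.ifft(np.fft.fft(y) / (np.fft.fft(k) + lam)).real

print(np.allclose(alpha_direct, alpha_fft))  # True
```

The O(n³) solve is replaced by an O(n log n) FFT pipeline, which is the source of KCF's speed.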
In the next frame, a large set of candidates, denoted as x′, is extracted at the same position as in the current frame. All these candidates x′ are obtained from cyclic shifts of the base image x. The responses of these candidates are computed from
$$ f\left({\mathbf{x}}^{\hbox{'}}\right)={\mathcal{F}}^{-1}\left({\hat{\mathbf{k}}}^{\hbox{'}}\circ {\hat{\alpha}}_t\right) $$
Here, ℱ−1 stands for the inverse discrete Fourier transform (IDFT). \( {\hat{\mathbf{k}}}^{\hbox{'}} \) means the kernel correlation of candidates x' and base image x in the frequency domain. ∘ denotes element by element multiplication. The candidate with the largest response is chosen as the final target object in the next frame.
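To make training (Eq. (6)) and detection (Eq. (7)) concrete, here is a small 1-D NumPy sketch with a linear kernel: a filter is trained on a base signal, and its response map recovers the cyclic shift of a test signal. The setup (signal length, label width, λ) is illustrative, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(1)
n, lam, shift = 64, 1e-2, 7
x = rng.standard_normal(n)        # base sample (training patch)
z = np.roll(x, shift)             # test patch: x translated by `shift`

# Gaussian regression label peaked at zero shift (wraps around)
d = np.minimum(np.arange(n), n - np.arange(n))
y = np.exp(-0.5 * (d / 2.0) ** 2)

xf, zf, yf = np.fft.fft(x), np.fft.fft(z), np.fft.fft(y)

# Training: alpha_hat = y_hat / (k_hat + lambda), linear kernel k_hat = conj(x_hat) * x_hat
alpha_f = yf / (np.conj(xf) * xf + lam)

# Detection: responses of all cyclic shifts of the test patch at once
response = np.real(np.fft.ifft(np.conj(xf) * zf * alpha_f))

print(np.argmax(response))  # recovers the translation: 7
```

The peak location of the response map is exactly the translation of the target, which is how the tracker updates the target position frame to frame.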
Though the KCF method has obtained promising tracking performance, only one discriminative classifier is used in this model, which makes KCF unable to deal with complex scenes. To overcome this problem, inspired by ensemble tracking methods, we propose mutual kernelized correlation filters with an elastic net constraint for visual tracking. Extensive experiments show that our method can perform better than state-of-the-art methods. The flowchart of our proposed tracking framework is demonstrated in Fig. 3.
The flowchart of our proposed tracking framework
In order to find the best target object from a great deal of candidates, we introduce a linear regressor model in the proposed method.
$$ \underset{\mathbf{w}}{\min }{\left\Vert \mathbf{y}-\mathbf{Xw}\right\Vert}_2^2 $$
Here, X has the same definition as in KCF, y is the regression label vector of X, and w is the corresponding coefficient vector. To improve Eq. (8), as in the least absolute shrinkage and selection operator (LASSO) model, the ℓ1 norm is adopted to regularize the coefficients w.
$$ \underset{\mathbf{w}}{\min }{\left\Vert \mathbf{y}-\mathbf{Xw}\right\Vert}_2^2+\tau {\left\Vert \mathbf{w}\right\Vert}_1 $$
where τ is a constant weight parameter. In Eq. (9), some entries of w are driven to zero, which excludes occluded pixels from the model, so they have less effect on the final regression values. However, occluded pixels often cluster together in one region, and Eq. (9) cannot group pixels with similar features. To overcome this limitation of the LASSO model, an elastic net regularization [39] is introduced into Eq. (9).
$$ \underset{\mathbf{w}}{\min }{\left\Vert \mathbf{y}-\mathbf{Xw}\right\Vert}_2^2+\lambda {\left\Vert \mathbf{w}\right\Vert}_2^2+\tau {\left\Vert \mathbf{w}\right\Vert}_1 $$
Here, λ is a constant weight parameter. The ℓ2 term groups pixels with similar properties. To further improve tracking performance, the kernel trick is exploited in Eq. (10): the candidates are mapped to a high-dimensional feature space φ(x), and in the dual space the solution w is given by a linear combination of the mapped candidates.
$$ \mathbf{w}=\sum \limits_i{\alpha}_i\varphi \left({\mathbf{x}}_i\right) $$
Equation (10) in the dual space can be described as
$$ \underset{\alpha }{\min }{\left\Vert \mathbf{y}-\mathbf{K}\alpha \right\Vert}_2^2+{\lambda \alpha}^T\mathbf{K}\alpha +\tau {\left\Vert \alpha \right\Vert}_1 $$
where K represents the kernel matrix. The solution α involves a squared norm and an ℓ1 norm simultaneously. To compute α efficiently, an auxiliary variable β is introduced into Eq. (12).
$$ \underset{\alpha, \beta }{\min }{\left\Vert \mathbf{y}-\mathbf{K}\alpha \right\Vert}_2^2+{\lambda \alpha}^T\mathbf{K}\alpha +\tau {\left\Vert \beta \right\Vert}_1+\mu {\left\Vert \alpha -\beta \right\Vert}_2^2 $$
Here, μ is a constant weight parameter.
Mutual kernelized correlation filters
In this part, we introduce mutual kernelized correlation filters based on Eq. (13). The proposed mutual kernelized correlation filters solve the following problem
$$ {\displaystyle \begin{array}{l}T\left({\alpha}_1,{\alpha}_2\right)=\underset{\alpha_1,{\alpha}_2}{\min }{\left\Vert \mathbf{y}-{\mathbf{K}}_1{\alpha}_1\right\Vert}_2^2+{\left\Vert \mathbf{y}-{\mathbf{K}}_2{\alpha}_2\right\Vert}_2^2+{\lambda \alpha}_1^T{\mathbf{K}}_1{\alpha}_1+{\lambda \alpha}_2^T{\mathbf{K}}_2{\alpha}_2+\tau {\left\Vert {\beta}_1\right\Vert}_1\\ {}+\tau {\left\Vert {\beta}_2\right\Vert}_1+\mu {\left\Vert {\alpha}_1-{\beta}_1\right\Vert}_2^2+\mu {\left\Vert {\alpha}_2-{\beta}_2\right\Vert}_2^2+2\rho {\left\Vert {\mathbf{K}}_1{\alpha}_1-{\mathbf{K}}_2{\alpha}_2\right\Vert}_2^2\end{array}} $$
The first two terms of Eq. (14) force each kernelized correlation filter to have the minimum squared error with respect to the desired output regression label y. \( {\lambda \alpha}_1^T{\mathbf{K}}_1{\alpha}_1+{\lambda \alpha}_2^T{\mathbf{K}}_2{\alpha}_2 \) denote the ℓ2 part of the elastic net regularization on the two models, respectively. \( \tau {\left\Vert {\beta}_1\right\Vert}_1+\tau {\left\Vert {\beta}_2\right\Vert}_1+\mu {\left\Vert {\alpha}_1-{\beta}_1\right\Vert}_2^2+\mu {\left\Vert {\alpha}_2-{\beta}_2\right\Vert}_2^2 \) are introduced to exclude the occluded pixels in the target object. \( 2\rho {\left\Vert {\mathbf{K}}_1{\alpha}_1-{\mathbf{K}}_2{\alpha}_2\right\Vert}_2^2 \) weights the mutual influence of the two kernelized correlation filter models, encouraging them to agree.
It is obvious that Eq. (14) is convex with respect to α1, α2 when β1, β2 are fixed, and vice versa. We therefore propose an iterative algorithm to compute the solutions α1, α2, splitting Eq. (14) into four subproblems with respect to α1, β1, α2, and β2:
$$ {T}_1\left({\alpha}_1\right)=\underset{\alpha_1}{\min }{\left\Vert \mathbf{y}-{\mathbf{K}}_1{\alpha}_1\right\Vert}_2^2+{\lambda \alpha}_1^T{\mathbf{K}}_1{\alpha}_1+\tau {\left\Vert {\beta}_1\right\Vert}_1+\mu {\left\Vert {\alpha}_1-{\beta}_1\right\Vert}_2^2+2\rho {\left\Vert {\mathbf{K}}_1{\alpha}_1-{\mathbf{K}}_2{\alpha}_2\right\Vert}_2^2 $$
$$ {T}_2\left({\beta}_1\right)=\underset{\beta_1}{\min}\tau {\left\Vert {\beta}_1\right\Vert}_1+\mu {\left\Vert {\alpha}_1-{\beta}_1\right\Vert}_2^2 $$
$$ {T}_3\left({\alpha}_2\right)=\underset{\alpha_2}{\min }{\left\Vert \mathbf{y}-{\mathbf{K}}_2{\alpha}_2\right\Vert}_2^2+{\lambda \alpha}_2^T{\mathbf{K}}_2{\alpha}_2+\tau {\left\Vert {\beta}_2\right\Vert}_1+\mu {\left\Vert {\alpha}_2-{\beta}_2\right\Vert}_2^2+2\rho {\left\Vert {\mathbf{K}}_1{\alpha}_1-{\mathbf{K}}_2{\alpha}_2\right\Vert}_2^2 $$
$$ {T}_4\left({\beta}_2\right)=\underset{\beta_2}{\min}\tau {\left\Vert {\beta}_2\right\Vert}_1+\mu {\left\Vert {\alpha}_2-{\beta}_2\right\Vert}_2^2 $$
Setting the derivative of T1 with respect to α1 to zero, Eq. (15) can be rewritten as follows:
$$ {\displaystyle \begin{array}{l}\frac{\partial {T}_1}{\partial {\alpha}_1}=-2{\mathbf{K}}_1\left(\mathbf{y}-{\mathbf{K}}_1{\alpha}_1\right)+2\lambda {\mathbf{K}}_1{\alpha}_1+4\rho {\mathbf{K}}_1\left({\mathbf{K}}_1{\alpha}_1-{\mathbf{K}}_2{\alpha}_2\right)+2\mu \left({\alpha}_1-{\beta}_1\right)\\ {}=-2{\mathbf{K}}_1\mathbf{y}+2{\mathbf{K}}_1{\mathbf{K}}_1{\alpha}_1+2\lambda {\mathbf{K}}_1{\alpha}_1+4\rho {\mathbf{K}}_1{\mathbf{K}}_1{\alpha}_1-4\rho {\mathbf{K}}_1{\mathbf{K}}_2{\alpha}_2+2{\mu \alpha}_1-2{\mu \beta}_1\\ {}=0\end{array}} $$
Rearranging formula (19), we obtain
$$ {\displaystyle \begin{array}{l}{\mathbf{K}}_1{\mathbf{K}}_1{\alpha}_1+\lambda {\mathbf{K}}_1{\alpha}_1+2\rho {\mathbf{K}}_1{\mathbf{K}}_1{\alpha}_1+{\mu \alpha}_1={\mathbf{K}}_1\mathbf{y}+2\rho {\mathbf{K}}_1{\mathbf{K}}_2{\alpha}_2+{\mu \beta}_1\\ {}\Rightarrow \left({\mathbf{K}}_1{\mathbf{K}}_1+\lambda {\mathbf{K}}_1+2\rho {\mathbf{K}}_1{\mathbf{K}}_1+\mu \mathbf{I}\right){\alpha}_1={\mathbf{K}}_1\mathbf{y}+2\rho {\mathbf{K}}_1{\mathbf{K}}_2{\alpha}_2+{\mu \beta}_1\end{array}} $$
Then, we obtain the solution α1
$$ {\alpha}_1={\left({\mathbf{K}}_1{\mathbf{K}}_1+\lambda {\mathbf{K}}_1+2\rho {\mathbf{K}}_1{\mathbf{K}}_1+\mu \mathbf{I}\right)}^{-1}\left({\mathbf{K}}_1\mathbf{y}+2\rho {\mathbf{K}}_1{\mathbf{K}}_2{\alpha}_2+{\mu \beta}_1\right) $$
Setting the derivative of T3 with respect to α2 to zero yields the analogous solution
$$ {\alpha}_2={\left({\mathbf{K}}_2{\mathbf{K}}_2+\lambda {\mathbf{K}}_2+2\rho {\mathbf{K}}_2{\mathbf{K}}_2+\mu \mathbf{I}\right)}^{-1}\left({\mathbf{K}}_2\mathbf{y}+2\rho {\mathbf{K}}_2{\mathbf{K}}_1{\alpha}_1+{\mu \beta}_2\right) $$
It is straightforward that Eqs. (16) and (18) are least squares problems with ℓ1 norm regularization. Thus, the solutions β1 and β2 have closed forms obtained by a soft shrinkage function
$$ {\beta}_1=\operatorname{sign}\left({\alpha}_1\right)\max \left(0,\left|{\alpha}_1\right|-\frac{\tau }{2\mu}\right) $$
$$ {\beta}_2=\operatorname{sign}\left({\alpha}_2\right)\max \left(0,\left|{\alpha}_2\right|-\frac{\tau }{2\mu}\right) $$
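The soft shrinkage operator has a one-line implementation. A small NumPy sketch (the coefficient values and weights are illustrative, not from the paper):

```python
import numpy as np

def soft_shrink(a, thr):
    """Elementwise solution of min_b tau*||b||_1 + mu*||a - b||_2^2, with thr = tau/(2*mu)."""
    return np.sign(a) * np.maximum(0.0, np.abs(a) - thr)

tau, mu = 0.4, 1.0                         # illustrative weights
alpha = np.array([0.9, -0.3, 0.05, -1.2])  # illustrative dual coefficients
beta = soft_shrink(alpha, tau / (2 * mu))
print(beta)  # small coefficients are zeroed out
```

Coefficients whose magnitude falls below τ/(2μ) are set exactly to zero, which is how the model suppresses occluded or distractive pixels.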
By introducing Eq. (4), Eq. (21) can be reformulated as follows:
$$ {\displaystyle \begin{array}{l}{\alpha}_1={\left({\mathbf{K}}_1{\mathbf{K}}_1+\lambda {\mathbf{K}}_1+2\rho {\mathbf{K}}_1{\mathbf{K}}_1+\mu \mathbf{I}\right)}^{-1}\left({\mathbf{K}}_1\mathbf{y}+2\rho {\mathbf{K}}_1{\mathbf{K}}_2{\alpha}_2+{\mu \beta}_1\right)\\ {}={\left(\left(1+2\rho \right){F}^H\operatorname{diag}\left({\hat{\mathbf{k}}}_1\circ {\hat{\mathbf{k}}}_1\right)F+\lambda {F}^H\operatorname{diag}\left({\hat{\mathbf{k}}}_1\right)F+\mu \mathbf{I}\right)}^{-1}\\ {}\times \left({F}^H\operatorname{diag}\left({\hat{\mathbf{k}}}_1\right)F\mathbf{y}+2\rho {F}^H\operatorname{diag}\left({\hat{\mathbf{k}}}_1\circ {\hat{\mathbf{k}}}_2\right)F{\alpha}_2+{\mu \beta}_1\right)\\ {}={F}^H\operatorname{diag}\left(\frac{1}{\left(1+2\rho \right){\hat{\mathbf{k}}}_1\circ {\hat{\mathbf{k}}}_1+\lambda {\hat{\mathbf{k}}}_1+\mu}\right)\operatorname{diag}\left({\hat{\mathbf{k}}}_1\right)F\mathbf{y}\\ {}+2\rho {F}^H\operatorname{diag}\left(\frac{1}{\left(1+2\rho \right){\hat{\mathbf{k}}}_1\circ {\hat{\mathbf{k}}}_1+\lambda {\hat{\mathbf{k}}}_1+\mu}\right)\operatorname{diag}\left({\hat{\mathbf{k}}}_1\circ {\hat{\mathbf{k}}}_2\right)F{\alpha}_2\\ {}+\mu {F}^H\operatorname{diag}\left(\frac{1}{\left(1+2\rho \right){\hat{\mathbf{k}}}_1\circ {\hat{\mathbf{k}}}_1+\lambda {\hat{\mathbf{k}}}_1+\mu}\right)F{\beta}_1\end{array}} $$
Then, the DFT of α1 is found by
$$ {\displaystyle \begin{array}{l}{\hat{\alpha}}_1=\operatorname{diag}\left(\frac{{\hat{\mathbf{k}}}_1}{\left(1+2\rho \right){\hat{\mathbf{k}}}_1\circ {\hat{\mathbf{k}}}_1+\lambda {\hat{\mathbf{k}}}_1+\mu}\right)\hat{\mathbf{y}}+2\rho \operatorname{diag}\left(\frac{{\hat{\mathbf{k}}}_1\circ {\hat{\mathbf{k}}}_2}{\left(1+2\rho \right){\hat{\mathbf{k}}}_1\circ {\hat{\mathbf{k}}}_1+\lambda {\hat{\mathbf{k}}}_1+\mu}\right){\hat{\alpha}}_2\\ {}+\mu \operatorname{diag}\left(\frac{1}{\left(1+2\rho \right){\hat{\mathbf{k}}}_1\circ {\hat{\mathbf{k}}}_1+\lambda {\hat{\mathbf{k}}}_1+\mu}\right){\hat{\beta}}_1\\ {}=\frac{{\hat{\mathbf{k}}}_1\circ \hat{\mathbf{y}}+2\rho {\hat{\mathbf{k}}}_1\circ {\hat{\mathbf{k}}}_2\circ {\hat{\alpha}}_2+\mu {\hat{\beta}}_1}{\left(1+2\rho \right){\hat{\mathbf{k}}}_1\circ {\hat{\mathbf{k}}}_1+\lambda {\hat{\mathbf{k}}}_1+\mu}\end{array}} $$
In the same way, the DFT of α2 is obtained from
$$ {\hat{\alpha}}_2=\frac{{\hat{\mathbf{k}}}_2\circ \hat{\mathbf{y}}+2\rho {\hat{\mathbf{k}}}_2\circ {\hat{\mathbf{k}}}_1\circ {\hat{\alpha}}_1+\mu {\hat{\beta}}_2}{\left(1+2\rho \right){\hat{\mathbf{k}}}_2\circ {\hat{\mathbf{k}}}_2+\lambda {\hat{\mathbf{k}}}_2+\mu } $$
Here, k2 is the first row of matrix K2.
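As a sanity check, the spatial-domain closed form for α1 (Eq. (21)) and its frequency-domain counterpart agree whenever K1 and K2 are circulant. A NumPy sketch with random circulant matrices (built from their first column for convenience; all values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 16
lam, mu, rho = 1e-4, 1e-4, 1e-3

def circ(c):
    # Circulant matrix whose first column is c
    return np.stack([np.roll(c, j) for j in range(n)], axis=1)

k1, k2 = rng.standard_normal(n), rng.standard_normal(n)
y, a2, b1 = rng.standard_normal(n), rng.standard_normal(n), rng.standard_normal(n)
K1, K2 = circ(k1), circ(k2)

# Spatial-domain closed form, Eq. (21): a dense n x n solve
A = (1 + 2 * rho) * K1 @ K1 + lam * K1 + mu * np.eye(n)
rhs = K1 @ y + 2 * rho * K1 @ K2 @ a2 + mu * b1
a1_spatial = np.linalg.solve(A, rhs)

# Frequency-domain counterpart: elementwise divisions instead of a matrix inverse
f = np.fft.fft
k1f, k2f = f(k1), f(k2)
num = k1f * f(y) + 2 * rho * k1f * k2f * f(a2) + mu * f(b1)
den = (1 + 2 * rho) * k1f * k1f + lam * k1f + mu
a1_freq = np.fft.ifft(num / den).real

print(np.allclose(a1_spatial, a1_freq))  # True
```

Each iteration of the alternating scheme therefore costs only a handful of FFTs per filter, rather than a matrix inversion.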
Model update
To keep the proposed MKCF_CNN method robust over time, an incremental scheme is adopted to update the model,
$$ {\displaystyle \begin{array}{l}{\alpha}_{1,t}=\left(1-\eta \right){\alpha}_{1,t-1}+{\eta \alpha}_{1,t}\\ {}{\alpha}_{2,t}=\left(1-\eta \right){\alpha}_{2,t-1}+{\eta \alpha}_{2,t}\end{array}} $$
$$ {\displaystyle \begin{array}{l}{\mathbf{x}}_{1,t}=\left(1-\eta \right){\mathbf{x}}_{1,t-1}+\eta {\mathbf{x}}_{1,t}\\ {}{\mathbf{x}}_{2,t}=\left(1-\eta \right){\mathbf{x}}_{2,t-1}+\eta {\mathbf{x}}_{2,t}\end{array}} $$
where η is a constant parameter controlling the learning rate. The subscript t denotes the tth frame. This incremental update strategy helps the tracker handle abrupt appearance changes between successive frames.
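The update of Eqs. (28) and (29) is a simple exponential moving average. A NumPy sketch (the helper name and toy values are ours):

```python
import numpy as np

eta = 0.01  # learning rate, as in the paper's setup

def ema_update(prev, cur, eta=eta):
    """Incremental model update: blend the previous estimate with the current one."""
    return (1 - eta) * prev + eta * cur

alpha_prev = np.ones(4)       # illustrative filter coefficients at frame t-1
alpha_cur = np.full(4, 2.0)   # freshly estimated coefficients at frame t
print(ema_update(alpha_prev, alpha_cur))  # each entry moves 1% toward the new value
```

A small η makes the model drift slowly toward new appearances, trading adaptation speed for stability against occlusion and noise.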
Target detection
For kernelized correlation filter K1, in the tth frame, a large set of circulant candidates, denoted as \( {\mathbf{x}}_{1,t}^{\hbox{'}} \), is extracted around the base image x1, t − 1, which is located at the target position in the (t − 1)th frame. The candidates \( {\mathbf{x}}_{1,t}^{\hbox{'}} \) have a circulant structure. Thus, their responses are given by
$$ \mathrm{response}1={\mathcal{F}}^{-1}\left({\hat{\mathbf{k}}}_1^{\hbox{'}}\circ {\hat{\alpha}}_{1,t}\right) $$
In the same way, the responses of the candidates \( {\mathbf{x}}_{2,t}^{\hbox{'}} \) with respect to kernelized correlation filter K2 are obtained by
$$ \mathrm{response}2={\mathcal{F}}^{-1}\left({\hat{\mathbf{k}}}_2^{\hbox{'}}\circ {\hat{\alpha}}_{2,t}\right) $$
The maximum values of response1 and response2 are obtained by max(response1(:)) and max(response2(:)), respectively. If max(response1(:)) > max(response2(:)), the final response equals max(response1(:)); otherwise, it equals max(response2(:)). The best target position is then obtained from the final response.
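The selection rule above can be sketched in a few lines of Python; `fuse` is a hypothetical helper, and the toy 2 × 2 maps stand in for response1 and response2:

```python
import numpy as np

def fuse(response1, response2):
    """Keep the response map whose peak is higher, then return its peak position."""
    r = response1 if response1.max() > response2.max() else response2
    return np.unravel_index(np.argmax(r), r.shape)

r1 = np.array([[0.1, 0.8], [0.2, 0.3]])  # toy response map of filter 1
r2 = np.array([[0.4, 0.5], [0.9, 0.1]])  # toy response map of filter 2
print(fuse(r1, r2))  # filter 2 has the higher peak: (1, 0)
```

Winner-take-all fusion lets the more confident of the two mutual filters decide the target location in each frame.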
Convolutional neural network (CNN) features extracted from MatConvNet
Traditional handcrafted features, such as the histogram of oriented gradients (HOG), SIFT, and color names (CN), achieved promising tracking performance in the past decade but have since been surpassed by CNN features. In [40], CNN-based representations obtained impressive results on image recognition and object detection. In [35], three convolutional layers of the VGG-19 model, conv3 − 4, conv4 − 4, and conv5 − 4, were introduced to visual tracking and demonstrated powerful representation ability. Inspired by [41], we use the conv5 − 4 and conv4 − 4 convolution layers of VGG-19 to model the target appearance. Features from conv5 − 4 carry more semantic information and can discriminate the target from a dramatically changing background, while features from conv4 − 4 retain more spatial detail and can locate the target precisely.
Target recovery
We adopt the EdgeBox method [42] to re-detect the target after tracking failures. A large set of object bounding box proposals Pd is generated by EdgeBox, and these proposals are evaluated under the correlation filter framework to decide the final tracking position. Given the target position (xt − 1, yt − 1) in the (t − 1)th frame, a set of bounding box proposals is extracted around that position in the current frame. The position of each proposal pi in the tth frame is \( \left({x}_t^i,{y}_t^i\right) \). The maximum response score of each proposal pi, denoted r(pi), is computed by Eq. (7) using HOG features. If the tracking score in the tth frame is smaller than a threshold T0, the tracker is assumed to have lost the target and the re-detection scheme is triggered. The optimal bounding box proposal in the tth frame is obtained by minimizing the following expression:
$$ {\displaystyle \begin{array}{l}\arg \underset{i}{\min }r\left({p}_t^i\right)+\alpha L\left({p}_t^i,{p}_{t-1}\right)\\ {}\kern2.75em s.t.\kern0.5em r\left({p}_t^i\right)>1.5{T}_0\end{array}} $$
where \( L\left({p}_t^i,{p}_{t-1}\right)=\exp \left(-\frac{1}{2{\sigma}^2}{\left\Vert \left({x}_t^i,{y}_t^i\right)-\left({x}_{t-1},{y}_{t-1}\right)\right\Vert}^2\right) \). The formula \( L\left({p}_t^i,{p}_{t-1}\right) \) is motion constraint between two successive frames. α is a constant parameter which controls the balance between the response score and the motion constraint. σ means the diagonal length of the initial target size.
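The motion constraint L is simply a Gaussian weight on the displacement between successive frames. A minimal Python sketch (function name and values are illustrative, not from the paper's code):

```python
import math

def motion_weight(p_t, p_prev, sigma):
    """Gaussian motion constraint L(p_t, p_{t-1}) on the inter-frame displacement."""
    dx, dy = p_t[0] - p_prev[0], p_t[1] - p_prev[1]
    return math.exp(-(dx * dx + dy * dy) / (2.0 * sigma * sigma))

# A proposal 10 pixels away keeps a weight close to 1; distant proposals decay to 0
print(round(motion_weight((110, 100), (100, 100), sigma=50.0), 4))  # 0.9802
```

Proposals far from the last known position are thus down-weighted, which suppresses spurious re-detections on background distractors.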
Scale estimation
Scale estimation is very important for robust tracking. Motivated by [42], we use the EdgeBox method to handle the scale variation that appears in sequences. Given the target size (wt − 1, ht − 1) in the (t − 1)th frame, EdgeBox generates multi-scale bounding box proposals Ps of size swt − 1 × sht − 1 in the current frame, and we reject proposals whose intersection over union (IoU) with the previous bounding box is lower than 0.6 or higher than 0.9. For each accepted scale proposal, we compute the response score under the correlation filter framework. If the maximum response score {r(pi)|pi ∈ Ps} is smaller than the response obtained in Section 3.4, we keep the target size from the (t − 1)th frame. Otherwise, we update the size by the following equation:
$$ \left({w}_t,{h}_t\right)=\gamma \left({w}_t^{\ast },{h}_t^{\ast}\right)+\left(1-\gamma \right)\left({w}_{t-1},{h}_{t-1}\right) $$
where \( \left({w}_t^{\ast },{h}_t^{\ast}\right) \) is the size of the proposal with the maximum response score. γ is a constant parameter which controls the update rate.
In this section, we evaluate our proposed method on three public datasets: OTB-2013 [43], TColor-128 [44], and DTB70 [45]. Matlab pseudo-codes and the tracking pipeline of our MKCF_CNN method are given in Tables 1 and 2, respectively. Extensive experiments demonstrate that our method achieves very appealing performance in terms of effectiveness and robustness.
Table 1 Matlab pseudo-codes of MKCF_CNN
Table 2 Tracking pipeline of MKCF_CNN method
Experimental setup
The proposed MKCF_CNN method is implemented in MATLAB on a PC equipped with an Intel Xeon CPU E5-2640 v4 with 128G RAM and a single NVIDIA GeForce GTX 1080Ti. We adopt the pretrained VGGNet-19 as our feature extractor and utilize MatConvNet for feature generation. We train two correlation filters utilizing outputs from the conv4 − 4 and conv5 − 4 layers. The linear kernel is adopted in this paper. The parameters λ, τ, μ, and ρ in (14) are empirically set to 10−4, 10−5, 10−4, and 10−3, respectively. We set the update rate η in (28) and (29) to 0.01 and the weight parameter γ in (33) to 0.6. The tracking failure threshold T0 is set to 0.2.
We use two measurements, precision plots and success plots [46], to quantitatively assess the tracking results of our method. Precision plots illustrate the percentage of frames in which the center location error is within a given threshold, set to 20 pixels. The center location error is the Euclidean distance between the tracked location and the ground truth. Success plots show the percentage of frames in which the overlap rate S is larger than a fixed threshold T1. The overlap rate S is defined as \( S=\frac{\mathrm{Area}\left({B}_E\cap {B}_G\right)}{\mathrm{Area}\left({B}_E\cup {B}_G\right)} \). ∩ and ∪ are the intersection and union operators, respectively. BE denotes the estimated bounding box and BG is the ground-truth bounding box. T1 is set to 0.5 in this paper.
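The overlap rate S is the standard intersection-over-union. A self-contained Python sketch using (x, y, w, h) boxes (the helper name is ours):

```python
def iou(b1, b2):
    """Overlap rate S = Area(B_E intersect B_G) / Area(B_E union B_G); boxes are (x, y, w, h)."""
    x1, y1 = max(b1[0], b2[0]), max(b1[1], b2[1])
    x2 = min(b1[0] + b1[2], b2[0] + b2[2])
    y2 = min(b1[1] + b1[3], b2[1] + b2[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = b1[2] * b1[3] + b2[2] * b2[3] - inter
    return inter / union

print(iou((0, 0, 10, 10), (5, 0, 10, 10)))  # half-overlapping boxes: 1/3
```

With T1 = 0.5, a frame counts as a success only when the predicted box covers more than half of the union area with the ground truth.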
To evaluate the tracking performance of our method comprehensively, the challenging videos from OTB-2013 and TColor-128 are categorized with 11 attributes including background clutter (BC), deformation (DEF), fast motion (FM), in-plane rotation (IPR), illumination variation (IV), low resolution (LR), motion blur (MB), occlusion (OCC), out-of-plane rotation (OPR), out of view (OV), and scale variation (SV).
Comparison of tracking performance on OTB-2013
The OTB-2013 benchmark dataset contains 51 sequences with 11 challenging attributes. We compare our method with 9 state-of-the-art algorithms, including deep learning tracking methods (HCFT [35], HDT [47], CNN-SVM [48], DeepSRDCF [49]) and correlation filter tracking methods (MEEM [50], Staple [51], SAMF [52], DSST [28]). Figure 4 gives the precision plots and success plots of OPE of our proposed method against other state-of-the-art methods on OTB-2013. According to Fig. 4, our MKCF_CNN tracker outperforms most of the other trackers, demonstrating the effectiveness of MKCF_CNN. The proposed MKCF_CNN method achieves 2.3% performance gains in precision against HCFT, the tracking method most closely related to ours. Meanwhile, MKCF_CNN and DeepSRDCF rank first on the success score.
Precision plots and success plots of OPE of our proposed method against other state-of-the-art methods on OTB-2013
In order to comprehensively assess the tracking performance of our proposed MKCF_CNN tracker, we present tracking results under OPE regarding 11 attributes in Figs. 5 and 6. We can observe that on the 51 videos with all 11 challenging attributes, our method ranks first among the 10 evaluated trackers on precision plots. On the videos with attributes such as background clutter, deformation, in-plane rotation, illumination variation, low resolution, and out of view, MKCF_CNN ranks first among all evaluated trackers on success plots. In the HCFT method, the outputs of the conv3 − 4, conv4 − 4, and conv5 − 4 layers are used as deep features. In the HDT method, the outputs of six convolutional layers (10th–12th, 14th–16th) of VGGNet-19 are adopted as feature maps. In contrast, only two layers (conv4 − 4, conv5 − 4) of VGGNet-19 are used in our proposed method, and the two mutual kernelized correlation filters are trained to correct each other throughout tracking, without the fixed fusion weights of HCFT or the preset initial weights of HDT. From Figs. 5 and 6, it is clear that our method performs better than these most closely related methods.
Precision plots of OPE with different attributes on OTB-2013
Success plots of OPE with different attributes on OTB-2013
Tracking speed is very important for visual tracking. Correlation filter-based trackers achieve beyond-real-time speed using handcrafted features. Apart from the DFT and inverse DFT, the computational complexity of a tracker with a single correlation filter is O(n log n), where n is the dimensionality of the features. Thus, the whole computational load of correlation filter-based trackers is O(Mn log n), where M is the number of base trackers; M = 2 in our method and M = 3 in HCFT. For trackers under the correlation filter framework with deep features, the computational burden mainly comes from the feature extraction process. The tracking speed of our proposed method is 1.3 fps, slightly faster than HCFT at 1.1 fps.
Comparison of tracking performance on TColor-128
The TColor-128 dataset consists of 128 challenging color videos and is designed to assess tracking performance on color sequences. Similarly, we evaluated our proposed MKCF_CNN method against 9 state-of-the-art trackers, including HCFT [35], COCF [41], KCF_GaussianHog [27], SRDCF [29], MUSTER [53], SAMF [52], DSST [28], Struck [54], and ASLA [55]. Figure 7 shows precision plots and success plots of OPE of our proposed method against other state-of-the-art methods on TColor-128. Figures 8 and 9 present precision plots and success plots of OPE with different attributes on TColor-128, respectively. It is obvious that our method is the best among the ten trackers on TColor-128, followed by the HCFT method. Our method obtains a precision rate of 73.5% and a success rate of 63.1%. HCFT and COCF rank second and third, respectively. Although HCFT utilizes deep features from three layers, it does not perform better than our method. COCF uses the same outputs from two layers of VGGNet-19 as our method, and it performs worse than our MKCF_CNN tracker. This is because the scale estimation and re-detection schemes in our method are able to locate the target precisely. Figures 8 and 9 demonstrate the effectiveness of our method on TColor-128 with 11 challenging attributes. It can be seen that our method performs best against the 9 other methods. Table 3 gives the comparison of success rates of 8 trackers. The experimental results show that our method achieves the best performance under all challenging attributes except for scale variation.
Precision plots and success plots of OPE of our proposed method against other state-of-the-art methods on TColor-128
Precision plots of OPE with different attributes on TColor-128
Success plots of OPE with different attributes on TColor-128
Table 3 The success rates of 8 trackers with 11 challenging attributes on TColor-128 dataset. The best, second best, and third best tracking results are represented in red, blue, and green, respectively
Figure 10 shows some tracking results of two sequences with severe occlusion. In the Lemming video, the toy Lemming is severely occluded by a triangular rule when it is moving (e.g., #320, #340). It is obvious that the proposed method, SAMF, Struck, and OAB are robust to severe occlusion and can track the Lemming target steadily. In the skating2 sequence, the target woman dancer has obvious appearance variation and is totally occluded by the man dancer occasionally when they are skating (e.g., #150, #250). We can observe that the proposed method, HCFT and COCF with deep features, are able to deal with the severe occlusion and appearance variation effectively.
Tracking results of ten trackers on sequences Lemming and skating2, in which the targets undergo occlusion. The tracking results of ASLA, IVT, CSK, SAMF, OAB, Struck, HCFT, COCF, and our tracker are shown by red, green, blue, black, magenta, cyan, gray, dark red, and orange rectangles, respectively
Figure 11 demonstrates some screenshots of two videos with fast motion. In the Soccer sequence, the player target keeps jumping and undergoes fast motion, background clutter, and occlusion when celebrating the victory (e.g., #36, #76, #170). IVT, Struck, CSK, ASLA, and OAB lose the target completely because of the challenging interference factors. The target in the Biker sequence undergoes fast motion and scale variation because of fast riding (e.g., #10, #100, #200). It can be easily seen that our method performs well in the entire sequence and is able to deal with motion blur and scale variation effectively.
Tracking results of nine trackers on sequences Soccer and Biker, in which the targets undergo fast motion. The tracking results of ASLA, IVT, CSK, SAMF, OAB, Struck, HCFT, COCF, and our tracker are shown by red, green, blue, black, magenta, cyan, gray, dark red, and orange rectangles, respectively
Figure 12 illustrates some sampled tracking results of two sequences with appearance variation. The appearance of the target in the Surfing sequence changes severely when the player is going surfing (e.g., #100, #125). From the tracking results, we can see that most of the trackers are able to locate the target coarsely. However, only our method has the ability to track the target more precisely. In the Bikeshow sequence, the biker cycles in the square with severe appearance variation and scale change (e.g., #20, #120, #361). The proposed method, HCFT and COCF utilizing deep features, handle appearance change better than the other methods with handcrafted features.
Tracking results of nine trackers on sequences Surfing and Bikeshow, in which the targets undergo appearance variation. The tracking results of ASLA, IVT, CSK, SAMF, OAB, Struck, HCFT, COCF, and our tracker are shown by red, green, blue, black, magenta, cyan, gray, dark red, and orange rectangles, respectively
Figure 13 demonstrates some tracking results of two sequences with background clutter. The target in the Board sequence moves in complex scenes with severe background clutter (e.g., #160, #300, #400). It can be seen that our method tracks the target successfully through the sequence. In the Torus sequence, the target moves in a cluttered room with slight appearance variation (e.g., #100, #200, #220). We can observe that trackers with handcrafted features cannot deal with this situation and drift away to other objects.
Tracking results of nine trackers on sequences Board and Torus, in which the targets undergo background clutter. The tracking results of ASLA, IVT, CSK, SAMF, OAB, Struck, HCFT, COCF and our tracker are shown by red, green, blue, black, magenta, cyan, gray, dark red and orange rectangles, respectively
Figure 14 shows some screenshots of tracking results in two sequences with illumination variation. In the Shaking video, a guitarist is playing on the stage with dim lights (e.g., #100, #200, #300). Although the target undergoes severe illumination variation, our method locates the target more precisely than the other trackers. In the Singer2 sequence, the singer in dark clothes performing on the stage undergoes drastic illumination variation (e.g., #110, #210, #320). We can observe that HCFT and COCF with deep features drift away from the target as a result of the drastic illumination variation. Only our method is able to track the target persistently throughout the whole sequence.
Tracking results of nine trackers on sequences Shaking and Singer2, in which the targets undergo illumination change. The tracking results of ASLA, IVT, CSK, SAMF, OAB, Struck, HCFT, COCF, and our tracker are shown by red, green, blue, black, magenta, cyan, gray, dark red, and orange rectangles, respectively
Comparison of tracking performance on DTB
The DTB dataset consists of 70 challenging videos captured by a camera mounted on an unmanned aerial vehicle (UAV). All 70 sequences were manually annotated with 11 challenging attributes, including motion blur (MB), scale variation (SV), similar objects around (SOA), aspect ratio variation (ARV), background clutter (BC), occlusion (OCC), out-of-view (OV), deformation (DEF), out-of-plane rotation (OPR), fast camera motion (FCM), and in-plane rotation (IPR). We compare our method with 9 representative trackers: HCFT [35], HDT [47], COCF [41], MEEM [50], SO-DLT [56], SRDCF [29], KCF [27], DAT [57], and DSST [28]. Figure 15 shows the overall OPE tracking performance in terms of precision score and success score on the DTB dataset. The proposed tracker achieves the best tracking performance against the 9 other trackers.
Precision plots and success plots of OPE of our proposed method against other state-of-the-art methods on DTB
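The precision and success scores reported throughout these comparisons are the standard OTB/DTB benchmark metrics. A minimal sketch is given below; the 20-pixel precision threshold and the 21-point overlap-threshold grid are the benchmark's usual conventions, assumed here rather than restated from this paper:

```python
import numpy as np

def precision_score(pred_centers, gt_centers, threshold=20.0):
    """Fraction of frames whose center location error is within `threshold`
    pixels (20 px is the benchmark's usual reporting point)."""
    errors = np.linalg.norm(np.asarray(pred_centers, dtype=float)
                            - np.asarray(gt_centers, dtype=float), axis=1)
    return float(np.mean(errors <= threshold))

def success_auc(pred_boxes, gt_boxes, thresholds=np.linspace(0.0, 1.0, 21)):
    """Area under the success curve: for each overlap threshold, the fraction
    of frames whose IoU exceeds it, averaged over the threshold grid.
    Boxes are (x, y, w, h)."""
    pred = np.asarray(pred_boxes, dtype=float)
    gt = np.asarray(gt_boxes, dtype=float)
    # intersection rectangle of each predicted box with its ground truth
    x1 = np.maximum(pred[:, 0], gt[:, 0])
    y1 = np.maximum(pred[:, 1], gt[:, 1])
    x2 = np.minimum(pred[:, 0] + pred[:, 2], gt[:, 0] + gt[:, 2])
    y2 = np.minimum(pred[:, 1] + pred[:, 3], gt[:, 1] + gt[:, 3])
    inter = np.clip(x2 - x1, 0.0, None) * np.clip(y2 - y1, 0.0, None)
    union = pred[:, 2] * pred[:, 3] + gt[:, 2] * gt[:, 3] - inter
    iou = inter / union
    return float(np.mean([np.mean(iou > t) for t in thresholds]))
```

Given per-frame predicted and ground-truth boxes, these two numbers correspond to the values quoted from the precision and success plots.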
Ablation study
Effect of mutual kernelized correlation filters
In order to demonstrate the effectiveness of the mutual correlation filters, we investigate the tracking performance of our method with and without mutual correlation filters on OTB-2013. Figure 16 gives the precision plots and success plots of OPE under the two settings. Our method with mutual correlation filters achieves a precision score of 0.914, an improvement of 0.9% over the method without mutual correlation filters. In the success plots, owing to the interaction of the mutual correlation filters, the tracking performance is improved by 2.0%. Figures 17 and 18 show the tracking results on OTB-2013 for the 11 challenging attributes. Our method with mutual correlation filters achieves better performance on all 11 attributes in both average precision score and average success rate.
Precision plots and success plots of OPE by our proposed method with mutual kernelized correlation filters and our method without mutual kernelized correlation filters on OTB-2013
Average precision score of our proposed method with mutual kernelized correlation filters and our method without mutual kernelized correlation filters in terms of 11 challenging attributes on OTB-2013
Average success rate of our proposed method with mutual kernelized correlation filters and our method without mutual kernelized correlation filters in terms of 11 challenging attributes on OTB-2013
Effect of elastic net constraint
Figure 19 gives the tracking results on OTB-2013 of our method with and without the elastic net constraint in terms of precision and success rate. We can observe that the proposed method with the elastic net constraint performs slightly better than the method without it. Table 4 shows the tracking results on OTB-2013 for the 11 challenging attributes; the method with the elastic net constraint obtains better performance in terms of IPR, OC, SV, OPR, and IV.
Precision plots and success plots of OPE by our proposed method with elastic net constraint and our method without elastic net constraint on OTB-2013
Table 4 The success rates of our method with elastic net constraint and our method without elastic net constraint on OTB-2013. The best tracking results are represented in red. ENC denotes elastic net constraint
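The elastic net constraint combines an ℓ1 and an ℓ2 penalty on the filter coefficients, which groups correlated features while suppressing outliers. As an illustration of the mechanism only (not the paper's exact solver, whose update equations are not reproduced in this section), the proximal operator of the elastic net penalty amounts to soft-thresholding followed by shrinkage:

```python
import numpy as np

def prox_elastic_net(w, lam1, lam2, step=1.0):
    """Proximal operator of step * (lam1 * ||w||_1 + 0.5 * lam2 * ||w||_2^2):
    soft-threshold each coefficient (from the L1 term), then shrink it
    (from the L2 term). lam1 and lam2 are illustrative hyperparameters."""
    soft = np.sign(w) * np.maximum(np.abs(w) - step * lam1, 0.0)
    return soft / (1.0 + step * lam2)
```

The L1 part zeroes out small, unreliable coefficients, while the L2 part keeps groups of correlated features active together, which is the grouping effect the constraint is used for here.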
Effect of scale estimation
In this section, we investigate the tracking performance with and without the scale estimation scheme. Experimental results on OTB-2013 are shown in Figs. 20 and 21. The first plot in Fig. 20 compares the success plots of OPE on OTB-2013 and the second gives the success plots of OPE in terms of scale variation. Figure 21 shows the average success rate of our method with and without the scale estimation scheme for the 11 challenging attributes on OTB-2013. It can be seen that the scale estimation mechanism improves the tracking performance greatly.
The comparison of our method with scale estimation scheme and our method without scale estimation scheme on OTB-2013. The first figure demonstrates the success plots of OPE and the second one gives the success plots of OPE in terms of scale variation
Average success rate of our proposed method with scale estimation scheme and our method without scale estimation scheme in terms of 11 challenging attributes on OTB-2013
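A generic multi-scale search of the kind used by scale-adaptive correlation trackers can be sketched as follows; the scale pool and the `response_fn` interface are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def estimate_scale(response_fn, current_size, scale_factors=(0.95, 1.0, 1.05)):
    """Evaluate the tracker's correlation response on crops at several
    candidate scales and keep the scale with the highest response peak.
    `response_fn(size)` is assumed to return the response map for a crop
    of the given (width, height)."""
    best_scale, best_peak = 1.0, -np.inf
    for s in scale_factors:
        size = (current_size[0] * s, current_size[1] * s)
        peak = np.max(response_fn(size))
        if peak > best_peak:
            best_scale, best_peak = s, peak
    new_size = (current_size[0] * best_scale, current_size[1] * best_scale)
    return best_scale, new_size
```

Running this once per frame lets the target window grow or shrink with the object, which is what the scale variation results above measure.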
Effect of re-detection module
In this section, we compare the tracking performance with and without the re-detection module on OTB-2013. The first plot in Fig. 22 compares the success plots of OPE on OTB-2013 and the second gives the success plots of OPE in terms of occlusion. It is obvious that the re-detection module is able to recover the target in case of tracking failures. Table 5 gives the tracking results on OTB-2013 for the 11 challenging attributes, with the best results shown in red. Our method with the re-detection module achieves better results on almost all 11 attributes except for LR and DE.
Comparison of our method with re-detection module and our method without re-detection module on OTB-2013. The first picture demonstrates the success plots of OPE and the second one gives the success plots of OPE in terms of occlusion
Table 5 The success rates of our method with re-detection module and our method without re-detection module on OTB-2013. The best tracking results are represented in red. RD denotes re-detection
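The control flow of a typical re-detection scheme can be sketched as follows; the confidence threshold and the candidate-sampling interface are illustrative assumptions rather than the paper's exact failure criterion:

```python
import numpy as np

def track_with_redetection(response_map, candidates_fn, confidence_threshold=0.25):
    """If the peak of the correlation response is confident, take its location;
    otherwise treat the frame as a tracking failure and re-scan candidate
    regions over a wider area. `candidates_fn()` is assumed to return a list
    of (position, score) pairs from a wider search. Returns (position, bool)
    where the bool flags whether re-detection was triggered."""
    peak = float(np.max(response_map))
    if peak >= confidence_threshold:
        # confident frame: take the location of the response peak
        return np.unravel_index(np.argmax(response_map), response_map.shape), False
    # low confidence: evaluate wider candidate regions and pick the best one
    candidates = candidates_fn()
    best_pos, _ = max(candidates, key=lambda c: c[1])
    return best_pos, True
```

This is why the gains in Table 5 concentrate on occlusion-related attributes: the wider re-scan only fires when the normal response collapses.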
In this paper, we propose a novel visual tracking method based on mutual kernelized correlation filters with elastic net constraint. The proposed algorithm trains two interactive discriminative classifiers to cope with challenging environments and severe appearance variation. The elastic net constraint is imposed on the mutual kernelized correlation filters to group similar features and to alleviate the impact of outliers. Scale adaptation and a re-detection scheme are applied in our method to promote tracking performance. Extensive experimental results demonstrate that our proposed method obtains appealing tracking performance by using the interacting kernelized correlation filters with elastic net constraint. Quantitative and qualitative results show the superiority of our method in terms of effectiveness and robustness, compared with other tracking algorithms.
Data sharing not applicable to this article as no datasets were generated or analyzed during the current study.
BC: Background clutter
DEF: Deformation
DFT: Discrete Fourier transform
ENC: Elastic net constraint
FM: Fast motion
HOG: Histogram of oriented gradient
IPR: In-plane rotation
IV: Illumination variation
KCF: Kernelized correlation filters
LASSO: Least absolute shrinkage and selection operator
OCC: Occlusion
OPR: Out-of-plane rotation
OV: Out of view
RD: Re-detection
RNN: Recurrent neural network
SIFT: Scale-invariant feature transform
SRDCF: Spatially regularized discriminative correlation filters
SV: Scale variation
A. Li, M. Lin, Y. Wu, M. Yang, S. Yan, NUS-PRO: a new visual tracking challenge. IEEE Trans. Pattern Anal. Mach. Intell. 38(2), 335–349 (2016)
P. Li, D. Wang, L. Wang, H. Lu, Deep visual tracking: review and experimental comparison. Pattern Recogn. 76, 323–338 (2018)
S. Zhang, X. Lan, Y. Qi, C. Yuen, Robust visual tracking via basis matching. IEEE Trans. Circuits Syst. Video Technol. 27(3), 421–430 (2017)
S. Zhang, H. Zhou, F. Jiang, X. Li, Robust visual tracking using structurally random projection and weighted least squares. IEEE Trans. Circuits Syst. Video Technol. 25(11), 1749–1760 (2015)
D. Wang, H. Lu, M. Yang, Robust visual tracking via least soft-threshold square. IEEE Trans. Circuits Syst. Video Technol. 26(9), 1709–1721 (2016)
L. Zhang, W. Wu, T. Chen, N. Strobel, D. Comaniciu, Robust object tracking using semi-supervised appearance dictionary learning. Pattern Recogn. Lett. 62, 17–23 (2015)
W. Zhong, H. Lu, M. Yang, Robust object tracking via sparse collaborative appearance model. IEEE Trans. Image Process. 23(5), 2356–2368 (2014)
Y. Song, C. Ma, L. Gong, J. Zhang, R. Lau, M. Yang, in Proceedings of the IEEE International Conference on Computer Vision. CREST: convolutional residual learning for visual tracking (2017), pp. 2555–2564
T. Zhang, C. Xu, M. Yang, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Multi-task correlation particle filter for robust object tracking (2017), pp. 4819–4827
W. Chen, K. Zhang, Q. Liu, Robust visual tracking via patch based kernel correlation filters with adaptive multiple feature ensemble. Neurocomput. 214, 607–617 (2016)
K. Zhang, X. Li, H. Song, Q. Liu, Visual tracking using spatio-temporally nonlocally regularized correlation filter. Pattern Recogn. 83, 185–195 (2018)
K. Zhang, Q. Liu, J. Yang, M.-H. Yang, Visual tracking via boolean map representations. Pattern Recogn. 81, 47–160 (2018)
S. Yao, Z. Zhang, G. Wang, Y. Tang, L. Zhang, in Proceedings of the European Conference on Computer Vision. Real-time visual tracking: promoting the robustness of correlation filter learning (2016), pp. 662–678
M. Xue, H. Ling, in Proceedings of the IEEE International Conference on Computer Vision. Robust visual tracking using ℓ1 minimization (2009), pp. 1436–1443
C. Bao, Y. Wu, H. Ling, H. Ji, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Real time robust ℓ1 tracker using accelerated proximal gradient approach (2012), pp. 1830–1837
Z. Xiao, H. Lu, D. Wang, L2-RLS based object tracking. IEEE Trans. Circuits Syst. Video Technol. 24(8), 1301–1308 (2014)
D. Wang, H. Lu, Fast and robust object tracking via probability continuous outlier model. IEEE Trans. Image Process. 24(12), 5166–5176 (2015)
B. Babenko, M. Yang, S. Belongie, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Visual tracking with online multiple instance learning (2009), pp. 983–990
K. Zhang, L. Zhang, M. Yang, Fast compressive tracking. IEEE Trans. on Pattern Anal. Mach. Intell. 36(10), 2002–2015 (2014)
K. Zhang, L. Zhang, Q. Liu, D. Zhang, M. Yang, in Proceedings of the European Conference on Computer Vision. Fast visual tracking via dense spatio-temporal context learning (2014), pp. 127–141
M. Wang, Y. Liu, Z. Huang, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Large margin object tracking with circulant feature maps (2017), pp. 4021–4029
H. Fan, H. Ling, in Proceedings of the IEEE International Conference on Computer Vision. Parallel tracking and verifying: a framework for real-time and high accuracy visual tracking (2017), pp. 5486–5494
F. Li, C. Tian, W. Zuo, L. Zhang, M. Yang, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Learning spatial-temporal regularized correlation filters for visual tracking (2018), pp. 4904–4913
W. Zuo, X. Wu, L. Lin, L. Zhang, M. Yang, Learning support correlation filters for visual tracking. IEEE Trans. on Pattern Anal. Mach. Intell. DOI: https://doi.org/10.1109/TPAMI.2018.2829180
M. Danelljan, G. Hager, F. Khan, M. Felsberg, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Adaptive decontamination of the training set: a unified formulation for discriminative visual tracking (2016), pp. 1430–1438
D. Bolme, J. Beveridge, B. Draper, Y. Lui, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Visual object tracking using adaptive correlation filters (2010), pp. 2544–2550
J. Henriques, R. Caseiro, P. Martins, J. Batista, High-speed tracking with kernelized correlation filters. IEEE Trans. on Pattern Anal. Mach. Intell. 37(3), 583–596 (2015)
M. Danelljan, G. Hager, F. Khan, M. Felsberg, Discriminative scale space tracking. IEEE Trans. on Pattern Anal. Mach. Intell. 39(8), 1561–1575 (2017)
M. Danelljan, G. Hager, F. Khan, M. Felsberg, in Proceedings of the IEEE International Conference on Computer Vision. Learning spatially regularized correlation filters for visual tracking (2015), pp. 4310–4318
L. Bertinetto, J. Valmadre, F. Henriques, A. Vedaldi, H. Philip, in Proceedings of the European Conference on Computer Vision Workshops. Fully-convolutional siamese networks for object tracking (2016), pp. 850–865
N. Hyeonseob, B. Han, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Learning multi-domain convolutional neural networks for visual tracking (2016), pp. 4293–4302
Z. Chi, H. Li, H. Lu, M. Yang, Dual deep network for visual tracking. IEEE Trans. Image Process. 26(4), 2005–2015 (2017)
S. Zhang, Y. Qi, F. Jiang, X. Lan, P. Yuen, H. Zhou, Point-to-set distance metric learning on deep representations for visual tracking. IEEE Trans. Intell. Transp. Sys. 19(1), 187–198 (2018)
K. Zhang, Q. Liu, Y. Wu, M. Yang, Robust visual tracking via convolutional networks without training. IEEE Trans. Image Process. 25(4), 1779–1792 (2016)
C. Ma, J. Huang, X. Yang, M. Yang, in Proceedings of the IEEE International Conference on Computer Vision. Hierarchical convolutional features for visual tracking (2015), pp. 3074–3082
L. Wang, W. Ouyang, X. Wang, H. Lu, in Proceedings of the IEEE International Conference on Computer Vision. Visual tracking with fully convolutional networks (2015), pp. 3119–3127
F. Heng, H. Ling, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. SANet: structure-aware network for visual tracking (2017), pp. 42–49
Z. He, Y. Fan, J. Zhuang, Y. Dong, H. Bai, in Proceedings of the IEEE International Conference on Computer Vision. Correlation filters with weighted convolution responses (2017), pp. 1992–2000
S. Yao, G. Wang, L. Zhang, Correlation filter learning toward peak strength for visual tracking. IEEE Trans. Cybern. 48(4), 1290–1303 (2018)
K. Simonyan, A. Zisserman, Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556 (2015)
L. Zhang, P. Suganthan, Robust visual tracking via co-trained Kernelized correlation filters. Pattern Recogn. 69, 82–93 (2017)
D. Huang, L. Luo, M. Wen, Z. Chen, C. Zhang, in Proceedings of British Machine Vision Conference. Enable scale and aspect ratio adaptability in visual tracking with detection proposals (2015), pp. 185.1–185.12
Y. Wu, J. Lim, M. Yang, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Online object tracking: a benchmark (2013), pp. 2411–2418
P. Liang, E. Blasch, H. Ling, Encoding color information for visual tracking: algorithms and benchmark. IEEE Trans. Image Process. 24(12), 5630–5644 (2015)
S. Li, D. Yeung, in AAAI Conference on Artificial Intelligence. Visual object tracking for unmanned aerial vehicles: a benchmark and new motion models (2017), pp. 4140–4146
Y. Wu, J. Lim, M. Yang, Object tracking benchmark. IEEE Trans. on Pattern Anal. Mach. Intell. 37(9), 1834–1848 (2015)
Y. Qi, S. Zhang, L. Qin, H. Yao, Q. Huang, J. Lim, M. Yang, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Hedged deep tracking (2016), pp. 4303–4311
S. Hong, T. You, S. Kwak, B. Han, in Proceedings of the 32nd International Conference on International Conference on Machine Learning. Online tracking by learning discriminative saliency map with convolutional neural network (2015), pp. 597–606
M. Danelljan, G. Hager, F. Khan, M. Felsberg, in Proceedings of the IEEE International Conference on Computer Vision Workshop. Convolutional features for correlation filter based visual tracking (2015), pp. 621–629
J. Zhang, S. Ma, S. Sclaroff, in Proceedings of the European Conference on Computer Vision. MEEM: robust tracking via multiple experts using entropy minimization (2014), pp. 188–203
B. Luca, V. Jack, G. Stuart, M. Ondrej, P. Torr, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016. Staple: complementary learners for real-time tracking (2016), pp. 1401–1409
Y. Li, J. Zhu, in Proceedings of the European Conference on Computer Vision. A scale adaptive kernel correlation filter tracker with feature integration (2014), pp. 254–265
Z. Hong, Z. Chen, C. Wang, M. Xue, D. Prokhorov, D. Tao, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Multi-store tracker (MUSTer): a cognitive psychology inspired approach to object tracking (2015), pp. 749–758
S. Hare, A. Saffari, H.S. Philip, in Proceedings of the IEEE International Conference on Computer Vision. Struck: structured output tracking with kernels (2011), pp. 263–270
X. Jia, H. Lu, M. Yang, Visual tracking via coarse and fine structural local sparse appearance models. IEEE Trans. Image Process. 25(10), 4555–4564 (2016)
N. Wang, S. Li, A. Gupta, D. Y. Yeung, Transferring rich feature hierarchies for robust visual tracking. arXiv:1501.04587 (2015)
H. Possegger, T. Mauthner, H. Bischof, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. In defense of color-based model-free tracking (2015), pp. 2113–2120
The authors thank the editor and anonymous reviewers for their helpful comments and valuable suggestions.
This work was supported by A Project of Shandong Province Higher Educational Science and Technology Program under Grant No. J17KA088 and No. J16LN02, the Natural Science Foundation of Shandong Province under Grant No. ZR2015FL009 and No. ZR2019PF021, the Key Research and Development Program of Shandong Province under Grant No. 2016GGX101023, Scientific Research Fund of Binzhou University under Grant No. 2019ZD03 and Dual Service Projects of Binzhou University under Grant No. BZXYSFW201805.
Aviation Information Technology Research and Development, Binzhou University, Binzhou, 256603, China
Haijun Wang & Shengyan Zhang
HW proposed the study, conducted the experiments, and wrote the manuscript. SZ analyzed the data and revised the manuscript. Both authors read and approved the final manuscript.
Correspondence to Haijun Wang.
Wang, H., Zhang, S. Mutual kernelized correlation filters with elastic net constraint for visual tracking. J Image Video Proc. 2019, 73 (2019) doi:10.1186/s13640-019-0474-z
The top heat mode of closed loop oscillating heat pipe with check valves at the top heat mode (THMCLOHP/CV): a thermodynamic study
Nipon Bhuwakietkumjohn & Thanya Parametthanuwat
International Journal of Mechanical and Materials Engineering volume 9, Article number: 5 (2014)
The article reports a recent study on the heat flux of the top heat mode closed-loop oscillating heat pipe with check valves (THMCLOHP/CV). An experimental system was evaluated under normal operating conditions. The THMCLOHP/CV was made of a copper tube with an inside diameter of 2.03 mm. The working fluids were water, ethanol and R123, at filling ratios of 30%, 50% and 80% of the total volume of the tube. The angles of inclination were 20°, 40°, 60°, 80° and 90° from the horizontal axis. The pipe had 40 turns and 2 check valves. Three evaporator lengths were investigated: 50, 100 and 150 mm. The operating temperatures were 45°C, 55°C and 65°C. The experimental data showed that the THMCLOHP/CV with an evaporator length of 50 mm gave the best heat flux at a filling ratio of 50%, using R123 as the working fluid, an operating temperature of 65°C and an inclination angle of 90°. It was further found that the 50-mm evaporator length was superior in heat flux to the other lengths under all experimental conditions in this study. Moreover, raising the operating temperature clearly increased the heat flux of the THMCLOHP/CV, whereas the heat flux decreased as the evaporator length increased.
A closed-loop oscillating heat pipe with a check valve (CLOHP/CV) transfers heat by means of a self-sustaining oscillatory flow, in which a vapour or liquid circulation cycle between the heating and cooling sections transfers latent heat. Under normal operating conditions, the liquid and vapour are effectively separated into two parts, with the liquid in the cooling regions and the vapour in the heating regions. The liquid forms U-shaped columns in individual turns, and these oscillations form waves. Under such flow conditions, the effective heat transfer area is limited by the amplitude of the waves. When the amplitude of the oscillatory flow is insufficient and the heat transfer area is not reached by the waves, an effective working fluid supply to the heat transfer area cannot be obtained and heat transfer cannot be maintained. However, installing check valves in the closed loop eliminates this operating limit: a single-direction flow is imposed and the heat transfer area is no longer restricted by the amplitude of the oscillatory flow. The advantages of a CLOHP/CV are its ability to transfer heat in any orientation, its faster response and its internal structure. Miyazaki et al. (2000) studied an oscillating heat pipe including a check valve under normal operating conditions and observed this same behaviour, noting that this operating limit is peculiar to oscillating heat pipes.
Charoensawan and Terdtoon (2008) investigated the thermal performance of a horizontal closed loop oscillating heat pipe (HCLOHP) under normal operating conditions. The HCLOHPs tested were made of copper capillary tubes with various inner diameters, evaporator lengths and numbers of turns. The working fluids were distilled water and absolute ethanol, added into the tubes at various filling ratios. The thermal performance of a HCLOHP improves with increasing evaporator temperature and decreasing evaporator/effective length. The best performance of all the HCLOHPs occurred at the maximum number of 26 turns. Rittidech et al. (2010) investigated the thermal performance of various horizontal closed loop oscillating heat pipe systems with check valves (HCLOHPs/CVs). The results showed that the heat transfer performance of an HCLOHP/CV system could be improved by decreasing the evaporator length. The highest performance of all tested systems was obtained with 2 check valves, the maximum heat flux occurred with a 2-mm inner diameter tube, and R123 was determined to be the most suitable working fluid. Rittidech et al. (2007) studied the heat transfer characteristics of a CLOHP/CV inclined at 90° to the horizontal. The experimental results showed that the heat flux increases as the check valve ratio (R_cv) increases and decreases as the aspect ratio increases. Such a device normally performs better when oriented vertically; unfortunately, vertical orientation is not always practical. For example, top heat mode orientation is commonly favoured in cooling electronic devices, humidity control in air conditioning systems, etc.
Despite these common applications, limited reliable experimental research findings are available on the operation of a top heat mode closed loop oscillating heat pipe with check valves (THMCLOHP/CV) (see Figure 1). In response to this lack of detailed data, this study focuses on determining the actual thermal performance of such a system.
Top heat mode closed-loop oscillating heat-pipe with check valves.
There are indications that exploratory research is indeed required to study the effects of filling ratio, working fluid, evaporator section length, operating temperature and angle of inclination on top heat mode operation in engineering systems. Accordingly, this article aims to study the heat flux behaviour of the THMCLOHP/CV.
The check valve (Figure 2) is a floating-type valve consisting of a stainless steel ball inside a copper case, with a conical valve seat at the bottom of the case and a ball stopper at the top. The ball can move freely between the ball stopper and the conical valve seat. The conical valve seat contacts the stainless steel ball in order to prevent reversal of the flow of the working fluid, while the ball stopper allows the working fluid to travel to the condenser section to transfer heat.
Check valve.
Experimental setup
An important factor that has to be considered in building a THMCLOHP/CV is the design of the tube diameter. For this research, the maximum inner diameter of CLOHP/CV can be defined in Equation 1 (Zorigtkhuu et al. 2006).
$$ {d}_{i,\max }<2\sqrt{\frac{\sigma}{\rho_l g}} $$
where d i,max is the maximum inner diameter of the capillary tube (m), σ is the surface tension of the fluid (N/m), ρ l is the liquid density (kg/m3) and g is the gravitational acceleration (m/s2). The working principle of the THMCLOHP/CV relies on three driving forces: surface tension, gravity force and oscillating force. These forces are influenced by many parameters: the gravity force is influenced by the inclination angle of the device; physical features such as diameter size and evaporator section length have a significant effect on the surface tension; and the heat flux has a great effect on the oscillating force.
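Equation 1 can be checked numerically. The fluid properties below are nominal room-temperature values for water, assumed for illustration rather than taken from the paper:

```python
import math

def max_inner_diameter(sigma, rho_l, g=9.81):
    """Equation 1: capillary limit d_i,max = 2 * sqrt(sigma / (rho_l * g)),
    with sigma in N/m, rho_l in kg/m^3 and the result in metres."""
    return 2.0 * math.sqrt(sigma / (rho_l * g))

# Nominal room-temperature water properties (assumed for illustration):
d_max = max_inner_diameter(sigma=0.072, rho_l=997.0)
print(f"{d_max * 1000:.2f} mm")  # prints "5.43 mm"
```

The 2.03-mm tube used in this study therefore satisfies the criterion with a wide margin for water.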
Figure 3 shows the experimental setup, which consists of a THMCLOHP/CV with evaporator and adiabatic section lengths (equal to the condenser section length) of 50, 100 and 150 mm. The selected THMCLOHP/CV was made of copper tubes with an internal diameter of 2.03 mm. The evaporator section was heated by a heater and the condenser section was cooled by inlet air flow. Four thermocouples (OMEGA type K) were installed at the inlet and outlet of the condenser section to determine the heat transfer rate. Temperature probes were installed at four points on the high-temperature copper tube of the evaporator and at one ambient point to determine the heat loss. A temperature recorder (Yokogawa DX 200 with ±0.1°C accuracy, 20 channel input and −200°C to 1,100°C measurement temperature range) was used with type K thermocouples (Omega with ±1°C accuracy) to monitor all temperatures at specified times.
Experimental setup which consists of a THMCLOHP/CV with the lengths of evaporator and adiabatic. (a) Diagram of the experimental procedure. (b) Experimental setup.
During the experiment, the inclination angles were set at 20°, 40°, 60°, 80° and 90° from the vertical. The controlled parameters included a tube internal diameter of 2.03 mm, with ethanol and pure water as the working fluids. The variable parameters were the evaporator lengths of 50, 100 and 150 mm and the working temperatures of 45°C, 55°C and 65°C. The experiment was conducted as follows: the THMCLOHP/CV was set into the test rig, the temperature of the heater and air inlet was set to the required value and inlet air was supplied to the jackets of the condenser section. After reaching steady state, temperatures were recorded continuously by the data logger. In order to cover a wide range of aspect ratios, the parameters were set as shown in Table 1 to formulate the heat transport characteristics of the THMCLOHP/CV.
Table 1 Controlled and variable parameters
The temperature change of the condenser section's air inlet and air outlet was measured to calculate the heat flux of the THMCLOHP/CV. The heat transfer rate of the THMCLOHP/CV at the condenser section can then be calculated by Equation 2 as follows (Incropera and Dewitt 1996):
$$ Q=\dot{m}{C}_p\left({T}_{\mathrm{out}}-{T}_{\mathrm{in}}\right) $$
where Q is the heat transfer rate (W), \( \dot{m} \) is the mass flow rate (kg/s), C p is the specific heat capacity at constant pressure (J/kg·°C), T out is the outlet temperature of the condenser section and T in is the inlet temperature of the condenser section. In this experiment, the heat flux was calculated using Equation 3 as follows (Rittidech et al. 2010):
$$ q=\frac{Q}{A_C}=\frac{Q}{\pi {D}_o{L}_C N} $$
where A c is the total outer surface area of the tube in the condenser section (m2), Q is the heat transfer rate (W), D o is the outside diameter of the capillary tube (m), L c is the condenser length (m) and N is the number of meandering turns. The standard uncertainty for a type A evaluation is calculated from a set of several repeated readings: the arithmetic mean \( \overline{x} \) and the standard deviation SD can be calculated using Equations 4, 5 and 6 as follows (Beirlant et al. 2004; Hibbeler 2004):
$$ \overline{X} = \frac{X_1+{X}_2+\cdots +{X}_{n_s}}{n_s} $$
$$ \mathrm{SD}=\sqrt{\frac{{\left({X}_1-\overline{X}\right)}^2+{\left({X}_2-\overline{X}\right)}^2+\cdots +{\left({X}_{n_s}-\overline{X}\right)}^2}{n_s-1}} $$
$$ {u}_{i,\mathrm{type}\;\mathrm{A}}=\frac{\mathrm{SD}}{\sqrt{n_s}} $$
where n s is the number of measurements in the set. The standard uncertainty for a type B evaluation is calculated from Equation 7 (Hibbeler 2004; Beirlant et al. 2004):
$$ {u}_{i,\mathrm{type}\;\mathrm{B}}=\frac{a}{\sqrt{n_s}} $$
where 'a' is the semi-range (or half-width) between the upper and lower limits. The type A and type B evaluations can then be combined into the combined standard uncertainty \( {u}_c \), given by Equation 8 (Hibbeler 2004; Beirlant et al. 2004):
$$ {u}_c=\sqrt{{\left({u}_{i,\mathrm{type}\;\mathrm{A}}\right)}^2+{\left({u}_{i,\mathrm{type}\;\mathrm{B}}\right)}^2+\dots } $$
The expanded uncertainty, denoted by the symbol U, is obtained by multiplying \( {u}_c \) by a coverage factor k (Equation 9):
$$ U= k{u}_c $$
A particular value of the coverage factor gives a particular confidence level for the expanded uncertainty. Most commonly, the overall uncertainty is obtained using the coverage factor k = 2, which gives a confidence level of approximately 95%. Some other coverage factors (for a normal distribution) are as follows (Beirlant et al. 2004; Hibbeler 2004):
k = 1 for a confidence level of approximately 68%
k = 2.5 for a confidence level of approximately 99%
k = 3 for a confidence level of approximately 99.7%
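To make the chain of Equations 2 to 9 concrete, the calculation can be collected into a short script. This is an illustrative sketch only: the function names and all sample numbers are invented for demonstration and are not the paper's data, and Equation 7 is implemented exactly as written in the text (semi-range divided by the square root of n s).

```python
import math

def heat_rate(m_dot, cp, t_out, t_in):
    """Equation 2: Q = m_dot * Cp * (T_out - T_in), in W."""
    return m_dot * cp * (t_out - t_in)

def heat_flux(Q, d_o, l_c, n_turns):
    """Equation 3: q = Q / (pi * D_o * L_c * N), in W/m^2."""
    return Q / (math.pi * d_o * l_c * n_turns)

def type_a_uncertainty(readings):
    """Equations 4-6: mean, sample SD and standard uncertainty of the mean."""
    n = len(readings)
    mean = sum(readings) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in readings) / (n - 1))
    return mean, sd, sd / math.sqrt(n)

def type_b_uncertainty(a, n):
    """Equation 7 as given in the text: u = a / sqrt(n_s), a = semi-range."""
    return a / math.sqrt(n)

def expanded_uncertainty(u_a, u_b, k=2.0):
    """Equations 8-9: combine in quadrature, then apply the coverage factor k."""
    u_c = math.sqrt(u_a ** 2 + u_b ** 2)
    return k * u_c

# Illustrative numbers only (not the paper's measured data):
Q = heat_rate(m_dot=0.01, cp=1007.0, t_out=32.0, t_in=25.0)  # air, 7 K rise
q = heat_flux(Q, d_o=0.003, l_c=0.05, n_turns=40)
mean, sd, u_a = type_a_uncertainty([31.8, 32.1, 32.0, 31.9, 32.2])
u_b = type_b_uncertainty(a=0.05, n=5)
U = expanded_uncertainty(u_a, u_b, k=2.0)  # ~95% confidence level
```

With k = 2 the reported value would then be quoted as mean ± U at approximately 95% confidence.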
The uncertainty analysis for this study is shown in Table 2.
Table 2 Uncertainty analysis result
Effect of inclination angles on heat flux
Figures 4, 5 and 6 show the effect of the inclination angle of the THMCLOHP/CV on the heat flux at evaporator lengths of 50, 100 and 150 mm, for the THMCLOHP/CV with 40 turns, R123, ethanol and pure water as the working fluids and a working temperature of 55°C, at inclination angles of 20°, 40°, 60°, 80° and 90°. In Figure 4, the effect of the angle of inclination on the heat flux at an evaporator length of 50 mm was remarkable; the heat flux at 80° using R123 as the working fluid was higher than at the other angles of inclination, and the best heat flux overall was 1,150.53 W/m2. It can be concluded that the maximum heat flux is obtained when using R123 as the working fluid, and the heat flux at a filling ratio of 50% was the highest. In Figure 5, at an evaporator length of 100 mm, the results were similar: the heat flux at 80° using R123 as the working fluid was higher than at the other angles of inclination. The best heat flux with an evaporator length of 100 mm was 697.23 W/m2. Again, the maximum heat flux was obtained using R123 as the working fluid, and the heat flux at a filling ratio of 50% was the highest. In Figure 6, at an evaporator length of 150 mm, the results followed the same pattern, except that the heat flux at 90° using R123 as the working fluid was higher than at the other angles of inclination. The best heat flux at an evaporator length of 150 mm was 309.29 W/m2, again obtained using R123 as the working fluid; here, however, the heat flux at a filling ratio of 30% was the highest. As L e increased, the heat flux clearly decreased (Rittidech et al. 2010; Xu et al. 2005). Moreover, the angle of inclination of the THMCLOHP/CV affected the heat flux because of the gravitational head (Dobson 2004; Bhuwakietkumjohn et al. 2012).
This depended on the fluid density, the acceleration due to gravity, the tube length and the angle of the THMCLOHP/CV to the horizontal. In the top heating mode, it is hard for the working fluid to flow back to the evaporator section to form a steady circulation, resulting in poor heat transfer performance of the THMCLOHP/CV and increased thermal resistance (Reay and Kew 2006; Bhuwakietkumjohn and Rittidech 2010). Gravity thus has a significant influence on the heat transfer characteristics (Bhuwakietkumjohn and Rittidech 2010; Dobson 2004).
Effect of the angle of inclination of the THMCLOHP/CV on the heat flux at an evaporator length of 50 mm.
Effect of the angle of inclination of the THMCLOHP/CV on the heat flux at an evaporator length of 100 mm.
Effect of filling ratio on heat flux
The filling ratio also had a significant influence on the heat transfer characteristics. Figure 7 shows the comparative heat flux rates among the three filling ratios with L e of 50, 100 and 150 mm. The maximum heat flux occurred at a 50% filling ratio with L e of 50, 100 and 150 mm and was 1,150.53, 707.23 and 309.29 W/m2, respectively. The heat flux of the THMCLOHP/CV was compared between R123, ethanol and water, and it can be observed that the heat flux peaked as a function of the filling ratio. The optimum filling ratio of working fluid in the THMCLOHP/CV was 50%, at which slug flow patterns occurred and the highest heat flux was achieved (Bhuwakietkumjohn et al. 2012; Bhuwakietkumjohn and Rittidech 2010; Thongdaeng et al. 2012).
Effect of filling ratio on the heat flux of THMCLOHP/CV at operating temperature of 55°C.
Effect of operating temperature on heat flux
The dependence of the heat flux of the THMCLOHP/CV on the operating temperature, when filled with water, ethanol and R123, is shown in Figure 8. In all cases, R123 shows performance superior to the other working fluids. The maximum heat flux of 1,480 W/m2 occurred with R123 at an operating temperature of 65°C with an L e of 50 mm. It can be observed that the filling ratio had no effect on the ratio of heat flux in the THMCLOHP/CV, but the properties of the working fluid affected the heat flux, which depends on the operating temperature (Reay and Kew 2006; Parametthanuwat et al. 2010; Ma et al. 2006).
Effect of operating temperature on the heat flux of THMCLOHP/CV.
Effect of working fluid on heat flux
The working fluids differ in density, surface tension and latent heat of vapourization (Reay and Kew 2006; Dobson 2004). Of these three properties, the latent heat of vapourization has the greatest effect on the motion of the liquid slugs and vapour bubbles in a tube, as well as on the heat transfer rate of the HCLOHP/CVs of Rittidech et al. (2010). Therefore, if the working fluid is changed from water or ethanol to R123, the heat flux increases, as shown in Figures 4, 5 and 6. This may be because R123 has a low latent heat of vapourization and a boiling point lower than those of water and ethanol (Dunn and Reay 1982; Incropera and Dewitt 1996; Bhuwakietkumjohn and Rittidech 2010). The boiling point is an important parameter for the THMCLOHP/CV working temperature and performance: if the boiling point is low, the THMCLOHP/CV will work at low temperatures (Rittidech et al. 2010; Thongdaeng et al. 2012; Koito et al. 2009). Thus both the latent heat and the boiling point of the working fluid have an effect on THMCLOHP/CV performance.
Effect of evaporator length on heat flux
In this experiment, the evaporator, adiabatic and condenser sections were of equal length. This research concentrated on the effect that the evaporator length has on the heat flux of the THMCLOHP/CV, and the experimental results clearly show this effect. Figure 9 shows the experimental results, which can be compared with those of Rittidech et al. (2010) for the horizontal heat mode (HCLOHP/CVs) with ethanol and water; it can be seen that as L e increases from 50 to 150 mm, the heat flux slightly decreases. When L e is very long, the boiling phenomenon approaches pool boiling, and in pool boiling a low heat flux occurs at the evaporator section (Hsu 1962; Nimkon and Rittidech 2011; Cieslinski 2011). On the other hand, at a short L e, the boiling phenomenon approaches boiling inside a confined channel, at which a high heat flux occurs (Reay and Kew 2006; Charoensawan and Terdtoon 2008; Bhuwakietkumjohn et al. 2012). However, this work gave a lower heat flux than that of Rittidech et al. (2010), who used water, rather than the air used in this study, to receive heat at the condenser section, water having the higher specific heat capacity.
Relationship between the length of the evaporator and the heat flux of the THMCLOHP/CV.
From the results obtained, it can be concluded that:
The filling ratio had a slight effect on thermal performance of the THMCLOHP/CV. The thermal performance of the THMCLOHP/CV with L e of 50 mm was higher than the L e of 100 and 150 mm at a filling ratio of 50% when using R123 as working fluid.
The operating temperature had an effect on the heat flux of the THMCLOHP/CV; when the operating temperature was increased, the heat flux increased.
The angle of inclination of the THMCLOHP/CV affected the heat flux because of the gravitational head, which depends on the fluid density, the acceleration due to gravity and the length of the tube.
As L e increases from 50 to 150 mm, the heat flux slightly decreases. With a longer L e, the boiling phenomenon approaches pool boiling, and in pool boiling a low heat flux occurs.
It was further found that the physical parameters (filling ratio, L e, angle of inclination and operating temperature) had an effect on the heat transfer rate in normal operation, but the properties of the working fluid also affected the heat transfer rate.
A c, total outer surface area of tube, m2
Q, heat transfer rate, W
q, heat flux, W/m2
\( \overset{\cdotp }{m} \), mass flow rate, kg/s
C p, specific heat capacity at constant pressure, J/kg·°C
T, temperature, °C
Fr, filling ratio, %
L e, length of evaporator, mm
d i,max, inner diameter of copper tube, mm
D o, outside diameter, m
g, gravitational acceleration, m/s2
σ, surface tension, N/m
ρ, density of fluid, kg/m3
in, inlet (subscript)
out, outlet (subscript)
Beirlant, J, Goegebeur, Y, & Teugels, J. (2004). Statistics of extremes: theory and applications. Chichester, West Sussex: John Wiley & Sons Ltd.
Bhuwakietkumjohn, N, & Rittidech, S. (2010). Internal flow patterns on heat transfer characteristics of a closed-loop oscillating heat-pipe with check valves using ethanol and a silver nano-ethanol mixture. Experimental Thermal and Fluid Science, 34(8), 1000–1007. doi:http://dx.doi.org/10.1016/j.expthermflusci.2010.03.003.
Bhuwakietkumjohn, N, Rittidech, S, & Pattiya, A. (2012). Heat-transfer characteristics of the top heat mode closed-loop oscillating heat pipe with a check valve (THMCLOHP/CV). Journal of Applied Mechanics and Technical Physics, 53(2), 224–230. doi:10.1134/s0021894412020101.
Charoensawan, P, & Terdtoon, P. (2008). Thermal performance of horizontal closed-loop oscillating heat pipes. Applied Thermal Engineering, 28(5–6), 460–466. doi:10.1016/j.applthermaleng.2007.05.007.
Cieslinski, JT. (2011). Flow and pool boiling on porous coated surfaces. Reviews in Chemical Engineering, 27(3–4), 179–190. doi:10.1515/Revce.2011.007.
Dobson, RT. (2004). Theoretical and experimental modelling of an open oscillatory heat pipe including gravity. International Journal of Thermal Sciences, 43(2), 113–119. doi:http://dx.doi.org/10.1016/j.ijthermalsci.2003.05.003.
Dunn, P, & Reay, D. (1982). Heat pipes. Pergamon International Library. Oxford, United Kingdom: Elsevier.
Hibbeler, RC. (2004). Engineering mechanics statics. Upper Saddle River, New Jersey: Pearson Education.
Hsu, YY. (1962). On the size range of active nucleation cavities on a heating surface. Journal of Heat Transfer, 84, 207–216.
Incropera, FP, & Dewitt, DP. (1996). Fundamentals of heat and mass transfer (4th ed.). New York: John Wiley & Sons.
Koito, Y, Ikemizu, Y, Torii, S, & Tomimura, T. (2009). Operational characteristics of a top-heat-type heat transport loop utilizing vapor pressure (fundamental experiments and theoretical analyses). Kagaku Kogaku Ronbunshu, 35(5), 495–501.
Ma, HB, Hanlon, MA, & Chen, CL. (2006). An investigation of oscillating motions in a miniature pulsating heat pipe. Microfluidics and Nanofluidics, 2(2), 171–179. doi:10.1007/s10404-005-0061-8.
Miyazaki, Y, Polasek, S, & Akachi, H. (2000). Oscillating heat pipe with check valves. Paper Presented at the 6th International Heat Pipe Symposium, Chiang Mai, Thailand.
Nimkon, S, & Rittidech, S. (2011). Effect of working fluids and evaporator temperatures on internal flow patterns and heat transfer rates of a top heat mode closed-loop oscillating heat pipe with check valves (THMCLOHP/CV). Australian Journal of Basic and Applied Sciences, 5(10), 1013–1019.
Parametthanuwat, T, Rittidech, S, & Pattiya, A. (2010). A correlation to predict heat-transfer rates of a two-phase closed thermosyphon (TPCT) using silver nanofluid at normal operating conditions. International Journal of Heat and Mass Transfer, 53(21–22), 4960–4965. doi:10.1016/j.ijheatmasstransfer.2010.05.046.
Reay, D, & Kew, P. (2006). Heat pipes: theory, design and applications (5th ed.). Oxford: Butterworth-Heinemann.
Rittidech, S, Pipatpaiboon, N, & Terdtoon, P. (2007). Heat-transfer characteristics of a closed-loop oscillating heat-pipe with check valves. Applied Energy, 84(5), 565–577. doi:10.1016/j.apenergy.2006.09.010.
Rittidech, S, Pipatpaiboon, N, & Thongdaeng, S. (2010). Thermal performance of horizontal closed-loop oscillating heat-pipe with check valves. Journal of Mechanical Science and Technology, 24(2), 545–550. doi:10.1007/s12206-009-1221-7.
Thongdaeng, S, Rittidech, S, & Bubphachot, B. (2012). Flow patterns and heat-transfer characteristics of a top heat mode closed-loop oscillating heat pipe with check valves (THMCLOHP/CV). Journal of Engineering Thermophysics, 21(4), 235–247. doi:10.1134/s1810232812040029.
Xu, JL, Li, YX, & Wong, TN. (2005). High speed flow visualization of a closed loop pulsating heat pipe. International Journal of Heat and Mass Transfer, 48(16), 3338–3351. doi:http://dx.doi.org/10.1016/j.ijheatmasstransfer.2005.02.034.
Zorigtkhuu, D, Kim, YS, Kim, H, & Suh, YK. (2006). Manufacture of an oil-based Fe-Co magnetic fluid by utilization of the pickling liquid of steel. Metals and Materials International, 12(6), 517–523.
Generous support from the Faculty of Industrial and Technology Management through Department of Design and Production Technology of Agricultural Industrial Machinery (Grant No. FITM-5602004-15) to this research is acknowledged. Thanya Parametthanuwat and Nipon Bhuwakietkumjohn were also supported generously by Sampan Rittidech, head of the Heat-Pipe and Thermal Tools Design Research Unit (HTDR), Faculty of Engineering, Mahasarakham University; Thailand, and Thailand Research Fund and Office of The Higher Education Commission.
Heat pipe and Nanofluids Technology Research Laboratory (HNTRL), Faculty of Industrial Technology and Management, King Mongkut's University of Technology North Bangkok Prachin Buri Campus, Prachin Buri, 25230, Thailand
Nipon Bhuwakietkumjohn & Thanya Parametthanuwat
Nipon Bhuwakietkumjohn
Thanya Parametthanuwat
Correspondence to Thanya Parametthanuwat.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Bhuwakietkumjohn, N., Parametthanuwat, T. The top heat mode of closed loop oscillating heat pipe with check valves at the top heat mode (THMCLOHP/CV): a thermodynamic study. Int J Mech Mater Eng 9, 5 (2014). https://doi.org/10.1186/s40712-014-0005-8
Top heat mode (THM)
Oscillating heat pipe (OHP)
Check valve (CV)
Topology and non-equilibrium dynamics in engineered quantum systems
For each poster contribution there will be one poster wall (width: 97 cm, height: 250 cm) available. Please do not feel obliged to fill the whole space. Posters can be put up for the full duration of the event.
A topologically protected quantum dynamo effect in a driven spin-boson model
Bernhardt, Ephraim
We describe a driven system coupled to a single cavity mode or a collection of modes forming an Ohmic boson bath. When the system Hamiltonian changes in time, this induces a dynamical field in the bosonic modes having resonant frequencies with the driving velocity. This field opposes the change of the external driving field in a way reminiscent of the Faraday effect in electrodynamics, justifying the term 'quantum dynamo effect'. For the specific situation of a periodically driven spin-1/2 with adiabatic ground state on the Bloch sphere, we show that the work done by rolling the spin from north to south pole can efficiently be converted into a coherent displacement of the resonant bosonic modes. The effect thus corresponds to a work-to-work conversion and allows to interpret this transmitted energy into the bath as work. We study this effect, its performance and limitations in detail for a driven spin-1/2 in the presence of a radial magnetic field addressing a relation with topological systems through the formation of an effective charge in the core of the sphere. We show that the dynamo effect is directly related to the dynamically measured topology of this spin-1/2 and thus in the adiabatic limit provides a topologically protected method to convert driving work into a coherent field in the reservoir. The quantum dynamo model is realizable in mesoscopic and atomic systems.
Restoration of the non-Hermitian bulk-boundary correspondence via topological amplification
Brunelli, Matteo
Non-Hermitian (NH) lattice Hamiltonians display a unique kind of energy gap and extreme sensitivity to changes of boundary conditions—the so-called NH skin effect. Due to the NH skin effect, the separation between edge and bulk states is blurred and the bulk-boundary correspondence in its conventional form breaks down [1]. Despite considerable efforts to accommodate the NH skin effect into a modified bulk-boundary correspondence, a formulation for point-gapped spectra has remained elusive. In this contribution, I will show how to restore the bulk-boundary correspondence for the most paradigmatic class of NH lattice models, namely single-band models without symmetries. This is achieved by presenting an alternative route to the classification of NH topological phases, where the focus is shifted (i) from effective NH Hamiltonians to NH Hamiltonians obtained from the unconditional dynamics of driven-dissipative arrays of cavities, and (ii) from the eigen-decomposition to the singular value decomposition as the main tool for studying their bandstructure. The class of NH Hamiltonians that reveal the bulk-boundary correspondence are unconditional NH Hamiltonians, which do not neglect quantum jumps altogether but instead retain a contribution from fluctuation-dissipation processes by averaging over the quantum state of the system. Concretely, the desired NH Hamiltonian is implemented in one-dimensional driven-dissipative cavity arrays, in which Hermiticity-breaking terms — in the form of non-reciprocal hopping amplitudes, gain and loss — are explicitly modelled via coupling to (engineered and non-engineered) reservoirs. I will show that this approach introduces extra constraints to the NH Hamiltonian, neglected so far, which determine the following major changes to the topological characterization: First, the complex spectrum is not invariant under complex energy shifts, which removes the arbitrariness in the definition of the topological invariant. 
Second, topologically non-trivial Hamiltonians are only a strict subset of those with a point gap; this implies that the NH skin effect does not have a topological origin. Third, topological phase transitions are accompanied by the closure of a real-valued gap, defined in terms of the singular values. I will then show how to reinstate the bulk-boundary correspondence in terms of the singular value decomposition, instead of the eigen-decomposition, and explain its physical significance. The NH bulk-boundary correspondence takes the following simple form: An integer value $\nu\in \mathbb{Z}$ of the winding number defined on the complex spectrum of the system under periodic boundary conditions corresponds to $\vert \nu\vert$ exponentially small singular values, associated with singular vectors that are exponentially localized at the system edge under open boundary conditions and vice versa; the sign of $\nu$ determines at which edge the vectors localize. Non-trivial topology manifests as directional amplification with gain exponential in system size, which is the hallmark of NH topology [2]. I will explain the physical relevance of this peculiar behaviour. More details in: M. Brunelli, C. C. Wanjura, and A. Nunnenkamp, arXiv:2207.12427 (2022). References: [1] E. J. Bergholtz, J. C. Budich, and F. K. Kunst, Rev. Mod. Phys. 93, 015005 (2021). [2] C. C. Wanjura, M. Brunelli, and A. Nunnenkamp, Nature Communications 11, 3149 (2020).
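The winding number on the complex PBC spectrum invoked above can be illustrated in a few lines for the Hatano-Nelson chain, the simplest single-band non-Hermitian model. This is a generic sketch, not the authors' driven-dissipative cavity construction; the hopping amplitudes and base energy are illustrative.

```python
import numpy as np

# PBC Bloch "Hamiltonian" of the Hatano-Nelson chain: h(k) = t_R e^{ik} + t_L e^{-ik}
t_R, t_L = 1.0, 0.5
ks = np.linspace(0.0, 2 * np.pi, 2001)
E = t_R * np.exp(1j * ks) + t_L * np.exp(-1j * ks)  # complex spectrum with a point gap

# Winding number of the spectrum around a base energy inside the point gap
E_base = 0.0
phase = np.unwrap(np.angle(E - E_base))
nu = (phase[-1] - phase[0]) / (2 * np.pi)  # +1 here, since |t_R| > |t_L|
```

A nonzero value of nu signals the skin effect and directional amplification under open boundary conditions.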
Topological charge pumping in subwavelength Raman lattices
Burba, Domantas
Ultra-cold atoms in optical lattices have demonstrated utility for simulating various condensed matter phenomena as well as for realizing paradigmatic models. However, conventional optical lattices for ultra-cold atoms rely on the AC Stark shift to produce a potential proportional to the local optical intensity. As a direct result, the lattice period cannot be smaller than half the optical wavelength $\lambda$. Recently, two techniques have emerged to create deeply sub-wavelength lattices; both can be understood in terms of "dressed states" created by coupling internal atomic states with one- or two-photon optical fields. Here we focus on a scheme relying on sequentially coupling $N$ internal atomic states using two-photon Raman transitions. This results in an adiabatic potential for each of the $N$ dressed states, displaced by $\lambda/2N$ from each other. We show that adding temporal modulation to the detuning from Raman resonance can couple the $s$ and $p$ bands of adjacent lattice sites belonging to different dressed states. In the tight-binding limit, this gives rise to a pair of coupled Rice-Mele (RM) chains with new regimes of topological charge pumping. The present study opens new possibilities for studying the topological properties of subwavelength optical lattices induced by periodic driving.
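The topological character of a Rice-Mele pump cycle can be sketched numerically for a single, standard two-band RM chain (a stand-in, not the coupled subwavelength chains of the abstract; all parameters are illustrative) by computing the Chern number on the (k, phi) torus with the Fukui-Hatsugai-Suzuki lattice method:

```python
import numpy as np

def h(k, phi, t=1.0, t2=1.0, d=0.6, D=0.6):
    # Rice-Mele Bloch Hamiltonian: alternating hoppings plus staggered onsite energy
    dx = t + d * np.cos(phi) + t2 * np.cos(k)
    dy = t2 * np.sin(k)
    dz = D * np.sin(phi)
    return np.array([[dz, dx - 1j * dy], [dx + 1j * dy, -dz]])

N = 40
ks = np.linspace(0, 2 * np.pi, N, endpoint=False)
phis = np.linspace(0, 2 * np.pi, N, endpoint=False)
u = np.empty((N, N, 2), complex)  # lower-band eigenvector on the (k, phi) torus
for i, k in enumerate(ks):
    for j, p in enumerate(phis):
        w, v = np.linalg.eigh(h(k, p))
        u[i, j] = v[:, 0]

def link(a, b):
    z = np.vdot(a, b)       # <a|b>, normalised to a pure phase
    return z / abs(z)

# Sum of Berry fluxes over all plaquettes (Fukui-Hatsugai-Suzuki)
C = 0.0
for i in range(N):
    for j in range(N):
        U1 = link(u[i, j], u[(i + 1) % N, j])
        U2 = link(u[(i + 1) % N, j], u[(i + 1) % N, (j + 1) % N])
        U3 = link(u[(i + 1) % N, (j + 1) % N], u[i, (j + 1) % N])
        U4 = link(u[i, (j + 1) % N], u[i, j])
        C += np.angle(U1 * U2 * U3 * U4)
C /= 2 * np.pi
```

The pump cycle chosen here encircles the gap-closing point of the static chain, so |C| = 1: one charge is pumped per period.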
Disorder free localization transition and spectral features in a 2D lattice gauge theory
Chakraborty, Nilotpal
We show that there exists a disorder-free localization transition in two dimensions in a generic model of a lattice gauge theory, namely the U(1) quantum link model, relevant to frustrated magnets. We study the nature of the localization transition using a percolation model, from which we show that in a certain regime of the Hamiltonian the disorder-free localization transition is a continuous transition whose universality class we determine by calculating exact critical exponents. We also calculate spectral features of such a localized system deep in the localized phase using a cluster expansion approach. We show that such a localized system has sharp peaks in spatially averaged high-temperature spectral functions even in the infinite-size limit, whose positions we estimate exactly by analytical means. Our results highlight unique features of disorder-free localization which distinguish it from conventional many-body localization in disordered systems, as well as from the otherwise expected high-temperature paramagnetic response in frustrated magnets.
Fractional Topology in interacting $1$D Superconductors
del Pozo, Frederick
We investigate the topological phases of two interacting superconducting wires in one dimension, and propose topological invariants directly measurable from ground state correlation functions. These numbers remain powerful tools in the presence of couplings and interactions. We show with \emph{density matrix renormalization group} that the \emph{double critical Ising (DCI)} phase discovered in \cite{Herviou_2016} is a fractional topological phase with gapless Majorana modes in the bulk and a one-half topological invariant per wire. Using both numerics and quantum field theoretical methods, we show that the effect of an inter-wire hopping amplitude $t_{\bot}$ on the phase diagram reveals that the \emph{DCI} phase is stable at length scales below $\sim 1/t_{\bot}$. For large inter-wire hopping, we instead show the emergence of two integer topological phases hosting one edge mode per boundary, shared between both wires. At large interactions the two wires are described by Mott physics, with the $t_{\bot}$ hopping resulting in a paramagnetic order.
Floquet physics of circularly-driven $\alpha$-$\mathcal{T}_3$ lattice
Dey, Bashab
The $\alpha$-$\mathcal{T}_3$ lattice is a two-dimensional three-band crossing model. It has a hopping parameter $\alpha$ which, on tuning from 0 to 1, results in a continuous evolution of its low energy Dirac-Weyl Hamiltonian from pseudospin-1/2 to pseudospin-1. The system also has an $\alpha$-dependent Berry phase whose effect has been observed in various physical quantities. We have studied the Floquet states of the $\alpha$-$\mathcal{T}_3$ lattice irradiated by intense circularly polarized radiation in both the low and high frequency regimes. At low frequency driving, we obtain the quasienergy bands close to the Dirac points, which exhibit a strong dependence on $\alpha$ unlike the static bands. The quasienergy bands have no valley degeneracy and no electron-hole symmetry for $0<\alpha<1$. The analytical expressions of the quasienergy gaps show that they are related to the $\alpha$-dependent Berry phase, which is acquired by the quasiparticles on traversing closed loops around the Dirac points triggered by circularly polarized light. The tunability of the Berry phase in this system makes it an ideal platform to probe the effect of the geometric phase through the generation of quasienergy bands. In the high frequency regime, we obtain an effective static Hamiltonian within the Floquet formalism. We observe that the effective Hamiltonian does not preserve time-reversal symmetry and gives rise to Floquet topological insulator phases. The three-fold degeneracy at the Dirac point is lifted for all values of $\alpha$. Topological phase transitions occur at $\alpha=1/\sqrt{2}$, characterized by a change in the Chern number of the valence (conduction) band from 1(-1) to 2(-2). The point of transition is independent of the polarization of light (except linear) within the high-frequency approximation. Hence, this system offers a Floquet topological insulator phase with a higher Chern number than those of conventional two-band systems like graphene, HgTe/CdTe quantum wells, etc.
Fate of Floquet topological phases upon adding interactions
Dutta, Arijit
Periodically driven clean noninteracting systems are known to host several interesting topological phases. In particular, for high frequency driving, they have been found to host the analogues of equilibrium topological phases, like the Haldane phase. However, upon lowering the driving frequency, these systems have been found to host anomalous phases with robust edge modes despite all Chern numbers being zero. Moreover, theoretical works have shown that adding disorder to such anomalous phases leads to quantized charge pumping through the edge modes even when all bulk states become localised. We investigate the fate of these phases in the presence of electron-electron interactions of the Falicov-Kimball type.
The Frustration of Being Odd
Franchini, Fabio
We consider the effects of so-called Frustrated Boundary Conditions (FBC) on quantum spin chains, namely periodic BC with an odd number of sites. In the absence of external fields, FBC allow for the direct determination of correlation functions that signal a spontaneous symmetry breaking, such as the spontaneous magnetization. When paired with anti-ferromagnetic interactions, FBC introduce geometrical frustration into the system and the ground state develops properties which differ from those present with other boundary conditions, thus bringing striking, yet puzzling, evidence that certain boundary conditions can affect the bulk properties of a 1D system. We argue that FBC introduce long-range order in the system, similar to that enjoyed by SPT phases, and add a sizable amount of complexity to the ground state. Our results prove that even the weakest form of geometrical frustration can deeply affect a system's properties and pave the way for a bottom-up approach to better understand the effects of frustration and their exploitation, also for technological purposes.
Dynamical construction of higher-order topological systems
Ghosh, Arnob Kumar
We propose a three-step periodic drive protocol to engineer two-dimensional~(2D) Floquet quadrupole superconductors and three-dimensional~(3D) Floquet octupole superconductors hosting zero-dimensional Majorana corner modes~(MCMs), based on unconventional $d$-wave superconductivity. Remarkably, the driven system hosts four phases: with only $0$ MCMs, with no MCMs, with only anomalous $\pi$ MCMs, and with both regular $0$ and anomalous $\pi$ MCMs. To circumvent the subtle issue of characterizing the $0$ and $\pi$ MCMs separately, we employ the periodized evolution operator to construct the dynamical invariants, namely the quadrupole and octupole motion in 2D and 3D, respectively, which can distinguish the different higher-order topological phases unambiguously. Furthermore, we extend our study to a periodic harmonic drive and generalize the definitions of the dynamical quadrupolar moment for this drive.
Quantum many-body Jarzynski equality and dissipative noise on a digital quantum computer
Hahn, Dominik
The quantum Jarzynski equality and the Crooks relation are fundamental laws connecting equilibrium processes with nonequilibrium fluctuations. They are promising tools to benchmark quantum devices and to measure free energy differences. While they are well established theoretically, and experimental realizations for few-body systems already exist, their experimental verification in the quantum many-body regime has remained an outstanding challenge. Here, we present results for nonequilibrium protocols in systems with up to sixteen interacting degrees of freedom obtained on trapped-ion and superconducting-qubit quantum computers, which verify the quantum Jarzynski equality and the Crooks relation in the many-body regime. To achieve this, we overcome present-day limitations in the preparation of thermal ensembles and in the measurement of work distributions on noisy intermediate-scale quantum devices. We discuss the accuracy to which the Jarzynski equality holds on different quantum computing platforms subject to platform-specific errors. Our analysis reveals a novel dissipative nonequilibrium regime, where a fast unitary drive compensates for dissipation and restores the validity of Jarzynski's equality. Our insights provide a new way to analyze errors in many-body quantum simulators.
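The Jarzynski equality itself, $\langle e^{-\beta W}\rangle = e^{-\beta \Delta F}$, can be checked numerically in a few lines for the simplest nontrivial case: a sudden quench of a two-level system, where the work is just the energy difference of the sampled microstate. This is a generic illustration with invented level energies, not the many-body protocols or hardware of the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 1.0
E0 = np.array([0.0, 1.0])   # pre-quench level energies
E1 = np.array([0.0, 2.5])   # post-quench energies (sudden quench, same eigenbasis)

p = np.exp(-beta * E0)
p /= p.sum()                              # initial Gibbs weights
n = rng.choice(2, size=200_000, p=p)      # sample the initial microstate
W = E1[n] - E0[n]                         # work done in a sudden quench

lhs = np.exp(-beta * W).mean()            # <exp(-beta W)> over the work distribution
Z0, Z1 = np.exp(-beta * E0).sum(), np.exp(-beta * E1).sum()
rhs = Z1 / Z0                             # = exp(-beta * DeltaF)
```

With these parameters, both sides come out near 0.791, agreeing to sampling accuracy even though the quench is far from equilibrium.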
Nonlinear edge modes from topological 1D lattices
Jezequel, Lucien
We propose a method to address the existence of topological edge modes in one-dimensional (1D) nonlinear lattices, by deforming the edge modes of linearized models into solutions of the fully nonlinear system. For large enough nonlinearities, the energy of the modified edge modes may eventually shift out of the gap, leading to their delocalisation in the bulk. We identify a class of nonlinearities satisfying a generalised chiral symmetry where this mechanism is forbidden, and the nonlinear edge states are protected by a topological order parameter. Different behaviours of the edge modes are then found and explained by the interplay between the nature of the nonlinearities and the topology of the linearized models.
Thermal deconfinement in doped $\mathbb{Z}_2$ lattice gauge theories
Linsel, Simon
Mobile charge carriers in high-$T_c$ superconductors, i.e. holes doped into a Mott-insulator, leave behind a string of displaced spins when they move through the antiferromagnetic spin background. Here we study the thermal deconfinement of holes in a many-body setting using classical $\mathbb{Z}_2$ lattice gauge theories as a strongly simplified model of such strings. The confined phase is characterized by localized hole pairs connected by (short) strings while deconfinement implies a global net of strings spanning over the entire lattice. We probe the deconfinement phase transition using classical Monte Carlo and percolation-inspired order parameters. In two dimensions, we show that for small hole doping, there is a thermal deconfinement phase transition. For large hole doping, we find strong indications that holes are always confined in the thermodynamic limit. The Hamiltonian in two dimensions is designed from scratch to be experimentally realistic in Rydberg atom array experiments. In three dimensions, in contrast to the two dimensional case, a thermal deconfinement phase transition exists for arbitrary hole doping. We map out the phase diagram and calculate the critical exponents of the phase transition. Our results provide new insights into the physics of the deconfinement of holes and can be tested experimentally using Rydberg atom array experiments.
Non-Hermitian skin effect of anomalous Floquet edge states
Liu, Hui
The non-Hermitian skin effect refers to the accumulation of an extensive number of eigenstates at the edges or corners of a system. Here, we introduce a new type of skin effect, which is generated by the accumulation of chiral edge modes in a two-dimensional unitary system. We show that removing a boundary hopping from the real-space time-evolution operator stops the propagation of edge modes in anomalous Floquet topological insulators. This is in contrast to the behavior of periodically-driven Chern insulators, in which boundary modes continue propagating. By evaluating the local density of states, we found that the resulting non-Hermitian skin effect is critical, i.e. scale-invariant, due to the nonzero coupling between the bulk and the edge. Further, it is a consequence of nontrivial topology, which we show by introducing a real-space topological invariant. The latter predicts the appearance of a skin effect in an anomalous Floquet topological insulator phase, and its absence for either trivial or Chern phases. Our work opens the possibility to realize topological, non-Hermitian skin effects via periodic driving, as well as a new direction to establish the bulk-boundary correspondence for non-Hermitian topological classifications.
Extended mean-field theory of strong coupling Bose polarons
Mostaan, Nader
A mobile quantum impurity resonantly coupled to a BEC forms a quasiparticle termed the Bose polaron. The Bose polaron is a generic concept central to describing many phenomena in numerous condensed matter systems. Ultracold atomic mixtures have emerged as a promising platform to study polaron physics with unprecedented accuracy and control. In particular, the strength of the impurity-boson interactions is tunable via Feshbach resonances, allowing access to interaction regimes beyond the capacity of conventional solid-state platforms. On the repulsive side of a Feshbach resonance, the impurity-boson potential admits a bound state, which significantly affects the properties of the interacting system in the many-body limit. At strong coupling, when an impurity-boson dimer exists, the impurity can bind a diverging number of non-interacting bosons, resulting in an unbounded ground-state energy. In this case, interboson interactions are crucial to stabilize the Bose polaron, resulting in a strongly correlated state. To describe the strong coupling Bose polaron across unitarity, we employ a Fock-coherent-state (FCS) ansatz, which takes an arbitrary number of bound state occupations into account ($n=0,1,2,\cdots$) and includes a coherent state of excitations on top. In this manner, we extend the standard coherent state ansatz, which has proven powerful in describing the dynamics and quasiparticle properties of Bose polarons. We study polaron dynamics at strong coupling and find that the multi-body resonances predicted by the coherent state ansatz dissolve when the repulsive interboson interaction, absent in the prior mean-field analysis of the Bose polaron, is added.
Operator Spreading in Perturbed Dual-Unitary Circuits
Rampp, Michael
The dynamical behaviour of strongly correlated many-body systems out of equilibrium is notoriously hard to describe both analytically and numerically. In recent years dual-unitary circuits have emerged as paradigmatic examples of chaotic many-body systems in which a variety of dynamical quantities can be computed exactly. However, in many respects dual-unitary circuits display behaviour that differs strikingly from the phenomenology observed numerically in more generic models. We investigate the behaviour of OTOCs in a broad class of perturbed dual-unitary circuits and show that even arbitrarily weak perturbations lead to the emergence of a diffusively broadening operator front at late times. This can be captured by a simple approximation in which the only characteristic of the gate entering is its operator entanglement.
Nonlinear current and dynamical quantum phase transitions in the flux-quenched Su-Schrieffer-Heeger model
Rossi, Lorenzo
We investigate the dynamical effects of a magnetic flux quench in the Su-Schrieffer-Heeger model in a one-dimensional ring geometry. We show that even when the system is initially in the half-filled insulating state, the flux quench induces a time-dependent current that eventually reaches a finite stationary value. Such a persistent current, which exists also in the thermodynamic limit, cannot be captured by linear response theory and is the hallmark of nonlinear dynamical effects occurring in the presence of dimerization. Moreover, we show that for a range of values of dimerization strength and initial flux, the system exhibits dynamical quantum phase transitions, even though the quench is performed within the same topological class of the model. Reference: L. Rossi and F. Dolcini, Phys. Rev. B 106, 045410 (2022)
Bath engineering as a tool to stabilize Floquet-topological insulating steady states in weakly interacting quantum gas mixtures
Schnell, Alexander
Motivated by recent experimental progress, we consider a mixture of two species of cold atoms: non-interacting fermions in a 2D optical lattice, embedded in a bath of weakly interacting bosons that form a BEC. The 2D lattice is driven time-periodically such that the Floquet bands exhibit Floquet-Chern insulating behavior. We show that by careful engineering of the BEC bath, the steady state of the system can have an effective temperature that is low enough to observe quantized charge pumping, which is absent for generic ohmic bath environments. The results suggest a strategy for robust state preparation in quantum simulators and might also generalize as a strategy to fight Floquet heating in interacting systems.
Gauge-theoretic origin of Rydberg quantum spin liquids
Tarabunga, Poetri Sonya
Recent atomic physics experiments and numerical works have reported complementary signatures of the emergence of a topological quantum spin liquid (QSL) in arrays of Rydberg atoms with blockade interactions. To elucidate the origin of this QSL phase, we introduce an exact relation between an Ising-Higgs lattice gauge theory on the kagome lattice and the Rydberg blockaded models on the Ruby lattice. Based on this relation, we argue that the observed QSL phase is linked to the deconfined phase of an analytically solvable lattice gauge theory. We show that our model hosts the QSL phase in a broad region of the parameter space, thus opening up further possibilities to realize QSLs in different experimental settings with Rydberg dressing techniques.
Adiabatic charge pumping in bosonic Chern-insulator analogs
Tesfaye, Isaac
Isaac Tesfaye, Botao Wang, André Eckardt (Institut für Theoretische Physik, Technische Universität Berlin, Hardenbergstraße 36, 10623 Berlin, Germany)

Mimicking fermionic Chern insulators with bosons has drawn a lot of interest in experiments using, for example, cold atoms [1,2] or photons [3]. Here we present a scheme to prepare and probe a bosonic Chern-insulator analog by using (a) an ensemble of randomized bosonic states and (b) an initial Mott state configuration. By applying a staggered superlattice, we can identify the lowest band with individual lattice sites. The delocalization over this band in quasimomentum space is then achieved by introducing on-site disorder or local random phases (a). Switching off the interactions and adiabatically decreasing the superlattice then gives rise to a bosonic Chern insulator, whose topologically non-trivial character is further confirmed by the Laughlin-type quantized charge pumping. In addition, we propose a detection scheme allowing for the observation of the bosonic quantized charge pump using a feasible number of experimental snapshots. Our protocol provides a useful tool to realize and probe topological states of matter in quantum gases or photonic systems.

[1] Aidelsburger, Monika, et al. "Measuring the Chern number of Hofstadter bands with ultracold bosonic atoms." Nature Physics 11.2 (2015): 162-166.
[2] Cooper, N. R., J. Dalibard, and I. B. Spielman. "Topological bands for ultracold atoms." Reviews of Modern Physics 91.1 (2019): 015005.
[3] Ozawa, Tomoki, et al. "Topological photonics." Reviews of Modern Physics 91.1 (2019): 015006.
Dynamical localization transition of string breaking in quantum spin chains
Verdel Aranda, Roberto
Quantum spin chains, readily implemented in current state-of-the-art quantum simulators, have been shown to be a versatile platform to simulate lattice gauge theories, featuring rich physical phenomena ranging from string breaking to meson collisions. Yet, many questions concerning such phenomena still remain open. In particular, previous microscopic studies suggest a dichotomy for the fate of the confining string: its fission can occur relatively fast or be dramatically delayed. Here we aim to provide a unified account of the aforementioned scenarios in terms of an underlying dynamical phase transition. As a first step, we map the problem of string breaking for a short string to an impurity diffusion problem in Fock space. This effective description captures accurately the decay of the string in the regime of weak transverse fields. Next, we generalize such a description to a spin-boson model to approximate the breaking of a long string, which is effectively represented as a few level impurity immersed in a weakly interacting meson bath. We find that, within the considered limit, there is a localization-delocalization transition that separates a phase with a long-lived (prethermal) string from a fast string-breaking phase. This transition is identified through the scaling of the inverse participation ratio of the impurity modes, captured by universal scaling exponents and functions. Our description thus sheds light on possible universal aspects of nonequilibrium string breaking dynamics on the lattice and connects this phenomenon to the physics of quantum impurity models.
Liouvillian skin effect in an exactly solvable model
Yang, Fan
The interplay between dissipation, topology, and sensitivity to boundary conditions has recently attracted tremendous amounts of attention at the level of effective non-Hermitian descriptions. Here we exactly solve a quantum mechanical Lindblad master equation describing a dissipative topological Su-Schrieffer-Heeger (SSH) chain of fermions for both open boundary condition (OBC) and periodic boundary condition (PBC). We find that the extreme sensitivity on the boundary conditions associated with the non-Hermitian skin effect is directly reflected in the rapidities governing the time evolution of the density matrix giving rise to a Liouvillian skin effect. This leads to several intriguing phenomena including boundary sensitive damping behavior, steady state currents in finite periodic systems, and diverging relaxation times in the limit of large systems. We illuminate how the role of topology in these systems differs in the effective non-Hermitian Hamiltonian limit and the full master equation framework. Reference: F. Yang, Q.-D. Jiang, and E. J. Bergholtz, Phys. Rev. Research 4, 023160 (2022)
Households and food security: lessons from food secure households in East Africa
Silvia Silvestri1,
Douxchamps Sabine2,
Kristjanson Patti3,
Förch Wiebke4,
Radeny Maren4,
Mutie Ianetta1,
Quiros F. Carlos1,
Herrero Mario5,
Ndungu Anthony3,
Ndiwa Nicolas1,
Mango Joash3,
Claessens Lieven6 &
Rufino Mariana Cristina1,7
What are the key factors that contribute to household-level food security? What lessons can we learn from food secure households? What agricultural options and management strategies are likely to benefit female-headed households in particular? This paper addresses these questions using a unique dataset of 600 households that allows us to explore a wide range of indicators capturing different aspects of performance and well-being for different types of households—female-headed, male-headed, food secure, food insecure—and assess livelihoods options and strategies and how they influence food security. The analysis is based on a detailed farm household survey carried out in three sites in Kenya, Uganda and Tanzania.
Our results suggest that food insecurity may not be more severe for female-headed households than male-headed households. We found that food secure farming households have a wider variety of crops on their farms and are more market oriented than are the food insecure. More domestic assets do not make female-headed households more food secure. For the other categories of assets (livestock, transport, and productive), we did not find evidence of a correlation with food security. Different livelihood portfolios are being pursued by male versus female-headed households, with female-headed households less likely to grow high-value crops and more likely to have a less diversified crop portfolio.
These findings help identify local, national and regional policies and actions for enhancing food security of female-headed as well as male-headed households. These include interventions that improve households' access to information, e.g., though innovative communication and knowledge-sharing efforts and support aimed at enhancing women's and men's agricultural market opportunities.
The potential impacts of climate change on food security in East Africa, while complex and variable due to highly heterogeneous landscapes, are a cause for concern considering that more than half of people depend on agriculture for all or part of their livelihoods [1, 2]. Impacts of climate change on agriculture include potentially significant yield losses of key staple crops, including maize, sorghum, millet, groundnut, and cassava [3, 4]. How well people are able to adapt to climate change, or reduce its negative impacts, will depend upon many factors (e.g., access to timely information, availability of cash, behavioural barriers, etc.) that often constrain the adoption of improved agricultural technologies and management strategies. Just as there are no 'silver bullet' technologies, there is an increasing realisation that 'transformative' agricultural changes are needed [5, 6].
Food security remains a serious challenge for many households in East Africa. There is evidence that the least food secure households, and especially female-headed ones, are less likely to adopt new agricultural technologies and practices that could improve their farm productivity and make them more resilient or less vulnerable to climate change [7, 8]. While there is increasing evidence that farmers are changing their practices in response to several drivers—including both climate shocks and longer term climatic trends—adoption rates of new practices remain low and the changes being made typically involve relatively small rather than more forward-looking investments aimed at conserving scarce resources and enhancing resilience [8, 9]. Many rural households are unable to try new crop, livestock, water, soil and agroforestry-related technologies and improved management techniques and innovations due to multiple constraints, including lack of money needed for such investments, poor access to natural resources (water or land), lack of inputs (including labour), and lack of information [9, 10].
Faced with increasing population pressure, rising agricultural input prices, land fragmentation and degradation, as well as a changing climate, farming households will need to pursue new agricultural and non-agricultural adaptation options including leaving farming. While there is a rapidly growing literature on vulnerability and adaptation to increased climatic variability and climate change [11–14], significant knowledge gaps still exist, especially regarding the assessment of adaptation options in different environments and how these might be appropriately targeted to different types of households to reduce food insecurity [5, 15].
One approach to addressing this challenge is to learn from households that are doing better than others across different areas. Most studies aimed at explaining differences in agricultural productivity between households find that characteristics such as education levels, land and household size, and off-farm income are key variables that explain the variation in productivity [16, 17]. However, there is still little understanding of whether there are specific options that influence food security outcomes and that households are, or could be, implementing—such as adding or switching types and/or mixes of crops, livestock and other assets. Yet, considering that gender norms play a big role in shaping how well households will be able to adapt [18], additional information that helps us to better target male- and female-headed households regarding agricultural options and management strategies that are likely to benefit them would be very useful.
We address these aspects using a unique dataset [19] that allows us to explore a wide range of indicators capturing different aspects of livelihoods and well-being for different types of households (female-headed, male-headed, food secure, food insecure) and assess livelihood strategies and the ways in which they can influence food security. The paper addresses a call for multidisciplinary investigation of food security challenges, providing much needed evidence on the circumstances of more versus less food secure households [5].
Methods and data
Sampling strategy and survey implementation
We use household survey data collected through a detailed farm characterisation tool called 'IMPACTlite' and implemented in 2012 in East Africa [20]. The data are available online at http://data.ilri.org/portal/dataset [21].
The survey includes information on: household size and composition; household assets; ownership of land and livestock; agricultural inputs and labour use for cropping, aquaculture and livestock activities; utilisation of agricultural products including sales, consumption, and seasonal food consumption; off-farm employment and other sources of livelihoods such as remittances and subsidies. It leads to a detailed characterisation of households for broadly representative agricultural production systems, and allows us to develop farm-level indicators that show ranges of income, productivity, etc. These can be used to parameterise household models and to examine ex ante the impact of climate change shocks on food security, for example, and the effects of various adaptation and mitigation strategies on farming households' labour demand, incomes, and nutrition.
We used a stratified sampling strategy, described in detail by [20], which consisted of identifying key agricultural production systems in targeted research sites. The research sites analysed in this study are CCAFS sites, chosen in a highly participatory manner with local partners [8, 22]. Within each of the identified production systems, representative villages were randomly selected up to a total of 20 villages per site. In each village, 10 households were randomly selected from a list of households. The surveys covered 68 villages and 600 households. Informed consent was obtained from each household. This cross-sectional approach offers a snapshot in time of highly dynamic agricultural systems. Panel data would better capture annual fluctuations in yields and incomes that occur with variations in rainfall or prices, for example. However, as a key objective is to compare and learn from the differences as well as similarities of households living within key agricultural production systems, a cross-sectional approach was chosen. A goal of the CCAFS program is to follow up with these same households in the future to better understand longer-term changes that they have been making.
Site characteristics
This paper focuses on data from three sites in East Africa [22] that were identified in 2010 as benchmark sites of CCAFS. These sites are: Rakai (Kagera Basin, Uganda), Wote (Makueni, Kenya) and Lushoto (Usambara, Tanzania).
The sites were selected using criteria such as poverty levels, vulnerability to climate change, key biophysical, climatic and agro-ecological gradients, agricultural production systems, and partnerships [23]. Figure 1 shows the locations of the CCAFS sites and Table 1 provides a description of the sites, summarising climate, farming systems, main crops and livestock. A more detailed description of these sites can be found in [23–25] and [22]. These sites are also hot spots of climate change and food insecurity as identified by [26]. The three sites are all characterised by bimodal rainfall but differ in rainfall levels, with Wote being the driest site. All sites feature mixed crop-livestock production systems, with one, two and three dominant production system types in Rakai, Wote and Lushoto, respectively.
Research site locations
Table 1 Site description
The analysis proceeds in two parts. We first analyse the characteristics of food secure and food insecure female-headed and male-headed households, and then use a logistic regression model to analyse the factors influencing household food security. The dependent variable, food security, is binary (taking a value of 1 if the household is food secure and 0 otherwise).
The concept of food security is of course quite complex, relating to availability, access, affordability and use of food, as well as stability concerns [26, 27]. This study focuses primarily on food availability, considering a household 'food secure' when it has sufficient food (from any source) to meet its dietary (energy) needs throughout the year, as defined below. Prior to including the predictor variables in the regression analysis, we tested them for collinearity. We excluded from the model those variables whose variance inflation factor (VIF) was >5.0.
The main explanatory variables (the independent variables) were selected based on previous studies examining factors influencing food security [7, 28, 29]. These variables included: income, assets, labour, crop and activity diversification, agricultural yields and market orientation.
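As a sketch of this screening step, the VIF check can be implemented directly from its definition, VIF_k = 1/(1 − R²_k); the predictor names and synthetic data below are illustrative assumptions, not survey values, and the retained predictors would then enter the logistic regression.

```python
# Illustrative VIF screen (threshold 5.0) before fitting the logistic
# regression; predictors and data here are synthetic, not survey values.
import numpy as np

def vif(X):
    """Variance inflation factors: VIF_k = 1 / (1 - R^2_k), where R^2_k
    comes from regressing column k on the remaining columns plus an
    intercept."""
    n, p = X.shape
    out = []
    for k in range(p):
        y = X[:, k]
        A = np.column_stack([np.ones(n), np.delete(X, k, axis=1)])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1.0 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

rng = np.random.default_rng(0)
income = rng.normal(size=200)
assets = 0.95 * income + 0.05 * rng.normal(size=200)  # nearly collinear
labour = rng.normal(size=200)
X = np.column_stack([income, assets, labour])
keep = vif(X) <= 5.0  # predictors with VIF > 5.0 are excluded
```

In this toy example only `labour` survives the screen, since `income` and `assets` are nearly collinear by construction.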
Energy availability was calculated for each household based on production and food consumption data following [14]. Households reported food items consumed on a weekly basis by each member of the household, indicating seasonal differences between what they considered a 'good period' and a 'bad period' in a given year. This information was used to calculate a food security ratio (FSR) as shown in Eq. (1) to reflect how households rely on farm production and food purchases to meet their energy needs, calculated using World Health Organization standards [30]. FSR is defined as the total energy in available food divided by the total energy requirements for the household. FSR values greater than one (FSR > 1) mean that the household meets its energy requirements and has access to surplus energy.
$$ {\text{FSR}}_{i} = \frac{{\mathop \sum \nolimits_{m = 1}^{p} \left( {{\text{QtyC}}_{m} \times {\text{E}}_{m} + {\text{QtyP}}_{m} \times {\text{E}}_{m} } \right)}}{{\mathop \sum \nolimits_{j = 1}^{n} {\text{K}}_{j} }} $$
where FSR i is the food security ratio for household i; QtyC m is the quantity of food item m produced on-farm that is available for consumption (kg or L); QtyP m is the quantity of food item m purchased that is consumed (kg or L); E m is the energy content of food item m (MJ kg−1 or MJ L−1); K j is the energy requirement in MJ per capita for member j; and n is the number of members in household i.
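Eq. (1) can be sketched as a small function; the food items, quantities, and energy contents below are made-up illustrations, not values from the survey.

```python
# Minimal sketch of the food security ratio (FSR) in Eq. (1); all
# quantities and energy contents here are hypothetical examples.

def food_security_ratio(produced_kg, purchased_kg, energy_mj_per_kg,
                        member_requirements_mj):
    """Energy available from on-farm production plus purchases, divided
    by the summed annual energy requirements of household members."""
    available = sum(
        (produced_kg.get(item, 0.0) + purchased_kg.get(item, 0.0)) * e
        for item, e in energy_mj_per_kg.items()
    )
    return available / sum(member_requirements_mj)

# Hypothetical three-member household: 200 kg of maize produced and
# 50 kg of beans purchased over the year.
fsr = food_security_ratio(
    produced_kg={"maize": 200.0},
    purchased_kg={"beans": 50.0},
    energy_mj_per_kg={"maize": 15.2, "beans": 14.0},
    member_requirements_mj=[3800.0, 3800.0, 2500.0],
)
# fsr is about 0.37 here, i.e. FSR < 1: this household is food insecure.
```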
Income is considered as one of the most important factors impacting food insecurity and hunger of populations, since hunger rates decline sharply with rising incomes [28, 31, 32]. Gross farm and off-farm income were calculated using revenues from crop, livestock and off-farm activities, respectively. Crop income for each household was calculated based on sales of crops. Livestock income for each household was calculated based on sales of live animals and livestock products. Off-farm income was the sum of the cash earned from all off-farm activities and it included remittances.
Land, livestock, domestic, transport and productive assets affect food security in different ways. Land ownership has been shown to strongly influence incomes and livelihoods, and is highly skewed within villages across Africa [17]. Livestock assets contribute directly to food security by providing energy through consumption, and indirectly through the sales of animals and animal products that generate cash, the provision of manure and draft power [33]. Domestic assets such as radios, cell phones, stoves, etc. improve household welfare and assist in the exchange of information, thus facilitating decision making [11, 34]. Transport assets (bicycles, trucks, motorbikes, etc.) help increase access to markets and mobility to attend meetings, training and other events, enhancing access to, and use of, information, social capital and social networks [7]. The use of farm machinery, tools, etc. (productive assets) leads to an increase in production and potentially income [35].
The ratio of total land area owned per adult equivalent (land per capita) was used in this analysis. To calculate livestock, domestic, transport and productive assets, we assigned weights (w) to each of the items in each asset category, with the weights adjusted according to the age of the item, following guidelines developed for Bill and Melinda Gates funded projects [36]. Asset indices were then calculated as the sum of the number of assets, weighted by type of asset and age [37], as shown in Eq. 2.
$$ {\text{Household Domestic Asset Index}} = \mathop \sum \limits_{g = 1}^{G} \left[ {\mathop \sum \limits_{i = 1}^{N} \left( {{\text{w}}_{gi} \times {\text{a}}} \right)} \right],\;\;\;\;\;i = 1, 2, \ldots, N;\;g = 1, 2, \ldots, G $$
where w gi is the weight of the i'th item of asset category g; N is the number of items of asset category g owned by the household; a is the age adjustment to the weight; and G is the number of asset categories owned by the household.
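A minimal sketch of Eq. (2) follows; the item weights and age adjustments below are placeholders, not the actual guideline values from [36].

```python
# Sketch of the age-weighted asset index in Eq. (2); weights and age
# adjustments are placeholder values, not the published guideline ones.

def asset_index(items):
    """items: (weight, age_adjustment, count) per asset item owned."""
    return sum(w * a * n for w, a, n in items)

# Hypothetical domestic assets: one radio (weight 2, somewhat worn)
# and two newer cell phones (weight 3).
index = asset_index([(2.0, 0.8, 1), (3.0, 0.9, 2)])  # 1.6 + 5.4 = 7.0
```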
Crop diversity and activity diversity
Crop diversity together with diversity of income sources (cash and in-kind, farm and off-farm) are considered to be key 'buffer strategies' households pursue to deal with risk in agrarian environments [29, 38]. Activity diversity is one of the strategies that can minimise household income variability, enhance food security, and also represents a primary means by which many individuals reduce risk [16, 39]. Crop diversity was calculated as the total number of crops grown by the households. Activity diversity was calculated as the total number of farm and off-farm activities households were engaged in.
Labour availability is an important determinant of household agricultural productivity and thus food security, especially in subsistence-oriented households, typically with small farms reliant on variable rainfall [40–42]. Crop and livestock labour were calculated as the total number of man/days spent working on crop and livestock-related activities, respectively.
Higher crop yields per acre typically help improve household food security as there is more food available, both for consumption and selling [30]. The ratio of total quantity of local varieties of maize harvested to the size of maize plots was used to measure yields since local maize is the main crop produced and consumed across these sites.
Market orientation
Market orientation can have differential effects on household food security. It can increase diversification, which helps spread risk, but it can also reduce subsistence production, exposing households to higher risk and food insecurity during periodic shocks [43].
Market orientation was calculated as the ratio of products sold to those produced, in energy units, as shown in Eq. 3.
$$ {\text{MO}} = \frac{{\sum\nolimits_{i = 1}^{n} {\left( {{\text{QCs}}_{i} \times {\text{E}}_{i} } \right)} + \sum\nolimits_{j = 1}^{m} {\left( {{\text{QLs}}_{j} \times {\text{E}}_{j} } \right)} }}{{\sum\nolimits_{i = 1}^{n} {\left( {{\text{QCp}}_{i} \times {\text{E}}_{i} } \right)} + \sum\nolimits_{j = 1}^{m} {\left( {{\text{QLp}}_{j} \times {\text{E}}_{j} } \right)} }} $$
where QCs and QLs are the quantities of crop and livestock products i and j sold on the market (kg or L); QCp and QLp are the quantities of crop and livestock products i and j produced on-farm (kg or L); and E i and E j are the energy contents of products i and j (MJ kg−1 or MJ L−1).
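Eq. (3) can likewise be sketched in a few lines; the products and energy contents below are illustrative assumptions rather than survey values.

```python
# Sketch of the market orientation ratio in Eq. (3): energy sold over
# energy produced; products and energy contents are hypothetical.

def market_orientation(sold_kg, produced_kg, energy_mj_per_kg):
    sold = sum(sold_kg.get(p, 0.0) * e
               for p, e in energy_mj_per_kg.items())
    produced = sum(produced_kg.get(p, 0.0) * e
                   for p, e in energy_mj_per_kg.items())
    return sold / produced

# Hypothetical household selling part of its maize and milk output.
mo = market_orientation(
    sold_kg={"maize": 60.0, "milk": 300.0},
    produced_kg={"maize": 200.0, "milk": 400.0},
    energy_mj_per_kg={"maize": 15.2, "milk": 2.7},
)
# mo is about 0.42: roughly 42 % of the energy produced is sold.
```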
About three-quarters (76 %) of the households were male-headed and about one-quarter (24 %) were female-headed. Table 2 summarises the descriptive statistics for food secure and food insecure households, those able or not able to meet their food energy requirements throughout the year. An additional table shows this in more detail (see Additional file 1: Table S1). The results show that there are (too) many food insecure households in all the sites—62 % in Rakai (Uganda), 80 % in Lushoto (Tanzania) and 85 % in Wote (Kenya). Of these, many are female-headed (15 % in Rakai, 35 % in Lushoto, and 11 % in Wote). However, the share of female-headed households among food insecure households (20.7 %) is not greater than the share of female-headed households in the population (23.7 %), suggesting that food insecurity may not be more severe for female-headed households than for male-headed households.
Table 2 Mean and standard deviation for food secure and food insecure households in the three sites (p value <0.05)
The family size of food secure and food insecure households differs significantly (p < 0.01) across the three sites. On average, food secure households were smaller (4.5 members) than food insecure households (5.8 members). This result is consistent with findings of previous studies where larger household sizes have been found to have a negative impact on calorie availability, especially in the context of female-headed households [44–47]. Since resources are very limited, an increase in family size may put more pressure on consumption than it contributes to production.
Livestock and other assets
Most of these households own livestock (90 %)—41 % have cattle, 81 % own small ruminants (goats, sheep), and 89 % raise non-ruminants (poultry, pigs). We did not find a significant difference in livestock asset ownership between food secure and food insecure households. In fact, Table 2 shows that on average for each asset category, land per capita included, food secure households in all our sites do not have significantly more assets than do food insecure households—contrary to what one might expect, given that assets are often used as a proxy for wealth.
Across sites, maize is the most widely grown crop, cultivated by 93 % of the sampled households, followed by beans at 70 % (Table 3). Other common crops include banana, cassava, green pea, pigeon pea and cowpea. Figure 2 shows how land is allocated to the different crop categories in each site by food secure versus food insecure households. It suggests that food secure households allocate more land to all types of food crops, as well as cash crops, than do food insecure households. This is possible as they also own more land than the food insecure.
Table 3 Crop production in the study sites and gender patterns by cropping (percentages of male, MHH, and female-headed households, FHH)
Land allocation across the sites by food secure and food insecure households
In Lushoto, land is used to grow food crops such as potato and sweet potato, onion, maize, beans, cassava and vegetables, fruits such as banana and avocado and cash crops such as sugarcane, tea and coffee. Whilst both male- and female-headed households grow coffee, sugarcane and tea are the prerogative of male-headed households (Table 3). Food secure farmers have more land allocated to fruits but less to tree–herb–shrubs than food insecure farmers (Fig. 2). Yet, we observe that food secure female-headed households allocate more land to orchards, cereals (in particular maize) and starches (in particular potatoes) than food insecure female-headed households.
In Rakai, households cultivate cash crops such as coffee, groundnuts and tobacco. Households also grow fruits such as banana and passion fruit and food crops such as potato, maize, cassava, beans, sorghum and vegetables. Although the difference in land allocation was not significant between food secure and food insecure households in most cases (Fig. 2), the food secure households allocated significantly more land to starches, in particular cassava, cereals (especially maize), and vegetables than food insecure households. An additional table shows this in more detail (see Additional file 2: Table S2).
In Wote, land allocation for crops differs between food secure and food insecure households (Fig. 2). Fruits such as mango, oranges and papaya are grown, as well as crops that include green gram, pigeon pea, cowpea, beans, maize, sorghum, cassava and vegetables. Food secure female-headed households allocate on average more than twice the amount of land for cultivation of sorghum, and on average 40 % more for cultivation of green gram, than food insecure female-headed households. An additional table shows this in more detail (see Additional file 2: Table S2). The food secure female-headed households also allocate twice as much land to fruits, such as mango and oranges, than their food insecure counterparts. This could be due to the fact that when female-headed households feel more food secure they try to diversify into cash crops, which will increase their cash flow. There is also a marketing cooperative in the area for the sale of mangoes that may represent an additional incentive for food secure households to produce and sell mangoes in particular [48].
Overall, the results suggest that food secure households have a higher diversity of crops, although the difference is only highly significant in Rakai (Table 2). Furthermore, male-headed households are more diversified in their crops compared to female-headed households (with 4.7 ± 1.8 crop types versus 3.9 ± 1.9). Larger farms potentially allow for more crop diversification (Fig. 3).
Regression between land per capita and crop diversity. The 12 scores are the mean values per household type (male- vs female-headed households; food secure vs food insecure households) and per site
Crop diversification is a strategy that allows households to cope with changes in livelihoods [49]. A previous study in East Africa found that loss of markets for traditional crops and opening of new markets for new crops can increase the incentive for households to grow many different crops (e.g. up to six or seven) [50]. Capacity building also plays an important role in facilitating the adoption of alternative crops [51].
Across the sites, most households derive income from multiple sources (livestock, crops, and off-farm activities). Food insecure households in all three sites had lower income per capita than food secure households, although the difference is not highly significant. Therefore, even the food secure households are not earning much. Poverty levels are high; 37 % of households fall below the poverty line of USD 1.25 per capita per day—two-thirds of these are female-headed households. The proportion of households below the poverty line varies across these sites—ranging from 23 % in Wote to 63 % in Lushoto. However, within-site variability in incomes is high, reflecting large wealth differences between households that also emerge from other studies [17, 52, 53].
The relative contribution of crop income to total income can exceed 40 % for food secure households (Fig. 4). In contrast, the contribution of livestock to total income decreases from 10 % to virtually nothing for households with higher levels of food security. The contribution of off-farm income to total income for food insecure households is relatively high, on average 60 %. Across all sites, livestock production contributes on average USD 40 per year to gross income for female-headed households and USD 104 for male-headed households, while crops contribute on average USD 206 and USD 523 per year, respectively.
Contribution to household income of (a) cropping activities, (b) livestock activities and (c) off-farm activities. The 12 scores are the mean values per household type (male- vs female-headed households; food secure vs food insecure households) and per site
The contribution of livestock revenues to total income is higher for food insecure households. The majority of food insecure households largely dependent on livestock income are found in Wote, the driest of the three sites analysed, characterised by uncertainty of rainfall [54] and by low nutrient levels and low water-holding capacity [55]. These results confirm findings of other studies that show the significant contribution of livestock-related earnings to households' income and livelihoods in areas where rainfall levels are low [52, 56, 57]. In fact, where cropping is very risky due to low and unpredictable rainfall, the role of livestock as a livelihood option is likely to become even more important in the face of a changing climate [4].
Other studies such as [14, 58–60] suggest that crop diversification can boost total household income (Fig. 5a), and our data support that hypothesis. However, rather counter-intuitively, our data also show an inverse relationship between activity diversity and total household income—i.e., the more agricultural and non-agricultural activities a household is pursuing, the lower the income (Fig. 5b). Thus, it appears that household welfare depends more on the activity mix than on the total number of activities per se, and/or that low-income households have to diversify their income sources because they cannot meet their needs with a single income source.
Regression between total income per capita and crop diversity (a) and activity diversity (b). The 12 scores are the mean values per household type (male- vs female-headed households; food secure vs food insecure households) and per site
Determinants of food security for all households per each site
We included in the model the following variables whose variance inflation factor (VIF) was <5.0: livestock assets, domestic assets, productive assets, transport assets, maize yields, crop diversity, land per capita, activity diversity, crop income, livestock income, off-farm income, crop labour, livestock labour and market orientation. Estimated parameters for the factors influencing the likelihood of being food secure are presented in Table 4.
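The VIF screening step described above can be sketched as follows. This is an illustrative reconstruction, not the authors' actual procedure; the synthetic predictors, the `vif` helper and the application of the 5.0 cut-off to toy data are assumptions introduced for the example:

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of X (n_samples x n_features).
    VIF_j = 1 / (1 - R^2_j), where R^2_j is from regressing column j on all
    the other columns (plus an intercept)."""
    n, k = X.shape
    out = []
    for j in range(k):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])   # intercept + other predictors
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1.0 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
        out.append(1.0 / (1.0 - r2))
    return out

# Toy data: x3 is nearly collinear with x1, x2 is independent noise
rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = rng.normal(size=200)
x3 = x1 + 0.1 * rng.normal(size=200)
X = np.column_stack([x1, x2, x3])

vifs = vif(X)
keep = [j for j, v in enumerate(vifs) if v < 5.0]  # only x2 survives the cut-off
```

Including the intercept in each auxiliary regression matches the usual definition of VIF; collinear pairs like x1 and x3 both receive large values, so in practice one of them (rather than both) would typically be dropped before refitting.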
Table 4 Factors influencing food security (corresponding regression coefficients)
Crop diversity is a significant and important factor in all sites in terms of increasing the likelihood of food security. This study thus confirms findings of other research on the topic of the importance of crop diversification as a potential strategy to mitigate food insecurity by smallholders in Sub-Saharan Africa [61]. A diverse range of crops enhances food security for several reasons—it can increase yield stability, result in more diversified human diets, and lead to more regular and reliable household income that allows purchase of additional food [61].
A finding of this study that is rather intuitive is the positive correlation between maize yield and food security. Higher production of this main staple food translates into an increase in food available for home consumption. Where a surplus can be produced, this also can be sold, increasing cash earnings that can be spent on alternative food, mitigating seasonal food shortages.
As previously observed, the contribution of livestock revenues to total income is higher for food insecure households. The negative coefficient of livestock income could indicate that the main contribution of livestock to food security comes from the sale of livestock as a safety net during crisis, rather than the sale of animal products. Better-off households would be under less pressure to liquidate livestock holdings because of their ability to self-insure against the harvest shortfall through other means and would therefore rely less on livestock income [62]. Households in Wote in particular are highly dependent on sources of income vulnerable to agro-climatic shocks, such as drought. Furthermore, they experience low demand, translating into low selling prices, combined with highly regulated markets [62].
Livestock labour presents a negative coefficient, which may reflect that much labour is allocated to livestock in a context where households find it increasingly difficult to secure sufficient feed for their animals, together with a lack of price incentives for animal products and increasing costs of keeping livestock [63]. Alternative strategies should be put in place by these households for better coping with challenging conditions [64]. A positive correlation between crop labour and food security is found in Wote and Lushoto, whilst a negative one is found in Rakai. Food insecure households in Rakai have larger farms than the food secure ones, and therefore they tend to allocate more labour to crop activities. However, these results suggest their use and allocation of land may not be efficient.
Market orientation is not significantly related to food insecurity in Rakai, but we observe a decrease in food insecurity with an increase in market orientation for Wote and Lushoto. Thus, trade in local markets seems to be contributing in these sites to smoothing home consumption, increasing income and/or allowing for additional and alternative food purchases.
Having more domestic assets increases the likelihood of food insecurity in female-headed households. Domestic assets include goods that are used to process food for consumption (e.g. stoves), and those that aid communication and provide access to information (e.g. radios and mobile phones). Phones and radios are the most commonly used sources of information across the three sites [48, 65, 66]. Some of the local radio stations provide information on available seed varieties in the market, post-harvest crop handling tips, information on effective preservation of farm products, as well as weather reports. The radio is the major source of weather and climate-related information that is in most cases received by women [67]. However, recent research shows that access to agricultural-related information is largely structured by gender, and that when the information reaches women farmers they may either not see the need for a change or not have enough money and/or enough labour to implement changes [68].
The rest of the variables (livestock assets, productive assets, transport assets, land per capita, activity diversity, crop income, off-farm income) and all two-way interactions with gender and country were assessed for inclusion in the model and were not found to be statistically significant at the 0.1 significance level.
Our results suggest that food insecurity may not be more severe for female-headed households than male-headed households. In terms of the key factors contributing to food security, our results show that food secure households are largely those that have the greatest diversity of crops. Larger land size could potentially allow for more diversification, however, we also see that this does not often translate into an increase in food security. This particularly holds for female-headed households, where our evidence shows many farm management constraints, including fewer assets, less labour, fewer crop varieties, smaller household sizes, smaller income per capita and less market orientation.
We also found evidence that more domestic assets do not make female-headed households more food secure. For the other categories of assets (livestock, transport, and productive), we did not find evidence of a correlation with food security. Since assets have been found to be so key in helping understand poverty, this is somewhat surprising, but does reinforce that poverty is a process, not an event, and food security is also a complex issue that we are only beginning to understand in these types of environments.
Livestock labour is negatively correlated with food security, suggesting that the overall efficiency of the system should be improved and that therefore alternative strategies should be put in place (e.g., improving livestock husbandry and health; changing feeding practices; changing breeds; rotational grazing). A more in-depth analysis of livestock labour dynamics could help to better identify and target interventions to increase productivity.
What we are seeing is that different factors are important in terms of explaining variations in food security levels across the three sites. This means that site-specific characteristics and factors (agro-ecological zone, type of production system, socio-economic conditions, etc.) are important. Thus, improved targeting of food security or adaptation options to take into consideration these local conditions is critical.
What we have learned from examining the food secure households is that larger farms are not necessarily more food secure, even though these households do tend to have higher per capita total incomes. Food secure households typically devote more land to growing vegetables, starches, pulses, fruits as well as cereals than do the food insecure. Different livelihood portfolios are being pursued by male- and female-headed households, with the latter less likely to grow high-value crops, for example.
However, further research is needed to better understand intra-household characteristics and factors that underpin food security status from a gender perspective.
Other implications of our findings include the need for greater investment in specific actions and initiatives that are likely to contribute significantly to food security. These include, for example, those interventions that improve the targeting of information delivered to farming households, especially to the women within those households, and those that enhance access to new market opportunities.
FSR:
food security ratio
ILRI:
International Livestock Research Institute
CCAFS:
CGIAR Research Programme on Climate Change, Agriculture and Food Security
ICRAF:
World Agroforestry Centre
ICRISAT:
International Crops Research Institute for the Semi-Arid Tropics
CIFOR:
Center for International Forestry Research
Thornton PK, Jones PG, Owiyo T, Kruska RL, Herrero M, Kristjanson P, Notenbaert A, Bekele N, Omolo A. Climate change and poverty in Africa: mapping hotspots of vulnerability. AfJARE. 2008;2(1):24–44.
Thornton PK, Jones PG, Alagarswamy G, Andresen J. Spatial variation of crop yield response to climate change in East Africa. Global Environ Chang. 2009;19:54–65.
Schlenker W, Lobell DB. Robust negative impacts of climate change on African agriculture. Environ Res Lett. 2010;5:1-8.
Herrero M, Thornton PK, Notenbaert AM, Wood S, Msangi S, Freeman HA, Bossio D, Dixon J, Peters M, Van de Steeg J, Lynam J, Parthasarathy RP, Macmillan S, Gerard B, McDermott J, Seré C, Rosegrant M. Smart investments in sustainable food production: revisiting mixed crop-livestock systems. Science. 2010;327:822–5.
Beddington J, Asaduzzaman M, Fernandez A, Clark M, Guillou M, Jahn M, Erda L, Mamo T, Bo N Van, Nobre CA, Scholes R, Sharma R, Wakhungu J. Achieving food security in the face of climate change: summary for policy makers from the Commission on Sustainable Agriculture and Climate Change. Copenhagen, Denmark: CGIAR Research Program on Climate Change, Agriculture and Food Security (CCAFS); 2011. https://cgspace.cgiar.org/handle/10568/10701.
Vermeulen SJ, Challinor AJ, Thornton PK, Campbell BH, Eriyagama N, Vervoort JM, Kinyangi J, Jarvis A, Laderach P, Ramirez-Villegas J, Nicklin KJ, Hawkins E, Smith DB. Addressing uncertainty in adaptation planning for agriculture. Proc Natl Acad Sci. 2013;110:8357–62.
Kassie M, Ndiritu SW, Stage J. Gender inequalities and Food security in Kenya: application of exogenous switching treatment regression. World Dev. 2014;56:153–71.
Kristjanson P, Neufeldt H, Gassner A, Mango K, Kyazze FB, Desta S, Sayula G, Thiede B, Förch W, Thornton PK, Coe R. Are food insecure smallholder households making changes in their farming practices? Evidence from East Africa. Food Sec. 2012;4(3):381–97.
Bryan E, Ringler C, Okoba B, Roncoli C, Silvestri S, Herrero M. Adapting agriculture to climate change in Kenya: household strategies and determinants. J Environ Manage. 2013;114:26–35.
Branca G, Lipper L, McCarthy N, Jolejole MC. Food security, climate change, and sustainable land management. Agron Sustain Dev. 2013;33:635–50.
Silvestri S, Bryan E, Ringler C, Herrero M, Okoba B. Climate change perception and adaptation of agro-pastoral communities in Kenya. Reg Environ Chang. 2012;12(4):791–802.
Bryan E, Ringler C, Okoba B, Koo J, Herrero M, Silvestri S. Can agriculture support climate change adaptation, greenhouse gas mitigation and rural livelihoods? Insights from Kenya. Clim Chang. 2012;118:151–65.
Parry ML, Canziani OF, Palutikof JP. In: Parry ML, Canziani OF, Palutikof JP, van der Linden PJ, Hanson CE, editors. Climate change: impacts, adaptation and vulnerability contribution of working group II to the fourth assessment report of the intergovernmental panel on climate change. Cambridge: Cambridge University Press; 2007. p. 23–78.
Rufino MC, Thornton PK, Ng'ang'a SK, Mutie I, Jones P, van Wijk MT, Herrero M. Transitions in agro-pastoralist systems of East Africa: impacts on food security and poverty. Agric Ecosyst Environ. 2013;179:215–30.
Thornton PK, Jones PG, Alagarswamy G, Andresen J, Herrero M. Adapting to climate change: agricultural system and household impacts in East Africa. Agric Syst. 2010;103:73–82.
Barrett CB, Reardon T, Webb P. Nonfarm income diversification and household livelihood strategies in rural Africa: concepts, dynamics, and policy implications. Food Pol. 2001;26:315–31.
Jayne T, Yamano T, Weber MT, Tschirley D, Benfica R, Chapoto A. Smallholder income and land distribution in Africa: implications for poverty reduction strategies. Food Pol. 2003;28(3):253–75.
Perez C, Jones E, Kristjanson P, Cramer L, Thornton P, Förch W, Barahona C. How resilient are farming households, communities, men and women to a changing climate in Africa? CCAFS Working Paper no. 80. Copenhagen: CGIAR Research Program on Climate Change, Agriculture and Food Security (CCAFS); 2014.
Silvestri S, Rufino MC, Quiros C, Douxchamps S, Teufel N, Singh D, Mutie I, Ndiwa N, Ndungu A, Kiplimo J, Herrero M. ImpactLite surveys. Cambridge: CCAFS, Harvard Dataverse Network; 2014.
Rufino MC, Quiros C, Boureima M, Desta S, Douxchamps S, Herrero M, Kiplimo J, Lamissa D, Joash M, Moussa AS, Naab J, NdourY Sayula G, Silvestri S, Singh D, Teufel N, Wanyama I. Developing generic tools for characterizing agricultural systems for climate and global change studies (IMPACTlite-phase 2). Copenhagen: CGIAR Research Program on Climate Change, Agriculture and Food Security (CCAFS); 2012.
Silvestri S, Rufino MC, Quiros C, Douxchamps S, Teufel N, Singh D, Mutie I, Ndiwa N, Ndungu A, Kiplimo J, Herrero M. ImpactLite dataset. Copenhagen: CGIAR Research Program on Climate Change, Agriculture and Food Security (CCAFS); 2014.
Förch W, Sijmons K, Mutie I, Kiplimo J, Cramer L, Kristjanson P, Thornton P, Radeny M, Moussa A, Bhatta G. Core sites in the CCAFS regions; East Africa, West Africa and South Asia, version 3. Copenhagen: CGIAR Research Program on Climate Change, Agriculture and Food Security (CCAFS); 2013.
Sijmons K, Kiplimo J, Förch W, Thornton PK, Radeny M, Kinyangi J. CCAFS site atlas—Usambara/Lushoto. CCAFS site atlas series. Copenhagen: The CGIAR Research Program on Climate Change, Agriculture and Food Security (CCAFS); 2013.
Sijmons K, Kiplimo J, Förch W, Thornton PK, Radeny M, Kinyangi J. CCAFS site atlas—Kagera Basin/Rakai. CCAFS site atlas series. Copenhagen: The CGIAR Research Program on Climate Change, Agriculture and Food Security (CCAFS); 2013.
Sijmons K, Kiplimo J, Förch W, Thornton PK, Radeny M, Kinyangi J. CCAFS site atlas—Makueni/Wote. CCAFS site atlas series. Copenhagen: The CGIAR Research Program on Climate Change, Agriculture and Food Security (CCAFS); 2013.
Ericksen P, Thornton P, Notenbaert A, Cramer L, Jones P, Herrero M. Mapping hotspots of climate change and food insecurity in the global tropics. Copenhagen: CGIAR Research Program on Climate Change, Agriculture and Food Security (CCAFS); 2011.
Food and Agricultural Organization (FAO). Rome declaration on world food security and world food summit plan of action. Food and Agriculture Organization, Rome. http://www.fao.org/docrep/003/w3613e/w3613e00.HTM. Retrieved 10 June 2015.
Maxwell S, Smith M. Household food security: a conceptual review. In: Maxwell S, Frankenberger T, editors. Household food security: concepts, indicators, and measurements: a technical review. New York and Rome: UNICEF and IFAD; 1992.
Bashir MK, Schilizzi S. Determinants of rural household food security: a comparative analysis of African and Asian studies. J Sci Food Agric. 2013;93(6):1251–8.
Food and Agricultural Organization (FAO). Human energy requirements: Report of a Joint FAO/WHO/UNU Expert Consultation. Rome: FAO Food and Nutrition Technical Report Series No. 1; 2004.
Food and Agricultural Organization (FAO). North and South Gonder—Food security assessment in parts of the Tekeze River Watershed. Addis Ababa: United Nations Development Programme, Emergencies Unit for Ethiopia; 1999.
Beyene F, Muche M. Determinants of food security among rural households of Central Ethiopia: an empirical analysis. QJIA. 2010;49(4):299–318.
Smith J, Sones K, Grace D, MacMillan S, Tarawali S, Herrero S. Beyond meat, milk, and eggs: livestock's role in food and nutrition security. Anim Front. 2013;3(1):6–13.
Bryan E, Deressa TT, Gbetibouo GA, Ringler C. Adaptation to climate change in Ethiopia and South Africa: options and constraints. Environ Sci Polic. 2009;12(4):413–26.
Gittinger JP, Chernick S, Horenstein NR, Saito K. Household food security and the role of women. World Bank; 1990. (discussion papers).
Bill and Melinda Gates Foundation (BMGF). Agricultural development outcome indicators: initiative and sub-initiative progress indicators and pyramid of outcome indicators. Seattle, WA: BMGF; 2010.
Njuki J, Kaaria S, Sanginga P, Chamunorwa A, Chiuri W. Linking smallholder households to markets, gender and intra-household dynamics: does the choice of commodity matter? Eur J Dev Res. 2011;23:426–43.
Goshu D, Kassa B, Ketema M. Does crop diversification enhance household food security? Evidence from rural Ethiopia. AASER. 2012;2(11):503–15.
Alderman H, Paxson CH. Do the poor insure? A synthesis of the literature on risk and consumption in developing countries. World Bank Policy Research Working Paper 1008; 1992.
Jiggins J. Women and seasonality: coping with crisis and calamity. IDS Bull. 1986;17(1):9–18.
Thomas RB, Leatherman TL. Household coping strategies and contradictions in response to seasonal food shortage. Eur J Clin Nutr. 1990;44(1):103–11.
Chen MA. Coping with seasonality and drought. Thousand Oaks: Sage Publications; 1991.
International Fund for Agricultural Development (IFAD). Market orientation and household food security. Rome, Italy; 2014. http://www.ifad.org/hfs/thematic/rural/rural_4.htm#market. Retrieved 10 June 2015.
Feleke S, Kilmer RL, Gladwin C. Determinants of food security in Southern Ethiopia. Agric Econ. 2005;33:351–63.
Kraybill DS, Bashaasha B. Explaining poverty in Uganda: evidence from the Uganda National Household Survey. In: Omare MN, Makokha MO, Oluoch-Kosura W, editors. Shaping the future of African agriculture for development: role of social scientists. Proceedings of the Inaugural Symposium of African Association of Agricultural Economists, Nairobi; 2005. http://www.aaae-africa.org.
Kaloi E, Tayebwa B, Bashaasha B. Food security status of households in Mwingi district, Kenya. Afr Crop Sci Conf Proc. 2005;7:867–73.
Turyahabwe N, Kakuru W, Tweheyo M, Tumusiime DM. Contribution of wetland resources to household food security in Uganda. Agric Food Sec. 2013;2:5.
Onyango L, Mango J, Loo L, Odiwuor H, Mwangangi M, Mutua E, Mutuo T. Village Baseline study: site analysis report for Makueni–Wote, Kenya. Copenhagen: CGIAR Research Program on Climate Change, Agriculture and Food Security (CCAFS); 2013.
Place F, Njuki J, Murithi F, Mugo F. Agricultural enterprises and land management in the highlands of Kenya. In: Pender J, Ehui S, editors. Strategies for sustainable land management in the East African highlands. Washington: International Food Policy Research Institute; 2006.
Njuki J, Verdeaux F. Changes in land use and land management in the Eastern Highlands of Kenya: before land demarcation to the present. Nairobi: International Centre for Research on Agroforestry; 2001.
CCAFS. Annual Report 2014: climate-smart agriculture—acting locally, informing globally. Copenhagen: CGIAR Research Program on Climate Change, Agriculture and Food Security (CCAFS); 2015.
Gem Argwings-Kodhek, Kiiru MW, Tschirley D, Ochieng BA, Landan BW. Measuring income and the potential for poverty reduction in rural Kenya. Nakuru: Egerton University, Tegemeo Institute of Agricultural Policy and Development TAMPA; 2000.
Kristjanson P, Mango N, Krishna A, Radeny M, Johnson N. Understanding poverty dynamics in Kenya. J Int Devel. 2010;22(7):978–96.
Gichuki FN. Makueni district profile: rainfall variability, 1950–1997. Drylands Research Working Paper 2, Drylands Research; 2000.
Fischer G, Nachtergaele F, Prieler S, van Velthuizen HT, Verelst L, Wiberg D. Global Agro-ecological Zones Assessment for Agriculture (GAEZ 2008). Rome: IIASA, FAO; 2008.
Radeny M, Nkedianye D, Kristjanson P, Herrero M. Livelihood choices and returns among pastoralists: evidence from Southern Kenya. Nomadic Peoples. 2007;11(2):31–5.
Homewood KM, Trench PC, Brockington D. Pastoralist livelihoods and wildlife revenues in East Africa: a case for coexistence. Pastor Res Policy Pract. 2012;2:19.
Tache B, Oba G. Is poverty driving Borana herders in Southern Ethiopia to crop cultivation? Hum Ecol. 2010;38:639–49.
Matlon P, Kristjanson P. Farmer's strategies to manage crop risk in the West African semi-arid tropics. In: Unger PW, Jordan WR, Sneed TV, Jensen RW, editors. Challenges in dryland agriculture: a global perspective. Proceedings of the International Conference on Dryland Farming, Bushland. 1988; p. 604–606.
Milgroom J, Giller KE. Courting the rain: rethinking seasonality and adaptation to recurrent drought in semi-arid southern Africa. Agric Syst. 2013;118:91–104.
Njeru EM. Crop diversification: a potential strategy to mitigate food insecurity by smallholders in sub-Saharan Africa. J Agric Food Syst Community Dev. 2013;3(4):63–9.
Reardon T, Taylor EJ. Agro-climatic shock, income inequality, and poverty: evidence from Burkina Faso. World Dev. 1996;24(5):901–14.
Mbilinyi A, Ole Saibul G, Kazi V. Impact of climate change on small scale households: voices of households in village communities in Tanzania. ESRF discussion paper No. 47; 2013.
Mbilinyi A. Do we really know how climate change affects our livelihood? Evidences from village communities in rural Tanzania. ESRF Policy Brief No 3; 2013.
Onyango L, Mango J, Kurui Z, Wamubeyi B, Basisi H, Musoka E, Sayula G. Village baseline study: site analysis report for Usambara—Lushoto, Tanzania. Copenhagen: CGIAR Research Program on Climate Change, Agriculture and Food Security (CCAFS); 2012.
Onyango L, Mango J, Zziwa A, Kurui Z, Wamubeyi B, Sseremba O, Asiimwe J. Village baseline study: site analysis report for Kagera Basin—Rakai. Uganda. Copenhagen: CGIAR Research Program on Climate Change, Agriculture and Food Security (CCAFS); 2012.
Mwangangi M, Mutie M, Mango J. Summary of baseline household survey results: Makueni, Kenya. Copenhagen: CGIAR Research Program on Climate Change, Agriculture and Food Security (CCAFS); 2012.
Twyman J, Green M, Bernier Q, Kristjanson P, Russo S, Tall A, Ampaire E, Nyasimi M, Mango J, McKune S, Mwongera C, and Ndourba, Y. Adaptation actions in Africa: evidence that gender matters. CCAFS Working Paper no. 83. Copenhagen: CGIAR Research Program on Climate Change, Agriculture and Food Security (CCAFS); 2014.
SS contributed to survey design, conceived the study, performed most of the analysis, wrote most of the manuscript and led its development. PK contributed to survey design and the writing of the manuscript. CQ and MCR led the development of the household survey. SD, IM, AN, NN contributed to the analysis. JM, MH, LC, MCR, WF, MR, SD reviewed and made editorial comments on the draft of the manuscript. All authors read and approved the final manuscript.
We would like to acknowledge the field teams, householders and villagers, community leaders, and other CCAFS partners who helped with this collaborative research effort in the various sites. We would like to especially thank the CCAFS Regional Programme team for their support. We would like to thank in particular: Solomon Desta, Jusper Kiplimo, George Sayula, and Ibrahim Wanyama. We would also like to thank for their review of the statistics: Elinor Jones and Sam Dumble from the University of Reading.
CCAFS is funded by the CGIAR Fund, AusAid, Danish International Development Agency, Environment Canada, Instituto de Investigação Científica Tropical, Irish Aid, Netherlands Ministry of Foreign Affairs, Swiss Agency for Development and Cooperation, UK Aid, and the European Union, with technical support from the International Fund for Agricultural Development.
International Livestock Research Institute (ILRI), PO Box 3079, Nairobi, 00100, Kenya
Silvia Silvestri
, Mutie Ianetta
, Quiros F. Carlos
, Ndiwa Nicolas
& Rufino Mariana Cristina
International Livestock Research Institute (ILRI), C/o CIFOR, 06 BP 9478, Ouagadougou, Burkina Faso
Douxchamps Sabine
World Agroforestry Centre (ICRAF), United Nations Avenue, Gigiri, P.O. Box 30677, Nairobi, 00100, Kenya
Kristjanson Patti
, Ndungu Anthony
& Mango Joash
CGIAR Research Programme on Climate Change, Agriculture and Food Security (CCAFS), ILRI, PO Box 30709, Nairobi, 00100, Kenya
Förch Wiebke
& Radeny Maren
Commonwealth Scientific and Industrial Research Organization (CSIRO), 306 Carmody Road, St Lucia, QLD, 4067, Australia
Herrero Mario
International Crops Research Institute for the Semi-Arid Tropics (ICRISAT), PO Box 39063, 00623, Nairobi, Kenya
Claessens Lieven
Centre for International Forestry Research (CIFOR), c/o ICRAF, P.O Box 30677, Nairobi, 00100, Kenya
Rufino Mariana Cristina
Correspondence to Silvia Silvestri.
Additional file 1: Table S1. Mean and standard deviation for food secure and food insecure male and female-headed households in the three sites.
Additional file 2: Table S2. Land allocation (ha) for different crops (mean and standard deviation) for food secure (FS) and food insecure (FI) male (M) and female-headed (F) households in the three sites.
Silvestri, S., Sabine, D., Patti, K. et al. Households and food security: lessons from food secure households in East Africa. Agric & Food Secur 4, 23 (2015) doi:10.1186/s40066-015-0042-4
Livelihoods strategies
Income diversification
Female-headed households
|
CommonCrawl
|
The averaging of fuzzy hyperbolic differential inclusions
Topological stability in set-valued dynamics
July 2017, 22(5): 1977-1986. doi: 10.3934/dcdsb.2017116
On practical stability of differential inclusions using Lyapunov functions
Volodymyr Pichkur
Taras Shevchenko National University of Kyiv, Department of Computer Science and Cybernetics, Volodymyrska Str. 60,01033, Kyiv, Ukraine
Received January 2016 Revised February 2016 Published March 2017
In this paper we consider the problem of practical stability for differential inclusions. We prove the necessary and sufficient conditions using Lyapunov functions. Then we solve the practical stability problem of linear differential inclusion with ellipsoidal righthand part and ellipsoidal initial data set. In the last section we apply the main result of this paper to the problem of practical stabilization.
Keywords: Differential inclusion, practical stability, stabilization, Lyapunov function, set-valued analysis.
Mathematics Subject Classification: Primary: 34A60, 34D20; Secondary: 34H1.
Citation: Volodymyr Pichkur. On practical stability of differential inclusions using Lyapunov functions. Discrete & Continuous Dynamical Systems - B, 2017, 22 (5) : 1977-1986. doi: 10.3934/dcdsb.2017116
D. Angeli, B. Ingalls, E. D. Sontag and Y. Wang, Uniform global asymptotic stability of differential inclusions, Journal of Dynamical and Control Systems, 10 (2004), 391-412. doi: 10.1023/B:JODS.0000034437.54937.7f. Google Scholar
E. Arzarello and A. Bacciotti, On stability and boundedness for lipschitzian differential inclusions: The converse of Lyapunov's theorems, Set-Valued Analysis, 5 (1997), 377-390. doi: 10.1023/A:1008603707291. Google Scholar
J. P. Aubin and A. Cellina, Differential Inclusions. Set-Valued Maps and Viability Theory Berlin-Heidelberg-New York-Tokyo, Springer-Verlag, 1984. Google Scholar
J. P. Aubin and H. Frankowska, Set-valued Analysis Boston, Birkhäuser, 2009. Google Scholar
A. Bacciotti and L. Rosier, Liapunov Functions and Stability in Control Theory Berlin -Heidelberg -New York, Springer, 2005. Google Scholar
O. M. Bashnyakov, F. G. Garashchenko and V. V. Pichkur, Practical Stability, Estimations and Optimization, Kyiv : Taras Shevchenko National University of Kyiv, 2008. Google Scholar
A. N. Bashnyakov, V. V. Pichkur and I. V. Hitko, On Maximal Initial Data Set in Problems of Practical Stability of Discrete System, J. Automat. Inf. Scien., 43 (2011), 1-8. doi: 10.1615/JAutomatInfScien.v43.i3.10. Google Scholar
B. N. Bublik, F. G. Garashchenko and N. F. Kirichenko, Structural -Parametric Optimization and Stability of Bunch Dynamics, Kyiv: Naukova dumka, 1985. Google Scholar
N. G. Chetaev, On certain questions related to the problem of the stability of unsteady motion, J. Appl. Math. Mech., 24 (1960), 6-19. doi: 10.1016/0021-8928(60)90135-0. Google Scholar
K. Deimling, Multivalued Differential Equations Berlin-New York: Walter de Gruyter, 1992. Google Scholar
R. Gama and G. Smirnov, Stability and optimality of solutions to differential inclusions via averaging method, Set-Valued and Variational Analysis, 22 (2014), 349-374. doi: 10.1007/s11228-013-0261-4. Google Scholar
F. G. Garashchenko and V. V. Pichkur, Properties of optimal sets of practical stability of differential inclusions. Part Ⅰ. Part Ⅱ, (Russian), Problemy Upravlen. Inform., (2006), 163-170. Google Scholar
A. F. Filippov, Differential Equations with Discontinuous Righthand Sides Dordrecht-Boston-London: Kluwer Academic, 1988. Google Scholar
A. F. Filippov, Differential Equations with Discontinuous Righthand Sides and Differential Inclusions, in Nonlinear Analysis and Nonlinear Differential Equations (eds. V. A. Trenogin and A. F. Filippov), Moscow: FIZMATLIT, (2003), 265-288. Google Scholar
N. F. Kirichenko, Introduction to the Stability Theory, Kyiv: Vyshcha Shkola, 1978. Google Scholar
V. Lakshmikantham, S. Leela and A. A. Martynyuk, Practical Stability of Nonlinear Systems Singapore : World Scientific, 1990. Google Scholar
J. LaSalle and S. Lefschetz, Stability by Liapunov's Direct Method with Applications, Academic Press, New York, 1961. Google Scholar
A. Michel, K. Wang and B. Hu, Qualitative Theory of Dynamical Systems. The Role of Stability-Preserving Mappings, Marcel Dekker, Inc. , New York, 1995. Google Scholar
V. V. Pichkur and M. S. Sasonkina, Maximum set of initial conditions for the problem of weak practical stability of a discrete inclusion, J. Math. Sci., 194 (2013), 414-425. doi: 10.1007/s10958-013-1537-9. Google Scholar
G. Smirnov, Introduction to the Theory of Differential Inclusions, American Mathematical Society, 2002. Google Scholar
V. Veliov, Stability-like properties of differential inclusions, Set-Valued Analysis, 5 (1997), 73-88. doi: 10.1023/A:1008683223676. Google Scholar
Roger Metzger, Carlos Arnoldo Morales Rojas, Phillipe Thieullen. Topological stability in set-valued dynamics. Discrete & Continuous Dynamical Systems - B, 2017, 22 (5) : 1965-1975. doi: 10.3934/dcdsb.2017115
Xing Wang, Nan-Jing Huang. Stability analysis for set-valued vector mixed variational inequalities in real reflexive Banach spaces. Journal of Industrial & Management Optimization, 2013, 9 (1) : 57-74. doi: 10.3934/jimo.2013.9.57
Zhenhua Peng, Zhongping Wan, Weizhi Xiong. Sensitivity analysis in set-valued optimization under strictly minimal efficiency. Evolution Equations & Control Theory, 2017, 6 (3) : 427-436. doi: 10.3934/eect.2017022
Robert Baier, Thuy T. T. Le. Construction of the minimum time function for linear systems via higher-order set-valued methods. Mathematical Control & Related Fields, 2019, 9 (2) : 223-255. doi: 10.3934/mcrf.2019012
Yihong Xu, Zhenhua Peng. Higher-order sensitivity analysis in set-valued optimization under Henig efficiency. Journal of Industrial & Management Optimization, 2017, 13 (1) : 313-327. doi: 10.3934/jimo.2016019
Geng-Hua Li, Sheng-Jie Li. Unified optimality conditions for set-valued optimizations. Journal of Industrial & Management Optimization, 2019, 15 (3) : 1101-1116. doi: 10.3934/jimo.2018087
Dante Carrasco-Olivera, Roger Metzger Alvan, Carlos Arnoldo Morales Rojas. Topological entropy for set-valued maps. Discrete & Continuous Dynamical Systems - B, 2015, 20 (10) : 3461-3474. doi: 10.3934/dcdsb.2015.20.3461
Yu Zhang, Tao Chen. Minimax problems for set-valued mappings with set optimization. Numerical Algebra, Control & Optimization, 2014, 4 (4) : 327-340. doi: 10.3934/naco.2014.4.327
Qingbang Zhang, Caozong Cheng, Xuanxuan Li. Generalized minimax theorems for two set-valued mappings. Journal of Industrial & Management Optimization, 2013, 9 (1) : 1-12. doi: 10.3934/jimo.2013.9.1
Sina Greenwood, Rolf Suabedissen. 2-manifolds and inverse limits of set-valued functions on intervals. Discrete & Continuous Dynamical Systems, 2017, 37 (11) : 5693-5706. doi: 10.3934/dcds.2017246
Mariusz Michta. Stochastic inclusions with non-continuous set-valued operators. Conference Publications, 2009, 2009 (Special) : 548-557. doi: 10.3934/proc.2009.2009.548
Guolin Yu. Topological properties of Henig globally efficient solutions of set-valued problems. Numerical Algebra, Control & Optimization, 2014, 4 (4) : 309-316. doi: 10.3934/naco.2014.4.309
Zengjing Chen, Yuting Lan, Gaofeng Zong. Strong law of large numbers for upper set-valued and fuzzy-set valued probability. Mathematical Control & Related Fields, 2015, 5 (3) : 435-452. doi: 10.3934/mcrf.2015.5.435
Michele Campiti. Korovkin-type approximation of set-valued and vector-valued functions. Mathematical Foundations of Computing, 2021 doi: 10.3934/mfc.2021032
C. R. Chen, S. J. Li. Semicontinuity of the solution set map to a set-valued weak vector variational inequality. Journal of Industrial & Management Optimization, 2007, 3 (3) : 519-528. doi: 10.3934/jimo.2007.3.519
Guolin Yu. Global proper efficiency and vector optimization with cone-arcwise connected set-valued maps. Numerical Algebra, Control & Optimization, 2016, 6 (1) : 35-44. doi: 10.3934/naco.2016.6.35
Jiawei Chen, Zhongping Wan, Liuyang Yuan. Existence of solutions and $\alpha$-well-posedness for a system of constrained set-valued variational inequalities. Numerical Algebra, Control & Optimization, 2013, 3 (3) : 567-581. doi: 10.3934/naco.2013.3.567
Benjamin Seibold, Morris R. Flynn, Aslan R. Kasimov, Rodolfo R. Rosales. Constructing set-valued fundamental diagrams from Jamiton solutions in second order traffic models. Networks & Heterogeneous Media, 2013, 8 (3) : 745-772. doi: 10.3934/nhm.2013.8.745
Shay Kels, Nira Dyn. Bernstein-type approximation of set-valued functions in the symmetric difference metric. Discrete & Continuous Dynamical Systems, 2014, 34 (3) : 1041-1060. doi: 10.3934/dcds.2014.34.1041
Ying Gao, Xinmin Yang, Jin Yang, Hong Yan. Scalarizations and Lagrange multipliers for approximate solutions in the vector optimization problems with set-valued maps. Journal of Industrial & Management Optimization, 2015, 11 (2) : 673-683. doi: 10.3934/jimo.2015.11.673
|
CommonCrawl
|
Solve for: 369/43
Expression: $369\div43$
Find the approximate value
$369 \div 43 = \frac{369}{43} = 8\frac{25}{43} \approx 8.5814$
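As a quick sanity check (a hand-rolled Python sketch, not part of the original solver), integer division recovers both the mixed-number form and the decimal approximation:

```python
# Divide 369 by 43, expressing the result as a mixed number and a decimal.
numerator, denominator = 369, 43

whole, remainder = divmod(numerator, denominator)   # quotient 8, remainder 25
decimal = numerator / denominator                   # 8.58139534...

print(f"{numerator}/{denominator} = {whole} {remainder}/{denominator}")
print(f"approximately {round(decimal, 4)}")
```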
Evaluate: 4x+20y=-60
Evaluate: 3-7 * 12+16/4
Evaluate: log_{4}(g)=4.5
Solve for: cos((3pi)/(2))
Solve for: $\begin{cases} -4x = 6y + 10 \\ -12x + 9y = 30 \end{cases}$
Solve for: $\begin{cases} \frac{1}{3}x + \frac{1}{5}y = -\frac{14}{15} \\ \frac{1}{3}x + 3y = \frac{14}{3} \end{cases}$
Solve for: 5^{25543}/545^{49}
Evaluate: (x-3)/2 = (x+2)/3
Calculate: f(x)=2x^3-6x^2-2x+6
Calculate: (2)/(25)
A Step-By-Step Guide To Solving Differential Equations
The Best Apps and Sites for Learning Math
Top 10 Best Math Apps for Solving Tasks
How To Get The Most Out Of Math Apps
A Guide to Understanding Calculus
A powerful tool for solving complex problems in mathematics
A Guide To Working Through Simultaneous Equations
Achieving Academic Success With Math Apps
|
CommonCrawl
|
(Redirected from Roman numeral)
The system of Roman numerals is a numeral system originating in ancient Rome, and was adapted from Etruscan numerals. The system used in antiquity was slightly modified in the Middle Ages to produce the system we use today. It is based on certain letters which are given values as numerals:
I or i for one,
V or v for five,
X or x for ten,
L or l for fifty,
C or c for one hundred (centum),
D or d for five hundred, derived from halving the 1,000 Phi glyph (see below)
M or m for one thousand (mille), or the Greek letter Φ (Phi).
Roman numerals are commonly used today in numbered lists (in outline format), clockfaces, pages preceding the main body of a book, and the numbering of movie sequels.
For arithmetic involving Roman numerals, see Roman arithmetic and Roman abacus.
1 Origins
2 Zero
3 IIII or IV?
4 XCIX or IC?
5 Calendars and clocks
6 Year in Roman numerals
7 Other modern usage by English-speaking peoples
8 Modern non English speaking usage
9 Alternate forms
10 Table of Roman numerals
Although the Roman numerals are now written with letters of the Roman alphabet, they were originally separate symbols. They appear to derive from notches on tally sticks, such as those used by Italian and Dalmatian shepherds into the 19th century. Thus, the I was the basic score cut across the stick. Every fifth notch was double cut (⋀, ⋁, ⋋, ⋌, etc.), and every other Λ or V was cross cut (X). This gives the positional system: Eight on a counting stick was IIIIΛIII, but this could be written ΛIII (or VIII), with the first four notches assumed. Likewise, position four was the I that could be felt just before the cut of the V, and so could be written IIII or IV. Thus the system was not actually additive or subtractive in origin, but ordinal. When transferred to writing, the tally marks were easily identified with the existing letters I, V, X.
The tenth V or X received an extra stroke. Thus 50 was N, И, K, Ψ, ⋔, etc., but perhaps most often with a chicken-track shape like a superimposed V and I. This flattened to ⊥ (an inverted T) by the time of Augustus, and soon thereafter became identified with the graphically similar letter L. Likewise, 100 was Ж, ⋉, ⋈, H, etc. (or as 50 plus an extra stroke). The form Ж (that is, superimposed X and I) came to predominate, then was transformed through >I< and ƆIC to Ɔ or C, with C finally winning out because, as a letter, it stood for centum (Latin for 'hundred').
The hundredth V or X was marked with a box or circle. 500 was like a Ɔ superposed on a ⋌ or ⊢ (that is, like a Þ with a cross bar), becoming a struck-through D or a Ð by the time of Augustus, under the graphic influence of the letter D. It was then identified as the letter D. Meanwhile, 1000 was a circled ten, Ⓧ, ⊗, ⊕, and by Augustinian times was partially identified with the Greek letter Φ. It then evolved along several routes. Some, such as Ψ and CD (actually a reversed D adjacent to a regular D), were historical dead ends (although, because of CD, a folk etymology identified D for 500 as half of Φ), while two variants survive to this day: in one, ↀ became CIƆ, leading to the convention of parentheses to indicate multiplication by 1000 (later extended to ↁ, ↂ, etc.); in the other, ↀ became ∞ and ⋈, changed to M under the influence of the word mille ('thousand').
In general, the number zero did not have its own Roman numeral, but the concept of zero as a number was well known by all medieval computists (responsible for calculating the date of Easter). They included zero (via the Latin word nullae meaning nothing) as one of nineteen epacts, or the age of the moon on March 22. The first three epacts were nullae, xi, and xxii (written in minuscule or lower case). The first known computist to use zero was Dionysius Exiguus in 525, but the concept of zero was no doubt well known earlier. Only one instance of a Roman numeral for zero is known. About 725, Bede or one of his colleagues used the letter N, the initial of nullae, in a table of epacts, all written in Roman numerals.
A notation for the value zero is quite distinct from the role of the digit zero in a positional notation system. The lack of a zero digit prevented Roman numerals from developing into a positional notation, and led to their gradual replacement by Arabic numerals in the early second millennium.
IIII or IV?
The notation of Roman numerals has varied through the centuries. Originally, it was common to use IIII to represent "four", because IV represented the god Jove (and later YHWH). The subtractive notation (which uses IV instead of IIII) has become universally used only in modern times. For example, Forme of Cury, a manuscript from 1390, uses IX for "nine", but IIII for "four". Another document in the same manuscript, from 1381, uses IV and IX. A third document in the same manuscript uses both IIII and IV, and IX. Constructions such as IIX for "eight" have also been discovered. In many cases, there seems to have been a certain reluctance in the use of the less intuitive subtractive notation. Its use increased the complexity of performing Roman arithmetic, without conveying the benefits of a full positional notation system.
XCIX or IC?
Rules regarding Roman numerals often state that a symbol representing 10x may not precede any symbol larger than 10x+1. For example, C cannot be preceded by I or V, only by X (or, of course, by a symbol representing a value larger than C). Thus, one should represent the number "ninety-nine" as XCIX, not as the "shortcut" IC. However, these rules are not universally followed.
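The conventional rule is easy to encode as a lookup table. The following Python sketch (the function and table names are illustrative, not from any standard library) accepts XC as in XCIX but rejects the IC "shortcut":

```python
# Check whether a subtractive pair obeys the conventional rule: a
# power-of-ten symbol (I, X, C) may only precede the next one or two
# larger symbols (I before V/X, X before L/C, C before D/M).
VALUES = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}
ALLOWED_SUBTRACTIONS = {("I", "V"), ("I", "X"), ("X", "L"),
                        ("X", "C"), ("C", "D"), ("C", "M")}

def valid_pair(small: str, large: str) -> bool:
    """True if `small` may legally precede `large` in conventional notation."""
    if VALUES[small] >= VALUES[large]:
        return True  # not a subtractive pair at all
    return (small, large) in ALLOWED_SUBTRACTIONS

print(valid_pair("X", "C"))  # XC, as in XCIX for ninety-nine -> True
print(valid_pair("I", "C"))  # IC, the disallowed shortcut    -> False
```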
Calendars and clocks
Clock faces that are labelled using Roman numerals conventionally show IIII for 4 o'clock and IX for 9 o'clock, using the subtractive principle in one case and not in the other. There are several suggested explanations for this:
The four-character form IIII creates a visual symmetry with the VIII on the other side, which IV would not.
IIII was the preferred way for the ancient Romans to write 4, since they largely avoided subtraction.
It has been suggested that since IV is the first two letters of IVPITER, the main god of the Romans, it was not appropriate to use.
The number of symbols on the clock totals twenty Is, four Vs, and four Xs; so clock makers need only a single mould with five I's, a V, and an X in order to make the correct number of numerals for the clocks. The alternative uses seventeen Is, five Vs, and four Xs, possibly requiring several different moulds.
The I symbol would be the only symbol in the first 4 hours of the clock, the V symbol would only appear in the next 4 hours, and the X symbol only in the last 4 hours. This would add to the clock's radial symmetry.
IV is difficult to read upside down and on an angle, particularly at that location on the clock.
Louis XIV, king of France, preferred IIII over IV, ordered his clockmakers to produce clocks with IIII and not IV, and thus it has remained.
Year in Roman numerals
In seventeenth-century Europe, using Roman numerals for the year of publication of books was standard; they were used in many other places as well. Publishers attempted to make the number easier to read by those more accustomed to Arabic positional numerals. On British title pages, there were often spaces between the groups of digits: M DCC LXI is one example. This may have come from the French, who separated the groups of digits with periods, as in M.DCC.LXV. or M. DCC. LXV. Note the period at the end of the sequence; many countries did this for roman numerals in general, but not necessarily Britain.
These practices faded from general use before the start of the twentieth century, though the cornerstones of major buildings still occasionally use them. Roman numerals are today still used on building faces for dates: 2005 can be represented as MMV.
The film industry has used them perhaps since its inception to denote the year a film was made, so that it could be redistributed later, either locally or to a foreign country, without making it immediately clear to viewers what the actual date was. This became more useful when films were broadcast on television to partially conceal the age of films. From this came the policy of the broadcasting industry, including the BBC, to use them to denote the year in which a television program was made (the Australian Broadcasting Corporation has largely stopped this practice but still occasionally lapses).
Other modern usage by English-speaking peoples
Roman numerals remained in common use until about the 14th century, when they were replaced by Arabic numerals (thought to have been introduced to Europe from al-Andalus, by way of Arab traders and arithmetic treatises, around the 11th century). The use of Roman numerals today is mostly restricted to ordinal numbers, such as volumes or chapters in a book or the numbers identifying monarchs (e.g. Elizabeth II).
Sometimes the numerals are written using lower-case letters (thus: i, ii, iii, iv, etc.), particularly if numbering paragraphs or sections within chapters, or for the pagination of the front matter of a book.
Undergraduate degrees at British universities are generally graded using I, II.i, II.ii, III for first, upper second, lower second and third class respectively.
Modern English usage also employs Roman numerals in many books (especially anthologies), movies (e.g., Star Wars), sporting events (e.g., the Super Bowl), and historic events (e.g., World War I, World War II). The common unifying theme seems to be stories or events that are episodic or annual in nature, with the use of classical numbering suggesting importance or timelessness.
In music theory, scale degrees or diatonic functions are often identified by Roman numerals (as in chord symbols) as follows:
Roman numeral I II III IV V VI VII
Scale degree tonic supertonic mediant subdominant dominant submediant leading tone/subtonic
Modern non English speaking usage
The above uses are customary for English-speaking countries. Although many of them are also maintained in other countries, those countries have some additional uses for them which are unknown in English-speaking regions.
The French and the Spanish use capital roman numerals to denote centuries, e.g., 'XVIII' refers to the eighteenth century, so as not to confuse the first two digits of the century with the first two digits of most, if not all, of the years in that century. The Italians do not, instead referring to the digits in the years, e.g., quattrocento is their name for the fifteenth century. Some scholars in English-speaking countries prefer the French method, among them Lyon Sprague de Camp.
In Germany, Poland, and Russia, roman numerals were used in a method of recording the date. Just as an old clock recorded the hour by roman numerals while minutes were measured in arabic numerals, in this system, the month was in roman numerals while the day was in arabic numerals, e.g. 14-VI-1789 was June the fourteenth, 1789. It is by this method that dates are inscribed on the walls of the Kremlin, for example. This method has the advantage that days and months are not confused in rapid note-taking, and that any range of dates or months could be expressed in a mixture of arabic and roman numerals with no confusion, e.g., V-VIII is May to August, while 1-V-31-VIII is May first to August thirty-first.
But as the French use capital roman numerals to refer to the quarters of the year (e.g., 'III' is the third quarter), a practice which has apparently become standard in some European standards organizations (whereas American business uses 'Q3'), the aforementioned method of recording the date has had to switch to minuscule roman numerals, e.g., 4-viii-1961. (Later still, the ISO specified that dates should be given entirely in arabic numerals, which can lead to confusion.)
Alternate forms
In the Middle Ages, Latin writers used a horizontal line above a particular numeral to represent one thousand times that numeral, and additional vertical lines on both sides of the numeral to denote one hundred times the number, as in these examples:
<math>\mathrm{\bar{I}}</math> for one thousand
<math>\mathrm{\bar{V}}</math> for five thousand
<math>\mathrm{\bar{|I|}}</math> for one hundred thousand
<math>\mathrm{\bar{|V|}}</math> for five hundred thousand
The same overline was also used with a different meaning, to clarify that the characters were numerals.
Sometimes 500, usually D, was written as I followed by an apostrophus, resembling a backwards C (Ɔ), while 1,000, usually M, was written as CIƆ. This is believed to be a system of encasing numbers to denote thousands (imagine the Cs as parentheses). This system has its origins from Etruscan numeral usage. The D and M symbols to represent 500 and 1,000 were most likely derived from IƆ and CIƆ, respectively.
An extra Ɔ denoted 500, and multiple extra Ɔs are used to denote 5,000, 50,000, etc. For example:
Base Number:
CIƆ = 1,000 CCIƆƆ = 10,000 CCCIƆƆƆ = 100,000
1 extra Ɔ:
IƆ = 500 CIƆƆ = 1,500 CCIƆƆƆ = 10,500 CCCIƆƆƆƆ = 100,500
2 extra Ɔs:
IƆƆ = 5,000 CCIƆƆƆƆ = 15,000 CCCIƆƆƆƆƆ = 105,000
IƆƆƆ = 50,000 CCCIƆƆƆƆƆƆ = 150,000
Sometimes CIƆ was reduced to an infinity symbol (<math>\infty</math>) for denoting 1,000. John Wallis is often credited with introducing this symbol to represent infinity, and one conjecture is that he based it on this usage, since 1,000 was used hyperbolically to represent very large numbers.
In medieval times, before the letter j emerged as a distinct letter, a series of letters i in Roman numerals was commonly ended with a flourish; hence they actually looked like ij, iij, iiij, etc. This proved useful in preventing fraud, as it was impossible, for example, to add another i to vij to get viij. This practice is now merely an antiquarian's note; it is never used. (It did, however, lead to the Dutch diphthong IJ.)
Table of Roman numerals
The "modern" Roman numerals, post-Victorian era, are shown below:
none none 0 There was no need for a zero.
I Ⅰ 1
II ⅠⅠ (or Ⅱ) 2
III ⅠⅠⅠ (or Ⅲ) 3
IV ⅠⅤ (or Ⅳ) 4 IIII (ⅠⅠⅠⅠ) is still used on clock and card faces.
V Ⅴ 5
VI ⅤⅠ (or Ⅵ) 6
VII ⅤⅠⅠ (or Ⅶ) 7
VIII ⅤⅠⅠⅠ (or Ⅷ) 8
IX ⅠⅩ (or Ⅸ) 9
X Ⅹ 10
XI ⅩⅠ (or Ⅺ) 11
XII ⅩⅠⅠ (or Ⅻ) 12
XIII ⅩⅠⅠⅠ 13
XIV ⅩⅠⅤ 14
XV ⅩⅤ 15
XIX ⅩⅠⅩ 19
XX ⅩⅩ 20
XXX ⅩⅩⅩ 30
XL ⅩⅬ 40
L Ⅼ 50
LX ⅬⅩ 60
LXX ⅬⅩⅩ 70 The abbreviation for the Septuagint
LXXX ⅬⅩⅩⅩ 80
XC ⅩⅭ 90
C Ⅽ 100 This is the origin of using the slang term "C-bill" or "C-note" for "$100 bill".
CC ⅭⅭ 200
CD ⅭⅮ 400
D Ⅾ 500 Derived from I Ↄ, or half of the alternative symbol for 1000, see above.
DCLXVI ⅮⅭⅬⅩⅤⅠ 666 Using every basic symbol but M once gives the beast number.
CM ⅭⅯ 900
M Ⅿ 1000
MCMXLV ⅯⅭⅯⅩⅬⅤ 1945
MCMXCIX ⅯⅭⅯⅩⅭⅠⅩ 1999 Shortcuts like IMM and MIM disagree with the rule stated above
MM ⅯⅯ 2000
MMM ⅯⅯⅯ 3000
ↁ ⅠↃↃ 5000 I followed by two reversed C, an adapted Chalcidic sign
An accurate way to write large numbers in Roman numerals is to handle first the thousands, then hundreds, then tens, then units.
Example: the number 1988.
One thousand is M, nine hundred is CM, eighty is LXXX, eight is VIII.
Put it together: MCMLXXXVIII (ⅯⅭⅯⅬⅩⅩⅩⅤⅠⅠⅠ).
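The thousands-then-hundreds-then-tens-then-units procedure above translates directly into a greedy conversion. A minimal Python sketch (names are illustrative; valid for 1-3999 in the conventional subtractive notation):

```python
# Convert an integer (1-3999) to conventional Roman numerals by handling
# thousands first, then hundreds, then tens, then units.
DIGITS = [
    (1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
    (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
    (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I"),
]

def to_roman(n: int) -> str:
    """Greedily consume the largest remaining value at each step."""
    parts = []
    for value, symbol in DIGITS:
        count, n = divmod(n, value)
        parts.append(symbol * count)
    return "".join(parts)

print(to_roman(1988))  # MCMLXXXVIII
print(to_roman(1999))  # MCMXCIX
```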
Unicode has a number of characters specifically designated as Roman numerals, as part of the Number Forms range from U+2160 to U+2183. For example, MCMLXXXVIII could alternatively be written as ⅯⅭⅯⅬⅩⅩⅩⅤⅠⅠⅠ. This range includes both upper- and lowercase numerals, as well as pre-combined glyphs for numbers up to 12 (Ⅻ or XII), mainly intended for clock faces, for compatibility with non-West-European languages. The pre-combined glyphs should only be used to represent the individual numbers where the use of individual glyphs is not wanted, and not to replace compounded numbers. Similarly, precombined glyphs for 5000 and 10000 exist.
The Unicode characters are present only for compatibility with other character standards which provide these characters; for ordinary uses, the regular Latin letters are preferred. Displaying these characters requires a user agent that can handle Unicode and a font that contains appropriate glyphs for them.
After the Renaissance, the Roman system could also be used to write chronograms. It was common to put in the first page of a book some phrase, so that when adding the I, V, X, L, C, D, M present in the phrase, the reader would obtain a number, usually the year of publication. The phrase was often (but not always) in Latin, as chronograms can be rendered in any language that utilises the Roman alphabet.
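A chronogram's value is simply the sum of the numeral letters appearing in the phrase, with all other letters ignored. A small Python sketch (the example phrase is one commonly cited as a chronogram for the year 1666):

```python
# Sum the Roman-numeral letters hidden in a phrase to recover its
# chronogram value; non-numeral characters contribute nothing.
VALUES = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def chronogram_value(phrase: str) -> int:
    """Add up every numeral letter in the phrase, ignoring the rest."""
    return sum(VALUES.get(ch, 0) for ch in phrase.upper())

# L + D + V + M + C + I + V + V = 50 + 500 + 5 + 1000 + 100 + 1 + 5 + 5
print(chronogram_value("LorD haVe MerCIe Vpon Vs"))  # 1666
```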
Roman Numeral Conversion Calculator and Self Test (http://ostermiller.org/calc/roman.html)
FAQ: Roman IIII vs. IV on Clock Dials (http://www.ubr.com/clocks/faq/iiii.html)
Why do clocks with Roman numerals use "IIII" instead of "IV"? (http://www.straightdope.com/classics/a2_153) (from The Straight Dope)
Retrieved from "http://academickids.com/encyclopedia/index.php/Roman_numerals"
Categories: Ancient Rome
|
CommonCrawl
|
Using symmetry to elucidate the importance of stoichiometry in colloidal crystal assembly
Nathan A. Mahynski ORCID: orcid.org/0000-0002-0008-87491 na1,
Evan Pretti2,
Vincent K. Shen1 &
Jeetain Mittal ORCID: orcid.org/0000-0002-9725-64022 na1
Condensed-matter physics
Theory and computation
We demonstrate a method based on symmetry to predict the structure of self-assembling, multicomponent colloidal mixtures. This method allows us to feasibly enumerate candidate structures from all symmetry groups and is many orders of magnitude more computationally efficient than combinatorial enumeration of these candidates. In turn, this permits us to compute ground-state phase diagrams for multicomponent systems. While tuning the interparticle potentials to produce potentially complex interactions represents the conventional route to designing exotic lattices, we use this scheme to demonstrate that simple potentials can also give rise to such structures which are thermodynamically stable at moderate to low temperatures. Furthermore, for a model two-dimensional colloidal system, we illustrate that lattices forming a complete set of 2-, 3-, 4-, and 6-fold rotational symmetries can be rationally designed from certain systems by tuning the mixture composition alone, demonstrating that stoichiometric control can be a tool as powerful as directly tuning the interparticle potentials themselves.
In order to design colloidal systems which self-assemble into crystals of arbitrary complexity, the interparticle interactions between colloids are typically treated as degrees of freedom to be optimized1,2,3. In practice, this tuning can be achieved through various means, including particle charge, shape, and functionalization4,5,6,7,8,9,10. The breadth of this design space can be appealing, and previous research efforts have yielded a wide range of different structures via this route11. Unfortunately, interactions which may be theoretically optimal for creating a given target structure are often quite complex, involving multiple length scales and inflections at relatively large distances, making them difficult to realize experimentally. As an alternative approach to using a single colloidal component with a complex interaction potential, exotic lattices may be assembled using multiple components with a set of relatively simple pairwise potentials. A system in which each particle is unique is said to have addressable complexity12,13,14,15. In this particular case, it is necessary to select the mixture composition to provide precisely the correct number of components to assemble each structure.
For multicomponent mixtures in general, however, composition is a tunable thermodynamic parameter which is often overlooked in the context of self-assembly. Recent work has shown that stoichiometry can be exploited to make adjustments to the outcome of equilibrium self-assembly of binary mixtures of DNA-functionalized particles (DFPs)16,17,18. DFP systems provide a particularly useful framework to study these effects in multicomponent mixtures known as the multi-flavoring motif18, which can be used to readily control the relative strengths of different pairwise interactions experimentally. However, the implications of stoichiometric control on stabilizing new phases, especially with an increasing number of components, have yet to be fully understood. It is especially unclear if changing stoichiometry alone can be used to direct assembly into different structures in the same way as changing pairwise interactions, since this requires knowledge of the phase diagram for each system of interest. In principle, if all possible structures which could appear on a phase diagram for a system were known a priori, their relative free energies could be calculated and the diagram constructed; yet, such a library of structures is difficult to obtain and its completeness is often unclear.
Indeed, predicting the stable crystal structure of a set of known constituents remains an outstanding challenge in condensed matter physics19,20 and is a predominant barrier to the rational design of functional materials. Numerous mathematical and computational approaches have been developed to make this problem tractable, including random structure searching21,22, optimization and Monte Carlo tools23,24,25,26,27,28,29, evolutionary algorithms30,31,32,33,34,35, and machine learning36. While powerful, the stochastic nature of these methods means that it is not possible to guarantee all relevant configurations and different symmetries have been considered. In certain cases where entropic considerations are significant, candidate structures can be found via direct enumeration schemes based on packing17. Complex network materials such as metal-organic frameworks, zeolites and other silicates, and carbon polymorphs often require more rigorous approaches and have been fruitfully enumerated through the use of topological methods37,38 to identify crystalline nets and assess their chemical feasibility20,39,40,41,42,43. To our knowledge, however, such techniques have yet to be leveraged to explore multicomponent colloidal crystals.
To this end, we present a method based on symmetry to easily enumerate and refine candidate crystalline lattices with any number of components: one of the primary barriers to investigating the impact of stoichiometry on equilibrium self-assembly. We consider two-dimensional systems in this work to readily demonstrate the nature of our method; however, we emphasize that it is general and extensible to three-dimensional crystalline systems as well. Furthermore, there are many important technological applications for ordered two-dimensional materials including interfacial films, monolayers, porous mass separating agents, and structured substrates which require careful tuning of their crystalline structure44,45,46,47. Epitaxial growth and layer-by-layer assembly also require a detailed understanding of two-dimensional precursors to grow three-dimensional crystals48,49,50. By combining geometric information from symmetry groups with stoichiometric constraints, it is possible to more systematically search the energy landscapes of colloidal systems for candidate structures than with stochastic optimization methods alone. Ground state phase diagrams may thus be computed with relative ease and without a priori knowledge of possible configurations. Our results reveal how stoichiometry, without any changes to pairwise interactions, can be used to rationally control the symmetry of the resulting crystal lattices. We demonstrate how enthalpically dominated colloidal systems with only two or three components, interacting with simple isotropic potentials, can give rise to a wide range of structures, and how selection between close-packed and open structures can be performed by changing composition alone. Furthermore, the generality of our method suggests this tactic is applicable to a range of experimentally realizable colloidal systems and can provide useful routes to complex structures for the design of advanced materials.
Combining symmetry and stoichiometry
In order to understand how symmetry can be employed to aid in multicomponent crystal structure prediction, consider a primitive cell with periodic boundary conditions, as is typically employed for molecular simulations (cf. Fig. 1a). We may consider discretizing this cell into nodes upon which particles can be placed—although this is only an approximation to the continuous nature of configuration space, this assumption proves very convenient for generating candidate cells and will be relaxed later to make the method fully general. Generating all configurations for a multicomponent mixture on such a grid is effectively impossible for all but the smallest grids due to a combinatorial explosion of the number of possibilities as the size of the cell increases32,51,52. For instance, in our two-dimensional example, a discretization of the unit cell with area A into equal subunits of size δ2 leads to Nconfig total configurations:
$$N_{\mathrm{config}} = \frac{(A/\delta^2)!}{(A/\delta^2 - N_{\mathrm{tot}})! \, \prod_i N_i!},$$
where Ni is the number of i-type species (such that \(N_{\mathrm{tot}} = \sum_i N_i\)).
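To make this scaling concrete, Eq. (1) can be evaluated directly. The following sketch (function and variable names are ours, chosen for illustration) shows how quickly the count grows for a 1:1 binary mixture as the grid is refined:

```python
from math import factorial, prod

def n_config(n_sites: int, counts: list[int]) -> int:
    """Eq. (1): distinct placements of the species in `counts` on a grid
    of n_sites nodes: n_sites! / ((n_sites - N_tot)! * prod_i N_i!)."""
    n_tot = sum(counts)
    return factorial(n_sites) // (
        factorial(n_sites - n_tot) * prod(factorial(n) for n in counts)
    )

# Combinatorial explosion with grid side length for a 1:1 binary mixture,
# placing `side` colloids of each type on a side x side grid:
growth = {side: n_config(side * side, [side, side]) for side in (4, 6, 8)}
```

Even at modest grid sizes the count reaches astronomically large values, which is precisely why the symmetry-based reduction described below is needed.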
Summary of candidate enumeration strategy. a Discretization of a periodic p1 primitive cell into nodes. Symmetry introduces boundary conditions which constrain the placement of particles on the grid as if it were mapped onto the surface of a torus. This represents the orbifold of the p1 wallpaper group. b Fundamental domain and primitive cell for the p6 group; edge nodes are circled in gray on the right and will be wrapped on top of symmetrically equivalent nodes on the left. Here there are 6 node categories we could consider, which may be reduced to 4 if ones with the same stoichiometric contribution are combined. c Representative solutions to the CSP for the p6 lattice shown in b for a binary mixture with a 1:2 stoichiometric ratio of components. Images on the left are drawn from the solutions with the lowest 5% of realizations, and those on the right from the solution with the most realizations. The blue histogram represents the case where node categories with the same stoichiometric contribution are combined, while the orange corresponds to when they are kept separate; the total number of solutions is the same for both
However, all two-dimensional crystals may be classified into one of 17 different planar symmetry groups, known as wallpaper groups37,38,53. In three dimensions, 230 space groups are required to describe all unique symmetries. Wallpaper groups describe the set of unique combinations of isometries (translation, rotation, and reflection) of the Euclidean plane containing two linearly independent translations. These operations act on a tile, or fundamental domain, to tessellate the plane. In addition to the p1 wallpaper group corresponding to the conventional periodic simulation cell discussed above, 16 additional groups exist with differing symmetries: a detailed summary of these groups and their fundamental domains is available in Supplementary Tables 1 and 2, and elsewhere53. Topology provides a compact representation of each group, known as an orbifold, which describes how to fold or wrap the fundamental domain to superimpose all equivalent nodes (cf. Supplementary Fig. 1)38,54. For p1, this is a torus; Fig. 1a demonstrates that wrapping a grid onto it brings nodes on separate edges and corners into contact, effectively enforcing boundary conditions and constraining how particles may be positioned.
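The torus identification for p1 can be sketched with a simple modulo map (a toy illustration with hypothetical helper names, for a square grid of n subdivisions per side): nodes on opposite edges, and all four corners, collapse onto single independent representatives.

```python
def p1_canonical(i: int, j: int, n: int) -> tuple[int, int]:
    """Map a node (i, j) of an (n+1) x (n+1) grid (nodes on all edges)
    to its canonical representative under p1 torus wrapping: opposite
    edges of the primitive cell are identified, so indices wrap modulo n."""
    return (i % n, j % n)

# All four corner nodes of a 4-subdivision cell are one and the same node:
corners = {p1_canonical(i, j, 4) for (i, j) in [(0, 0), (0, 4), (4, 0), (4, 4)]}
```

The other 16 groups impose richer identifications (reflections and rotations in addition to translations), but the principle is the same: boundary nodes are merged into equivalence classes before any particles are placed.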
For each group there is a different set of connected fundamental domains that form the primitive cell, which contains the group's symmetries and may be used to cover the plane by translation operations alone. In groups other than p1, between 2 and 12 fundamental domains comprise the primitive cell53; thus, only a fraction of the primitive cell contains the independent configurational degrees of freedom in those groups, enabling a significant reduction of A in Eq. (1). Consider for example p6, in Fig. 1b, in which the fundamental domain is triangular and has one sixth the area of the primitive cell. Furthermore, a large proportion of nodes are now found on the edges and corners, where symmetry-imposed boundary conditions cause some nodes to become equivalent to others. Our method leverages this, along with constraints due to stoichiometry, to achieve a significant computational advantage over the brute-force, combinatorial search method which uses only the p1 group. While colloids placed on face nodes are entirely contained within the domain, those at edge or corner nodes contribute only a fraction to its contents since they will be shared across multiple adjacent domains. Symmetrically equivalent boundary nodes may be collapsed to a single effective node with a net contribution equal to the sum of its equivalent nodes. Placing colloids over each group's fundamental domain may then be reduced to a constraint satisfaction problem (CSP) in which the sum of the contributions from nodes where different colloid types are placed must satisfy a desired stoichiometric ratio (cf. Methods). The CSP is, in general, underspecified and admits many different solutions; each solution specifies how many of each type of colloid to place in different categories of nodes. For a k-component system with n different node categories, the number of realizations of each different CSP solution, W, is
$$W = \prod_{j = 1}^{n} \frac{C_j!}{\left(C_j - \sum\nolimits_{i = 1}^{k} m_{i,j}\right)! \, \prod\nolimits_{i = 1}^{k} m_{i,j}!},$$
where Cj refers to the number of nodes belonging to category j, and mi,j refers to the number of colloids of type i assigned to nodes in that category. As a representative example, Fig. 1c shows the resulting solutions for a 1:2 stoichiometric ratio in a binary system for the p6 group.
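Eq. (2) can likewise be evaluated directly; the sketch below (names are ours) computes W given the node-category sizes Cj and a CSP solution mi,j:

```python
from math import factorial, prod

def realizations(C: list[int], m: list[list[int]]) -> int:
    """Eq. (2): number of realizations W of one CSP solution.
    C[j]    = number of nodes in category j
    m[i][j] = number of type-i colloids assigned to category j."""
    k, n = len(m), len(C)
    W = 1
    for j in range(n):
        occupied = sum(m[i][j] for i in range(k))
        W *= factorial(C[j]) // (
            factorial(C[j] - occupied) * prod(factorial(m[i][j]) for i in range(k))
        )
    return W
```

For example, a single category of 4 nodes holding one colloid of each of two types admits 4!/(2!·1!·1!) = 12 realizations, matching a direct count.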
Equation (2) is very similar to Eq. (1), and W will also undergo a combinatorial explosion if Cj, the number of nodes in a category, j, is very large. However, relative to Eq. (1) this explosion is delayed by two factors. First, we have used symmetry to reduce the degrees of freedom by considering only the fundamental domain, which can be as little as one twelfth of the total primitive cell area. Second, we have reduced these degrees of freedom even further by using the symmetry of each group to remove edge nodes within a fundamental domain's lattice which are not independent. The second condition plays a significant role when the number of edge nodes relative to those on the face is large.
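The CSP itself can be illustrated with a brute-force toy enumerator for a binary mixture (a sketch only, not the production solver; all names are ours). Contributions are given in integer units — e.g., in halves, so that an edge node shared between two domains contributes 1 while a face node contributes 2 — which keeps the arithmetic exact:

```python
from itertools import product

def csp_solutions(contrib: list[int], total: list[int],
                  ratio: tuple[int, int]) -> list[tuple]:
    """Enumerate binary CSP solutions: assignments of colloid counts to
    node categories whose summed contributions hit a target stoichiometry.
    contrib[j]: net contribution of one colloid at a category-j node
    total[j]:   number of available nodes in category j
    ratio:      target (type-1, type-2) contributions per primitive cell."""
    sols = []
    for m1 in product(*(range(t + 1) for t in total)):
        for m2 in product(*(range(t + 1) for t in total)):
            if any(a + b > t for a, b, t in zip(m1, m2, total)):
                continue  # a category cannot hold more colloids than nodes
            s1 = sum(a * c for a, c in zip(m1, contrib))
            s2 = sum(b * c for b, c in zip(m2, contrib))
            if (s1, s2) == ratio:
                sols.append((m1, m2))
    return sols

# Two categories: face nodes contributing 2 half-units, edge nodes 1:
sols = csp_solutions([2, 1], [2, 4], (2, 4))
```

Each returned pair (m1, m2) is one CSP solution, whose number of realizations is then given by Eq. (2).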
Enumerating structures
Combining symmetry and stoichiometry to cast the structure prediction problem as a CSP permits the tractable enumeration of crystalline configurations satisfying a given stoichiometric ratio up to moderately sized primitive cells. To see this, one may compute the number of nodes per edge of the fundamental domain for each group such that the nodal density approaches, but does not exceed, that of a chosen p1 reference cell. This reference cell is assumed to have Ng nodes per edge and represents the case where no internal symmetry is present so that configurations are generated combinatorially without constraint. In our approach, an equally weighted average over all groups suggests that when Ng ≈ 8 the total number of edge nodes will be equal to the number of face nodes (cf. Methods). For fundamental domains smaller than this, we expect that boundary symmetry for the groups will play a dominant role in determining valid configurations in the CSP. Taking the spatial discretization to also be on the order of the colloidal diameter, δ ~ σ, the limiting p1 fundamental domain is on the order of A ~ 8σ × 8σ. This is as large as boxes used to simulate many coarse-grained or colloidal fluids, implying that the upper bound for the primitive cell that can be feasibly generated with this method is reasonably large.
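As a simplified, single-domain sanity check of this estimate (the paper's figure of Ng ≈ 8 is an equally weighted average over all groups; the square-grid model below is our own stand-in), one can count edge versus face nodes on a square ng × ng grid and locate the crossover:

```python
def edge_face_counts(ng: int) -> tuple[int, int]:
    """Edge vs. face node counts for a square ng x ng grid of nodes
    (a simplified stand-in for a single fundamental domain)."""
    edge = 4 * ng - 4
    face = (ng - 2) ** 2
    return edge, face

# Smallest grid at which face nodes overtake edge nodes:
crossover = next(n for n in range(3, 20)
                 if edge_face_counts(n)[1] >= edge_face_counts(n)[0])
```

The crossover lands at ng = 7 for this square-domain toy model, in line with the group-averaged Ng ≈ 8 quoted above.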
Examples of binary lattices generated by this scheme are presented in Fig. 2, along with a more concrete analysis of how it leads to a reduction in the number of possible configurations. Ternary examples can be found in the SI. In these cases, we have also allowed the p1 reference cell parallelogram to be sheared to 4 different angles α ∈ {π/2, π/3, π/4, π/6} so that the resulting lattice is commensurate with other symmetries. Compared to p1, our approach to systematically enumerate non-trivial lattices over a similar area, i.e., size of primitive cell, for the other 16 wallpaper groups results in far fewer crystalline candidates that need to be considered. As anticipated, the total number of configurations does grow combinatorially at large Ng, which is dominated by lattices with a small number of fundamental domains per primitive cell (cf. SI); however, for \(N_{\mathrm{g}} \ \lesssim \ 8\), the total number of configurations is quite tractable.
Enumeration of solutions to the constraint satisfaction problem (CSP). The total number of configurations, representing the sum of all realizations of all solutions to the CSP for all of the wallpaper groups besides p1, is given by the dashed blue line for a 1:1 binary mixture at various Ng. When not constrained by the symmetry of a given group, we set the sides of its fundamental domain equal to each other and α = π/2. The solid blue line is the number of solutions for a total of 4 different p1 cells, each with different angles in their fundamental domain; the green line is the ratio between the two blue ones. Randomly chosen configurations for each group are also depicted which have been scaled to contact for equally sized colloids. A breakdown of the number of solutions each group contributes is also provided for representative Ng values and stoichiometries. Above, the p6m group's solution has been scaled to contact assuming different diameters for the red colloids to illustrate how the same pattern can be used for differently sized colloids
For the binary system with a 1:1 stoichiometry shown in Fig. 2 there are fewer than \(10^9\) configurations, compared to an equivalent combinatorial search with the p1 cell, which results in \({\cal{O}}(10^{22})\) candidates when Ng = 8; this represents a reduction of over 13 orders of magnitude. A similar reduction occurs with ternary systems as well (cf. Supplementary Fig. 2). In both cases, the 1:1(:1) stoichiometry generates the most possible candidates; all other stoichiometries we investigated produced fewer solutions to the CSP, and thus \(10^9\) configurations serves as a benchmark. A breakdown of these configurations into different groups is also shown, illustrating that for sufficiently small Ng it is not possible to observe certain stoichiometries, which is expected from a packing perspective. It is important to point out that the structures resulting from the 16 groups besides p1 are, in principle, a subset of the configurations resulting from the random search. This small subset composed of the other 16 groups contains additional symmetry beyond translation alone; our method simply enables those configurations to be found directly rather than by searching over all combinatorial realizations of where to place different colloids.
Building phase diagrams
To engineer the assembly of multicomponent mixtures, their equilibrium phase behavior must be understood. We now illustrate how phase diagrams can be computed using this enumeration scheme. Specifically, we have applied this methodology to probe the self-assembly of monodisperse colloidal monolayers formed from systems inspired by the multi-flavoring motif used in DFP assembly; this scheme enables all pairwise interactions in the system to become independent of one another, qualitatively ranging from being attractive to repulsive. In the limit of strong binding, the ground state (T* → 0) serves as a reasonable approximation of the thermodynamically stable state55. Multi-flavored binary mixtures of colloids dominated by enthalpic interactions are known to exhibit a wide variety of morphologies, both experimentally and theoretically18,55; however, the full impact of stoichiometry on the thermodynamics of their self-assembly is not yet understood. Here we employ a simplified model (cf. Methods and Fig. 3a) to capture the tunability of the adhesiveness of arbitrary species pairs via a single parameter, λi,j, which ranges from −1 (repulsive) to +1 (attractive). This allows our model to maintain relevance beyond the specific case of DNA-mediated interactions; however, we emphasize that these kinds of interactions can be realized in various DFP systems, and that experimental results in such systems are consistent with simulations employing potentials with pairwise tunable interactions18. Other, non-multi-flavored experimental DFP systems have also been successfully modeled with similar potential forms56.
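The exact functional form of our model is given in Methods; as an illustrative stand-in only (this specific construction is our assumption, not the authors' potential), a Weeks–Chandler–Andersen split of the Lennard–Jones potential with a λ-scaled attractive tail reproduces the qualitative behavior sketched in Fig. 3a: λ = +1 recovers the full attraction, λ = −1 is purely repulsive, and the potential remains continuous at the split point.

```python
import math

def pair_potential(r: float, lam: float, eps: float = 1.0, sigma: float = 1.0) -> float:
    """Illustrative tunable pair potential (NOT the paper's exact form):
    a WCA repulsive core plus a lambda-scaled LJ attractive tail, so the
    well depth varies linearly between +eps (lam=-1) and -eps (lam=+1)."""
    rmin = 2 ** (1 / 6) * sigma          # location of the LJ minimum
    lj = 4 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)
    if r < rmin:
        return lj + (1 - lam) * eps      # shifted repulsive core
    return lam * lj                      # scaled attractive tail
```

Both branches evaluate to −λε at r = rmin, so the potential is continuous for every λ, mirroring the single-parameter tunability of the adhesiveness described above.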
Construction of phase diagrams. a Potential energy function used in this work for representative values of the parameter λ. b Results of basin hopping (stochastic) optimization of an initial set of 25 candidates (inset) from each of the 16 groups considered, with a 1:1 stoichiometry. The main panel shows the specific energy of each structurally unique final candidate found, representing distinct local minima in energy. Several representative images are also shown. c Schematic of a representative convex hull (in red) showing thermodynamically stable structures on the hull as well as metastable structures (blue points). The ellipse encircles the points that result from the basin hopping stage as in b, with the magenta point corresponding to the ground state
To predict the assembly of these mixtures, we first employed our scheme to enumerate a large number of the possible candidates within our framework. Although the grids constructed over the fundamental domains are consistent with each group's symmetry, they are artificial. Therefore, we subsequently relaxed these initially proposed candidates with a stochastic global optimization method known as basin hopping25. Note that lower symmetry structures which do not belong to any wallpaper group, such as quasicrystals, are not generally proposed in the initial candidate pool. A relaxation stage with basin hopping is therefore important since it allows these lower symmetry structures to emerge from higher symmetry parent structures. Figure 3b illustrates an example where we have taken only the 25 candidates with the lowest energy from each group initially proposed (unrelaxed), and then performed this optimization procedure. The final, structurally unique lattices are plotted in the main panel, as only a few minima, including the ground state, tend to dominate the landscape and are found repeatedly. The ground state was often found multiple times by direct enumeration, which corresponds to the low-energy plateau in the inset. In fact, all stable periodic lattices reported in this work were found by direct enumeration, ultimately requiring no stochastic relaxation, demonstrating the robustness of this enumeration scheme.
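The actual relaxation uses basin hopping25 on the full lattice degrees of freedom; the stripped-down, one-dimensional sketch below (toy landscape and names of our choosing) illustrates only the perturb/relax/accept loop at the heart of the algorithm:

```python
import math
import random

def energy(x: float) -> float:
    """Toy rugged 1-D landscape standing in for a lattice energy:
    global minimum at x = 2 with E = -1, local minima every 2 units."""
    return 0.1 * (x - 2) ** 2 - math.cos(math.pi * (x - 2))

def minimize_local(x: float, lr: float = 0.05, n_steps: int = 500) -> float:
    """Crude gradient descent to the nearest local minimum."""
    for _ in range(n_steps):
        grad = 0.2 * (x - 2) + math.pi * math.sin(math.pi * (x - 2))
        x -= lr * grad
    return x

def basin_hop(x0: float, n_hops: int = 200, step: float = 2.0,
              temp: float = 1.0, seed: int = 0) -> tuple[float, float]:
    """Minimal basin hopping: perturb, relax locally, and Metropolis-accept
    moves between the minimized energies; track the best minimum seen."""
    rng = random.Random(seed)
    x = minimize_local(x0)
    e = energy(x)
    best_x, best_e = x, e
    for _ in range(n_hops):
        xt = minimize_local(x + rng.uniform(-step, step))
        et = energy(xt)
        if et < e or rng.random() < math.exp(-(et - e) / temp):
            x, e = xt, et
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e
```

In our workflow, each enumerated candidate seeds one such run, and structurally duplicate minima are merged afterwards, as in Fig. 3b.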
For all sets of pairwise interactions, enumeration and optimization runs were performed for each canonical system corresponding to a fixed mole fraction. Phase diagrams were then computed by constructing the convex hull of (free) energy points in composition space (see schematic, Fig. 3c)8,17,21. States that lie on the hull are the thermodynamically stable states a system can attain, while all points above the hull represent metastable states. If a system's composition is prepared so it exactly matches one of the vertices on the hull, the associated structure will be produced. However, when the system's composition is intermediate between two vertices it will phase separate into the two corresponding structures, each with a different stoichiometry, as determined by the lever rule.
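The hull construction and the lever rule can be sketched as follows: a monotone-chain lower hull over (composition, energy) points yields the stable vertices, and the lever rule gives the phase fractions at intermediate compositions (a simplified stand-in for the production code; names are ours):

```python
def lower_hull(points: list[tuple[float, float]]) -> list[tuple[float, float]]:
    """Lower convex hull of (composition, energy) points; its vertices
    are the thermodynamically stable structures."""
    pts = sorted(points)
    hull: list[tuple[float, float]] = []
    for p in pts:
        while len(hull) >= 2:
            (x1, e1), (x2, e2) = hull[-2], hull[-1]
            # drop hull[-1] if it lies on or above the chord hull[-2] -> p
            if (x2 - x1) * (p[1] - e1) - (p[0] - x1) * (e2 - e1) <= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

def lever_rule(x: float, xa: float, xb: float) -> tuple[float, float]:
    """Phase fractions when composition x lies between hull vertices xa < xb."""
    f_b = (x - xa) / (xb - xa)
    return 1 - f_b, f_b

# Toy example: the point at x = 0.42 sits above the chord and is metastable.
pts = [(0.0, 0.0), (0.33, -0.8), (0.42, -0.5), (0.5, -1.0), (1.0, 0.0)]
stable = lower_hull(pts)
```

A system prepared at x = 0.42 in this toy example would phase separate into the x = 0.33 and x = 0.5 structures, with fractions given by lever_rule(0.42, 0.33, 0.5).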
Stoichiometric control
Phase separation can, therefore, be harnessed as a powerful mechanism for controlling self-assembly. A system with a fixed set of interparticle potentials that assembles into one structure out of a solution initially prepared at one composition, can give rise to a completely different structure when the solution is prepared with a different ratio of the same components. In this way, a single system can be designed so that simply by varying the solution mixing ratio of constituents, a number of structures with different stoichiometries can be produced. Figure 4a shows a ground state phase diagram computed for a binary system. In Fig. 3b, we have illustrated how the square lattice is the lowest energy configuration for this system's set of pairwise interactions at a 1:1 stoichiometry; this forms the point on the convex hull at x1 = 0.5 in Fig. 4a. However, other honeycomb phases intervene on the hull and enable the set of pair potentials to provide either square or honeycomb structures depending on the composition of the initial mixture.
Constructing phase diagrams for a binary mixture. a Optimized results, as in Fig. 3b, from many stoichiometries can be combined to form a phase diagram. All points above the hull are metastable structures. b Representative snapshots from molecular dynamics simulations at T* ≈ 0.05 for mole fractions of x1 ∈ {0.17, 0.33, 0.42, 0.50} depicted in red, orange, green, and blue, respectively, demonstrate the validity of the predictions in panel a and the ability of stoichiometry to affect assembly outcomes
We have validated our predictions using canonical molecular dynamics simulations, as shown in Fig. 4b. This demonstrates that the phase diagram accurately represents the composition dependence of the system's behavior, and that this effect is realizable at finite temperatures and can be used to select structures under actual self-assembly conditions. The molecular dynamics results also show phase separation into the stable structures occurring when compositions between the vertices of the phase diagram are chosen. Note that in the red panel corresponding to x1 = 0.17, a mixture of honeycomb crystals and a non-interacting vapor of particles have formed (expected since this species is self-repulsive, λ1,1 = −0.50). Similarly, in the green panel with x1 = 0.42, a mixture of the honeycomb and square lattice structures are obtained. Clearly, for this system with a single set of pairwise interactions, changing the mixture stoichiometry from x1 = 0.33 to x1 = 0.50 allows for controlling assembly into these two different lattices which possess entirely different structural ordering and symmetry.
Interactions vs. stoichiometry
To understand the generality of this mechanism, we performed a broad survey of binary multi-flavored systems, computing phase diagrams at various λ, to elucidate how stoichiometry changes the relative stability of different lattices. We found a plethora of transitions that can be driven by stoichiometric effects alone, and overall, found that stoichiometric control can be as powerful as tuning the interparticle interactions themselves. For a binary mixture there can be up to two coexisting phases in the ground state, and for each set of λ = (λ1,1, λ1,2, λ2,2) values we considered, we report the most stable phase or phases as determined by the phase diagram constructed at those conditions. The key findings of this extensive set of calculations are summarized in Fig. 5.
Thermodynamically stable structures for binary, multi-flavored colloidal mixtures. Various sets of λ = (λ1,1, λ1,2, λ2,2) were explored and mapped to a unit sphere. Only the projection of the hemisphere where λ1,2 ≥ 0 is shown, where the outermost dashed circle denotes the equator \((\lambda_{1,2} = 0,\ \lambda_{1,1}^2 + \lambda_{2,2}^2 = 1)\), and the central point denotes the zenith (λ1,2 = 1, λ1,1 = λ2,2 = 0). Structures were determined from the phase diagram computed at each λ, and the results are indicated by a color-coded circle for each stoichiometry of species 1 (blue) and species 2 (green) considered. Depictions of the corresponding structures are outlined by the color of the point they correspond to; note that the choice of species coloring (blue vs. green) may be reversed for some regions of the diagram where one species is in excess of the other. The background coloring serves only for visualization purposes to guide the eye. Representative snapshots from molecular dynamics simulations of a few conditions are also depicted at the bottom
In the ground state, the absolute values of the λi,j do not matter, only the ratio of their values. In other words, a system where λ = (0.25, 0.5, 0.2) will yield an identical structure to the case of λ = (0.5, 1, 0.4). As a result, we can cast these λi,j coordinates onto the surface of a unit sphere; in fact, since we are only concerned with the case where unlike species have a favorable interaction and will not simply phase separate into their pure component states (λ1,2 ≥ 0), we need only consider one hemisphere. In Fig. 5, we report the structures found for three different representative stoichiometries. Unless explicitly shown, where the stoichiometry of the structures found is not equal to the composition of the solution, the remaining particles were found to coexist in an unstructured gas-like phase. In the parlance of Fig. 5, the fact that the color-coded structural changes occurring at a fixed λ point between different mole fractions can be as dramatic as color-coded changes occurring at a fixed mole fraction as λ is varied illustrates that stoichiometric control (changing x1) is as potent as engineering the potentials themselves (changing λ).
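This rescaling invariance is what permits the projection onto the unit sphere, and it can be checked directly (a trivial sketch with names of our choosing):

```python
import math

def to_sphere(lam: tuple[float, float, float]) -> tuple[float, ...]:
    """Normalize (l11, l12, l22) onto the unit sphere: in the ground state
    only the ratios of the lambda values matter, so any uniformly scaled
    triplet maps to the same point."""
    norm = math.sqrt(sum(v * v for v in lam))
    return tuple(v / norm for v in lam)

# lambda = (0.25, 0.5, 0.2) and (0.5, 1, 0.4) describe the same ground state:
a = to_sphere((0.25, 0.5, 0.2))
b = to_sphere((0.5, 1.0, 0.4))
```

Restricting to λ1,2 ≥ 0 then selects the single hemisphere shown in Fig. 5.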
Transitions occurring in Fig. 5 are discussed at greater length in the Supplementary Discussion; however, the formation of the open honeycomb lattice is of special interest, as this 3-fold symmetric structure is an open, low-density lattice which is stabilized energetically, rather than entropically. In fact, although the pairwise interactions themselves follow a simple Lennard–Jones-like form, the ground state phase diagram contains numerous low-density lattices. When x1 = 0.66 (2:1 stoichiometry), the lower left quadrant contains several cluster phases. In particular, where the open honeycomb structure was stable at x1 = 0.5, now we find coexistence between extended rings, which follow a Kagome pattern (cf. Supplementary Fig. 8), and heptamer clusters. At x1 = 0.75, these larger Kagome rings and heptamers give way to tetramer clusters. These predictions have been validated with molecular dynamics simulations as shown in Fig. 5. The native stoichiometry for this Kagome lattice is x1 = 0.6, and once the mixture composition has been changed to this value, the system indeed forms only a single Kagome phase instead of coexisting with a second cluster phase. Self-assembly continues to occur readily as the density is increased up to the ideal value set by the perfect lattice (ρ* = 0.382). Additionally, the square and various hexagonal phases have been realized in other simulations as well as experiments on multi-flavored DFP assembly18, once again illustrating the consistency of this style of pairwise interactions with real physical systems, and the potential of this stoichiometric control scheme to be exploited for material design applications.
Extension to ternary mixtures
Among other transitions, Fig. 5 shows that tuning the stoichiometry alone can induce a ring opening event from a 3-fold open honeycomb lattice to an even lower density Kagome lattice in binary mixtures. To understand this further, and as a demonstration of our structure prediction approach for ternary systems, next we consider the impact of introducing a third component. We repeated our enumeration and optimization procedure for various ternary mixtures; as a representative result, here we restrict our discussion to the case where the third component is self-avoiding (λ3,3 = −1), yet interacts favorably with the second component (λ3,2 = 1) and essentially as a hard sphere with the first (λ3,1 = 0). Figure 6 summarizes the resulting phase diagram. We find that this third component can exert significant influence over the resulting morphology. While the ground-state phase diagram contains many different structures, a clear pattern emerges, which is entirely controlled by the composition of the initial mixture. When species 3 is absent, the relative amounts of species 1 and 2 can be tuned to drive the system through transitions from gas-like phases (0-fold rotational symmetry), to clusters, to Kagome rings, to open honeycomb (3-fold rotational symmetry). Upon introducing species 3, depending on the composition of the parent solution, we may drive the system into 4-fold square lattices, 6-fold hexagonal ones, or even more extended ring structures (cf. Fig. 6a, b). The complete binary phase diagram is included for reference in Fig. 6c. These principal directions are highlighted by colored arrows and provide a basic compass for navigating the phase diagram (cf. Supplementary Discussion for more details).
Ternary phase diagram for an example multi-flavored system. To a binary mixture where (λ1,1 = −1.0, λ1,2 = 0.5, λ2,2 = −1.0) a third component is added such that (λ3,1 = 0, λ3,2 = 1, λ3,3 = −1). a The triangular phase diagram is shown with arrows indicating directions along which the rotational symmetry of the most stable lattice is changed, starting from the open honeycomb (3-fold rotational symmetry) lattice. For reference, this structure is everywhere indicated in magenta. b Detailed diagram of structures that belong to the convex hull of (free) energy. Only landmark structures are depicted, in addition to certain very low density (ring-like) structures of interest which are outlined in red. More detail is available in the Supplementary Discussion. c The binary phase diagram for the initial mixture to which the third component was added. The cyan arrows correspond to those in a. d Molecular dynamics snapshots for various systems taken at T* ≈ 0.02. From left to right, the stoichiometries, (x1, x2, x3), are as follows: (0.40, 0.47, 0.13), (0.4285, 0.1430, 0.4285), (0.4165, 0.5000, 0.0835). The predicted structures from the phase diagram are shown above the snapshots
We emphasize that this set of transformations, resulting in a complete range of rotational symmetries from gas-like (0-fold) up to hexagonal (6-fold) structures including low density rings and clusters, is brought about by changing the mixing ratio of the components alone. Furthermore, although temperature is expected to have a significant impact on the quantitative stability of different lattices, especially the cluster phases and rings, we achieved most of the predictions in molecular dynamics simulations at temperatures within an order of magnitude of the temperature at which we observed initial aggregation of the components. Thus, entropic contributions are not expected to change the qualitative conclusion that controlling stoichiometry in multicomponent mixtures can be a tool as powerful as engineering the interparticle potentials for designing complex structures.
In summary, we have presented a method for investigating the stability of enthalpy-dominated multicomponent colloidal lattices and have used it to demonstrate that tuning the mixture composition can have as much impact as adjusting the interparticle potentials between the colloids themselves. Our approach is premised on recasting the structure prediction problem as a CSP in which symmetry and stoichiometry combine to form the constraints; the solutions to this problem, which may be enumerated and subsequently optimized with relative ease, are the candidate lattices to be considered. This method effectively generates a library of structures using only an upper bound for the size of the lattice's primitive cell and the desired stoichiometry. Such a library must otherwise be found by methods which are generally incomplete and prone to miss important candidates. In fact, every stable crystal structure reported in this work was found initially via enumeration, and subsequent optimization did not reveal additional stable candidates. This approach serves as an efficient way to explore all possible symmetry groups which helps ensure that the correct ground state is discovered.
It is important to highlight the general applicability of both the presented method and the results regarding stoichiometric control. Although we have focused on presenting results from two-dimensional systems, the concepts presented here are extensible to higher dimensions as well. The interaction potentials considered here are very general, but experimental schemes for realization of such interactions in multicomponent systems exist using multi-flavored DFPs. These DFP systems are not limited to two dimensions, and simulations and experiments in both two18,55 and three57,58 dimensions have been performed on these systems to show the capacity of simple pairwise models to capture DFP assembly effects. They additionally demonstrate the feasibility of fine-tuning interactions in multicomponent mixtures as necessary to achieve self-assembly of particular structures. Finally, the results presented here have the potential to be particularly useful for physical realization of many superlattices including unique open structures, given that mixture stoichiometry is often easier to control than pairwise interactions, and has the potential to be just as powerful in terms of controlling structural ordering during self-assembly.
Creating regular grids on fundamental domains
First, a regular grid, as depicted in Fig. 1a, is created over the surface of a group's fundamental domain. Nodes are placed along each edge with as close to the same spacing as possible such that there exist nodes at the termini of each edge. If this domain is triangular, the number of nodes along each edge must be identical so that interior nodes fall on the resulting parallelogram's diagonals. This happens regardless of the relative lengths of the sides. If this domain is a parallelogram, the number of nodes placed along adjacent edges may sometimes be different if the two sides have unequal lengths, as allowed by symmetry constraints. This scheme covers different wallpaper groups differently, but in a consistent fashion which is commensurate with each group's unique symmetry.
Different groups have differently shaped fundamental domains, with the number of domains per primitive cell ranging from 1 to 12; therefore, we cannot simply place nodes at a fixed spacing along the edges of each group's fundamental domain and compute all possible resulting primitive cells as they would vary significantly in size. A more even-handed comparison can be made by working in reverse to compute the requisite grid spacings for each group's fundamental domain so that their primitive cells all cover a similar area. Although fundamental domains vary in shape, an approximate comparison may be made as follows.
As a reference, we consider a p1 primitive cell containing \(N_{\mathrm{g}}^2\) total nodes, and attempt to make the primitive cells of other groups have the same number of nodes. The number of nodes per edge of a group's fundamental domain may be estimated as
$$N_1 = \left\lfloor {\sqrt {\frac{{N_{\mathrm{g}}^2}}{{rN_{\mathrm{d}}\left( {1 - \frac{1}{2} \times (N_{\mathrm{s}}\,{\mathrm{mod}}\,2)} \right)}}} } \right\rfloor ,$$
where N1 is the number of nodes along the shorter of the two edges which define the group's primitive cell, r ≥ 1 is the ratio of the lengths of these edges, Nd is the number of fundamental domains per primitive cell, and Ns is the number of sides the fundamental domain has. The number of nodes on the longer edge is given by \(N_2 = \left\lfloor {rN_1} \right\rfloor\). A more detailed derivation is presented in the Supplementary Methods. The result is always a lattice that has no more than \(N_{\mathrm{g}}^2\) total nodes; consequently, Ng should be viewed as a parameter that simply provides a way to compare the groups to each other by making their primitive cells congruent.
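As an illustration (our own sketch, not code from the paper), the estimate above can be evaluated directly:

```python
import math

def nodes_per_edge(N_g, r, N_d, N_s):
    """Estimate the number of nodes along the fundamental domain's edges
    so that the primitive cell holds about N_g**2 nodes in total.

    N_g : reference grid size (a p1 cell would hold N_g**2 nodes)
    r   : ratio (>= 1) of the primitive cell's edge lengths
    N_d : number of fundamental domains per primitive cell
    N_s : number of sides of the fundamental domain (3 or 4)
    """
    # Triangular domains (odd N_s) span half of the parallelogram
    # defined by their edges, hence the factor of 1/2.
    denom = r * N_d * (1.0 - 0.5 * (N_s % 2))
    N_1 = math.floor(math.sqrt(N_g**2 / denom))  # shorter edge
    N_2 = math.floor(r * N_1)                    # longer edge
    return N_1, N_2
```

For p1 (one parallelogram-shaped domain per cell) this recovers N1 = N2 = Ng, while a group such as p6, with six triangular domains per cell, receives a correspondingly coarser grid.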
The constraint satisfaction problem (CSP)
For a system with k total different colloid types, the number of times a colloid of type i may be placed in a certain node category is Mi = (mi,1, mi,2, … mi,n), which is a vector whose length is equal to the number of categories that exist on a given fundamental domain, n. If the number of nodes in each category, j, is Cj, then \(N_{{\mathrm{nodes}}} = \mathop {\sum}\nolimits_j^n {C_j}\), where Nnodes is the total number of independent nodes on the fundamental domain. For the p6 group depicted in Fig. 1b, Nnodes = 10. The total number of colloids belonging to each category is bounded by \(0 \le \mathop {\sum}\nolimits_i^k {m_{i,j}} \le C_j\), if we allow only one colloid per node.
Each node has a net fractional (stoichiometric) contribution, F = (f1, f2, … fn), which is determined by symmetry and is independent of colloid type. For example, in Fig. 1b there are two distinct types of corners, one with f1 = 1/6, and another with f2 = 1/3. The total number of i-type colloids is Ni = Mi ⋅ F; F is generally converted to whole numbers so that the Ni are strictly integers. In principle, the categorization of nodes based on anything other than net fractional contribution is fictitious and those with the same value may be combined; in Fig. 1c the blue histogram shows this combined result, whereas the orange keeps the categories distinct. The total number of realizations, \(\mathop {\sum}\nolimits_i {W_i}\), is the same in both instances, and is on the order of \(10^3\); however, keeping categories distinct can be advantageous. This tends to create more solutions, each with fewer individual realizations. When sorted by frequency, solutions with fewer combinatorial realizations tend to involve using fewer different categories of nodes, or nodes with special constraints, to solve the CSP. Consequently, solutions with fewer realizations (left side of Fig. 1c) tend to produce simpler structures, which grow in apparent complexity as the number of solutions increases (right side).
We impose the constraints that at least one of each type of colloid must be placed somewhere, Ni > 0 ∀i, and require that the final ratio of Ni values satisfies the desired stoichiometry, Starget = (1, N2/N1, N3/N1, …, Nk/N1), where we have arbitrarily used N1 to normalize. The value of Ni is implicitly bounded above by the total number, and fractional contribution, of nodes available, though in principle this may also be constrained further. All \(\bar M = \left( {{{\mathbf{M}}_1}^{\mathrm{T}}{{\mathbf{M}}_2}^{\mathrm{T}} \ldots {{\mathbf{M}}_k}^{\mathrm{T}}} \right)\), where it is understood that each \({{\mathbf{M}}_i}^{\mathrm{T}}\) forms a column in the \(\bar M\) matrix, represent solutions that may be enumerated using a recursive backtracking algorithm. Each solution, \(\bar M\), defines a prescription of how many of each type of colloid to place at each type of node. Some solutions will use only a small fraction of the available nodes, whereas others may employ them all.
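A toy version of this enumeration is sketched below (our illustration, not the authors' implementation; the paper uses recursive backtracking with pruning, whereas this small version simply filters complete assignments, and assumes F has already been converted to whole numbers):

```python
from itertools import product

def enumerate_placements(C, F, targets):
    """Enumerate all k x n placement matrices M with column sums bounded
    by the node counts C, N_i = dot(M_i, F) > 0, and the N_i in the
    target stoichiometric ratio (checked exactly via cross-multiplication)."""
    k, n = len(targets), len(C)
    solutions = []

    def backtrack(i, rows, used):
        if i == k:  # all colloid types placed: test the constraints
            N = [sum(m * f for m, f in zip(row, F)) for row in rows]
            if all(v > 0 for v in N) and all(
                N[j] * targets[0] == N[0] * targets[j] for j in range(k)
            ):
                solutions.append([row[:] for row in rows])
            return
        # rows for type i, bounded by the remaining capacity per category
        for row in product(*(range(C[j] - used[j] + 1) for j in range(n))):
            backtrack(i + 1, rows + [list(row)],
                      [u + m for u, m in zip(used, row)])

    backtrack(0, [], [0] * n)
    return solutions
```

Each returned matrix corresponds to one solution, a prescription of how many colloids of each type to place on each node category.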
All solutions to the CSP simply produce point patterns on lattices without any intrinsic length scale, and any lattice may be uniformly scaled without changing its symmetry. As a result, we choose to scale the resulting patterns to the contact point to produce the final candidate. For monodisperse, hard-sphere systems this is well defined. For other softer potentials one may use some characteristic length scale for the pairwise interactions; if there are multiple such length scales, e.g., multiple minima in the pairwise potential or if the system contains colloids of different diameters, multiple lattices can be generated from the same point pattern. The size-asymmetric case is illustrated in Fig. 2 with the p6m group. Also note that identical structures may sometimes be obtained from different groups as a given solution to the CSP may not use all of the edges or subtle features that distinguish groups from each other; e.g., consider the p6m and p3 for the binary case in Fig. 2. Each CSP solution does not violate any rules imposed by a group's symmetry constraints, but does not necessarily make use of them all (cf. Supplementary Methods for more details).
Faces vs. edges of fundamental domains
Consider a parallelogram with an equal number of nodes, Ng, along each edge. The number of nodes on the face, Nf = (Ng − 2)², exceeds the number of edge nodes, Ne = 4(Ng − 1), when Ng ≥ 7. For a triangular domain, Ne = 3(Ng − 1) and Nf = (Ng − 2)(Ng − 3)/2, so that Ng ≥ 10 represents this bound. In our systems there are 10 groups with parallelograms for fundamental domains, and 7 with triangular ones. Thus, an equally weighted average suggests that when Ng ≈ 8, the number of edge nodes will be equal to the number of nodes on the face of the fundamental domain.
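These crossover values are easy to verify directly (an illustrative snippet, not from the paper):

```python
def edge_and_face_nodes(N_g, shape):
    """Count edge vs. interior (face) nodes for a fundamental domain
    with N_g nodes along each edge."""
    if shape == "parallelogram":
        return 4 * (N_g - 1), (N_g - 2) ** 2
    if shape == "triangle":
        return 3 * (N_g - 1), (N_g - 2) * (N_g - 3) // 2
    raise ValueError(shape)

# Parallelogram: face nodes first exceed edge nodes at N_g = 7.
# Triangle:      face nodes first exceed edge nodes at N_g = 10.
```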
Multi-flavored pairwise interactions
The set of pairwise interactions used in this work is inspired by multi-flavored DFP systems18,58,59. With conventional DFPs, complementary strands of DNA are grafted on different particles, inducing a favorable cross interaction due to DNA hybridization. However, when two colloids of the same type approach each other they simply repel each other, as their grafts are identical. In multi-flavored systems, mixtures of different complementary DNA strands are blended on the surfaces of different colloids, decoupling the self- and cross-interactions in these mixtures. By controlling the surface composition of the many different strands, one can independently tune the effective interactions between each pair of colloids. These enthalpy-dominated systems are typically assembled at relatively low density and ambient conditions58,59, which corresponds to the limit where the (osmotic) pressure effectively approaches zero. Furthermore, in the limit of strong binding, the ground state (T* ~ 0) serves as a reasonable approximation of the system's thermodynamic state. In this case, the lowest energy lattice represents the most thermodynamically stable crystal, as the Gibbs free energy is equal to the potential energy in this limit.
To qualitatively model this behavior, all colloids interacted through a pair potential akin to Lennard–Jones which has been divided into its attractive and repulsive portions, then recombined according to some modulus, λi,j55,60, which we refer to as the adhesiveness parameter:
$$U_{i,j}(r) = U_{i,j}^{\mathrm{r}}(r) + \lambda _{i,j}U_{i,j}^{\mathrm{a}}(r),$$
$$U_{i,j}^{\mathrm{r}}(r) = \left\{ {\begin{array}{*{20}{l}} {4{\it{\epsilon }}_{i,j}\left[ {\left( {\frac{{\sigma _{i,j}}}{r}} \right)^{12} - \left( {\frac{{\sigma _{i,j}}}{r}} \right)^6} \right] + {\it{\epsilon }}_{i,j}} \hfill & {r \le 2^{1/6}\sigma _{i,j}} \hfill \\ 0 \hfill & {r \ > \ 2^{1/6}\sigma _{i,j},} \hfill \end{array}} \right.$$
$$U_{i,j}^{\mathrm{a}}(r) = 4{\it{\epsilon }}_{i,j}\left[ {\left( {\frac{{\sigma _{i,j}}}{r}} \right)^{12} - \left( {\frac{{\sigma _{i,j}}}{r}} \right)^6} \right] - U_{i,j}^{\mathrm{r}}(r).$$
The parameter, λi,j, effectively scales the energy from \(U_{i,j}(2^{1/6}\sigma _{i,j}) = - {\it{\epsilon }}_{i,j}\) at λi,j = 1, to \(U_{i,j}(2^{1/6}\sigma _{i,j}) = + {\it{\epsilon }}_{i,j}\) at λi,j = −1 (cf. Fig. 3a)55,60. Each pair of interactions has its own λi,j value which may be tuned independently, mimicking the multi-flavoring motif of DFPs58,59. As a result, the characteristic contact point for this model is taken to be \(2^{1/6}\sigma _{i,j}\). All colloids were given equal diameters, σi,j = σ, and energy scales, \({\it{\epsilon }}_{i,j} = {\it{\epsilon }}\). Only the value of λi,j was varied to control the relative degree to which pairs of colloids attracted or repelled each other. Thus, all units reported herein are given in terms of \({\it{\epsilon }}\) and σ. All interactions were cut off at rc = 3σ.
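The split potential can be sketched as follows (our own illustration; `eps`, `sigma`, and `lam` stand for εi,j, σi,j, and λi,j above):

```python
def u_lj(r, eps, sigma):
    """Plain Lennard-Jones potential."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6**2 - sr6)

def u_rep(r, eps, sigma):
    """Purely repulsive (WCA-like) part: LJ shifted up by eps, cut at its minimum."""
    r_min = 2.0 ** (1.0 / 6.0) * sigma
    return u_lj(r, eps, sigma) + eps if r <= r_min else 0.0

def u_att(r, eps, sigma):
    """Attractive remainder, U_a = U_LJ - U_r."""
    return u_lj(r, eps, sigma) - u_rep(r, eps, sigma)

def u_pair(r, eps, sigma, lam):
    """Multi-flavored pair potential, U = U_r + lambda * U_a."""
    return u_rep(r, eps, sigma) + lam * u_att(r, eps, sigma)
```

At the contact point 2^(1/6)σ the repulsive part vanishes by construction, so U = −λε there, reproducing the two limits quoted above.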
Sampling wallpaper ensembles
As described previously, a grid is generated for each wallpaper group in question, over which the CSP defined by the desired stoichiometric ratio of components in the final structure is solved recursively to enumerate all solutions. For wallpaper groups where the ratio between the lengths of the fundamental domain's sides, r, is not constrained by symmetry, we sampled \(r \in [1,\sqrt 2 ,\sqrt 3 ,2]\). In addition, for groups where the angle α between two adjacent edges of the fundamental domain is not constrained by symmetry, we generated all realizations of α ∈ [π/2, π/3, π/4, π/6]. Each solution to the CSP yields a prescription to place a certain number of each colloid type on different types of nodes in the fundamental domain; the total number of realizations of each prescription is given by combinatorially choosing the number of colloids to be placed at each designated location. In all cases, we discarded p1 as the trivial method of prediction, which quickly undergoes a combinatorial explosion for even a small grid. For all stoichiometries of interest in the binary mixture, we considered three cases: Ng = 6, Ng = 8, and a variable Ng. When Ng = 6 we exhaustively enumerated all primitive cells. These were scaled so that the minimum distance between colloids was \(2^{1/6}\sigma\), then ranked based on energy; the lowest energy candidates from each group were subsequently refined with basin hopping. We did not repeat fully exhaustive sampling for Ng = 8, instead taking only 50,000 realizations of primitive cells from each group. Subsequent ranking and optimization yielded identical results. As a final check we also allowed Ng to be variable, increasing to the point where each group yielded at least \(10^5\) solutions to the CSP; from these ranked candidates we drew the best 100 structures from each group and optimized them. Again, the final results were the same.
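The combinatorial count of realizations per CSP solution can be sketched as a product of binomial coefficients (an illustration under the one-colloid-per-node assumption, not the authors' implementation):

```python
from math import comb

def n_realizations(solution, C):
    """Number of distinct placements realizing one CSP solution.

    solution : k x n matrix, solution[i][j] = colloids of type i on category j
    C        : list of node counts per category
    For each category, choose which nodes receive each colloid type in turn.
    """
    total = 1
    for j, c in enumerate(C):
        remaining = c
        for row in solution:        # one row per colloid type
            total *= comb(remaining, row[j])
            remaining -= row[j]
    return total
```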
Basin hopping
Basin hopping is a stochastic optimization approach well-suited to locating the global minimum of systems with hundreds of degrees of freedom and local minima separated by large barriers25,61,62. Atomistic, molecular, and colloidal systems often fall in this category and we adopted this approach here. The primitive cell is constructed from the fundamental domain according to the prescription provided by the CSP, which is the cell that is optimized. Basin hopping follows an iterative procedure where each cycle is composed of a perturbation followed by a deterministic relaxation to generate a new candidate, which is accepted as the new state of the system stochastically; here we used a Metropolis acceptance criterion:
$$p_{{\mathrm{acc}}} = {\mathrm{min}}\left[ {1,{\mathrm{exp}}\left( { - \frac{{u_{{\mathrm{final}}} - u_{{\mathrm{initial}}}}}{{\hat T}}} \right)} \right],$$
where u is the potential energy per particle in each state and \(\hat T\) is a parameter which controls the rate of acceptance. This was usually set to \(\hat T = 0.50\) but is not related to the system's actual temperature, which, in the ground state, is zero. Up to \(10^4\) iterations were used to optimize each structure. We used the L-BFGS-B algorithm63 to relax the initial candidate structure before performing the basin hopping, which also employed this algorithm. The total potential energy is a function of the coordinates, ri, of each of the m colloids present and the primitive cell's vectors, U = f(r1, r2, …, rm, L1, L2) = f(ψ); all variables in ψ were optimized simultaneously. For precision, the candidate structure with the lowest energy resulting from basin hopping was further minimized with the Nelder–Mead simplex method64 to achieve the final result.
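The acceptance step amounts to the following (a sketch; the injectable `rng` argument is not part of the method, only a convenience for testing):

```python
import math
import random

def metropolis_accept(u_initial, u_final, T_hat=0.50, rng=random.random):
    """Accept a basin-hopping move with probability
    min[1, exp(-(u_final - u_initial) / T_hat)]."""
    if u_final <= u_initial:
        return True  # downhill moves are always accepted
    return rng() < math.exp(-(u_final - u_initial) / T_hat)
```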
Perturbation moves consisted of displacing a set of randomly chosen colloids, exchanging the locations of a randomly chosen set of pairs of colloids, perturbing the cell's vectors, shearing the cell, uniformly scaling the cell, and displacing local clusters of colloids as determined by a k-means algorithm65. These typically occurred with a 4:2:1:1:1:1 ratio. After each perturbation the cell's vectors were iteratively checked to find a more orthorhombic unit cell, if possible, to reduce the number of nearest neighbor images needed to compute the energy of the cell. This is done by computing the distortion factor, \({\cal{C}}\)29,30:
$${\cal{C}}({\mathbf{L}}_1,{\mathbf{L}}_2) = \frac{1}{4}\left( {\parallel {\mathbf{L}}_1\parallel + \parallel {\mathbf{L}}_2\parallel } \right)\frac{{\cal{P}}}{{2A}},$$
where \({\cal{P}}\) is the perimeter of the cell, and A is its area. For a given primitive cell, new vectors are subsequently proposed: (L1 ± L2, L2), (L1, L2 ± L1). \({\cal{C}}\) is recomputed for each of these candidates, and the lattice with the lowest \({\cal{C}}\) is taken. This process is repeated until either \({\cal{C}}\) is not reduced by an iteration, or falls below a threshold of \({\cal{C}} \le 1.5\). No more than 10 iterations are performed. A square cell has \({\cal{C}} = 1\). Importantly, symmetry was not constrained during optimization which allows an initially proposed structure to transform from one group into another and allows lower symmetry structures to emerge from higher symmetry parents.
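A minimal sketch of this cell-reduction step (our illustration; note that the proposed vector substitutions are unimodular and so preserve the cell area, leaving only the perimeter term to change):

```python
import math

def distortion(L1, L2):
    """Distortion factor C of a 2D cell spanned by vectors L1, L2 (C = 1 for a square)."""
    n1, n2 = math.hypot(*L1), math.hypot(*L2)
    area = abs(L1[0] * L2[1] - L1[1] * L2[0])
    perimeter = 2.0 * (n1 + n2)
    return 0.25 * (n1 + n2) * perimeter / (2.0 * area)

def reduce_cell(L1, L2, c_max=1.5, max_iter=10):
    """Greedily substitute (L1 +/- L2, L2) or (L1, L2 +/- L1) while C decreases,
    stopping once C <= c_max, no proposal improves, or max_iter is reached."""
    for _ in range(max_iter):
        c = distortion(L1, L2)
        if c <= c_max:
            break
        candidates = [((L1[0] + s * L2[0], L1[1] + s * L2[1]), L2) for s in (1, -1)]
        candidates += [(L1, (L2[0] + s * L1[0], L2[1] + s * L1[1])) for s in (1, -1)]
        best = min(candidates, key=lambda cell: distortion(*cell))
        if distortion(*best) >= c:
            break  # no proposal improves the cell
        L1, L2 = best
    return L1, L2
```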
Structural similarity
Various methods exist for determining the structural similarity of lattice configurations66,67,68,69,70,71,72. Our algorithm does not depend on differentiating lattices; however, we often removed similar structures from the final optimized set of non-ground-state structures to reduce the number to be examined a posteriori. This screening may also be performed as an intermediate stage to remove proposed candidates to be sent to basin hopping that may be considered too similar and therefore redundant. We employed radial distribution functions to determine this similarity, as this information is readily available on-the-fly following pairwise energy calculation. For two configurations, denoted α and β, we consider the cosine similarity of each colloid type to produce a vector, S = (S1,1, S1,2, …, Sn,n), such that
$$S_{i,j} = \frac{{{\mathbf{g}}_{i,j}^\alpha (r) \cdot {\mathbf{g}}_{i,j}^\beta (r)}}{{\left\| {{\mathbf{g}}_{i,j}^\alpha (r)} \right\|\left\| {{\mathbf{g}}_{i,j}^\beta (r)} \right\|}},$$
where gi,j(r) denotes the radial distribution function for the (i, j) pair. We consider two configurations to be only as similar as their least similar pair; thus, S = min[S] and we consider two configurations to represent different structures if S < 0.90. A more restrictive threshold of S < 0.99 did not change our final results. The radial distribution functions were computed out to a cutoff of rcut = 3σ with bins of width δr = 0.2σ.
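The comparison reduces to a cosine similarity over binned g(r) histograms followed by a minimum over pairs (an illustrative sketch; `rdfs_alpha` and `rdfs_beta` are assumed to hold the binned gi,j(r) for corresponding (i, j) pairs):

```python
import math

def cosine_similarity(g_alpha, g_beta):
    """Cosine similarity between two binned radial distribution functions."""
    dot = sum(a * b for a, b in zip(g_alpha, g_beta))
    norm_a = math.sqrt(sum(a * a for a in g_alpha))
    norm_b = math.sqrt(sum(b * b for b in g_beta))
    return dot / (norm_a * norm_b)

def structures_differ(rdfs_alpha, rdfs_beta, threshold=0.90):
    """Two configurations are only as similar as their least similar (i, j)
    pair of histograms; they count as distinct structures if S < threshold."""
    s_min = min(cosine_similarity(a, b) for a, b in zip(rdfs_alpha, rdfs_beta))
    return s_min < threshold
```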
Phase diagrams
Convex hulls of total energy per particle vs. mole fraction(s) were constructed using the QuickHull algorithm73, as implemented in SciPy62. All points along the hulls reported were checked for energetic degeneracy, that is, unique structures that had energies per particle within \(\delta (U/N_{{\mathrm{tot}}}) \le 10^{-6}\); no degeneracies were found for any of the conditions reported here. For ternary mixtures, the three-dimensional hull of U/Ntot vs. x1 and x2 is projected onto the (x1 − x2)-plane; the faces of the three-dimensional hull indicate which phases are in coexistence and are depicted by the orange lines in Fig. 6, where the vertices correspond to the structures on the hull. All unique, integer stoichiometries ξ1:ξ2 up to ξi ≤ 6 were considered for binary mixtures, and similar bounds were used for ternary mixtures (cf. Supplementary Discussion).
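For a binary mixture the construction reduces to a lower convex hull in the (x, U/Ntot) plane; a self-contained sketch of that special case is below (the authors used QuickHull via SciPy for the general problem). The hull vertices are the stable phases, and compositions between adjacent vertices phase-separate into them:

```python
def lower_hull(points):
    """Lower convex hull (Andrew's monotone chain) of (x, u) points.
    Points lying above a segment between two retained vertices are dropped."""
    pts = sorted(set(points))
    hull = []
    for p in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # drop hull[-1] if it lies on or above the segment hull[-2] -> p
            if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) <= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull
```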
Canonical (NVT) molecular dynamics simulations were performed in LAMMPS74 using a Langevin thermostat with a time constant \(\tau = \sigma m^{1/2}{\it{\epsilon }}^{ - 1/2}\). Simulations were run with at least \(10^3\) particles for at least \(10^8\) timesteps, with each step \(\Delta t = 10^{-2}\tau\). Numbers of components were rounded from the desired stoichiometric ratios to the nearest integers. Temperatures \(T^ \ast = {\it{\epsilon }}^{ - 1}k_{\mathrm{B}}T\) and number densities \(\rho ^\ast = \rho \sigma ^2\) were set as desired. Here, kB is the Boltzmann constant and T is the absolute temperature. Initial configurations were generated by random placement of particles, followed by energy minimization and equilibration at T* = 1 for \(10^5\) timesteps. The potential described previously, with a cutoff rc = 3σ, was used.
Data is available on reasonable request. Direct requests to N.A.M. and J.M.
Code is available on reasonable request. Direct requests to N.A.M. and J.M.
Jain, A., Errington, J. R. & Truskett, T. M. Inverse design of simple pairwise interactions with low-coordinated 3D lattice ground states. Soft Matter 9, 3866–3870 (2013).
Jadrich, R. B., Lindquist, B. A. & Truskett, T. M. Probabilistic inverse design for self-assembling materials. J. Chem. Phys. 146, 184103 (2017).
Piñeros, W. D., Lindquist, B. A. & Truskett, T. M. Inverse design of multicomponent assemblies. J. Chem. Phys. 148, 104509 (2018).
Leunissen, M. E. et al. Ionic colloidal crystals of oppositely charged particles. Nature 437, 235–240 (2005).
Glotzer, S. C. & Solomon, M. J. Anisotropy of building blocks and their assembly into complex structures. Nat. Mater. 6, 557–562 (2007).
Macfarlane, R. J. et al. Nanoparticle superlattice engineering with DNA. Science 334, 204–208 (2011).
van Anders, G., Ahmed, N. K., Smith, R., Engel, M. & Glotzer, S. C. Entropically patchy particles: engineering valence through shape entropy. ACS Nano 8, 931–940 (2014).
Oganov, A. R., Ma, Y., Lyakhov, A. O., Valle, M. & Gatti, C. Evolutionary crystal structure prediction as a method for the discovery of minerals and materials. Rev. Mineral. Geochem. 71, 271–298 (2010).
Ciach, A., Pękalski, J. & Góźdź, W. T. Origin of similarity of phase diagrams in amphiphilic and colloidal systems with competing interactions. Soft Matter 9, 6301–6308 (2013).
Godfrin, P. D., Valadez-Pérez, N. E., Castañeda Priego, R., Wagner, N. J. & Liu, Y. Generalized phase behavior of cluster formation in colloidal dispersions with competing interactions. Soft Matter 10, 5061–5071 (2014).
Vogel, N., Retsch, M., Fustin, C.-A., del Campo, A. & Jonas, U. Advances in colloidal assembly: the design of structure and hierarchy in two and three dimensions. Chem. Rev. 115, 6265–6311 (2015).
Frenkel, D. Order through entropy. Nat. Mater. 14, 9–12 (2015).
Jones, M. R., Seeman, N. C. & Mirkin, C. A. Programmable materials and the nature of the DNA bond. Science 347, 1260901 (2015).
Jacobs, W. M., Reinhardt, A. & Frenkel, D. Rational design of self-assembly pathways for complex multicomponent structures. Proc. Natl. Acad. Sci. USA 112, 6313–6318 (2015).
Tian, Y. et al. Lattice engineering through nanoparticle–DNA frameworks. Nat. Mater. 15, 654–661 (2016).
Vo, T. et al. Stoichiometric control of DNA-grafted colloid self-assembly. Proc. Natl. Acad. Sci. USA 112, 4982–4987 (2015).
Tkachenko, A. Generic phase diagrams of binary superlattices. Proc. Natl. Acad. Sci. USA 113, 10269–10274 (2016).
Song, M., Ding, Y., Zerze, H., Snyder, M. A. & Mittal, J. Binary superlattice design by controlling DNA-mediated interactions. Langmuir 34, 991–998 (2018).
Maddox, J. Crystals from first principles. Nature 335, 201 (1988).
Woodley, S. M. & Catlow, R. Crystal structure prediction from first principles. Nat. Mater. 7, 937–946 (2008).
Pickard, C. J. & Needs, R. J. Ab initio random structure searching. J. Phys. Condens. Matter 23, 053201 (2011).
Stevanović, V. Sampling polymorphs of ionic solids using random superlattices. Phys. Rev. Lett. 116, 075503 (2016).
Pannetier, J., Bassas-Alsina, J., Rodriguez-Carvajal, J. & Caignaert, V. Prediction of crystal structures from crystal chemistry rules by simulated annealing. Nature 346, 343–345 (1990).
Schoen, C. J. & Jansen, M. First step towards planning of syntheses in solid‐state chemistry: determination of promising structure candidates by global optimization. Angew. Chem. Int. Ed. 35, 1286–1304 (1996).
Wales, D. J. & Doye, J. P. K. Global optimization by basin-hopping and the lowest energy structures of Lennard–Jones clusters containing up to 110 atoms. J. Phys. Chem. A 101, 5111–5116 (1997).
Goedecker, S. Minima hopping: an efficient search method for the global minimum of the potential energy surface of complex molecular systems. J. Chem. Phys. 120, 9911 (2004).
Kummerfeld, J. K., Hudson, T. S. & Harrowell, P. The densest packing of AB binary hard-sphere homogeneous compounds across all size ratios. J. Phys. Chem. B 112, 10773–10776 (2008).
Filion, L. et al. Efficient method for predicting crystal structures at finite temperature: variable box shape simulations. Phys. Rev. Lett. 103, 188302 (2009).
de Graaf, J., Filion, L., Marechal, M., van Roij, R. & Dijkstra, M. Crystal-structure prediction via the floppy-box Monte Carlo algorithm: method and application to hard (non)convex particles. J. Chem. Phys. 137, 214101 (2012).
Gottwald, D., Kahl, G. & Likos, C. N. Predicting equilibrium structures in freezing processes. J. Chem. Phys. 122, 204503 (2005).
Glass, C. W. & Oganov, A. R. USPEX—evolutionary crystal structure prediction. Comput. Phys. Commun. 175, 713–720 (2006).
Oganov, A. R. & Glass, C. W. Crystal structure prediction using ab initio evolutionary techniques: principles and applications. J. Chem. Phys. 124, 244704 (2006).
Fornleitner, J., Lo Verso, F., Kahl, G. & Likos, C. N. Genetic algorithms predict formation of exotic ordered configurations for two-component dipolar monolayers. Soft Matter 4, 480–484 (2008).
Wang, Y., Lv, J., Zhu, L. & Ma, Y. Crystal structure prediction via particle-swarm optimization. Phys. Rev. B 82, 094116 (2010).
Srinivasan, B. et al. Designing DNA-grafted particles that self-assemble into desired crystalline structures using the genetic algorithm. Proc. Natl. Acad. Sci. USA 110, 18431–18435 (2013).
Fischer, C. C., Tibbetts, K. J., Morgan, D. & Ceder, G. Predicting crystal structure by merging data mining with quantum mechanics. Nat. Mater. 5, 641–646 (2006).
Conway, J. H., Delgado Friedrichs, O., Huson, D. H. & Thurston, W. On three-dimensional space groups. Contrib. Algebra Geom. 42, 475–507 (2001).
Conway, J. H. & Huson, D. H. The orbifold notation for two-dimensional groups. Struct. Chem. 13, 247–257 (2002).
Wells, A. F. The geometrical basis of crystal chemistry. Acta Crystallogr. 7, 535–544 (1954).
Smith, J. V. Enumeration of 4-connected 3-dimensional nets and classification of framework silicates. I. Perpendicular linkage from simple hexagonal net. Am. Mineral. 62, 703–709 (1977).
Foster, M. D. et al. Chemically feasible hypothetical crystalline networks. Nat. Mater. 3, 234–238 (2004).
Wilmer, C. E. et al. Large-scale screening of hypothetical metal-organic frameworks. Nat. Chem. 4, 83–89 (2012).
Boyd, P. G. & Woo, T. K. A generalized method for constructing hypothetical nanoporous materials of any net topology from graph theory. CrystEngComm 18, 3777–3792 (2016).
Fornleitner, J., Lo Verso, F., Kahl, G. & Likos, C. N. Ordering in two-dimensional dipolar mixtures. Langmuir 25, 7836–7846 (2009).
Ferraro, M. E., Bonnecaze, R. T. & Truskett, T. M. Graphoepitaxy for pattern multiplication of nanoparticle monolayers. Phys. Rev. Lett. 113, 085503 (2014).
Lin, X. et al. Intrinsically patterned two-dimensional materials for selective adsorption of molecules and nanoclusters. Nat. Mater. 16, 717–721 (2017).
Peng, L. et al. Two-dimensional holey nanoarchitectures created by confined self-assembly of nanoparticles via block copolymers: from synthesis to energy storage property. ACS Nano 12, 820–828 (2018).
Jain, A., Errington, J. R. & Truskett, T. M. Dimensionality and design of isotropic interactions that stabilize honeycomb, square, simple cubic, and diamond lattices. Phys. Rev. X 4, 031049 (2014).
Patra, N. & Tkachenko, A. V. Layer-by-layer assembly of patchy particles as a route to nontrivial structures. Phys. Rev. E 96, 022601 (2017).
Hou, X.-S. et al. Mesoscale graphene-like honeycomb mono- and multilayers constructed via self-assembly of coclusters. J. Am. Chem. Soc. 140, 1805–1811 (2018).
Oganov, A. R., Lyakhov, A. O. & Valle, M. How evolutionary crystal prediction works—and why. Acc. Chem. Res. 44, 227–237 (2011).
Morgan, W. S., Hart, G. L. W. & Forcade, R. W. Generating derivative superstructures for systems with high configurational freedom. Comput. Mater. Sci. 136, 144–149 (2017).
Schattschneider, D. The plane symmetry groups: their recognition and notation. Am. Math. Mon. 85, 439–450 (1978).
Thurston, W. P. The Geometry and Topology of Three-Manifolds. (Princeton University, Princeton, NJ, 1979).
Mahynski, N. A., Zerze, H., Hatch, H. W., Shen, V. K. & Mittal, J. Assembly of multi-flavored two-dimensional colloidal crystals. Soft Matter 13, 5397–5408 (2017).
Auyeung, E. et al. DNA-mediated nanoparticle crystallization into Wulff polyhedra. Nature 505, 73–77 (2014).
Pretti, E. et al. Assembly of three-dimensional binary superlattices from multi-flavored particles. Soft Matter 14, 6303–6312 (2018).
Casey, M. T. et al. Driving diffusionless transformations in colloidal crystals using DNA handshaking. Nat. Commun. 3, 1209 (2012).
Scarlett, R. T., Ung, M. T., Crocker, J. C. & Sinno, T. Mechanistic view of binary colloidal superlattice formation using DNA-directed interactions. Soft Matter 7, 1912–1925 (2011).
Ashbaugh, H. S. & Hatch, H. W. Natively unfolded protein stability as a coil-to-globule transition in charge/hydropathy space. J. Am. Chem. Soc. 130, 9536–9542 (2008).
Wales, D. J. & Scheraga, H. A. Global optimization of clusters, crystals, and biomolecules. Science 285, 1368–1372 (1999).
Jones, E. et al. SciPy: Open Source Scientific Tools for Python. http://www.scipy.org/ (2001).
Byrd, R. H., Nocedal, L. P. J. & Zhu, C. A limited memory algorithm for bound constrained optimization. SIAM J. Sci. Comput. 16, 1190–1208 (1995).
Nelder, J. A. & Mead, R. A simplex method for function minimization. Comput. J. 7, 308–313 (1965).
Lloyd, S. P. Least squares quantization in PCM. IEEE Trans. Inf. Theory 28, 129–137 (1982).
Oganov, A. R. & Valle, M. How to quantify energy landscapes of solids. J. Chem. Phys. 130, 104504 (2009).
Behler, J. Atom-centered symmetry functions for constructing high-dimensional neural network potentials. J. Chem. Phys. 134, 074106 (2011).
Phillips, C. L. & Voth, G. Discovering crystals using shape matching and machine learning. Soft Matter 9, 8552 (2013).
Sadeghi, A. et al. Metrics for measuring distances in configuration spaces. J. Chem. Phys. 139, 184118 (2013).
Zhu, L. et al. A fingerprint based metric for measuring similarities of crystalline structures. J. Chem. Phys. 144, 034203 (2016).
Reinhart, W. F., Long, A. W., Howard, M. P., Ferguson, A. L. & Panagiotopoulos, A. Z. Machine learning for autonomous crystal structure identification. Soft Matter 13, 4733–4745 (2017).
Spellings, M. & Glotzer, S. C. Machine learning for crystal identification and discovery. AIChE J. 64, 2198–2206 (2018).
Barber, C. B., Dobkin, D. & Huhdanpaa, H. T. The quickhull algorithm for convex hulls. ACM Trans. Math. Softw. 22, 469–483 (1996).
Plimpton, S. Fast parallel algorithms for short-range molecular dynamics. J. Comput. Phys. 117, 1–19 (1995).
This work was supported by the U.S. Department of Energy, Office of Basic Energy Science, Division of Material Sciences and Engineering under Award (DE-SC0013979). E.P. acknowledges support from the National Institute of Standards and Technology Summer Undergraduate Research Fellowship (NIST SURF) program with Grant No. 70NANB16H. This research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported under Contract No. DE-AC02-05CH11231. Use of the high-performance computing capabilities of the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by the National Science Foundation, Project No. TG-MCB120014, is also gratefully acknowledged. Contribution of the National Institute of Standards and Technology, not subject to US Copyright.
These authors jointly supervised this work: Nathan A. Mahynski, Jeetain Mittal.
Chemical Sciences Division, National Institute of Standards and Technology, Gaithersburg, MD, 20899-8320, USA
Nathan A. Mahynski & Vincent K. Shen
Department of Chemical and Biomolecular Engineering, Lehigh University, 111 Research Drive, Bethlehem, PA, 18015-4791, USA
Evan Pretti & Jeetain Mittal
Nathan A. Mahynski
Evan Pretti
Vincent K. Shen
Jeetain Mittal
N.A.M., E.P., V.K.S., and J.M. designed the research. N.A.M. and E.P. performed the simulations and analyzed the data. All authors contributed to the writing of the manuscript.
Correspondence to Nathan A. Mahynski or Jeetain Mittal.
Journal peer review information: Nature Communications thanks Riccardo Ferrando, Andrei Petukhov and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.
Mahynski, N.A., Pretti, E., Shen, V.K. et al. Using symmetry to elucidate the importance of stoichiometry in colloidal crystal assembly. Nat Commun 10, 2028 (2019). https://doi.org/10.1038/s41467-019-10031-4
What is the precise definition of Gravitational Sphere of Influence (SOI)?
I am trying to understand the gravitational sphere of influence (SOI), but all I get by searching is the formula that you can find on Wikipedia, that is
$$ r_{SOI} = a \left( \frac{m}{M} \right)^{2/5} $$
m: mass of orbiting (smaller) body
M: mass of central (larger) body
a: semi-major axis of smaller body
When inputting the Moon's numbers into this formula, we get an SOI of 66,183 km for the Moon relative to the Earth. This is consistent with other sources on the web, for example the Apollo mission transcripts when they talk about entering the Moon's SOI.
What I don't understand is that when I calculate the gravitational forces between different bodies using Newton's laws, an object placed at this distance between the Earth and the Moon still gets a bigger pull from the Earth. Say, for example, that we had an object with a mass of 100 kg; these are the gravitational pulls (in newtons) that it would receive from the Earth and the Moon at different distances:
Force from Earth on Earth's surface : 979.866 N
Force from Earth at 384400 km (Moon dist) : 0.27 N
Force from Moon at 66183 km from Moon : 0.112 N
Force from Earth at 318216 km (66183 km from Moon) : 0.394 N
As you can see, the pulls from the Earth and the Moon balance at around 38,000 km from the Moon, not 66,000 km. This is somewhat counterintuitive to me, as I first thought that a spacecraft (for example) would get more pull from the Moon than from the Earth when it entered the Moon's gravitational sphere of influence. I suspect that it has to do with the fact that the Moon is in orbit around the Earth, i.e. it is in constant acceleration in the same direction as the Earth's pull, but I would like a clear explanation if somebody has one.
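Here is a minimal sketch of the numbers above (the values for G and the two masses are standard constants I've plugged in myself, not taken from any particular source):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # kg
M_MOON = 7.342e22    # kg
A = 384_400e3        # Earth-Moon distance, m

def grav_force(M, m, r):
    """Newtonian attraction (N) between masses M and m at separation r (m)."""
    return G * M * m / r**2

# Pull on a 100 kg object 66,183 km from the Moon (318,217 km from Earth)
f_moon  = grav_force(M_MOON, 100, 66_183e3)    # ~0.112 N
f_earth = grav_force(M_EARTH, 100, 318_217e3)  # ~0.394 N

# Point where the two pulls are equal in magnitude:
# M_moon / x^2 = M_earth / (A - x)^2  =>  x = A*s/(1+s), s = sqrt(M_moon/M_earth)
s = math.sqrt(M_MOON / M_EARTH)
x_equal = A * s / (1 + s)                      # ~38,400 km from the Moon

# The 2/5-power SOI formula gives a much larger radius
r_soi = A * (M_MOON / M_EARTH) ** (2 / 5)      # ~66,200 km
```

So the equal-pull point and the SOI radius really are different quantities, which is the heart of my question.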
newtonian-gravity
solar-system
orbital-motion
edited Oct 24, 2013 at 8:35
Martin Vézina
I was also wondering this for a while and found a not-entirely-complete derivation of the formula (starting from page 14).
The derivation uses the following equation, $$ \ddot{\vec{r}}+\underbrace{\frac{\mu_i}{\|\vec{r}\|^3}\vec{r}}_{-A_i}=\underbrace{-\mu_j\left(\frac{\vec{d}}{\|\vec{d}\|^3}+\frac{\vec{\rho}}{\|\vec{\rho}\|^3}\right)}_{P_j}, $$ where $\vec{r}$ is the vector between the centers of gravity of the spacecraft, indicated with $m$, and the celestial body with gravitational parameter $\mu_i$; $\vec{d}$ is the vector between the spacecraft and the celestial body with gravitational parameter $\mu_j$; and $\vec{\rho}$ is the vector between the centers of gravity of the celestial bodies $\mu_i$ and $\mu_j$. These vectors are also illustrated in the following figure.
Looking at the spacecraft from the accelerated reference frame of a celestial body, $A$ is defined as the primary gravitational acceleration and $P$ as the perturbing acceleration due to the other celestial body.
The SOI is defined, following Laplace, as the surface along which the following equation is satisfied, $$ \frac{P_j}{A_i}=\frac{P_i}{A_j}, $$ so $$ \frac{\mu_j\left(\frac{\vec{d}}{\|\vec{d}\|^3}+\frac{\vec{\rho}}{\|\vec{\rho}\|^3}\right)}{\mu_i\frac{\vec{r}}{\|\vec{r}\|^3}}=\frac{\mu_i\left(\frac{\vec{r}}{\|\vec{r}\|^3}+\frac{\vec{\rho}}{\|\vec{\rho}\|^3}\right)}{\mu_j\frac{\vec{d}}{\|\vec{d}\|^3}}. $$ This will not return a spherical surface, but when $\mu_i \ll \mu_j$ it can be approximated by one, whose radius is equal to $$ \|\vec{r}\|\approx r_{SOI}=\|\vec{\rho}\|\left(\frac{\mu_i}{\mu_j}\right)^{\frac{2}{5}}. $$ This is where the lecture slides stop, so I will try to fill in the rest. When $\mu_i \ll \mu_j$, the SOI will be relatively close to $\mu_i$, so $$ \|\vec{\rho}\|\approx\|\vec{d}\|, $$ and if you look at the figure above you can see that when $\|\vec{r}\|$ is small, $\vec{d}$ and $\vec{\rho}$ point in almost opposite directions and form a triangle with $\vec{r}$ such that $$ \vec{\rho}+\vec{d}=\vec{r}. $$ Rewriting the definition of the surface using this approximation gives $$ \mu_j^2\frac{\vec{d}}{\|\vec{\rho}\|^6}=\mu_i^2\frac{1}{\|\vec{r}\|^3}\left(\frac{\vec{r}}{\|\vec{r}\|^3}+\frac{\vec{\rho}}{\|\vec{\rho}\|^3}\right). $$ The other approximation to make is that $\|\vec{r}\|\ll\|\vec{\rho}\|$, so that $$ \frac{\vec{r}}{\|\vec{r}\|^3}+\frac{\vec{\rho}}{\|\vec{\rho}\|^3}\approx\frac{\vec{r}}{\|\vec{r}\|^3}. $$ The equation can then be reduced to $$ \mu_j^2\frac{\vec{d}}{\|\vec{\rho}\|^6}=\mu_i^2\frac{\vec{r}}{\|\vec{r}\|^6}. $$ Treating $\vec{r}$ as a constant radius makes the problem one-dimensional, so $\|\vec{r}\|$ can substitute for $\vec{r}$; and since there are no more vector additions (in which small differences between the vectors could matter), $\|\vec{\rho}\|$ can also substitute for $\vec{d}$. This gives the final equation $$ \mu_j^2\|\vec{r}\|^5=\mu_i^2\|\vec{\rho}\|^5\longrightarrow\|\vec{r}\|=\|\vec{\rho}\|\left(\frac{\mu_i}{\mu_j}\right)^{\frac{2}{5}}. $$
fibonatic
$\begingroup$ Thank you for this answer, it gives a good explanation of the formula. Originally, I posted my question because I programmed a gravity simulation, and in a three-body scenario I was never able to put a body anywhere near the SOI of another (not even half of it). It always ends up around the larger one, though other orbits behave as expected with the same calculations. I see the theoretical formula, but I don't see it happening in a concrete application. Is anybody aware of a simulation where we could observe it? Maybe my physics isn't right. $\endgroup$
– Martin Vézina
$\begingroup$ @MartinVézina, the SOI has more of an application in patched conics approximation, which mostly deals with hyperbolic orbits within a SOI. The thing you are referring to is called the Hill sphere. $\endgroup$
– fibonatic
$\begingroup$ You're absolutely right, there's a lot of confusion between the two; I see it now. A lot of sources use equigravisphere and sphere of influence interchangeably, and either term can mean either definition. Some sources even suggest that the SOI and the Hill sphere are the same thing, which I understand now is not the case. Very confusing for a layman like me. $\endgroup$
Parametric Press
The Climate Issue
The Hidden Cost of Digital Consumption
Digital doesn't mean green. How much carbon dioxide was released when you loaded this article?
Halden Lin
Aishwarya Nirmal
Shobhit Hathi
Lilian Liang
A Paradigm Shift
In 2014, Google had to shark-proof their underwater cables. It turned out that sharks were fond of chewing on the high-speed cables that make up the internet. While these "attacks" are no longer an issue, they are a reminder that both the internet and sharks share the same home.
It is difficult to ignore the environmental consequences of ordering a physical book on Amazon. But what happens when you download an E-Book instead?
When you buy a book on Amazon, the environmental consequences are difficult to ignore. From the jet fuel and gasoline burned to transport the packages, to the cardboard boxes cluttering your house, the whole process is filled with reminders that your actions have tangible, permanent consequences. Download that book onto an e-reader, however, and this trail of evidence seems to vanish. The reality is, just as a package purchased on Amazon must be transported through warehouses and shipping centers to get to your home, a downloaded book must be transported from data centers and networks across the world to arrive on your screen.
And that's the thing: your digital actions are only possible because of the physical infrastructure built to support them. Digital actions are physical ones. This means that Google has to be mindful of sharks. This also means that we should be mindful of the carbon emissions produced by running the world's data centers and internet networks. According to one study, the Information and Communications Technology sector is expected to account for 3–3.6% of global greenhouse gas emissions in 2020 [1]. That's more than the fuel emissions for all air travel in 2019, which clocked in at 2.5%. The answer to "why?" and "how?" may not be immediately obvious, but that's not the fault of consumers. A well-designed, frictionless digital experience means that users don't need to worry about what happens behind the scenes and, by extension, the consequences. This is problematic: the idea of "hidden costs" runs contrary to principles of environmental awareness. Understanding how these digital products and services work is a crucial first step towards addressing their environmental impact.
A Look at Different Types of Media
Each type of digital activity produces different levels of emissions. The amount of carbon dioxide emitted by a particular digital activity is a factor of the quantity of information that needs to be loaded. More specifically, we can estimate emissions using the following formula:
n bytes × X kWh/byte × Y g CO₂/kWh
A byte is a unit of information.
X = 6 × 10⁻¹¹ is the global average energy consumed (in kWh) for transmitting one byte of data in 2015, as calculated by Aslan et al. (2017) [2].
Y = 707 is the EPA's U.S. national weighted average for grams of CO₂ emitted per kWh of electricity consumed.
Ideally, this formula would also include the energy usage of the data source and your device, but these will vary across digital media providers, users, and activities. This formula therefore provides a reasonable lower bound for emissions.
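As a sketch, the formula above can be wrapped in a few lines of Python. The ~1.2 MB page size used below is an illustrative assumption, roughly what a 51 mg estimate implies under this formula:

```python
# The article's lower-bound estimate: bytes -> kWh -> milligrams of CO2.
KWH_PER_BYTE = 6e-11   # X: Aslan et al. (2017) global average for 2015
G_CO2_PER_KWH = 707    # Y: EPA U.S. national weighted average

def transfer_emissions_mg(n_bytes):
    """Milligrams of CO2 emitted to transmit n_bytes over the internet."""
    return n_bytes * KWH_PER_BYTE * G_CO2_PER_KWH * 1000

# A hypothetical ~1.2 MB page lands near a 51 mg estimate
print(f"{transfer_emissions_mg(1.2e6):.0f} mg CO2")  # 51 mg CO2
```

Remember that this is a lower bound: the energy used by the data source and by your own device is not included.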
Emissions of Websites
Each bar represents the carbon emitted when scrolling through a website for 60 seconds. Car distance equivalent is calculated using the fuel economy of an average car. Click or tap each bar to show a preview clip of the scroll.
Sources: Aslan et al. 2017 [2], EPA
In order to load the Parametric Press article that you are currently reading, we estimate that 51 milligrams of CO₂ were produced. The same amount of CO₂ would be produced by driving a car 0.20 meters (based on the fuel economy of an average car, according to the EPA). These emissions are a result of loading data for the text, graphics, and visualizations that are then rendered on your device.
The chart below displays the carbon emitted when loading various websites and scrolling through each at a constant speed for 60 seconds (this scrolling may incur more loading, depending on the site). All data collection was done using a Chrome web browser over a 100 Mbps Wi-Fi connection. Clicking or tapping each bar in the chart will show a preview of that scroll, sped up to a 5-second clip.
Note: Subsequent visits to websites may produce fewer emissions, as parts of the site may be saved in something called a cache.
As shown above, loading websites like Google, which primarily show text, produces much lower emissions than loading websites like Facebook, which load many photos and videos to your device.
Emissions of Audio
Each bar represents the carbon emitted when listening to an audio clip for 60 seconds (no audio preview). Note: The amount of data loaded may represent more than a minute's worth due to buffering.
Let's take a closer look at one common type of non-text media: audio. When you listen to audio on your device, you generally load a version of the original audio file that has been compressed into a smaller size. In practice, this size is often determined by the choice of bitrate, which refers to the average amount of information in a unit of time.
The NPR podcast shown in the visualization was compressed to a bitrate of 128 kilobits per second (there are 8 bits in a byte), while the song "Old Town Road", retrieved from Spotify, was compressed to 256 kilobits per second. This means that in the one minute that both audio files were played, roughly twice as much data needed to be loaded for "Old Town Road" as for "Digging into 'American Dirt'", which gives the song about twice as large a carbon footprint. The fact that the song has greater carbon emissions is not a reflection on the carbon footprint of songs versus podcasts, but rather of the difference in the bitrate of each audio file. These audio examples have lower carbon emissions than most of the multimedia websites shown earlier.
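The bitrate arithmetic can be sketched directly; this simplified version assumes the bitrate alone determines the bytes transferred, ignoring buffering and protocol overhead:

```python
KWH_PER_BYTE = 6e-11   # Aslan et al. (2017) global average
G_CO2_PER_KWH = 707    # EPA U.S. national weighted average

def audio_emissions_mg(bitrate_kbps, seconds):
    """CO2 (mg) to stream audio, ignoring buffering and protocol overhead."""
    n_bytes = bitrate_kbps * 1000 / 8 * seconds   # 8 bits per byte
    return n_bytes * KWH_PER_BYTE * G_CO2_PER_KWH * 1000

podcast_mg = audio_emissions_mg(128, 60)   # 128 kbps podcast, one minute: ~41 mg
song_mg    = audio_emissions_mg(256, 60)   # 256 kbps song, one minute: ~81 mg
# Doubling the bitrate doubles the footprint, matching the chart.
```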
Emissions of Video
Each bar represents the carbon emitted when watching a video for 60 seconds at two different qualities, 360p and 1080p (to reduce the size of this article, there is only one 360p preview for both qualities). Note: like audio, videos are buffered, which means that playing the video may have loaded more than 60 seconds of content.
Videos are a particularly heavy digital medium. The chart below shows emissions for streaming different YouTube videos for 60 seconds each, at two different qualities: 360p and 1080p.
When you view a video at a higher quality, you receive a clearer image on your device because the video that you load contains more pixels. Pixels are units of visual information. In the chart, the number in front of the "p" for quality refers to the height, in pixels, of the video. This is why there are greater emissions for videos at 1080p than those at 360p: more pixels means more data loaded per frame.
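For a sense of scale, here is a quick comparison of raw pixel counts per frame at the two qualities, assuming the standard 16:9 resolutions (compression means the actual bitrate gap is usually smaller than the raw ratio):

```python
# Raw pixels per frame at standard 16:9 resolutions
def frame_pixels(width, height):
    return width * height

p_360  = frame_pixels(640, 360)     # 230,400 pixels
p_1080 = frame_pixels(1920, 1080)   # 2,073,600 pixels

# 9x the raw visual information per frame
print(p_1080 // p_360)  # 9
```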
Bitrate also plays a role in video streaming. Here, bitrate refers to the amount of visual and audio data loaded to your device over some timespan. From the chart, it is clear that the "Old Town Road" music video has a higher bitrate than the 3Blue1Brown animation at both qualities. This difference could be attributed to a variety of factors, such as the frame rate, compression algorithm, and the producers' desired video fidelity.
In the examples provided, videos produced far more CO₂ than audio over the same time span. This is especially apparent when comparing the emissions for the audio of "Old Town Road" and its corresponding music video.
Ads are another common form of digital content. Digital media providers often include advertisements as a method of generating revenue. When the European version of the USA Today website removed ads and tracking scripts to comply with GDPR (General Data Protection Regulation), the size of its website decreased from 5000 to 500 kilobytes with no significant changes to the appearance of the site. This means that the updated, ad-free version of the website produced roughly 10 times less CO₂ on each load than the original.
Not only does a video require loading both audio and visual data, but visual data is also particularly heavy in information. Notice that loading the website Facebook produced the most emissions, likely a result of loading multiple videos and other heavy data.
Media Emissions by Medium
Click or tap the "Show Timeline" button to see a timeline of loading packets for each type of media. Hover (or tap) and drag to scrub through and view cumulative emissions.
Aslan et al. 2017 [2]
In a lot of cases, when you stream content online, you don't receive all of the information for that content at once. Instead, your device loads incremental pieces of the data as you consume the media. These pieces are called packets. In each media emission visualization, we estimated emissions based on the size and quantity of the packets needed to load each type of media.
Click or tap the "Show Timeline" button at the bottom of the visualization below to see a timeline breakdown of each type of media.
In this timeline breakdown, we can see that the way in which packets arrive for video and audio differs from the pattern for websites. When playing video and audio, packets tend to travel to your device at regular intervals.
In contrast, the packets for websites are weighted more heavily towards the beginning of the timeline, but websites may make more requests for data as you scroll through and load more content.
A Case Study: YouTube's Carbon Emissions
We've just seen how digital streams are made up of packets of data sent over the internet. These packets aren't delivered by magic. Every digital streaming platform relies on a system of computers and cables, each part consuming electricity and releasing carbon emissions. When we understand how these platforms deliver their content, we can directly link our digital actions to the physical infrastructure that releases carbon dioxide.
Let's take a look at YouTube. With its 2 billion+ users, YouTube is the prototypical digital streaming service. How does one of its videos arrive on your screen?
YouTube Pipeline + Electricity Usage for 2016
[Interactive graphic: electricity usage at each stage of YouTube's 2016 content delivery pipeline, from origin data centers (350 GWh) through Google's network (1,900 GWh), totaling 19.6 TWh (Preist et al. [3]), shown alongside estimates for the whole ICT sector in 2010 and 2020 (Belkhir & Elmeligi [1]).]
Videos are stored on servers called "data centers": warehouses full of giant computers designed for storing and distributing videos. For global services, these are often placed around the world.
YouTube's parent company, Google, has 21 origin data centers strategically placed throughout 4 continents (North America, South America, Europe, and Asia).
Let's take a closer look at one of these origin data centers. This one is in The Dalles, Oregon, on the West Coast of the United States.
For information to get from this data center to you, it first goes through Google's own specialized data network to what they call Edge Points of Presence (POPs for short), which bring data closer to high traffic areas. There are three metro areas with POPs in this region: Seattle, San Francisco, and San Jose.
From these POPs, data is routed through smaller data centers that form the "Google Global Cache" (GGC). These data centers are responsible for storing the more popular or recently watched videos for users in a given area, ensuring no single data center is overwhelmed and service stays zippy. There are 22 in the region shown on the map. A more general term for this collection of smaller data centers is a Content Delivery Network (CDN for short).
In 2018, researchers from the University of Bristol used publicly available data to estimate the energy consumption of each step of YouTube's pipeline in 2016.
Google does not disclose its data center energy consumption for YouTube traffic specifically. Therefore, Chris Preist, Daniel Schien and Paul Shabajee used the energy consumption numbers released for a similar service's (Netflix) data centers to estimate YouTube's data center energy consumption. They found that all data centers accounted for less than 2% of YouTube's electricity use in 2016 [3].
Google doesn't have its own network for communication between POPs and the Google Global Cache. For that, it uses the internet.
The internet is a global "highway of information" that allows packets of data to be transmitted as electrical impulses. A packet is routed from a source computer, through cables and intermediary computers, before arriving at its destination. In addition to the 550,000 miles of underwater cables that form the backbone of the internet, regions have their own land-based networks.
Here's (roughly) what the major internet lines of the west coast look like [4]. Perhaps not surprisingly, it resembles our interstate highway system. Preist et al. estimate that this infrastructure consumed approximately 1,900 Gigawatt-hours of electricity to serve YouTube videos in 2016 [3], enough to power 170,000 homes in the United States for a year, according to the EIA.
The packets traveling across this information highway need "off-ramps" to reach your screen. The off-ramps that packets take are either "fixed line" residential networks (wired connections from homes to the internet) or cellular networks (wireless connections from cell phones to the internet). The physical infrastructure making up these two types of networks differ, and therefore have distinct profiles of energy consumption and carbon emissions.
An estimated 88% of YouTube's traffic went through fixed line networks (from your residential Cable, DSL, or Fiber-Optic providers), and this accounted for approximately 4,400 Gigawatt-hours of electricity usage [3]—enough to power over 400,000 U.S. homes.
In comparison, only 12% of YouTube's traffic went through cellular networks, but they were by far the most expensive part of YouTube's content delivery pipeline, accounting for approximately 8,500 Gigawatt-hours of electricity usage—enough to power over 750,000 U.S. homes [3]. At over 10 times the electricity usage per unit of traffic, the relative inefficiency of cellular transmission is clear.
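As a rough check on that "over 10 times" claim, we can divide each network's electricity use by its share of traffic, a simplification that assumes energy use scales with traffic share:

```python
# Electricity per unit of traffic share, from the figures above
fixed_share, fixed_gwh = 0.88, 4400   # fixed-line networks
cell_share,  cell_gwh  = 0.12, 8500   # cellular networks

fixed_intensity = fixed_gwh / fixed_share   # ~5,000 GWh per unit of share
cell_intensity  = cell_gwh / cell_share     # ~70,800 GWh per unit of share

print(cell_intensity / fixed_intensity)     # ~14x
```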
Eventually, the video data reaches your device for viewing. While your device might not technically be part of YouTube's content delivery pipeline, we can't overlook the cost of moving those pixels. Devices accounted for an estimated 6,100 Gigawatt-hours of electricity usage [3]: that's over half a million U.S. homes worth of electricity.
In total, Preist et al.'s research estimated that YouTube traffic consumed 19.6 terawatt-hours of electricity in 2016 [3]. Using the world emissions factor for electricity generation as reported by the International Energy Agency, they place the resulting carbon emissions at 10.2 million metric tons of CO₂ (offset to 10.1 after Google's renewable energy purchases for its data center activities).
YouTube emitted nearly as much CO₂ as a metropolitan area like Auckland, New Zealand did in 2016. Put in other words, 10.2 MtCO₂ is equivalent to the yearly footprint of approximately 2.2 million cars in the United States.
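The car equivalence is easy to reproduce; the per-car figure below is an assumption, the EPA's often-cited average of roughly 4.6 metric tons of CO₂ per typical passenger vehicle per year:

```python
# How many average U.S. cars emit as much CO2 in a year as YouTube did in 2016?
youtube_tonnes = 10.2e6   # metric tons CO2 (Preist et al. estimate)
tonnes_per_car = 4.6      # EPA typical passenger-vehicle figure (assumption)

cars = youtube_tonnes / tonnes_per_car
print(f"{cars/1e6:.1f} million cars")  # 2.2 million cars
```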
YouTube's monthly active user count has increased by at least 33% since 2016 (from 1.5 billion in 2017 to over 2 billion in 2019). This means its current CO₂ emissions in 2020 could be even higher than Preist et al.'s 2016 estimate.
If we assume that the emissions factor for electricity usage is similar for each part of the pipeline, we can get a rough idea of the carbon footprint profile of YouTube. However, it's important to note that this breakdown isn't necessarily representative of the entirety of the information and technology sector. A 2018 study by McMaster University researchers Lotfi Belkhir and Ahmed Elmeligi paints a surprisingly different picture for the sector as a whole [1].
To compare the two studies, we can group the "Internet", "Residential Network", and "Cellular Network" sections into an umbrella "Networks".
* For data center emissions: Preist et al. note that Google purchases renewable energy to match its data center electricity consumption.
† For device emissions: Preist et al. do not account for the energy required to manufacture devices.
Belkhir and Elmeligi provide emission estimates for both 2010 (retrospective) and 2020 (prospective). Most surprising is the weight Data Centers and CDNs have in this breakdown. We can speculate that the relatively high bandwidth required to transfer video contributes at least partially to the outsized share of the "Networks" category for YouTube.
Note that, unlike Preist et al, Belkhir and Elmeligi do account for the emissions generated from the production of devices.
In the same study, Belkhir and Elmeligi created two models to project the ICT sector's emissions decades forward. Even their "unrealistically conservative" linear model put tech at a 6–7% share of GHG emissions in 2040, and their exponential model had tech reaching over 14%.
What is being done about this? Aside from increasing the efficiency of each part of the pipeline and taking advantage of renewable energy, mindful design could also go a long way.
In the context of digital streaming, Preist et al. point out that much of YouTube's traffic comes from music videos, and a good number of those "views" are likely just "listens". If this "listen" to "view" ratio were even just 10%, YouTube could have reduced its carbon footprint by about 117 thousand tons of CO₂ in 2016, just by intelligently sending audio when no video is required. That's over 2.2 million gallons of gasoline worth of CO₂ in savings.
Tech's Inconvenient Truths
Digital streaming is not the only instance where environmentally harmful aspects of technology are outside of public consciousness. Tech is rarely perceived as environmentally toxic, but here's a surprising fact: Santa Clara County, the heart of "Silicon Valley", has the most EPA classified "superfund" (highly polluted) sites in the nation. These 23 locations may be impossible to fully clean. Silicon Valley is primarily to blame.
The Superfund Sites of Santa Clara Valley
The 23 superfund sites of Santa Clara County. Hover over each one for more information. Larger Hazard Ranking System Scores are worse.
In their book The Silicon Valley of Dreams, David Pellow & Lisa Park detail how Santa Clara County became "Silicon Valley" in the 1960s when companies in the area found ways to embed computer circuits into small wafers of silicon. Manufacturing these chips is inherently toxic, and its chemical runoff has been linked to increases in cancer and birth defect rates around the Bay. However, the industry avoided responsibility for decades and actually cultivated a clean image, publicizing their pristine campuses and white-collar workforce [5].
Change in Silicon Valley never came easily. In the 1980s, residents around the Bay took matters into their own hands, mapping out patterns of illness around factories and forming environmental coalitions. It wasn't until this public outcry that one of the largest Silicon Valley manufacturers closed down a particularly problematic factory [5]. For the first time, a tech company admitted responsibility for manufacturing-related illnesses, blowing a hole in the industry's clean facade. In the decades that followed, factories began shutting down as manufacturing moved out of Silicon Valley, but their environmental impact is still being felt today, primarily by working-class communities of color. A study done in 2014 by researchers from Santa Clara University found that "social vulnerability, cumulative environmental hazards, and environmental benefits exhibit distinct spatial patterns in [Santa Clara County]" [6].
The tech industry's legacy of toxic mining and manufacturing continues today—with hardware giants like Apple taking the throne of Silicon Valley's founding industrialists—only now on a global scale. In 2009, China produced 95% of the world's "rare earth" minerals, with an estimated 70% coming from the Bayan Obo mining district in Inner Mongolia. In her book Rare Earth Frontiers, Julie Klinger discusses how the process of mining "rare earth" minerals releases contaminated water into the Yellow River, Bayan Obo's local water supply. The local population has suffered devastating effects, as farmland has turned to lakes of toxic waste, and cancer related deaths have spiked. Since 2004, villagers have organized to compel the government to take action. While the state has not fully fixed the devastation it's caused, it has responded to public pressure by removing some of the toxic waste and providing the people of Bayan Obo better access to healthcare [7].
Satellite view of the Bayan Obo mining district. The open-pit mines and lakes of toxic waste can be seen clearly even from satellite images. (Credit: Google, Maxar Technologies)
Tech's complicity in environmental destruction is not just limited to toxic manufacturing waste and a large carbon footprint. Companies also have a large influence on the climate crisis in the context of policy, the broader economy, and the flow of information. Reports in 2019 revealed that Google has made significant contributions to climate denialist groups, including the Competitive Enterprise Institute (CEI), which helped convince the Trump administration to withdraw from the Paris Agreement in 2017. Facebook has come under fire for lax action against climate change denial on their platform, where disinformation can easily spread without diligent fact-checking. Google, Microsoft, and Amazon have partnered with large oil companies to build machine learning tools that streamline oil production. In fact, the oil industry invested an estimated $1.75 billion in 2018 into machine learning tools, which is projected to grow to $4.01 billion by 2025.
Amazon employees joined the global "climate strike" in September 2019 to protest their company's complicity in the climate crisis. Someone holds a sign that says "No AWS For Gas And Oil". (Credit: Lilian Liang)
Our purpose in throwing light onto this destructive behavior is not simply to paint a bleak future. Through public pressure and the collective organizing of tech employees, there has been media coverage of some success in holding the tech industry responsible for their environmental destruction. In 2019, after the Amazon Employees for Climate Justice organization led a large walkout in support of the global climate strike, Amazon pledged to reach net-zero carbon emissions by 2040. Similarly, in response to employee pressure, Google pledged to stop funding climate change deniers in 2020. Google also promised to rescind all future contracts with oil companies in response to a Greenpeace report about tech's oil contracts.
Regarding digital consumption and its associated emissions, many tech companies have set varying carbon footprint targets for 2030 and beyond. Some of these companies will rely on offsetting carbon emissions (e.g., by planting trees). Others plan to run their operations completely on carbon-free energy sources. It's important to note, however, that these pledges often don't include emissions produced by usage from sources the company doesn't "own" but is ultimately responsible for. For example, Google pledging carbon-free doesn't mean the YouTube videos sent over the internet and viewed on devices will be emission-less.
The efficacy of carbon offsets is debated: there are no guarantees that carbon offsets actually result in the desired amount of greenhouse gas removed from the atmosphere, and it is never instantaneous.
These pledges are not enough to fully mitigate these companies' destructive practices. In our present climate crisis, carbon reductions cannot come soon enough, and carbon-free technology should be the ultimate goal. In addition, these pledges don't account for the climate impact tech has through the other avenues we've discussed. The good news is that public pressure and collective organizing have the power to raise the bar for climate accountability. Change can only occur if we understand and acknowledge the hidden, yet harmful consequences of the technology we use.
Halden Lin is a visualization designer, developer, and researcher. He's currently creating visualization experiences at Apple.
Aishwarya Nirmal works as a software engineer at Airbnb, and enjoys visual design and writing in her spare time.
Shobhit Hathi works as an applied scientist at Microsoft focusing on natural language processing.
Lilian Liang is a writer and software engineer at Apple.
Sports Engineering
Volume 22, Article 13, June 2019
Flow visualisation in swimming practice using small air bubbles
Josje van Houwelingen
Rudie P. J. Kunnen
Willem van de Water
Ad P. C. Holten
GertJan F. van Heijst
Herman J. H. Clercx
First Online: 22 June 2019
Measuring Behavior in Sport and Exercise
A non-harmful system to visualize the flow around an entire swimmer in a regular swimming pool is developed. Small air bubble tracers are injected through the bottom of the swimming pool in a prescribed measurement area. The motion of these bubbles, which will be largely induced by the swimmer's motion, is captured by a camera array. The two-dimensional velocity field of the water at arbitrary planes of interest can be resolved using a refocussing method in combination with an optical visualisation method, based on particle image velocimetry, which is commonly used in fluid dynamics research. Using this technique, it is possible to visualize coherent flow structures produced during swimming; it is demonstrated here for the dolphin kick.
Keywords: Swimming · Flow visualisation · Synthetic aperture PIV · Dolphin kick · Vortices
This article is a part of Topical Collection in Sports Engineering on Measuring Behavior in Sport and Exercise, edited by Dr. Tom Allen, Dr. Robyn Grant, Dr. Stefan Mohr and Dr. Jonathan Shepherd.
The online version of this article ( https://doi.org/10.1007/s12283-019-0306-5) contains supplementary material, which is available to authorized users.
1 Introduction

New insights into propulsion mechanisms and resistance can be gained by studying the swimmer from a hydrodynamical point of view. The motion of the swimmer drives fluid motion, which is reflected in the wake of the swimmer. Studying the flow structures emerging in this wake can give insight into the forces both experienced and applied by the swimmer, and may thus be used as a diagnostic for the efficiency of the swimming technique. The strength, size and direction of motion of coherent vortex structures, and the interaction of the swimmer with these structures, are all expected to be related to the swimming efficiency.
Within the area of experimental fluid dynamics several techniques exist to visualize and quantify the velocity field of a flow. A widely used technique is particle image velocimetry (PIV) [2, 14], in which the flow is seeded with neutrally buoyant tracer particles. The motion of these particles is captured with one or more cameras, and the velocity field is evaluated from spatial correlations between subsequent frames, resulting in a coarse-grained velocity field.
In the last decade it has become more common to apply flow visualisation techniques in advanced studies of swimming propulsion [8, 9, 12, 17, 18, 19], optionally in combination with force measurements, to observe the flow phenomena in the neighbourhood of the swimmer and to analyse the propulsive mechanisms. For example, the unsteady flow field around the hand of a swimmer during front crawl swimming has been evaluated in more detail by Matsuuchi [12]. The vortices generated during underwater undulatory swimming have been visualized by Hochstein et al. [8, 9]. They suggested that vortex re-capturing may be used to enhance propulsion in human undulatory swimming. Takagi et al. [17, 18, 19] have clarified the propulsive mechanisms during front crawl swimming and the sculling technique by a combination of pressure measurements at the hand and visualisation of the unsteady flow field around (robotic) hands. They concluded that the unsteady flow phenomena are essential in generating high propulsive forces. For instance, high propulsive forces during sculling motion are possible due to vortex re-capturing.
So far, all studies concerning flow visualisation in human swimming have been performed in a laboratory setting. In this study, a non-harmful system to visualize the flow around a swimmer in a regular swimming pool is developed, rendering these techniques available for practical use. The application in practice is accompanied by several restrictions and requirements, which makes the development challenging.
Our aim is to measure the velocity field in planes that can be selected during processing of the acquired images. For this we use synthetic aperture PIV (SA-PIV) [5], which has been proven suitable for analysing unsteady 3D biological flows [11, 13]. Rather than reconstructing the full 3D velocity field, however, we use the 3D information only to select planes of interest. This reduces the processing time, because the complete 3D field does not have to be processed for PIV. Moreover, if only planes are considered instead of volumes, a very tight calibration is not required in the SA-PIV technique. This puts less pressure on the accuracy of the 3D calibration [5], which cannot be guaranteed in the swimming pool.
Usually in PIV experiments, small neutrally buoyant particles (which are typically illuminated by a laser light sheet) are used to act as tracer particles for the flow. However, this is not allowed in the swimming pool. Alternatively, small air bubbles have been chosen as tracer particles.
In Sect. 2 of this paper the experimental setup and the methods to reveal the flow velocity field are presented. In Sect. 3 the proof of principle of the application of this technique in swimming practice is shown by means of visualizing the flow in the wake of a swimmer performing different styles of the dolphin kick.
2 Methods

We first discuss the ideas behind synthetic aperture refocussing in Sect. 2.1. Subsequently, we present the experimental setup in the swimming pool in Sect. 2.2, followed by an outline of the post-processing in Sect. 2.3. For more details about the methods and setup we refer to the dissertation corresponding to this work [20].
2.1 Synthetic aperture refocussing
The synthetic aperture refocussing technique relies on the fact that the apparent position of a certain object relative to another object is shifted when viewed from different positions. With this shift, known as parallax, depth information can be obtained. In our application this depth information is used to select planes of interest in the velocity field. A schematic representation of the working principle of the synthetic aperture refocussing technique is given in Fig. 1.
Schematic sketch of the working principle of SA-PIV. a Schematic demonstration of how particles on two planes of interest are captured on the images of a camera array and the resulting refocused images. \(Z_1\) and \(Z_2\) denote the planes of interest on different Z positions. b The mutual shift of the apparent position of particles on different cameras is given by \(d_\mathrm{A}\) and \(d_\mathrm{B}\). The local image coordinates are indicated by (x, z). c 2D illustration of the application of the image shifts to obtain a refocused image. The ideas for the schematic representation in this figure are borrowed from Belden et al. [5]
In synthetic aperture techniques, multiple cameras are used, all of them recording the same measurement volume. Since all cameras view the measurement volume from a slightly different position, the images of particles on different cameras are mutually shifted, as illustrated by the blue dot on plane \(Z_2\) and the red dot in \(Z_1\) in Fig. 1a. With the help of a calibration [20], the shifts (\(d_\mathrm{A}\) and \(d_\mathrm{B}\) in Fig. 1b) related to each different camera position and each plane of interest throughout the measurement volume can be determined.
During preprocessing this information can be used to align the images of different cameras into one refocused image with a narrow depth of field on a chosen focal plane, as illustrated in Fig. 1c. When refocussing happens on plane \(Z_1\), the red dot will appear in focus with a high intensity, while particles elsewhere in the measurement volume, like the blue dot, are out of focus due to the parallax of the cameras and appear repeatedly with low intensity. This makes it possible to isolate the plane of interest. In summary, while all bubbles are in focus on all cameras, "refocussing" on the chosen plane is done by shifting and stacking images.
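The paper's processing was done in Matlab; purely as an illustration of the shift-and-sum principle, here is a NumPy toy sketch with invented one-dimensional images and disparities (not the authors' code):

```python
import numpy as np

def refocus(images, shifts):
    """Shift-and-sum refocussing: undo each camera's parallax shift
    for the chosen focal plane, then average. Particles on that plane
    add up coherently; particles on other planes smear into ghosts."""
    aligned = [np.roll(img, -s) for img, s in zip(images, shifts)]
    return np.mean(aligned, axis=0)

n = 64
cam0, cam1 = np.zeros(n), np.zeros(n)
# A particle on plane Z1 appears at pixel 20 in cam0 and 24 in cam1
# (disparity 4 px); a particle on plane Z2 has disparity 10 px.
cam0[20] = 1.0; cam1[24] = 1.0   # in-plane particle (Z1)
cam0[40] = 1.0; cam1[50] = 1.0   # out-of-plane particle (Z2)

# Refocus on Z1 by undoing each camera's Z1 disparity (0 and 4 px).
img_z1 = refocus([cam0, cam1], shifts=[0, 4])
# The Z1 particle stacks to full intensity at pixel 20, while the Z2
# particle splits into two half-intensity ghosts, so a threshold on
# the refocused image isolates the plane of interest.
```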
2.2 Experimental setup

The experimental setup consists of a bubble system to generate the bubble tracers and a camera system to capture the motion of the bubbles. The bubbles are illuminated by ambient light (daylight and regular pool lighting). A black canvas on the wall opposite the cameras enhances the visibility of the bubbles. A schematic plan view of the bubble and camera system is given in Fig. 2.
Schematic plan view of the bubble system placed in the swimming pool. The camera system is implemented in a special double wall of the swimming pool. On the wall opposite to the camera system a black canvas is placed. On the right side of the figure, a schematic frontal view of the camera array is shown, with the camera numbering shown in the figure. The X, Y, Z-directions are denoted in the figure, with the origin in the lower left corner on the double bottom. The dashed–dotted line indicates the field of view (FOV) of the cameras
The position of the measurement volume was prescribed (the third lane relative to the side wall at \(Z = 0\)). The visualisation of the flow around an entire swimmer requires a large field of view (FOV) and thus a large measurement volume. The choice of cameras and lenses was attuned to the principles of the synthetic aperture refocussing technique [5]. Besides that, the design rules for PIV were also taken into account in the design of the system.
2.2.1 Camera system
Six monochrome cameras (Sony XCG-CG240) with a full-frame resolution of \(1920 \times 1200\) pixels at a frame rate of 41 Hz were placed in a hexagonal frame with 0.3 m spacing in the double side wall of the swimming pool (see Fig. 2). To obtain a higher frame rate of 50 Hz, the height of the active region on the light sensor chip was cropped to 956 pixels. All cameras were connected with Ethernet cables (bandwidth: 100 gigabit/s) to the computer and a trigger box for synchronization. Each camera (with lens) was placed in an underwater casing and aligned to a central point in the measurement volume [\((X,Y,|Z|)=(7.5, 1.05, 5.95)\,\hbox {m}\)] to have as much overlap in the FOV of the cameras as possible. All cameras have a 16 mm lens (KOWA LM16HC, F 1.4, f = 16 mm). This camera–lens combination results in a depth of field of \(\sim 2\,\hbox {m}\) and a FOV of \(\sim 3.1 \times 1.5\,\hbox {m}^2\) in the measurement volume (at 5.95 m from the cameras). The minimum focal plane spacing \(\delta Z\) using this configuration is 24 mm [5], which is sufficient when considering multiple thick slices within the measurement volume to explore with 2D-PIV.
In this setup one pixel corresponds to \(\sim 1.6\,\hbox {mm}\). In a typical PIV experiment, interrogation windows of \(32 \times 32\) pixels are used with a 50 % overlap region [2, 14]. This would result in a spatial resolution of \(\sim 25\,\hbox {mm}\), which should be sufficient to observe coherent vortex structures with sizes of \(\sim 125\,\hbox {mm}\). Considering the PIV design rule for in-plane motion (displacements of \(\frac{1}{4}\) window size) velocities up to 0.65 m/s can be resolved with ease [2].
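These design numbers can be checked with a short back-of-the-envelope calculation using the values quoted above (the quoted 0.65 m/s versus the computed 0.64 m/s is a matter of rounding):

```python
px_mm = 1.6        # projected pixel size in the measurement plane (mm)
window_px = 32     # final interrogation window size (pixels)
overlap = 0.5      # 50% window overlap
frame_rate = 50.0  # frames per second (cropped sensor)

# Vector spacing with 50% overlap: one velocity vector per 16 px.
resolution_mm = window_px * (1 - overlap) * px_mm

# Quarter-window rule: in-plane displacements of at most 8 px/frame.
max_velocity = (window_px / 4) * px_mm * frame_rate / 1000  # m/s

print(resolution_mm, max_velocity)  # 25.6 0.64
```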
2.2.2 Bubble system
The air bubbles are generated by the bubble system, which was fully integrated in the bottom of the swimming pool (see Fig. 2). The bubble system covers an area between 5 and 10 m from the starting platform, so that the FOV of the cameras is in the center of the generated bubble curtains. The bubble system consists of five parallel PVC tubes with a length of 4.5 m, placed 0.25 m apart. Each tube has a series of small conically shaped holes with a smallest diameter of 0.2 mm and a separation distance of 0.02 m along its length. The airflow is supplied by a compressor (Leonardo 201) placed in the basement of the swimming pool. The airflow through the bubble system is regulated per PVC tube with a flowmeter (Kytola EK4A) to control the number of bubbles in the measurement volume. Five homogeneous bubble curtains are produced with an air flow rate of \(\sim 0.01\,\hbox {m}^{3}\)/min per tube. The air bubbles have a diameter of approximately 4 mm (particle image diameter of 2.5 pixels). The bubbles have an average vertical spacing of 12 mm, assuming an estimated rising speed of \(\sim 0.25\,\hbox {m/s}\) [10]. Therefore, the seeding density in a thick slice above each tube corresponds to 10.6 bubbles per interrogation window, which is in accordance with the PIV guidelines [14].
2.2.3 Calibration
For the calibration, a black canvas of \(3.0 \times 1.5\,\hbox {m}^2\) with a pattern of white dots was traversed in the water, through the measurement volume above the bubble system, at specified distances (\(|Z_\mathrm{k}|\)) from the camera array ranging from 5450–6450 mm with increments of 125 mm. At each distance a record was made with the six cameras. The shifts in the camera views were established with the calibration for each camera at every \(|Z_\mathrm{k}|\) [5, 20]. Mapping functions were obtained by second order polynomial fits through the set of pixel coordinates of the grid points and corresponding world coordinates. The polynomial coefficients were found to depend linearly on Z, despite the coarseness of the calibration with manual grid positioning. Given this linear nature, interpolation of the polynomial coefficients for different Z was straightforward, and the mapping functions of any desired XY-plane in the measurement volume could be determined. More technical details and quantitative information about the calibration, its performance and the application can be found in the thesis of van Houwelingen [20].
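A minimal sketch of this two-step calibration (a NumPy stand-in for the Matlab implementation; the grid, plane positions, and the synthetic pixel-to-world map in the test are invented for illustration):

```python
import numpy as np

def design(px, py):
    """Second-order polynomial basis in pixel coordinates."""
    return np.column_stack([np.ones_like(px), px, py,
                            px**2, px * py, py**2])

def fit_plane(px, py, world_xy):
    """Least-squares pixel -> world mapping for one calibration plane
    (one set of grid-point correspondences)."""
    coeffs, *_ = np.linalg.lstsq(design(px, py), world_xy, rcond=None)
    return coeffs  # shape (6, 2): one column per world coordinate

def coeffs_at(Z, Z1, c1, Z2, c2):
    """Linear interpolation of the polynomial coefficients in Z,
    exploiting the observed linear dependence on depth."""
    t = (Z - Z1) / (Z2 - Z1)
    return (1 - t) * c1 + t * c2
```

With coefficient sets fitted at, say, \(|Z_\mathrm{k}|\) = 5450 and 6450 mm, `coeffs_at` yields the mapping of any intermediate XY-plane without a dedicated calibration record.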
2.3 Post-processing
The quality of this refocussing technique highly depends on the intensity distributions throughout the recordings of different cameras. Processing the recording is therefore an essential step to effectuate the refocussing. Based on the ideas of previous studies on SA-PIV [5, 11, 13], several preprocessing operations have been applied on the recordings of single cameras and the refocussed image. All processing was performed in Matlab R2015b.
1. The background was removed by subtracting an average background recording of about 50 frames for each camera.
2. The intensity distribution was normalized to create uniform brightness across the images, by dividing by a so-called flat-field image, an average of about 500 frames of bubble recordings.
3. Optionally, when the illumination of the measurement volume turns out to be too low, an asymptotic weighting function can be applied to improve the contrast of the bubbles.
4. With the imwarp function in Matlab, the recordings were transformed to world coordinates (mm) using the mapping functions following from the calibration. Simultaneously, the recordings were cropped to the regions where the images of the different cameras overlap.
5. To achieve equal pixel intensity distributions across the images of different cameras, the histograms of the images from different cameras were equalized using the standard Matlab function histeq.
6. The contrast of the image was improved to enhance the visibility of the bubbles with the imsharpen function in Matlab, which uses a Gaussian low-pass filter [15].
7. The refocused image was generated by summing the images of the different cameras [5].
8. To reveal the bubbles in the plane of interest, the refocused image was thresholded on pixel intensities corresponding to the mean \(+ 3\sigma\), with \(\sigma\) the standard deviation of the pixel intensity across the image. Usually, in thresholding, all pixels with an intensity lower than the threshold are replaced with zero [5]. Here, the eight neighbouring pixels of each pixel exceeding the threshold were also kept, so that the contrast distribution of the bubbles is partially preserved.
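Most of these operations are short; as a hedged NumPy sketch (the paper used Matlab), the background/flat-field steps and the final thresholding step might look like:

```python
import numpy as np

def preprocess(frame, background, flat):
    """Background subtraction followed by flat-field normalisation."""
    img = np.clip(frame - background, 0.0, None)
    return img / np.maximum(flat, 1e-6)   # avoid division by zero

def threshold_keep_neighbours(refocused, k=3.0):
    """Keep pixels above mean + k*sigma together with their eight
    neighbours, partially preserving each bubble's contrast profile."""
    thr = refocused.mean() + k * refocused.std()
    mask = refocused > thr
    grown = mask.copy()
    for dy in (-1, 0, 1):       # one-pixel 8-connected dilation;
        for dx in (-1, 0, 1):   # wrap-around at edges ignored here
            grown |= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return np.where(grown, refocused, 0.0)
```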
2.3.1 PIV
The PIV analysis in this study was applied with PIVtec software (PIVview version 3.6.2). The (final) window size was chosen \(32 \times 32\) pixels with a typical overlap of 50%. A fast Fourier transform (FFT) correlation was applied between the interrogation windows, where the highest frequency components of the resulting spectrum were removed with a Nyquist frequency filter. To reduce the noise, the correlation was repeated twice with slightly shifted correlation planes. Through multiplication, the different correlations were combined into a single signal and the random noise peaks are reduced [7]. A multi-grid interrogation method, with an initial window size of \(128 \times 128\) pixels, was applied to improve the spatial resolution of the velocity field. Image shifting was allowed to shift (deform) the windows in between the different interrogation passes according to the displacement data of the previous pass. Sub-pixel shifts were obtained using a Gaussian pixel interpolation scheme. To obtain the velocity field with sub-pixel accuracy, the correlation peak was detected using a least-square Gaussian fit [1].
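The heart of one interrogation pass, the FFT-based cross-correlation of a window pair, can be sketched as follows (integer-pixel peak only; the multi-grid passes, window deformation, correlation multiplication and sub-pixel Gaussian fit of the commercial software are deliberately left out):

```python
import numpy as np

def window_displacement(win_a, win_b):
    """Integer-pixel displacement between two interrogation windows
    via circular FFT cross-correlation (correlation theorem)."""
    fa = np.fft.rfft2(win_a - win_a.mean())
    fb = np.fft.rfft2(win_b - win_b.mean())
    corr = np.fft.irfft2(fa.conj() * fb, s=win_a.shape)
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap indices beyond N/2 to negative displacements.
    dy, dx = [p - n if p > n // 2 else p
              for p, n in zip(peak, win_a.shape)]
    return dy, dx
```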
Within the PIVtec software, the raw velocity data from the PIV analysis are subjected to an outlier detection method based on a normalized median filter to find spurious vectors [23]. A bi-linear interpolation scheme is used to calculate the replacement vectors at the location of the outliers. A Gaussian-weighted interpolation is used when several neighbouring vectors are also outliers [3]. Due to the use of an interpolation scheme, the reliability of the quantitative data is not optimal, but at this stage for this purpose it was convenient for the proof of principle of the visualisation method.
Further post-processing and analysis were performed in Matlab. The rising speed can be assumed equal for all bubbles, because they are of similar size. Based on the PIV results, the mean rising speed of the bubbles was found to be \(0.33 \pm 0.03\,\hbox {m/s}\), determined by averaging all velocity vectors resulting from the PIV analysis on that plane. This mean rising speed was subtracted to obtain a better view of the flow velocity induced by the swimmer. Vorticity is a key quantity for understanding propulsion. Vorticity can be computed from a measured velocity field through differentiation, which can produce large errors. The presence of vortical structures can be clearly visualized by means of the vorticity component perpendicular to the plane of visualisation. This vorticity component \(\omega _z\) is defined as:
$$\begin{aligned} \omega _z = ({\nabla } \times {\mathbf {v}})_z = \frac{\partial v_y}{\partial x} - \frac{\partial v_x}{\partial y}, \end{aligned}$$
with \({\mathbf {v}}\) the velocity vector with \(v_x\) and \(v_y\) the velocity components in the x and y direction. Rather than calculating the vorticity by means of differentiating the velocity, the vorticity was obtained by integrating the velocity around an enclosed area following Stokes' theorem [14]:
$$\begin{aligned} \omega _{i,j} = \frac{\varGamma _{i,j}}{4\varDelta X \varDelta Y}, \end{aligned}$$
with \(\varDelta X \varDelta Y\) the area of a grid cell and \(\varGamma _{i,j}\) the circulation in point (i, j) estimated by:
$$\begin{aligned} \varGamma _{i,j}= \frac{1}{2} \varDelta X \left( U_{i-1,j-1} + 2U_{i,j-1} + U_{i+1,j-1}\right) \\ + \frac{1}{2} \varDelta Y \left( V_{i+1,j-1} + 2V_{i+1,j} + V_{i+1,j+1}\right) \\ - \frac{1}{2} \varDelta X \left( U_{i+1,j+1} + 2U_{i,j+1} + U_{i-1,j+1}\right)\\- \frac{1}{2} \varDelta Y \left( V_{i-1,j+1} + 2V_{i-1,j} + V_{i-1,j-1}\right) ,\end{aligned}$$
with U, V the velocity components coinciding with the contour around point (i, j). This method has the advantage of being less sensitive to errors compared to differentiation, because it uses velocity information from eight vectors [14]. The uncertainty of eight velocities contributes to the uncertainty in the vorticity, with the gain in accuracy depending on the correlation properties of the velocity field. Below, we will estimate this uncertainty and argue that the measured vorticity structures are well above the noise.
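As a check on this formula, Eq. (2) can be coded up directly and verified on solid-body rotation, for which the exact vorticity is twice the rotation rate (a Python sketch, not the paper's Matlab code):

```python
import numpy as np

def vorticity(U, V, dX, dY):
    """Vorticity at interior grid points from the circulation around
    an eight-point contour (Eq. 2), which averages over eight velocity
    vectors and is therefore less noise-sensitive than differencing."""
    w = np.zeros_like(U)
    for i in range(1, U.shape[0] - 1):      # i: x index
        for j in range(1, U.shape[1] - 1):  # j: y index
            G = (0.5 * dX * (U[i-1, j-1] + 2*U[i, j-1] + U[i+1, j-1])
               + 0.5 * dY * (V[i+1, j-1] + 2*V[i+1, j] + V[i+1, j+1])
               - 0.5 * dX * (U[i+1, j+1] + 2*U[i, j+1] + U[i-1, j+1])
               - 0.5 * dY * (V[i-1, j+1] + 2*V[i-1, j] + V[i-1, j-1]))
            w[i, j] = G / (4 * dX * dY)
    return w

# Solid-body rotation u = (-Omega*y, Omega*x) has vorticity 2*Omega.
Omega, d = 1.5, 0.025
coords = np.arange(8) * d
X, Y = np.meshgrid(coords, coords, indexing="ij")
w = vorticity(-Omega * Y, Omega * X, d, d)
# Interior values all equal 2*Omega up to round-off.
```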
3 Results

An attempt has been made to visualize and analyse the flow around a swimmer performing kicks. One experienced swimmer volunteered to participate in this study and has given written informed consent. The ethical officer of the Eindhoven University of Technology approved the design of this study. The swimmer was instructed to swim different styles of kicks through the center of the measurement volume (\(|Z| = 5950\,\hbox {mm}\)) at a depth of approximately 1 m, see video in the online supplementary material. The vortices produced during the kicks could be visualized using PIV. We expect to observe cross-sections of vortex rings (a patch of positive and negative vorticity) shed at the end of each kick [6, 8, 9, 21, 22]. Typical vorticity plots of these trials (slow, fast, low frequency + large amplitude and focus on powerful up-kick) are shown in Fig. 3a–d.
Vorticity plots at \(|Z| = 5950\,\hbox {mm}\) of four different styles of kicks: a slow, b fast, c low frequency + large amplitude, d focus on powerful up-kick. The swimmer is moving from left to right. Just the flow in the wake at the back of the swimmer is shown, since the swimmer disturbs the PIV analysis. Blue colors indicate clockwise motion, red colors indicate counter-clockwise motion. The vortical structures present in the flow are visualized by their cross-sections with the measurement plane and are indicated by the dashed boxes, they slowly move downward. The dashed line indicates the height at which the swimmer moved approximately
Repetitive vortical structures, created after each down-kick, are present in the vorticity plots. Due to their self-induced motion, the vortices move downward and appear along an inclined path.
The velocity field (vector plot) and the vorticity component perpendicular to the plane of visualisation (color map), in, respectively, (a) and (b), of a typical coherent vortical structure found in the kick trials. From the vector field the mean rising speed of the bubbles is subtracted
In Fig. 4, we show the velocity field and the vorticity component perpendicular to this plane for a representative vortical structure. In particular, the jet produced by the vortex ring is clearly visible in the velocity vector field. The radius R of the vortex rings produced during the kicks is approximately 0.1 m (see Figs. 3 and 4). Because of its finite size, the vortex ring in Fig. 4 is not dominantly present in the other planes of interest (\(|Z| = 5450, 5700, 6200, 6450\,\hbox {mm}\)).
Since vorticity \(\omega\) was determined from velocity differences over small distances, a point of concern is its uncertainty \(\sigma _{\omega }\). This uncertainty depends on the random fluctuations \(\sigma _u\) and \(\sigma _v\) of the measured velocities, and on their correlation properties, in a way that has been documented in [16]. Their formula can be generalized readily to our averaged vorticity (Eq. 2), while \(\sigma _u\), \(\sigma _v\) and their correlation can be measured from the background velocity field. This field was not constant, but fluctuated due to fluctuations of the rising bubble velocity. With values \(\sigma _u \cong \sigma _v \cong 0.1\) m/s in the experiments we find \(\sigma _{\omega } \cong 1.3\) s\(^{-1}\). This is a crude estimate, as \(\sigma _u\) and \(\sigma _v\) are not the same everywhere. Even with \(\sigma _{\omega } = 3\) s\(^{-1}\), the vortical structures which we associate with kicks are well above the background.
The impulse \({\mathbf {I}}\) of the vortex ring can be estimated by:
$$\begin{aligned} {\mathbf {I}} = \rho \varGamma \pi R^2 {\mathbf {e}}, \end{aligned}$$
where \(\varGamma\) is the circulation of the vortex ring (based on one of the patches in a 2D cross-section), R is the vortex ring radius measured from center to core and \({\mathbf {e}}\) is the unit vector of the impulse in the axial direction normal to the plane of the vortex [13]. The impulse of the vortex ring in Fig. 4 is approximately \(10.1\,\hbox {kg}\,\hbox {m/s}\) (\(\varGamma \sim 0.32\,\hbox {m}^{2}/\hbox {s}\), \(R \sim 0.1\,\hbox {m}\)). The accuracy of these values depends on the interpolation scheme used to obtain the velocity field. Nevertheless, it is a reasonable estimate of the generated impulse.
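The quoted impulse follows directly from this estimate (taking the density of pool water as 1000 kg/m³, an assumption not stated in the text):

```python
import math

rho = 1000.0   # density of pool water, kg/m^3 (assumed)
Gamma = 0.32   # circulation of one vorticity patch, m^2/s
R = 0.1        # vortex-ring radius, centre to core, m

impulse = rho * Gamma * math.pi * R**2
print(round(impulse, 1))  # 10.1 kg m/s, matching the value in the text
```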
4 Discussion
The appearance of the vortical structures in Fig. 3 is confirmed by the fact that they exist for a long period of time (visible in a series of frames) and appear at the positions expected from the raw records (see supplementary video material for the raw video records). Moreover, the typical vorticity signature, consisting of a patch of clockwise and a patch of counter-clockwise rotation, resembles an intersection of a vortex ring. This is in agreement with previous observations in the literature [6, 8, 9, 21, 22]. Unfortunately, the technique is not yet applicable to all strokes. The vortices in the 'high frequency + small amplitude' and the 'focus on powerful down-kick' trials did not appear in the PIV analysis, most probably because the vorticity is weaker, smaller or less coherent in those cases.
It is peculiar that in most cases only the vortices after the down (extension) kick could be made visible. It seems that the vortices produced after the up-kick are weaker, smaller or less coherent, and are not captured by the PIV analysis. From the literature it is known that significantly more thrust is produced during the extension kick. During the up (flexion) kick, smaller and less coherent vortices are produced [6, 22], which is in agreement with the results in this study and moreover explains the poorer visibility. This difference occurs due to a joint asymmetry within the human body, and a larger projected frontal area during the flexion kick [6, 22]. Another reason can be found in the combination of a weak secondary circulation induced by the bubble curtains directed upward (\(\sim 0.04\) to 0.10 m/s), and the self-induced motion of a vortex ring (\(|{\mathbf {v}}| \sim 0.18\,\hbox {m/s}\) for \(R \sim 0.1\,\hbox {m}\), \(\varGamma \sim 0.32\,\hbox {m}^2/\hbox {s}\) and \(\sigma = 0.05\,\hbox {m}\) [4]). The self-induced motion is also directed nearly straight upward for a possible vortex ring after the up-kick. Hence, these structures disappear from the field of view more rapidly.
Based on the results of this study, it can be concluded that SA-PIV with bubbles in a regular swimming pool works properly to visualize flow structures within a limited range of size and vorticity. The measurement environment unfortunately does not yet allow accurate, repeatable validation experiments to define these limits precisely. However, some additional quantitative and qualitative experiments have been conducted to test the performance of the system [20].
Regarding the accuracy in these experiments, most difficulties arise at areas where noise originates from an inhomogeneous bubble distribution, and not directly from additional bubble dynamics. The bubble curtains show some large-scale collective motion, leading to empty areas in planes of interest. To improve the system, it is suggested to supply the bubbles by means of a large perforated plate (2D) rather than perforated tubes (1D), to achieve a more equal bubble distribution throughout the complete volume. The art of the refocussing technique lies in finding a good balance between filtering out-of-plane bubbles and retaining in-plane bubbles for the PIV analysis. The addition of homogeneous illumination of the measurement volume in such a way that the bubbles appear with a high intensity in all cameras can improve the results. Moreover, it would be interesting to test the implementation of pattern recognition for optimizing the algorithm, since the hexagon formation of the cameras is apparent in the refocussed images of the bubbles.
At this moment, just the flow in the wake behind the swimmer can be visualized, because the swimmer distorts the PIV analysis too much. In the future, it will be valuable to develop a dynamic mask suitable for these experiments to distinguish the flow features induced by the swimmer and the distortions originating from the swimmer's body (which currently cannot be left out of the PIV analysis) [13]. With that, the flow closer to the body of the swimmer can be captured as well, and the ability of resolving smaller coherent flow structures produced by the swimmer can be tested.
5 Conclusion

The objective of this study was to design, build and test a prototype system to visualize the flow around a swimmer in practice with the help of SA-PIV. The proof of principle is shown for a swimmer performing different styles of kicks. The expected vortex rings after each down-kick could be visualized in most cases. However, the quantitative results must be interpreted carefully at this stage, because inhomogeneity of the bubble distribution compromised the particle image velocimetry. In future research, the system must be further optimized to increase the range of flow structures that can be visualized.
Compliance with ethical standards
The authors declare that there are no conflicts of interest.
Supplementary material 1 (mp4 99196 KB) Online Resource 1: A raw video record of the swimmer performing different styles of kick through the bubbles
PIVTEC GmbH (2006) PIVview user manual (version 2.4)
Adrian RJ, Westerweel J (2011) Particle image velocimetry. Cambridge University Press, Cambridge
Agüí JC, Jimenez J (1987) On the performance of particle tracking. J Fluid Mech 185:447–468
Batchelor GK (2007) An introduction to fluid dynamics. Cambridge University Press, Cambridge
Belden J, Truscott TT, Axiak MC, Techet AH (2010) Three-dimensional synthetic aperture particle image velocimetry. Meas Sci Technol 21(12):125403
Cohen RCZ, Cleary PW, Mason BR (2012) Simulations of dolphin kick swimming using smoothed particle hydrodynamics. Hum Mov Sci 31:604–619
Hart DP (2000) PIV error correction. Exp Fluids 29:13–22
Hochstein S, Blickhan R (2011) Vortex re-capturing and kinematics in human underwater undulatory swimming. Hum Mov Sci 30:998–1007
Hochstein S, Pacholak S, Brücker C, Blickhan R (2012) Experimental and numerical investigation of the unsteady flow around a human underwater undulating swimmer. In: Tropea C, Bleckmann H (eds) Nature-inspired fluid mechanics. Notes on numerical fluid mechanics and multidisciplinary design, vol 119. Springer, Berlin, Heidelberg, pp 293–308
Kulkarni AA, Joshi JB (2005) Bubble formation and bubble rise velocity in gas-liquid systems: a review. Ind Eng Chem Res 44(16):5873–5931
Langley KR, Hardester E, Thomson SL, Truscott TT (2014) Three-dimensional flow measurements on flapping wings using synthetic aperture PIV. Exp Fluids 55(10):1831
Matsuuchi K, Miwa T, Nomura T, Sakakibara J, Shintani H, Ungerechts BE (2009) Unsteady flow field around a human hand and propulsive force in swimming. J Biomech 42(1):42–47
Mendelson L, Techet AH (2015) Quantitative wake analysis of a freely swimming fish using 3D synthetic aperture PIV. Exp Fluids 56(7):135
Raffel M, Willert CE, Wereley ST, Kompenhans J (2007) Particle image velocimetry: a practical guide. Springer, Berlin
Russ JC (2011) The image processing handbook, 6th edn. CRC Press, Boca Raton
Sciacchitano A, Wieneke B (2016) PIV uncertainty propagation. Meas Sci Technol 27(8):084006
Takagi H, Nakashima M, Ozaki T, Matsuuchi K (2013) Unsteady hydrodynamic forces acting on a robotic hand and its flow field. J Biomech 46(11):1825–1832
Takagi H, Nakashima M, Ozaki T, Matsuuchi K (2014) Unsteady hydrodynamic forces acting on a robotic arm and its flow field: application to the crawl stroke. J Biomech 47(6):1401–1408
Takagi H, Shimada S, Miwa T, Kudo S, Sanders R, Matsuuchi K (2014) Unsteady hydrodynamic forces acting on a hand and its flow field during sculling motion. Hum Mov Sci 38:133–142
van Houwelingen J (2018) In the swim: optimizing propulsion in human swimming. PhD thesis, Eindhoven University of Technology
von Loebbecke A, Mittal R, Fish F, Mark R (2009) Propulsive efficiency of the underwater dolphin kick in humans. J Biomech Eng 131:054504
von Loebbecke A, Mittal R, Mark R, Hahn J (2009) A computational method for analysis of underwater dolphin kick hydrodynamics in human swimming. Sports Biomech 8(1):60–77
Westerweel J, Scarano F (2005) Universal outlier detection for PIV data. Exp Fluids 39(6):1096–1100
© The Author(s) 2019
1. Department of Applied Physics and J.M. Burgers Centre for Fluid Dynamics, Eindhoven University of Technology, Eindhoven, The Netherlands
van Houwelingen, J., Kunnen, R.P.J., van de Water, W. et al. Sports Eng (2019) 22: 13. https://doi.org/10.1007/s12283-019-0306-5
First online: 22 June 2019. Publisher: Springer London.
Increasing or Decreasing Subsequences of Finite Sequences
Recall from The Generalized Pigeonhole Principle page that if $p_1, p_2, ..., p_n$ are positive integers and if $A$ is a finite set that has $p_1 + p_2 + ... + p_n - n + 1$ elements and if $A_1, A_2, ..., A_n$ are subsets of $A$ that form a partition of $A$ then for some $i \in \{1, 2, ..., n \}$ we have that $\lvert A_i \rvert \geq p_i$.
We will now look at a particularly nice application of the generalized pigeonhole principle. We will first need to look at a couple of definitions though, both of which you may be familiar with already.
Definition: A Finite Sequence of $n$ real numbers $a_i \in \mathbb{R}$ is an ordered list of $n$ terms $(a_i)_{i=1}^{n} = (a_1, a_2, ..., a_n)$.
Infinite sequences can be defined analogously.
Definition: A sequence $(a_i)_{i=1}^{n}$ is said to be Increasing if $a_j \leq a_{j+1}$ for each $j = 1, 2, ..., n-1$. A sequence $(a_i)_{i=1}^{n}$ is said to be Decreasing if $a_j \geq a_{j+1}$ for each $j = 1, 2, ..., n- 1$.
For example, the finite sequence of positive integers from $1$ to $10$ is:
\begin{align} \quad (a_i)_{i=1}^{n} = (1, 2, 3, 4, 5, 6, 7, 8, 9, 10) \end{align}
Clearly each term of the sequence is greater than or equal to the previous term, so this sequence is increasing.
Definition: A Subsequence $(a_{i_k})_{k=1}^{m}$ of a finite sequence $(a_i)_{i=1}^{n}$ is an ordered sequence of $m$ terms $(a_{i_k})_{k=1}^{m} = (a_{i_1}, a_{i_2}, ..., a_{i_m})$ where $i_1 < i_2 < ... < i_m \leq n$.
By definition every subsequence is itself a sequence.
For example, consider the finite sequence $(a_i)_{i=1}^{n}$ described above. Then the finite sequence of even integers from $1$ to $10$ is a subsequence of $(a_i)_{i=1}^{n}$ and is given by:
\begin{align} \quad (a_{i_k})_{k=1}^{5} = (2, 4, 6, 8, 10) \end{align}
Note that $i_1 = 2$, $i_2 = 4$, $i_3 = 6$, $i_4 = 8$, and $i_5 = 10$.
Now that we have defined a subsequence of a finite sequence, we will now look at an example to motivate the theorem coming up. Consider the following sequence of $10$ terms:
\begin{align} \quad (4, 2, 9, 5, 1, 3, 5, 4, 6, 10) \end{align}
Now suppose we want to determine the largest number $m$ with $1 \leq m \leq n$ such that the sequence is guaranteed to contain a subsequence of $m$ elements that is either increasing or decreasing. The following theorem answers this question when the number of terms in the parent sequence is of the form $n^2 + 1$ for $n \in \{ 0, 1, 2, ... \}$.
Theorem 1: For $n \in \{0, 1, 2, ... \}$, every finite sequence of $n^2 + 1$ terms $(a_1, a_2, ..., a_{n^2 +1})$ contains a subsequence of $n + 1$ terms that is either increasing or decreasing.
Proof: Let $(a_i)_{i=1}^{n^2+1} = (a_1, a_2, ..., a_{n^2+1})$ be a finite sequence of $n^2 + 1$ numbers. For each $k \in \{ 1, 2, ..., n^2 + 1 \}$ let $s_k$ equal the maximum length of an increasing subsequence of $(a_i)_{i=1}^{n^2+1}$ starting at $a_k$.
We will carry out the proof by contradiction. Assume that $(a_i)_{i=1}^{n^2+1}$ has no increasing subsequence of length greater than or equal to $n+1$ and no decreasing subsequence of length greater than or equal to $n+1$. Since $s_k$ equals the maximum length of an increasing subsequence of $(a_i)_{i=1}^{n^2+1}$ starting at $a_k$, we therefore have that for all $k \in \{1, 2, ..., n^2 + 1 \}$, $1 \leq s_k \leq n$.
Consider the numbers $s_1, s_2, ..., s_{n^2+1}$. There are $n^2 + 1$ numbers in this list, and for each $k \in \{1, 2, ..., n^2 + 1\}$ we have that $s_k$ is one of $1$, $2$, …, $n$. We will now apply The Generalized Pigeonhole Principle.
We have $n^2 + 1$ numbers $s_k$, each of which is one of $n$ values (from $1$ to $n$). Recall that if $n(r - 1) + 1$ objects are put into $n$ boxes then at least one box contains $r$ objects. Here we have $r - 1 = n$, which implies that $r = n + 1$, so for the $n^2 + 1$ numbers $s_k$ put into $n$ groups, one group contains $n + 1$ or more of the $s_k$'s.
Hence there exist indices $1 \leq k_1 < k_2 < ... < k_n < k_{n+1} \leq n^2 + 1$ such that:
\begin{align} \quad s_{k_1} = s_{k_2} = ... = s_{k_n} = s_{k_{n+1}} \end{align}
Suppose that there exists an $i \in \{1, 2, ..., n \}$ such that $a_{k_i} < a_{k_{i+1}}$. Consider a longest increasing subsequence starting at $a_{k_{i+1}}$; it has length $s_{k_{i+1}}$. Since $a_{k_i} < a_{k_{i+1}}$, we may place the element $a_{k_i}$ in front of this subsequence, obtaining an increasing subsequence starting at $a_{k_i}$ of length $s_{k_{i+1}} + 1$. But $s_{k_{i+1}} = s_{k_i}$, so this is an increasing subsequence starting at $a_{k_i}$ of length $s_{k_i} + 1$, contradicting the fact that $s_{k_i}$ is the length of a longest increasing subsequence starting at $a_{k_i}$. Thus no such $i$ exists, and instead for each $i \in \{1, 2, ..., n \}$ we have:
\begin{align} \quad a_{k_i} \geq a_{k_{i+1}} \end{align}
Chaining these inequalities together gives:
\begin{align} \quad a_{k_1} \geq a_{k_2} \geq ... \geq a_{k_n} \geq a_{k_{n+1}} \end{align}
This is a decreasing subsequence of $(a_i)_{i=1}^{n^2+1}$ of length $n + 1$, which contradicts our assumption that no such subsequence exists. Hence every finite sequence of $n^2 + 1$ terms contains either an increasing or a decreasing subsequence of $n + 1$ terms. $\blacksquare$
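As a quick numerical check of the theorem (an illustration of ours, not part of the original page), the following Python sketch computes the $s_k$ values from the proof for the example sequence of $10 = 3^2 + 1$ terms given earlier, once for the increasing (non-strict) case and once for the decreasing case:

```python
def longest_monotone(a, cmp):
    # s[k] = length of a longest subsequence starting at a[k] whose
    # consecutive terms satisfy cmp (e.g. non-decreasing order).
    n = len(a)
    s = [1] * n
    for k in range(n - 2, -1, -1):
        for j in range(k + 1, n):
            if cmp(a[k], a[j]):
                s[k] = max(s[k], 1 + s[j])
    return max(s)

a = [4, 2, 9, 5, 1, 3, 5, 4, 6, 10]  # n^2 + 1 terms with n = 3
inc = longest_monotone(a, lambda x, y: x <= y)  # longest increasing subsequence
dec = longest_monotone(a, lambda x, y: x >= y)  # longest decreasing subsequence
print(inc, dec)  # the theorem guarantees max(inc, dec) >= n + 1 = 4
```

Here the longest increasing subsequence (e.g. $2, 5, 5, 6, 10$) has length 5, comfortably above the guaranteed $n + 1 = 4$.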
Household level tree planting and its implication for environmental conservation in the Beressa Watershed of Ethiopia
Tesfa Worku, S. K. Tripathi & Deepak Khare
As in other developing countries, 85% of the Ethiopian population lives in the rural part of the country, and more than 90% of its domestic energy comes from traditional biofuel. Population growth is creating more demand for human use and more pressure on natural resources. This works against the expansion of multi-purpose and indigenous tree plantations and agro-forestry practice, and hence is locked in a vicious circle with food security. However, following the start of community-based watershed management practice, households have been encouraged to plant trees on their private land, which has contributed to an increase in forest coverage. Therefore, the objective of this study was to assess household-level tree planting and domestic energy consumption, and to explore the implications for environmental conservation.
Fuelwood and dung were the major sources of domestic energy in the area, consumed at an average of 2280 and 1533 kg/year per household respectively; total biofuel consumption was 268.06 t/year. The decline in natural forest and the increase in demand for wood motivated people to plant trees privately. Although it varied with the socio-economic characteristics of farmers, tree planting was encouraged based on ground reality. Therefore, promoting private tree plantation should be considered both as economic relief and as a way of filling the fuelwood demand gap, and likewise of recovering the opportunity cost of dung that could otherwise serve as a soil conditioner. The use of fuel-saving stoves and other alternative sources of energy should be encouraged.
A locally tailored policy option favoring the allocation of bare land and mountainous terrain for community and private tree planting by landless and smallholder farmers should be encouraged.
Conversion of forest for human use is one of the most serious challenges facing the planet, resulting in an alarming increase in climate change and environmental degradation. Rural dwellers depending on subsistence agriculture are highly susceptible to natural resource degradation and climate change (Slingo et al. 2005; Meijer et al. 2015). In developing countries, forest resources and biodiversity are declining. To a large extent, this has resulted from increasing human population, as intensive agriculture puts growing pressure on natural resources (FAO 2007; Ndayambaje et al. 2012; Meijer et al. 2015). Between 2000 and 2005, developing countries, especially in Africa, lost more than 4 Mha of natural forest annually (FAO 2007; Ndayambaje et al. 2012). Forest degradation has in turn caused scarcity of fuelwood and other substantial adverse consequences, such as deterioration of watershed functions, loss of biodiversity, release of carbon dioxide into the atmosphere, and intensive soil erosion (Jain and Singh 1999; Heltberg et al. 2000; Pandey 2002).
According to the United Nations Economic Commission for Africa (UNECA 2004; Bewket 2005; Gebreegziabher et al. 2010), economic growth, self-sufficiency and poverty alleviation are limited in developing countries, especially in sub-Saharan Africa, by insufficient provision of energy services. A majority of the population depends on firewood and livestock dung as sources of energy, thus accelerating the problems of environmental and land resource degradation (Meshesha et al. 2016). Fuelwood gathered from forests is a common phenomenon and an important source of domestic energy in rural areas across the globe. More than half of the world's population cooks with traditional biofuel, which provides around 35% of domestic energy supplies in developing countries (Heltberg et al. 2000). According to Pandey's assessment in 2002, in most rural parts of India the dominant sources of household energy remain fuelwood, crop residue and dung cake, although the shares vary considerably, depending largely on availability and on the cost in terms of the time required for collection.
As in other developing countries, in both urban and rural parts of Ethiopia traditional biofuel is the main source of energy (Miller 1986; Bewket 2005; Abreham and Belay 2015). For instance, in urban areas the largest share of domestic energy comes from fuelwood (55.4%); the remaining 44.6% is covered by charcoal, cattle dung and agricultural crop residue (9.3, 8.4 and 6.8% respectively) together with modern sources. The modern sources of household energy consumption, kerosene (15.35%), diesel (0.28%) and electricity (4.75%), account for 20.38% of total domestic energy consumption. In rural areas, where the majority of the population lives, more than 99% of domestic energy comes from traditional biofuel: fuelwood holds more than 80%, followed by cattle dung and agricultural residue (9.25 and 8.31% respectively) (Bewket 2005). According to EREDPC/MoRD (2002), fuelwood with charcoal and animal dung with crop residues comprise 83 and 16% respectively, while electricity and petrol account for 1% of total household domestic energy consumption. In general, 99.9% of the total domestic energy consumed by rural households originates from biomass fuels.
Like agricultural land expansion, the growing demand for fuelwood is a serious cause of the high rate of deforestation in Ethiopia (Worku and Tripathi 2016). Increases in the community's traditional fuelwood consumption, coupled with inefficient utilization of the available resource, have put more strain on natural resources (Arrow et al. 2004; Godfray et al. 2010; Lin et al. 2011; Mislimshoeva et al. 2014).
The share of natural forest coverage in Ethiopia has been decreasing at an alarming rate, from 40% of the land area (around 50 Mha) just before the turn of the last century to 3.6% by the early 1980s (Cheng et al. 1998). The clearance continued, and by the early 1990s much of the cover was destroyed, with only 2.3 Mha of forest cover remaining (EFAP 1993; Bewket 2005).
According to WRI (1990), the rate of reforestation was only 13,000 ha/year, whereas the rate of deforestation was 88,000 ha/year. Another report indicated a deforestation rate of 150,000–200,000 ha/year. Davidson (1988) and Wood (1990) predicted that if deforestation continued at a rate of 100,000 ha/year, all the highland parts of Ethiopia would be cleared by the year 2020.
The rapid loss of forest land has raised the concern of local, national and international communities. Many local communities now work harder to collect firewood and construction materials. In some villages women spend 6 h, walking 10 km each way, to collect wood (Cheng et al. 1998). Similarly, Heltberg et al. (2000) report that, to collect fuelwood from the forest in India during the winter season, women make 1–8 journeys per week, head-loading the wood back to their houses, with each trip taking 1.5 to 6 h. In general, each household, especially in rural parts of the country where domestic energy relies on traditional fuel sources, spends between 34 and 504 h per year on collection, with a mean of 190 h.
This is obviously a result of deforestation and de-vegetation. The fuelwood shortage has serious consequences, as households are forced to replace wood with agricultural crop residue and livestock dung that would otherwise be used to increase soil fertility, which significantly affects agricultural crop production.
According to Heltberg et al. (2000), in many parts of Asia and Africa animal dung is used for household energy consumption. But dung is also a source of manure, and using it for fuel can have a substantial negative effect on soil fertility. More than 97% of the overall food supply for the world's population originates from natural resources, yet land degradation and soil erosion are striking problems at the global level (Munro et al. 2008; Mekuria et al. 2009). In the 1980s, 1–1.5 million tons of agricultural crop output was lost per year through the use of livestock dung and agricultural crop residue as fuel instead of as soil amendments. The same adverse effect challenges agricultural productivity in Ethiopia. Inappropriate natural resource use and policies incompatible with the local socio-economic context of the community have been pressing causes of resource degradation (Munro et al. 2008; Mekuria et al. 2009). Serious land degradation has reduced crop productivity, leading to demand for additional food aid (Newcombe 1987; Alemu 1999). Similarly, Araya and Edwards (2006) estimated that 600,000 tons of agricultural crop production are lost per year because livestock dung is burnt rather than used to maintain soil fertility.
With population growth there has been more demand for domestic energy, and hence increased de-vegetation and depletion of soil. Taking this into consideration, development of the energy supply for both rural and urban areas of the country should be given priority. Even though the literature has identified a high rate of deforestation in the country (Davidson 1988; Aklog 1990; WRI 1990; EFAP 1993), only limited studies have explored the impact of household-level tree planting on soil properties (Meaza and Gebresamuel 2013), rural household fuel production and consumption (Alemu 1999), or energy, growth and environmental interactions (Gebreegziabher et al. 2010). Few studies have explored household-level tree planting and its impact on environmental conservation. Therefore, the objective of this paper was to assess household-level tree planting, energy consumption patterns and sources, and their implications for environmental conservation.
Research site
The study watershed lies at 39°37′E and 9°41′N. In administrative terms, it is located in Basona Woreda (District), North Showa zone of Amhara regional state. Situated northeast of the capital city, Addis Ababa, the watershed forms part of the central highlands of Ethiopia, within the Abay basin (Fig. 1).
Location of the study area
The watershed is characterized by diverse topographic conditions, such as mountainous and dissected terrain with steep slopes. The elevation ranges from 2747 to 3674 m a.s.l. The annual average temperature of the area is 19.7 °C; the annual maximum rainfall is 1083.3 mm, with a minimum recorded amount of 698.5 mm. The most common types of soil are Cambisols (locally called Abolse), Vertisols (Merere), Andosols, Fluvisols and Regosols. Mixed crop–livestock farming is the farming system of the study area. Barley, wheat, horse beans, field peas, lentils and chickpeas are the most commonly grown crops in the area. Cattle and sheep are the dominant types of livestock, but goats, horses and chickens are also common. Farming is rain-fed, and farmers are always worried about the duration and intensity of rainfall.
Data and methods
Sample sizes in comparable Ethiopian studies guided our sampling choice. Biratu and Asmamaw (2016) selected 5% (101) of 2020 household heads using stratified sampling techniques. Likewise, Moges and Holden (2007) took 5% of farm households in southern Ethiopia for a survey study using simple random sampling. Bewket (2005), for a survey in the northern highlands with 3670 total households, took 133 (~4%) samples based on systematic random sampling. Gebre et al. (2013), from a total household population of similar size to ours, took 92 samples using a random sampling procedure. Ayele (2009) employed proportionate sampling, selecting 150 sample households from a total population of 10,094.
The data for this study were therefore obtained from a structured household survey conducted from May to August 2015. The procedure was as follows: 92 sample households were selected randomly while different soil and water conservation works were being implemented. A structured questionnaire was first prepared and pre-tested for quantitative information. Interviews were conducted at the watershed members' homesteads and during community watershed management practice. Additional information was obtained through focus group discussions, key informant interviews, field observation and informal discussion during community watershed management practices.
The most important and dominant sources of biofuel in the study area are fuelwood and cattle dung. The survey therefore included questions about the quantity of biofuel used, the source of biofuel, the level of tree planting, the distance traveled to collect biofuel, and farmers' responses to biofuel shortage. It is difficult to ask directly for the weight of fuelwood and dung consumed, so interviewees were instead asked for the number of bundles of wood and baskets of dung consumed per week. The size of a bundle of wood varies with the person carrying it, and the size of a basket of dung varies with the size and stacking pattern of the dung cakes. The researchers tried to determine the mean weight of dry wood and cattle dung; following Bewket (2005) and the present study, the average weight of a bundle of wood was taken as 12.5 kg and a basket of dry dung as 6 kg. The survey also collected socio-economic data on household size, size of landholding, income earned from crop production, off-farm income, cattle ownership, and sale of wood and trees. After all the pertinent information was gathered, a correlation matrix, descriptive statistics and least significant difference (LSD) tests were employed using SPSS software version 23.
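To make the unit conversion concrete, the sketch below (our own illustration; the weekly counts are hypothetical) converts weekly bundle and basket counts to annual kilograms using the average weights just stated:

```python
WOOD_KG_PER_BUNDLE = 12.5  # average weight of a bundle of wood (kg)
DUNG_KG_PER_BASKET = 6.0   # average weight of a basket of dry dung (kg)
WEEKS_PER_YEAR = 52

def annual_kg(units_per_week, kg_per_unit):
    """Convert a weekly count of bundles/baskets to kg consumed per year."""
    return units_per_week * kg_per_unit * WEEKS_PER_YEAR

# e.g. a household reporting 3.5 bundles of wood and 5 baskets of dung per week
wood = annual_kg(3.5, WOOD_KG_PER_BUNDLE)  # 2275.0 kg/year, near the 2280 kg/year average
dung = annual_kg(5, DUNG_KG_PER_BASKET)    # 1560.0 kg/year, near the 1533 kg/year average
print(wood, dung)
```

The illustrative counts were chosen so the results land near the averages reported below, showing how the survey responses map onto the annual figures.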
In addition to the above statistics, an econometric model (a probit model) was applied in this study to analyze households' decisions to plant trees, the number of trees planted, and biofuel consumption. Households that do not plant trees are included in the model. The probit model is useful for distinguishing the determinants of the household-level decision to plant trees, the number of trees planted, and domestic energy consumption. The model is described as follows:
$$ y_{i}^{*} = x^{\prime}_{i} \beta + \varepsilon_{i}, \quad \varepsilon_{i} \sim NID\,(0,1) $$
where $y_i^*$ is unobserved and is referred to as a latent variable.
Therefore, individual $i$ chooses to participate in planting trees at the household and community level when the net benefit of planting over not planting surpasses a certain threshold, zero in this case; thus $y_i = 1$ if and only if $y_i^* > 0$, and $y_i = 0$ if $y_i^* \leq 0$. The probability of observing $y_i = 1$ therefore depends on the value of $x$:
$$\begin{aligned} { \Pr }\left( { y_{i} = 1 \mid x_{i} } \right) & = \Pr \left( {y_{i}^{*} > 0} \right) = \Pr \left( {\varepsilon_{i} > - x^{\prime}_{i} \beta } \right) \\ & = 1 - F\left( { - x^{\prime}_{i}\beta } \right) = F(x^{\prime}_{i}\beta )\end{aligned} $$
where \( F \) is the cumulative distribution function associated with the assumed distribution of the error term (the standard normal in the probit case).
According to Wooldridge (2002), maximum likelihood is used to estimate $\beta$. However, the magnitude of $\beta$ is not directly meaningful except in special cases, so for both continuous and discrete explanatory variables it is important to know how to interpret $\beta$. Once $\beta$ is estimated, the effect of a marginal change in the $j$th variable in $X$, $X_j$, is described as:
$$ \frac{{\partial { \Pr }(Y_{i} = 1)}}{{\partial X_{ij} }} = f(X^{\prime}_{i} \beta )\beta_{j} $$
The value of the marginal effect therefore depends on the value of $X_i$ at which it is evaluated; here $f = F'$ is the corresponding density, and the sample mean of $X_i$ is used to obtain $f(X'_i\beta)$. Finally, the effect of variable $X_j$ on the willingness of households to plant trees at the household and community level is given by the magnitude and sign of the marginal effect.
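As a minimal numerical sketch of these formulas (our own illustration; the coefficients and covariates below are made up, not the study's estimates), the standard normal CDF $\Phi$ gives the predicted probability $F(x'\beta)$ and the density $\phi$ times $\beta_j$ gives the marginal effect:

```python
import math

def phi(z):
    """Standard normal density (the f = F' in the marginal-effect formula)."""
    return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)

def Phi(z):
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Hypothetical coefficients and one household's covariates (with a constant term)
beta = [-0.5, 0.8, 0.3]  # intercept, household size, landholding (illustrative)
x = [1.0, 1.2, 0.5]

xb = sum(b * v for b, v in zip(beta, x))       # x'beta = 0.61
p_plant = Phi(xb)                              # Pr(y = 1 | x) = F(x'beta) ~ 0.729
marginal = [phi(xb) * b for b in beta]         # dPr/dX_j = f(x'beta) * beta_j
print(round(p_plant, 3))
```

Note how each marginal effect inherits the sign of its coefficient, exactly as stated in the text.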
Biofuel consumption in the study watershed
Fuelwood consumption
As elsewhere in rural Ethiopia, fuelwood is the main source of energy in the study area. Household fuelwood consumption ranged from 304 to 4258 kg/year. The largest groups of households (15.2 and 14.1% of the total population) consumed 1825 and 2433 kg/year respectively (Table 1). The average household fuelwood consumption was 2280 kg/year. Total fuelwood consumption in the watershed was 172,868 kg/year (~172.9 t/year), with a mean of 1902.95 kg/year (Table 1). In line with Guta (2012), total annual fuelwood and animal dung consumption elsewhere in rural Ethiopia is estimated at 2154 and 1825 kg respectively. Similarly, in the study by Mekonnen and Köhlin (2009), the quantity of fuelwood consumed rose from 2004 kg/year to 2143 kg/year in 2005, nearly the same as the present findings, indicating that consumption of fuelwood as a source of domestic energy is increasing over time, with negative implications for the environment. Annual household fuelwood consumption varied significantly among households, as indicated by the least significant difference (LSD) result (f = 2.80, p = 0.0001); the variation resulted from the absence of fuel-saving stoves (65.2%) and, ceteris paribus, from household size, income level, cattle population and number of trees planted. According to Nanda and Khurana (1995) and Saxena (1997), besides increasing consumption, the use of traditional biofuel exposes users to health problems such as eye and lung diseases caused by kitchen smoke, especially for females, who spend long hours in close proximity to the kitchen. The majority of households were using traditional three-stone stoves.
Only 34.8% of the respondents were using a fuel-saving stove, locally called "Gonze". Rural people generally meet their fuelwood demand from agricultural land, natural forest, grazing land and fallow land (Jaiswal and Bhattacharya 2013). Although people in rural parts of the country in general, and the study area in particular, depend on natural forest, privately planted trees and community plantations for fuel and construction, community plantations and natural forest contributed only 7.6 and 5.4% respectively.
Table 1 Annual domestic biofuel consumption in Beressa Watershed
In addition to accelerating land degradation and the loss of agricultural productivity, the clearing of natural forest contributes to further shortage of fuelwood. It was observed that, owing to deforestation, households were spending considerably more time collecting fuelwood over long distances. As a result, households were substituting dung cake for fuel, which has important implications for agricultural production. Participants were therefore asked to estimate the average travel time to collect biofuel before and after the community-based watershed management practice; these estimates may not be accurate, so care must be taken in interpreting them (Fig. 2). The average return time before the project was 121 min. Assuming an average walking speed of 5 km/h, the distance traveled for fuelwood collection was therefore 10.08 km. Recently the equivalent trip takes 92.3 min, corresponding to 7.66 km. As community plantations around homesteads and farm areas increased, households turned to private tree plantations for domestic use (97.8%), which in turn reduced the distance traveled for fuelwood from 10.08 to 7.66 km.
Comparison of time spent collecting fuelwood before and after watershed management
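The time-to-distance conversion used above is simple arithmetic; a small sketch (assuming, as in the text, a 5 km/h walking speed):

```python
WALKING_SPEED_KMH = 5.0  # average walking speed assumed in the text

def distance_km(minutes):
    """Distance covered in `minutes` of walking at the assumed speed."""
    return minutes / 60.0 * WALKING_SPEED_KMH

before = distance_km(121)   # ~10.08 km, as reported
after = distance_km(92.3)   # ~7.69 km (the text reports 7.66 km, presumably rounded)
print(round(before, 2), round(after, 2))
```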
Dung fuel consumption
Similarly, cattle dung cake is the most important source of household energy after fuelwood. Annual dung consumption in the study watershed ranged from 146 to 2920 kg/year, with a total quantity of 95,192 kg/year (95.192 t/year; Table 1). The majority of households used 876 kg/year, which is lower than the average (1533 kg/year); the mean quantity was 1050.46 kg/year. In contrast, other studies concluded that the average quantity of animal dung cake used as a fuel source declined from 1307 kg/year in 2000 to 1157 kg/year in 2005 (Mekonnen and Köhlin 2009).
Clearing of forest and degradation of land are interrelated with the use of fuelwood and cow dung as domestic energy sources, which are among the most serious environmental problems for the country. For instance, using dung cake as an energy source reduces the manure available as a soil conditioner to boost agricultural productivity (Mekonnen and Köhlin 2009). One of the most serious constraints on food security is soil fertility depletion. Manure is an important soil conditioner used to enhance soil fertility (Raj et al. 2014). According to Fulhage (2000, cited in Raj et al. 2014), manure contains N, P, K, Mg, Ca, S, Zn, Cu, B, Mn, etc. (Table 2). These improve soil tilth, the water-holding capacity of the soil and aeration, and in turn increase agricultural productivity. Therefore, using dung as a source of domestic energy forgoes the opportunity of using it as a soil fertility amendment, since cattle dung contains nutrients that are very important for the soil.
Table 2 Estimated nutrient loss from burning of cattle dung
To fill the gap in domestic energy supply, 95.192 tonnes of cattle dung are burnt annually, losing many macro- and micronutrients suitable for soil fertility improvement. Burning cattle dung for domestic energy instead of using it as a soil fertility amendment therefore has negative implications for agricultural productivity, especially for a developing country like Ethiopia, which imports fertilizer from abroad.
Dung was collected from different places in the study area: private grazing land, communal grazing land, and from privately owned cattle at home. As elsewhere in rural Ethiopia, in the study watershed the responsibility of collecting fuel, in addition to performing household chores and raising children, mostly falls on females and children, although every family member performs the activity.
Biofuel consumption pattern and socio-economic characteristics
The annual biofuel consumption of the study watershed ranged from 450 to 7118 kg, with an average consumption of 3813 kg/year. Biofuel consumption is susceptible to various factors: the requirement for fuelwood depends on household size, the availability of biofuel near the home, and the number of cattle owned. However, population growth makes the availability of fuelwood lag behind the need for domestic use (Swaminathan and Varadharaj 2001).
Biofuel consumption in the study area was variable, reflecting the different factors that influence the total quantity consumed in a year. Fuelwood consumption was positively and significantly correlated with household size (r = 0.472, p < 0.01) (Table 3), indicating that larger households need additional fuelwood to prepare food. Similarly, household size and dung cake use were significantly correlated (r = 0.401, p < 0.01), possibly because larger families need more energy and have more free labor to collect dung. The association between landholding size and biofuel consumption was statistically insignificant (r = −0.54, p > 0.05), indicating that larger landholders have no better access to biofuel. The association between the number of trees planted and fuelwood consumption was also insignificant, as was that between dung consumption and the number of trees planted. Rather than replacing dung cakes as a fuel source, households preferred to sell wood and trees at nearby markets to earn additional income. The micro- and macronutrients that dung could have returned to the soil therefore became ever scarcer, which in turn contributed to food insecurity.
Table 3 Correlation matrix indicates biofuel consumption and influencing factors
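The correlations in Table 3 are standard Pearson coefficients with two-sided p-values. A minimal sketch of the computation, on invented survey rows rather than the study's data, might look like this:

```python
# Sketch of the Table 3 correlation analysis on hypothetical survey rows
# (household size vs annual fuelwood use, kg). Values are illustrative only.
from scipy.stats import pearsonr

household_size = [2, 3, 4, 4, 5, 6, 7, 8]
fuelwood_kg = [900, 1500, 2100, 1800, 2600, 3200, 3900, 4300]

r, p = pearsonr(household_size, fuelwood_kg)
print(f"r = {r:.3f}, p = {p:.4f}")
```

With real data, the same call would be repeated for each pair of variables (dung use, landholding size, trees planted, income) to populate the correlation matrix.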
In the present study, cattle ownership was positively correlated with dung cake use: households that owned more cattle had more opportunity to prepare dung cakes at home. By contrast, cattle ownership and fuelwood use were negatively correlated. Thus, the more cattle a household owned, the greater its tendency to use dung for domestic energy, which in turn left more trees and wood free for home construction and for earning cash income.
There was a significant positive correlation between household income level and fuelwood use (r = 0.048, p < 0.05), whereas income and dung use were not significantly correlated. A possible explanation is that households with more income from farm produce and off-farm work use fuel-saving stoves and can therefore reserve dung as a soil fertility amendment, boosting agricultural production and productivity.
Household-level tree planting and livelihood strategies
The number of trees planted per year varied among households, ranging from 0 to more than 520. Trees planted around farm plots and homesteads have therefore contributed to the increase in forest coverage over the last decade. The total number of trees planted per year was 11,330, an average of 243 trees per household (Fig. 3). In line with this, a study by Bewket (2005) in the Chemoga watershed in the north-western highlands of Ethiopia found that farm- and community-level tree planting over the last four decades had substantially increased forest cover, averaging 307 trees planted per household, although planted trees differ from natural forest cover. According to Ndayambaje et al. (2012), 350 of 480 surveyed households in rural Rwanda planted one to four tree species on their farm plots. Similarly, Sandewall et al. (2015) found that the planted-tree area conserved and managed by households averaged about 0.4 ha per household in China, 0.3 ha in Ethiopia, and 0.4 ha in Vietnam. Stimulated by markets and by government policies favoring a climate-resilient green economy, together with erratic rainfall, households increasingly decided to plant trees around their homesteads and farm plots. By planting trees and earning cash income for their survival, they simultaneously reduced their dependence on crops. Similarly, Sandewall et al. (2015) found that planted trees and natural forest were the second-largest income source after crop production for households in Ethiopia, and accounted for 6–25% of income in Vietnam. However, although the expansion of tree planting at the expense of unsustainable farming raised incomes for some, it did not immediately bring households out of food insecurity and shortage.
Fig. 3 Number of trees planted
Another reason farmers decided to plant trees rather than produce crops is that growing trees requires less labor. Eucalyptus was the dominant tree type planted in the watershed (more than 90%), followed by Juniperus procera (locally called Tid); Sesbania sesban, tree lucerne, shrubs, and grasses have recently been planted to some extent. Farmers preferred eucalyptus because it grows fast, requires little tending, and is less susceptible to climate change. At the same time, from long experience, farmers noted that eucalyptus has negative ecological effects because of its large water requirement. Various studies raise controversial points about the benefits and ecological effects of eucalyptus. According to Zhang (2012), for instance, biological debris and soil fertility were reduced where eucalyptus was planted; eucalyptus also grows faster than other forest and tree species and needs a large amount of water, causing biodiversity degradation. Similarly, Bone et al. (1997) and Lemenih (2004) reported that eucalyptus negatively affects ecosystem services, for instance by reducing soil moisture content and groundwater. Contrary to this, where eucalyptus was planted, precipitation reportedly increased by 152.5 mm per annum while evaporation fell by 75.3 mm annually.
Regardless of these negative ecological and environmental effects, which need further in-depth study of the advantages and disadvantages, eucalyptus provided economic relief for households, especially during drought periods, through additional cash income (Jagger and Pender 2003). From the survey, we found that more than 85% of households sold trees at nearby markets to earn additional cash income. Similarly, Holden et al. (2003) found that eucalyptus plantation on marginal and degraded lands has positive effects on crop production and land conservation.
To meet demand, especially for home construction, households used their privately planted trees near homesteads and farm plots (90%). Unlike many other studies of the country's natural forests, which simply portray farm households as agents of deforestation, this study observed that rural households can also be actors in increasing forest coverage. Similarly, studies elsewhere in developing countries (Kajembe 1994; Price and Campbell 1997) confirm that rural households play significant roles in indigenous tree planting and in managing their forest resources.
Analysis of the association between tree planting and household socio-economic characteristics showed a positive, statistically significant correlation between household head age and the number of trees planted (r = 0.245, p < 0.01). Household size was positively correlated with tree planting, indicating that larger households have more free labor available for planting trees. The association between household educational background and tree planting was significant (r = 0.049, p < 0.05): farmers who could read and write had better information about the benefits of planting trees than those who could not. Household income was not significantly related to tree planting (r = −0.123, p = 0.07), nor was landholding size, because trees are planted around homesteads; in addition, young people with small farm plots, eager for additional cash income, devote themselves to planting more trees.
The probit model results indicated that holding a land use right certificate significantly increases the level of tree planting; certification likely makes householders more confident in deciding to grow trees on their own land (Table 4). Similarly, a previous study by Holden et al. (2009) confirmed that land use certification is likely to affect household decisions on planting trees.
Table 4 Probit model result for household level of tree planting
The results for household-level variables confirmed that education, age, gender, family size, and biofuel consumption per household were significant variables, making households more likely to plant trees. Credit access, off-farm income, landholding size, and the number of livestock tended were less likely to influence the decision to plant trees.
As elsewhere in the developing world, and in Ethiopia generally, people in the study watershed rely on traditional biofuels. The conflict between natural resources and people dependent on subsistence farming remains an unresolved problem for biodiversity.
With an alarming population growth rate accelerating the deforestation of natural forest, and a growing demand for domestic energy, household-level tree planting can help reverse the scarcity; it also benefits household economies and the environment. Because of growing domestic energy demand, households have turned to dung as an energy source, limiting its use for soil fertility improvement. A possible solution is to encourage households to use more efficient fuel-saving stoves, so that cattle dung can increasingly be used as organic fertilizer and agriculture can diversify. Using cattle dung for soil fertility improvement and agricultural productivity would also reduce the foreign currency spent on importing chemical fertilizer. Private and community tree planting on homesteads and bare land should therefore be encouraged as a strategy to reduce pressure on natural forest and to support ecological and environmental conservation.
Because eucalyptus and Juniperus (locally known as Tid) are the dominant tree types in the study area, the interviewed households stated that they were unable to obtain fodder for their livestock. To reduce fodder shortages, soil erosion, and land degradation, and to meet fuelwood requirements, the planting of multi-purpose and indigenous trees should be promoted and encouraged. On the basis of ecological and economic analysis, private and community-level tree planting and localized natural resource management should be implemented. Allocating bare, hillside, and degraded land for private tree planting would also create greater responsibility among local households.
Finally, it can be concluded that household-level multi-purpose tree planting and agro-forestry practices based on the local context should be acknowledged, because they contribute substantially to increasing natural forest coverage and ecological succession, reducing natural resource degradation, increasing the availability of fuelwood, and earning cash income, thereby reducing the pressure to use dung as a domestic fuel. The benefits are therefore inclusive: not only landowners but also the landless stand to gain from the rehabilitated watershed.
Ceteris paribus is an economic term meaning that other things remain constant.
Baking and cooking fuel-saving stoves, produced from cement and sand, are mostly used by the rural population.
Abreham B, Belay Z (2015) Biofuel energy for mitigation of climate change in Ethiopia. J Energy Nat Resour 4(6):62–72. doi:10.11648/j.jenr.20150406.11
Aklog L (1990) Forest resources and the Ministry of Agriculture forestry development and legislation. National Conservation Strategy Conference Document. ONCC, Addis Ababa
Alemu M (1999) Rural household biomass fuel production and consumption in Ethiopia: a case study
Araya H, Edwards S (2006) The Tigray experience : a success story in sustainable agriculture. Third World Network (TWN), Kuala Lumpur
Arrow K, Dasgupta P, Goulder L, Daily G, Ehrlich P, Heal G, Levin S, Mäler KG, Schneider S, Starrett D, Walker B (2004) Are we consuming too much? J Econ Perspect 18(3):147–172
Ayele ZE (2009) Smallholder farmers' decision making in farm tree growing in the highlands of Ethiopia. ProQuest, Ann Arbor
Bewket W (2005) Biofuel consumption, household level tree planting and its implications for environmental management in the northwestern highlands of Ethiopia. East Afr Soc Sci Res Rev 21(1):19–38
Biratu AA, Asmamaw DK (2016) Farmers' perception of soil erosion and participation in soil and water conservation activities in the Gusha Temela watershed, Arsi, Ethiopia. Int J Riv Basin Manag 14(3):329–336
Bone RC, Grodzin CJ, Balk RA (1997) Sepsis: a new hypothesis for pathogenesis of the disease process. CHEST J 112(1):235–243
Cheng S, Hiwatashi Y, Imai H, Naito M, Numata T (1998) Deforestation and degradation of natural resources in Ethiopia: forest management implications from a case study in the Belete-Gera Forest. J For Res 3(4):199–204
Davidson J (1988) Ethiopia, preparatory assistance to research for afforestation and conservation. Ministry of Agriculture, Addis Ababa
EFAP (Ethiopian Forestry Action Program) (1993) Ethiopian forestry action program: the challenge for development. Ministry of Natural Resources Development and Environmental Protection, Addis Ababa
EREDPC/MoRD (2002) Integrated rural energy strategy. EREDPC/MoRD, Addis Ababa
FAO (2007) State of the world's forests 2007. Food and Agriculture Organisation, Rome
Fulhage CD (2000) Reduce environmental problems with proper land application of animal manure. University of Missouri Extension, Columbia
Gebre T, Mohammed Z, Taddesse M, Narayana SC (2013) Adoption of structural soil and water conservation technologies by smallholder farmers in Adama Wereda, East Shewa, Ethiopia
Gebreegziabher Z, Hailu Y, Mekelle T, Ababa A, Gebrekirstos K (2010) Energy, growth, and environmental interaction in the Ethiopian economy.
Godfray HCJ, Crute IR, Haddad L, Lawrence D, Muir JF, Nisbett N, Pretty J, Robinson S, Toulmin C, Whiteley R (2010) The future of the global food system. Philos Trans R Soc Lond B: Biol Sci 365(1554):2769–2777
Guta DD (2012) Assessment of biomass fuel resource potential and utilization in Ethiopia: sourcing strategies for renewable energies. Int J Renew Ener Res (IJRER) 2(1):131–139
Heltberg R, Arndt TC, Sekhar NU (2000) Fuelwood consumption and forest degradation: a household model for domestic energy substitution in rural India. Land Econ: 213–232
Holden S, Benin S, Shiferaw B, Pender J (2003) Tree planting for poverty reduction in less-favoured areas of the Ethiopian highlands. Small-Scale For Econ Manag Policy 2(1):63–80
Holden ST, Deininger K, Ghebru H (2009) Impacts of low-cost land certification on investment and productivity. Am J Agric Econ 91(2):359–373
Jagger P, Pender J (2003) The role of trees for sustainable management of less-favored lands: the case of eucalyptus in Ethiopia. Forest Policy Econ 5(1):83–95
Jain RK, Singh B (1999) Fuelwood characteristics of selected indigenous tree species from central India. Bioresour Technol 68(3):305–308
Jaiswal A, Bhattacharya P (2013) Fuelwood dependence around protected areas: a case of Suhelwa wildlife sanctuary, Uttar Pradesh. J Hum Ecol 42(2):177–186
Kajembe GC (1994) Indigenous management systems as a basis for community forestry in Tanzania: a case study of Dodoma urban and Lushoto districts
Lemenih M (2004) Effects of land use changes on soil quality and native flora degradation and restoration in the highlands of Ethiopia, vol 306
Lin Z, Xuelin LIU, Yunjie W, Li YANG, Xin LONG, Bingzhen D, Fen L, Xiaochang CAO (2011) Consumption of ecosystem services: a conceptual framework and case study in Jinghe watershed. J Resour Ecol 2(4):298–306
Lupwayi NZ, Girma M, Haque I (2000) Plant nutrient contents of cattle manures from small-scale farms and experimental stations in the Ethiopian highlands. Agric Ecosyst Environ 78(1):57–63
Meaza H, Gebresamuel G (2013) Impact of household level tree planting on key soil properties in Tigray, Northern Highlands of Ethiopia, vol 3
Meijer SS, Catacutan D, Ajayi OC, Sileshi GW, Nieuwenhuis M (2015) The role of knowledge, attitudes and perceptions in the uptake of agricultural and agroforestry innovations among smallholder farmers in sub-Saharan Africa. Int J Agric Sustain 13(1):40–54
Mekonnen A, Köhlin G (2009) Biomass fuel consumption and dung use as manure: evidence from rural households in the Amhara region of Ethiopia. Institutionen för national ekonomi med statistik, Handelshögskolan vid Göteborgs universitet, Gothenburg
Mekuria W, Veldkamp E, Haile M, Gebrehiwot K, Muys B, Nyssen J (2009) Effectiveness of exclosures to control soil erosion and local community perception on soil erosion in Tigray, Ethiopia
Meshesha TW, Tripathi SK, Khare D (2016) Analyses of land use and land cover change dynamics using GIS and remote sensing during 1984 and 2015 in the Beressa Watershed Northern Central Highland of Ethiopia. Model Earth Syst Environ 2(4):168
Miller AS (1986) Growing Power: bio energy for development and industrialization. World Resource Institutes, A Center for Policy Research
Mislimshoeva B, Hable R, Fezakov M, Samimi C, Abdulnazarov A, Koellner T (2014) Factors influencing households' firewood consumption in the Western Pamirs, Tajikistan. Mt Res Dev 34(2):147–156
Moges A, Holden NM (2007) Farmers' perceptions of soil erosion and soil fertility loss in Southern Ethiopia. Land Degrad Dev 18(5):543–554
Munro RN, Deckers J, Haile M, Grove AT, Poesen J, Nyssen J (2008) Soil landscapes, land cover change and erosion features of the Central Plateau region of Tigrai, Ethiopia: photo-monitoring with an interval of 30 years. Catena 75(1):55–64
Nanda AK, Khurana SAMIDHA (1995) From hearth to earth: use of natural resources for cooking in Indian households. Demogr India 24(1):33–58
Ndayambaje JD, Heijman WJM, Mohren GMJ (2012) Household determinants of tree planting on farms in rural Rwanda. Small-scale For 11(4):477–508
Newcombe K (1987) An economic justification for rural afforestation: the case of Ethiopia. Ann Reg Sci 21(3):80–99
Pandey D (2002) Fuelwood studies in India myth and reality. Center for International Forestry Research, Bogor
Price L, Campbell B (1997) Household tree holdings: a case study in Mutoko communal area, Zimbabwe. Agrofor Syst 39(2):205–210
Raj A, Jhariya MK, Toppo P (2014) Cow dung for eco-friendly and sustainable productive farming. Environ Sci 3(10):201–202
Sandewall M, Kassa H, Wu S, Khoa PV, He Y, Ohlsson B (2015) Policies to promote household based plantation forestry and their impacts on livelihoods and the environment: cases from Ethiopia, China, Vietnam and Sweden. Int For Rev 17(1):98–111
Saxena NC (1997) The wood fuel scenario and policy issues in India (No 49). Food and Agriculture Organization of the United Nations, Rome
Slingo JM, Challinor AJ, Hoskins BJ, Wheeler TR (2005) Introduction: food crops in a changing climate. Phil Trans R Soc Lond B Biol Sci 360(1463):1983–1989
Swaminathan LP, Varadharaj S (2001) Status of firewood in India. In IUFRO Joint Symposium on Tree Seed Technology, Physiology and Tropical Silviculture, College, Laguna (Philippines). Accessed 30 Apr–3 May 2001
UNECA (United Nations Economic Commission for Africa) (2004) Sustainable agriculture and environmental rehabilitation program: household level socio-economic survey of the Amhara Region. UNECA, Addis Ababa
Wood A (1990) Natural resource management and rural development in Ethiopia. Ethiopia: Rural Dev Options, p 187–195
Wooldridge JM (2002) Econometric analysis of cross section and panel data. MIT Press, Cambridge
Wooldridge JM (2006) Introductory econometrics: a modern approach. Thomson South-Western, Ohio
Worku T, Tripathi SK (2016) Farmer's perception on soil erosion and land degradation problems and management practices in the Beressa Watershed of Ethiopia. J Water Resour Ocean Sci 5(5):64–72
WRI (World Resources Institute) (1990) World resources 1990–1991: a guide to the global environment. Oxford University Press, Oxford
Zhang W (2012) Did Eucalyptus contribute to environment degradation? Implications from a dispute on causes of severe drought in Yunnan and Guizhou, China. Environ Skept Crit 1(2):34
TW contributed substantially to data acquisition, collection, coding and entry, analysis, interpretation of the results, and writing. SKT and DK were involved in critically advising and revising the manuscript and made suggestions. All authors read and approved the final manuscript.
This research was supported by the Ministry of Education, Government of Ethiopia. Prof. S.K. Tripathi and Prof. Deepak Khare, Department of Water Resources Development and Management, Indian Institute of Technology Roorkee, are acknowledged for their valuable comments on the manuscript. Sincere thanks are also extended to all who contributed to the manuscript, and special thanks to the editors and anonymous reviewers for their insightful comments.
Tesfa Worku
Present address: IITR Roorkee, Roorkee, India
Department of Water Resource and Irrigation Management, Debre Berhan University, Debre Berhan, Ethiopia
Department of Water Resources Development and Management, Indian Institute of Technology Roorkee, Roorkee, Uttarakhand, 247667, India
S. K. Tripathi & Deepak Khare
S. K. Tripathi
Deepak Khare
Correspondence to Tesfa Worku.
Worku, T., Tripathi, S.K. & Khare, D. Household level tree planting and its implication for environmental conservation in the Beressa Watershed of Ethiopia. Environ Syst Res 6, 10 (2018). https://doi.org/10.1186/s40068-017-0087-4
GaLactic and Extragalactic All-sky Murchison Widefield Array survey eXtended (GLEAM-X) I: Survey description and initial data release
Murchison Widefield Array
N. Hurley-Walker, T. J. Galvin, S. W. Duchesne, X. Zhang, J. Morgan, P. J. Hancock, T. An, T. M. O. Franzen, G. Heald, K. Ross, T. Vernstrom, G. E. Anderson, B. M. Gaensler, M. Johnston-Hollitt, D. L. Kaplan, C. J. Riseley, S. J. Tingay, M. Walker
Published online by Cambridge University Press: 23 August 2022, e035
We describe a new low-frequency wideband radio survey of the southern sky. Observations covering 72–231 MHz and Declinations south of $+30^\circ$ have been performed with the Murchison Widefield Array "extended" Phase II configuration over 2018–2020 and will be processed to form data products including continuum and polarisation images and mosaics, multi-frequency catalogues, transient search data, and ionospheric measurements. From a pilot field described in this work, we publish an initial data release covering 1,447 $\mathrm{deg}^2$ over $4\,\mathrm{h}\leq \mathrm{RA}\leq 13\,\mathrm{h}$ , $-32.7^\circ \leq \mathrm{Dec} \leq -20.7^\circ$ . We process twenty frequency bands sampling 72–231 MHz, with a resolution of 2′–45′′, and produce a wideband source-finding image across 170–231 MHz with a root mean square noise of $1.27\pm0.15\,\mathrm{mJy\,beam}^{-1}$ . Source-finding yields 78,967 components, of which 71,320 are fitted spectrally. The catalogue has a completeness of 98% at ${{\sim}}50\,\mathrm{mJy}$ , and a reliability of 98.2% at $5\sigma$ rising to 99.7% at $7\sigma$ . A catalogue is available from Vizier; images are made available via the PASA datastore, AAO Data Central, and SkyView. This is the first in a series of data releases from the GLEAM-X survey.
What is the SKA-Low sensitivity for your favourite radio source?
M. Sokolowski, S. J. Tingay, D. B. Davidson, R. B. Wayth, D. Ung, J. Broderick, B. Juswardy, M. Kovaleva, G. Macario, G. Pupillo, A. Sutinjo
Published online by Cambridge University Press: 15 April 2022, e015
The Square Kilometre Array (SKA) will be the largest radio astronomy observatory ever built, providing unprecedented sensitivity over a very broad frequency band from 50 MHz to 15.3 GHz. The SKA's low frequency component (SKA-Low), which will observe in the 50–350 MHz band, will be built at the Murchison Radio-astronomy Observatory (MRO) in Western Australia. It will consist of 512 stations each composed of 256 dual-polarised antennas, and the sensitivity of an individual station is pivotal to the performance of the entire SKA-Low telescope. The answer to the question in the title is, it depends. The sensitivity of a low frequency array, such as an SKA-Low station, depends strongly on the pointing direction of the digitally formed station beam and the local sidereal time (LST), and is different for the two orthogonal polarisations of the antennas. The accurate prediction of the SKA-Low sensitivity in an arbitrary direction in the sky is crucial for future observation planning. Here, we present a sensitivity calculator for the SKA-Low radio telescope, using a database of pre-computed sensitivity values for two realisations of an SKA-Low station architecture. One realisation uses the log-periodic antennas selected for SKA-Low. The second uses a known benchmark, in the form of the bowtie dipoles of the Murchison Widefield Array. Prototype stations of both types were deployed at the MRO in 2019, and since then have been collecting commissioning and verification data. These data were used to measure the sensitivity of the stations at several frequencies and over at least 24 h intervals, and were compared to the predictions described in this paper. The sensitivity values stored in the SQLite database were pre-computed for the X, Y, and Stokes I polarisations in 10 MHz frequency steps, $\scriptsize{1/2}$ hour LST intervals, and $5^\circ$ resolution in pointing directions. 
The database allows users to quickly and easily estimate the sensitivity of SKA-Low for arbitrary observing parameters (your favourite object) using interactive web-based or command line interfaces. The sensitivity can be calculated using publicly available web interface (http://sensitivity.skalow.link) or a command line python package (https://github.com/marcinsokolowski/station_beam), which can also be used to calculate the sensitivity for arbitrary pointing directions, frequencies, and times without interpolations.
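The lookup scheme described in the abstract, pre-computed values on a (frequency, LST, pointing) grid, can be illustrated with a toy SQLite query. The table schema, column names, and stored value below are invented for illustration; the real `station_beam` package defines its own database format.

```python
# Hypothetical sketch of a pre-computed sensitivity database queried by
# snapping a request to the nearest grid point. Schema and values invented.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE sensitivity (
    freq_mhz INTEGER, lst_hours REAL, az_deg REAL, za_deg REAL, aot REAL)""")
# one pre-computed grid point: A/T in m^2/K at 160 MHz, LST 12.0 h, zenith
db.execute("INSERT INTO sensitivity VALUES (160, 12.0, 0.0, 0.0, 1.9)")

def lookup(freq_mhz, lst_hours, az_deg, za_deg):
    """Snap the request to the nearest pre-computed grid point."""
    grid = (round(freq_mhz / 10) * 10,    # 10 MHz frequency steps
            round(lst_hours * 2) / 2,     # half-hour LST intervals
            round(az_deg / 5) * 5,        # 5 degree pointing grid
            round(za_deg / 5) * 5)
    row = db.execute(
        "SELECT aot FROM sensitivity WHERE freq_mhz=? AND lst_hours=? "
        "AND az_deg=? AND za_deg=?", grid).fetchone()
    return row[0] if row else None

print(lookup(158, 12.1, 2.0, 1.0))  # snaps to the stored grid point
```

Pre-computing on a coarse grid and snapping (or interpolating) at query time is what makes such a calculator fast enough for interactive web use.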
Early-time searches for coherent radio emission from short GRBs with the Murchison Widefield Array
J. Tian, G. E. Anderson, P. J. Hancock, J. C. A. Miller-Jones, M. Sokolowski, A. Rowlinson, A. Williams, J. Morgan, N. Hurley-Walker, D. L. Kaplan, Tara Murphy, S. J. Tingay, M. Johnston-Hollitt, K. W. Bannister, M. E. Bell, B. W. Meyers
Published online by Cambridge University Press: 03 February 2022, e003
Many short gamma-ray bursts (GRBs) originate from binary neutron star mergers, and there are several theories that predict the production of coherent, prompt radio signals either prior, during, or shortly following the merger, as well as persistent pulsar-like emission from the spin-down of a magnetar remnant. Here we present a low frequency (170–200 MHz) search for coherent radio emission associated with nine short GRBs detected by the Swift and/or Fermi satellites using the Murchison Widefield Array (MWA) rapid-response observing mode. The MWA began observing these events within 30–60 s of their high-energy detection, enabling us to capture any dispersion delayed signals emitted by short GRBs for a typical range of redshifts. We conducted transient searches at the GRB positions on timescales of 5 s, 30 s, and 2 min, resulting in the most constraining flux density limits on any associated transient of 0.42, 0.29, and 0.084 Jy, respectively. We also searched for dispersed signals at a temporal and spectral resolution of 0.5 s and 1.28 MHz, but none were detected. However, the fluence limit of 80–100 Jy ms derived for GRB 190627A is the most stringent to date for a short GRB. Assuming the formation of a stable magnetar for this GRB, we compared the fluence and persistent emission limits to short GRB coherent emission models, placing constraints on key parameters including the radio emission efficiency of the nearly merged neutron stars ( $\epsilon_r\lesssim10^{-4}$ ), the fraction of magnetic energy in the GRB jet ( $\epsilon_B\lesssim2\times10^{-4}$ ), and the radio emission efficiency of the magnetar remnant ( $\epsilon_r\lesssim10^{-3}$ ). 
Comparing the limits derived for our full GRB sample (along with those in the literature) to the same emission models, we demonstrate that our fluence limits only place weak constraints on the prompt emission predicted from the interaction between the relativistic GRB jet and the interstellar medium for a subset of magnetar parameters. However, the 30-min flux density limits were sensitive enough to theoretically detect the persistent radio emission from magnetar remnants up to a redshift of $z\sim0.6$ . Our non-detection of this emission could imply that some GRBs in the sample were not genuinely short or did not result from a binary neutron star merger, the GRBs were at high redshifts, these mergers formed atypical magnetars, the radiation beams of the magnetar remnants were pointing away from Earth, or the majority did not form magnetars but rather collapse directly into black holes.
The MWA long baseline Epoch of reionisation survey—I. Improved source catalogue for the EoR 0 field
C. R. Lynch, T. J. Galvin, J. L. B. Line, C. H. Jordan, C. M. Trott, J. K. Chege, B. McKinley, M. Johnston-Hollitt, S. J. Tingay
One of the principal systematic constraints on the Epoch of Reionisation (EoR) experiment is the accuracy of the foreground calibration model. Recent results have shown that highly accurate models of extended foreground sources, and including models for sources in both the primary beam and its sidelobes, are necessary for reducing foreground power. To improve the accuracy of the source models for the EoR fields observed by the Murchison Widefield Array (MWA), we conducted the MWA Long Baseline Epoch of Reionisation Survey (LoBES). This survey consists of multi-frequency observations of the main MWA EoR fields and their eight neighbouring fields using the MWA Phase II extended array. We present the results of the first half of this survey centred on the MWA EoR0 observing field (centred at RA (J2000) $0^\mathrm{h}$ , Dec (J2000) $-27^{\circ}$ ). This half of the survey covers an area of 3 069 degrees $^2$ , with an average rms of 2.1 mJy beam–1. The resulting catalogue contains a total of 80 824 sources, with 16 separate spectral measurements between 100 and 230 MHz, and spectral modelling for 78 $\%$ of these sources. Over this region we estimate that the catalogue is 90 $\%$ complete at 32 mJy, and 70 $\%$ complete at 10.5 mJy. The overall normalised source counts are found to be in good agreement with previous low-frequency surveys at similar sensitivities. Testing the performance of the new source models we measure lower residual rms values for peeled sources, particularly for extended sources, in a set of MWA Phase I data. The 2-dimensional power spectrum of these data residuals also show improvement on small angular scales—consistent with the better angular resolution of the LoBES catalogue. It is clear that the LoBES sky models improve upon the current sky model used by the Australian MWA EoR group for the EoR0 field.
A broadband radio view of transient jet ejecta in the black hole candidate X-ray binary MAXI J1535–571
Jaiverdhan Chauhan, J. C. A. Miller-Jones, G. E. Anderson, A. Paduano, M. Sokolowski, C. Flynn, P. J. Hancock, N. Hurley-Walker, D. L. Kaplan, T. D. Russell, A. Bahramian, S. W. Duchesne, D. Altamirano, S. Croft, H. A. Krimm, G. R. Sivakoff, R. Soria, C. M. Trott, R. B. Wayth, V. Gupta, M. Johnston-Hollitt, S. J. Tingay
We present a broadband radio study of the transient jets ejected from the black hole candidate X-ray binary MAXI J1535–571, which underwent a prolonged outburst beginning on 2017 September 2. We monitored MAXI J1535–571 with the Murchison Widefield Array (MWA) at frequencies from 119 to 186 MHz over six epochs from 2017 September 20 to 2017 October 14. The source was quasi-simultaneously observed over the frequency range 0.84–19 GHz by UTMOST (the Upgraded Molonglo Observatory Synthesis Telescope), the Australian Square Kilometre Array Pathfinder (ASKAP), the Australia Telescope Compact Array (ATCA), and the Australian Long Baseline Array (LBA). Using the LBA observations from 2017 September 23, we measured the source size to be $34\pm1$ mas. During the brightest radio flare on 2017 September 21, the source was detected down to 119 MHz by the MWA, and the radio spectrum indicates a turnover between 250 and 500 MHz, which is most likely due to synchrotron self-absorption (SSA). By fitting the radio spectrum with a SSA model and using the LBA size measurement, we determined various physical parameters of the jet knot (identified in ATCA data), including the jet opening angle ( $\phi_{\rm op} = 4.5\pm1.2^{\circ}$ ) and the magnetic field strength ( $B_{\rm s} = 104^{+80}_{-78}$ mG). Our fitted magnetic field strength agrees reasonably well with that inferred from the standard equipartition approach, suggesting the jet knot to be close to equipartition. Our study highlights the capabilities of the Australian suite of radio telescopes to jointly probe radio jets in black hole X-ray binaries via simultaneous observations over a broad frequency range, and with differing angular resolutions. This suite allows us to determine the physical properties of X-ray binary jets. Finally, our study emphasises the potential contributions that can be made by the low-frequency part of the Square Kilometre Array (SKA-Low) in the study of black hole X-ray binaries.
Murchison Widefield Array rapid-response observations of the short GRB 180805A
G. E. Anderson, P. J. Hancock, A. Rowlinson, M. Sokolowski, A. Williams, J. Tian, J. C. A. Miller-Jones, N. Hurley-Walker, K. W. Bannister, M. E. Bell, C. W. James, D. L. Kaplan, Tara Murphy, S. J. Tingay, B. W. Meyers, M. Johnston-Hollitt, R. B. Wayth
Published online by Cambridge University Press: 10 June 2021, e026
Here we present stringent low-frequency (185 MHz) limits on coherent radio emission associated with a short-duration gamma-ray burst (SGRB). Our observations of the short gamma-ray burst (GRB) 180805A were taken with the upgraded Murchison Widefield Array (MWA) rapid-response system, which triggered within 20 s of receiving the transient alert from the Swift Burst Alert Telescope, corresponding to 83.7 s post-burst. The SGRB was observed for a total of 30 min, resulting in a $3\sigma$ persistent flux density upper limit of 40.2 mJy beam$^{-1}$. Transient searches were conducted at the Swift position of this GRB on 0.5 s, 5 s, 30 s and 2 min timescales, resulting in $3\sigma$ limits of 570–1 830, 270–630, 200–420, and 100–200 mJy beam$^{-1}$, respectively. We also performed a dedispersion search for prompt signals at the position of the SGRB with a temporal and spectral resolution of 0.5 s and 1.28 MHz, respectively, resulting in a $6\sigma$ fluence upper-limit range from 570 Jy ms at DM $=3\,000$ pc cm$^{-3}$ ( $z\sim 2.5$ ) to 1 750 Jy ms at DM $=200$ pc cm$^{-3}$ ( $z\sim 0.1)$ , corresponding to the known redshift range of SGRBs. We compare the fluence prompt emission limit and the persistent upper limit to SGRB coherent emission models assuming the merger resulted in a stable magnetar remnant. Our observations were not sensitive enough to detect prompt emission associated with the alignment of magnetic fields of a binary neutron star just prior to the merger, from the interaction between the relativistic jet and the interstellar medium (ISM) or persistent pulsar-like emission from the spin-down of the magnetar. However, in the case of a more powerful SGRB (a gamma-ray fluence an order of magnitude higher than GRB 180805A and/or a brighter X-ray counterpart), our MWA observations may be sensitive enough to detect coherent radio emission from the jet-ISM interaction and/or the magnetar remnant.
Finally, we demonstrate that of all current low- frequency radio telescopes, only the MWA has the sensitivity and response times capable of probing prompt emission models associated with the initial SGRB merger event.
Remnant radio galaxies discovered in a multi-frequency survey
GAMA Legacy ATCA Southern Survey
Benjamin Quici, Natasha Hurley-Walker, Nicholas Seymour, Ross J. Turner, Stanislav S. Shabala, Minh Huynh, H. Andernach, Anna D. Kapińska, Jordan D. Collier, Melanie Johnston-Hollitt, Sarah V. White, Isabella Prandoni, Timothy J. Galvin, Thomas Franzen, C. H. Ishwara-Chandra, Sabine Bellstedt, Steven J. Tingay, Bryan M. Gaensler, Andrew O'Brien, Johnathan Rogers, Kate Chow, Simon Driver, Aaron Robotham
The remnant phase of a radio galaxy begins when the jets launched from an active galactic nucleus are switched off. To study the fraction of radio galaxies in a remnant phase, we take advantage of an $8.31$ deg$^2$ subregion of the GAMA 23 field which comprises surveys covering the frequency range 0.1–9 GHz. We present a sample of 104 radio galaxies compiled from observations conducted by the Murchison Widefield Array (216 MHz), the Australian Square Kilometre Array Pathfinder (887 MHz), and the Australia Telescope Compact Array (5.5 GHz). We adopt an 'absent radio core' criterion to identify 10 radio galaxies showing no evidence for an active nucleus. We classify these as new candidate remnant radio galaxies. Seven of these objects still display compact emitting regions within the lobes at 5.5 GHz; at this frequency the emission is short-lived, implying a recent jet switch off. On the other hand, only three show evidence of aged lobe plasma by the presence of an ultra-steep spectrum ( $\alpha<-1.2$) and a diffuse, low surface brightness radio morphology. The predominant fraction of young remnants is consistent with a rapid fading during the remnant phase. Within our sample of radio galaxies, our observations constrain the remnant fraction to $4\%\lesssim f_{\mathrm{rem}} \lesssim 10\%$; the lower limit comes from the limiting case in which all remnant candidates with hotspots are simply active radio galaxies with faint, undetected radio cores. Finally, we model the synchrotron spectrum arising from a hotspot to show they can persist for 5–10 Myr at 5.5 GHz after the jets switch off—radio emission arising from such hotspots can therefore be expected in an appreciable fraction of genuine remnants.
A low-frequency blind survey of the low Earth orbit environment using non-coherent passive radar with the Murchison Widefield Array
S. Prabu, P. Hancock, X. Zhang, S. J. Tingay
Published online by Cambridge University Press: 09 December 2020, e052
We have extended our previous work to use the Murchison Widefield Array (MWA) as a non-coherent passive radar system in the FM frequency band, using terrestrial FM transmitters to illuminate objects in low Earth orbit (LEO) and the MWA as the sensitive receiving element for the radar return. We have implemented a blind detection algorithm that searches for these reflected signals in difference images constructed using standard interferometric imaging techniques. From a large-scale survey using 20 h of archived MWA observations, we detect 74 unique objects over multiple passes, demonstrating the MWA to be a valuable addition to the global Space Domain Awareness network. We detected objects with ranges up to 977 km and as small as $0.03$ ${\rm m}^2$ radar cross section. We found that 30 objects were either non-operational satellites or upper-stage rocket body debris. Additionally, we also detected FM reflections from Geminid meteors and aircraft flying over the MWA. Most of the detections of objects in LEO were found to lie within the parameter space predicted by previous feasibility studies, verifying the performance of the MWA for this application. We have also used our survey to characterise these reflected signals from LEO objects as a source of radio frequency interference (RFI) that corrupts astronomical observations. This has allowed us to undertake an initial analysis of the impact of this RFI on the MWA and the future Square Kilometre Array (SKA). As part of this analysis, we show that the standard MWA RFI flagging strategy misses most of this RFI and that this should be a careful consideration for the SKA.
A survey of spatially and temporally resolved radio frequency interference in the FM band at the Murchison Radio-astronomy Observatory
S. J. Tingay, M. Sokolowski, R. Wayth, D. Ung
We present the first survey of radio frequency interference (RFI) at the future site of the low frequency Square Kilometre Array (SKA), the Murchison Radio-astronomy Observatory (MRO), that both temporally and spatially resolves the RFI. The survey is conducted in a 1 MHz frequency range within the FM band, designed to encompass the closest and strongest FM transmitters to the MRO (located in Geraldton, approximately 300 km distant). Conducted over approximately three days using the second iteration of the Engineering Development Array in an all-sky imaging mode, we find a range of RFI signals. We are able to categorise the signals into: those received directly from the transmitters, from their horizon locations; reflections from aircraft (occupying approximately 13% of the observation duration); reflections from objects in Earth orbit; and reflections from meteor ionisation trails. In total, we analyse 33 994 images at 7.92 s time resolution in both polarisations with angular resolution of approximately 3.5 $^{\circ}$ , detecting approximately forty thousand RFI events. This detailed breakdown of RFI in the MRO environment will enable future detailed analyses of the likely impacts of RFI on key science at low radio frequencies with the SKA.
A SETI survey of the Vela region using the Murchison Widefield Array: Orders of magnitude expansion in search space
C. D. Tremblay, S. J. Tingay
Following the results of our previous low-frequency searches for extraterrestrial intelligence (SETI) using the Murchison Widefield Array (MWA), directed towards the Galactic Centre and the Orion Molecular Cloud (Galactic Anticentre), we report a new large-scale survey towards the Vela region with the lowest upper limits thus far obtained with the MWA. Using the MWA in the frequency range 98–128 MHz over a 17-h period, a $400\,\textrm{deg}^{2}$ field centred on the Vela Supernova Remnant was observed with a frequency resolution of 10 kHz. Within this field, there are six known exoplanets. At the positions of these exoplanets, we searched for narrow-band signals consistent with radio transmissions from intelligent civilisations. No unknown signals were found with a 5 $\sigma$ detection threshold. In total, across this work plus our two previous surveys, we have now examined 75 known exoplanets at low frequencies. In addition to the known exoplanets, we have included in our analysis the calculation of the Effective Isotropic Radiated Power (EIRP) upper limits towards over 10 million stellar sources in the Vela field with known distances from Gaia (assuming a 10-kHz transmission bandwidth). Using the methods of Wright, Kanodia, & Lubar (2018) to describe an eight-dimensional parameter space for SETI searches, our survey achieves the largest search fraction yet, two orders of magnitude higher than the previous highest (our MWA Galactic Anticentre survey), reaching a search fraction of ${\sim}2\times10^{-16}$. We also compare our results to previous SETI programs in the context of the $\mbox{EIRP}_{\textrm{min}}$ –Transmitter Rate plane. Our results clearly continue to demonstrate that SETI has a long way to go. But, encouragingly, the MWA SETI surveys also demonstrate that large-scale SETI surveys, in particular for telescopes with a large field-of-view, can be performed commensally with observations designed primarily for astrophysical purposes.
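The EIRP upper limit described in the abstract follows the standard SETI bookkeeping: the isotropic power a transmitter at distance $d$ would need so that its flux exceeds the survey limit $S_{\min}$ over bandwidth $\Delta\nu$, i.e. $\mathrm{EIRP}_{\min} = 4\pi d^2 S_{\min}\,\Delta\nu$. The sketch below applies that formula; the 50 mJy sensitivity and 100 pc distance are made-up placeholders, not values from the paper (only the 10 kHz bandwidth is taken from the abstract).

```python
import math

# Standard SETI upper-limit formula: EIRP_min = 4*pi*d^2 * S_min * dnu.
# The sensitivity and distance below are hypothetical placeholders.

PC_IN_M = 3.0857e16   # metres per parsec
JY = 1e-26            # W m^-2 Hz^-1 per jansky

def eirp_limit_w(distance_pc, s_min_jy, bandwidth_hz):
    """Minimum detectable isotropic transmitter power, in watts."""
    d_m = distance_pc * PC_IN_M
    return 4.0 * math.pi * d_m**2 * s_min_jy * JY * bandwidth_hz

# A star at 100 pc, an assumed 50 mJy flux-density limit, and the 10 kHz
# transmission bandwidth adopted in the abstract:
print(f"EIRP upper limit: {eirp_limit_w(100.0, 0.05, 1e4):.2e} W")
```

For these placeholder numbers the limit comes out near $6\times10^{14}$ W, which illustrates why low-frequency SETI limits are usually quoted per-star as a function of Gaia distance.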
Science with the Murchison Widefield Array: Phase I results and Phase II opportunities – Corrigendum
A. P. Beardsley, M. Johnston-Hollitt, C. M. Trott, J. C. Pober, J. Morgan, D. Oberoi, D. L. Kaplan, C. R. Lynch, G. E. Anderson, P. I. McCauley, S. Croft, C. W. James, O. I. Wong, C. D. Tremblay, R. P. Norris, I. H. Cairns, C. J. Lonsdale, P. J. Hancock, B. M. Gaensler, N. D. R. Bhat, W. Li, N. Hurley-Walker, J. R. Callingham, N. Seymour, S. Yoshiura, R. C. Joseph, K. Takahashi, M. Sokolowski, J. C. A. Miller-Jones, J. V. Chauhan, I. Bojičić, M. D. Filipović, D. Leahy, H. Su, W. W. Tian, S. J. McSweeney, B. W. Meyers, S. Kitaeff, T. Vernstrom, G. Gürkan, G. Heald, M. Xue, C. J. Riseley, S. W. Duchesne, J. D. Bowman, D. C. Jacobs, B. Crosse, D. Emrich, T. M. O. Franzen, L. Horsley, D. Kenney, M. F. Morales, D. Pallot, K. Steele, S. J. Tingay, M. Walker, R. B. Wayth, A. Williams, C. Wu
Science with the Murchison Widefield Array: Phase I results and Phase II opportunities
The Murchison Widefield Array (MWA) is an open access telescope dedicated to studying the low-frequency (80–300 MHz) southern sky. Since beginning operations in mid-2013, the MWA has opened a new observational window in the southern hemisphere enabling many science areas. The driving science objectives of the original design were to observe 21 cm radiation from the Epoch of Reionisation (EoR), explore the radio time domain, perform Galactic and extragalactic surveys, and monitor solar, heliospheric, and ionospheric phenomena. Altogether, $60+$ programs have recorded 20 000 h, producing 146 papers to date. In 2016, the telescope underwent a major upgrade resulting in alternating compact and extended configurations. Other upgrades, including digital back-ends and a rapid-response triggering system, have been developed since the original array was commissioned. In this paper, we review the major results from the prior operation of the MWA and then discuss the new science paths enabled by the improved capabilities. We group these science opportunities by the four original science themes but also include ideas for directions outside these categories.
A VOEvent-based automatic trigger system for the Murchison Widefield Array
P. J. Hancock, G. E. Anderson, A. Williams, M. Sokolowski, S. E. Tremblay, A. Rowlinson, B. Crosse, B. W. Meyers, C. R. Lynch, A. Zic, A. P. Beardsley, D. Emrich, T. M. O. Franzen, L. Horsley, M. Johnston-Hollitt, D. L. Kaplan, D. Kenney, M. F. Morales, D. Pallot, K. Steele, S. J. Tingay, C. M. Trott, M. Walker, R. B. Wayth, C. Wu
The Murchison Widefield Array (MWA) is an electronically steered low-frequency (<300 MHz) radio interferometer, with a 'slew' time less than 8 s. Low-frequency (∼100 MHz) radio telescopes are ideally suited for rapid response follow-up of transients due to their large field of view, the inverted spectrum of coherent emission, and the fact that the dispersion delay between a 1 GHz and 100 MHz pulse is on the order of 1–10 min for dispersion measures of 100–2000 pc cm$^{-3}$. The MWA has previously been used to provide fast follow-up for transient events including gamma-ray bursts (GRBs), fast radio bursts (FRBs), and gravitational waves, using systems that respond to Gamma-ray Coordinates Network (GCN) packet-based notifications. We describe a system for automatically triggering MWA observations of such events, based on Virtual Observatory Event standard triggers, which is more flexible, capable, and accurate than previous systems. The system can respond to external multi-messenger triggers, which makes it well-suited to searching for prompt coherent radio emission from GRBs, the study of FRBs and gravitational waves, single pulse studies of pulsars, and rapid follow-up of high-energy superflares from flare stars. The new triggering system has the capability to trigger observations in both the regular correlator mode (limited to ≥0.5 s integrations) and using the Voltage Capture System (VCS, 0.1 ms integration) of the MWA and represents a new mode of operation for the MWA. The upgraded standard correlator triggering capability has been in use since MWA observing semester 2018B (July–Dec 2018), and the VCS and buffered mode triggers will become available for observing in a future semester.
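The "1–10 min" delay figure quoted above can be checked with the standard cold-plasma dispersion formula, $\Delta t \approx 4.15\,\mathrm{ms} \times \mathrm{DM} \times (\nu_{\mathrm{lo}}^{-2} - \nu_{\mathrm{hi}}^{-2})$ with frequencies in GHz and DM in pc cm$^{-3}$. This is a sketch using that textbook formula, not code from the paper:

```python
# Standard cold-plasma dispersion delay:
#   dt ~= 4.15 ms * DM * (nu_lo^-2 - nu_hi^-2),
# frequencies in GHz, dispersion measure DM in pc cm^-3.

def dispersion_delay_s(dm, nu_lo_ghz, nu_hi_ghz):
    """Delay (s) of a pulse at nu_lo relative to the same pulse at nu_hi."""
    return 4.15e-3 * dm * (nu_lo_ghz**-2 - nu_hi_ghz**-2)

for dm in (100, 2000):
    dt = dispersion_delay_s(dm, 0.1, 1.0)  # 100 MHz vs 1 GHz
    print(f"DM = {dm:4d} pc cm^-3 -> {dt:6.1f} s ({dt / 60:4.1f} min)")
```

For DM = 100 this gives about 41 s and for DM = 2000 about 822 s (roughly 0.7 to 14 min), consistent with the order-of-magnitude claim in the abstract.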
Interplanetary Scintillation with the Murchison Widefield Array V: An all-sky survey of compact sources using a modern low-frequency radio telescope
J. S. Morgan, J.-P. Macquart, R. Chhetri, R. D. Ekers, S. J. Tingay, E. M. Sadler
Published online by Cambridge University Press: 28 January 2019, e002
We describe the parameters of a low-frequency all-sky survey of compact radio sources using Interplanetary Scintillation, undertaken with the Murchison Widefield Array. While this survey gives important complementary information to low-resolution surveys, providing information on the sub-arcsecond structure of every source, a survey of this kind has not been attempted in the era of low-frequency imaging arrays such as the Murchison Widefield Array and LOw Frequency Array. Here we set out the capabilities of such a survey, describing the limitations imposed by the heliocentric observing geometry and by the instrument itself. We demonstrate the potential for Interplanetary Scintillation measurements at any point on the celestial sphere and we show that at 160 MHz, reasonable results can be obtained within 30° of the ecliptic (2π sr: half the sky). We also suggest some observational strategies and describe the first such survey, the Murchison Widefield Array Phase I Interplanetary Scintillation survey. Finally we analyse the potential of the recently upgraded Murchison Widefield Array and discuss the potential of the Square Kilometre Array-low to use Interplanetary Scintillation to probe sub-mJy flux density levels at sub-arcsecond angular resolution.
In situ measurement of MWA primary beam variation using ORBCOMM
J. L. B. Line, B. McKinley, J. Rasti, M. Bhardwaj, R. B. Wayth, R. L. Webster, D. Ung, D. Emrich, L. Horsley, A. Beardsley, B. Crosse, T. M. O. Franzen, B. M. Gaensler, M. Johnston-Hollitt, D. L. Kaplan, D. Kenney, M. F. Morales, D. Pallot, K. Steele, S. J. Tingay, C. M. Trott, M. Walker, A. Williams, C. Wu
We provide the first in situ measurements of antenna element beam shapes of the Murchison Widefield Array. Most current processing pipelines use an assumed beam shape, which can cause absolute and relative flux density errors and polarisation 'leakage'. Understanding the primary beam is then of paramount importance, especially for sensitive experiments such as a measurement of the 21-cm line from the epoch of reionisation, where the calibration requirements are so extreme that tile to tile beam variations may affect our ability to make a detection. Measuring the primary beam shape from visibilities is challenging, as multiple instrumental, atmospheric, and astrophysical factors contribute to uncertainties in the data. Building on the methods of Neben et al. [Radio Sci., 50, 614], we tap directly into the receiving elements of the telescope before any digitisation or correlation of the signal. Using ORBCOMM satellite passes we are able to produce all-sky maps for four separate tiles in the XX polarisation. We find good agreement with the beam model of Sokolowski et al. [2017, PASA, 34, e062], and clearly observe the effects of a missing dipole from a tile in one of our beam maps. We end by motivating and outlining additional on-site experiments.
VLBI Observations of Southern Gamma-Ray Sources. III
P. G. Edwards, R. Ojha, R. Dodson, J. E. J. Lovell, J. E. Reynolds, A. K. Tzioumis, J. Quick, G. Nicolson, S. J. Tingay
We report the results of Long Baseline Array observations made in 2001 of ten southern sources proposed by Mattox et al. as counterparts to EGRET >100 MeV gamma-ray sources. Source structures are compared with published data where available and possible superluminal motions identified in several cases. The associations are examined in the light of Fermi observations, indicating that the confirmed counterparts tend to have radio properties consistent with other identifications, including flat radio spectral index, high brightness temperature, greater radio variability, and higher core dominance.
Follow Up of GW170817 and Its Electromagnetic Counterpart by Australian-Led Observing Programmes
I. Andreoni, K. Ackley, J. Cooke, A. Acharyya, J. R. Allison, G. E. Anderson, M. C. B. Ashley, D. Baade, M. Bailes, K. Bannister, A. Beardsley, M. S. Bessell, F. Bian, P. A. Bland, M. Boer, T. Booler, A. Brandeker, I. S. Brown, D. A. H. Buckley, S.-W. Chang, D. M. Coward, S. Crawford, H. Crisp, B. Crosse, A. Cucchiara, M. Cupák, J. S. de Gois, A. Deller, H. A. R. Devillepoix, D. Dobie, E. Elmer, D. Emrich, W. Farah, T. J. Farrell, T. Franzen, B. M. Gaensler, D. K. Galloway, B. Gendre, T. Giblin, A. Goobar, J. Green, P. J. Hancock, B. A. D. Hartig, E. J. Howell, L. Horsley, A. Hotan, R. M. Howie, L. Hu, Y. Hu, C. W. James, S. Johnston, M. Johnston-Hollitt, D. L. Kaplan, M. Kasliwal, E. F. Keane, D. Kenney, A. Klotz, R. Lau, R. Laugier, E. Lenc, X. Li, E. Liang, C. Lidman, L. C. Luvaul, C. Lynch, B. Ma, D. Macpherson, J. Mao, D. E. McClelland, C. McCully, A. Möller, M. F. Morales, D. Morris, T. Murphy, K. Noysena, C. A. Onken, N. B. Orange, S. Osłowski, D. Pallot, J. Paxman, S. B. Potter, T. Pritchard, W. Raja, R. Ridden-Harper, E. Romero-Colmenero, E. M. Sadler, E. K. Sansom, R. A. Scalzo, B. P. Schmidt, S. M. Scott, N. Seghouani, Z. Shang, R. M. Shannon, L. Shao, M. M. Shara, R. Sharp, M. Sokolowski, J. Sollerman, J. Staff, K. Steele, T. Sun, N. B. Suntzeff, C. Tao, S. Tingay, M. C. Towner, P. Thierry, C. Trott, B. E. Tucker, P. Väisänen, V. Venkatraman Krishnan, M. Walker, L. Wang, X. Wang, R. Wayth, M. Whiting, A. Williams, T. Williams, C. Wolf, C. Wu, X. Wu, J. Yang, X. Yuan, H. Zhang, J. Zhou, H. Zovaro
The discovery of the first electromagnetic counterpart to a gravitational wave signal has generated follow-up observations by over 50 facilities world-wide, ushering in the new era of multi-messenger astronomy. In this paper, we present follow-up observations of the gravitational wave event GW170817 and its electromagnetic counterpart SSS17a/DLT17ck (IAU label AT2017gfo) by 14 Australian telescopes and partner observatories as part of Australian-based and Australian-led research programs. We report early- to late-time multi-wavelength observations, including optical imaging and spectroscopy, mid-infrared imaging, radio imaging, and searches for fast radio bursts. Our optical spectra reveal that the transient source emission cooled from approximately 6 400 K to 2 100 K over a 7-d period and produced no significant optical emission lines. The spectral profiles, cooling rate, and photometric light curves are consistent with the expected outburst and subsequent processes of a binary neutron star merger. Star formation in the host galaxy probably ceased at least a Gyr ago, although there is evidence for a galaxy merger. Binary pulsars with short (100 Myr) decay times are therefore unlikely progenitors, but pulsars like PSR B1534+12 with its 2.7 Gyr coalescence time could produce such a merger. The displacement (~2.2 kpc) of the binary star system from the centre of the main galaxy is not unusual for stars in the host galaxy or stars originating in the merging galaxy, and therefore any constraints on the kick velocity imparted to the progenitor are poor.
Eur. Phys. J. C (2003) 30: 111-116
https://doi.org/10.1140/epjc/s2003-01255-8
Strange particle production at RHIC in the dual parton model
A. Capella1, C.A. Salgado2 and D. Sousa3
1 Laboratoire de Physique Théorique, Unité Mixte de Recherche UMR no. 8627 - CNRS, Université de Paris XI, Bâtiment 210, 91405, Orsay Cedex, France
2 Theory Division, CERN, 1211, Geneva 23, Switzerland
3 Unité Mixte de Recherche UMR no. 8627 - CNRS, ECT, Villa Tambosi, Strada delle Tabarelle 286, 38050, Villazzano, Trento, Italy
We compute the mid-rapidity densities of pions, kaons, baryons and antibaryons in Au-Au collisions at $\sqrt{s} = 130 $ GeV in the dual parton model supplemented with final state interaction (comovers interaction). The ratios $B/n_{\mathrm {{part}}}$ ($\overline{B}/n_{\mathrm {{part}}}$) increase between peripheral ($n_{\mathrm {{part}}} = 18$) and central ($n_{\mathrm {{part}}} = 350)$ collisions by a factor 2.4 (2.0) for the $\Lambda$, 4.8 (4.1) for the $\Xi$ and 16.5 (13.5) for the $\Omega$. The ratio $K^-/\pi^-$ increases by a factor 1.3 in the same centrality range. A comparison with the available data is presented.
© Springer-Verlag, 2003
How complete is our understanding of lift?
I'm currently studying for my PPL and one of the accepted textbooks contains the following disclaimer at the end of the Principles of Flight section on lift:
It is important to note that the foregoing explanation of lift, and its reliance on Bernoulli's theorem, is very much the 'classical' theory of lift production and the one on which the exam questions are usually based. There are differences of opinion amongst scientists on the subject....[snip]
The same book previously also describes the venturi theory which NASA discredits.
Additionally, one of my previous CFI's told me that during a previous successful job interview he had been asked to explain lift and had merely responded with "Which theory would you like me to cover?"
On the contrary - we must have an excellent understanding of some components because of the way we're able to design and build such stable (and unstable when we want) aircraft. Plus, I see some absolutely incredible mathematics described on this website which seeks to accurately answer complex questions.
So, my question isn't how lift is generated - it's how complete our understanding is. Which bits are still in dispute, and which bits are fully accepted?
aerodynamics lift
have you had a look at aviation.stackexchange.com/q/8281/1467 and aviation.stackexchange.com/q/466/1467 ? – Federico♦ Oct 6 '15 at 11:18
@Ethan. Really? That puts you into a small set of people who think they know. We have theories that can be modelled and tested and which fit all known measured results. But how lift is generated is an incomplete set of theories and the best brains in the business still argue about it. Read some of the linked answers, especially on Physics SE, to understand this. We can design, model and test wings and we know that they will perform in a given manner because the results match the model. Our understanding of how the results match the model is, at best, incomplete. – Simon Oct 6 '15 at 12:40
@Simon, it is not lack of understanding. Physicists understand fluid dynamics pretty well and lift is just a consequence of it. It is lack of a simple explanation, and that is simply because lift is a surprisingly complex phenomenon. – Jan Hudec Oct 6 '15 at 13:12
Note that in engineering contexts, it's perfectly acceptable to have two models describe the same phenomena. This is not a lack of understanding, but simply two ways of looking at the same thing. For example, compare calculating the impact velocity of a dropped mass by looking at an energy balance from potential to kinetic energy, or looking at the acceleration over a certain distance. In the end, it's the same equation. – Sanchises Oct 6 '15 at 13:21
@ryan I believe the real question being answered here is "How complete is the understanding of lift among Aviation.SE members". :) – Sanchises Oct 8 '15 at 8:09
Short answer: Yes, our understanding of lift is complete, but solving the equations for some practical cases needs more resources than is technically sensible.
Lift is a matter of definition
First of all, lift is only one part of the aerodynamic forces. It is the component normal to the direction of airflow. Since the aircraft will distort the local flow around itself, this direction is taken ideally at an infinite distance where the air is undisturbed.
The other component is, of course, drag. It is defined as the part of the aerodynamic forces parallel to the direction of airflow.
The aerodynamic forces are the sum of all local pressures, which act orthogonally on the local surface of the airplane, and the shear forces, which act parallel to the local surface.
When aerodynamics was first researched, electric fields were new and exciting, and the same equations which help to calculate electromagnetic forces could be used to calculate aerodynamic forces. Therefore, abstract concepts like sources or sinks were used to explain aerodynamics. This did not make it any easier to understand, and many authors tried to find simpler explanations. Unfortunately, they were mostly too simple and not correct, but the next generation of authors would mostly copy what had been written before, so the wrong concepts were still bandied about.
To get to the bottom of it, it might help to look at lift at a molecular level:
Every air molecule is in a dynamic equilibrium between inertial, pressure and viscous effects:
Inertial means that the mass of the particle wants to travel on as before and needs force to be convinced otherwise.
Pressure means that air particles oscillate all the time and bounce into other air particles. The more bouncing, the more force they exert on their surroundings.
Viscosity means that air molecules, because of this oscillation, tend to assume the speed and direction of their neighbors.
All three contributions are well understood, and with the Navier-Stokes equations they can be completely mathematically expressed. What is still improving is our ability to solve these equations, and in turbulent flow the characteristic length required to capture all effects is so small that it is practically impossible to solve those equations fully with finite time and resources.
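For reference, one standard form of these equations (the incompressible Navier-Stokes momentum balance, written for constant density $\rho$ and dynamic viscosity $\mu$) shows the three contributions term by term:

```latex
\underbrace{\rho\left(\frac{\partial\mathbf{u}}{\partial t}
    + (\mathbf{u}\cdot\nabla)\mathbf{u}\right)}_{\text{inertia}}
  \;=\;
  \underbrace{-\nabla p}_{\text{pressure}}
  \;+\;
  \underbrace{\mu\,\nabla^{2}\mathbf{u}}_{\text{viscosity}},
  \qquad
  \nabla\cdot\mathbf{u} = 0 .
```

The turbulence problem mentioned above is precisely that resolving the nonlinear inertia term down to the smallest relevant scales is computationally prohibitive for a full aircraft.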
Flow over the upper side of the wing
Now to the airflow: When a wing approaches at subsonic speed, the low pressure area over its upper surface will suck in air ahead of it. See it this way: Above and downstream of a packet of air we have less bouncing of molecules (= less pressure), and now the undiminished bouncing of the air below and upstream of that packet will push its air molecules upwards and towards that wing. The packet of air will rise and accelerate towards the wing and be sucked into that low pressure area. Due to the acceleration, the packet will be stretched lengthwise and its pressure drops in sync with it picking up speed. Spreading happens in flow direction - the packet is distorted and stretched lengthwise, but contracts in the direction orthogonal to the flow. Once there, it will "see" that the wing below it curves away from its path of travel, and if its path remained unchanged, a vacuum between the wing and our packet of air would form. Reluctantly (because it has mass and, therefore, inertia), the packet will change course and follow the wing's contour. This requires even lower pressure, to make the molecules overcome their inertia and change direction. This fast-flowing, low-pressure air will in turn suck in new air ahead of and below it, will go on to decelerate and regain its old pressure over the rear half of the wing, and will flow off with its new flow direction.
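The "pressure drops in sync with picking up speed" bookkeeping in this description is Bernoulli's relation along a streamline (a standard result for steady, incompressible, inviscid flow, i.e. outside the boundary layer):

```latex
p + \tfrac{1}{2}\rho v^{2} = \text{const.}
\qquad\Longrightarrow\qquad
p_{2} - p_{1} = -\tfrac{1}{2}\rho\left(v_{2}^{2} - v_{1}^{2}\right),
```

so a packet that speeds up from $v_{1}$ to $v_{2} > v_{1}$ must end up at lower pressure. Note this relation describes the exchange between pressure and speed; it is not by itself an explanation of why the flow speeds up.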
Note that lift can only happen if the upper contour of the wing will slope downwards and away from the initial path of the air flowing around the wing's leading edge. This could either be camber or angle of attack - both will have the same effect. Since camber allows for a gradual change of the contour, it is more efficient than angle of attack.
Flow over the lower side of the wing
A packet of air which ends up below the wing will experience less uplift and acceleration, and in the convex part of highly cambered airfoils it will experience a compression. It also has to change its flow path, because the cambered and/or inclined wing will push the air below it downwards, creating more pressure and more bouncing from above for our packet below the wing. When both packets arrive at the trailing edge, they will have picked up some downward speed.
Behind the wing, both packets will continue along their downward path for a while due to inertia and push other air below them down and sideways. Above them, this air, having been pushed sideways before, will now fill the space above our two packets. Macroscopically, this looks like two big vortices. But the air in these vortices cannot act on the wing anymore, so it will not affect drag or lift. See here for more on that effect, including pretty pictures.
Lift can be explained in several, equivalent ways
Following the picture of a pressure field outlined above, lift is the difference of pressure between upper and lower surface of the wing. The molecules will bounce against the wing skin more at the lower side than at the upper side, and the difference is lift.
Or you look at the macroscopic picture: A certain mass of air has been accelerated downwards by the wing, and this required a force to act on that air. This force is what keeps the aircraft up in the air: Lift.
If you look at the wing as a black box and only pay attention to the impulse of the inflowing and outflowing air, the wing will change the impulse by adding a downward component. The reaction force of this impulse change is lift.
Either way, you will arrive at the same result. By the way: Most of the directional change happens in the forward part of the airfoil, not at the trailing edge!
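To make the pressure-integration view concrete, here is a minimal Python sketch; the air density, speed, and the triangular $\Delta C_p$ distribution are invented purely for illustration and are not taken from any real airfoil:

```python
def sectional_lift(delta_cp, rho=1.225, v=50.0, chord=1.0, n=1000):
    """Integrate an assumed pressure-coefficient difference (lower minus
    upper surface) along the chord by the trapezoidal rule.
    Returns lift per unit span: L' = 0.5*rho*v^2 * integral(dCp dx)."""
    q = 0.5 * rho * v**2              # dynamic pressure
    dx = chord / n
    total = 0.0
    for i in range(n):
        x0, x1 = i * dx, (i + 1) * dx
        total += 0.5 * (delta_cp(x0 / chord) + delta_cp(x1 / chord)) * dx
    return q * total

# Illustrative distribution, largest near the leading edge (x/c = 0),
# echoing the note that most of the turning happens up front.
dcp = lambda xc: 2.0 * (1.0 - xc)

print(sectional_lift(dcp))  # ≈ 1531.25 N/m (q times the integral, which is 1 here)
```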
Supersonic flow
When the aircraft moves faster than pressure changes propagate through air, the changes in pressure are no longer smooth, but sudden. The aircraft will push the air molecules aside, producing a compression shock. Behind the shock front pressure, temperature and density are higher than ahead of it, and the increase is proportional to the local change in flow direction. The incremental pressure change $\delta p$ due to the aircraft hitting air with an incremental angle of $\delta\vartheta$, expressed in terms of the undisturbed flow with the index $\infty$, is proportional to the change in the streamlines: $$\delta p = -\frac{\rho_{\infty}\cdot v^2_{\infty}}{\sqrt{Ma^2_{\infty} - 1}}\cdot\delta\vartheta$$
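The linearized formula above is easy to evaluate numerically. In this sketch the flight condition (Mach 2 at roughly 11 km) and the 2-degree deflection are assumed for illustration only:

```python
import math

def delta_p(rho_inf, v_inf, mach_inf, dtheta):
    """Linearized supersonic pressure change for a small flow-deflection
    angle dtheta (radians), per the formula above; valid only for Ma > 1."""
    return -rho_inf * v_inf**2 / math.sqrt(mach_inf**2 - 1.0) * dtheta

# Assumed condition: Ma 2 where rho ~ 0.364 kg/m^3 and a ~ 295 m/s
rho, a = 0.364, 295.0
print(delta_p(rho, 2.0 * a, 2.0, math.radians(2.0)))  # sign per the convention above
```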
Gas pressure on a molecular level is the number and severity of particle collisions. The air molecules experience more collisions on the downstream side of the shock, since air pressure is higher there. The average direction of the additional collisions is indeed orthogonal to the shock, because it is the boundary between blissfully unaware molecules at ambient pressure ahead of the shock and their bruised brethren downstream which have just crossed that boundary. Once a molecule has passed the shock, the collisions are coming again equally from all sides and its speed does not change any more.
If the surface curves away from the local flow direction, the air produces an expansion fan which re-sets the old pressure and density values when the air flows again in its original direction.
Pure supersonic lift is only a matter of the angle of incidence, and any local curvature of the wing will not change overall lift (but increase drag). Now the total aerodynamic force is normal to the wing, and drag will become proportional to the angle of incidence. In hypersonic flow you will get good results with the venerable impact theory first formulated by Isaac Newton.
Separated flow
This happens when the air molecules are no longer able to follow the contour of the aircraft. Instead, you get a chaotic, oscillating flow pattern which is very hard to compute exactly. This is really the only part of aerodynamics which cannot be predicted precisely, even though the effects are well understood. Separated flow will produce lift, too, but less than attached flow. In delta wings, this separation is produced on purpose to create what is called vortex lift.
$\begingroup$ I read somewhere that most of the lift is produced not just at the front, but front of the upper surface. Is that correct? If so, it doesn't seem consistent with your explanation of lift at the molecular level (that it's due to more molecules bouncing on the lower surface than the upper). That would suggest that the lower surface is what produces most of the lift, since that's where the net-positive bouncing happens. $\endgroup$ – yshavit Jun 25 '18 at 6:28
$\begingroup$ @yshavit: Yes, suction is just less pressure on one side. Now it depends what you see as the direct source of lift. You can 1) either vote for suction, or 2) for impulse exchange, or 3) for pressure. All three views are equally defensible - it depends on your point of view. $\endgroup$ – Peter Kämpf Jun 25 '18 at 6:55
$\begingroup$ Hm, okay. The reason I asked is that at the molecular level, "suction" is not a force. The only way an air molecule could suck the wing molecule up is if you had an attractive electrostatic force between them. So if the mental model is pressure, then it's really a case of the wing being pushed up from the bottom, not pulled up from the top. But it sounds like you're saying that's a valid way to think of it? $\endgroup$ – yshavit Jun 25 '18 at 21:39
$\begingroup$ @yshavit: Yes. The wing pulls the air above itself down as much as it pushes the air below itself down, too. Now, on the molecular level, you are right to say that the air above the wing is only pushed down by pressure from above because the wing creates a barrier to pressure from below. But that is suction, only in other words. $\endgroup$ – Peter Kämpf Jun 26 '18 at 13:24
$\begingroup$ Thanks! That's the one bit that's always confused me -- the notion that lift is generated more on the top of the wing than on the bottom. I think it makes sense now. $\endgroup$ – yshavit Jun 26 '18 at 13:39
From this paper:
The principle of equal transit times holds only for a wing with zero lift. [!!]
The air passes over the wing and is bent down. Newton's first law says that them [sic] must be a force on the air to bend it down (the action). Newton's third law says that there must be an equal and opposite force (up) on the wing (the reaction). To generate lift a wing must divert lots of air down.
So how does a thin wing divert so much air? When the air is bent around the top of the wing, it pulls on the air above it accelerating that air downward. Otherwise there would be voids in the air above the wing. Air is pulled from above. This pulling causes the pressure to become lower above the wing. It is the acceleration of the air above the wing in the downward direction that gives lift.
We (those of us reading this) can conclude the following about our (humanity's in general) understanding of lift:
We certainly understand it well enough to design aircraft, and there may be overlap with this knowledge in other areas, such as maybe wind-powered generator design.
Many believe we have a fairly complete understanding of lift.
The second bullet is not at all meant to impugn the excellent (and challenging!) work done through history in fluid dynamics, aeronautical physics, and aeronautical engineering. It is merely to allow for the possibility of future paradigm shifts in our understanding of those topics, even if those shifts do not affect common design practice or practical discussions of lift. A historical example of that last point would be General Relativity as a paradigm shift in our understanding of gravity, while Newtonian gravitational theory was still used for the moon program and is still widely taught and used for situations not requiring extreme precision.
In addition to links in Frederico's comment, see also: https://physics.stackexchange.com/questions/290/what-really-allows-airplanes-to-fly
This NASA page discusses the controversy of "Bernoulli versus Newton" and concludes that both explanations of lift are "correct" and that there is even more to it. The Euler Equations and the Navier-Stokes Equations are mentioned. This page in the same series on NASA's site suggests that lift is fairly well understood by experts, but is poorly explained in the majority of popular sources:
There are many explanations for the generation of lift found in encyclopedias, in basic physics textbooks, and on Web sites. Unfortunately, many of the explanations are misleading and incorrect. Theories on the generation of lift have become a source of great controversy and a topic for heated arguments. To help you understand lift and its origins, a series of pages will describe the various theories and how some of the popular theories fail.
Lift occurs when a moving flow of gas is turned by a solid object. The flow is turned in one direction, and the lift is generated in the opposite direction, according to Newton's Third Law of action and reaction. Because air is a gas and the molecules are free to move about, any solid surface can deflect a flow. For an aircraft wing, both the upper and lower surfaces contribute to the flow turning. Neglecting the upper surface's part in turning the flow leads to an incorrect theory of lift.
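The flow-turning description boils down to a momentum-rate estimate; the mass flow and downwash numbers below are invented purely for illustration:

```python
def lift_from_downwash(mass_flow_rate, delta_v_down):
    """Newton's-third-law view: lift equals the rate of downward
    momentum imparted to the air, L = m_dot * delta_v (toy 1-D form)."""
    return mass_flow_rate * delta_v_down

# e.g. 500 kg/s of air deflected 20 m/s downward (made-up numbers)
print(lift_from_downwash(500.0, 20.0))  # 10000.0 N
```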
Todd Wilcox
$\begingroup$ The edit is quite relevant. No physicist would doubt Navier-Stokes. The problem is that Navier-Stokes is fiendishly difficult and far too general for something as simple as lift. (yes, lift is simple in comparison to Navier-Stokes). And even so, a good physicist knows deep in his heart that Navier-Stokes is still wrong because it assumes fluids are not made up from molecules. The theory breaks down at microscopic scales. $\endgroup$ – MSalters Oct 6 '15 at 13:40
$\begingroup$ @MSalters, no, lift is not simple in comparison to Navier-Stokes. Explaining lift requires both inertia and viscosity and Navier-Stokes are just expression of those (plus the appropriate conservation laws). $\endgroup$ – Jan Hudec Oct 6 '15 at 16:59
$\begingroup$ @MSalters, the fact that Navier-Stokes breaks down at microscopic level does not make it wrong. It merely makes it an approximation. All thermodynamics is like that. $\endgroup$ – Jan Hudec Oct 6 '15 at 17:00
The problem here is that "correlation does not imply causation". Neither Bernouilli's principle nor Newton's laws of motion explain lift. Both of them give valid methods of calculating the lift force from the air flow pattern around the wing, but neither of them explain why the flow pattern is what it is.
Ideas like "equal transit time" at least try to give a reason "why," but experiments which visualize the flow pattern with smoke demonstrate that is just wrong.
The best "one-word explanation" of what causes lift is the viscosity of the air. Viscosity is the reason why there can't be any discontinuities in the overall flow pattern*. In particular, the air velocities on either side of the relatively sharp trailing edge of the wing have to be the same, otherwise the effect of viscosity at that point would propagate upstream through the air (at the speed of sound) and change the global flow pattern.
If there were no viscosity, no wing of any shape would produce any lift, or any drag force.
*Let's limit this discussion to subsonic flows. Introducing shock waves into the airflow makes a "hand-waving" non-mathematical discussion more complicated, but it doesn't invalidate the essential point I'm trying to make.
alephzero
$\begingroup$ Well, you also need inertia, otherwise the flow would not continue downward in the direction of the trailing edge. $\endgroup$ – Jan Hudec Oct 6 '15 at 17:49
$\begingroup$ Do you have a source for the claim that without viscosity, there could be no lift? Superfluid helium has a viscosity of zero, would a sheet slicing through it at an angle not experience any lift? $\endgroup$ – Roman Oct 7 '15 at 12:51
$\begingroup$ I would add that the Euler equations, which assume inviscid fluids, certainly do allow one to approximate the lift generated by a body in flow. This is possible because the effects of viscosity are mostly confined to the infamous boundary layer, whose effects are negligible for an aerofoil in air at modest angles of attack. It is correct that without viscosity you will find zero drag, and it is also impossible for flow to become detached from a surface ("stall" in aircraft). $\endgroup$ – sigma Oct 7 '15 at 15:45
$\begingroup$ @romkyns There is no net force on a body in an inviscid fluid, unless the flow contains some circulation (i.e. a vortex) around the body. But with no viscosity, there is no "simple" way to create the vortex. You can get a rough estimate of the lift on a wing by making two assumptions: (1) the air is inviscid, and (2) there is a vortex of the required strength to make the flow match up on the top and bottom surfaces at the trailing edge. Assumption 2 is the same as assuming the flow is faster over the top than the bottom. See en.wikipedia.org/wiki/Kutta%E2%80%93Joukowski_theorem $\endgroup$ – alephzero Oct 7 '15 at 21:03
$\begingroup$ Even flow across shocks is continuous (i.e., shockwaves have finite thickness); they're just so thin that their thickness is often negligible when calculating larger-scale flow fields. $\endgroup$ – Ghillie Dhu Oct 8 '15 at 0:13
How complete is our understanding?
Complete enough to design and fly a number of complex aircraft of varying sizes, shapes and applications.
Complete enough to extract power using it.
On a basic level, lift is the force generated as a solid body 'turns' fluid while satisfying the conservation laws. The issue is not that we don't know what lift is, but that there is no consensus about how to explain it. Most of the 'theories' of lift are just models trying to explain the same thing based on the points of view of the people involved.
The way the pilot views lift is different from that of an engineer or a mathematician. For the pilot, lift is a force that keeps the aircraft in the air (and is proportional to $\rho V^{2} S$ and angle of attack, at least until stall), while a mathematician can say that lift 'follows naturally' by solving the Navier-Stokes equations (whether they can realistically be solved or not is another matter) for some conditions. Of course, this is of no practical use to either the engineer or the pilot. Both can claim (rightfully) that they are correct, while a physicist can object that NS assumes that the fluid is a continuum, while that is not the case in reality.
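The pilot's proportionality to $\rho V^{2} S$ is usually written with a lift coefficient $C_L$; a minimal sketch, where the density, speed, wing area, and $C_L$ values are assumptions for illustration:

```python
def lift(rho, v, s, cl):
    """Standard lift equation: L = 0.5 * rho * V^2 * S * C_L."""
    return 0.5 * rho * v**2 * s * cl

# Sea-level density, 60 m/s, 16 m^2 wing, C_L = 1.0 (all assumed)
print(lift(1.225, 60.0, 16.0, 1.0))  # ≈ 35280 N
```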
This is the reason for so many theories of lift. As the fluid flow is extremely complex, some simplification is done in each theory (like omitting viscosity in the Euler or potential flow theory). Based on the simplification, the theory is either useful in some (or most) of the situations or is outright wrong.
Which bits are still in dispute, and which bits are fully accepted?
Almost all 'theories' of lift accept that lift is a force, along with its requirements. As far as engineering goes, the issue is which bits are necessary for the problem at hand.
For example, the potential flow theory can predict the lift as long as we are not approaching stall. After that, all bets are off. There is no point in arguing about a result from a theory after using it in a situation for which it was not designed in the first place.
This is the reason for arguments about lift. Some theories are developed to describe a particular situation (for example inviscid flow) and then applied in general, which obviously leads to confusion and dispute.
As far as engineering is concerned, we have enough understanding of lift to create the flying machines we need, though not as much as to explain everything that happens with accuracy.
aeroalias
Scientifically speaking, lift is perfectly understood. Lift is merely the vertical component of force generated by a body moving through a fluid. And we have known perfectly well how to calculate forces on a body moving through a fluid since the Navier-Stokes equations were published in 1822. That is to say, we know the physics of it, and it has to do with the viscosity of fluids (in the case of aircraft, air).
But using the Navier-Stokes equations to design a wing is like trying to use quantum electrodynamics (QED) to cook the perfect steak. Since gravity isn't involved in the perfection of a steak, all you need in principle to formulate a perfect steak is QED.
The Navier-Stokes equations calculate forces on a single point on the wing. Therefore you have to repeat the calculations over the whole wing to calculate lift. Over the last 190+ years mathematicians and engineers have formulated simpler algorithms to calculate the result of the Navier-Stokes equations and over the last 30 or so years we've used computers to calculate lift. However, you can see how this doesn't tell you the ideal shape to generate the aerodynamic characteristics you want. You can also see how this doesn't explain "lift" in terms a human can understand. It's all just large arrays of numbers.
Is it possible to explain lift in terms a human can understand? Maybe. We've certainly given names to how certain shapes generate certain output when subjected to the Navier-Stokes equations. Names such as "Coanda effect" and "Bernoulli Principle" etc. In the end, nature/physics doesn't care what name we give to our interpretation of the result of the Navier-Stokes equations - if calculating the equations result in a vertical force vector upwards you have lift. Maybe, like quantum physics, we'll never get a complete intuitive understanding of what lift is. But we certainly have the complete theory to explain it.
Additional note: Apart from not being helpful in formulating a theory of wing design, the Navier-Stokes equations are also problematic because they are computationally expensive. For example, it's often not practical to use the Navier-Stokes equations to simulate turbulence (even though it's possible in theory). So we often take shortcuts for certain forms of simulations using other, simpler but less perfect equations.
slebetman
$\begingroup$ +1 for adressing the difference between "having a complete theory" on one side and "having an intuitive understanding" on the other. $\endgroup$ – Sanchises Oct 8 '15 at 8:15
TL;DR: we can very precisely model aerodynamic forces at the micro level; we can reasonably predict behavior at the macro level by aggregating micro-level models (CFD). We don't have a universally-applicable story for why the macro level behavior is what it is.
Fuller explanation:
At the risk of being a bit pedantic, I'm going to back up a couple of steps of abstraction in order to provide a more-complete picture.
The overall aerodynamic force on a body is decomposed into vectors normal to the direction of travel and parallel to the direction of travel, which are labeled 'lift' and 'drag' respectively; they are not distinct forces in & of themselves. Aerodynamic force itself is often decomposed at a different scale into pressure & friction; for the most part, friction only contributes to the 'drag' component while pressure contributes to both the 'lift' & 'drag' components.
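The decomposition described here is simply a projection of the total aerodynamic force onto the flow direction and its normal; a small 2-D sketch (vectors as tuples, with illustrative numbers):

```python
import math

def decompose(force, flow_dir):
    """Split a 2-D aerodynamic force into drag (component parallel to
    the flow) and lift (component normal to it)."""
    fx, fy = force
    ux, uy = flow_dir
    norm = math.hypot(ux, uy)
    ux, uy = ux / norm, uy / norm
    drag = fx * ux + fy * uy        # projection onto the flow direction
    lift = -fx * uy + fy * ux       # projection onto the normal
    return lift, drag

# Flow along +x: a (100, 5000) N force is 5000 N lift, 100 N drag
print(decompose((100.0, 5000.0), (1.0, 0.0)))  # (5000.0, 100.0)
```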
Trying to tell a stylized story about why the integrated pressure & friction across the entire body result in a particular net force is challenging at best, since it is affected by the idiosyncrasies of each body; various models (such as Venturi, downwash, & circulation) really just provide designers & analysts with rough rules of thumb within particular flight regimes.
This last point is more important than it appears. As soon as you enter transonic flight (a mix of subsonic & supersonic flow at the surface of the body), drag increases precipitously (standing shocks creating adverse pressure gradients). Passing through to fully-supersonic flight you find yet another set of behaviors (because the leading shock radically alters the pressure distribution on the body). Don't even get me started on hypersonic flow (where the temperature change across the shocks is enough to decompose the N2 & O2 from the air itself).
Edit Peter Kämpf's answer covered most of the same topics as mine, with pictures, so I'll just add this for completeness:
Ghillie Dhu
Lift is generated because air molecules are bouncing into and rebounding off of the airfoil, on both top and bottom surfaces. It is the difference in the amount of momentum transferred in these collisions that creates lift. It is, (obviously), only the velocity of the air molecules that is normal (perpendicular) to the airfoil, that produces lift.
The Bernoulli principle is true, because the TOTAL average momentum of any air molecule in incompressible (subsonic) flow is a constant. Therefore, if the velocity of the air parallel with the airfoil increases, the normal component of the velocity must decrease to keep the total constant.
So, if the air is moving faster, the normal component must be slower, and its pressure (against the airfoil) must be lower.
So, the longer-distance-to-travel argument is only bogus if you try to assume that the longer distance can only be created by an asymmetrical airfoil. Other things can change the travel distance (and resultant velocity) of the air across the airfoil as well. If a symmetrical airfoil is inclined to the relative wind, then as the air flows across the airfoil on the side where the airfoil bends away from the flow, the air must travel a longer distance (to fill in the gap created by the inclination) than air flowing across the surface on the other side, where the surface is inclined into the relative wind, and must either compress (supersonic flow) or move away from (change direction away from) the airfoil.
This is because in subsonic (incompressible) flow, the air cannot make an instantaneous change in direction when it gets to the leading edge of the airfoil. If the angle of attack was 10 degrees, the air does not make an instantaneous 10 degree change in direction. From the point of the leading edge away from the airfoil, the change in direction, and the resultant pressure, gradually changes as you move further away. The result is that the flow of the air is following a curved path, and travelling a longer distance, on this side of the airfoil, than it is on the other side, even for a symmetrical airfoil.
Charles Bretana
The principles of aerodynamics and fluid dynamics are what you would call "well understood."
The ambiguity is around what so-called "lift" is, which can be a nebulous concept. For example, if you drop a piece of paper it will drift slowly to the ground, essentially a form of gliding; this same air resistance is the basic force keeping a plane aloft. Would you consider this "lift"? Once you get into these arguments about semantics, things get vague.
Just as one example of the craziness, the FAA test, the same one you are taking, requires you to know the "four forces of flight" in which the so-called "lift" is the force that keeps the aircraft aloft. The only problem is that you can compute lift by equations that are in every book on aerodynamics and if you actually do this (like I did) you will find that the force generated is nowhere near enough to keep a plane in the sky. If "lift" were the force keeping a plane up, it would fall like a rock, so the FAA guidelines are simply completely wrong. It's just a huge semantic hairball that is not going to go away anytime soon.
The worst part of it is that EVERY pilot (or wannabe pilot) I have ever known thinks they know exactly what "lift" is and, even worse, their beliefs generally fall into one of 5 or 6 different categories with contradictory principles. This leads to huge arguments whenever the subject comes up. After 15 years of this, I just try to stay out of it, other than to tell the beginners not to make the same mistake (like I am telling you now).
$\begingroup$ We use definitions to make clear what "lift" and "drag" are. Is's actually quite simple: Lift is the force orthogonal to the direction of flow, and drag is the force parallel to it. What brakes the fall of the piece of paper is drag, not lift. $\endgroup$ – Peter Kämpf Oct 6 '15 at 21:45
$\begingroup$ Yeah, yeah, I have heard it all before. I have had these conversations with aeronautical engineers from MIT. The whole lift concept is completely screwed up. I am sure you have your opinion what lift is, but the TEXTBOOK the OP mentioned says there are differences of opinion, a book written by professional experts, so don't start preaching like you know the truth. The reality is that it is an ambiguously defined term. $\endgroup$ – Tyler Durden Oct 7 '15 at 2:39
$\begingroup$ One reason that it might seem a "screwed up" or "vague" concept is because the notion of describing "lift" as a single number is more or less useless for designing aircraft, though it is quite useful for the very simplistic explanations that are all you need to be able to fly them safely. All a pilot really needs to know about lift is, "if the plane does this, then move the controls like that to correct the situation". Similarly, a car driver doesn't need to know anything about tire design and friction to correct a skid by steering into it and not braking heavily. $\endgroup$ – alephzero Oct 8 '15 at 15:34
Lots of talk above about "force," with little to no explanation of what force is, or its genesis. So I might as well take a stab at it, and explain it the way it was explained to me.
Force is found in the dynamic relationship between the atom and the electron. Current understanding of electrons inside atoms (besides the atomic number) is that the electron exists everywhere at the same time inside the atom like a dense thick fog. It's not a lone agent circling in orbit around the nucleus; it's like a huge school of fish that react and move as though they were of a single mind.
The electron is so small that nothing is solid to it. Particles can be mind numbingly small, and in the time you just read this sentence 100 trillion neutrinos just passed through your head, and into the earth, and out the other side where they didn't even touch another particle. That's how small they are. Electrons are small too.
Some electrons dwell inside atoms. If the electron is the size of a child's marble, the atom is the size of a hot air balloon. And yet the electron dwells everywhere inside that space all at the same time. (I'm not making this s*** up, it's true.)
I'm almost there... the force thing.
When the vacuum is on top of the wing, the electrons below the wing can see the vacuum above. The solid metal wing of the plane is not solid to the electron; it's more like a chain link fence, and the electron can see through the fence just like you and I can see through a chain link fence. When the "flock" of electrons sees the vacuum through the porous (to them, at least) skin of the wing, they "load shift" inside the atom and move en masse to fill the void, and when they get to the metal skin, they're stopped, because the atom is too big to move into the chain links barrier. When the atom is dragged into the skin of the wing, energy is converted from potential to kinetic, because that's part of the recipe. Pressure into surface space is where energy is converted.
The pressure comes from the dynamic internal struggle between the flock of electrons and their connection to the atom. As the electrons keep pressing from the inside belly of the atom, the body of the atom is pressed into the metal skin of the wing. It's too big to go in. This creates a continuous pressure wave, because those darn electrons inside keep pressing forward and forward and will not ever stop - ever.
The amount of energy being converted is calculated from two things: the physical dimension of the vacuum on top of the wing, and the pressure differential of the vacuum relative to atmospheric. 1 PSI less than 14.7 PSI over 163,440 square inches of a Boeing 737 wing is enough to lift that 150,000 pound plane, but it can't do so until the airflow over the wing is above 150 MPH. As the plane goes faster and faster the pressure drops, the electrons tug harder and harder on the poor atom, and the energy being converted is calculated by multiplying the pressure differential by the available surface area.
We have lift - and hopefully we all have a good time in Hawaii...
Murray West
$\begingroup$ How do the planes with composite wings fly then? From a BBC.com new article "today's latest planes like Boeing's 787 Dreamliner and Airbus's A350 rely on lightweight carbon fibre composites - woven mats of carbon which are embedded in plastic." $\endgroup$ – CrossRoads Jan 1 '19 at 16:32
$\begingroup$ CrossRoads - Metal or composite, the electron dragging the too-big atom into the surface space of the wing is where the energy conversion takes place. $\endgroup$ – Murray West Jan 1 '19 at 18:38
$\begingroup$ Utter nonsense. $\endgroup$ – bogl Apr 5 '19 at 7:02
Post deadline: 10th February 2020
Upload deadline: 11th February 2020 11:59:59 PM (local time in Czech Republic)
(3 points)1. tchibonaut
Consider an astronaut of mass $M$ remaining still (with respect to a space station) in a zero-g state, holding a heavy tool of mass $m$. The distance between the astronaut and the wall of the space station is $l$. Suddenly, he decides to throw the tool against the wall. Find his distance from the wall when the tool hits it.
astrophysicsmechanics of a point mass
Karel wanted to set this name for this problem.
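A hedged numerical sketch of the usual momentum-conservation argument (the specific values are made up, and this is not an official solution): the tool covers $l$ at speed $v$ while the astronaut recoils at $(m/M)v$, so the separation grows to $l(1 + m/M)$.

```python
def distance_when_tool_hits(M, m, l):
    """Momentum conservation: M*V = m*v, so while the tool covers l
    the astronaut drifts an extra (m/M)*l away from the wall."""
    return l * (1.0 + m / M)

print(distance_when_tool_hits(M=100.0, m=5.0, l=10.0))  # ≈ 10.5
```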
(3 points)2. Mach number
Planes at high flight levels are controlled using the Mach number. This unit describes velocity as a multiple of the speed of sound in the given environment. However, the speed of sound changes with height. What is the difference in the speed of a plane, flying at Mach number $0.85$, at two different flight levels FL 250 ($7\,600\ \mathrm{m}$) and FL 430 ($13\,100\ \mathrm{m}$)? At which flight level is the speed higher and by how much (in $\mathrm{km/h}$)? The speed of sound is given by $c = \left(331.57 + 0.607\,t\right)\ \mathrm{m\,s^{-1}}$, where $t$ is the temperature in degrees Celsius. Assume a standard atmosphere, where the temperature decreases with height from $15\ \mathrm{^\circ C}$ by $0.65\ \mathrm{^\circ C}$ per $100\ \mathrm{m}$ (for heights between $0$ and $11\ \mathrm{km}$) down to $-56.5\ \mathrm{^\circ C}$, and then remains constant up to $20\ \mathrm{km}$ above mean sea level.
thermodynamics, mechanics of a point mass
Karel was learning Air Traffic Control.
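A quick numerical check, implementing the temperature profile exactly as stated in the problem (a sketch, not the intended pen-and-paper solution):

```python
# Speed at the two flight levels from the stated standard atmosphere.

def temperature_c(h_m):
    """Temperature in deg C: 15 - 0.65 per 100 m up to 11 km, then -56.5."""
    return max(15.0 - 0.0065 * h_m, -56.5)

def speed_kph(mach, h_m):
    c = 331.57 + 0.607 * temperature_c(h_m)  # speed of sound in m/s
    return mach * c * 3.6                    # m/s -> kph

v_fl250 = speed_kph(0.85, 7600)
v_fl430 = speed_kph(0.85, 13100)
print(round(v_fl250, 1), round(v_fl430, 1))  # FL 250 is the faster one
print(round(v_fl250 - v_fl430, 1))           # difference of about 41 kph
```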
(5 points) 3. uuu-pipe
What period of small oscillations will water in a glass container (shown in the picture) have? The dimensions of the container and the equilibrium position of the water are shown. Assume room temperature and standard pressure, and that water is perfectly incompressible.
hydromechanics, oscillations
Karel was thinking about U-pipes again.
(8 points) 4. optical FYKOS bird
The FYKOS bird found an optical bench at the Faculty of Physics. The bench allows him to place different tools along an optical axis. He started to play with it and gradually placed onto it: a point source of light, a first lens, a second lens and a screen, with the same spacing between neighbouring tools (so the distance between the screen and the light source is three times the distance between any two neighbouring tools). A sharp image of the source was created on the screen. Then he dipped the whole system into an unknown liquid, which he found in a strange container. To his amazement, the image on the screen stayed sharp. Find the refractive index of the given liquid, which is certainly different from the refractive index of air. You can assume that the refractive index of air is unity. One of the lenses has a focal length ten times bigger than the other, and both are thin, manufactured from a material with refractive index $2$.
Matej likes to play with strangers' things.
(9 points) 5. a shortcut across time
Jachym is located in a two dimensional Cartesian system at a point $J = (-2a, 0)$. As fast as possible, he wants to get to a point $T = (2a, 0)$, which is located (luckily) in the same system. Jachym moves exclusively with velocity $v$. This is not so easy, because there is a moving strip in the shape of a line passing through points $(-3a, 0)$ and $(0, a)$. On the moving strip, Jachym is moving with total velocity $kv$. For what minimum $k \ge 1$ is it profitable for Jachym to get on the moving strip?
Jachym, from life experience.
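A brute-force numerical exploration of the problem (not a solution, and resting on assumptions the statement leaves open: that Jachym may enter and leave the strip at arbitrary points and move along it in either direction at speed $kv$). Units are chosen so that $a = v = 1$:

```python
import math

def travel_time(k, n=400):
    """Best travel time J -> T, allowing one walk-ride-walk detour via the strip."""
    J, T = (-2.0, 0.0), (2.0, 0.0)
    # the strip: the line through (-3a, 0) and (0, a), unit direction u
    P0, u = (-3.0, 0.0), (3 / math.sqrt(10), 1 / math.sqrt(10))
    point = lambda s: (P0[0] + s * u[0], P0[1] + s * u[1])
    dist = lambda A, B: math.hypot(A[0] - B[0], A[1] - B[1])
    best = dist(J, T)                              # walking straight takes 4 time units
    ss = [-2 + 10 * i / n for i in range(n + 1)]   # grid of strip parameters
    for s1 in ss:                                  # entry point
        d1 = dist(J, point(s1))
        for s2 in ss:                              # exit point
            best = min(best, d1 + abs(s2 - s1) / k + dist(point(s2), T))
    return best

print(travel_time(1.0))        # 4.0 -- at k = 1 the strip never helps
print(travel_time(2.0) < 4.0)  # True -- at k = 2 a detour already pays off
```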
(10 points) P. climate changes feat. airplanes
Travel by airplane affects the atmosphere not only through its well-known carbon emissions. Discuss how the aircraft industry affects the warming of Earth's atmosphere.
(12 points) E. torsional pendulum
Take a homogeneous rod, at least $40 \mathrm{cm}$ long. Attach two cords of the same material (e.g. thread or fishing line) to it, symmetrically with respect to its centre, and attach the other ends of the cords to some fixed body (e.g. stand, tripod) so that both cords would have the same length and they'd be parallel to each other. Measure the period of torsion oscillations of the rod depending on the distance $d$ of the cords, for multiple lengths of the cords, and find the relationship between these two variables. During torsion oscillations, the rod rotates in a horizontal plane and its centre remains still.
mechanics of rigid bodies, oscillations
Karel wanted to hypnotize participants.
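For comparison with the measured data, the standard small-angle formula for a bifilar (two-cord) suspension predicts $T = (4\pi/d)\sqrt{IL/(mg)}$ with $I = ml^2/12$ for a uniform rod, i.e. $T \propto \sqrt{L}/d$. A sketch with assumed dimensions (this is the textbook prediction, not a substitute for the requested measurement):

```python
import math

def period(d, L, rod_length=0.4, g=9.81):
    """Small-angle bifilar-pendulum period: T = (4*pi/d) * sqrt(I*L/(m*g))."""
    I_over_m = rod_length ** 2 / 12      # uniform rod: I/m = l^2/12
    return (4 * math.pi / d) * math.sqrt(I_over_m * L / g)

T0 = period(d=0.05, L=0.5)                   # assumed: d = 5 cm, L = 50 cm
print(round(T0, 2))                          # a few seconds
print(round(period(d=0.10, L=0.5) / T0, 3))  # 0.5 -- doubling d halves T
print(round(period(d=0.05, L=2.0) / T0, 3))  # 2.0 -- T grows as sqrt(L)
```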
(10 points) S.
electric current, electric field
Sketching the graph of $Y=\sin(3x)+\sin(x)$
How do I sketch the graph of $Y = \sin(3x) + \sin(x)$?
I think the amplitude is $1$, but I am not sure. I found the period to be $2\pi$. My problem is that I don't know whether this graph will be similar to the graph of $Y = \sin(x)$. I want to know what the graph will look like.
trigonometry graphing-functions
Ashalley Samuel
$\begingroup$ You may find it useful to note that $\sin(A)+\sin(B)=2\sin((A+B)/2)\cos((A-B)/2)$. (For instance, to calculate roots) $\endgroup$ – preferred_anon Mar 11 '17 at 15:31
$\begingroup$ One way to do it is to sketch both $y=\sin(3x)$ and $y=\sin(x)$ on the graph, and then sketch the values of the sum of the two functions. $\endgroup$ – projectilemotion Mar 11 '17 at 15:35
$\begingroup$ Also find some interesting special points. Like all of the zeros, max, and mins that you can. Plot those individual points first and then smoothly connect them. $\endgroup$ – Brick Mar 11 '17 at 15:38
$\begingroup$ "I realize the amplitude is $1$": no, think of the value of the function at $x=\pi/6$. $\endgroup$ – Yves Daoust Mar 11 '17 at 15:40
$\begingroup$ You could expand $\sin(3x)$ in terms of $\sin(x)$ as $\sin(2x+x)$, and use the addition formulae to expand this. This will give a cubic equation which you can graph $\endgroup$ – Sumant Mar 11 '17 at 15:40
A simple method to sketch periodic functions like this is:
1) find all zeroes ($x$ for which $Y=0$) in the first period
$$ \sin(3x) + \sin(x) = 0 $$ $$ x = n\pi,\, x=n\pi - \frac{\pi}{2}, \quad n\in\mathbb{Z} $$ So $Y$ crosses the x-axis at $x=0, \frac{\pi}{2}, \pi, \frac{3\pi}{2}, 2\pi$ for $x \in [0,2\pi]$
Edit How to find zeros: $$ \begin{align*} & \sin(x) = -\sin(3x) \\ & x = \pi + 3x + 2\pi n_1, \quad n_1 \in \mathbb{Z} \quad \# \sin^{-1}\text{ of LHS and RHS} \\ & x = 2\pi n_2 - 3x, \quad n_2 \in \mathbb{Z} \\ & \text{and solve for } x \end{align*}$$
2) find all critical points ($x$ for which $Y'=0$ or is undefined) in the first period
$$ \frac{dY}{dx} = 3\cos(3x) + \cos(x) = 0 $$ $$ x = n\pi - \frac{\pi}{2}, 2n\pi \pm 2\tan^{-1}\left(\sqrt{5\pm 2\sqrt{6}} \right), \quad n \in \mathbb{Z}$$
Edit How to find zeros: $$ \begin{align*} &\cos(x) + 3\cos(3x) = 12\cos^3(x) - 8\cos(x) = 4\cos(x)\left(3\cos^2(x)-2\right)=0 \\ &\cos(x)=0 \implies x=\frac{\pi}{2}+\pi n,\ n\in\mathbb{Z} \text{ or} \\ &3\cos^2(x)=2 \\ &\cos(x)=\pm\sqrt{\frac{2}{3}} \\ &\text{and take the inverse cosine of each side:} \\ &x=\pm\cos^{-1}\left(\sqrt{\frac{2}{3}}\right) + 2\pi n \text{ or}\\ &x= \pm\left(\cos^{-1}\left(\sqrt{\frac{2}{3}}\right)-\pi\right) + 2\pi n,\ n \in \mathbb{Z} \\ \end{align*} $$
Where the above inverse cosine is equivalent to the inverse tangent expression I showed in my answer.
so $Y$ has critical points at $(0.615, \,1.54),\, (\frac{\pi}{2}, \,0),\, (2.526,\,1.54),\, (\pi,\,0),\, (3.757,\,-1.54),\, (\frac{3\pi}{2},\, 0),\,\text{ and } (5.668,\, -1.54)$
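The zeros and critical points above are easy to check numerically (a small verification sketch):

```python
import math

f = lambda x: math.sin(3 * x) + math.sin(x)
fprime = lambda x: 3 * math.cos(3 * x) + math.cos(x)

# zeros at multiples of pi/2 on [0, 2*pi]
for x in (0, math.pi / 2, math.pi, 3 * math.pi / 2, 2 * math.pi):
    assert abs(f(x)) < 1e-9

# first interior maximum: x = 2*atan(sqrt(5 - 2*sqrt(6)))
x_max = 2 * math.atan(math.sqrt(5 - 2 * math.sqrt(6)))
assert abs(fprime(x_max)) < 1e-9
print(round(x_max, 3), round(f(x_max), 2))  # 0.615 1.54
```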
3) find concavity by looking at the sign of the second derivative (the sign of $Y''$). This, however, is not strictly necessary for a periodic function, because you can just look at the intervals between the critical points, on which the first derivative changes sign.
and you should get a graph like below, which repeats every $2\pi$
Dando18
$\begingroup$ What's the idea behind getting the zeroes of X? How did X=nπ? $\endgroup$ – Ashalley Samuel Mar 12 '17 at 13:02
$\begingroup$ Ok, I get it now. $2x=2n\pi$, which implies $x=n\pi$ $\endgroup$ – Ashalley Samuel Mar 12 '17 at 13:14
$\begingroup$ @AshalleySamuel $\sin x$ is periodic and crosses the x-axis infinitely many times, so it has infinite zeroes. In the case of your function, it has zeroes at integer multiples of $\pi$ $(\pi, 2\pi, 3\pi, ...)$ and also $\frac{\pi}{2}$ less than integer multiples of $\pi$ $(\frac{\pi}{2}, \frac{3\pi}{2}, \frac{5\pi}{2}, ...)$ $\endgroup$ – Dando18 Mar 12 '17 at 13:15
$\begingroup$ I see. This is how I solved for $x$: $\sin(3x)=\sin(-x) \Rightarrow 3x=-x+2n\pi$ and $3x=(\pi+x)+2n\pi$, so eventually you will have $x=n\pi/2$ and $x=\pi/2+n\pi$. How different is this from yours? $\endgroup$ – Ashalley Samuel Mar 12 '17 at 13:31
$\begingroup$ @AshalleySamuel see my edit $\endgroup$ – Dando18 Mar 12 '17 at 13:51
In this particular case, you can bypass a complete study using a trick: $\sin3x$ is a "compressed" version of $\sin x$, as if the $x$ axis had been shrunk by a factor $3$.
You can plot both functions on the same graph and perform a graphical addition. Then, you should understand the hint below.
Hint: MW.
Yves Daoust
One way to do this is like this:
Sketch both $y=\sin(3x)$ and $y=\sin(x)$. Note that the period of $y=\sin(x)$ is $2\pi$, therefore the period of $y=\sin(3x)$ is $\frac{2\pi}{3}$. Therefore, we can sketch both graphs. Here, the blue curve is $y=\color{blue}{\sin(x)}$ and the red curve is $y=\color{red}{\sin(3x)}$: The black points are a result of graphical addition (Summing up both functions). Since you are adding both functions, you are essentially plotting: $$y=\color{red}{\sin(3x)}+\color{blue}{\sin(x)}$$
I've done the first few for you. Continue this process, and look for a pattern. Connect the black points with a smooth curve.
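The same point-by-point addition can be tabulated numerically (a small sketch of the procedure above):

```python
import math

# each "black point" is the sum of the blue and red values at that x
for i in range(5):
    x = i * math.pi / 4
    s1, s3 = math.sin(x), math.sin(3 * x)
    print(f"x = {i}*pi/4: sin(x) = {s1:+.3f}, sin(3x) = {s3:+.3f}, sum = {s1 + s3:+.3f}")
```

For instance, at $x=\frac{\pi}{2}$ the table gives $1 + (-1) = 0$, so the sum crosses the $x$-axis there.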
projectilemotion
$\begingroup$ Please, with this approach, how will I know when the black dotted curve should cross the $x$-axis and when it will not? $\endgroup$ – Ashalley Samuel Mar 12 '17 at 14:53
$\begingroup$ When the sum of the two functions is equal to $0$, it will cross the $x$-axis. For example, at $x=\frac{\pi}{2}$, the $y$-coordinates of both functions are $1$ and $-1$, hence the sum is equal to $0$. Thus, $y=\sin(3x)+\sin(x)$ will intersect with the $x$-axis at $\left(\frac{\pi}{2},0\right)$. $\endgroup$ – projectilemotion Mar 12 '17 at 16:50
American Institute of Mathematical Sciences
December 2014, 34(12): 5045-5059. doi: 10.3934/dcds.2014.34.5045
Remarks on geometric properties of SQG sharp fronts and $\alpha$-patches
Angel Castro 1, Diego Córdoba 2, Javier Gómez-Serrano 3 and Alberto Martín Zamora 2
Departamento de Matemáticas de la UAM, Instituto de Ciencias Matemáticas del CSIC, Campus de Cantoblanco, 28049 Madrid, Spain
Instituto de Ciencias Matemáticas, Consejo Superior de Investigaciones Científicas, C/ Nicolas Cabrera, 13-15, 28049 Madrid, Spain
Department of Mathematics, Princeton University, 1102 Fine Hall, Washington Rd, Princeton, NJ 08544, United States
Received January 2014 Revised May 2014 Published June 2014
Guided by numerical simulations, we present the proof of two results concerning the behaviour of SQG sharp fronts and $\alpha$-patches. We establish that ellipses are not rotational solutions and we prove that initially convex interfaces may lose this property in finite time.
Keywords: surface quasi-geostrophic, contour dynamics, incompressible flow, patches, computer-assisted.
Mathematics Subject Classification: Primary: 76B47, 35Q31; Secondary: 65G3.
Citation: Angel Castro, Diego Córdoba, Javier Gómez-Serrano, Alberto Martín Zamora. Remarks on geometric properties of SQG sharp fronts and $\alpha$-patches. Discrete & Continuous Dynamical Systems - A, 2014, 34 (12) : 5045-5059. doi: 10.3934/dcds.2014.34.5045
For each poster contribution there will be one poster wall (width: 97 cm, height: 250 cm) available. Please do not feel obliged to fill the whole space. Posters can be put up for the full duration of the event.
Domain wall structures and dynamics in wide permalloy strips
Estevez, Virginia
Domain walls in permalloy strips may exhibit various equilibrium micromagnetic structures depending on the width and thickness of the strip. We show that the equilibrium phase diagram of domain wall structures displays, in addition to the previously found structures (symmetric and asymmetric transverse wall, vortex wall), also double and triple vortex walls for large enough strip widths and thicknesses [1]. We analyze the field-driven dynamics of such domain walls in permalloy strips of widths from 240 nm up to 6 $\mu$m, using the known equilibrium domain wall structures as initial configurations. Our micromagnetic simulations show that the domain wall dynamics in wide strips is very complex and depends strongly on the geometry of the system, as well as on the magnitude of the driving field. We discuss in detail the rich variety of the dynamical behaviors found, including dynamic transitions between different domain wall structures, periodic dynamics of a vortex core close to the strip edge, transitions of the multi-vortex domain walls towards simpler domain wall structures controlled by vortex polarity, and the fact that for some combinations of the strip geometry and the driving field the system cannot support a compact domain wall [2]. [1] V. Estevez and L. Laurson, Phys. Rev. B {\bf 91}, 054407 (2015). [2] V. Estevez and L. Laurson, Phys. Rev. B {\bf 93}, 064403 (2016).
Annihilation of domain walls in one dimension
Ghosh, Anirban
As a model system for understanding the annihilation of topological solitons in magnets, we studied the behavior of two domain walls in a one-dimensional easy-axis ferromagnet. This is a system in which one can obtain many analytic results. It is well known that a single domain wall (soliton) can be characterized by two collective coordinates: the position of the center (X) and the azimuthal angle (\Phi). Two-soliton configurations are not stable in a one-dimensional magnet, because they do not form local extrema of the energy functional. This means we cannot have a static two-soliton solution as a starting point for our analysis. To find an alternative starting point, we used the fact that our system has two conserved quantities, linear and angular momentum, and looked at extrema of the energy with the additional constraints of fixed (linear and angular) momenta. As a first step, we found the local extremum of the energy functional that has a given fixed angular momentum. This yielded a stationary configuration of two solitons with the same \Phi that precesses with a constant angular velocity but otherwise preserves its shape in the absence of dissipation. The angular velocity depends on the relative separation between the solitons. We chose boundary conditions at \pm\infty that yield a configuration with a total winding number of zero, i.e. one which can be continuously deformed to the uniform state. Adding dissipation to the system causes the two solitons to move towards each other and eventually annihilate. We were able to find an analytic description of this dynamics using two collective coordinates: the separation between the domain walls (\zeta) and the azimuthal angle (\Phi), which are canonically conjugate to each other. We then studied the configuration that extremizes the energy functional for a fixed linear momentum and a fixed angular momentum. This yields a stationary state of two solitons with different azimuthal angles \Phi_1 and \Phi_2. This state has both a constant linear and a constant angular velocity, the linear velocity depending on the relative azimuthal angle \chi = \Phi_1 - \Phi_2. As before, adding dissipation causes the solitons to attract each other and eventually annihilate. An analytic description of this dynamics requires two more (canonically conjugate) collective coordinates, in addition to the separation \zeta and the average azimuthal angle \Phi = (\Phi_1 + \Phi_2)/2. They are the position of the center of the two-soliton system (Z) and the relative azimuthal angle (\chi). We think this work will be a good starting point for studying the annihilation of more complicated topological solitons like vortices and skyrmions in two dimensions.
Bloch line dynamics within domain walls in perpendicular media
Herranen, Touko
We carry out large-scale micromagnetic simulations demonstrating that, due to topological constraints, Bloch lines within extended domain walls are more robust than domain walls in nanowires. Thus, the possibility of spintronics applications based on their motion, channeled along domain walls, emerges. Bloch lines are nucleated within domain walls in perpendicularly magnetized media concurrently with a Walker-breakdown-like abrupt reduction of the domain wall velocity above a threshold driving force, and may also be generated within pinned, localized domain walls. We observe fast field- and current-driven Bloch line dynamics without a Walker breakdown along pinned domain walls, originating from the topological protection of the internal domain wall structure by the surrounding out-of-plane domains.
Defects of helimagnetic ordering
Köhler, Laura
The Dzyaloshinskii-Moriya interaction in chiral magnets like MnSi, FeGe or Cu$_2$OSeO$_3$ stabilizes a magnetic helix with a magnetization that is periodic along the pitch vector. We consider defects of this helimagnetic ordering and their properties. Similar to cholesteric liquid crystals, there exist disclination defects around which the pitch vector rotates by $\pi$. A dislocation is formed by combining a $\pi$ and $-\pi$ disclination whose distance is directly related to the Burgers vector. We find that dislocations fall into two classes with vanishing or a finite topological skyrmion density depending on whether the length of their Burgers vector is a half-integer or integer multiple of the helix wavelength, respectively. As a result, only the latter will couple to spin currents via Berry phases. Dislocations have been recently identified to be important for the relaxation of helimagnetic ordering in FeGe [1]. [1] Local dynamics of topological magnetic defects in the itinerant helimagnet FeGe, A. Dussaux et al. arXiv:1503.06622
Cycloid motion of magnetic bubbles in Dzyaloshinskii-Moriya ferromagnetic materials, with easy-axis anisotropy
Koulouris, Athanasios
Defects around a colloid particle in nematic liquid crystal
Lamy, Xavier
Existence and stability of skyrmion lattice solution in chiral magnets
Li, Xinye
Symmetry of minimizers for Aviles-Giga type functionals
Monteil, Antonin
This poster illustrates some symmetry issues for divergence-free vector fields minimizing some Aviles-Giga type functionals. This is an important problem, in particular in micromagnetics, where it is well known that very complex structures can appear. We bring to light a new calibration method that allows us to identify conditions on the potential which ensure that any global minimizer is 1D.
Skyrmions and defects: from pinning to new memory designs
Müller, Jan
In the last few years, magnetic whirls with integer winding number, so-called 'skyrmions', have gained a lot of attention due to their thermal (topological) stability, nanometer-scale size, and the ability to be controlled at ultra-low current densities. These properties make skyrmions promising candidates for future logic devices and in particular magnetic memory devices. As the most prominent example, the skyrmion racetrack memory has been proposed. Using numerical and analytical calculations, we investigated the interaction of a single skyrmion with different kinds of single defects. We will present the possible phases of interaction [1]. Finally, we will motivate an alternative to the skyrmion racetrack, which is intended to overcome most of its disadvantages and to be a realistic candidate for skyrmion memory devices. [1] J. Müller and A. Rosch, Phys. Rev. B 91, 054410
Action of Rashba Torque on Domain Wall in Magnetic Helix
Pylypovskyi, Oleksandr
O. V. Pylypovskyi$^1$, D. D. Sheka$^1$, V. P. Kravchuk$^2$, K. V. Yershov$^{2,3}$, D. Makarov$^{4,5}$, Y. Gaididei$^{2}$ $^1$ Taras Shevchenko National University of Kyiv, 01601 Kyiv, Ukraine $^2$ Bogolyubov Institute for Theoretical Physics of the National Academy of Sciences of Ukraine, 03680 Kyiv, Ukraine $^3$ National University of "Kyiv-Mohyla Academy", 04655 Kyiv, Ukraine $^4$ Helmholtz-Zentrum Dresden-Rossendorf e. V., Institute of Ion Beam Physics and Materials Research, 01328 Dresden, Germany $^5$ Institute for Integrative Nanosciences, IFW Dresden, 01069 Dresden, Germany Effective anisotropies and Dzyaloshinskii-Moriya interaction emerge in nanomagnets with nonzero curvature $\kappa$ and torsion $\tau$ [1, 2]. The simplest object with constant $\kappa$ and $\tau$ is a helix wire. We consider helices with easy-tangential anisotropy. The equilibrium magnetization is tilted by the angle $\psi \propto \kappa \tau$ inside the tangent-binormal surface by the effective easy-axis and easy-surface anisotropies. The effective Dzyaloshinskii-Moriya interaction significantly influences the structure of the domain wall: (i) a coupling between the domain wall type (head-to-head or tail-to-tail) and its magnetization chirality appears; (ii) the azimuthal angle of the magnetization is a linear function of the coordinate along the wire, with a slope $Y \propto \tau$. In contrast to planar systems, where head-to-head domain walls cannot be moved by spin-orbit torques in the parallel current injection geometry [3], the head-to-head domain wall can be efficiently moved under the action of the Rashba torque in helix wires. The altered ground state results in the existence of a Rashba field component along the magnetization in one of the domains. It pushes the domain wall with a velocity proportional to $\sin\psi/(1+Y^2)$ [4]. All analytical predictions are confirmed by micromagnetic [5] and spin-lattice simulations [6]. [1] D. D. Sheka, V. P. Kravchuk, Y. Gaididei, J. Phys. A: Math. Theor. 48, 125202 (2015). [2] D. D. Sheka, V. P. Kravchuk, K. V. Yershov, Y. Gaididei, Phys. Rev. B. 92, 054417 (2015). [3] A. V. Khvalkovskiy, V. Cros, D. Apalkov, V. Nikitin, M. Krounbi, K. A. Zvezdin, A. Anane, J. Grollier, A. Fert, Phys. Rev. B. 87, 020402 (2013). [4] O. V. Pylypovskyi, D. D. Sheka, V. P. Kravchuk, K. V. Yershov, D. Makarov, Y. Gaididei, arXiv:1510.04725 (2015). [5] T. Fischbacher, M. Franchin, G. Bordignon, H. Fangohr, IEEE Trans. Magn. 43, 2896-2898 (2007). [6] SlaSi spin-lattice simulations package http://slasi.rpd.univ.kiev.ua.
Coarsening dynamics of topological defects in thin Permalloy films
Rissanen, Ilari
This abstract is from our paper submitted to PRB: We study the dynamics of topological defects in the magnetic texture of rectangular Permalloy thin film elements during relaxation from random magnetization initial states. Our full micromagnetic simulations reveal complex defect dynamics during relaxation towards the stable Landau closure domain pattern, manifested as temporal power-law decay, with a system-size dependent cut-off time, of various quantities. These include the energy density of the system, and the number densities of the different kinds of topological defects present in the system. The related power-law exponents assume non-trivial values, and are found to be different for the different defect types. We discuss details of the processes allowed by conservation of the winding number (topological charge) of the defects, underlying their complex coarsening dynamics. We reproduce the relaxation using dynamics predicted by Thiele's equation, accompanied by some simplifications and approximations.
Coupling and numerical integration of the Landau-Lifshitz-Gilbert equation
Ruggeri, Michele
The nonlinear Landau-Lifshitz-Gilbert (LLG) equation models the dynamics of the magnetization in ferromagnetic materials. Numerical challenges arise from strong nonlinearities, a nonconvex pointwise constraint which enforces length preservation, and the possible nonlinear coupling to other PDEs. We discuss numerical integrators, based on lowest-order FEM in space, that are proven to be (unconditionally) convergent towards a weak solution of the problem. Emphasis is put on an effective numerical treatment, where the time-marching scheme decouples the numerical integration of the coupled system. As an example, we consider the nonlinear coupling of the LLG equation and a diffusion equation which models the evolution of the spin accumulation in magnetic multilayers in the presence of electric current.
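To illustrate the length constraint alone (a toy sketch for a single spin, not the finite element scheme discussed in the abstract), one can integrate the dimensionless LLG equation $\dot{m} = -m \times h - \alpha\, m \times (m \times h)$ with explicit Euler steps followed by projection back onto the unit sphere:

```python
import math

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def llg_relax(m, h=(0.0, 0.0, 1.0), alpha=0.1, dt=0.01, steps=20000):
    """Explicit Euler for single-spin LLG with pointwise renormalization."""
    for _ in range(steps):
        mxh = cross(m, h)        # precession term
        mxmxh = cross(m, mxh)    # damping term
        m = tuple(mi - dt * (mxh[i] + alpha * mxmxh[i]) for i, mi in enumerate(m))
        n = math.sqrt(sum(c * c for c in m))
        m = tuple(c / n for c in m)   # enforce the unit-length constraint |m| = 1
    return m

m = llg_relax((1.0, 0.0, 0.0))
print(round(m[2], 3))  # 1.0 -- the spin has relaxed along the applied field
```

The projection step is the simplest way to respect the nonconvex constraint; the FEM integrators mentioned above handle it at the level of the discrete variational formulation instead.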
Discretization methods for oriented materials
Sander, Oliver
Materials such as ferromagnets, liquid crystals, and granular media involve orientation degrees of freedom. Mathematical descriptions of such materials involve fields of nonlinear objects such as unit vectors, rotation matrices, or unitary matrices. Classical numerical methods like the finite element method cannot be applied directly in such situations, because linear and polynomial interpolation is not defined for such nonlinear objects. Instead, a variety of heuristic approaches is used in the literature, which are difficult to analyze rigorously. We present nonlinear generalizations of the finite element method that make it possible to treat problems with orientation degrees of freedom in a mathematically sound way. This allows the construction of numerical methods that are more stable, more efficient, and more reliable. We use the technique to calculate stable configurations of chiral magnetic skyrmions and wrinkling patterns of a thin elastic polyimide film.
Skyrmion caloritronics
Schroeter, Sarah
Magnetic skyrmions are topologically protected smooth magnetic whirl textures, which are energetically stabilized in chiral magnets by the Dzyaloshinskii-Moriya interaction. Skyrmions can be manipulated by ultra-low electric current densities, which makes them promising candidates for novel spintronic applications. In insulating chiral magnets, it turns out that skyrmions are sensitive to tiny magnon currents, induced, for example, with the help of a temperature gradient [1]. In turn, the scattering of spin-waves off skyrmions results in an emergent Lorentz force that leads to a topological magnon Hall effect. Based on our previous work [2,3], we discuss thermal transport of magnons in the presence of a dilute gas of skyrmions in a two-dimensional chiral magnet. Using the Boltzmann equation, we derive the thermal transport coefficients for the magnons as well as the effective equation of motion for the skyrmion positions. [1] Mochizuki et al., Nature materials 13.3 (2014): 241-246. [2] C. Schütte and M. Garst, Phys. Rev. B 90, 094423 (2014). [3] S. Schroeter and M. Garst, Low Temperature Physics, 41, 817-825 (2015).
Rigidity of Shape Memory Alloys
Simon, Thilo
Shape Memory Alloys are materials which, after having been deformed at a low temperature, are able to recover their previous shape upon heating. This effect is a result of the fact that at low temperature the crystal lattice has several distinct stress-free variants. Using these so-called martensite variants, the material can form microstructures. We present our current progress on characterizing the macroscopic geometry of a class of microstructures called habit planes in shape memory alloys undergoing cubic-to-tetragonal transformations. Starting from a geometrically linear variational model we describe the patterns formed by the habit planes under some assumptions on the regularity of the macroscopic configuration.
Why there are no skyrmions in MnSi epitaxial films - the experimental perspectives
Zhang, Shilei
Single-crystal MnSi is a prototypical material system that hosts the skyrmion lattice phase, which is well described by Ginzburg-Landau theory taking thermal fluctuations into account. Thinned-down MnSi films prepared from the bulk have an extended skyrmion phase region, as uniaxial magnetic anisotropy favours skyrmion lattice formation in the micromagnetic model. This seems to suggest that skyrmions may be more stable in grown MnSi films. However, the existence of the skyrmion lattice phase and the magnetic structures in epitaxial MnSi thin-film systems remain open issues, or at least are currently under debate. I would like to present our recent findings from the perspective of the growth and characterisation of this system, which point towards a non-skyrmion conclusion. In fact, several fundamental differences between the bulk and films, both structural and magnetic, may be the reason why the skyrmion state is not favoured in films. These are also general problems for all grown B20 thin films.
Multi-state models for the analysis of time-to-treatment modification among HIV patients under highly active antiretroviral therapy in Southwest Ethiopia
Belay Birlie (ORCID: orcid.org/0000-0001-8621-6539)1,
Roel Braekers2,
Tadesse Awoke3,
Adetayo Kasim4 &
Ziv Shkedy2
Highly active antiretroviral therapy (HAART) has brought a dramatic change in controlling the burden of HIV/AIDS. However, the new challenge of HAART is to ensure long-term sustainability. Toxicities, comorbidity, pregnancy, and treatment failure, among others, result in frequent changes of the initial HAART regimen. The aim of this study was to evaluate the durability of first-line antiretroviral therapy and to assess the causes of initial highly active antiretroviral therapeutic regimen changes among patients on HAART.
A hospital-based retrospective study was conducted from January 2007 to August 2013 at Jimma University Hospital, Southwest Ethiopia. Data on the prescribed ARVs, along with start dates, switching dates, and reasons for change, were collected. The primary outcome was defined as the time to treatment change. We adopted a multi-state survival modeling approach, treating each treatment regimen as a state, and estimated the probability of patients moving from one regimen to another.
A total of 1284 ART-naive patients were included in the study. A large proportion of the patients (41.2%) changed their treatment during follow-up for various reasons; 442 (34.4%) changed once and 86 (6.69%) changed more than once. Toxicity was the most common reason for treatment change, accounting for 48.94% of the changes, followed by comorbidity (new TB) at 14.31%. The HAART combinations most robust to treatment change were tenofovir (TDF) + lamivudine (3TC) + efavirenz (EFV), tenofovir (TDF) + lamivudine (3TC) + nevirapine (NVP) and zidovudine (AZT) + lamivudine (3TC) + nevirapine (NVP), with 3.6%, 4.5% and 11% treatment changes, respectively.
Moving away from drugs with poor safety profiles, such as stavudine(d4T), could reduce modification rates and this would improve regimen tolerability, while preserving future treatment options.
The implementation of HAART at a large scale has shown a dramatic change in controlling the burden of HIV/AIDS. Various studies from developed as well as developing countries have reported an improvement in CD4 cell counts following ART initiation [1–6] and decreases in mortality [7].
Currently, Ethiopia and most resource-limited countries have adopted non-nucleoside reverse transcriptase inhibitor (NNRTI) based therapy, using either NVP or EFV plus two nucleoside reverse transcriptase inhibitors (NRTIs) as first-line therapy for adults and adolescents. These combinations have been shown to be efficacious, are generally less expensive, and are available in generic formulations [7]. However, the new challenge of HAART is to allow long-term durability. Many patients will be forced to modify or switch their treatment regimens for various reasons, including poor drug tolerance, drug toxicities, drug-to-drug interactions, pregnancy and treatment failure [8–10].
Studies from developed and developing countries have shown that a substantial number of patients (up to 69%) may modify their regimen over time, with 25%–44% of them modifying their initial treatment within the first years of treatment [8–14]. Drug-related toxicity was the most common reason for treatment modification [8, 9, 11, 13, 15–18]; it can be an important barrier to adherence and potentially lead to treatment failure [19]. The majority of these studies found that patients who received d4T as part of their treatment were at increased risk of treatment modification due to toxicity [13, 15–18], which raises questions about the continued role of d4T in first-line treatment. The WHO has revised its guidelines on the use of antiretroviral drugs several times; recently, it recommended moving away from d4T, giving preference to TDF and AZT in standard first-line therapy when possible [20, 21]. Due to cost and the management of toxicity, however, the transition from d4T to TDF has been slow in resource-limited settings [20, 21].
The high rate of HAART switching emphasizes the complexity of managing these therapies. Given the limited number of second-line treatment options available in resource-limited settings, maximizing regimen durability by minimizing the rate of treatment modification and rates of treatment failure amongst those on first-line regimens is vital to extend first-line treatment options and prevent premature initiation of second-line therapy. In order to achieve this goal, key reasons for changes in ART regimens should be studied and durable regimens should be identified for recommendations. Moreover, evaluating the influence of initial ART regimens on the likelihood of treatment modification has a vital role in determining what treatment to initiate and what treatment to preserve. Although relevant data on patients' long-term experience on ART from resource limited settings are less commonly available, some investigators have described the reasons for modification of HAART and compared durability of individual ARV's using routine clinical programme data [15–18, 22–27]. However, the majority of these studies have had short follow-up times and consider only first time regimen switching or first time single drug substitution with no distinction made between NNRTI and NRTI substitutions.
Therefore, this study aims to compare the durability of first-line ART regimens and investigate reasons for treatment modification in patients under HAART at Jimma University Specialized Hospital. For this purpose, we adopted a multi-state survival modeling approach, treating each treatment regimen as a state. We estimate the transition probability of patients moving from one regimen to another, both in general and due to a specific event that triggers the move. The proposed model allows modelling of both the occurrence of different event types (such as single drug substitution or regimen switch) and the occurrence of subsequent events, the latter potentially of different types.
Description of the cohort
The data used for this study were obtained from the Jimma University Specialized Hospital HIV/AIDS clinic, located 352 km southwest of Addis Ababa, Ethiopia. The hospital provides VCT, PMTCT and free ART services for people living in Jimma Town and Southwest Ethiopia. Patients begin ART after they have been checked for medical eligibility and counseled for adherence to ART. Patients presenting with WHO stage 4 disease and/or a CD4 count lower than 200 were eligible to start ART. Those who started ART have regular follow-up for drug adverse effects, management of opportunistic infections, TB screening and counseling related to family planning. In addition, the CD4 count is measured at each visit. Viral load measurement is not available. Adverse event monitoring is conducted by clinicians during medical visits in accordance with national guidelines.
Decisions on which treatment regimen to start or substitute are made by the clinician in consultation with the patient. During the study period, the standard first-line regimens to be initiated were d4T + 3TC + NVP (1), d4T + 3TC + EFV (2), AZT + 3TC + NVP (3), AZT + 3TC + EFV (4), TDF + 3TC + EFV (5), and TDF + 3TC + NVP (6). If a patient suffered from side effects/toxicities related to the NRTIs (d4T, AZT or TDF), and was not in need of second-line therapy for virologic failure, the recommendation was to substitute d4T with either AZT or TDF, to substitute AZT with either d4T or TDF, and to substitute TDF with either d4T or AZT. Similarly, patients initiated on the NNRTI EFV could substitute it with NVP, while those on NVP could substitute it with EFV. In addition, upon the recommendation of WHO, in October 2012 the hospital started to phase out the d4T backbone by replacing it with either AZT or TDF for patients who were on a d4T-based regimen. All data, including demographics, clinical conditions, laboratory test results and medications, are recorded and entered into the database by a data entry clerk at the clinic. In addition, data on prescribed ARVs, along with start and stop dates of the drugs and reasons for discontinuation, are documented. Use of the Jimma University Hospital HIV/AIDS clinic data and analysis of de-identified data were approved by the Human Research Ethics Committee of Jimma University.
All ART-naive patients, aged 18 years or older, who initiated a standard, public-sector, first-line ART regimen at the clinic between January 1, 2007 and December 31, 2011 were eligible for this analysis. The data were closed for analysis at the end of August 2013.
The primary outcome was defined as the time to treatment change (treatment modification or regimen switching). For the purpose of this study, treatment change is defined as changing at least one ARV in the regimen without initiating second-line therapy. ARV dosage adjustments were not considered treatment changes. Time zero was defined as the day of ART initiation, and each recurrent treatment change time was measured in months from the patient's ART initiation. Person-time of a study subject ended at the earliest of initiation of second-line therapy, loss to follow-up, death, transfer, or closure of the data set for analysis (August 25, 2013).
Multi-state survival model
Model formulation
Possible transitions between treatment combinations are presented in Fig. 1, which illustrates the treatment history of patients under ART. From here onward we use the term state to denote a specific treatment combination. The model has 6 transient states (representing the 6 first-line treatment combinations): d4T + 3TC + NVP (1), d4T + 3TC + EFV (2), AZT + 3TC + NVP (3), AZT + 3TC + EFV (4), TDF + 3TC + EFV (5), and TDF + 3TC + NVP (6). The model assumes that every patient can switch to any regimen at one point or another; however, a patient can only switch to one regimen at a time. For example, a patient who started treatment with d4T + 3TC + NVP (state 1) is at risk of making one of the following transitions at a particular time: 1→2, 1→3, 1→4, 1→5 and 1→6. If the patient makes the transition 1→2 or 1→3, the subject has undergone a single drug substitution (treatment modification). Transition 1→2 implies that the patient has substituted the NNRTI NVP with EFV without changing the NRTI treatment (d4T), whereas transition 1→3 implies that the patient has substituted the NRTI d4T with AZT without changing the NNRTI treatment. Transitions 1→4 or 1→5 imply regimen switching, i.e., substituting both the NNRTI and the NRTI at the same time. After making one of these possible transitions, patients are at risk of making further transitions.
A Six-state multi state model for treatment change. Note that 3TC was omitted because it was present in all the regimens. The transition intensities matrix is presented in Additional file 1: Section S2
Let $(X_t)_{t \geq 0}$ be a multi-state process with state space $\{1,2,\ldots,6\}$. The stochastic process is defined by $X_t = \ell$ if the process is in state $\ell$ at time $t$ (in months). As mentioned above, for the case study presented in this paper there are 6 possible first-line regimens, which implies that the initial state of the patient satisfies $X_0 \in \{1,2,\ldots,6\}$.
Our main interest is to model the transition from the $\ell$th regimen (state $\ell$) to the $j$th regimen (state $j$) at time $t$. A transition will simply be denoted by $\ell \rightarrow j$. The distribution of this multi-state process is characterized by the transition intensities, or hazard rates, $a_{\ell j}(t)$, which express the instantaneous risk of a transition from state $\ell$ into state $j$ at time $t$, that is
$$ {}\begin{aligned} a_{\ell j}(t)={\lim}_{\Delta t \rightarrow 0} \frac{P(X_{(t+\Delta t)}=j|X_{t}=\ell,{F}_{t^{-}})}{\Delta t}, \ell, j \in \{1,2,...,6\}, \ell \neq j. \end{aligned} $$
Here, $F_{t^{-}}$ represents the process history prior to time $t$. In our application, time $t$ represents time since ART initiation. The cumulative transition hazard is defined as $A_{\ell j}(t) = \int_{0}^{t} a_{\ell j}(u)du$, with $A_{\ell j}(t) = 0$ if a direct transition between states $\ell$ and $j$ is impossible. These quantities can be gathered into a $6 \times 6$ matrix $\mathbf{A}(t)$ with diagonal elements $A_{\ell \ell}(t) = -\sum_{j=1, j \neq \ell}^{6} A_{\ell j}(t), \ell, j \in \{1,2,\ldots,6\}$. Note that individuals who have no transition remain on their initial regimen (starting state) after ART initiation.
A central issue related to ART management is the ability to estimate the probability of the future treatment combination of the patient (i.e., the patient's state) given all the information available up to the present moment. For example, consider a patient who substituted the NRTI d4T with AZT, without changing the NNRTI, after 6 months (i.e., the current state of the patient is either state 3 or state 4, depending on the initial NNRTI component) and who has had no further events at one year post-ART. One may be interested in estimating the probability of staying on this combination for an additional 6 months, as well as in comparing this probability to that of a patient who did not substitute the NRTI (d4T). We propose to use the transition probabilities for long-term prediction of a patient's state. Let $s$ be the time at which the prediction is made, measured from the time origin of the patient (start of treatment), and let us denote the event history of the patient up to time $s$ by $X_u, 0 \leq u \leq s$. Then, the transition probability from state $\ell$ to state $j$ in the time interval $[s,t]$, given the information available until time $s$, is defined as
$$ {}\begin{aligned} P_{\ell j}(s,t) = P(X_{t} = j \mid X_{s}=\ell,X_{u}), s \leq t, \ell,j \in \{1,2,...,6\}, u \in\, [0,s]\!. \end{aligned} $$
In order to estimate $P_{\ell j}(s,t)$ we propose to use a Markov model [28]. The model assumes that the future course of the patient depends on the patient's state at the current time but not on the patient's history before the current state. This means that, conditional on the present state, the past has no influence on the risk. This implies that
$$ {}\begin{aligned} a_{\ell j}(t)dt=P(X_{(t+dt)^{-}}=j|X_{t^{-}}=\ell), \ell, j \in \{1,2,...,6\}, \ell \neq j \end{aligned} $$
$$ {}\begin{aligned} P_{\ell j}(s,t) = P(X_{t} = j \mid X_{s}=\ell), s \leq t,\ell,j \in \{1,2,...,6\}. \end{aligned} $$
Analogous to $\mathbf{A}(t)$, these probabilities can be gathered into a $6 \times 6$ matrix $\mathbf{M}(s,t)$ with $P_{\ell j}(s,t)$ as its $(\ell,j)$th entry. A single element $P_{\ell j}(s,t)$ combines both direct and indirect transitions from state $\ell$ to state $j$. The matrix is fully presented in Additional file 1: Section S.2.
In this section we present non-parametric approaches for time-continuous Markov models with finite state space under independent right censoring. We consider $n$ individual multi-state processes $\left(X^{(i)}_{t}\right)_{t \geq 0}, X^{(i)}_{t} \in \{1,...,6\}, i=1,2,...,n$. We assume that the $n$ processes are, conditional on the initial states $X^{(i)}_{0}$, independent multi-state processes. Observation of the individual multi-state data is subject to a right-censoring time $C_i$. Our notation and ideas are based on a counting process formulation [29, 30].
Let $N_{\ell j;i}(t)$ be the counting process denoting individual $i$'s number of observed direct (without visiting another state in between) $\ell \rightarrow j$ transitions in $[0,t]$, $\ell, j \in \{1,2,\ldots,6\}, \ell \neq j$. Here, time $t$ refers to the time since the patient entered the initial state (i.e., the time since ART initiation). Let $Y_{\ell;i}(t)$ be an indicator variable representing the at-risk process, where
$${}{\begin{aligned} Y_{\ell;i}(t) = \left\{ \begin{array}{cl} 1 & : \text{if individual}~ i~ \text{is in state}~ \ell~ \text{and under observation before time}~ t,\\ 0 & :\text{otherwise.} \end{array}\right. \end{aligned}} $$
Let the aggregated processes $N_{\ell j}(t) = \sum_{i=1}^{n} N_{\ell j;i}(t), \ell \neq j$, and $Y_{\ell}(t) = \sum_{i=1}^{n} Y_{\ell;i}(t)$ denote, respectively, the number of observed direct $\ell \rightarrow j$ transitions during the time interval $[0,t]$ and the number of individuals observed to be at risk in state $\ell$ just prior to time $t$. We define the increment of $N_{\ell j}(t)$ as $\triangle N_{\ell j}(t) = N_{\ell j}(t) - N_{\ell j}(t^{-})$, which gives the number of $\ell \rightarrow j$ transitions observed exactly at time $t$.
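The aggregated counts and at-risk numbers can be tabulated directly from per-subject state-occupation records. The following is a minimal illustrative sketch (not the study's R implementation), under the simplifying assumption that each subject occupies one state from its entry time until the next transition or right censoring; the records below are hypothetical:

```python
from collections import defaultdict

def aggregate(episodes):
    """Aggregated counting processes from per-subject episodes.

    episodes : list of (state, entry_time, exit_time, next_state) tuples,
               one per occupied state; next_state is None if the episode
               ends by right censoring rather than a transition.
    Returns dN[(l, j)][t], the number of l -> j transitions observed at
    time t, and a function Y(l, t), the number at risk in state l at t."""
    dN = defaultdict(lambda: defaultdict(int))
    for state, entry, exit_, nxt in episodes:
        if nxt is not None:
            dN[(state, nxt)][exit_] += 1  # one direct state -> nxt transition

    def Y(l, t):
        # at risk in state l: entered strictly before t, still present at t
        return sum(1 for s, a, b, _ in episodes if s == l and a < t <= b)

    return dN, Y

# two hypothetical subjects: one 1 -> 3 transition at month 6, one censored
episodes = [(1, 0.0, 6.0, 3), (3, 6.0, 20.0, None), (1, 0.0, 12.0, None)]
dN, Y = aggregate(episodes)
```

With these toy records, the increment of the cumulative hazard for the 1→3 transition at month 6 would be estimated as dN[(1, 3)][6.0] / Y(1, 6.0) = 1/2.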
Nonparametric estimation of baseline hazards
From the definition of the transition intensities in Eq. (3), $a_{\ell j}(t)dt = P(X_{(t+dt)^{-}} = j \mid X_{t^{-}} = \ell), \ell \neq j$. Hence, if we observe no $\ell \rightarrow j$ transition at $t$ (i.e., $\triangle N_{\ell j}(t) = 0$), we estimate the increment $a_{\ell j}(t)dt$ of the cumulative hazard as 0. If we do observe an $\ell \rightarrow j$ transition at $t$ (i.e., $\triangle N_{\ell j}(t) > 0$), we estimate this conditional transition probability by
$$ \triangle \hat{A}_{\ell j}(t) = \frac{\triangle N_{\ell j}(t)}{Y_{\ell}(t)}, $$
Summing over these increments yields the Nelson-Aalen estimators [29]
$$ \hat{A}_{\ell j}(t) =\sum\limits_{s \leq t} \frac{\triangle N_{\ell j}(s)}{Y_{\ell} (s)}, \ell \neq j, $$
where summation is over all observed event times in [0,t] and its variance is given by
$$ \hat{\delta}_{\ell j}^{2}(t) =\sum\limits_{s \leq t} \frac{\triangle N_{\ell j}(s)}{Y_{\ell}^{2}(s)}, \ell \neq j. $$
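The Nelson-Aalen estimator and its variance are simple running sums over the observed event times. A minimal sketch (with made-up toy counts, not the study data), where d and y stand for the aggregated transition counts and at-risk numbers at each event time:

```python
import numpy as np

def nelson_aalen(d, y):
    """Nelson-Aalen estimator for one l -> j transition.

    d : number of direct l -> j transitions at each event time (dN_lj)
    y : number at risk in state l just before each event time (Y_l)
    Returns the cumulative hazard estimate and its variance estimate,
    evaluated at the successive event times."""
    d = np.asarray(d, dtype=float)
    y = np.asarray(y, dtype=float)
    A = np.cumsum(d / y)        # hat A_lj(t): sum of dN_lj(s) / Y_l(s)
    var = np.cumsum(d / y**2)   # hat delta^2_lj(t): sum of dN_lj(s) / Y_l(s)^2
    return A, var

# hypothetical counts at three event times
A, var = nelson_aalen(d=[1, 2, 1], y=[10, 8, 5])
```

The estimate is a step function that jumps only at observed transition times, which is what makes the product-limit construction of the transition probabilities below a finite product.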
Nonparametric estimation of transition probabilities
The transition probabilities are a complex function of the transition hazards, because the state occupied at some time t may potentially result from a complex nested series of competing risks experiments and there may also be more than one possible sequence of competing risks experiments leading to being in a certain state at a certain time [31]. Under the Markov model the transition probabilities defined in (4) are the solution of a set of differential equations [29]
$$ \frac{d}{dt}\mathbf{M}(s,t) =\mathbf{A}^{T}(t)\mathbf{M}(s,t), $$
where $\mathbf{M}(s,t)$ is the transition probability matrix with $(\ell,j)$th element $P_{\ell j}(s,t) = P(X_t = j \mid X_s = \ell)$ and $\mathbf{A}(t)$ is a matrix with off-diagonal elements $A_{\ell j}(t) = a_{\ell j}(t)$ and diagonal elements $A_{\ell \ell}(t) = -\sum_{j=1, j \neq \ell}^{6} a_{\ell j}(t)$. In coordinates, (8) reads $d/dt\, P_{\ell j}(s,t) = \sum_{k} P_{\ell k}(s,t)A_{kj}(t)$, and for any fixed initial state $\ell$ the vector of transition probabilities from $\ell$, $(P_{\ell 1}(s,t), P_{\ell 2}(s,t), ..., P_{\ell 6}(s,t))$, satisfies this equation. Even though this equation cannot be solved in general, due to the non-constancy over time of the matrix $\mathbf{A}(t)$, under the Markov assumption the transition probabilities satisfy
$$ P_{\ell j}(s,t)=\sum_{r} P_{\ell r}(s,u)P_{rj}(u,t)\;\;, s \le u \le t, $$
and based on a partition s=t 0<t 1<t 2<…<t k−1<t k =t of the time interval [s,t], the matrix of transition probabilities M(s,t) can be approximated by [31]
$$ \mathbf{M}(s,t) \approx \prod\limits_{k=1}^{K} (\mathbf{I}+\triangle \mathbf{A}(t_{k})), $$
where $\mathbf{I}$ is the $(6 \times 6)$ identity matrix, the $(\ell,j)$th element of $\triangle \mathbf{A}(t_k)$ is equal to $A_{\ell j}(t_k) - A_{\ell j}(t_{k-1})$, and $A_{\ell \ell}(t) = -\sum_{j=1, j \neq \ell}^{6} A_{\ell j}(t)$. Computing the approximation for ever finer partitions of $[s,t]$ approaches a limit, namely a matrix-valued product integral [31, 32], which equals the matrix of transition probabilities,
$$ \mathbf{M}(s,t)=\prod\limits_{u \in (s,t]}(\mathbf{I}+d\mathbf{A}(u)), $$
where $u$ ranges from $s$ to $t$ and $d\mathbf{A}(u)$ is defined by $dA_{\ell j}(u) = a_{\ell j}(u)du, \ell, j \in \{1,2,...,6\}$ [29]. Therefore, for Markov models, given $\mathbf{A}(t)$, product integration is the mapping that switches from the cumulative transition hazards to the matrix of transition probabilities, with all cumulative transition hazards involved.
An estimator of $\mathbf{M}(s,t)$ can be obtained by replacing $\mathbf{A}(u)$ with the Nelson-Aalen estimator $\hat{\mathbf{A}}(u)$ and defining $d\hat{\mathbf{A}}(u)$ as the matrix with entries $\triangle \hat{A}_{\ell j}(u) = \hat{A}_{\ell j}(u) - \hat{A}_{\ell j}(u^{-})$ (i.e., the increments of the Nelson-Aalen estimators at time $u$). This results in the Aalen-Johansen type estimator [29],
$$ \hat{\mathbf{M}}(s,t) = \prod\limits_{\substack{u \in (s,t]}}(\mathbf{I} + \Delta \hat{\mathbf{A}}(u)), $$
in which u indicates all event times in (s,t]. Note that the transition probability matrix defined in (11) is calculated by means of a product integral, while its estimator in (12) is based on a finite product, which only changes at event times.
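Since the estimated cumulative hazards only jump at observed event times, the product integral in (12) reduces to a finite matrix product. A sketch of this computation, on a hypothetical two-state example (one transient, one absorbing state) rather than the fitted 6-state model:

```python
import numpy as np

def aalen_johansen(dA_list):
    """Aalen-Johansen estimator: finite product of (I + dA(u)) over event times.

    dA_list : increments of the Nelson-Aalen estimators at each event time,
              as K x K matrices with off-diagonal entries dN_lj / Y_l and
              diagonal entries equal to minus the off-diagonal row sum."""
    K = dA_list[0].shape[0]
    M = np.eye(K)
    for dA in dA_list:
        M = M @ (np.eye(K) + dA)  # product over event times u in (s, t]
    return M

# hypothetical increments at two event times
dA_list = [np.array([[-0.2, 0.2], [0.0, 0.0]]),
           np.array([[-0.5, 0.5], [0.0, 0.0]])]
M = aalen_johansen(dA_list)  # each row of M sums to 1
```

Each factor (I + ΔÂ(u)) is a stochastic matrix, so the estimated M(s,t) automatically has rows summing to one, matching the property used later for the diagonal staying probabilities.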
The transition probabilities can be used for two types of prediction: forward and fixed horizon [30, 33]. In the former case, at a given fixed time s the probabilities of possible future events are evaluated for varying time horizons t. In the latter case, the prediction is made from several starting points to one future fixed time point. In both cases, Aalen-type or Greenwood-type estimators of the variance-covariance matrix of M(s,t) can be calculated directly or through a recursion formula, which can for instance be used to construct point-wise confidence intervals around the estimated transition probability curves [30]. In our application we use the forward prediction type.
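Forward prediction fixes s and evaluates M(s,t) at successive event times t > s, i.e., the running product of the estimated increments over event times in (s,t] only. A self-contained sketch with hypothetical two-state increments (not the study data):

```python
import numpy as np

def forward_prediction(event_times, dA_list, s):
    """M(s, t) for fixed prediction time s at every event time t > s:
    the running product of (I + dA(u)) over event times u in (s, t]."""
    K = dA_list[0].shape[0]
    M = np.eye(K)
    out = {}
    for u, dA in zip(event_times, dA_list):
        if u <= s:
            continue  # increments at or before s do not enter M(s, t)
        M = M @ (np.eye(K) + dA)
        out[u] = M.copy()
    return out

# hypothetical two-state increments at event times 3 and 8 (months)
times = [3.0, 8.0]
dA_list = [np.array([[-0.2, 0.2], [0.0, 0.0]]),
           np.array([[-0.5, 0.5], [0.0, 0.0]])]
pred0 = forward_prediction(times, dA_list, s=0.0)  # predict from initiation
pred5 = forward_prediction(times, dA_list, s=5.0)  # predict from month 5
```

In this toy example, conditioning on having had no event up to month 5 raises the estimated probability of staying in state 1, which is the same kind of comparison made at 10 months post-ART in the Results.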
Robustness of First-line HAART towards Treatment Modification/change
The primary aim of this study is to quantify the robustness of first-line treatments to treatment modification. Given an individual's initial state $\ell$ at time $s$, the waiting time in state $\ell$, i.e., the duration of stay in state $\ell$, can be used as a summary measure of the model. The waiting time in state $\ell$ is generated with hazard
$$a_{\ell.}(t)=\sum\limits_{j=1,j \neq \ell}^{6} a_{\ell j}(t), t \geq 0. $$
We define the total cumulative transition hazard out of state ℓ as \( A_{\ell.}(t)=\int _{0}^{t} a_{\ell.}(u)du=\sum _{j=1,j \neq \ell }^{6} A_{\ell j}(t)\). Using these quantities one can evaluate the probability of no events during a period. The survival function of the waiting time in the initial state ℓ, i.e., probability to stay in state ℓ until time t, given that the individual had already been in the respective state at time s,s≤t is given by,
$$\begin{array}{*{20}l} P(X_{t}=\ell|x_{s}=\ell) &= \prod\limits_{\substack{u \in (s,t]}}(1-a_{\ell.}(u)du)\\ &=exp \left(-\int_{s}^{t}a_{\ell.}(u)du\right)\\&=\text{exp}(-A_{\ell.}(t)) \ell=1,...,6. \end{array} $$
These quantities are essentially common survival probabilities with all-cause hazard $a_{\ell.}(u)$, taking time $s$ as the time origin [31]. However, this can also be seen as a solution of the product integral in (11). Since the $\ell$th row of the Aalen-Johansen type estimator $\hat{\mathbf{M}}(s,t)$ contains the estimates $\hat{P}_{\ell j}(s,t)$ for $\ell \neq j$, and the diagonal element is such that the sum over the $\ell$th row equals 1, the Aalen-Johansen estimator of $P(X_t=\ell|X_s=\ell)$ is just $\hat{P}_{\ell \ell}(s,t)$.
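The staying probability can thus be computed in two ways: as the product-limit form over the observed increments of the total out-of-state hazard, or via the continuous-time expression exp(−A_ℓ·(t)); for small increments the two nearly coincide. A sketch with hypothetical increments:

```python
import numpy as np

# hypothetical increments dA_l.(u) of the total hazard out of state l
# at the observed event times (not the study data)
dA_out = np.array([0.1, 0.25, 0.2])

# product-limit form: product over event times of (1 - dA_l.(u));
# this is what the diagonal of the Aalen-Johansen estimator returns
stay_pl = np.prod(1.0 - dA_out)

# exponential form exp(-A_l.(t)), exact for a continuous hazard
stay_exp = np.exp(-np.sum(dA_out))
```

Since 1 − x ≤ exp(−x), the product-limit value never exceeds the exponential one; the gap shrinks as the individual increments get small.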
The multi-state model formulated above allows us to evaluate whether a treatment modification reflects a substitution of the NNRTI, a substitution of the NRTI, or a substitution of both, by initial treatment combination. We propose to use the following measures of HAART robustness to treatment modification:
Probability of NNRTI substitution
P 12 for state 1, P 34 for state 3, P 56 for state 5
Probability of NRTI substitution
P 13+P 16 for state 1, P 31+P 36 for state 3, P 52+P 54 for state 5
Probability of regimen switching
P 14+P 15 for state 1, P 32+P 35 for state 3, P 51+P 53 for state 5
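Given an estimated transition probability matrix M(0,t), these summary measures are simple sums of its entries, determined by which states share the NNRTI or the NRTI with the starting state. A sketch (the 6×6 matrix below is hypothetical, not an estimate from the data):

```python
import numpy as np

def robustness_measures(M, state):
    """Summary measures for an NVP-anchored or EFV-anchored starting state.

    M : 6 x 6 transition probability matrix M(0, t), rows summing to 1,
        states coded as in the model (1: d4T+NVP, 2: d4T+EFV, 3: AZT+NVP,
        4: AZT+EFV, 5: TDF+EFV, 6: TDF+NVP), 0-based indexing internally.
    state : starting state, one of 1, 3 or 5."""
    # partner states derived from the regimen composition:
    # (same NRTI / other NNRTI, same NNRTI / other NRTIs, both changed)
    partners = {1: (2, [3, 6], [4, 5]),
                3: (4, [1, 6], [2, 5]),
                5: (6, [2, 4], [1, 3])}
    nnrti, nrti, switch = partners[state]
    i = state - 1
    return {
        "no_modification": M[i, i],
        "NNRTI_substitution": M[i, nnrti - 1],
        "NRTI_substitution": sum(M[i, j - 1] for j in nrti),
        "regimen_switch": sum(M[i, j - 1] for j in switch),
    }

# hypothetical matrix: only the row for state 1 is filled in
M = np.eye(6)
M[0] = [0.5, 0.1, 0.2, 0.05, 0.05, 0.1]
m = robustness_measures(M, state=1)
```

Because each row of a transition probability matrix sums to one, the four measures for a given starting state always add up to 1, which is a useful sanity check on the estimates.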
Of the 1453 eligible patients, 169 were excluded because of limited follow-up (i.e., those with at most 1 month of follow-up data) or missing information (patients with missing information about the prescribed ARVs or the start and stop dates of the drugs). A total of 1284 subjects were included in the analysis presented in this paper. Patients' person-time was censored at the earliest of changing to second-line treatment, death, loss to follow-up, transfer, or end of study (August 25, 2013). The median follow-up time was 37.40 months (IQR: 22.32–56.15 months) and the average follow-up time was 38.25 months per person. At ART initiation, patients had a median CD4 cell count of 137 cells/mm3 (IQR: 78–201 cells/mm3), were predominantly female (68.81%) and had a median age of 30 years (IQR: 26–35 years) (Table 1). The most common regimen initiated was d4T + 3TC + EFV, given to 526 (40.96%) patients, while TDF + 3TC + EFV was administered to 401 (31.23%) patients at initiation.
Table 1 Baseline characteristics of study subjects
For the majority of the patients (58.8%), the first-line treatment was not modified; 442 patients (34.4%) had their treatment changed only once, while 86 patients (6.69%) had their treatment changed more than once. In total, 615 (32.4%) treatment changes occurred in this cohort over the period of follow-up. Of those 615 changes, 426 (69.27%) substituted the NRTI only, 144 (23.41%) substituted the NNRTI only, and 45 (7.32%) substituted both the NRTI and the NNRTI at the same time. The number of events (transitions made from each state) and the total number of patients at risk of treatment modification are presented in Table 2. Five hundred forty-one patients were on the d4T + 3TC + NVP combination, but only 125 (23%) remained on this regimen without any modification. Of the 483 patients on AZT + 3TC + NVP, 89% did not experience treatment modification, as did 95% of the 89 patients on TDF + 3TC + NVP. Among the regimens containing EFV, 36% of 157 patients on d4T + 3TC + EFV, 86% of 161 patients on AZT + 3TC + EFV and 96% of 471 patients on TDF + 3TC + EFV did not experience treatment modification. The frequency of treatment change was lowest amongst patients initiated on TDF-based regimens (3.75%), compared to those initiated on AZT-based (11.96%) and d4T-based regimens (73.92%). Notably, regimens containing d4T were more prone to treatment modification than those containing AZT or TDF. Apart from d4T, patients who received NVP (42.5%) were more susceptible to treatment modification than patients who received EFV (17.87%).
Table 2 Observed transition matrix
As seen from Fig. 2 (Panel b), however, when we look at the time spent in the current treatment combination of the patients who modified their treatment, patients initiated on d4T had a tendency to stay longer (40.40 months; IQR: 14.60-55.73) as compared to AZT patients (3.7 months; IQR: 1.90-16.02) and TDF patients (12.60 months; IQR: 6.84-20.40). Similarly, patients initiated on NVP had a tendency to stay longer (38.37 months; IQR: 5.42-56.05) as compared to EFV patients (21.80 months; IQR: 8.70-39.80) (Fig. 2 (Panel c)). The duration of stay in each treatment combination before the first change to another treatment combination is presented in Additional file 1: Figure S3.1.
Duration on treatment before switch in months. a Duration in original treatment combination before switch, b Duration in original NRTI before switch, and c Duration in original NNRTI before switch. Note that only the time spent in the current treatment combination of the patients who modified their treatment are considered
The multistate model described in the previous section was estimated using the mstate package developed by [34]. Details about the different R functions used for the estimation are given in Additional file 1: Section S1. In particular, using the estimated transition hazards as described in Eq. (12), we calculate the transition probabilities P ℓ j (s,t) from all starting states to all possible states, between the starting time s=0 and all event times successively. Note that several probability estimates cannot be obtained due to limited information in some states. As shown in Table 2, treatment change was observed in only 4 of the 89 patients initiated on TDF + 3TC + NVP; hence we have chosen to consider the occurrence of treatment modification from this treatment combination (state 6) as inadmissible. The model has 6 states as before, but with a different transition matrix. The transition matrix in Additional file 1: Section S1 shows the multistate structure which reflects this framework. We show in Fig. 3 the estimated transition probabilities from all starting states to all possible states, between the starting time s=0 and all event times successively. Treatment combinations containing d4T have the lowest probability of no treatment modification, while the combination of TDF and EFV is the most robust to treatment modification.
Transition probability starting from each state. Note: the estimates contain both direct and indirect transition probabilities. 1: d4T + 3TC + NVP, 2: d4T + 3TC + EFV, 3: AZT + 3TC + NVP, 4: AZT + 3TC + EFV, 5: TDF + 3TC + EFV, 6: TDF + 3TC + NVP
As mentioned above, we are mainly interested in prediction of the four measures of HAART robustness to treatment modification: (1) probability of no treatment modification, (2) NRTI substitution, (3) NNRTI substitution and (4) regimen change. The estimated probabilities are shown in Fig. 4. As was to be expected on the basis of the previous discussion, the prospects for a patient who received a regimen containing d4T are indeed worse than those for a patient who received a regimen containing AZT or TDF, the former having a far larger probability of NRTI substitution and regimen change and the lowest treatment modification-free survival probabilities.
Prediction probabilities at s = 0 for a reference patient. a Probability of no treatment modification, b Probability of NRTI substitution, c Probability of NNRTI substitution and d Probability of regimen changes. 1: d4T + 3TC + NVP, 2: d4T + 3TC + EFV, 3: AZT + 3TC + NVP, 4: AZT + 3TC + EFV, 5: TDF + 3TC + EFV, 6: TDF + 3TC + NVP
As mentioned above, treatment modification occurs more frequently for AZT and TDF early after treatment initiation, while it occurs later in follow-up among patients on d4T. Thus, it is interesting to compare treatments based on the situation after some months, to account for early ART complications. For this, the transition probabilities at 10 months post-ART were estimated and the results are presented in Fig. 5. A comparison of Figs. 4 and 5 clearly shows that if a patient on a treatment combination containing AZT or TDF has not had any early ART complication leading to treatment change or modification in the first 10 months post-ART, his/her probability of future treatment change or modification decreases considerably; notably, his/her probability of long-term NRTI substitution-free survival increases significantly. On the other hand, the long-term treatment modification-free survival of a patient on a treatment combination containing d4T was unchanged by the fact that he/she had not experienced treatment modification in the first 10 months post-ART.
Reason for treatment modification
Table 3 shows the reasons for treatment change for the total of 615 observed treatment changes in the cohort, stratified by treatment combination. We were able to obtain the reason for the majority of treatment changes (88.62%). Toxicity and comorbidity were the main reasons for treatment modification, accounting for 48.94% and 14.31% of the observed treatment changes, respectively. About 50% of the patients on all regimens except TDF + 3TC + EFV reported toxicity or side effects. In addition, phasing out of d4T from the NRTI backbone accounted for 20.16% of the observed treatment changes.
Prediction probabilities at s = 10 for a reference patient. a Probability of no treatment modification, b Probability of NRTI substitution, c Probability of NNRTI substitution and d Probability of regimen changes. 1: d4T + 3TC + NVP, 2: d4T + 3TC + EFV, 3: AZT + 3TC + NVP, 4: AZT + 3TC + EFV, 5: TDF + 3TC + EFV, 6: TDF + 3TC + NVP
Table 3 Reasons for antiretroviral modification among HIV patients on HAART
In order to quantify the effect of toxicity on treatment change, we modify the definition of time-to-treatment change to time-to-treatment change due to toxicity. Here, treatment changes related to other reasons during the follow-up period were censored at the time of their occurrence. As seen from Table 4, a large proportion of patients (26.25%) on the NVP and d4T combination had NRTI substitution with d4T replaced by AZT due to toxicity. Similarly, 15.92% and 14.01% of patients on the EFV combination with d4T had d4T replaced by AZT and TDF, respectively, due to toxicity. The treatment combination of TDF and EFV was the most robust to treatment modification among the six first-line regimens. As previously noted, this combination seems the least toxic. In a similar fashion, we calculate the toxicity-driven transition probabilities P_{ℓj}(s,t) from all starting states to all possible states, between the starting time s=0 and all event times successively, and extract the four measures of transition probabilities. Here again, Fig. 6 shows that treatment combinations containing d4T had the highest probability of treatment modification due to toxicity, whereas the combination of TDF and EFV was the most robust to treatment modification.
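The censoring step can be made concrete: restricting attention to toxicity-driven modification is a cause-specific analysis in which a change for any other reason is treated as a censoring event at its occurrence time. A hypothetical Python sketch of this recoding; the record layout and reason labels are invented for illustration and are not taken from the study data:

```python
# Hypothetical records (time of change in months, reason); reason None means
# the patient was still on the original regimen at the end of follow-up.
records = [
    (5.2, "toxicity"),
    (12.0, "new TB"),     # comorbidity -> censored in the toxicity analysis
    (40.4, "phase-out"),  # programmatic d4T phase-out -> censored
    (8.7, "toxicity"),
    (60.0, None),         # administratively censored
]

def toxicity_event(record):
    """Return (time, status): status 1 only for a toxicity-driven change;
    changes for any other reason are censored at the time they occur."""
    time, reason = record
    return (time, 1 if reason == "toxicity" else 0)

recoded = [toxicity_event(r) for r in records]
print(recoded)  # -> [(5.2, 1), (12.0, 0), (40.4, 0), (8.7, 1), (60.0, 0)]
```

The recoded (time, status) pairs then feed the same multistate machinery as the all-cause analysis, giving cause-specific hazards for toxicity alone.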
Prediction probabilities for time to treatment change due to toxicity at s = 0 for a reference patient: a Probability of no treatment modification, b Probability of NRTI substitution, c Probability of NNRTI substitution and d Probability of regimen changes. 1: d4T + 3TC + NVP, 2: d4T + 3TC + EFV, 3: AZT + 3TC + NVP, 4: AZT + 3TC + EFV, 5: TDF + 3TC + EFV, 6: TDF + 3TC + NVP
This study has provided unique and important data on the durability of first-line ART and on the reasons responsible for antiretroviral treatment modification in the setting of a tertiary care hospital in a resource-limited country. This work adds to previous observational studies [15–18, 22–27] conducted in resource-limited settings, in which no distinction was made between NRTI substitution and NNRTI substitution, or between all-cause treatment modification and toxicity-driven treatment modification, and in which the incidence of subsequent treatment modification was not studied. Further, we show how a simple multi-state survival model can be used to estimate the probability of the future treatment combination of a patient given all the information available up to the present moment.
In our cohort, a large proportion of patients (41.2%) changed their treatment during follow-up, with 34.4% changing once and 6.69% changing more than once, which is far higher than what has been previously reported [22, 27]. The short follow-up time and the consideration of only the first treatment modification as the event of interest in those studies may be the reason for the discrepancy. Among the regimens containing d4T, 77% of 541 patients on d4T + 3TC + NVP and 64% of 157 patients on d4T + 3TC + EFV experienced all-cause treatment modification. In the all-cause analysis, regimens containing d4T had the highest probability of treatment modification, NRTI substitution, and regimen switching compared with regimens containing AZT and TDF, consistent with previous findings [15–18, 22–27], whereas the combination of TDF and EFV was the most robust to treatment modification. Apart from d4T, patients on EFV were less susceptible to treatment modification than patients on NVP, similar to what has been reported previously [27].
We also found that treatment modification occurred more frequently for AZT and TDF early after treatment initiation, while it occurred later in follow-up among patients on d4T, consistent with previous findings [27]. The superiority of AZT and TDF over d4T, however, should not be overshadowed by this finding. A further comparison of treatment combinations accounting for early ART complications shows that if a patient on a treatment combination containing AZT or TDF has not had any early ART complication leading to modification in the first 10 months post-ART, his/her probability of future treatment modification decreases considerably. On the other hand, the long-term treatment modification-free survival of a patient on a treatment combination containing d4T was unchanged by the fact that he/she had not experienced treatment modification in the first 10 months post-ART. Further, no significant difference in the timing of treatment modification was observed between NVP and EFV.
A unique feature of this study is that we managed to determine the type of treatment modification along with the reason for modification. Only in less than 7% of the treatment changes were we unable to determine the reason for treatment modification. Toxicity-related treatment modification was the most common reason, accounting for 48.94% of the changes, followed by comorbidity (new TB) at 14.31%, similar to what has been reported previously [15–18, 22–27]. About 50% of the patients on all regimens except TDF plus EFV reported toxicity or side effects. The largest number of treatment modifications due to toxicity was in patients on d4T: approximately 27% of those who originally started on the NVP and d4T combination had NRTI substitution with d4T replaced by AZT, and 15.92% and 14.01% of patients who originally started on the EFV combination with d4T had d4T replaced by AZT and TDF, respectively. The treatment combination of TDF and EFV was the most robust to toxicity-related treatment modification among the six first-line regimens. As previously noted, this combination seems the least toxic. This is a significant finding because TDF is a WHO-recommended preferred treatment, with AZT as an alternative [20]. Phasing out of d4T also accounted for 20.16% of the treatment changes observed in our study.
This study provided unique and important data on the durability of first-line ART and on the reasons responsible for antiretroviral treatment modification in a resource-limited setting. Furthermore, the study shows the use of multi-state models to study the evolution of a patient's state (treatment regimen) over time and to predict the probability of changing treatment. The proposed model allows us to model both the occurrence of different event types (such as single-drug substitution or regimen switch) and the occurrence of subsequent events, the latter potentially of different types, in a unified way.
Our findings must be interpreted in light of some limitations. Our model assumes that the future course of a patient depends only on the current state, not on the path taken to reach it (the Markov assumption). Deviations from this assumption could have led to bias.
Our study shows that the burden of toxicity/side effects related to d4T use is a matter of major concern, as it accounts for the majority of modifications. Safer and more tolerable regimens such as the combination of TDF and EFV should be made more accessible to treatment programs in resource-limited settings. Moving away from drugs with poor safety profiles, such as d4T, could reduce modification rates and improve regimen tolerability, while preserving future treatment options.
ART:
Antiretroviral therapy
NRTI:
Nucleotide reverse transcriptase inhibitors
NNRTI:
Non nucleotide reverse transcriptase inhibitors
VCT:
Voluntary counselling and testing
PMTCT:
Prevention of mother-to-child transmission
Wolbers M, Battegay M, Hirschel B, Furrer H, Cavassini M, Hasse B, et al. CD4+ T-cell count increase in HIV-1-infected patients with suppressed viral load within 1 year after start of antiretroviral therapy. Antivir Ther. 2007; 12(6):889.
Moore RD, Keruly JC. CD4+ cell count 6 years after commencement of highly active antiretroviral therapy in persons with sustained virologic suppression. Clin Infect Dis. 2007; 44(3):441–6.
Mocroft A, Phillips AN, Gatell J, Ledergerber B, Fisher M, Clumeck N et al. Normalisation of CD4 counts in patients with HIV-1 infection and maximum virological suppression who are taking combination antiretroviral therapy: an observational cohort study. The Lancet. 2007; 370(9585):407–13.
Lawn SD, Myer L, Bekker LG, Wood R. CD4 cell count recovery among HIV-infected patients with very advanced immunodeficiency commencing antiretroviral treatment in sub-Saharan Africa. BMC Infect Dis. 2006; 6(1):1.
Erhabor O, Ejele O, Nwauche C. The effects of highly active antiretroviral therapy (HAART) of stavudine, lamivudine and nevirapine on the CD4 lymphocyte count of HIV-infected Africans: the Nigerian experience. Niger J Clin Pract. 2006; 9(2):128–33.
Seid A, Getie M, Birlie B, Getachew Y. Joint modeling of longitudinal CD4 cell counts and time-to-default from HAART treatment: a comparison of separate and joint models. Electronic J Appl Stat Anal. 2014; 7(2):292–314.
Calmy A, Pinoges L, Szumilin E, Zachariah R, Ford N, Ferradini L, et al.Generic fixed-dose combination antiretroviral treatment in resource-poor settings: multicentric observational cohort. Aids. 2006; 20(8):1163–9.
Van Roon E, Verzijl J, Juttmann J, Lenderink A, Blans M, Egberts A. Incidence of discontinuation of highly active antiretroviral combination therapy (HAART) and its determinants. JAIDS J Acquir Immune Defic Syndr. 1999; 20(3):290–4.
Monforte Ad, Lepri AC, Rezza G, Pezzotti P, Antinori A, Phillips AN, et al.Insights into the reasons for discontinuation of the first highly active antiretroviral therapy (HAART) regimen in a cohort of antiretroviral naive patients. Aids. 2000; 14(5):499–507.
Fellay J, Ledergerber B, Bernasconi E, Furrer H, Battegay M, Hirschel B, et al. Prevalence of adverse events associated with potent antiretroviral treatment: Swiss HIV Cohort Study. The Lancet. 2001; 358(9290):1322–7.
Mocroft A, Youle M, Moore A, Sabin CA, Madge S, Lepri AC, et al. Reasons for modification and discontinuation of antiretrovirals: results from a single treatment centre. Aids. 2001; 15(2):185–94.
Mocroft A, Phillips A, Soriano V, Rockstroh J, Blaxhult A, Katlama C, et al.Reasons for stopping antiretrovirals used in an initial highly active antiretroviral regimen: increased incidence of stopping due to toxicity or patient/physician choice in patients with hepatitis C coinfection. AIDS Res Hum Retrovir. 2005; 21(9):743–52.
Cardoso SW, Grinsztejn B, Velasque L, Veloso VG, Luz PM, Friedman RK, et al.Incidence of modifying or discontinuing first HAART regimen and its determinants in a cohort of HIV-infected patients from Rio de Janeiro, Brazil. AIDS Res Hum Retrovir. 2010; 26(8):865–74.
Teklay G, Legesse B, Legesse M. Adverse effects and regimen switch among patients on antiretroviral treatment in a resource limited setting in Ethiopia. J Pharmacovigilance. 2014;2013.
Woldemedhin B, Wabe NT, et al. The reason for regimen change among HIV/AIDS patients initiated on first line highly active antiretroviral therapy in Southern Ethiopia. N Am J Med Sci. 2012; 4(1):19.
Assefa D, Hussein N. Reasons for Regimen Change among HIV/AIDS Patients Initiated on First Line Highly Active Antiretroviral Therapy in Fitche Hospital, Oromia, Ethiopia. Adv Pharmacol Pharm. 2014; 2(5):77–83.
Jima YT, Angamo MT, Wabe NT. Causes for antiretroviral regimen change among HIV/AIDS patients in Addis Ababa, Ethiopia. Tanzania J Health Res. 2013;15(1).
Wube M, Tesfaye A, Hawaze S. Antiretroviral therapy regimen change among HIV/AIDS patients in Nekemt Hospital: a primary care Hospital in Oromia Regional State, Ethiopia. J Appl Pharm Sci. 2013; 3(8):36.
O'Brien ME, Clark RA, Besch CL, Myers L, Kissinger P. Patterns and correlates of discontinuation of the initial HAART regimen in an urban outpatient cohort. JAIDS J Acquir Immune Defic Syndr. 2003; 34(4):407–14.
World Health Organization. Antiretroviral therapy for HIV infection in adults and adolescents: recommendations for a public health approach-2010 revision: Geneva: World Health Organization; 2010.
World Health Organization and others. Consolidated guidelines on the use of antiretroviral drugs for treating and preventing HIV infection: recommendations for a public health approach: World Health Organization; 2016.
Louwagie G, Zuma K, Okello V, Takuva S. Durability of first line antiretroviral therapy: reasons and predictive factors for modifications in a Swaziland cohort. J Antivirals Antiretrovirals. 2012;2012.
Velen K, Lewis JJ, Charalambous S, Grant AD, Churchyard GJ, Hoffmann CJ. Comparison of tenofovir, zidovudine, or stavudine as part of first-line antiretroviral therapy in a resource-limited-setting: a cohort study. PLoS ONE. 2013; 8(5):e64459.
Chi BH, Mwango A, Giganti M, Mulenga LB, Tambatamba-Chapula B, Reid SE, et al. Early clinical and programmatic outcomes with tenofovir-based antiretroviral therapy in Zambia. J Acquir Immune Defic Syndr. 2010; 54(1):63.
Bygrave H, Ford N, van Cutsem G, Hilderbrand K, Jouquet G, Goemaere E, et al. Implementing a tenofovir-based first-line regimen in rural Lesotho: clinical outcomes and toxicities after two years. JAIDS J Acquir Immune Defic Syndr. 2011; 56(3):e75–8.
Njuguna C, Orrell C, Kaplan R, Bekker LG, Wood R, Lawn SD. Rates of switching antiretroviral drugs in a primary care service in South Africa before and after introduction of tenofovir. PLoS One. 2013; 8(5):e63596.
Brennan AT, Maskew M, Ive P, Shearer K, Long L, Sanne I, et al. Increases in regimen durability associated with the introduction of tenofovir at a large public-sector clinic in Johannesburg, South Africa. J Intl AIDS Soc. 2013;16(1).
Hougaard P. Analysis of multivariate survival data: Springer Science & Business Media; 2012.
Andersen PK, Borgan O, Gill RD, Keiding N. Statistical models based on counting processes: Springer Science & Business Media; 2012.
De Wreede LC, Fiocco M, Putter H. The mstate package for estimation and prediction in non-and semi-parametric multi-state and competing risks models. Comput Methods Prog Biomed. 2010; 99(3):261–74.
Beyersmann J, Latouche A, Buchholz A, Schumacher M. Simulating competing risks data in survival analysis. Stat Med. 2009; 28(6):956–71.
Gill RD, Johansen S. A survey of product-integration with a view toward application in survival analysis. Ann Stat. 1990; 18(4):1501–55.
Putter H, van der Hage J, de Bock GH, Elgalta R, van de Velde CJ. Estimation and prediction in a multi-state model for breast cancer. Biom J. 2006; 48(3):366–80.
de Wreede LC, Fiocco M, Putter H, et al. mstate: an R package for the analysis of competing risks and multi-state models. J Stat Softw. 2011; 38(7):1–30.
The authors are grateful to Jimma University Specialized Hospital ART clinic for the permission to use the data and the staff members of the clinic for their support in extracting the information from patients medical card. Financial support to the first authors for his research visit from the Institutional University Cooperation of the Council of Flemish Universities (VLIR-IUC) is gratefully acknowledged. The authors are grateful to Kibralem Sisay for his support in the programming of the R function we used for data preparation.
This work was financially supported by the Jimma University Inter-university cooperation (IUC-JU).
The data sets analyzed in this study are available from the corresponding author on reasonable request. The R code used to analyze the data is provided as a supplement to the article.
BB contributed to the study concept and design, performed the analysis on the data set as well as wrote the first draft of the paper. TA contributed to the analysis and interpretation of the data, in addition to drafting and critical revision of the manuscript. RB and ZS contributed to the study concept and design, the statistical methodology and finalization of the writing. AS contributed to the analysis and interpretation of the Application section and critical revision of the manuscript. All authors read and approved the final manuscript.
Human subject research approval for this study was received from Jimma University Research Ethics Committee and the medical director of the Hospital. As the study was retrospective, informed consent was not obtained from the study participants, but data were anonymous and kept confidential.
Department of Statistics, Jimma University, Jimma, Ethiopia
Belay Birlie
I-BioStat, Hasselt University, Diepenbeek, Belgium
Roel Braekers & Ziv Shkedy
Institute of public Health, University of Gondar, Gondar, Ethiopia
Tadesse Awoke
Wolfson Research Institute for Health and Wellbeing, Durham University, Manchester, UK
Adetayo Kasim
Roel Braekers
Ziv Shkedy
Correspondence to Belay Birlie.
Additional file
Supplementary Appendix: Evaluation of the Durability of First-line Highly Active Antiretroviral Therapy in Southwest Ethiopia Using Multistate Survival Model. (PDF 218 KB)
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver(http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Birlie, B., Braekers, R., Awoke, T. et al. Multi-state models for the analysis of time-to-treatment modification among HIV patients under highly active antiretroviral therapy in Southwest Ethiopia. BMC Infect Dis 17, 453 (2017). https://doi.org/10.1186/s12879-017-2533-3
Treatment modification
Survival analysis
Multistate models
Pointwise definable models of set theory
J. D. Hamkins, D. Linetsky, and J. Reitz, "Pointwise definable models of set theory," Journal of Symbolic Logic, vol. 78, iss. 1, p. 139–156, 2013.
@article {HamkinsLinetskyReitz2013:PointwiseDefinableModelsOfSetTheory,
AUTHOR = {Hamkins, Joel David and Linetsky, David and Reitz, Jonas},
TITLE = {Pointwise definable models of set theory},
FJOURNAL = {Journal of Symbolic Logic},
ISSN = {0022-4812},
MRCLASS = {03E55},
MRREVIEWER = {Bernhard A. König},
DOI = {10.2178/jsl.7801090},
URL = {http://jdh.hamkins.org/pointwisedefinablemodelsofsettheory/},
YEAR = {2013},
VOLUME = {78},
NUMBER = {1},
PAGES = {139--156},
}
One occasionally hears the argument—let us call it the math-tea argument, for perhaps it is heard at a good math tea—that there must be real numbers that we cannot describe or define, because there are only countably many definitions, but uncountably many reals. Does it withstand scrutiny?
This article provides an answer. The article has a dual nature, with the first part aimed at a more general audience, and the second part providing a proof of the main theorem: every countable model of set theory has an extension in which every set and class is definable without parameters. The existence of these models therefore exhibits the difficulties in formalizing the math tea argument, and shows that robust violations of the math tea argument can occur in virtually any set-theoretic context.
A pointwise definable model is one in which every object is definable without parameters. In a model of set theory, this property strengthens V=HOD, but is not first-order expressible. Nevertheless, if ZFC is consistent, then there are continuum many pointwise definable models of ZFC. If there is a transitive model of ZFC, then there are continuum many pointwise definable transitive models of ZFC. What is more, every countable model of ZFC has a class forcing extension that is pointwise definable. Indeed, for the main contribution of this article, every countable model of Godel-Bernays set theory has a pointwise definable extension, in which every set and class is first-order definable without parameters.
This entry was posted in Publications and tagged David Linetsky, definability, forcing, Goedel-Bernays, HOD, Jonas Reitz by Joel David Hamkins. Bookmark the permalink.
12 thoughts on "Pointwise definable models of set theory"
Pingback: Pointwise definable models of set theory, extended abstract, Oberwolfach 2011
Pingback: Must there be non-definable numbers? Pointwise definability and the math tea argument, KGRC, Vienna 2011 | Joel David Hamkins
Pingback: Must there be numbers we cannot describe or define? Pointwise definability and the Math Tea argument, Bristol, April 2012 | Joel David Hamkins
Thomas Benjamin on February 8, 2013 at 7:20 am said:
Regarding Theorem 11, i.e. "Every Countable model of ZFC has a pointwise definable class forcing extension", can one, using Boolean-valued models show that every model of ZFC has a pointwise definable class forcing extension? Also, in the proof of Theorem 11 (since HOD is being forced) does the Leibniz-Mycielski axiom also hold in the class forcing extension and can the model constructed in Theorem 11 satisfy not-CH (since in the proof the construction forces failures of the GCH)?
Joel David Hamkins on May 8, 2014 at 7:43 am said:
Thomas, no, it would be impossible to extend the theorem to uncountable models, since pointwise definable models must be countable, as there are only countably many definitions. This is in a sense the residue of the math tea argument, but applied in the right way—one may enumerate the definable elements in a realm where one has the satisfaction relation, external to the model. Meanwhile, every pointwise definable model must have V=HOD, since in fact you don't need the ordinal parameters, and it is easy to arrange not CH or CH. One can have GCH or lots of failures of GCH simply by using a different coding method to force V=HOD.
YM on May 8, 2014 at 5:25 am said:
The mathematics is undoubtedly correct, but I'm not sold on the more philosophical conclusion that the math tea-argument does not withstand scrutiny. I will give a ZFC-centric rebuttal, but we can replace ZFC with pretty much any other sufficiently well-motivated set theory and the argument will still go through.
Here's my thinking. Just as a "complete lattice" is not just "a model of the first-order theory of complete lattices" but must necessarily be "a model of the second-order theory of complete lattices," similarly I would argue that a "model of set theory" is not just "a model of ZFC" but is necessarily "a model of ZFC-2" (i.e. second-order ZFC with its standard semantics). After all, the ultimate motivation for ZFC is of course ZFC-2; the latter is what we actually "believe," while the former is merely a recursively-enumerable approximation to (the consequences of) those beliefs. We put up with first-order set theory because we're interested in founding mathematics on set theory; the moment we want to know what's actually *true* in the universe of sets according to a different foundation (e.g. homotopy type theory), we immediately move to second-order set theory.
It should be clear where I'm going with this. Since the language of ZFC-2 is countable, but every model of ZFC-2 is uncountable, any model of set theory must have elements that cannot be defined without parameters. Along a similar vein, since every model of set theory contains an isomorphic copy of the real line, which is therefore uncountable, the tea party argument works perfectly; without parameters, set theory cannot "see" every real number.
Thanks for the comment. You are on Zermelo's side, because Zermelo's original formulation of the axioms of set theory used essentially a second-order formulation. This was later changed to the first-order formulation we know so well today, in large part because many mathematicians find $\text{ZFC}_2$ to be essentially incoherent. How are we to make sense of a second-order model of a theory, except with respect to some background concept of set? But it is the background concept of set we are trying to describe with our theory! So I think that your argument is sensible only to those who believe already that there is a unique absolute background concept of set. I have argued at length in my various papers on the set-theoretic multiverse that what we have is actually a plurality of concepts of set. To speak of a second-order logic is to speak of set theory itself. Daniel Isaacson has argued that CH and other set-theoretic assertions are settled on the basis of $\text{ZFC}_2$, since the initial segments of the cumulative universe are unique in models of full second-order logic, and either CH holds there or it doesn't. But meanwhile, we can't say which way CH is determined. I don't find this sense of "being determined in second-order logic" to be very meaningful.
"Daniel Isaacson has argued that CH and other set-theoretic assertions are settled on the basis of ZFC2."
Look, the way I see it, $\mathrm{ZFC_2}$ is just a finite list of strings. It doesn't even come with a deductive system to deduce further strings (unless we're interested in Henkin semantics, which we're not.) So from this viewpoint, nothing is settled on the basis of $\mathrm{ZFC_2}$, not even whether or not $\aleph_0$ comes before $2^{\aleph_0}$! So I'm going to have to strongly disagree with Daniel Isaacson's position; if $\mathrm{ZFC_2}$ is defined as above, it settles nothing.
And you know what, THAT'S OKAY. After all, $\mathrm{ZFC}_2$ isn't meant to settle anything. Notwithstanding Zermelo's original intentions, we know today that it's not a foundation on which mathematics can be built. Does it have any use, then? It sure does. In particular, $\mathrm{ZFC}_2$ tells us how to define the phrase "model of set theory" correctly, in much the same way that the second-order axioms for the real numbers tell us how to define the phrase "Dedekind-complete ordered field" correctly, and the second-order Peano axioms tell us how to define the phrase "model for arithmetic" correctly.
By the way, I think this is perfectly compatible with the multiverse perspective. However, my thoughts on the matter are very unclear, so maybe I'll just stop writing.
I'm not sure that your position is self-consistent. On the one hand, you want to regard the models of set theory as models of $\text{ZFC}_2$, which requires them to have *all* the subsets of natural numbers and all the sets of set of natural numbers, but on the other hand, you seem to want there to be different models of set theory with different outcomes for CH. These two expectations seem contradictory, since if every model of set theory has all the reals and all the sets of reals, then they will all agree on CH, which is determined by those sets.
Oh, it's perfectly consistent; the multiverse is populated by all models of $\mathrm{ZFC}$, but the phrase "model of set theory" ought to be reserved for models of $\mathrm{ZFC_2}$. That is really all I was trying to say; everything else I've said is just motivation, meant to explain why this convention is optimal.
Now you may say: "Well, if you're merely advocating a particular convention, then it's kind of no big deal, right? They're just definitions." However I think the stakes are very high. Here's why.
If my younger brother comes to me tomorrow and asks: "What is a real number system?" and I reply, "it's a real closed field," then I think you and I both agree that this was the wrong answer. The right answer was, of course, "it's a complete ordered field." Similarly, if he asks: "What is a model of set theory?" and I reply "It's a model of $\mathrm{ZFC}$," then by analogy with the previous case I think I have given the wrong answer. What I should have said was: "It's a model of $\mathrm{ZFC_2}$."
Pingback: Building models of ZFC with exactly 2 GBC-realizations | Recursively saturated and rather classless
Pingback: Definability and the Math Tea argument: must there be numbers we cannot describe or define? University of Warsaw, 22 January 2021 | Joel David Hamkins
Moser-lower.tex
Revision as of 06:18, 12 July 2009 by Mpeake
Just as for the DHJ problem, we found that Gamma sets $\Gamma_{a,b,c}$ were useful in providing large lower bounds for the Moser problem. This is despite the fact that the symmetries of the cube do not respect Gamma sets.
An integer program was run to obtain the optimal lower bounds achievable by the $A_B$ construction (using \eqref{cn3}, of course). The results for $1 \leq n \leq 20$ are displayed in Figure \ref{nlow-moser}:
\begin{figure}[tb] \centerline{ \begin{tabular}{|ll|ll|} \hline $n$ & lower bound & $n$ & lower bound \\ \hline 1 & 2 &11& 71766\\ 2 & 6 & 12& 212423\\ 3 & 16 & 13& 614875\\ 4 & 43 & 14& 1794212\\ 5 & 122& 15& 5321796\\ 6 & 353& 16& 15455256\\ 7 & 1017& 17& 45345052\\ 8 & 2902&18& 134438520\\ 9 & 8622&19& 391796798\\ 10& 24786& 20& 1153402148\\ \hline \end{tabular}} \caption{Lower bounds for $c'_n$ obtained by the $A_B$ construction.} \label{nlow-moser} \end{figure}
More complete data, including the list of optimisers, can be found at \centerline{{\tt http://abel.math.umu.se/~klasm/Data/HJ/}.}
Note that the lower bound $c'_{6,3} \geq 353$ was first located by a genetic algorithm: see Appendix \ref{genetic-alg}.
\begin{figure}[tb] \centerline{\includegraphics{moser353new.png}} \caption{One of the examples of $353$-point sets in $[3]^6$ (elements of the set being indicated by white squares).} \label{moser353-fig} \end{figure}
The proportion of extra points for each of the cells $\Gamma_{(a,n-i-4,i+4-a)}$ is no more than $2/(i+6)$. Only one cell in three is included from the $b=n-i-4$ layer, so we expect no more than $\binom{n}{i+4}2^{i+5}/(3i+18)$ new points, all from $S_{n,i+4}$. One can also find extra points from $S_{n,i+5}$ and higher spheres.
With the following values for $A(n,d)$: {\tiny{ \[ \begin{array}{llllllll} A(1,1)=2&&&&&&&\\ A(2,1)=4& A(2,2)=2&&&&&&\\ A(3,1)=8&A(3,2)=4&A(3,3)=2&&&&&\\ A(4,1)=16&A(4,2)=8& A(4,3)=2& A(4,4)=2&&&&\\ A(5,1)=32&A(5,2)=16& A(5,3)=4& A(5,4)=2&A(5,5)=2&&&\\ A(6,1)=64&A(6,2)=32& A(6,3)=8& A(6,4)=4&A(6,5)=2&A(6,6)=2&&\\ A(7,1)=128&A(7,2)=64& A(7,3)=16& A(7,4)=8&A(7,5)=2&A(7,6)=2&A(7,7)=2&\\ A(8,1)=256&A(8,2)=128& A(8,3)=20& A(8,4)=16&A(8,5)=4&A(8,6)=2 &A(8,7)=2&A(8,8)=2\\ A(9,1)=512&A(9,2)=256& A(9,3)=40& A(9,4)=20&A(9,5)=6&A(9,6)=4 &A(9,7)=2&A(9,8)=2\\ A(10,1)=1024&A(10,2)=512& A(10,3)=72& A(10,4)=40&A(10,5)=12&A(10,6)=6 &A(10,7)=2&A(10,8)=2\\ A(11,1)=2048&A(11,2)=1024& A(11,3)=144& A(11,4)=72&A(11,5)=24&A(11,6)=12 &A(11,7)=2&A(11,8)=2\\ A(12,1)=4096&A(12,2)=2048& A(12,3)=256& A(12,4)=144&A(12,5)=32&A(12,6)=24 &A(12,7)=4&A(12,8)=2\\ A(13,1)=8192&A(13,2)=4096& A(13,3)=512& A(13,4)=256&A(13,5)=64&A(13,6)=32 &A(13,7)=8&A(13,8)=4\\ \end{array} \] }}
Generally, $A(n,1)=2^n, A(n,2)=2^{n-1}, A(n-1,2e-1)=A(n,2e), A(n,d)=2$, if $d>\frac{2n}{3}$. The values were taken or derived from Andries Brouwer's table at \centerline{\tt http://www.win.tue.nl/$\sim$aeb/codes/binary-1.html} \textbf{include in references? or another book with explicit values of $A(n,d)$}
For $c'_{n,3}$ we obtain the following lower bounds: with $k=2$ \[ \begin{array}{llll} c'_{4,3}&\geq &\binom{4}{0}A(4,3)+\binom{4}{1}A(3,2)+\binom{4}{2}A(2,1) =1\cdot 2+4 \cdot 4+6\cdot 4&=42.\\ c'_{5,3}&\geq &\binom{5}{0}A(5,3)+\binom{5}{1}A(4,2)+\binom{5}{2}A(3,1) =1\cdot 4+5 \cdot 8+10\cdot 8&=124.\\ c'_{6,3}&\geq &\binom{6}{0}A(6,3)+\binom{6}{1}A(5,2)+\binom{6}{2}A(4,1) =1\cdot 8+6 \cdot 16+15\cdot 16&=344. \end{array} \] With k=3 \[ \begin{array}{llll} c'_{7,3}&\geq& \binom{7}{0}A(7,4)+\binom{7}{1}A(6,3)+\binom{7}{2}A(5,2) + \binom{7}{3}A(4,1)&=960.\\ c'_{8,3}&\geq &\binom{8}{0}A(8,4)+\binom{8}{1}A(7,3)+\binom{8}{2}A(6,2) + \binom{8}{3}A(5,1)&=2832.\\ c'_{9,3}&\geq & \binom{9}{0}A(9,4)+\binom{9}{1}A(8,3)+\binom{9}{2}A(7,2) + \binom{9}{3}A(6,1)&=7880. \end{array}\] With k=4 \[ \begin{array}{llll} c'_{10,3}&\geq &\binom{10}{0}A(10,5)+\binom{10}{1}A(9,4)+\binom{10}{2}A(8,3) + \binom{10}{3}A(7,2)+\binom{10}{4}A(6,1)&=22232.\\ c'_{11,3}&\geq &\binom{11}{0}A(11,5)+\binom{11}{1}A(10,4)+\binom{11}{2}A(9,3) + \binom{11}{3}A(8,2)+\binom{11}{4}A(7,1)&=66024.\\ c'_{12,3}&\geq &\binom{12}{0}A(12,5)+\binom{12}{1}A(11,4)+\binom{12}{2}A(10,3) + \binom{12}{3}A(9,2)+\binom{12}{4}A(8,1)&=188688.\\ \end{array}\] With $k=5$ \[ c'_{13,3}\geq 539168.\]
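Each bound above has the form $c'_{n,3} \geq \sum_{i=0}^{k}\binom{n}{i}A(n-i,\,k+1-i)$. As a sketch (not part of the original text; the $A(n,d)$ entries are transcribed from the table above), the totals can be recomputed mechanically:

```python
from math import comb

# A(n, d): maximal size of a binary code of length n and minimum distance d,
# transcribed from the table above (only the entries needed here).
A = {
    (2, 1): 4, (3, 1): 8, (3, 2): 4,
    (4, 1): 16, (4, 2): 8, (4, 3): 2,
    (5, 1): 32, (5, 2): 16, (5, 3): 4,
    (6, 1): 64, (6, 2): 32, (6, 3): 8,
    (7, 2): 64, (7, 3): 16, (7, 4): 8,
    (8, 3): 20, (8, 4): 16,
    (9, 4): 20,
}

def lower_bound(n, k):
    """c'_{n,3} >= sum_{i=0}^{k} C(n, i) * A(n - i, k + 1 - i)."""
    return sum(comb(n, i) * A[(n - i, k + 1 - i)] for i in range(k + 1))

print(lower_bound(4, 2))  # 42
print(lower_bound(5, 2))  # 124
print(lower_bound(6, 2))  # 344
print(lower_bound(7, 3))  # 960
print(lower_bound(8, 3))  # 2832
print(lower_bound(9, 3))  # 7880
```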
For $n=4$ the above does not yet give the exact value. The value $c'_{4,3}=43$ was first proven by Chandra \cite{chandra}. A uniform way of describing examples for the optimum values of $c'_{4,3}=43$ and $c'_{5,3}=124$ is the following:
\subsection{Higher $k$ values} One can consider subsets of $[k]^n$ that contain no geometric lines. Section \ref{moser-lower-sec} has considered the case $k=3$. Let $c'_{n,k}$ be the greatest number of points in $[k]^n$ with no geometric line. For example, $c'_{n,3} = c'_n$. We have the following lower bounds:
$c'_{n,4} \ge \binom{n}{n/2}2^n$. The set of points with $a$ $1$s, $b$ $2$s, $c$ $3$s and $d$ $4$s, where $a+d$ has the constant value $n/2$, contains no geometric line, because the points at the ends of a geometric line have more coordinates equal to $1$ or $4$ than the points in the middle of the line.
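As an illustrative sanity check (not in the original text), one can verify this construction by brute force for small $n$; recall that in a geometric line each coordinate is a constant, the parameter $t$, or its reflection $k+1-t$:

```python
from itertools import product

def geometric_lines(n, k):
    """Yield geometric lines in [k]^n as k-tuples of points.
    Template symbols: an integer constant, 'w' (coordinate = t),
    or 'r' (coordinate = k + 1 - t); at least one coordinate must move."""
    symbols = list(range(1, k + 1)) + ['w', 'r']
    for tmpl in product(symbols, repeat=n):
        if all(isinstance(s, int) for s in tmpl):
            continue  # every coordinate constant: not a line
        yield tuple(
            tuple(t if s == 'w' else k + 1 - t if s == 'r' else s for s in tmpl)
            for t in range(1, k + 1)
        )

n, k = 2, 4
# The construction: points whose number of 1s plus number of 4s equals n/2.
in_set = lambda x: sum(c in (1, 4) for c in x) == n // 2

# No geometric line lies entirely inside the set.
assert not any(all(in_set(p) for p in line) for line in geometric_lines(n, k))
print("no geometric line inside the set for n =", n)
```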
One can show a lower bound that, asymptotically, is twice as large. Take all points with $a$ $1$s, $b$ $2$s, $c$ $3$s and $d$ $4$s, for which:
We also have a DHJ(3)-like lower bound for $c'_{n,5}$, namely $c'_{n,5} \geq 5^{n-O(\sqrt{\log n})}$. Consider points with $a$ $1$s, $b$ $2$s, $c$ $3$s, $d$ $4$s and $e$ $5$s. For each point, take the value $a+e+2(b+d)+3c$. The first three points of any geometric line give values that form an arithmetic progression of length three.
Select a set of integers with no arithmetic progression of length 3, and take all points whose value belongs to that set; there will be no geometric line among those points. By Behrend's construction, such a set can be chosen so that the points have density $\exp(-O(\sqrt{\log n}))$.
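The arithmetic-progression claim can be checked by brute force (an illustrative sketch, not from the source): assign each point of $[5]^n$ the value $a+e+2(b+d)+3c$ and verify that the first three points of every geometric line give values in arithmetic progression:

```python
from itertools import product

# Digit weights: 1s and 5s count 1, 2s and 4s count 2, 3s count 3.
WEIGHT = {1: 1, 2: 2, 3: 3, 4: 2, 5: 1}
value = lambda x: sum(WEIGHT[c] for c in x)

def first_three(tmpl, k=5):
    """First three points of the geometric line described by tmpl
    ('w' = t, 'r' = k + 1 - t, an integer = constant coordinate)."""
    return [tuple(t if s == 'w' else k + 1 - t if s == 'r' else s for s in tmpl)
            for t in (1, 2, 3)]

n = 3
symbols = list(range(1, 6)) + ['w', 'r']
for tmpl in product(symbols, repeat=n):
    if all(isinstance(s, int) for s in tmpl):
        continue  # not a line: every coordinate constant
    v = [value(p) for p in first_three(tmpl)]
    assert v[1] - v[0] == v[2] - v[1]  # arithmetic progression
print("AP property verified for all geometric lines in [5]^3")
```

This works because a wildcard coordinate contributes $1,2,3$ to the value at $t=1,2,3$, and a reflected coordinate ($6-t$, i.e. $5,4,3$) contributes the same $1,2,3$; so the value is linear in $t$.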
The $k=6$ version of Moser implies DHJ(3). Indeed, any $k=3$ combinatorial line-free set can be "doubled up" into a $k=6$ geometric line-free set of the same density by pulling back the set from the map that maps 1, 2, 3, 4, 5, 6 to 1, 2, 3, 3, 2, 1 respectively; note that this map sends $k=6$ geometric lines to $k=3$ combinatorial lines. So $c'_{k,6} \geq 2^k c_k$, and more generally, $c'_{k,2n} \geq 2^k c_{k,n}$.
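The folding argument can likewise be checked by brute force for small $n$ (an illustrative sketch): apply the map $1,2,3,4,5,6 \mapsto 1,2,3,3,2,1$ coordinatewise and confirm that the first three points of every geometric line in $[6]^n$ map to a combinatorial line in $[3]^n$:

```python
from itertools import product

PHI = {1: 1, 2: 2, 3: 3, 4: 3, 5: 2, 6: 1}  # the folding map

def is_combinatorial_line(points):
    """points = the images for t = 1, 2, 3: each coordinate must be
    constant or equal to t, with at least one wildcard coordinate."""
    cols = list(zip(*points))
    constant_or_wild = lambda col: col in ((1, 1, 1), (2, 2, 2), (3, 3, 3), (1, 2, 3))
    return all(constant_or_wild(c) for c in cols) and (1, 2, 3) in cols

n, k = 2, 6
symbols = list(range(1, k + 1)) + ['w', 'r']
for tmpl in product(symbols, repeat=n):
    if all(isinstance(s, int) for s in tmpl):
        continue  # every coordinate constant: not a line
    # First three points of the geometric line, folded coordinatewise.
    images = [tuple(PHI[t if s == 'w' else k + 1 - t if s == 'r' else s] for s in tmpl)
              for t in (1, 2, 3)]
    assert is_combinatorial_line(images)
print("every geometric line in [6]^2 folds to a combinatorial line in [3]^2")
```

Both the wildcard ($t = 1,2,3$) and the reflected wildcard ($7-t = 6,5,4$) fold to the column $(1,2,3)$, which is exactly the combinatorial-line wildcard.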
Monitoring research on act on medical care and treatment for insane or quasi-insane persons who caused serious incidents in Japan
H. Noguchi, T. Oakada, A. Kikuchi, Y. Mino, M. Sano, F. Hisanaga, K. Yoshikawa
Journal: European Psychiatry / Volume 22 / Issue S1 / March 2007
Published online by Cambridge University Press: 16 April 2020, pp. S310-S311
The Act on medical care and treatment for insane or quasi-insane persons who have caused serious incidents in Japan went into effect in July 2005. It is critical to understand the current situation and the issues concerning medical care under this legal system ahead of the revision of the Act five years later. Therefore, this research aims to comprehensively collect, evaluate and analyze information from designated inpatient and outpatient medical institutions from a technical standpoint.
The subjects of this research are 50 cases from designated inpatient medical institutions and 4 cases from designated outpatient medical institutions who have been registered as subjects of treatment under the Act. The specific documentation for this research consisted of static information from the time treatment started and dynamic information, such as treatment evaluations usually created periodically in routine work. From this information, the variables required for analysis of the improvement of medical care and the operational situation of the Act were collected through the use of a database system.
Results and Conclusion
Since the Act was enacted only last year, this one-year research remains a short-term monitoring study targeting a few cases. In this report, the evaluation and treatment progress of the subjects are presented alongside information on mental illness and the judicial system drawn from the various kinds of data collected. From this information, we outline the current situation and issues in this legal system and show the usefulness of the results of this monitoring research.
Hemodynamic activation in infant's prefrontal and occipital cortices when viewing maternal facial expressions: A near-infrared spectroscopic study
H. Yamamoto, T. Yoshikawa, H. Ito, K. Nomura, K. Yasunaga, H. Kaneko
Published online by Cambridge University Press: 16 April 2020, p. 440
EPA-0621 - Ultra-Short Daily Briefings for Sick-Listed Employees with Psychological Problems Strengthen the Sense of Coherence in Occupational Healthcare
J. Nobori, H. Ishida, A. Inoue, T. Yoshikawa, E. Kimura, K. Ishihara
Journal: European Psychiatry / Volume 29 / Issue S1 / 2014
Published online by Cambridge University Press: 15 April 2020, pp. 1-2
There are no effective programs on return-to-work (RTW) despite an increase in stress-related disorders. We developed an original rehabilitation program, 'Ultra-short daily briefings care (USDBC)'. USDBC is based on a key concept of the European Framework for Psychosocial Risk Management (PRIMA-EF; WHO, 2008), which provides good-practice guidelines for the workplace. We carried out USDBC at a worksite of Panasonic Healthcare Co., Ltd. to determine whether USDBC facilitates RTW.
To develop and establish the appropriate intervention that reduces depressive severity of sick-listed employees.
The aim of the study was to determine whether USDBC strengthens the sense of coherence (SOC; Antonovsky, 1985).
We compared two groups in a cross-sectional study design: 16 depressed RTW employees (USDBC group) vs. 121 healthy employees (control group) (Fig. 1). The USDBC group received the instant face-to-face rehabilitation program on every workday (Fig. 2). The primary outcome was the ability to cope with stress, measured by the self-reported 13-item SOC scale for Japanese (Yamazaki, 1999).
In the USDBC group, a significant change in SOC score was observed between baseline and the measurement point (40.3 vs. 54.4; 95% CI −20.6 to −7.5), whereas in the control group no significant change was observed (58.3 vs. 57.9; 95% CI −0.1 to 0.9) (Table 1).
The study suggests that USDBC strengthens depressed employees' SOC.
Fig. 1. Flow diagram showing the selection of the USDBC group and control group.
Fig. 2. Intervention with USDBC: depressive employees follow this flow on every workday.
Table 1. Subjects' characteristics and SOC.
USDBC group (n=16): median age 39 (range 32–53); mean SOC 40.3 (SD 12.4) at baseline and 54.4 (SD 8.8) at the measurement point; 95% CI −20.6 to −7.5; p < 0.001.
Control group (n=121): median age 41 (range 21–59); mean SOC 58.3 (SD 9.4) at baseline and 57.9 (SD 10.1) at the measurement point; 95% CI −0.1 to 0.9; p = 0.10.
In the USDBC group, a significant change was found between baseline and the measurement point; in the control group, no significant change was found.
Converging on child mental health – toward shared global action for child development
G. Belkin, L. Wissow, C. Lund, L. Aber, Z. Bhutta, M. Black, C. Kieling, S. McGregor, A. Rahman, C. Servili, S. Walker, H. Yoshikawa
Journal: Global Mental Health / Volume 4 / 2017
Published online by Cambridge University Press: 19 October 2017, e20
We are a group of researchers and clinicians with collective experience in child survival, nutrition, cognitive and social development, and treatment of common mental conditions. We join together to welcome an expanded definition of child development to guide global approaches to child health and overall social development. We call for resolve to integrate maternal and child mental health with child health, nutrition, and development services and policies, and see this as fundamental to the health and sustainable development of societies. We suggest specific steps toward achieving this objective, with associated global organizational and resource commitments. In particular, we call for a Global Planning Summit to establish a much needed Global Alliance for Child Development and Mental Health in all Policies.
Numerical simulation of jets generated by a sphere moving vertically in a stratified fluid
H. Hanazaki, S. Nakamura, H. Yoshikawa
Journal: Journal of Fluid Mechanics / Volume 765 / 25 February 2015
Print publication: 25 February 2015
The flow past a sphere moving vertically at constant speeds in a salt-stratified fluid is investigated numerically at moderate Reynolds numbers $\mathit{Re}$. Time development of the flow shows that the violation of density conservation is the key process for the generation of the buoyant jet observed in the experiments. For example, if the sphere moves downward, isopycnal surfaces are simply deformed and dragged down by the sphere while the density is conserved along the flow. (The flow pattern is inverted if the sphere moves upward. Some explanations are given in the introduction.) Then, the flow will never become steady. As density diffusion becomes effective around the sphere surface and generates a horizontal hole in the isopycnal surface, fluid with non-conserved density is detached from the isopycnal surface and moves upward to generate a buoyant jet. These processes will constitute a steady state near the sphere. With lengths scaled by the sphere diameter and velocities by the downward sphere velocity, the duration of density conservation at the rear/upper stagnation point, or the maximum distance that the isopycnal surface is dragged downward, is proportional to the Froude number $\mathit{Fr}$, and estimated well by ${\rm\pi}\mathit{Fr}$ for $\mathit{Fr}\gtrsim 1$ and $\mathit{Re}\gtrsim 200$, corresponding to a constant potential energy. The radius of a jet defined by the density and velocity distributions, which would have correlations with the density and velocity boundary layers on the sphere, is estimated well by $\sqrt{\mathit{Fr}/2\mathit{Re}\,\mathit{Sc}}$ and $\sqrt{\mathit{Fr}/2\mathit{Re}}$ respectively for $\mathit{Fr}\lesssim 1$, where $\mathit{Sc}$ is the Schmidt number. Numerical results agree well with the previous experiments, and the origin of the conspicuous bell-shaped structure observed by the shadowgraph method is identified as an internal wave.
Insulin-like growth factor-1 and lipoprotein profile in cord blood of preterm small for gestational age infants
N. Nagano, T. Okada, R. Fukamachi, K. Yoshikawa, S. Munakata, Y. Usukura, S. Hosono, S. Takahashi, H. Mugishima, M. Matsuura, T. Yamamoto
Journal: Journal of Developmental Origins of Health and Disease / Volume 4 / Issue 6 / December 2013
Low birth weight is associated with cardiometabolic diseases in adulthood. Insulin-like growth factor-1 (IGF-1) has a crucial role in fetal growth and is also associated with cardiometabolic risks in adults. Therefore, we elucidated the association between IGF-1 level and serum lipids in cord blood of preterm infants. The subjects were 41 consecutive, healthy preterm neonates (27 male, 14 female) born at <37-week gestational age, including 10 small for gestational age (SGA) infants (<10th percentile). IGF-1 levels and serum lipids were measured in cord blood, and high-density lipoprotein cholesterol (HDLC), low-density lipoprotein cholesterol (LDLC) and very low-density lipoprotein triglyceride (VLDLTG) levels were determined by the HPLC method. SGA infants had lower IGF-1 (13.1 ± 5.3 ng/ml), total cholesterol (TC) (55.0 ± 14.8), LDLC (21.6 ± 8.3) and HDLC (26.3 ± 11.3) levels, and higher VLDLTG levels (19.0 ± 12.7 mg/dl) than appropriate for gestational age (AGA) infants (53.6 ± 25.6, 83.4 ± 18.9, 36.6 ± 11.1, 38.5 ± 11.6, 8.1 ± 7.0, respectively). In simple regression analyses, log IGF-1 correlated positively with birth weight (r = 0.721, P < 0.001), TC (r = 0.636, P < 0.001), LDLC (r = 0.453, P = 0.006), and HDLC levels (r = 0.648, P < 0.001), and negatively with log TG (r = −0.484, P = 0.002) and log VLDLTG (r = −0.393, P = 0.018). Multiple regression analyses demonstrated that IGF-1 was an independent predictor of TC, HDLC and TG levels after gestational age and birth weight were taken into account. In preterm SGA infants, the cord blood lipid profile was altered, with a concomitant decrease in IGF-1 level.
Comparison of microbiological influences on the transport properties of intact mudstone and sandstone and its relevance to the geological disposal of radioactive waste
J. Wragg, H. Harrison, J. M. West, H. Yoshikawa
Journal: Mineralogical Magazine / Volume 76 / Issue 8 / December 2012
Published online by Cambridge University Press: 05 July 2018, pp. 3251-3259
The role of microbial activity in the transport properties of host rocks for geological repositories, particularly in the far-field, is an area of active research. This paper compares results from experiments investigating changes in transport properties caused by microbial activity in sedimentary rocks from Japan (mudstones) and the UK (sandstone).
These experiments show that both Pseudomonas denitrificans and Pseudomonas aeruginosa appear to survive and thrive in pressurized flow-through column experiments which utilized host rock materials of relevance to radioactive waste disposal. Indeed, despite there being a difference in the numbers of organisms introduced into the two biotic experiments, numbers appear to stabilize at ∼10^5 ml^−1 at their completion. Post-experimental imaging has highlighted the distinct differences in biofilm morphology for the chosen rock types and bacteria, with Pseudomonas aeruginosa-derived biofilms completely covering the surface of the sandstone host and Pseudomonas denitrificans forming biofilament structures. Regardless of substrate host or choice of microbe, microbial activity results in measurable changes in permeability. Such activity appears to influence changes in fluid flow and suggests that the transport of radionuclides through the far-field will be complicated by the presence of microbes.
An Investigation of Microbial Effect as Biofilm Formation on Radionuclide Migration
H. Yoshikawa, M. Kawakita, K. Fujiwara, T. Sato, T. Asano, Y. Sasaki
Published online by Cambridge University Press: 28 March 2012, imrc11-1475-nw35-p57
The biological effects on high-level radioactive waste (HLW) disposal are much discussed, and biofilms are considered an uncertain factor in estimating the migration of radioactive elements. The objective of this research is to estimate the microbial effect on Cs migration in groundwater interacting with rock surfaces. Specifically, we focus on Cs behavior at a rock surface covered by biofilm. The most important factors are Cs sorption and diffusion into the microbes and/or their biofilm; the generation of bio-colloids with absorbed Cs, and the retardation of Cs by matrix diffusion in the rock, will be influenced by these phenomena. We introduce a scenario analysis for biofilm and a simple Cs diffusion test with and without sulfate-reducing bacteria (SRB), which are well known to readily produce biofilm on rock surfaces, in order to clarify the effect of the bacteria's presence at the rock surface. The Cs diffusion experiment, using Desulfovibrio desulfuricans as the SRB, indicated that the microbial effect on diffusion through the biofilm was small under the experimental conditions. We consider that Cs readily contacts the rock surface even when it is covered by biofilm, and that retardation by the matrix-diffusion scenario is not affected.
Infrared ellipsometry and near-infrared-to-vacuum-ultraviolet ellipsometry study of free-charge carrier properties in In-polar p-type InN
Stefan Schöche, Tino Hofmann, Nebiha Ben Sedrine, Vanya Darakchieva, Xinqiang Wang, Akihiko Yoshikawa, Mathias Schubert
Published online by Cambridge University Press: 16 January 2012, mrsf11-1396-o07-27
We apply infrared spectroscopic ellipsometry (IRSE) in combination with near-infrared to vacuum-ultraviolet ellipsometry to study the concentration and mobility of holes in a set of Mg-doped In-polar InN samples of different Mg-concentrations. P-type behavior is found in the IRSE spectra for Mg-concentrations between 1×10^18 cm^−3 and 3×10^19 cm^−3. The free-charge carrier parameters are determined using a parameterized model that accounts for phonon-plasmon coupling. From the NIR-VUV data, information about layer thicknesses, surface roughness, and structural InN layer properties is extracted and related to the IRSE results.
Microbiological influences on fracture surfaces of intact mudstone and the implications for geological disposal of radioactive waste
H. Harrison, D. Wagner, H. Yoshikawa, J. M. West, A. E. Milodowski, Y. Sasaki, G. Turner, A. Lacinska, S. Holyoake, J. Harrington, D. Noy, P. Coombs, K. Bateman, K. Aoki
Journal: Mineralogical Magazine / Volume 75 / Issue 4 / August 2011
The significance of the potential impacts of microbial activity on the transport properties of host rocks for geological repositories is an area of active research. Most recent work has focused on granitic environments. This paper describes pilot studies investigating changes in transport properties that are produced by microbial activity in sedimentary rock environments in northern Japan. For the first time, these short experiments (39 days maximum) have shown that the denitrifying bacteria, Pseudomonas denitrificans, can survive and thrive when injected into flow-through column experiments containing fractured diatomaceous mudstone and synthetic groundwater under pressurized conditions. Although there were few significant changes in the fluid chemistry, changes in the permeability of the biotic column, which can be explained by the observed biofilm formation, were quantitatively monitored. These same methodologies could also be adapted to obtain information from cores originating from a variety of geological environments including oil reservoirs, aquifers and toxic waste disposal sites to provide an understanding of the impact of microbial activity on the transport of a range of solutes, such as groundwater contaminants and gases (e.g. injected carbon dioxide).
Conversion of Bordetella pertussis to Bordetella parapertussis
Norichika H. Kumazawa, Masanosuke Yoshikawa
Journal: Epidemiology & Infection / Volume 81 / Issue 1 / August 1978
Published online by Cambridge University Press: 15 May 2009, pp. 15-23
The epidemiological and drug susceptibility data on whooping cough suggested a possibility that Bordetella pertussis converts in some way to Bordetella parapertussis. To prove this, B. pertussis strain 75 was treated with N-methyl-N'-nitro-N-nitrosoguanidine, and a mutant resistant to staphcillin V and eight mutants resistant to trimethoprim were isolated. The staphcillin V-resistant mutant of B. pertussis agreed with all of the criteria of B. parapertussis, and the trimethoprim-resistant mutants also agreed with many of these criteria. Thus, a hypothesis is presented that B. parapertussis is a mutant of B. pertussis which appeared in nature probably under a selective pressure of antibiotics.
Evaluation of Chemical State Analysis and Imaging by Micro XPS
H Iwai, H Yoshikawa, S Fukushima, S Tanuma
Journal: Microscopy and Microanalysis / Volume 14 / Issue S2 / August 2008
Extended abstract of a paper presented at Microscopy and Microanalysis 2008 in Albuquerque, New Mexico, USA, August 3 – August 7, 2008
Kinetics of Aqueous Alteration of P0798 Simulated Waste Glass in the Presence of Bentonite
K. Yamaguchi, Y. Inagaki, T. Saruwatari, K. Idemitsu, T. Arima, H. Yoshikawa, M. Yui
Journal: MRS Online Proceedings Library Archive / Volume 932 / 2006
Published online by Cambridge University Press: 21 March 2011, 83.1
Static aqueous alteration tests were performed with a Japanese simulated HLW glass, P0798, in the presence of bentonite in order to understand the effects of bentonite on the glass alteration kinetics and on the associated Cs release. Analogous alteration tests were performed in 0.001M NaOH solution without bentonite for comparison. The results indicated that; 1) at the initial stage of alteration up to 50 days, no remarkable difference was observed in the alteration rate between both cases "with" and "without" bentonite, 2) at the later stage beyond 50 days, however, the rate in the case "with" bentonite was larger than that in the case "without" bentonite. These results on the alteration rate were analyzed by use of a water-diffusion model. In the case "without" bentonite, a good agreement was observed between the model analysis and the experimental results at the initial stage of alteration up to 50 days, however, the model analysis deviated from the experimental results at the later stage beyond 50 days. In the case "with" bentonite, on the other hand, a good agreement was observed even at the later stage to give the value of the apparent diffusion coefficient, D_i, of 3.5×10^−21 m^2/s. The comparison between both cases suggests that the alteration rate is controlled by the water diffusion in both cases "with" and "without" bentonite, however, the rate is depressed in the case "without" bentonite probably by the protective effects of the alteration layer developing at the glass surface. In the case "with" bentonite, on the other hand, the alteration layer is expected to be less protective. Cesium desorption tests performed for the altered glass and bentonite indicated that most of the cesium dissolved from the glass is retained in the secondary phase of smectite developing in the precipitated layer by sorption with ion-exchange in the case "without" bentonite. In the case "with" bentonite, however, it is likely to be sorbed at the bentonite surface.
Temperature Dependence of Long-Term Alteration Rate for Aqueous Alteration of P0798 Simulated Waste Glass under Smectite Forming Conditions
Y. Inagaki, T. Saruwatari, K. Idemitsu, T. Arima, A. Shinkai, H. Yoshikawa, M. Yui
Several kinetic models have been proposed to evaluate the aqueous dissolution/alteration rate of nuclear waste glass for long-term. However, reaction processes controlling the long-term rate are much more subjected to controversy. Temperature dependence of the long-term alteration rate is an essential issue to understand the rate controlling processes. In the present study, the static aqueous alteration tests were performed with a Japanese simulated waste glass P0798 as a function of temperature from 60°C to 120°C, and the temperature dependence of the long-term alteration rate was evaluated to understand the rate controlling processes. The tests were performed in 0.001M NaOH solution to maintain a constant solution pH of around 10 during the test period and to provide smectite forming conditions where smectite forms as the major secondary phase without zeolite formation. From the test results on dissolution of boron, the alteration rate at each temperature was analyzed by use of a water-diffusion model. The water-diffusion model used is based on a simple assumption; the glass alteration is controlled by water diffusion with ion-exchange between water (hydronium ion: H3O+) and soluble elements (B, Na, Li, etc) at the glass surface layer with the apparent diffusion coefficient D_i. A good agreement was observed between the model analysis and the test results, and the value of D_i was evaluated to be 1.2×10^−22 m^2/s at 60°C to 1.8×10^−21 m^2/s at 120°C. The Arrhenius plot of D_i showed a good linearity to give the activation energy of 49 kJ/mol, which value is close to that for the residual dissolution rate of French waste glass (53 kJ/mol) by Gin [1], and is very close to that for ion-exchange in sodium aluminosilicate glass (49 kJ/mol) by McGrail [2]. These results suggest that water diffusion with ion-exchange can be the dominant process controlling the alteration rate under smectite forming conditions.
At elevated temperatures (100°C and 120°C), however, the model-predicted boron releases deviated from the experimental data at the later stage beyond 50-80 days, which suggests that the alteration layer developing at the glass surface may evolve to be protective against the water diffusion to depress the alteration rate as the alteration proceeds.
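As a quick arithmetic check (not part of the original abstract), the quoted activation energy follows from the two endpoint diffusion coefficients via the Arrhenius relation D = D0·exp(−Ea/RT):

```python
import math

R = 8.314                              # gas constant, J/(mol K)
T1, T2 = 60 + 273.15, 120 + 273.15     # temperatures, K
D1, D2 = 1.2e-22, 1.8e-21              # apparent diffusion coefficients, m^2/s

# Arrhenius: D = D0 * exp(-Ea / (R T))  =>  Ea = R * ln(D2 / D1) / (1/T1 - 1/T2)
Ea = R * math.log(D2 / D1) / (1 / T1 - 1 / T2)
print(round(Ea / 1000), "kJ/mol")  # ~49 kJ/mol, matching the value quoted above
```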
Commission 20: Positions and Motions of Minor Planets, Comets and Satellites
Giovanni B. Valsecchi, Julio A. Fernández, J.-E. Arlot, E.L.G. Bowell, Y. Chernetenko, S.R. Chesley, J.A. Fernández, D. Lazzaro, A. Lemaître, B.G. Marsden, K. Muinonen, H. Rickman, D.J. Tholen, G.B. Valsecchi, M. Yoshikawa
Journal: Proceedings of the International Astronomical Union / Volume 1 / Issue T26A / December 2005
The past triennium has continued to see a huge influx of astrometric positions of small solar system bodies provided by near-Earth object (NEO) surveys. As a result, the size of the orbital databases of all populations of small solar system bodies continues to increase dramatically, and this in turn allows finer and finer analyses of the types of motion in various regions of the orbital elements space.
Dependence of Faraday effect on the orientation of terbium–scandium–aluminum garnet single crystal
Y. Kagamitani, D.A. Pawlak, H. Sato, A. Yoshikawa, J. Martinek, H. Machida, T. Fukuda
Journal: Journal of Materials Research / Volume 19 / Issue 2 / February 2004
Print publication: February 2004
To investigate the directional dependence of the Faraday effect in terbium–scandium–aluminum garnet (TSAG) single crystals, grown by the Czochralski method, the Verdet constant was measured along the 〈111〉, 〈110〉, and 〈100〉 orientations. The extinction ratio and magnetic susceptibility were also measured. From the linear dependence of the Verdet constant on the inverse wavelength squared, 1/λ^2, the 〈111〉 direction shows the highest value of the Verdet constant (for λ = 649.1 nm, V_av = 8.256×10^−3 deg·Oe^−1·cm^−1). Significant anisotropy of the magnetic susceptibility was not observed. The extinction ratio of TSAG shows its highest value for the 〈111〉 orientation, 38.7 dB, which implies that it can be used as an optical isolator.
Growth of Yb:Y2O3 single crystals by the micro-pulling-down method
A. Novoselov, J. H. Mun, A. Yoshikawa, G. Boulon, T. Fukuda
Published online by Cambridge University Press: 01 February 2011, FF6.12
(Yb_xY_{1−x})_2O_3 (x = 0.0, 0.005, 0.05, 0.08 and 0.15) promising single-crystal laser rods of 4.2 mm in diameter and 15–20 mm in length have been grown from a rhenium crucible by the micro-pulling-down method in an Ar + H2 (3–4%) atmosphere. A linear decrease of the lattice constant with increasing Yb3+ content was observed. High homogeneity of the Yb3+ dopant distribution has been demonstrated. Absorption, emission and Raman spectra have been recorded, and the decay time was approximated.
Sequence heterogeneity of the small subunit ribosomal RNA genes among Blastocystis isolates
N. ARISUE, T. HASHIMOTO, H. YOSHIKAWA
Journal: Parasitology / Volume 126 / Issue 1 / January 2003
Published online by Cambridge University Press: 17 February 2003, pp. 1-9
Genes encoding small subunit ribosomal RNA (SSUrRNA) of 16 Blastocystis isolates from humans and other animals were amplified by the polymerase chain reaction, and the corresponding fragments were cloned and sequenced. Alignment of these sequences with the previously reported ones indicated the presence of 7 different sequence patterns in the highly variable regions of the small subunit ribosomal RNA. Phylogenetic reconstruction analysis using Proteromonas lacertae as the outgroup clearly demonstrated that the 7 groups with the different sequence patterns are separated to form independent clades, 5 of which consisted of the Blastocystis isolates from both humans (B. hominis) and other animals. The presence of 3 higher order clades was also clearly supported in the phylogenetic tree. However, a relationship among the 4 groups including these 3 higher order clades was not settled with statistical confidence. The remarkable heterogeneity of small subunit ribosomal RNAs among different Blastocystis isolates found in this study confirmed, with sequence-based evidence, that these organisms are genetically highly divergent in spite of their morphological identity. The highly variable small subunit ribosomal RNA regions among the distinct groups will provide useful information for the development of group-specific diagnostic primers.
Temperature Dependence of the Optical Properties for InN Films Grown by RF-MBE
Y. Ishitani, K. Xu, W. Terashima, H. Masuyama, M. Yoshitani, N. Hashimoto, S. B. Che, A. Yoshikawa
Published online by Cambridge University Press: 01 February 2011, Y12.5
InN epitaxial layers are grown on sapphire substrates. The investigated samples have electron concentrations in the range 1.8×10^18 – 1.1×10^19 cm^−3. Optical reflection and transmission measurements are performed. The plasma edge energy position in the spectra is constant over the measurement temperature range of 5–300 K. The reflection and transmission spectra are calculated on the basis of the LO phonon-electron coupling scheme and a non-parabolic conduction band structure. From this analysis we find that the observed absorption edge is attributed to a valence band to conduction band transition rather than a valence band to defect (impurity) band transition, and an intrinsic bandgap energy of 0.64 (±0.03) eV. This bandgap energy increases by 40–50 meV as the temperature decreases from 295 to 10 K.
Galaxy Evolution in the Infrared: Number Counts and Cosmic Infrared Background
T. T. Takeuchi, H. Hirashita, T. T. Ishii, K. Yoshikawa
Journal: Symposium - International Astronomical Union / Volume 204 / 2001
Published online by Cambridge University Press: 13 May 2016, p. 303
Recently reported infrared galaxy number counts and cosmic infrared background (CIRB) measures all suggest that galaxies have experienced a strong evolutionary phase. We statistically estimated the galaxy evolution history from these data. We treated the evolution of galaxy luminosity as a stepwise nonparametric form, in order to explore the most suitable evolutionary history which satisfies the constraint from the CIRB. We found that an order of magnitude increase of the far infrared luminosity at redshift z = 0.75 - 1.0 was necessary to reproduce the very high CIRB intensity at ~ 150 μm reported by Hauser et al. (1998). We note that too large an evolutionary factor at high z overpredicts the CIRB intensity around 1 mm. The evolutionary history also satisfies the constraints from galaxy number counts obtained by IRAS, ISO and SCUBA. The rapid evolution of the IR luminosity density required from the CIRB well reproduces the very steep slope of galaxy number counts obtained by ISO. Based on this result and the evolution of optical luminosity density, we quantitatively discuss the contribution of starburst galaxies. In addition, we present the performance of the Japanese IRIS galaxy survey.
Asian-Australasian Journal of Animal Sciences (아세아태평양축산학회지)
Asian Australasian Association of Animal Production Societies (아세아태평양축산학회)
Ileal Endogenous Amino Acid Flow Response to Nitrogen-free Diets with Differing Ratios of Corn Starch to Dextrose in Pigs
Kong, C. (Department of Animal Sciences, Purdue University) ;
Ragland, D. (Department of Veterinary Clinical Sciences, Purdue University) ;
Adeola, O. (Department of Animal Sciences, Purdue University)
https://doi.org/10.5713/ajas.2014.14232
The objective of this study was to determine the responses in the digestibility of dry matter (DM) and the amino acid (AA) composition of ileal endogenous flow (IEF) of pigs (initial body weight, 69.1 ± 6.46 kg) fed N-free diets (NFD) formulated with different ratios of corn starch to dextrose. Fifteen pigs fitted with a T-cannula at the distal ileum were fed 5 diets according to a triplicated 5 × 2 incomplete Latin-square design. Each period consisted of a 5-d adjustment period and 2 d of ileal digesta collection for 12 h on each of d 6 and 7; between periods, there was a 5-d recovery period to avoid abnormal weight loss. The ratios of corn starch to dextrose investigated were 0:879, 293:586, 586:293, 779:100, and 879:0 for diet numbers 1, 2, 3, 4, and 5, respectively, and chromic oxide (5 g/kg) was used as an indigestible index. Ileal DM digestibility was greater in Diet 1 than in Diet 4 (89.5% vs 87.3%, p<0.01), but neither differed from Diets 2, 3, or 5. The IEF for most indispensable AA did not differ among diets, with the exception of Met, for which a lack of corn starch or dextrose gave lower (p = 0.028) IEF of Met than diets containing both corn starch and dextrose. Likewise, the dispensable AA and total AA in the IEF did not differ among diets. The respective IEF of AA (mg/kg of dry matter intake) in pigs fed Diets 1, 2, 3, 4, or 5 were 301, 434, 377, 477, or 365 for Lys; 61, 89, 71, 87, or 61 for Met; and 477, 590, 472, 520, or 436 for Thr. Proline was the most abundant AA in the IEF, followed by Gly, Glu, and Asp; together they accounted for approximately 50% of the total ileal AA flows of pigs fed the NFD. In conclusion, the variation in the proportions of corn starch and dextrose in an NFD does not largely affect estimates of the IEF of N and AA for growing-finishing pigs.
Ileal Digestibility; Basal Endogenous Amino Acid Loss; Swine
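The chromic oxide marker enters the calculations through the standard index method, which the abstract presupposes but does not spell out: a digesta quantity is scaled by the ratio of marker concentration in the diet to that in the digesta. A minimal sketch of that arithmetic (function and variable names are illustrative, not from the paper):

```python
def index_method_flow(conc_digesta, marker_diet, marker_digesta):
    """Ileal flow per kg of dry matter intake via an indigestible marker.

    conc_digesta: analyte concentration in ileal digesta (e.g. mg AA/kg DM)
    marker_diet, marker_digesta: marker (Cr2O3) concentrations in the diet
    and in the digesta, in the same units as each other.
    """
    return conc_digesta * (marker_diet / marker_digesta)

def apparent_dm_digestibility(marker_diet, marker_digesta):
    """Apparent ileal DM digestibility as a fraction (marker-ratio method)."""
    return 1.0 - marker_diet / marker_digesta
```

For example, a digesta AA concentration of 100 mg/kg DM with the marker concentrated tenfold from diet to digesta gives a flow of 50 mg/kg DM intake.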
Zhang, Y., D. Li, S. Fan, X. Piao, J. Wang, and I. K. Han. 2002. Effects of casein and protein-free diets on endogenous amino acid losses in pigs. Asian Australas. J. Anim. Sci.15:1634-1638. https://doi.org/10.5713/ajas.2002.1634
Lien, K. A., W. C. Sauer, and M. Fenton. 1997. Mucin output in ileal digesta of pigs fed a protein-free diet. Z. Ernahrungswiss. 36:182-190. https://doi.org/10.1007/BF01611398
Moughan, P. J. and S. M. Rutherfurd. 2012. Gut luminal endogenous protein: Implications for the determination of ileal amino acid digestibility in humans. Br. J. Nutr. 108:S258-S263. https://doi.org/10.1017/S0007114512002474
NRC. 1998. Nutrient Requirements of Swine. 10th ed. National Academy Press, Washington, DC, USA.
Nyachoti, C. M., C. F. M. de Lange, B. W. McBride, and H. Schulze. 1997. Significance of endogenous gut nitrogen losses in the nutrition of growing pigs: A review. Can. J. Anim. Sci. 77:149-163. https://doi.org/10.4141/A96-044
Park, C. S., S. I. Oh, and B. G. Kim. 2013. Prediction of basal endogenous losses of amino acids based on body weight and feed intake in pigs fed nitrogen-free diets. Rev. Colomb. Cienc. Pecu. 26:186-192.
Rothman, S., C. Liebow, and L. Isenman. 2002. Conservation of digestive enzymes. Physiol. Rev. 82:1-18. https://doi.org/10.1152/physrev.00022.2001
Stein, H. H., C. Pedersen, A. R. Wirt, and R. A. Bohlke. 2005. Additivity of values for apparent and standardized ileal digestibility of amino acids in mixed diets fed to growing pigs. J. Anim. Sci. 83:2387-2395.
Stein, H. H., B. Seve, M. F. Fuller, P. J. Moughan, and C. F. M. de Lange. 2007. Invited review: Amino acid bioavailability and digestibility in pig feed ingredients: Terminology and application. J. Anim. Sci. 85:172-180. https://doi.org/10.2527/jas.2005-742
Wilfart, A., L. Montagne, H. Simmins, J. Noblet, and J. van Milgen. 2007. Digesta transit in different segments of the gastrointestinal tract of pigs as affected by insoluble fibre supplied by wheat bran. Br. J. Nutr. 98:54-62. https://doi.org/10.1017/S0007114507682981
Zhai, H. and O. Adeola. 2011. Apparent and standardized ileal digestibilities of amino acids for pigs fed corn- and soybean meal-based diets at varying crude protein levels. J. Anim. Sci. 89:3626-3633. https://doi.org/10.2527/jas.2010-3732
de Lange, C. F. M., W. C. Sauer, R. Mosenthin, and W. B. Souffrant. 1989. The effect of feeding different protein-free diets on the recovery and amino acid composition of endogenous protein collected from the distal ileum and feces in pigs. J. Anim. Sci. 67:746-754.
Dilger, R. N., J. S. Sands, D. Ragland, and O. Adeola. 2004. Digestibility of nitrogen and amino acids in soybean meal with added soyhulls. J. Anim. Sci. 82:715-724.
Fenton, T. W. and M. Fenton. 1979. An improved procedure for the determination of chromic oxide in feed and feces. Can. J. Anim. Sci. 59:631-634. https://doi.org/10.4141/cjas79-081
Hofmann, A. F., L. R. Hagey, and M. D. Krasowski. 2010. Bile salts of vertebrates: structural variation and possible evolutionary significance. J. Lipid Res. 51:226-246. https://doi.org/10.1194/jlr.R000042
Hughes, R. J. 2008. Relationship between digesta transit time and apparent metabolisable energy value of wheat in chickens. Br. Poult. Sci. 49:716-720. https://doi.org/10.1080/00071660802449145
Jansman, A. J. M., W. Smink, P. van Leeuwen, and M. Rademacher. 2002. Evaluation through literature data of the amount and amino acid composition of basal endogenous crude protein at the terminal ileum of pigs. Anim. Feed Sci. Technol. 98:49-60. https://doi.org/10.1016/S0377-8401(02)00015-9
Kim, B. G., M. D. Lindemann, G. L. Cromwell, A. Balfagon, and J. H. Agudelo. 2007. The correlation between passage rate of digesta and dry matter digestibility in various stages of swine. Livest. Sci. 109:81-84. https://doi.org/10.1016/j.livsci.2007.01.082
Kong, C. and O. Adeola. 2013. Ileal endogenous amino acid flow response to nitrogen-free diets with differing ratios of corn starch to dextrose in broiler chickens. Poult. Sci. 92:1276-1282. https://doi.org/10.3382/ps.2012-02835
Li, S. F., Y. B. Niu, J. S. Liu, L. Lu, L. Y. Zhang, C. Y. Ran, M. S. Feng, B. Du, J. L. Deng, and X. G. Luo. 2013. Energy, amino acid, and phosphorus digestibility of phytase transgenic corn for growing pigs. J. Anim. Sci. 91:298-308. https://doi.org/10.2527/jas.2012-5211
Adedokun, S. A., O. Adeola, C. M. Parsons, M. S. Lilburn, and T. J. Applegate. 2011. Factors affecting endogenous amino acid flow in chickens and the need for consistency in methodology. Poult. Sci. 90:1737-1748. https://doi.org/10.3382/ps.2010-01245
AOAC. 2006. Official Methods of Analysis. 18th ed. Association of Official Analytical Chemists, Washington, DC, USA.
Bennick, A. 1982. Salivary proline-rich proteins. Mol. Cell. Biochem. 45:83-99.
Cozannet, P., Y. Primot, C. Gady, J. P. Metayer, P. Callu, M. Lessire, F. Skiba, and J. Noblet. 2010. Ileal digestibility of amino acids in wheat distillers dried grains with solubles for pigs. Anim. Feed Sci. Technol. 158:177-186. https://doi.org/10.1016/j.anifeedsci.2010.04.009
Corn Refiners Association. 2013. Corn starch. http://www.corn.org/wp-content/uploads/2013/12/StarchBooklet2013.pdf. Accessed March 17, 2014.
December 2015, 35(12): 5631-5663. doi: 10.3934/dcds.2015.35.5631
Density estimates for vector minimizers and applications
Nicholas D. Alikakos 1 and Giorgio Fusco 2
Department of Mathematics, University of Athens, Panepistemiopolis, 15784 Athens, Greece
Università degli Studi dell'Aquila, Via Vetoio, 67010 Coppito, L'Aquila, Italy
Received April 2014 Revised September 2014 Published May 2015
We extend the Caffarelli--Córdoba estimates to the vector case (L. Caffarelli and A. Córdoba, Uniform Convergence of a singular perturbation problem, Comm. Pure Appl. Math. 48 (1995)). In particular, we establish lower codimension density estimates. These are useful for studying the hierarchical structure of minimal solutions. We also give applications.
Keywords: Allen-Cahn functional, vector minimizers, structure of minimizers, polar form.
Mathematics Subject Classification: 35J20, 35J47, 35J5.
Citation: Nicholas D. Alikakos, Giorgio Fusco. Density estimates for vector minimizers and applications. Discrete & Continuous Dynamical Systems - A, 2015, 35 (12) : 5631-5663. doi: 10.3934/dcds.2015.35.5631
S. Alama, L. Bronsard and C. Gui, Stationary solutions in $\mathbb{R}^2$ for an Allen-Cahn system with multiple well potential, Calc. Var. Part. Diff. Eqs., 5 (1997), 359. doi: 10.1007/s005260050071.
N. D. Alikakos, Some basic facts on the system $\Delta u-W_u(u)=0$, Proc. Amer. Math. Soc., 139 (2011), 153. doi: 10.1090/S0002-9939-2010-10453-7.
N. D. Alikakos, On the structure of phase transition maps for three or more coexisting phases, in Geometric Partial Differential Equations (eds. M. Novaga and G. Orlandi), (2013), 1. doi: 10.1007/978-88-7642-473-1_1.
N. D. Alikakos, A new proof for the existence of an equivariant entire solution connecting the minima of the potential for the system $\Delta u-W_u(u)=0$, Comm. Partial Diff. Eqs., 37 (2012), 2093. doi: 10.1080/03605302.2012.721851.
N. D. Alikakos and G. Fusco, Entire solutions to equivariant elliptic systems with variational structure, Arch. Rat. Mech. Analysis, 202 (2011), 567. doi: 10.1007/s00205-011-0441-z.
N. D. Alikakos and G. Fusco, in preparation.
N. D. Alikakos and G. Fusco, On the connection problem for potentials with several global minima, Indiana Univ. Math. Journal, 57 (2008), 1871. doi: 10.1512/iumj.2008.57.3181.
N. D. Alikakos and G. Fusco, Asymptotic and rigidity results for symmetric solutions of the elliptic system $\Delta u=W_u(u)$, Ann. Scuola Norm. Sup. Pisa Cl. Sci., 9 (2009), 1.
N. D. Alikakos and G. Fusco, A maximum principle for systems with variational structure and an application to standing waves, to appear in JEMS.
S. Baldo, Minimal interface criterion for phase transitions in mixtures of Cahn-Hilliard fluids, Ann. Inst. Henri Poincaré, 7 (1990), 67.
F. Bethuel, H. Brezis and F. Hélein, Ginzburg-Landau Vortices, Birkhäuser, (1994). doi: 10.1007/978-1-4612-0287-5.
L. Bronsard and F. Reitich, On three-phase boundary motion and the singular limit of a vector-valued Ginzburg-Landau equation, Arch. Rat. Mech. Analysis, 124 (1993), 355. doi: 10.1007/BF00375607.
L. Bronsard, C. Gui and M. Schatzman, A three-layered minimizer in $\mathbb{R}^2$ for a variational problem with a symmetric three-well potential, Comm. Pure Appl. Math., 49 (1996), 677. doi: 10.1002/(SICI)1097-0312(199607)49:7<677::AID-CPA2>3.0.CO;2-6.
L. Caffarelli and A. Córdoba, Uniform convergence of a singular perturbation problem, Comm. Pure Appl. Math., 48 (1995), 1. doi: 10.1002/cpa.3160480101.
L. Caffarelli and F. Lin, Singularly perturbed elliptic systems and multi-valued harmonic functions with free boundaries, Journal Amer. Math. Society, 21 (2008), 847. doi: 10.1090/S0894-0347-08-00593-6.
A. Cesaroni, C. M. Muratov and M. Novaga, Front propagation and phase field models of stratified media, Archive for Rational Mechanics and Analysis, 216 (2015), 153. doi: 10.1007/s00205-014-0804-3.
L. C. Evans, Partial Differential Equations, Graduate Studies in Mathematics, (1998). doi: 10.1090/gsm/019.
L. C. Evans and R. F. Gariepy, Measure Theory and Fine Properties of Functions, Studies in Advanced Mathematics, (1992).
A. Farina, Two results on entire solutions of Ginzburg-Landau systems in higher dimensions, J. Funct. Anal., 214 (2004), 386. doi: 10.1016/j.jfa.2003.07.012.
A. Farina and E. Valdinoci, Geometry of quasiminimal phase transitions, Calc. Var. Part. Diff. Eqs., 33 (2008), 1. doi: 10.1007/s00526-007-0146-1.
M. Fazly and N. Ghoussoub, De Giorgi type results for elliptic systems, Calculus of Variations and Partial Differential Equations, 47 (2013), 809. doi: 10.1007/s00526-012-0536-x.
G. Fusco, Equivariant entire solutions to the elliptic system $\Delta u=W_u(u)$ for general $G$-invariant potentials, Calc. Var. Part. Diff. Eqs., 49 (2014), 963. doi: 10.1007/s00526-013-0607-7.
G. Fusco, On some elementary properties of vector minimizers of the Allen-Cahn energy, Comm. Pure Appl. Anal., 13 (2014), 1045. doi: 10.3934/cpaa.2014.13.1045.
G. Fusco, F. Leonetti and C. Pignotti, A uniform estimate for positive solutions of semilinear elliptic equations, Trans. Amer. Math. Soc., 363 (2011), 4285. doi: 10.1090/S0002-9947-2011-05356-0.
E. Gonzalez, U. Massari and I. Tamanini, On the regularity of boundaries of sets minimizing perimeter with a volume constraint, Indiana Univ. Math. Journal, 32 (1983), 25. doi: 10.1512/iumj.1983.32.32003.
C. Gui and M. Schatzman, Symmetric quadruple phase transitions, Ind. Univ. Math. J., 57 (2008), 781. doi: 10.1512/iumj.2008.57.3089.
J. Rubinstein, P. Sternberg and J. Keller, Fast reaction, slow diffusion and curve shortening, SIAM J. Appl. Math., 49 (1989), 116. doi: 10.1137/0149007.
J. Rubinstein, P. Sternberg and J. Keller, Reaction-diffusion processes and evolution to harmonic maps, SIAM J. Appl. Math., 49 (1989), 1722. doi: 10.1137/0149104.
O. Savin, Regularity of flat level sets in phase transitions, Ann. of Math., 169 (2009), 41. doi: 10.4007/annals.2009.169.41.
O. Savin and E. Valdinoci, Density estimates for a variational model driven by the Gagliardo norm, Journal de Mathématiques Pures et Appliquées, 101 (2014), 1. doi: 10.1016/j.matpur.2013.05.001.
O. Savin and E. Valdinoci, Density estimates for a nonlocal variational model via the Sobolev inequality, SIAM J. Math. Anal., 43 (2011), 2675. doi: 10.1137/110831040.
Y. Sire and E. Valdinoci, Density estimates for phase transitions with a trace, Interfaces and Free Boundaries, 14 (2012), 153. doi: 10.4171/IFB/277.
P. Smyrnelis, personal communication.
P. Sternberg, Vector-valued local minimizers of nonconvex variational problems, Rocky Mountain J. Math., 21 (1991), 799. doi: 10.1216/rmjm/1181072968.
J. E. Taylor, The structure of singularities in soap-bubble-like and soap-film-like minimal surfaces, Ann. Math., 103 (1976), 489. doi: 10.2307/1970949.
E. Valdinoci, Plane-like minimizers in periodic media: Jet flows and Ginzburg-Landau-type functionals, J. Reine Angew. Math., 574 (2004), 147. doi: 10.1515/crll.2004.068.
B. White, Topics in GMT, notes by O. Chodosh, (2012).
La construction sociale de l'altérité en Grèce [The social construction of otherness in Greece]
Anastassia Tsoukala
Cahiers Balkaniques, 2011
Abstract: The early Greek state rested on a mythical collective identity that has suffered severe traumas and must now be rebuilt. The Modern Greek state was created on the basis of a collective, mythical identity. This identity was predicated upon an allegedly homogeneous population scattered well beyond the borders of the country, with all its irredentist aspirations, and upon an imagined historical continuity that was uninterrupted since Antiquity. The massive arrival of Greek refugees in 1922 put this national identity severely to the test. Whereas the state immediately granted them civil rights, they were not welcomed by most of Greek society, which promptly marginalised them. A new rupture followed in the form of the Civil War (1949). In the course of this conflict, and more especially afterwards, the war was denied, or reduced to the lone figure of the communist (at once manipulator and manipulated), which made it possible to give comforting answers to the country's painful political, social and demographic questions. Preserving this image of the communist threat therefore necessitated silence, a silence only recently broken.
Democratization, Europeanization and Regionalization beyond the European Union: Search for Empirical Evidence
Obydenkova, Anastassia
European Integration Online Papers, 2006
Abstract: This paper gives a viewpoint on the controversial issue of transnational regional cooperation (TRC) in Eurasia, covering the analysis of cooperation between Europe and the regions of Russia. By promoting a favorable regime of transnational regional cooperation, both sides become more effective in managing such common problems as mutual security; political, economic, and environmental challenges; illegal immigration; and drug- and human-trafficking. What is needed for the successful development of TRC in the Eurasian context? What factors have a crucial impact on the development of regional cooperation, whether it is further "inclusion" or "exclusion" of the regions from cooperation with European neighbors? Is it the geopolitical location of the regions that makes regional cooperation more feasible, or are there other factors that influence the success of this process? How might the regulatory and administrative tools of central government facilitate or complicate this process? To answer these questions, the study attempts to re-conceptualize the theories of integration, Europeanization, and regionalism. It then addresses the role of ethnicity, economic development, and geopolitical factors in the establishment and development of transnational regional cooperation. It also investigates the importance of "domestic-policy factors" (reforms in the federal governments) in the development of TRC.
Local-Global principles for certain images of Galois representations
Anastassia Etropolski
Abstract: Let $K$ be a number field and let $E/K$ be an elliptic curve whose mod $\ell$ Galois representation locally has image contained in a group $G$, up to conjugacy. We classify the possible images for the global Galois representation in the case where $G$ is a Cartan subgroup or the normalizer of a Cartan subgroup. When $K = \mathbf{Q}$, we deduce a counterexample to the local-global principle in the case where $G$ is the normalizer of a split Cartan and $\ell = 13$. In particular, there are at least three elliptic curves (up to twist) over $\mathbf{Q}$ whose mod $13$ image of Galois is locally contained in the normalizer of a split Cartan, but whose global image is not.
Gas transfer under breaking waves: experiments and an improved vorticity-based model
V. K. Tsoukala, C. I. Moutzouris
Annales Geophysicae (ANGEO), 2008
Abstract: In the present paper a modified vorticity-based model for gas transfer under breaking waves in the absence of significant wind forcing is presented. A theoretically valid and practically applicable mathematical expression is suggested for the assessment of the oxygen transfer coefficient in the area of wave-breaking. The proposed model is based on the theory of surface renewal, which expresses the oxygen transfer coefficient as a function of both the wave vorticity and the Reynolds wave number for breaking waves. Experimental data were collected in wave flumes of various scales: a) small-scale experiments were carried out using both a sloping beach and a rubble-mound breakwater in the wave flume of the Laboratory of Harbor Works, NTUA, Greece; b) large-scale experiments were carried out with a sloping beach in the wind-wave flume of Delft Hydraulics, the Netherlands, and with a three-layer rubble-mound breakwater in the Schneideberg Wave Flume of the Franzius Institute, University of Hannover, Germany. The experimental data acquired from both the small- and large-scale experiments were in good agreement with the proposed model. Although the apparent transfer coefficients from the large-scale experiments were lower than those determined from the small-scale experiments, the actual oxygen transfer coefficients, as calculated using a discretized form of the transport equation, are of the same order of magnitude for both the small- and large-scale experiments. The validity of the proposed model is compared to experimental results from other researchers. Although the results are encouraging, additional research is needed to incorporate the influence of bubble-mediated gas exchange before these results are used for an environmentally friendly design of harbor works, or for projects involving waste disposal at sea.
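The paper's vorticity-based expression itself is not reproduced in the abstract. The sketch below shows only the underlying surface-renewal relation of Danckwerts, k_L = sqrt(D·s), with the renewal rate s left as a plain input; the paper ties that rate to the wave vorticity and the wave Reynolds number, a dependence that is not modeled here.

```python
import math

def surface_renewal_kL(diffusivity, renewal_rate):
    """Danckwerts surface-renewal transfer coefficient: k_L = sqrt(D * s).

    diffusivity: molecular diffusivity of O2 in water, m^2/s (~2e-9 at 20 C)
    renewal_rate: surface renewal rate s, 1/s (model-dependent; the paper
    relates it to wave vorticity and Reynolds wave number, not shown here)
    """
    return math.sqrt(diffusivity * renewal_rate)
```

The square-root dependence means a fourfold increase in the renewal rate, however it is parameterized, only doubles the transfer coefficient.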
Er81 is a downstream target of Pax6 in cortical progenitors
Tran Tuoc, Anastassia Stoykova
BMC Developmental Biology, 2008, DOI: 10.1186/1471-213x-8-23
Abstract: We identified and analyzed the regulatory function of an evolutionarily conserved upstream DNA sequence in the putative mouse Er81 promoter. Three potential Pax6 binding sites were identified in this region. We found that the presence of one of these sites is necessary and sufficient for full activation of the Er81 promoter in Pax6-transfected HeLa cells, while other still unknown factors appear to contribute to Er81 promoter activity in cortical progenitors and neuronal cells. The results suggest that endogenous Pax6, which is expressed at the highest level in progenitors of the rostrolateral cortex, exerts region-specific control of Er81 activity, thus specifying a subpopulation of layer 5 projection neurons.We conclude that the genetic interplay between the transcription factors, Pax6 and Er81, is responsible, in part, for the regional specification of a distinct sublineage of layer 5 projection neurons.In the mammalian neocortex (pallium), neurons with striking morphological and functional diversity are organized radially in six layers, and tangentially into numerous functional domains. Only recently have the molecular and cellular mechanisms that guide the process of corticogenesis responsible for this organization begun to be resolved [1,2]. The main source of cortical projection neurons is the population of pluripotent radial glial progenitors (RG), which divide asymmetrically at the apical surface of the ventricular zone (VZ) and generate both neuronal and glial progeny [3]. After midgestation, RG generate neuronal progenitors, termed intermediate or basal progenitors (BPs), that divide symmetrically at the basal surface of the VZ and in the subventricular zone (SVZ). Thus, while the asymmetric division of RG progenitors gives rise to progeny with distinct cell fates, the symmetric division of BPs primarily modulates the number of cells in previously established neuronal cell lineages [4]. 
The projection neurons of the lower (6 and 5) and upper (4–2) layers
Structures of Li-Doped Alkali Clusters are Dictated by AO-Hybridization
Anastassia N. Alexandrova
Abstract: Hybridization of atomic orbitals is a widely appreciated phenomenon whose impact on the structure and properties of, for example, organic molecules is well established. Here, we demonstrate that hybridization also dramatically impacts the shapes of small alkali metal clusters. The seemingly similar and valence-isoelectronic LiNa$_4^-$ and LiK$_4^-$ clusters adopt very different global minimum structures: LiNa$_4^-$ is a planar $C_{2v}$ ($^1A_1$) species distorted from the perfect pentagon due to the pseudo Jahn-Teller effect, and LiK$_4^-$ is a planar square $D_{4h}$ ($^1A_{1g}$) species with Li at the centre. This effect is rooted in the different degrees of 2s-2p hybridization in Li in response to binding to Na versus K. Li inside the Na cluster exhibits strong 2s-2p mixing, to achieve stronger covalent bonding. In contrast, Li inside the K cluster does not show any hybridization, and the LiK$_4^-$ cluster is reminiscent of an ionic salt. These differences are tied to the relative electronegativities of Li, Na, and K, and the overlap of the valence atomic orbitals of Li with those of Na versus those of K. Atomic orbital hybridization is thus a pronounced effect in clusters, an understanding of which is important for the use of clusters in materials science or catalysis, and for designing clusters of desired shapes.
Plan for VLBI observations of close approaches of Jupiter to compact extragalactic radio sources in 2014-2016
Anastassia Girdiuk, Oleg Titov
Abstract: Very Long Baseline Interferometry is capable of measuring the gravitational delay caused by the gravitational fields of the Sun and planets. The post-Newtonian parameter $\gamma$ is now estimated with an accuracy of $\sigma_{\gamma}=2\cdot 10^{-4}$ using a global set of VLBI data from 1979 to the present (Lambert, Gontier, 2009), and $\sigma_{\gamma}=2\cdot10^{-5}$ by the Cassini spacecraft (Bertotti et al., 2003). Unfortunately, VLBI observations in S- and X-bands very close to the solar limb (less than 2-3 degrees) are not possible due to the strong turbulence in the solar corona. Instead, close approaches of the big planets to the line of sight of reference quasars can also be used for testing the general theory of relativity with VLBI. Jupiter is the most appropriate among the big planets due to its large mass and relatively fast apparent motion across the celestial sphere. Six close approaches of Jupiter to quasars in 2014-2016 were found using the DE405/LE405 ephemerides, including one occultation in 2016. We have formed tables of visibility for all six events for VLBI radio telescopes participating in regular IVS programs. The expected magnitudes of the relativistic effects to be measured during these events are discussed in this paper.
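For scale, the light bending such observations probe follows the standard parameterized post-Newtonian deflection formula, α = 2(1+γ)GM/(c²b); at Jupiter's limb this gives roughly 16 milliarcseconds, comfortably within the sensitivity of geodetic VLBI. The constants below are textbook values assumed for illustration, not numbers from the paper.

```python
import math

C_LIGHT = 299_792_458.0   # speed of light, m/s
GM_JUPITER = 1.26687e17   # Jupiter's gravitational parameter, m^3/s^2
R_JUPITER = 7.1492e7      # Jupiter's equatorial radius, m

def ppn_deflection_mas(gm, impact_parameter, gamma=1.0):
    """Gravitational light deflection in milliarcseconds.

    gm: gravitational parameter GM of the deflecting body (m^3/s^2)
    impact_parameter: closest approach of the ray to the body, m
    gamma: PPN parameter (1.0 in general relativity)
    """
    alpha_rad = 2.0 * (1.0 + gamma) * gm / (C_LIGHT**2 * impact_parameter)
    return math.degrees(alpha_rad) * 3600.0 * 1000.0
```

Setting gamma to zero halves the deflection, which is exactly the sensitivity that close-approach measurements exploit when constraining γ.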
How Metal Substitution Affects the Enzymatic Activity of Catechol-O-Methyltransferase
Manuel Sparta, Anastassia N. Alexandrova
PLOS ONE , 2012, DOI: 10.1371/journal.pone.0047172
Abstract: Catechol-O-methyltransferase (COMT) degrades catecholamines, such as dopamine and epinephrine, by methylating them in the presence of a divalent metal cation (usually Mg(II)), and S-adenosyl-L-methionine. The enzymatic activity of COMT is known to be vitally dependent on the nature of the bound metal: replacement of Mg(II) with Ca(II) leads to a complete deactivation of COMT; Fe(II) is slightly less potent than Mg(II), and Fe(III) is again an inhibitor. Considering the fairly modest role that the metal plays in the catalyzed reaction, this dependence is puzzling, and to date remains an enigma. Using a quantum mechanical / molecular mechanical dynamics method for extensive sampling of protein structure, and first principle quantum mechanical calculations for the subsequent mechanistic study, we explicate the effect of metal substitution on the rate determining step in the catalytic cycle of COMT, the methyl transfer. In full accord with experimental data, Mg(II) bound to COMT is the most potent of the studied cations and it is closely followed by Fe(II), whereas Fe(III) is unable to promote catalysis. In the case of Ca(II), a repacking of the protein binding site is observed, leading to a significant increase in the activation barrier and higher energy of reaction. Importantly, the origin of the effect of metal substitution is different for different metals: for Fe(III) it is the electronic effect, whereas in the case of Ca(II) it is instead the effect of suboptimal protein structure.
Structure, stability, and mobility of small Pd clusters on the stoichiometric and defective TiO$_2$ (110) surfaces
Jin Zhang,Anastassia N. Alexandrova
Physics , 2011, DOI: 10.1063/1.3657833
Abstract: We report on the structure and adsorption properties of Pd$_n$ ($n=1-4$) clusters supported on the rutile TiO$_2$ (110) surfaces with the possible presence of a surface oxygen vacancy or a subsurface Ti-interstitial atom. As predicted by the density functional theory, small Pd clusters prefer to bind to the stoichiometric titania surface or at sites near subsurface Ti-interstitial atoms. The adsorption of Pd clusters changes the electronic structure of the underlying surface. For the surface with an oxygen vacancy, the charge localization and ferromagnetic spin states are found to be largely attenuated owing to the adsorption of Pd clusters. The potential energy surfaces of the Pd monomer on different types of surfaces are also reported. The process of sintering is then simulated via the Metropolis Monte Carlo method. The presence of oxygen vacancy likely leads to the dissociation of Pd clusters. On the stoichiometric surface or surface with Ti-interstitial atom, the Pd monomers tend to sinter into larger clusters, whereas the Pd dimer, trimer and tetramer appear to be relatively stable below 600 K. This result agrees with the standard sintering model of transition metal clusters and experimental observations.
A few weeks ago we had a discussion about citations, and how we can compare the citation impact of papers that were published in different years. Obviously, older papers have an advantage as they have more time to accumulate citations.
To compare papers, just for fun, we ended up opening the profile page of each paper in Google Scholar, and we analyzed the paper citations years by year to find the "winner." (They were both great papers, by great authors, fyi. It was more of a "Lebron vs. Jordan" discussion, as opposed to anything serious.)
This process got me curious though. Can we tell how a paper is doing at any given point in time? How can we compare a 2-year-old article, published in 2016, with 100 citations against a 10-year-old document, published in 2008, with 500 citations?
To settle the question, we started with the profiles of faculty members in the top-10 US universities and downloaded about 1.5M publications, across all fields, and their citation histories over time.
We then analyzed the citation histories of these publications, and, for each year, we ranked the papers based on the number of citations received over time. Finally, we computed the citation numbers corresponding to different percentiles of performance.
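The percentile computation itself is simple; here is a stdlib-only sketch on synthetic data (the log-normal citation counts, seed, and sample size are my own stand-ins, not the actual 1.5M-paper corpus):

```python
import random
import statistics

# Synthetic stand-in for citation counts at age 5 (log-normal-ish, seed arbitrary);
# the real analysis uses the ~1.5M downloaded publications instead.
random.seed(0)
citations_at_5 = [int(random.lognormvariate(3.0, 1.0)) for _ in range(100_000)]

# Citation counts needed to reach a given percentile of the cohort
cuts = statistics.quantiles(citations_at_5, n=100)   # cuts[p-1] ~ p-th percentile
for p in (50, 75, 90):
    print(f"{p}th percentile: {cuts[p - 1]:.0f} citations")
```

Repeating this for each paper age gives the curves shown below.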
Cumulative percentiles
The plot below shows the number of citations that a paper needs to have at different stages to be placed in a given percentile.
A few data points, focusing on certain age milestones: 5-years after publication, 10-years after publication, and lifetime.
50% line: The performance of a "median" paper. The median paper gets around 20 citations 5 years after publication, 50 citations within 10 years, and around 100 citations in its lifetime. Milestone scores: 20,50,90
75% line: These papers perform "better," citation-wise than 75% of the remaining papers with the same age. Such papers get around 50 citations within 5 years, 100 citations within 10 years of publication, and around 200 citations in their lifetime. Milestone scores: 50,100,200
90% line: These papers perform better than 90% of the papers in their cohort. Around 90 citations within 5 years, 200 citations within 10 years, and 500 citations in their lifetime. Milestone scores: 90,200,500
Yearly percentiles and peak years
We also wanted to check at which point papers reach their peak, and start collecting fewer citations. The plot below shows the percentiles based on the yearly numbers of accumulated citations. The vast majority of papers tend to reach their peak 5-10 years after publication; the number of yearly citations starts declining after 5-10 years.
Below is the plot of the peak year for a paper based on the paper percentile:
There is an interesting effect around the 97.5% percentile: After that level, it seems that a 'rich-gets-richer' effect kicks in, and we effectively do not observe a peak year. The number of citations per year keeps increasing. You could call these papers the "classics".
What does it take to be a "classic"? 200 citations at 5 years or 500 citations at 10 years.
TL;DR: There are about 100K-200K unique workers on Amazon. On average, there are 2K-5K workers active on Amazon at any given time, which is equivalent to having 10K-25K full-time employees. On average, 50% of the worker population changes within 12-18 months. Workers exhibit widely different patterns of activity, with most workers being active only occasionally, and a few workers being very active. Combining our results with the results from Hara et al., we see that MTurk has a yearly transaction volume of a few hundreds of millions of dollars.
For more details read below, or take a look at our WSDM 2018 paper.
A topic that frequently comes up when discussing Mechanical Turk is "how many workers are there on the platform"?
In general, this is a question that is very easy for Amazon to answer, but much harder for outsiders. Amazon claims that there are 500,000 workers on the platform. How can we check the validity of this statement?
Basic capture-recapture model
A common technique for this problem is the capture-recapture technique, that is widely used in the field of ecology, to measure the population of a species.
The simplest possible technique is the following:
Capture/marking phase: Capture $n_1$ animals, mark them, and release them back.
Recapture phase: A few days later, capture $n_2$ animals. Assuming there are $N$ animals overall, $n_1/N$ of them are marked. So, for each of the $n_2$ captured animals, the probability that the animal is marked is $n_1/N$ (from the capture/marking phase).
Calculation: On expectation, we expect to see $n_2 \cdot \frac{n_1}{N}$ marked animals in the recapture phase. (Notice that we do not know $N$.) So, if we actually see $m$ marked animals during the recapture phase, we set $m = n_2 \cdot \frac{n_1}{N}$ and we get the estimate that:
$N = \frac{n_1 \cdot n_2}{m}$.
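This Lincoln-Petersen estimator is a one-liner in code (the survey sizes below are made-up numbers, purely for illustration):

```python
def lincoln_petersen(n1, n2, m):
    """Estimate total population N from a two-phase capture-recapture study:
    n1 individuals captured and marked, n2 recaptured, m of them marked."""
    return n1 * n2 / m

# Made-up survey sizes: 1,000 workers in each survey, 100 seen in both
print(lincoln_petersen(1000, 1000, 100))   # -> 10000.0
```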
In our setting we adapted the same idea, where "capture" and "recapture" correspond to participating in a demographics survey. In other words, we "capture/mark" MTurk users that complete the survey in one day. Then, in another day, we also "recapture" by surveying more workers and we see how many workers overlap in the two surveys.
First (naive) attempt
We decided to apply this technique to estimate the size of the Mechanical Turk population. We treated the surveys running over one 30-day period as the "capture" phase, and the surveys that we ran over another 30-day period as the "recapture" phase. The plot below shows the results.
The x-axis shows the beginning of the recapture period, and the y-axis the estimate of the number of workers. The color of each dot corresponds to the difference in time between the capture-recapture periods: black is a short time, and red is a long time.
If we focus on the black-color dots (~60 days between the surveys), we get a (naive) estimate of around 10K-15K workers. (Warning: this is incorrect.)
While we could stop here, we see some results that are not consistent with our model. Remember, that color encodes time between samples: black is for short time (~2 months) between samples, red is for long time (~2yrs) between samples. Notice that, as the time between the two periods increases, the estimates are becoming higher, and we get the "rainbow cake" effect in the plot. For example, for July 2017, our estimate is 12K workers if we compare with a capture from May 2017, but the estimate goes up to 45K workers if we compare with a sample from May 2015. Our model, though, says that the time between captures should not affect the population estimates. This indicates that there is something wrong with the model.
Assumptions of basic model
The basic capture-recapture estimation described above relies on a couple of assumptions. Both of these assumptions are violated when applying this technique to an online environment.
Assumption of no arrivals / departures ("closed population"): The vanilla capture-recapture scheme assumes that there are no arrivals or departures of workers between the capture and recapture phase.
Assumption of no selection bias ("equal catchability"): The vanilla capture-recapture scheme assumes that every worker in the population is equally likely to be captured.
In ecology, the issue of closed population has been examined under many different settings (birth-death of animals, immigration, spatial patterns of movement, etc.) and there are many research papers on the topic. Catchability, on the other hand, has received less attention. This is reasonable: in ecology the assumption of a closed population is problematic in many settings, whereas assuming that the probability of capturing an animal is uniform among similar animals is reasonable. Typically the focus is on segmenting the animals into groups (e.g., nesting females vs. hunting males) and assigning different catchability to each group (but not to individuals).
In online settings though, the assumption of equal catchability is more problematic. First we have the activity bias: Workers exhibit very different levels of activity: A worker who works every day is much more likely to see and complete a task, compared to someone who works once a month. Similarly, we have a selection bias: Some workers may like to complete surveys, while others may avoid such tasks.
So, to improve our estimates, we need to use models that alleviate these assumptions.
Endowing workers with survival probabilities
We can extend the model, allowing each worker to have a certain survival probability, to allow workers to "disappear" from the platform. If we see the plot above, we can see that the population estimate increases as the time between two samples increases. This hints that workers leave the platform, and the intersection of capture-recapture becomes smaller over time.
If we account for that, we can get an estimate that the "half-life" of a Mechanical Turk worker is between 12-18 months. In other words, approximately 50% of the Mechanical Turk population changes every 12-18 months.
Endowing workers with propensity to participate
We can also extend the model by associating a certain propensity for each worker. The propensity is the probability that a worker is active and willing to participate in a task, at any given time.
In our work, we assumed that the underlying "propensity to participate" follows a Beta distribution across the worker population, and that the parameters of the Beta distribution are unknown. When propensities follow a Beta distribution, the number of times a worker participates in our surveys follows a Beta-Binomial distribution. Since we know how many workers participated k times in our surveys, it is then easy to estimate the underlying parameters of the Beta distribution.
Notice that we had to depart from the simple "two occasion" model above, and instead use multiple capturing periods over time. Intuitively, workers that have high propensity to participate will appear many times in our results, while inactive workers will appear only a few times.
By doing this analysis, we can observe that (as expected) the distribution of activity is highly skewed: A few workers are very active in the platform, while others are largely inactive. A nice property of the Beta distribution is its flexibility: Its shape can be pretty much anything: uniform, Gaussian-like, bimodal, heavy-tailed... you name it.
In our analysis, we estimated that the propensity distribution follows a Beta(0.3,20) distribution. We plot above the "inverse CDF" of the distribution (Inverse CDF: "what percentage of the workers have propensity higher than x").
As you can see, the propensity follows a familiar (and expected) pattern. Only 0.1% of the workers have propensity higher than 0.2, and only 10% have propensity higher than 0.05.
Intuitively, a propensity of 0.2 means that the worker is active and willing to participate 20% of their time (this is roughly equivalent to a full-time level of activity; full-time employees work around 2000 hrs per year, out of 24*365 available hours in a year). A propensity of 0.05 means that the worker is active and available approximately 24 hr * 0.05 ~ 1 hour per day.
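These percentages follow directly from the fitted Beta(0.3, 20). Below is a stdlib-only sketch that approximates the survival function $P(X > x)$ by trapezoidal integration (the grid size is an arbitrary choice; exact values would come from an incomplete-beta routine such as scipy's):

```python
from math import gamma

def beta_sf(x, a, b, n=100_000):
    """Approximate P(X > x) for X ~ Beta(a, b) by trapezoidal integration."""
    norm = gamma(a) * gamma(b) / gamma(a + b)   # Beta function B(a, b)
    step = (1.0 - x) / n
    ts = [x + i * step for i in range(n + 1)]
    f = [t ** (a - 1) * max(1.0 - t, 0.0) ** (b - 1) for t in ts]
    area = step * (sum(f) - 0.5 * (f[0] + f[-1]))
    return area / norm

# Fraction of workers above "full-time" propensity (~0.2) and ~1 hr/day (~0.05)
print(beta_sf(0.20, 0.3, 20))   # on the order of 0.1%
print(beta_sf(0.05, 0.3, 20))   # on the order of 10%
```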
How big is the platform?
So, how many workers are there? Under such highly skewed distributions, giving an exact number for the number of workers is rather futile. The best that you can do is give a ballpark estimate, and hope to be roughly correct on the order of magnitude. What our estimates are showing is that there are around 180K distinct workers on the MTurk platform. This is good news for anyone who is trying to reach a large number of distinct workers through the platform.
Our analysis also allows us to estimate how many workers are active and willing to participate in our task at any given time. For that, we estimate that around 2K to 5K workers are available, at any given time. If we want to convert this number to full-time employee equivalence, this is equivalent to 10K-25K full-time workers.
The latter part also allows us to give some low and high estimates on the transaction volume of MTurk.
Lower bound: Assuming 2K workers active at any given time, this is 2000*24*365=17,520,000 work hours in a year. If we assume that the median wage is \$2/hr, this is roughly \$35M/yr transaction volume on Amazon Mechanical Turk (with Amazon netting ~\$7M in fees).
Upper bound: Assuming 5K workers active at any given time, this is 5000*24*365=43,800,000 work hours in a year. If we assume average wage of \$12/hr, this is around \$525M/yr transaction volume (with Amazon netting ~\$100M in fees).
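The arithmetic behind both bounds can be packaged up in a few lines (the 20% fee rate is my assumption, consistent with the \$7M fee on \$35M volume quoted above):

```python
def annual_volume(active_workers, hourly_wage, fee_rate=0.20):
    """Back-of-the-envelope MTurk transaction volume, assuming the given
    number of workers is active around the clock, all year."""
    hours = active_workers * 24 * 365   # work-hours per year
    volume = hours * hourly_wage        # $ transaction volume per year
    return hours, volume, volume * fee_rate

print(annual_volume(2000, 2))    # lower bound: 17,520,000 hrs, ~$35M volume
print(annual_volume(5000, 12))   # upper bound: 43,800,000 hrs, ~$526M volume
```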
I understand that a range of \$35M to \$500M may not be very helpful, but these are very rough estimates. If someone wanted my own educated guess, I would put it somewhere in the middle of the two, i.e., transaction volume of a few hundreds of millions of dollars.
Labels: mechanical turk
DOI:10.1007/JHEP05(2013)143
Perturbative calculation of the clover term for Wilson fermions in any representation of the gauge group SU(N)
@article{Musberg2013PerturbativeCO,
  title={Perturbative calculation of the clover term for Wilson fermions in any representation of the gauge group SU(N)},
  author={S. Musberg and Gernot M{\"u}nster and Stefano Piemonte},
  journal={Journal of High Energy Physics},
  year={2013},
  volume={2013},
  pages={1-6}
}
S. Musberg, G. Münster, S. Piemonte
Abstract: We calculate the Sheikholeslami-Wohlert coefficient of the O(a) improvement term for Wilson fermions in any representation of the gauge group SU(N) perturbatively at the one-loop level. The result applies to QCD with adjoint quarks and to $\mathcal{N} = 1$ supersymmetric Yang-Mills theory on the lattice.
Non-perturbative O(a) improvement of the SU(3) sextet model
M. Hansen
We calculate non-perturbatively the coefficient c_sw required for O(a) improvement of the SU(3) gauge theory with Nf = 2 fermions in the two-index symmetric (sextet) representation. For the…
Improved results for the mass spectrum of N = 1 supersymmetric SU(3) Yang-Mills theory
Sajid Ali, G. Bergner,
P. Scior
This talk summarizes the results of the DESY-Munster collaboration for N = 1 supersymmetric Yang-Mills theory with the gauge group SU(3). It is an updated status report with respect to our…
Spectroscopy of four-dimensional N = 1 supersymmetric SU(3) Yang-Mills theory
M. Steinhauser, A. Sternbeck, B. Wellegehausen, A. Wipf
Supersymmetric gauge theories are an important building block for extensions of the standard model. As a first step towards Super-QCD we investigate the pure gauge sector with gluons and gluinos on…
Ward identities in N = 1 supersymmetric SU(3) Yang-Mills theory on the lattice
The introduction of a space-time lattice as a regulator of field theories breaks symmetries associated with continuous space-time, i.e. Poincare invariance and supersymmetry. A non-zero gluino mass…
Numerical Results for the Lightest Bound States in N=1 Supersymmetric SU(3) Yang-Mills Theory.
This work determines the masses of the lightest bound states in SU(3) N=1 SYM theory and shows the formation of a supermultiplet of bound states, which provides a clear evidence for unbroken supersymmetry.
Baryonic states in supersymmetric Yang-Mills theory
Proceedings of The 36th Annual International Symposium on Lattice Field Theory — PoS(LATTICE2018)
In $\mathcal{N}$=1 supersymmetric Yang-Mills theory the superpartner of the gluon is the gluino, which is a spin 1/2 Majorana particle in the adjoint representation of the gauge group. Combining…
The light bound states of $\mathcal{N}=1$ supersymmetric SU(3) Yang-Mills theory on the lattice
Abstract: In this article we summarise our results from numerical simulations of $\mathcal{N}=1$ supersymmetric Yang-Mills theory with gauge group SU(3). We use the formulation of Curci and…
Multi-Representation Dynamics of SU(4) Composite Higgs Models: Chiral Limit and Spectral Reconstructions
L. Debbio, Alessandro Lupo, M. Panero, N. Tantalo
We present a lattice study of the SU (4) gauge theory with two Dirac fermions in the fundamental and two in the two-index antisymmetric representation, a model close to a theory of partial…
Nonperturbative renormalization of the supercurrent in $\mathcal{N} = 1$ Supersymmetric Yang-Mills Theory
G. Bergner, M. Costa, H. Panagopoulos, S. Piemonte, Ivan Soler, G. Spanoudes
In this work, we study the nonperturbative renormalization of the supercurrent operator in N = 1 Supersymmetric Yang-Mills (SYM) theory, using a gauge-invariant renormalization scheme (GIRS). The…
Lattice simulations of a gauge theory with mixed adjoint-fundamental matter
G. Bergner, S. Piemonte
In this article we summarize our efforts in simulating Yang-Mills theories coupled to matter fields transforming under the fundamental and adjoint representations of the gauge group. In the context…
Non-perturbatively improved clover action for SU(2) gauge + fundamental and adjoint representation fermions
Anne Mykkanen, J. Rantaharju,
Helsinki Institute of Physics University of Jyvaskyla
The research of strongly coupled beyond-the-standard-model theories has generated significant interest in non-abelian gauge field theories with different number of fermions in different…
O(a) improvement of the axial current in lattice QCD to one-loop order of perturbation theory
M. Luscher, P. Weisz
Non-perturbative O(a) improvement of lattice QCD
M. Luescher, S. Sint, R. Sommer, P. Weisz, U. Wolff
O( a) perturbative improvement for Wilson fermions
S. Naik
Determination of the improvement coefficientcSWup to one-loop order with conventional perturbation theory
S. Aoki, Y. Kuramashi
We calculate the $O(a)$ improvement coefficient c_SW in the Sheikholeslami-Wohlert quark action for various improved gauge actions with six-link loops. We employ the conventional perturbation theory…
Towards N = 1 super-Yang-Mills on the lattice
A. Donini, M. Guagnelli, P. Hernández, A. Vladikas
The gluino-glue particle and finite size effects in supersymmetric Yang-Mills theory
G. Bergner, T. Berheide, G. Münster, U. D. Özugurel, D. Sandbrink, I. Montvay
Abstract: The spectrum of particles in supersymmetric Yang-Mills theory is expected to contain a spin 1/2 bound state of gluons and gluinos, the gluino-glue particle. We study the mass of this…
Towards the spectrum of low-lying particles in supersymmetric Yang-Mills theory
G. Bergner, I. Montvay, G. Münster, U. D. Özugurel, D. Sandbrink
Abstract: The non-perturbative properties of supersymmetric theories are of interest for elementary particle physics beyond the Standard Model. Numerical simulations of these theories are associated…
An algorithm for gluinos on the lattice
I. Montvay
Simulation of 4d $\mathcal{N}=1$ supersymmetric Yang–Mills theory with Symanzik improved gauge action and stout smearing
K. Demmouche, F. Farchioni,
J. Wuilloud
We report on the results of a numerical simulation concerning the low-lying spectrum of four-dimensional $\mathcal{N}=1$ SU(2) Supersymmetric Yang–Mills (SYM) theory on the lattice with light…
11.4 Modelling Anaerobic Digestion
Fig. 11.10 Simulation of TAN ($X_{NHx\text{-}N,1}$) in [mg/l] over 2 days = 2880 min with Q = 300 l/min (blue) and Q = 200 l/min (orange)
Fig. 11.11 Simulation of nitrate-N ($X_{NO3\text{-}N,1}$) in [mg/l] over 50 days = 72,000 min with $Q_{Exc}$ = 300 l/day (yellow), $Q_{Exc}$ = 480 l/day (orange) and $Q_{Exc}$ = 600 l/day (blue)
Anaerobic digestion (AD) of organic material is a process that involves the sequential steps of hydrolysis, acidogenesis, acetogenesis and methanogenesis (Batstone et al. 2002). The anaerobic digestion of a mixture of proteins, carbohydrates and lipids is visualized in Fig. 11.12. Most often, hydrolysis is considered as the rate-limiting step in the anaerobic digestion of complex organic matter (Pavlostathis and Giraldo-Gomez 1991). Thus, increasing the hydrolysis reaction rate will most likely lead to a higher anaerobic digestion reaction rate. However, increasing the reaction rates requires further understanding of the related process, which can be obtained via experimentation and/or mathematical modelling. As there are many factors influencing, for instance, the hydrolysis process, such as ammonia concentration, temperature, substrate composition, particle size, pH, intermediates, degree of hydrolysis (i.e. the potential of hydrolysable content) and residence time, it is almost impossible to evaluate the total effect of these factors on the hydrolysis reaction rate through experimentation. Mathematical modelling could therefore be an alternative, but as a result of all the uncertainties in model formulation, rate coefficients and initial conditions, no unique answers can be expected. However, a mathematical modelling framework would allow sensitivity and uncertainty analyses to facilitate the modelling process. As mentioned before, hydrolysis is just one of the steps in anaerobic digestion. Consequently, understanding and optimization of the full anaerobic digestion process needs connections from hydrolysis to the other processes taking place during anaerobic digestion and interactions between all these steps.
The well-known and widely used ADM1 (anaerobic digestion model #1) is a structured model including disintegration and hydrolysis, acidogenesis, acetogenesis and methanogenesis steps. Disintegration and hydrolysis are two extracellular steps. In the disintegration step, composite particulate substrates are converted into inert material, particulate carbohydrates, protein and lipids. Subsequently, the enzymatic hydrolysis step decomposes particulate carbohydrates, protein and lipids to monosaccharides, amino acids and long-chain fatty acids (LCFA), respectively (Batstone et al. 2002) (see Fig. 11.12).
ADM1 is a mathematical model that describes the biological processes and physicochemical processes of anaerobic digestion as a set of differential and algebraic equations (DAE). The model contains 26 dynamic state variables in terms of concentrations, 19 biochemical kinetic processes, 3 gas-liquid transfer kinetic processes and 8 implicit algebraic variables for each process unit. As an alternative, Galí et al. (2009) described the anaerobic process as a set of differential equations with 32 dynamic state variables in terms of concentrations and an additional 6 acid-base kinetic processes per process unit. For an overview of the modelling of anaerobic digestion processes, we refer to Ficara et al. (2012). However, in what follows and for some first insights into the AD process, we will present a simple nutrient-balance model of AD in a sequencing batch reactor (SBR).
11.4.1 Nutrient Mineralization
The nutrient mineralization can be calculated using the following equation (Delaide et al. 2018):
$NR=100\% \times \frac{DN_{out}-DN_{in}}{TN_{in}-DN_{in}}$ (11.15a)
Fig. 11.12 A simplified scheme for the anaerobic digestion of complex particulate organic matter (based on El-Mashad 2003)
where NR is the nutrient recovery at the end of the experiment in percent, $DN_{out}$ is the total mass of dissolved nutrient in the outflow, $DN_{in}$ is the total mass of dissolved nutrient in the inflow and $TN_{in}$ is the total mass of dissolved plus undissolved nutrients in the inflow (see also Fig. 11.13).
11.4.2 Organic Reduction
The organic reduction performance of the reactor can be calculated using the following equation:
$\eta_{OM}=1-\frac{\Delta OM+T_{OM\ out}}{T_{OM\ in}}$ (11.15b)
where ΔOM is the organic matter (i.e. COD, TS, TSS, etc.) inside the reactor at the end of the experiment minus the one at the beginning of the experiment, $T_{OM\ out}$ is the total OM outflow and $T_{OM\ in}$ is the total OM inflow (see also Fig. 11.14).
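Both Eqs. 11.15a and 11.15b reduce to simple mass-balance arithmetic; a minimal sketch (the numeric inputs are made-up illustration values, not data from the chapter):

```python
def nutrient_recovery(dn_out, dn_in, tn_in):
    """Eq. 11.15a: nutrient recovery NR [%] from dissolved (DN) and total (TN) masses."""
    return 100.0 * (dn_out - dn_in) / (tn_in - dn_in)

def organic_reduction(delta_om, t_om_out, t_om_in):
    """Eq. 11.15b: organic reduction performance of the reactor."""
    return 1.0 - (delta_om + t_om_out) / t_om_in

# Made-up illustration values (e.g. grams of N, grams of COD):
print(nutrient_recovery(50.0, 10.0, 90.0))   # -> 50.0 (% of undissolved N mineralized)
print(organic_reduction(5.0, 15.0, 100.0))   # -> 0.8
```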
Fig. 11.13 Overall reactor scheme for determining the mineralization potential, where DN are the dissolved nutrients in the water, UN the undissolved nutrients in the sludge (i.e. TN-DN) and TN the total nutrients
Fig. 11.14 Overall reactor scheme for determining the organic material reduction potential, where TsubOM/sub is the total organic matter and ΔOM the change of organic matter inside the reactor
Aquaponics Food Production Systems
Hikers Meeting in the Middle
Two hikers are separated by a two-dimensional mountain range, like the one shown below. The mountain range alternates between peaks and valleys, connected by straight lines.
Both hikers are at sea level, and the mountain range never dips below sea level.
The two hikers want to meet up with each other. Prove that they can do this while staying at the same altitude as each other for their entire journey. They are allowed to backtrack.
Source: http://www.cs.cmu.edu/puzzle/puzzle12.html
mathematics geometry
f''
Mike Earnest
$\begingroup$ I can figure it out intuitively but I haven't the wherewithal to type it all out logically. I like it, though. $\endgroup$ – Engineer Toast Oct 28 '15 at 19:12
$\begingroup$ The way the question is worded makes it sound like they aren't allowed to ever go above sea level. Please clarify that they only have to stay at the same altitude as each other. $\endgroup$ – trentcl Oct 28 '15 at 20:33
$\begingroup$ The source also provides a solution, in case anyone wants to spoil it for themselves. It's quite elegant, and nicely avoids the problems the currently posted attempts run into. $\endgroup$ – user2357112 Oct 28 '15 at 21:21
$\begingroup$ @JacobHolloway: Well, if they meet somewhere else, they can just travel to the middle together. $\endgroup$ – user2357112 Oct 29 '15 at 5:23
$\begingroup$ I read about a more general problem on wikipedia once. It's called the mountain climbing problem. en.wikipedia.org/wiki/Mountain_climbing_problem $\endgroup$ – Ben Frankel Oct 29 '15 at 18:47
I think @Sleafar's deleted answer works:
Put the hikers and the mountains in a big swimming pool. The hikers will always swim at the water level which we will change.
Start increasing the water level until one hiker reaches a local peak. The hiker that reached the peak will now descend on the other side of the mountain. Decrease the water level until one hiker reaches a valley. This hiker will now ascend the next mountain.
Whenever one hiker reaches a peak or a valley, we reverse the water direction and the hiker continues on the other side of the peak or valley, while the other hiker reverses direction. (If both hikers encounter a peak or valley at the same time, they both continue in the same direction as before.)
This process is the same in reverse. Because it can be reversed without ambiguity, it is impossible to get stuck in a cycle.
Suppose the process ever returns to the starting state. Then, we can run it backwards and it will look exactly the same as it did forwards. Then, exactly halfway through, both hikers must reverse direction at the same time, which is impossible.
Because the procedure cannot be stuck in a loop, and it can't return to the starting state, it must end with at least one hiker reaching the opposite side (in fact, they both reach the opposite side, but we don't need to prove this). This guarantees that the hikers meet at some point.
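This water-level procedure can be simulated directly. Below is my own sketch (not from the linked solution): the mountain is a list of vertex heights with h[0] = h[-1] = 0, strictly alternating peaks and valleys, and, for simplicity, all interior heights assumed distinct so that no two events coincide except the final meeting.

```python
def simulate(h):
    """Water-level simulation of the meeting procedure described above.

    h: vertex heights of the piecewise-linear range, h[0] == h[-1] == 0,
    strictly alternating peaks and valleys, all interior heights distinct.
    Returns the index of the vertex where the two hikers meet.
    """
    ea, eb = 0, len(h) - 2          # current edge of hiker A (left) and B (right)
    rising = True                   # direction the imaginary water level moves
    for _ in range(10 * len(h) ** 2):       # safety bound against bad input
        # altitude at which each hiker next reaches an endpoint of its edge
        def endpoint(e):
            lo, hi = sorted((h[e], h[e + 1]))
            return hi if rising else lo
        ta, tb = endpoint(ea), endpoint(eb)
        w = min(ta, tb) if rising else max(ta, tb)   # next event altitude

        def cross(e, t):
            # if this hiker's event fires, step over the vertex it reached
            if t != w:
                return e, None
            v = e + 1 if h[e + 1] == w else e        # vertex reached
            return (e + 1 if v > e else e - 1), v    # continue past it
        (ea, va), (eb, vb) = cross(ea, ta), cross(eb, tb)
        if va is not None and va == vb:
            return va               # both hikers stand on the same vertex: met
        rising = not rising         # one hiker crossed; water reverses
    raise RuntimeError("no meeting found (assumptions violated?)")
```

Running it on, e.g., [0, 5, 1, 4, 2, 3, 0], the hikers meet at vertex 1: the right-hand hiker walks the whole range while the left-hand hiker repeatedly backtracks on the first slope, exactly as the argument predicts.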
f''
$\begingroup$ Take a JonTheMon's picture. The person on the left has to get all the way to the second mountain and then return all the way to the left side of the first mountain. It is not obvious from your strategy how you handle this. $\endgroup$ – Trenin Oct 29 '15 at 12:19
$\begingroup$ @Trenin The water level goes up and down several times, and the person on the right reverses direction several times, while the person on the left backtracks. This has no effect on the final result. $\endgroup$ – f'' Oct 29 '15 at 15:09
Suppose we reformulate the problem as follows. I apologize for the mathematical jargon; the essential idea is that we are to consider the set of legal positions for the two hikers, and show that the initial position is reachable from the position with the hikers reversed. We do this by showing, in a precise sense, that nothing disconnects them.
To start out, let us make a definition:
Let $f:[0,1]\rightarrow [0,\infty)$ be a continuous* function with $f(0)=f(1)=0$, representing the height of the mountain range a given distance from the left. Assume that the hikers begin at $0$ and $1$ respectively. Next, let $S=[0,1]\times[0,1]$ be the unit square, which is the set of tuples $(x,y)$ of possible positions of the hikers. Define the difference in height $g$ as: $$g(x,y)=f(x)-f(y).$$ Note that the legal positions are exactly those for which $g(x,y)=0$.
We want to know whether there is a path with $g(x,y)=0$ throughout, starting at $(1,0)$ and ending at $(0,1)$. This is equivalent to saying the hikers may meet (as they will meet in the middle when they swap positions). The only way this could fail is if some path of illegal positions $\gamma$ runs from $(1,x)$ or $(x,1)$ to $(0,x)$ or $(x,0)$, dividing the square suitably. Notice, moreover, that $g(1,x)\leq 0$ and $g(0,x)\leq 0$, whereas $g(x,1)\geq 0$ and $g(x,0)\geq 0$. Since $g$ cannot change sign along $\gamma$ (by the intermediate value theorem, a sign change would force a zero of $g$, i.e. a legal position, on $\gamma$), $\gamma$ must run either bottom to top or left to right. However, this is impossible, as $\gamma$ would then have to intersect the line $x=y$, along which $g(x,y)=0$, contradicting that no point of $\gamma$ is legal. Therefore, no path of illegal positions divides the square into two parts, and there is thus, to the contrary, a path of legal positions from $(0,1)$ to $(1,0)$, as desired.
(*I will admit I have no proof in mind to show that "two points in a closed set in $S$ are disconnected if only if a path in the complement divides them." Maybe someone who wants to do some analysis can fill this in. In the piecewise case, we can, with some reductions, work on a discrete grid rather than the unit square and the proof is obvious there)
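As a sanity check of the discrete-grid version mentioned in the footnote, one can search the state space of joint positions directly. Here is a minimal Python sketch (the names and the sample profile are my own, not from the original post), assuming an integer height profile `h` with `h[0] == h[-1] == 0` whose consecutive samples differ by at most 1:

```python
from collections import deque

def can_meet(h):
    """BFS over legal joint positions (i, j) with h[i] == h[j].

    Hikers start at the two ends of the profile h; one move shifts
    either or both hikers to an adjacent sample while keeping their
    heights equal. Returns True if some state (k, k) is reachable.
    """
    n = len(h)
    start = (0, n - 1)
    seen, queue = {start}, deque([start])
    while queue:
        i, j = queue.popleft()
        if i == j:          # hikers at the same point: they met
            return True
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if ((di, dj) != (0, 0) and 0 <= ni < n and 0 <= nj < n
                        and h[ni] == h[nj] and (ni, nj) not in seen):
                    seen.add((ni, nj))
                    queue.append((ni, nj))
    return False

# Two peaks of heights 2 and 3 with a sea-level valley in between
print(can_meet([0, 1, 2, 1, 0, 1, 2, 3, 2, 1, 0]))  # True
```

Running the search on profiles of this form returns True, matching the claim that no curve of illegal positions can separate the two corner states.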
Milo Brandt
Alright, let's call the person on the left Alice, and the person on the right Bob.
The conditions that make it possible for them to be at the same altitude until they meet are:
the first peak that Alice encounters is higher than the first peak that Bob encounters
the bottom of the first valley that Bob encounters is the lowest non-sea level valley on the graph, and the second peak that he encounters is the highest point on the graph. Therefore, this slope encompasses every possible altitude in the mountain range, except for sea level.
Therefore, the solution:
Bob climbs to his first peak; Alice matches his altitude
Bob descends to his first valley; Alice goes back down the mountain, matching his altitude
Alice proceeds to the right; Bob simply matches her altitude the whole way by traversing the slope between his first valley and second peak, until Alice reaches her fourth mountain, which Bob is on the other side of, and they both ascend to meet at the peak.
Steve Eckert
$\begingroup$ This of course works for this particular image, but I suppose the question was asked in greater generality. It doesn't happen here, but it's not hard to draw examples of mountain ranges which require both Alice and Bob to move backwards at a given point. $\endgroup$ – Fimpellizieri Oct 28 '15 at 20:17
At each given time we should only care about the next peak or valley of each of the hikers (let's call them Bruce B. and Tony S.). There are 3 possibilities:
The next peak of Bruce B. is higher than the next peak of Tony S.
The next peak of Bruce B. is lower than the next peak of Tony S.
The next peaks are equally high
In case 1: Tony S. would climb up his peak and Bruce B. would go just as high. Then Tony S. would move to the valley while Bruce B. continues his backtracking descent
In case 2: Bruce B. would climb up his peak and Tony S. would go just as high. Then Bruce B. would move to the valley while Tony S. continues his backtracking descent
In case 3: They both go to the next peak and:
Either they meet (if it was the only remaining peak between them)
Or they look at the next peak and repeat the steps
So basically, the one whose next peak is higher will have to backtrack, letting the other one move forward.
Using this strategy they will eventually meet either:
On the highest peak
Or on the highest point between 2 highest peaks
Novarg
$\begingroup$ What if you can't backtrack low enough? $\endgroup$ – user2357112 Oct 28 '15 at 21:13
$\begingroup$ @user2357112 - Provided the mountains never dip below sea level, this should not occur. Since we know they both started at sea level, there will never be a point at which one needs to go lower but the other is unable to. $\endgroup$ – Darrel Hoffman Oct 28 '15 at 21:33
$\begingroup$ @DarrelHoffman: Suppose Bruce has a valley of height 4 behind him and a peak of height 8 ahead, while Tony is at a peak of height 6 and has a valley of height 2 ahead of him. $\endgroup$ – user2357112 Oct 28 '15 at 21:38
$\begingroup$ Bruce will have to backtrack until he gets back down to height 2 so that Tony can get across his valley. We know he can get to height 2 because he started at height 0. Meanwhile, Tony stays on the forward side of his valley, which by definition he can do because Bruce is somewhere between 4 and 2, which is less than 6. $\endgroup$ – Darrel Hoffman Oct 28 '15 at 22:31
Let's take a peak diagram of:
/\ C
/ \ /\
A / \ / \
/\____/ \ / \
/ \____/ \
1/ \2
The steps would be:
1 to A
1 down, then up towards B
2 hits C, goes down, 1 goes down
1 hits valley, goes back up to A, down until 2 can hit valley
1 goes up to B, 2 goes up to B
Another way of looking at it is: 1-Peak, 1-Valley, 2-Peak, 1-Valley, 1-Peak, 2-Valley, 1-Peak, 1-Valley, 12-Final
Let's try to factor in direction (Forward is towards highest peak): 1-Peak(F-F), 1-Valley(F-A), 2-Peak(F-F), 1-Valley(A-F), 1-Peak(A-A), 2-Valley(A-F), 1-Peak(F-F), 1-Valley(F-A), 12-Final(F-F)
So I don't know if there's a good way of determining which peak/valley to go to.
I would say that it looks like you take the current altitude span (e.g. 2 to C initially vs 1 to A initially) and see if the entirety of the other's route falls into it. Then take the next span (C to valley) and see how much of the other's span falls into that altitude.
JonTheMon
Prove the Vandermonde identity.
For integers \(N, M, n \geq 0\), the following identity holds:
\sum_{k=0}^n\binom{N}{k}\binom{M}{n-k}=\binom{N+M}{n}
asked Nov 27, 2022 in Data Science & Statistics by ♦MathsGee Platinum (163,814 points)
Proof: Sums of the form
\sum_{k=0}^n a_k b_{n-k}
are reminiscent of polynomial multiplication: If two polynomials \(A(x)=\sum_{k=0}^N a_k x^k\) and \(B(x)=\sum_{l=0}^M b_l x^l\) are given, then one obtains for the product of the two
A(x) \cdot B(x)=\sum_{k=0}^N \sum_{l=0}^M a_k b_l x^{k+l} .
The coefficient of \(x^n\) is obtained from those terms for which \(k+l=n\), or equivalently \(l=n-k\). It follows that the coefficient of \(x^n\) equals
\sum_{k=0}^n a_k b_{n-k},
where we set \(a_k=0\) if the degree \(N\) of the polynomial \(A\) is less than \(k\) (and likewise \(b_l=0\) if \(l>M\)). The same argument is also used later in Section 4.3.
In the present case the problem can be solved by applying the binomial theorem to the polynomial \((1+x)^{N+M}\), which is also the product of the two polynomials \((1+x)^N\) and \((1+x)^M\)
(1+x)^{N+M}=(1+x)^N \cdot(1+x)^M=\left(\sum_{k=0}^N\binom{N}{k} x^k\right)\left(\sum_{l=0}^M\binom{M}{l} x^l\right)
=\sum_{k=0}^N \sum_{l=0}^M\binom{N}{k}\binom{M}{l} x^{k+l}=\sum_{n=0}^{N+M}\left(\sum_{k=0}^n\binom{N}{k}\binom{M}{n-k}\right) x^n .
Now we obtain the desired identity upon comparing coefficients with
(1+x)^{N+M}=\sum_{n=0}^{N+M}\binom{N+M}{n} x^n .
Once again, it is possible to prove the identity by means of a simple counting argument: consider a set of \(N+M\) elements, and divide it into two parts with \(N\) and \(M\) elements respectively. If one wants to select \(n\) elements, one can do so by first choosing \(k\) elements from the first part \((k \leq n)\) and the remaining \(n-k\) elements from the second part. For each \(k\), there are precisely \(\binom{N}{k}\binom{M}{n-k}\) possibilities. Summing over all \(k\), we obtain the above expression for \(\binom{N+M}{n}\).
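As a quick numerical sanity check of the identity (not part of the original proofs), one can verify it by brute force in Python; `math.comb` returns 0 when the lower index exceeds the upper, matching the convention \(a_k=0\) above:

```python
from math import comb

def vandermonde_lhs(N, M, n):
    # sum_k C(N, k) * C(M, n - k); comb(N, k) is 0 for k > N
    return sum(comb(N, k) * comb(M, n - k) for k in range(n + 1))

# Check the identity for all small N, M, n
assert all(vandermonde_lhs(N, M, n) == comb(N + M, n)
           for N in range(10) for M in range(10) for n in range(10))
print("Vandermonde identity verified for all N, M, n < 10")
```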
answered Nov 27, 2022 by ♦MathsGee Platinum (163,814 points)
16th International Conference on Bioinformatics (InCoB 2017): Systems Biology
CPredictor3.0: detecting protein complexes from PPI networks with expression data and functional annotations
Ying Xu1,
Jiaogen Zhou3,
Shuigeng Zhou2,4 &
Jihong Guan1
Effectively predicting protein complexes not only helps to understand the structures and functions of proteins and their complexes, but also is useful for diagnosing disease and developing new drugs. Up to now, many methods have been developed to detect complexes by mining dense subgraphs from static protein-protein interaction (PPI) networks, while ignoring the value of other biological information and the dynamic properties of cellular systems.
In this paper, based on our previous works CPredictor and CPredictor2.0, we present a new method for predicting complexes from PPI networks with both gene expression data and protein functional annotations, which is called CPredictor3.0. This new method follows the viewpoint that proteins in the same complex should roughly have similar functions and be active at the same time and place in cellular systems. We first detect active proteins by using gene expression data of different time points and cluster proteins by using gene ontology (GO) functional annotations, respectively. Then, for each time point, we do set intersections with one set corresponding to the active proteins generated from expression data and the other set corresponding to a protein cluster generated from functional annotations. Each resulting set indicates a cluster of proteins that have similar function(s) and are active at that time point. Following that, we map each cluster of active proteins of similar function onto a static PPI network, and get a series of induced connected subgraphs. We treat these subgraphs as candidate complexes. Finally, by expanding and merging these candidate complexes, the predicted complexes are obtained.
We evaluate CPredictor3.0 and compare it with a number of existing methods on several PPI networks and benchmark complex datasets. The experimental results show that CPredictor3.0 achieves the highest F1-measure, which indicates that CPredictor3.0 outperforms these existing methods overall.
CPredictor3.0 can serve as a promising tool of protein complex prediction.
Proteins, as the material basis of life, are the ultimate controllers and direct performers of life activities, and participate in almost all biological functions. Most proteins do not perform biological functions alone, but form protein complexes with others [1]. So, to gain a more comprehensive and deeper understanding of cell composition and life processes, the identification of protein complexes is very important.
Although biological techniques such as Tandem Affinity Purification with Mass Spectrometry (TAP-MS) [2] can detect protein complexes directly, their accuracy is not high. In addition, biological techniques are usually time-consuming and costly. As a result, biological techniques cannot meet the requirements of post-genome research on big biological data.
With the development of high-throughput experimental technologies, PPI data are rapidly increasing, which provides the opportunity to use computational methods to detect protein complexes. Moreover, computational methods can overcome the drawbacks of experimental technologies. PPI networks can be constructed from PPI data, where nodes and edges represent proteins and interactions between proteins, respectively. Empirical studies on PPI networks indicate that there are modular components in these networks [3]. From the viewpoint of network topology, these modules are made up of closely connected proteins; from the viewpoint of biology, these modules aggregate proteins that perform functions together. Thus, protein complexes can be detected by mining the modular structures (i.e., dense subgraphs or subnetworks) from PPI networks.
So far, many studies have put forward different graph clustering methods that detect protein complexes as local dense subgraphs of PPI networks [4–9]. These methods are intuitive and straightforward. To overcome the high false-positive and false-negative rates in PPI networks, many studies have attempted to improve the reliability of PPI data by exploiting gene expression data [10, 11] and protein functional annotations [12, 13], thereby improving the accuracy of protein complex prediction. In addition to dense subgraph mining based approaches, in the past decade some other methods have also been developed, including core-attachment structure based methods [14, 15], methods for non-dense junction complexes and small complexes [16, 17], and methods using dynamic PPI networks [18]. In the next section, we will present a relatively comprehensive survey of complex prediction.
In this paper, based on our previous works CPredictor [19] and CPredictor2.0 [16, 17], we propose a new method called CPredictor3.0, which considers both dynamic PPI and functional information. First, we use expression data of different time points to detect the proteins active at each time point; meanwhile, we cluster proteins by functional annotations such that each cluster contains proteins of similar function(s). Then, we compute protein clusters that have similar function(s) and are active at the same time point by a set intersection operation, with one set corresponding to an active protein set generated from expression data and the other set corresponding to a protein cluster generated from functional annotations. Following that, we map the resulting clusters onto a static PPI network and obtain a series of induced connected subgraphs, which are treated as candidate complexes. Finally, we identify protein complexes by expanding and merging the candidate complexes. Our experimental results validate the effectiveness of CPredictor3.0, which outperforms the existing methods overall.
So far, a variety of computational methods for complex prediction have been proposed. Here, we present a brief survey of the related works by roughly classifying the existing methods into the following types: methods based on local dense subgraphs, methods based on the core-attachment model, methods based on dynamic PPI networks, and methods based on supervised learning. Among them, methods based on local dense subgraphs constitute the largest part of existing works. Note that this method hierarchy only reflects our view of existing works; there may be other hierarchies in the literature. As a brief survey, this section cannot cover all existing works, but we try our best to present the major ones.
Methods based on local dense subgraphs
As one of the earliest computational methods of complex prediction, MCODE [4] first weights each protein based on its core-clustering density in the PPI network; then the protein (say p) with the largest weight is selected as the seed node of a primary complex, which is expanded by recursively including other proteins whose weights exceed a pre-set threshold, till there are no more nodes to be added. If there are unprocessed nodes, new complexes are generated in the way above. Finally, the neighbors of each complex generated above are included into the complex if their weights are higher than a pre-set "fluff" parameter. MCL [5] predicts complexes based on random walks in a PPI network; it is a fast and highly scalable clustering method. To simulate random walks, two operators, expansion and inflation, are used to manipulate the adjacency matrix iteratively. The aim of these two operators is to separate dense subgraphs out from the network. Protein complexes predicted by this method are non-overlapping. ClusterONE [6] uses a new measure to compute the cohesiveness of a subgraph, and works by seeding and expanding with neighboring nodes. This method performed better than the other methods available when it was developed.
As PPI networks contain many false positives and false negatives, some methods weight the edges of PPI networks using network topology, gene expression data and protein function to improve their reliability. DPClus [20] weights each edge according to the number of shared neighbors of the node pair; then the weight of each node is computed by summing the weights of all its edges. Cao et al. [21] treated complex prediction as an optimization problem and built the objective function by considering a variety of topological characteristics; a genetic algorithm was then employed to detect complexes from PPI networks.
In general, proteins that form functional groups have similar gene expression, so some methods weight the edges of PPI networks using expression data. MATISEE [10] measures the intensity of the interaction of a pair of proteins using the correlation of their expression data. Ou-Yang et al. [11] detected protein complexes from signed PPI networks, where the sign of each edge is computed using the Pearson correlation coefficient of the gene expression of the two proteins.
Besides expression data, protein function provides an important clue for protein complex detection. SWEMODE [12], proposed by Lubovac, weights each edge based on the semantic similarity of the function(s) of two proteins; the weight of each node is given by the weighted clustering coefficients of its nearest neighbors. Cho et al. [13] weighted each edge according to the functional similarity of the two nodes, and the weight of each node is the sum of the weights of its edges. A flow simulation algorithm is then used to spread flow from the nodes with larger weights. Each flow travels along edges, its influence decaying according to the similarity of each node pair it passes, and stops when its influence falls below a certain threshold. Thus, the PPI network is divided into multiple subgraphs, each consisting of proteins connected by flow from the same source protein.
In our previous works CPredictor [19] and CPredictor2.0 [16, 17], we also used protein functional information. But different from the existing methods, we first used protein functional information to cluster proteins, then mapped the clusters onto PPI networks. The difference between CPredictor and CPredictor2.0 lies in the usage of functional information. In this paper, we follow the same idea of CPredictor and CPredictor2.0, but we also use expression data. That is, we consider the dynamic property of PPI networks.
Methods based on Core-Attachment model
Gavin et al. [1] studied the structures of yeast protein complexes and found that each protein complex consists of two parts: a core made of tightly connected proteins, and attachments that have relatively sparse interactions with the core.
Following the core-attachment structure, two methods, CORE [14] and COACH [15], were proposed. CORE assesses the probability that two proteins belong to the same core using their common neighbors. Then, larger cores are produced by repeatedly merging cores of sizes two, three and so on. Finally, a protein can be added into a core as an attachment if it has interactions with more than half of the proteins in the core. COACH first identifies small dense subgraphs around proteins of high weight, and then generates cores by merging those dense subgraphs. It adds attachments in the same way as CORE does. Later, Peng et al. [22] proposed the WPNCA method, which divides a weighted PPI network into multiple closely connected subgraphs using the PageRank-Nibble algorithm; protein complexes are then generated in each subgraph based on the core-attachment structure.
Methods based on dynamic PPI networks
Earlier methods detect complexes from static PPI networks. Actually, the interactions among proteins are dynamic and change over time at different biological stages. In recent years, there have been some works on detecting protein complexes from dynamic PPI networks. Tang et al. [18] applied a fixed threshold to cluster proteins using expression data such that each cluster consists of proteins active at the same time point. Since the expression levels of different proteins are quite different, it is unreasonable to use a fixed threshold for all proteins. Later, Wang et al. [23] proposed the three-sigma model to calculate an active threshold for each protein, and achieved better performance of complex prediction. Zhang et al. [24] first identified transient and stable protein interactions to construct dynamic PPI networks based on the three-sigma model, then predicted protein complexes from the dynamic PPI networks. Lei et al. [25] constructed dynamic PPI networks using the same method as in [24], and then optimized the parameters of Markov clustering by the firefly algorithm to detect protein complexes.
Methods based on supervised learning
Some works use supervised learning to detect protein complexes. Qi et al. [26] classified the topological properties of protein complexes into four categories, and used these properties as features to train a probabilistic Bayesian network, which was used to predict complexes from subgraphs randomly generated from PPI networks. Yong et al. [27] used true complexes as training data, and a variety of information such as interactions, functions, text and topology as features, to train a Bayesian model to predict the probabilities that protein interactions are included in small complexes, large complexes and non-complexes, and then small complexes of sizes 2 and 3 were extracted.
Here, we first give an overview of CPredictor3.0, then describe the major components of CPredictor3.0 in detail.
Figure 1 shows the flowchart of CPredictor3.0, which consists of six major steps: 1) Detecting active proteins; 2) Clustering proteins by function; 3) Computing active proteins of similar function; 4) Extracting candidate complexes from PPI networks; 5) Expanding candidate complexes; 6) Merging candidate complexes.
The flowchart of CPredictor3.0. 1) Detecting active proteins; 2) Clustering proteins by function; 3) Computing active proteins of similar function; 4) Extracting candidate complexes from PPI networks; 5) Expanding candidate complexes; 6) Merging candidate complexes
The rationale behind our method is that the proteins of a complex perform some function(s) by interacting with each other at the same time and the same place in cellular systems [28]. CPredictor3.0 works like this: First, it detects active proteins from gene expression data for different time points, then it clusters proteins according to function by using functional annotations. With the results of the above two steps, it computes active protein clusters of similar function. Following that, these clusters are mapped onto a PPI network to extract induced connected subgraphs, which are taken as candidate complexes. Finally, we expand the resulting candidate complexes and merge overlapping ones to get the final predicted complexes. In what follows, we describe these steps in detail.
Detecting active proteins
Gene expression data reveal the dynamic properties of proteins in their lifetime. As a protein is not always active, its expression level changes with its activity degree. Concretely, higher gene expression level means higher activity. To get the active time points of each protein, Tang et al. [18] set a global fixed threshold for all proteins. There are two drawbacks with a global threshold. On the one hand, there is noise in biological data. On the other hand, the gene expression curve for each protein is different. To solve these problems, Wang et al. [23] proposed the three-sigma model to compute active threshold for each protein. In this paper, we use the three-sigma model to calculate the active threshold for each protein.
Suppose the expression data are measured at n time points. For a protein p, V k (p) represents protein p's expression value at time point k, μ(p) and σ(p) are the mean and the standard deviation of expression values over the period from 1 to n. The active threshold of protein p is evaluated as follows:
$$ Active(p) = \mu(p) + \beta*\sigma(p)*\left(1-\frac{1}{1+{\sigma(p)}^{2}}\right), $$
Above, β is an adjustable parameter that helps us obtain an optimal threshold. Usually, β is set to 0, 1, 2 or 3.
After obtaining the active thresholds for all proteins, we can collect all active proteins at each time point. That is, for each protein p at time point i (i=1,…,n), if its expression value is no less than Active(p), then it is an active protein at time point i. In such a way, we can get the set of active proteins AP i for each time point i. Thus, we have a series of active protein sets { AP i (i=1, …, n)}.
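The three-sigma threshold and the resulting active sets AP i can be sketched as follows (a minimal illustration, not the authors' implementation; the dictionary `expr` and its values are hypothetical, and the population standard deviation is assumed):

```python
from statistics import mean, pstdev

def active_threshold(values, beta=1):
    """Active(p) = mu + beta * sigma * (1 - 1 / (1 + sigma^2))."""
    mu, sigma = mean(values), pstdev(values)
    return mu + beta * sigma * (1 - 1 / (1 + sigma ** 2))

def active_sets(expr, n, beta=1):
    """AP_i for i = 1..n: proteins whose expression value at time
    point i reaches their individual three-sigma threshold."""
    thr = {p: active_threshold(v, beta) for p, v in expr.items()}
    return [{p for p, v in expr.items() if v[i] >= thr[p]}
            for i in range(n)]

# Toy data: two proteins measured at 4 time points
expr = {"P1": [0.2, 0.9, 1.1, 0.3], "P2": [1.5, 0.4, 0.2, 1.6]}
print(active_sets(expr, n=4))  # [{'P2'}, {'P1'}, {'P1'}, {'P2'}]
```

Note that each protein gets its own threshold: here P1 is active at the middle two time points and P2 at the first and last.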
Clustering proteins by function
Here, we cluster proteins by functional annotations. First, we compute the functional similarity of any two proteins using the method proposed by Wang et al. [29], then we employ the spectral clustering algorithm to cluster the proteins with the computed similarity matrix.
The similarity between any two proteins is computed from the GO terms annotated on the two proteins. GO includes a series of biological terms to describe genes and gene products such as proteins, and covers three aspects: biological process (BP), cellular component (CC), and molecular function (MF). Here, we use only BP data. GO can be represented as a directed acyclic graph (DAG), in which nodes represent terms and edges represent relationships (e.g. 'is-a' and 'part-of') between terms. A GO term A can be described as DAG A =(A,T A ,E A ), where T A consists of term A and all its ancestors in the DAG, and E A is composed of all edges (relationships) connecting the terms in T A . As defined in Wang's method, the semantic value of a term A is the sum of the semantic contributions of all terms in DAG A to A. The semantic contribution of term t to A is as follows:
$$ S_{A}(t) = \left\{ \begin{array}{rcl} 1 & &{t=A} \\ max\{w_{e} *S_{A}(t')|t' \in childrenof(t)\} & &{ t \ne A} \end{array} \right. $$
where the function childrenof(t) returns the children of t in DAG A , and w e is the weight of the edge between t and t ′, which depends on the relationship type between the two terms. For example, the weight is 0.8 for 'is-a' and 0.6 for 'part-of'.
So the semantic value SV(A) of term A is evaluated as follows:
$$ SV(A) = \sum_{t \in T_{A}}S_{A}(t). $$
The semantic similarity S GO (A,B) between term A and B is evaluated as follows:
$$ S_{GO}(A,B)= \frac{\sum_{t \in {T_{A} \cap T_{B}}}\left(S_{A}(t)+S_{B}(t)\right)}{SV(A)+SV(B)}. $$
Generally, one protein may participate in one or more biological functions, so one protein may be annotated by multiple terms. For two proteins P1 and P2, which are annotated by { go 11, go 12, ⋯, go 1m } and { go 21, go 22, ⋯, go 2n } respectively, their similarity can be evaluated as follows:
$$ Sim(P1,P2)= \frac{\sum_{i=1}^{m} Sim\left(go_{1i},P_{2}\right)+\sum_{j=1}^{n} Sim\left(go_{2j},P_{1}\right)}{m+n} $$
$$ Sim(go,P)= max_{1 \leq i \leq k} \left(S_{GO}\left(go, go_{i}\right)\right) $$
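Wang's term-level measure can be illustrated on a toy DAG (a sketch under stated assumptions: the terms and edges below are made up, all edges are taken as 'is-a' with weight 0.8, and each shared ancestor t contributes S_A(t)+S_B(t) to the numerator, following Wang et al. [29]):

```python
def s_values(term, parents, w=0.8):
    """S_A(t) for every t in T_A, walking up the DAG; `parents`
    maps each term to the list of its direct parents."""
    S = {term: 1.0}
    frontier = [term]
    while frontier:
        child = frontier.pop()
        for p in parents.get(child, []):
            contrib = w * S[child]
            if contrib > S.get(p, 0.0):   # keep the max over children
                S[p] = contrib
                frontier.append(p)
    return S

def s_go(a, b, parents):
    """Semantic similarity S_GO(A, B) of two GO terms."""
    SA, SB = s_values(a, parents), s_values(b, parents)
    shared = set(SA) & set(SB)
    return (sum(SA[t] + SB[t] for t in shared)
            / (sum(SA.values()) + sum(SB.values())))

# Toy DAG: terms a and b are both children of t1, which is a child of root
parents = {"a": ["t1"], "b": ["t1"], "t1": ["root"]}
print(round(s_go("a", "b", parents), 3))  # 0.59
```

Sibling terms share the ancestors t1 and root, so their similarity is high but below 1; a term compared with itself scores exactly 1.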
After getting the similarity matrix for all proteins, where each element represents the semantic similarity of two proteins, we apply the spectral clustering algorithm [30] to the matrix to cluster all proteins into K disjoint clusters PC={ PC 1, PC 2, ⋯, PC K }, where K is an adjustable parameter that controls the number of protein clusters.
Complex generation
Computing active protein clusters of similar function
With the sets of active proteins {AP i |i=1,…,n} and the set of protein clusters of similar function {PC j |j=1,…,K}, we now compute the active protein clusters of similar function. For time point i, the set of active protein clusters of similar function is APC i =AP i ∩{PC j |j=1,…,K}={AP i ∩PC j |j=1,…,K}. Thus, we can get all active protein clusters of similar function as follows:
$$ \begin{aligned} APC&=\{APC_{i}|i=1, \cdots, n\}\\ &=\{AP_{i} \cap PC_{j} |i=1, \cdots, n; j=1, \ldots, K\}\\ &=\{APC_{ij}|i=1, \cdots, n; j=1, \ldots, K\}. \end{aligned} $$
Computing candidate complexes
Having obtained the set of active protein clusters of similar function, and considering that complexes consist of interacting proteins, we map all active protein clusters of similar function onto a PPI network G=(V,E), where V and E represent proteins and interactions respectively, to get the connected subgraphs induced by each cluster on G. Concretely, given the active protein cluster of similar function APC ij , we map APC ij onto G and get the graph G ij =(V ij ,E ij ) induced by APC ij . That is, V ij =APC ij and E ij is the set of edges in G that connect proteins in APC ij . G ij may not be a connected graph, i.e., it may consist of several connected subgraphs. We treat each resulting subgraph of size > 1 as a candidate complex. Thus, from G ij we get a set of candidate complexes CC ij . Similarly, by mapping the other active protein clusters of similar function onto G, we obtain the other candidate complexes. We denote the set of all candidate complexes as CC ={CC ij |i=1,⋯,n;j=1,…,K}.
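Extracting CC ij amounts to collecting the connected components of size > 1 of the subgraph induced by APC ij on G. A dependency-free sketch (the protein names and edges are hypothetical, not from the paper's datasets):

```python
def candidate_complexes(apc, edges):
    """Connected components of size > 1 in the subgraph of the PPI
    network induced by the protein set `apc`.
    `edges` is a collection of 2-element frozensets of proteins."""
    adj = {p: set() for p in apc}
    for e in edges:
        u, v = tuple(e)
        if u in apc and v in apc:      # keep only induced edges
            adj[u].add(v)
            adj[v].add(u)
    seen, components = set(), []
    for p in apc:
        if p not in seen:
            comp, stack = set(), [p]   # DFS for the component of p
            while stack:
                q = stack.pop()
                if q not in comp:
                    comp.add(q)
                    stack.extend(adj[q] - comp)
            seen |= comp
            if len(comp) > 1:          # singletons are discarded
                components.append(comp)
    return components

edges = {frozenset(e) for e in [("a", "b"), ("b", "c"), ("d", "e"), ("e", "f")]}
print(candidate_complexes({"a", "b", "c", "e", "f"}, edges))
```

Here the induced subgraph splits into the components {a, b, c} and {e, f}; the edge (d, e) is dropped because d is outside the cluster.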
Candidate complex expanding
Here, we try to expand each candidate complex on G. Consider a candidate complex c∈CC whose corresponding graph is G c =(V c ,E c ). First, we search for the set of neighbor nodes of candidate complex c in G, which is denoted as N G (c). We have
$$ N_{G}(c)= \left\{p| E(p,G_{c})\in E \quad and \quad p \in (V-V_{c}) \right\} $$
where p is any protein not in V c and E(p,G c ) is the set of interactions between protein p and any protein in G c . For any protein p∈N G (c), if the following condition holds, we add p and its interactions with proteins in c to c:
$$ |E(p, G_{c})| \geq \alpha *|V_{c}|. $$
Above, α is a pre-specified threshold in the range 0 to 1. In this paper, it is set to 0.6 empirically. When α is set to 0, all neighbors of V c will be added into V c ; when α is set to 1, hardly any nodes will be added into V c . Usually, if a node interacts with more than half of the nodes of V c , the node is added [15, 19]. Our experimental results validate the rationality of this value setting. The expansion process continues till no more neighbors can be added to c. We apply expansion to all candidate complexes in CC, and denote the set of candidate complexes after expansion as CC exp .
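The expansion rule |E(p, G c )| ≥ α·|V c | can be sketched as a fixed-point loop (a minimal illustration rather than the authors' code; `adj` is a hypothetical adjacency map of the PPI network G):

```python
def expand(complex_nodes, adj, alpha=0.6):
    """Greedily add any neighbor p of the candidate complex c that
    has at least alpha * |V_c| interactions into c, until no more fit."""
    c = set(complex_nodes)
    changed = True
    while changed:
        changed = False
        neighbors = set().union(*(adj[p] for p in c)) - c
        for p in neighbors:
            if len(adj[p] & c) >= alpha * len(c):
                c.add(p)
                changed = True
    return c

adj = {"a": {"b", "c", "x"}, "b": {"a", "c", "x"}, "c": {"a", "b"},
       "x": {"a", "b", "y"}, "y": {"x"}}
print(sorted(expand({"a", "b", "c"}, adj)))  # ['a', 'b', 'c', 'x']
```

In this toy network, x has 2 ≥ 0.6·3 interactions with the complex and is absorbed, while y (a single interaction with the enlarged complex) is not.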
Candidate complexes merging
Candidate complexes in CC exp may overlap. For two overlapping candidate complexes, if their overlapping score is larger than a predefined threshold, we merge them into one complex. Concretely, given two candidate complexes c A and c B , their overlapping score is evaluated as follows:
$$ OS(c_{A},c_{B}) = \frac{\left|V_{c_{A}} \cap V_{c_{B}}\right|}{\left|V_{c_{A}} \cup V_{c_{B}}\right|}. $$
If OS(c A ,c B )≥γ, we merge c A and c B . Here, γ is a pre-specified parameter; by experiments, we set γ=0.8. When no more candidate complexes can be merged, the remaining candidate complexes constitute the final set of predicted complexes.
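The merging rule can be sketched as pairwise merging until a fixpoint is reached (the set contents below are illustrative):

```python
def overlap_score(a, b):
    """OS(c_A, c_B) = |V_A ∩ V_B| / |V_A ∪ V_B| (Jaccard index)."""
    return len(a & b) / len(a | b)

def merge_complexes(complexes, gamma=0.8):
    """Repeatedly merge any pair of candidate complexes whose
    overlapping score reaches gamma, until no pair qualifies."""
    pool = [set(c) for c in complexes]
    merged = True
    while merged:
        merged = False
        for i in range(len(pool)):
            for j in range(i + 1, len(pool)):
                if overlap_score(pool[i], pool[j]) >= gamma:
                    pool[i] |= pool.pop(j)  # absorb j into i
                    merged = True
                    break
            if merged:
                break
    return pool
```

For example, {1,2,3,4,5} and {1,2,3,4,5,6} have OS = 5/6 ≈ 0.83 ≥ 0.8 and are merged, while a disjoint {7,8} stays separate.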
The Algorithm
The algorithm of CPredictor3.0 is presented in Algorithm 1. Here, Lines 5-10 compute the active protein clusters of similar function, Lines 11-24 perform candidate complex extraction, Lines 25-35 perform candidate complex expansion, and Lines 36-40 perform candidate complex merging.
Data sources and Metrics
We downloaded the gene expression dataset GSE3413 [31] from Gene Expression Omnibus (GEO) to compute active proteins. As gene products cover more than 96% of the proteins in PPI networks, it is reasonable to detect active proteins from expression data at different time points. GSE3413 is an expression profile matrix of yeast that contains three successive metabolic cycles, each with 12 time intervals, so each protein has 12 expression values in every cycle. To reduce the impact of noise, we averaged the expression value at each of the 12 time points over the three cycles, and used these averaged values in our experiments.
In addition to gene expression data, we used three PPI datasets of yeast, referred to as Krogan [32], Collins [33] and WI-PHI [34]. The numbers of proteins and interactions in these three datasets are presented in the 2nd and 3rd columns of Table 1. MIPS [35] and CYC2008 [36] were used as reference complex sets; the numbers of complexes and proteins contained in these two sets are presented in the 2nd and 3rd columns of Table 2. In this paper, the GOSemSim package [37] was employed to compute the protein functional similarity matrix.
Table 1 The statistics of PPI datasets
Table 2 The statistics of benchmark datasets
To measure the quality of predicted protein complexes, predicted complexes are checked against reference complexes. Let P=(V p ,E p ) and R=(V r ,E r ) be a predicted complex and a known complex, respectively. The affinity score (AS) of the two complexes is defined as follows:
$$ AS(P,R) = \frac{|V_{p} \cap V_{r}|^{2}}{|V_{p}| *|V_{r}|}. $$
Usually, P and R are considered matched when AS(P,R)≥0.2. This criterion has been widely used in the literature [19, 21, 38–43]. However, as stated in PPSampler2 [43], for complexes of size 2, that is, when both V p and V r have size 2, sharing even a single protein already gives \(\frac {1}{2 *2} = 0.25 > 0.2\). This means that size-2 candidates can too easily be counted as real complexes, which may introduce randomness into the final result and affect the correctness of the performance evaluation. Actually, most existing methods cannot effectively detect size-2 complexes, because they treat complexes as dense subgraphs while size-2 complexes are just single edges. So a common strategy is simply to neglect size-2 complexes. In our method, we follow this strategy and discard predicted complexes with only two proteins.
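The affinity-score match test, including the size-2 corner case discussed above, can be written directly:

```python
def affinity_score(pred, ref):
    """AS(P, R) = |V_p ∩ V_r|^2 / (|V_p| * |V_r|)."""
    inter = len(pred & ref)
    return inter * inter / (len(pred) * len(ref))

def is_match(pred, ref, threshold=0.2):
    """Match criterion used throughout the evaluation."""
    return affinity_score(pred, ref) >= threshold
```

Two size-2 complexes sharing a single protein already score 1/(2*2) = 0.25 > 0.2, which is exactly why size-2 predictions are discarded.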
In our method, recall, precision and F1-measure are used to measure the prediction performance. Let PS = { ps 1, ⋯, ps m } and RS = { rs 1, ⋯, rs n } be the predicted complex set and the benchmark complex set respectively; the three performance metrics are evaluated as follows:
$$\begin{array}{*{20}l} recall &= \frac{N_{r}}{|RS|}, \end{array} $$
$$\begin{array}{*{20}l} precision &= \frac{N_{p}}{|PS|}, \end{array} $$
$$\begin{array}{*{20}l} F1\text{-}measure &= \frac {2*recall *precision}{recall +precision}. \end{array} $$
Above, N r is the number of reference complexes that match at least one predicted complex, and N p is the number of predicted complexes that match at least one reference complex. |RS| and |PS| are the sizes of the benchmark complex set and the predicted complex set, respectively.
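These three metrics can be computed from two lists of complexes (each a set of proteins) as a straightforward transcription of the definitions above:

```python
def evaluate(predicted, reference, threshold=0.2):
    """Return (recall, precision, F1) given lists of predicted and
    reference complexes, using the affinity-score match criterion."""
    def match(p, r):
        inter = len(p & r)
        return inter * inter / (len(p) * len(r)) >= threshold
    # N_r: reference complexes matched by at least one prediction.
    n_r = sum(any(match(p, r) for p in predicted) for r in reference)
    # N_p: predicted complexes matching at least one reference complex.
    n_p = sum(any(match(p, r) for r in reference) for p in predicted)
    recall = n_r / len(reference)
    precision = n_p / len(predicted)
    f1 = (2 * recall * precision / (recall + precision)
          if recall + precision else 0.0)
    return recall, precision, f1
```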
We present the experimental results from three aspects. Firstly, we examine the size distribution of the protein complexes predicted by different algorithms. Secondly, we check the impact of the two parameters K and β on the prediction performance of our method. Finally, we compare our method with major existing methods in terms of recall, precision and F1-measure.
The size distribution of predicted protein complexes
As our method employs function and expression constraints to filter complexes, it may tend to produce small complex candidates. However, it also uses cluster expansion and merging strategies to generate the final predictions. To check the effectiveness of these strategies, we present in Fig. 2 the size distribution of the protein complexes predicted by different methods on different PPI datasets against different complex benchmark sets. It is clear that complexes with 5 or more proteins account for the largest share of our method's predictions, which indicates that the expansion and merging strategies employed in our method are effective.
The distribution of protein complex size. a Krogan PPI data set. b Collins PPI data set. c WI-PHI PPI data set
The effect of parameters on the performance of CPredictor3.0
In our method, there are two adjustable parameters, K and β, which can impact prediction performance. Fig. 3 shows how the F1-measure changes with the values of these two parameters.
The effect of K and β on prediction performance. a Krogan PPI data and MIPS reference complexes set. b Krogan PPI data and CYC2008 reference complexes set. c Collins PPI data and MIPS reference complexes set. d Collins PPI data and CYC2008 reference complexes set. e WI-PHI PPI data and MIPS reference complexes set. f WI-PHI PPI data and CYC2008 reference complexes set
By checking the complexes in the reference sets, we can see that most protein complexes have fewer than 30 proteins. In the experiments, we varied K from 1 to 100. The parameter β sets the threshold for filtering active proteins; following the three-sigma rule, we set its largest value to 3 and varied it from 0 to 3. As shown in Fig. 3, the performance tends to be stable when K is greater than 20, and in most settings the best F1-measure is achieved when β is set to 0. So in the comparison experiments, we set K=30, β=0 for the Collins PPI data, K=35, β=0 for the Krogan PPI data, and K=17, β=1 for the WI-PHI PPI data.
By further checking the predicted complexes, we find some large complexes of size > 100 when K is set small. This is reasonable: the parameter K specifies the number of clusters into which the proteins are divided, so a small K leads to large clusters, i.e., large complexes, and vice versa.
The parameter β is the threshold for filtering active proteins from gene expression data; a larger β results in more proteins being filtered out as inactive. In Fig. 3, we can see that when β is set to 0, i.e., the threshold is simply the mean of the expression values over all time points, we get the best performance on the Collins and Krogan PPI networks, while the best performance on the WI-PHI network is achieved with β=1.0.
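The β-based filter is not spelled out as code in the paper; a plausible sketch, assuming a protein is called active at a time point when its expression there reaches its mean plus β standard deviations:

```python
from statistics import mean, stdev

def activity_threshold(profile, beta):
    """mu + beta * sigma over a protein's expression profile; with
    beta = 0 this reduces to the mean, as used for Collins and Krogan."""
    sigma = stdev(profile) if len(profile) > 1 else 0.0
    return mean(profile) + beta * sigma

def is_active(profile, t, beta=0.0):
    """Is the protein active at time point t (0-based index)?"""
    return profile[t] >= activity_threshold(profile, beta)
```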
Comparison with major existing methods
Here, we compare our method CPredictor3.0 with eight existing protein complex prediction methods: MCODE [4], DPClus [20], RNSC [44], CORE [14], ClusterONE [6], Zhang et al. [24], CPredictor [19], and CPredictor2.0 [16, 17]. Some of them, such as ClusterONE [6] and CPredictor2.0 [16, 17], are state-of-the-art techniques. All parameters of the compared methods were set as suggested by their authors.
The experimental results are shown in Fig. 4. We can see that in five of the six experimental settings, CPredictor3.0 achieves the highest F1-measure, and in the remaining setting its F1-measure is still comparable to the best one. In three of the six settings, CPredictor3.0 has the highest precision, and it has the 2nd highest precision in the other three. As for recall, CPredictor3.0 ranks second or third in five settings and fifth in one. Thus, overall our method performs best among the nine methods.
Performance comparison with eight existing protein complex prediction algorithms in terms of recall, precision, and F1-measure. Our method CPreditor3.0 achieves the highest F1-measure in five of the six experimental settings. (a) Results with Krogan as PPI dataset and MIPS as complex reference set, (b) Results with Krogan as PPI dataset and CYC2008 as complex reference set, (c) Results with Collins as PPI dataset and MIPS as complex reference set, (d) Results with Collins as PPI dataset and CYC2008 as complex reference set, (e) Results with WI-PHI as PPI dataset and MIPS as complex reference set, (f) Results with WI-PHI as PPI dataset and CYC2008 as complex reference set
From Fig. 4, we can see that all methods perform differently on different PPI datasets and complex reference sets. To give a detailed picture, we compute the average F1 values of all compared methods over the six settings; the results are presented in Table 3. Checking these results, we can see that, on the one hand, given the PPI dataset (Krogan, Collins or WI-PHI), the performance with CYC2008 as reference set is better than that with MIPS. On the other hand, given the complex reference set (MIPS or CYC2008), using the Collins PPI dataset yields the best performance and using the WI-PHI PPI dataset yields the worst. This observation can be explained by the number of proteins shared between the PPI dataset and the reference set used in the prediction. Compared with Krogan and Collins, WI-PHI has the largest number of proteins, and for most complexes predicted from WI-PHI no matching complexes can be found in the two reference sets, which results in low performance.
Table 3 The average F1-measure values of the nine algorithms on various PPI datasets and complex reference sets
To give a detailed explanation, we compute the ratio of the number of proteins shared between each PPI dataset and each complex reference set to the number of proteins contained in the PPI dataset, which we call the "overlapping ratio" for short. The results are presented in Table 4. From this table, we can see that 30.6% and 49.2% of the proteins in the Krogan and Collins PPI datasets, respectively, overlap with the MIPS complex set, while only 19.1% of the proteins in the WI-PHI PPI dataset do. The overlapping ratios of Krogan, Collins and WI-PHI with CYC2008 are 43.1%, 68.8% and 25.3%, respectively. In summary, for any PPI dataset, the overlapping ratio with CYC2008 is higher than that with MIPS; for any reference set, the overlapping ratio is highest with Collins, then Krogan, and lowest with WI-PHI. This trend is completely consistent with the results in Table 3, which explains the performance differences across the six settings.
Table 4 Overlapping protein ratios between PPI datasets and complex reference sets
This paper introduced a new method, CPredictor3.0, to boost complex prediction performance on PPI networks by using both expression data and functional annotations. Experiments on three commonly used PPI datasets and two benchmark complex sets show that CPredictor3.0 performs best overall. It is well recognized that complexes consist of proteins that have similar function and are active at the same time and place in cellular systems. Our method considers all these aspects, covering both function and dynamic interaction, by combining PPI data, functional annotations and expression data, which may explain its superior performance.
As for future work, we are firstly considering more advanced models to extract complexes from PPI networks, such as graph sparsity models [45] and temporal graph mining models [46]. Secondly, small complex detection is a more challenging task [17], which is another focus of our future study. Finally, for better complex prediction performance, we will also consider building reliable and robust PPI networks by fusing multiple networks [47].
Gavin AC, Aloy P, Grandi P, Krause R, Boesche M, Marzioch M, Rau C, Jensen LJ, Bastuck S, Dumpelfeld B, Edelmann A, Heurtier MA, Hoffman V, Hoefert C, Klein K, Hudak M, Michon AM, Schelder M, Schirle M, Remor M, Rudi T, Hooper S, Bauer A, Bouwmeester T, Casari G, Drewes G, Neubauer G, Rick JM, Kuster B, Bork P, Russell RB, Superti-Furga G. Proteome survey reveals modularity of the yeast cell machinery. Nature. 2006; 440(7084):631–6.
Rigaut G, Shevchenko A, Rutz B, Wilm M, Mann M, Seraphin B. A generic protein purification method for protein complex characterization and proteome exploration. Nat Biotechnol. 1999; 17(10):1030–2.
Barabasi AL, Oltvai ZN. Network biology: Understanding the cell's functional organization. Nat Rev Genet. 2004; 5(2):101–15.
Bader GD, Hogue CW. An automated method for finding molecular complexes in large protein interaction networks. BMC Bioinformatics. 2003; 4:2.
Pereira-Leal JB, Enright AJ, Ouzounis CA. Detection of functional modules from protein interaction networks. Proteins Struct Funct Bioinforma. 2004; 54(1):49–57.
Nepusz T, Yu H, Paccanaro A. Detecting overlapping protein complexes in protein-protein interaction networks. Nat Methods. 2012; 9(5):471–2.
Spirin V, Mirny LA. Protein complexes and functional modules in molecular networks. Proc Natl Acad Sci. 2003; 100(21):12123–8.
Adamcsek B, Palla G, Farkas IJ, Derenyi I, Vicsek T. Cfinder: locating cliques and overlapping modules in biological networks. Bioinformatics. 2006; 22(8):1021–3.
Zhang W, Zou X. A new method for detecting protein complexes based on the three node cliques. IEEE/ACM Trans Comput Biol Bioinforma. 2015; 12(4):879–86.
Ulitsky I, Shamir R. Identification of functional modules using network topology and high-throughput data. BMC Syst Biol. 2007; 1:8.
Ou-Yang L, Dai DQ, Zhang XF. Detecting protein complexes from signed protein-protein interaction networks. IEEE/ACM Trans Comput Biol Bioinforma. 2015; 12(6):1333–44.
Lubovac Z, Gamalielsson J, Olsson B. Combining functional and topological properties to identify core modules in protein interaction networks. Proteins Struct Funct Bioinforma. 2006; 64(4):948–59.
Cho YR, Hwang W, Ramanathan M, Zhang A. Semantic integration to identify overlapping functional modules in protein interaction networks. BMC Bioinformatics. 2007; 8:265.
Leung HCM, Xiang Q, Yiu SM, Chin FYL. Predicting protein complexes from ppi data: A core-attachment approach. J Comput Biol. 2009; 16(2):133–44.
Wu M, Li X, Kwoh CK, Ng SK. A core-attachment based method to detect protein complexes in ppi networks. BMC Bioinformatics. 2009; 10:169.
Xu B, Guan J, Wang Y, Zhou S. CPredictor2.0: Effectively detecting both small and large complexes from protein-protein interaction networks In: Bourgeois A, Skums P, Wan X, Zelikovsky A, editors. Lecture Notes in Bioinformatics. vol. 9683. Berlin: Springer: 2016. p. 301–3.
Xu B, Wang Y, Wang Z, Zhou J, Zhou S, Guan J. An effective approach to detecting both small and large complexes from protein-protein interaction networks. BMC Bioinformatics. 2017; 18(S12):19–28.
Tang X, Wang J, Liu B, Li M, Chen G. A comparison of the functional modules identified from time course and static ppi network data. BMC Bioinformatics. 2011; 12(13):339–53.
Xu B, Guan J. From function to interaction: A new paradigm for accurately predicting protein complexes based on protein-to-protein interaction networks. IEEE/ACM Trans Comput Biol Bioinforma. 2014; 11(4):616–27.
Altaf-Ul-Amin M, Shinbo Y, Mihara K, Kurokawa K, Kanaya S. Development and implementation of an algorithm for detection of protein complexes in large interaction networks. BMC Bioinformatics. 2006; 7:207.
Cao B, Luo J, Liang C, Wang S, Song D. Moepga: A novel method to detect protein complexes in yeast protein–protein interaction networks based on multiobjective evolutionary programming genetic algorithm. Comput Biol Chem. 2015; 58:173–81.
Peng W, Wang J, Zhao B, Wang L. Identification of protein complexes using weighted pagerank-nibble algorithm and core-attachment structure. IEEE/ACM Trans Comput Biol Bioinforma (TCBB). 2015; 12(1):179–92.
Wang J, Peng X, Li M, Pan Y. Construction and application of dynamic protein interaction network based on time course gene expression data. Proteomics. 2013; 13(2):301–12.
Zhang Y, Lin H, Yang Z, Wang J, Liu Y, Sang S. A method for predicting protein complex in dynamic ppi networks. BMC Bioinformatics. 2016; 17(7):229.
Lei X, Wang F, Wu FX, Zhang A, Pedrycz W. Protein complex identification through markov clustering with firefly algorithm on dynamic protein–protein interaction networks. Inf Sci. 2016; 329:303–16.
Qi Y, Balem F, Faloutsos C, Klein-Seetharaman J, Bar-Joseph Z. Protein complex identification by supervised graph local clustering. Bioinformatics. 2008; 24(13):250–8.
Yong CH, Maruyama O, Wong L. Discovery of small protein complexes from ppi networks with size-specific supervised weighting. BMC Syst Biol. 2014; 8(S-5):S3.
Ruepp A, Brauner B, Dunger-Kaltenbach I, Frishman G, Montrone C, Stransky M, Waegele B, Schmidt T, Doudieu ON, Stumpflen V, Mewes HW. Corum: the comprehensive resource of mammalian protein complexes. Nucleic Acids Res. 2007; 36(Database):646–50.
Wang JZ, Du Z, Payattakool R, Yu PS, Chen CF. A new method to measure the semantic similarity of go terms. Bioinformatics. 2007; 23(10):1274–81.
von Luxburg U. A tutorial on spectral clustering. Stat Comput. 2007; 17(4):395–416.
Tu BP, Kudlicki A, Rowicka M, McKnight SL. Logic of the yeast metabolic cycle: Temporal compartmentalization of cellular processes. Science. 2005; 310(5751):1152–8.
Krogan NJ, Cagney G, Yu HY, Zhong GQ, Guo XH, Ignatchenko A, Li J, Pu SY, Datta N, Tikuisis AP, Punna T, Peregrin-Alvarez JM, Shales M, Zhang X, Davey M, Robinson MD, Paccanaro A, Bray JE, Sheung A, Beattie B, Richards DP, Canadien V, Lalev A, Mena F, Wong P, Starostine A, Canete MM, Vlasblom J, Wu S, Orsi C, Collins SR, Chandran S, Haw R, Rilstone JJ, Gandi K, Thompson NJ, Musso G, St Onge P, Ghanny S, Lam M, Butland G, Altaf-Ui AM, Kanaya S, Shilatifard A, O'Shea E, Weissman JS, Ingles CJ, Hughes TR, Parkinson J, Gerstein M, Wodak SJ, Emili A, Greenblatt JF. Global landscape of protein complexes in the yeast saccharomyces cerevisiae. Nature. 2006; 440(7084):637–43.
Collins SR, Kemmeren P, Zhao XC, Greenblatt JF, Spencer F, Holstege FCP, Weissman JS, Krogan NJ. Toward a comprehensive atlas of the physical interactome of saccharomyces cerevisiae. Mol Cell Proteomics. 2007; 6(3):439–50.
Kiemer L, Costa S, Ueffing M, Cesareni G. WI-PHI: a weighted yeast interactome enriched for direct physical interactions. Proteomics. 2007; 7(6):932–43.
Mewes HW, Amid C, Arnold R, Frishman D, Guldener U, Mannhaupt G, Munsterkotter M, Pagel P, Strack N, Stumpflen V, Warfsmann J, Ruepp A. MIPS: analysis and annotation of proteins from whole genomes. Nucleic Acids Res. 2004; 32(SI):41–4.
Pu S, Wong J, Turner B, Cho E, Wodak SJ. Up-to-date catalogues of yeast protein complexes. Nucleic Acids Res. 2009; 37(3):825–31.
Yu G, Li F, Qin Y, Bo X, Wu Y, Wang S. GOSemSim: an R package for measuring semantic similarity among GO terms and gene products. Bioinformatics. 2010; 26(7):976–8.
Pellegrini M, Baglioni M, Geraci F. Protein complex prediction for large protein protein interaction networks with the core&peel method. BMC Bioinformatics. 2016; 17(12):372.
Li X, Wu M, Kwoh CK, Ng SK. Computational approaches for detecting protein complexes from protein interaction networks: a survey. BMC Genomics. 2010; 11(1):3.
Luo J, Lin D. A cell-core-attachment approach for identifying protein complexes in ppi network. In: Natural Computation (ICNC), 2015 11th International Conference On. Piscataway: IEEE: 2015. p. 405–12.
Hu AL, Chan KC. Utilizing both topological and attribute information for protein complex identification in ppi networks. IEEE/ACM Trans Comput Biol Bioinforma. 2013; 10(3):780–92.
Li XL, Foo CS, Tan SH, Ng SK. Interaction graph mining for protein complexes using local clique merging. Genome Inform. 2005; 16(2):260–9.
Widita CK, Maruyama O. PPSampler2: Predicting protein complexes more accurately and efficiently by sampling. BMC Syst Biol. 2013; 7(Suppl 6):14.
King AD, Przulj N, Jurisica I. Protein complex prediction via cost-based clustering. Bioinformatics. 2004; 20(17):3013–20.
Gao L, Zhou S. Group and graph joint sparsity for linked data classification In: Schuurmans D, Wellman MP, editors. Proceedings of AAAI. Palo Alto: AAAI press: 2016.
Yang Y, Yan D, Wu H, Cheng J, Zhou S, Lui JCS. Diversified temporal subgraph pattern mining In: Krishnapuram B, Shah M, Smola AJ, Aggarwal CC, Shen D, Rastogi R, editors. Proceedings of KDD. New York: ACM: 2016.
Zheng X, Wang Y, Tian K, Zhou J, Guan J, Luo L, Zhou S. Fusing multiple protein-protein similarity networks to effectively predict lncrna-protein interactions. BMC Bioinformatics. 2017; 18(S12):11–18.
National Natural Science Foundation of China (NSFC) (grant No. 61772367) for manuscript writing and publication costs of this article; the National Key Research and Development Program of China (grant No. 2016YFC0901704) for data collection and analysis; the Program of Shanghai Subject Chief Scientist (15XD1503600) for data collection and manuscript writing. No funding body played any role in the design of the study or its conclusions.
All datasets, codes and results are available at http://dmb.tongji.edu.cn/supplementary-information/cpredictor3.
This article has been published as part of BMC Systems Biology Volume 11 Supplement 7, 2017: 16th International Conference on Bioinformatics (InCoB 2017): Systems Biology. The full contents of the supplement are available online at https://bmcsystbiol.biomedcentral.com/articles/supplements/volume-11-supplement-6.
JH and SG designed the research and revised the manuscript. YX developed the algorithm, carried out experiments, analyzed the experimental results, and drafted the manuscript. JG was involved in data analysis and revising the paper. All authors read and approved the final manuscript.
Department of Computer Science and Technology, Tongji University, Shanghai, 201804, China
Ying Xu & Jihong Guan
Shanghai Key Lab of Intelligent Information Processing, and School of Computer Science, Fudan University, Shanghai, 200433, China
Shuigeng Zhou
The institute of subtropical Agriculture, China Academy of Sciences, 444 Yuandaer Road, Mapoling, Changsha, 410125, China
Jiaogen Zhou
The Bioinformatics Lab at Changzhou NO. 7 People's Hospital, Changzhou, Jiangsu, 213011, China
Ying Xu
Jihong Guan
Correspondence to Jihong Guan.
Xu, Y., Zhou, J., Zhou, S. et al. CPredictor3.0: detecting protein complexes from PPI networks with expression data and functional annotations. BMC Syst Biol 11, 135 (2017). https://doi.org/10.1186/s12918-017-0504-3
PPI network
Protein complex
GO annotation
Volume 10, Number 2 (2016), 223-234.
On Jordan centralizers of triangular algebras
Lei Liu
Let A be a unital algebra over a number field F. A linear mapping ϕ from A into itself is called a Jordan-centralized mapping at a given point G∈A if ϕ(AB+BA)=ϕ(A)B+ϕ(B)A=Aϕ(B)+Bϕ(A) for all A, B∈A with AB=G. In this paper, it is proved that each Jordan-centralized mapping at a given point of triangular algebras is a centralizer. These results are then applied to some non-self-adjoint operator algebras.
Banach J. Math. Anal., Volume 10, Number 2 (2016), 223-234.
First available in Project Euclid: 23 February 2016
https://projecteuclid.org/euclid.bjma/1456246277
doi:10.1215/17358787-3492545
Primary: 47L35: Nest algebras, CSL algebras
Secondary: 47B47: Commutators, derivations, elementary operators, etc. 17B40: Automorphisms, derivations, other operators 17B60: Lie (super)algebras associated with other structures (associative, Jordan, etc.) [See also 16W10, 17C40, 17C50]
Keywords: Jordan centralizer; triangular algebra; non-self-adjoint operator algebra; centralizer
Liu, Lei. On Jordan centralizers of triangular algebras. Banach J. Math. Anal. 10 (2016), no. 2, 223--234. doi:10.1215/17358787-3492545. https://projecteuclid.org/euclid.bjma/1456246277
Symposium on New Frontiers in Knowledge Compilation
This symposium is supported by VCLA and the Wolfgang Pauli Institute.
Symposium chairs:
Pierre Marquis (CRIL-CNRS/Université d'Artois, France)
Stefan Szeider (TU Wien, Austria)
Many reasoning problems in artificial intelligence and automated deduction are computationally intractable in general. Various approaches have been devised to deal with this complexity by exploiting additional conditions that are satisfied in the domain of application at hand. One example involves the case where many instances share a common part, which can be preprocessed once for many instances. Knowledge compilation is devoted to understanding the potential and the limits of preprocessing in computational models and concrete applications.
In typical reasoning scenarios (for instance, automated configuration) many instances share a common background knowledge that is not very often subject to change. This "constant" piece of information can be compiled into a format that allows for more efficient reasoning, once the "varying" part of the instances becomes available. The time needed to compile the background knowledge together with the cumulative time needed to solve a sequence of instances with the compiled background knowledge can be lower (sometimes by orders of magnitude) than the cumulative time needed to solve the sequence of instances with the non-compiled background knowledge. Consequently, sometimes a relatively expensive compilation process may be worthwhile if its results are amortized by repeated use of the compiled knowledge.
Pioneered more than two decades ago, knowledge compilation is now a very active field of research. The aim of this symposium is to bring together researchers who work on knowledge compilation from various angles, including knowledge representation, constraints, theory of algorithms, complexity, machine learning, and databases, as well as researchers from related areas. Lectures and discussions will put all these different approaches into context and will stimulate a fruitful exchange of ideas between researchers from different fields.
The symposium will feature invited talks and provide opportunities for all participants to engage in discussions on open problems and future directions.
Participation is by invitation only. Registration is free of charge and includes attendance to all symposium events and lectures, coffee breaks, and lunches.
The invited talks will be open to the public. Local people who are interested in some of the talks but do not want to participate in the entire symposium (and the social program) do not need to register.
Simone Bova (TU Wien, Austria)
Guy Van den Broeck (University of California Los Angeles, USA)
Hubie Chen (Universidad del País Vasco and Ikerbasque, Spain)
Adnan Darwiche (University of California Los Angeles, USA)
Hélène Fargier (IRIT-CNRS, Université Paul Sabatier, France)
Frédéric Koriche (CRIL-CNRS, Université d'Artois, France)
Stefan Kratsch (Universität Bonn, Germany)
João Marques-Silva (IST/INESC-ID, Portugal and University College Dublin, Ireland)
Igor Razgon (Birkbeck University of London, UK)
Dan Suciu (University of Washington, USA)
Vienna University of Technology
Zemanek seminar room (location on ground floor)
Favoritenstraße 9-11 (Google Maps)
June 4th:
10:20-11:10 Adnan Darwiche: Knowledge Compilation and Machine Learning: A New Frontier.
11:10-12:00 Frédéric Koriche: Affine Decision Trees.
12:00-13:00 Lunch break (sandwiches)
13:00-13:50 Hubie Chen: Parameter Compilation.
13:50-14:15 Ronald de Haan: Parameterized Compilability.
14:15-14:40 Umut Oztok: Exhaustive DPLL for Model Counting and Knowledge Compilation.
14:40-15:05 Friedrich Slivovsky: On Compiling CNFs into Structured Deterministic DNNFs.
15:35-16:25 Hélène Fargier: A KC Map of Valued Decision Diagrams – application to product configuration.
16:25-16:50 Alexandre Niveau: Towards a knowledge compilation map for heterogeneous representation languages.
16:50-17:10 Adnan Darwiche: Beyond NP: Keeping up with solvers that reach beyond NP!
June 5th:
9:30-10:20 João Marques-Silva: Prime Compilation of Non-Clausal Formulae.
10:20-10:45 Laurent Simon: SAT and Knowledge Compilation: a Just-in-Time Approach.
11:15-11:40 Ondrej Cepek: Complexity aspects of CNF to CNF compilation.
11:40-12:05 Oliver Kullmann: A measured approach towards "good representations".
12:05-13:30 Lunch break (in the garden if the weather is good, otherwise in Seminar room 388 on first floor)
13:30-14:20 Igor Razgon: On the relationship between Non-deterministic read-once branching programs and DNNFs.
14:20-15:10 Simone Bova: A Strongly Exponential Separation of DNNFs from CNFs.
18:30 Bus departure for restaurant from Favoritenstraße 12 (in front of Hotel Johann Strauß, directly opposite our computer science building)
19:00-22:00 Dinner at Heuriger Werner Welser (Probusgasse 12, 1190 Vienna)
22:00 Bus departs for Favoritenstraße 12, 1040 Vienna
June 6th:
9:15-10:05 Stefan Kratsch: Kernelization: Efficient Preprocessing for NP-hard Problems.
10:05-10:30 Dan Olteanu: Factorized Databases.
10:30-10:45 Short break
10:45-11:35 Guy Van den Broeck: First-Order Knowledge Compilation for Probabilistic Reasoning.
11:35-12:25 Dan Suciu: Query Compilation: the View from the Database Side.
12:30 end of symposium, joint lunch (Restaurant Ischia, Mozartplatz 1, 1040 Vienna) until ~13:30
Simone Bova: A Strongly Exponential Separation of DNNFs from CNFs. [abstract]
Decomposable Negation Normal Forms (DNNFs) are Boolean circuits in negation normal form where the subcircuits leading into each AND gate are defined on disjoint sets of variables. We prove a strongly exponential lower bound on the size of DNNFs for a class of CNF formulas built from expander graphs. As a corollary, we obtain a strongly exponential separation between DNNFs and CNF formulas in prime implicates form. This settles an open problem in the area of knowledge compilation (Darwiche and Marquis, 2002).
This is joint work with Florent Capelli (Universite Paris Diderot), Stefan Mengel (Ecole Polytechnique), and Friedrich Slivovsky (Technische Universitat Wien).
[slides]
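The decomposability property defined above is purely syntactic and easy to check. As an illustration (a sketch, not taken from the talk; the tuple encoding of circuits is an assumption made here for concreteness):

```python
# Hypothetical tuple encoding of NNF circuits (an assumption of this sketch):
# ('var', x) and ('not', x) are literals; ('and', children) / ('or', children)
# are gates over a list of child nodes.

def variables(node):
    """Set of variables mentioned below an NNF node."""
    kind = node[0]
    if kind in ('var', 'not'):
        return {node[1]}
    vs = set()
    for child in node[1]:
        vs |= variables(child)
    return vs

def is_decomposable(node):
    """True iff the circuit is a DNNF: the children of every AND gate
    are defined on pairwise disjoint sets of variables."""
    kind = node[0]
    if kind in ('var', 'not'):
        return True
    if not all(is_decomposable(child) for child in node[1]):
        return False
    if kind == 'and':
        seen = set()
        for child in node[1]:
            vs = variables(child)
            if seen & vs:
                return False  # two children share a variable
            seen |= vs
    return True

good = ('and', [('var', 'x'), ('var', 'y')])   # decomposable
bad = ('and', [('var', 'x'), ('not', 'x')])    # children share 'x'
```

Decomposability is what makes queries such as satisfiability tractable: a decomposable AND gate is satisfiable exactly when each of its children is satisfiable independently.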
Guy Van den Broeck: First-Order Knowledge Compilation for Probabilistic Reasoning. [abstract]
The popularity of knowledge compilation for probabilistic reasoning is due to the realization that two properties of logical sentences, determinism and decomposability, permit efficient (weighted) model counting. This insight has led to state-of-the-art probabilistic reasoning algorithms for graphical models, statistical relational models, and probabilistic databases, all based on knowledge compilation, to either d-DNNF, OBDD, or SDD. The statistical relational and probabilistic database formalisms are probabilistic extensions of first-order logic. To count the models of a first-order logic sentence, however, these insightful properties are missing. We even lack the formal language to describe and reason about such representations at the first-order level, in the context of knowledge compilation.
To this end, we propose group logic, which extends function-free first-order logic to give groups (i.e., sets of objects) the same status as objects. Group logic facilitates the expression and identification of sentences whose models can be counted efficiently. Indeed, it allows us to lift decomposability and determinism properties to the first-order case, and introduce a new requirement, called automorphism, that is specific to first-order sentences.
Ondrej Cepek: Complexity aspects of CNF to CNF compilation. [abstract]
Knowledge compilation usually deals with transforming some input representation of given knowledge into some other type of representation on the output. In this talk we will concentrate on compilation where both the input and output representations are of the same type, namely the CNF format. In this case the purpose of the compilation process is to add clauses to the input CNF in order to improve its inference properties. We will look at this process in more detail and study its complexity.
Hubie Chen: Parameter Compilation. [abstract]
In resolving instances of a computational problem, if multiple instances of interest share a feature in common, it may be fruitful to compile this feature into a format that allows for more efficient resolution, even if the compilation is relatively expensive. In this talk, we introduce a complexity-theoretic framework for classifying problems according to their compilability, which includes complexity classes and a notion of reduction. The basic object in our framework is that of a parameterized problem, which here is a language along with a parameterization: a map which provides, for each instance, a so-called parameter on which compilation may be performed. Our framework is positioned within the paradigm of parameterized complexity, and our notions are relatable to established concepts in the theory of parameterized complexity. Indeed, we view our framework as playing a unifying role, integrating together parameterized complexity and compilability theory. Prior to presenting the framework, we will provide some motivation by discussing our work on model checking existential positive queries (see http://arxiv.org/abs/1206.3902). The talk will be mainly based on the article available at http://arxiv.org/abs/1503.00260.
Adnan Darwiche: Knowledge Compilation and Machine Learning: A New Frontier. [abstract]
Knowledge compilation has seen much progress in the last decade, especially as work in this area has been normalized into a systematic study of tractable languages, their relative succinctness, and their efficient support for various queries. What has been particularly exciting is the impact that knowledge compilation has had on several areas, such as probabilistic reasoning and probabilistic databases. In this talk, I will discuss a new area, machine learning, which is bound to be significantly impacted by knowledge compilation. In particular, I will discuss recent work in which knowledge compilation has been used to learn probabilistic models under massive logical constraints, and over combinatorial objects, such as rankings and game traces. I will further identify and discuss three specific roles for knowledge compilation in machine learning, which arise in defining (a) more structured probability spaces, (b) more expressive queries, and (c) new types of datasets that significantly generalize the standard datasets used in the machine learning literature.
Joint work with Arthur Choi and Guy Van den Broeck.
Adnan Darwiche: Beyond NP: Keeping up with solvers that reach beyond NP! [abstract]
We will discuss in this presentation a new community website, BeyondNP.org, which is planned to launch later this summer. Beyond NP aims to disseminate and promote research on solvers that reach beyond NP, including model counters, knowledge compilers, QBF solvers and function-problem solvers (e.g. MaxSAT, MUS and MCS). Beyond NP will serve as a news and information aggregator for such solvers, including a catalog of open-source solvers, repositories of corresponding benchmarks, and news on related academic activities. The presentation aims to raise awareness about this initiative, to discuss its underlying vision and objectives, and to seek input and participation from the broader community.
Hélène Fargier: A KC Map of Valued Decision Diagrams – application to product configuration. [abstract]
Valued decision diagrams (VDDs) are data structures that represent functions mapping variable-value assignments to non-negative real numbers. Existing languages in the VDD family, including ADD, AADD, and those of the SLDD family, seem to be valuable target languages for compiling utility functions, probability distributions and, in the domain of application we are interested in, cost functions over a catalog of configurable products. This talk first presents a compilation map of such structures and shows that many tasks that are hard on valued CSPs are actually tractable on VDDs.
Indeed, languages from the VDD family (especially ADD, SLDD, AADD) benefit from polynomial-time algorithms for some tasks of interest (e.g., the optimization task) for which no polynomial-time algorithm exists when the input is the VCSP considered at start. However, the efficiency of these algorithms is directly related to the size of the compiled formulae. The target languages and the heuristics under consideration have been tested on two families of benchmarks: additive VCSPs representing car configuration problems with cost functions, and multiplicative VCSPs representing Bayesian nets. It turns out that even if the AADD language is strictly more succinct (from the theoretical side) than SLDD$_{+}$ (resp. SLDD$_{\times}$), the language SLDD$_{+}$ (resp. SLDD$_{\times}$) proves to be good enough in practice when purely additive (resp. purely multiplicative) problems are to be compiled.
This talk is based on a joint work with Pierre Marquis, Alexandre Niveau and Nicolas Schmidt, partially supported by the project BR4CP ANR-11-BS02-008 of the French National Agency for Research:
Hélène Fargier, Pierre Marquis, Nicolas Schmidt: Semiring Labelled Decision Diagrams, Revisited: Canonicity and Spatial Efficiency Issues. IJCAI 2013.
Hélène Fargier, Pierre Marquis, Alexandre Niveau, Nicolas Schmidt: A Knowledge Compilation Map for Ordered Real-Valued Decision Diagrams. AAAI 2014.
Ronald de Haan: Parameterized Compilability. [abstract]
In the framework of Knowledge Compilation (KC), knowledge bases are preprocessed (or compiled) once in order to decrease the computational effort needed for performing queries on the knowledge base. However, in many cases such compilations lead to an exponential blow-up in the size of the knowledge base. Such an incompilability result occurs for example in the case of clause entailment (CE), where the knowledge base is a propositional formula, and the queries consist of deciding whether a given clause is entailed by the formula. With the aim of relativizing such negative results, following work by Chen (IJCAI 2005), we extend the framework of KC with concepts from parameterized complexity, where structure in the input is captured by a problem parameter. In the resulting framework, we focus on fpt-size compilations, whose size is polynomial in the input size but can depend exponentially (or worse) on the problem parameter. We argue that this approach combines the power of KC and parameterized complexity. Concretely, for the problem of CE, we identify several parameters that allow the problem to be compiled in fpt-size. In addition, we provide evidence that for several other parameters, such compilations are not possible.
Joint work with: Simone Bova, Neha Lodha and Stefan Szeider.
Frédéric Koriche: Affine Decision Trees. [abstract]
Decision trees have received a great deal of interest in various areas of computer science. In this talk, we examine a family of tree-like languages which include decision trees as a special case. Notably, we investigate the class of "affine" decision trees (ADT), for which decision nodes are labeled by affine (xor) clauses, and its extension (EADT) to decomposable and-nodes. The key interest of this family is that (possibly conditioned) model counting can be solved in polynomial-time, by exploiting Gauss elimination. After presenting a knowledge compilation map for this family, we describe a top-down compiler "cnf2eadt", together with comparative experimental results on various benchmarks for #SAT problems. We conclude by mentioning two current research perspectives: probabilistic inference with weighted EADTs, and structure learning of maximum likelihood EADTs.
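Why Gauss elimination helps here: a conjunction of affine (xor) clauses over $n$ Boolean variables has either no solutions or exactly $2^{n-r}$ of them, where $r$ is the rank of the system over GF(2). A small illustrative sketch of this counting step (not the cnf2eadt compiler; the equation encoding is an assumption made for this example):

```python
def count_xor_solutions(equations, n):
    """Count assignments of n Boolean variables satisfying a conjunction of
    affine (xor) clauses. Each equation is (vars, rhs): the xor of the
    variables in `vars` (a set of indices) must equal the bit `rhs`.
    Gaussian elimination over GF(2) gives the count 2**(n - rank) when the
    system is consistent, and 0 otherwise."""
    pivots = []  # rows kept in echelon form: (pivot_bit, mask, rhs)
    rank = 0
    for vs, rhs in equations:
        mask = sum(1 << v for v in vs)  # bitmask of the variables in the row
        for pivot_bit, pmask, prhs in pivots:
            if mask & pivot_bit:
                mask ^= pmask
                rhs ^= prhs
        if mask == 0:
            if rhs == 1:
                return 0   # reduced to the contradiction 0 = 1
            continue       # redundant equation
        pivots.append((mask & -mask, mask, rhs))  # lowest set bit is the pivot
        rank += 1
    return 2 ** (n - rank)

# x0 xor x1 = 1 together with x1 xor x2 = 0 over three variables: rank 2,
# hence 2**(3 - 2) = 2 satisfying assignments.
```

The same elimination-based count is what makes (conditioned) model counting on affine decision nodes polynomial-time, in contrast to counting for general clauses.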
Stefan Kratsch: Kernelization: Efficient Preprocessing for NP-hard Problems. [abstract]
Efficient preprocessing is a widely applied opening move when faced with a combinatorially hard problem. The framework of parameterized complexity and its notion of kernelization offer a rigorous approach to understanding the capabilities of efficient preprocessing. In particular, it is possible to prove both upper and lower bounds on the output sizes that can be achieved by polynomial-time algorithms. Crucially, using the perspective of parameterized complexity, these bounds are given in relation to problem-specific parameters, whereas unless P = NP there can be no efficient algorithm that shrinks every instance of an NP-hard problem.
The talk will give an introduction to kernelization and cover several different problems like Point Line Cover, $d$-Hitting Set, and Planar Steiner Tree. We will discuss some recent examples of kernelizations that may be of particular interest to this meeting. Finally, we will briefly address the basic intuition behind lower bounds for kernelization.
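To give a concrete flavour of kernelization, here is a sketch (an illustration under stated assumptions, not the content of the talk: integer point coordinates, and only the classic high-multiplicity rule) for Point Line Cover, where one asks whether $k$ lines suffice to cover a given point set. Any line through more than $k$ points must belong to every size-$k$ solution, and applying this rule exhaustively leaves at most $k^2$ points:

```python
from itertools import combinations
from math import gcd

def line_through(p, q):
    """Canonical integer coefficients (A, B, C) of the line A*x + B*y = C
    through two distinct integer points."""
    (x1, y1), (x2, y2) = p, q
    A, B = y2 - y1, x1 - x2
    C = A * x1 + B * y1
    g = gcd(gcd(abs(A), abs(B)), abs(C))
    A, B, C = A // g, B // g, C // g
    if A < 0 or (A == 0 and B < 0):  # fix the sign so the triple is unique
        A, B, C = -A, -B, -C
    return (A, B, C)

def kernelize(points, k):
    """High-multiplicity rule for Point Line Cover: a line through more than
    k points must be in every solution with at most k lines, so delete its
    points and decrement k. What remains has at most k*k points, since each
    of the at most k solution lines covers at most k of them."""
    points = set(points)
    changed = True
    while changed and k > 0:
        changed = False
        lines = {}
        for p, q in combinations(points, 2):
            lines.setdefault(line_through(p, q), set()).update((p, q))
        for covered in lines.values():
            if len(covered) > k:
                points -= covered
                k -= 1
                changed = True
                break
    return points, k
```

After this polynomial-time shrinking step, brute force on the residual instance depends only on the parameter $k$, which is the hallmark of a kernelization.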
Oliver Kullmann: A measured approach towards "good representations". [abstract]
I want to give an overview on the usage of "hardness measures" in the theory of representations of Boolean functions via CNFs. A special focus will be on the separation of classes (given by the levels of the hardness measures), showing that increasing various hardness measures enables much shorter representations. The measures we consider are closely related to SAT solving; that is, making the implicit knowledge explicit happens with SAT solvers in mind. This makes for good connections to proof complexity, but now in a stronger setting: satisfiable clause-sets are the target, and we wish to represent the underlying Boolean function as well as possible. "As well as possible" means that the hidden(!) unsatisfiable subinstances are as easy as possible. Since we are aiming at making life easier for SAT solvers, the concrete nature of the hardness measures becomes of importance, different from general Knowledge Compilation, where one uses whatever polynomial time offers.
João Marques-Silva: Prime Compilation of Non-Clausal Formulae. [abstract]
Formula compilation by generation of prime implicates or implicants finds a wide range of applications in AI. Recent work on formula compilation by prime implicate/implicant generation often assumes a Conjunctive/Disjunctive Normal Form (CNF/DNF) representation. However, in many settings propositional formulae are naturally expressed in non-clausal form. Despite a large body of work on compilation of non-clausal formulae, in practice existing approaches can only be applied to fairly small formulae, containing at most a few hundred variables. This paper describes two novel approaches, based on propositional satisfiability (SAT) solving, for the compilation of non-clausal formulae with either prime implicants or implicates. These novel algorithms also find application when computing all prime implicates of a CNF formula. The proposed approach is shown to allow the compilation of non-clausal formulae of size significantly larger than existing approaches.
Alexandre Niveau: Towards a knowledge compilation map for heterogeneous representation languages. [abstract]
The knowledge compilation map introduced by Darwiche and Marquis takes advantage of a number of concepts (mainly queries, transformations, expressiveness, and succinctness) to compare the relative adequacy of representation languages to some AI problems. However, the framework is limited to the comparison of languages that are interpreted in a homogeneous way (formulas are interpreted as Boolean functions). This prevents one from comparing, on a formal basis, languages that are close in essence, such as OBDD, MDD, and ADD. To fill the gap, we present a generalized framework in which formally comparing heterogeneous representation languages becomes feasible. In particular, we explain how the key notions of queries and transformations, expressiveness, and succinctness can be lifted to the generalized setting.
The talk is based on the IJCAI'13 paper by Fargier, Marquis, and Niveau.
Dan Olteanu: Factorized Databases. [abstract]
I will overview recent work on compilation of join queries (First Order formulas with conjunction and existential quantification) into lossless factorized representations. The primary motivation for this compilation is to avoid redundancy in the representation of results (satisfying assignments) of queries in relational databases. The relationship between a relation encoded as a set of tuples and an equivalent factorized representation is on a par with the relationship between propositional formulas in disjunctive normal form and their equivalent nested formulas obtained by algebraic factorization.
For any fixed join query, we give asymptotically tight bounds on the size of their factorized results by exploiting the structure of the query, and we quantify the size gap between factorized and standard relational representation of query results. Factorized databases allow for constant-delay enumeration of represented tuples and provide efficient support for subsequent queries and analytics, such as linear regression.
Joint work with Jakub Zavodny.
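A toy example of the size gap that motivates factorized representations, using a two-relation join; the data and size accounting here are illustrative assumptions, not from the paper:

```python
# Join Q(a, b, c) = R(a, b) JOIN S(a, c). The flat result repeats values; a
# factorization grouped by 'a' stores each b-list and c-list only once.
R = [(1, 'b1'), (1, 'b2'), (2, 'b3')]
S = [(1, 'c1'), (1, 'c2'), (2, 'c3')]

# Flat representation: one tuple per satisfying assignment of the query.
flat = [(a, b, c) for (a, b) in R for (a2, c) in S if a == a2]

# Factorized representation: a -> (list of b's, list of c's); the tuples for
# each 'a' are the Cartesian product of the two lists, but are never stored.
factorized = {}
for a, b in R:
    factorized.setdefault(a, ([], []))[0].append(b)
for a, c in S:
    if a in factorized:
        factorized[a][1].append(c)

flat_size = 3 * len(flat)  # values stored by the flat result
fact_size = sum(1 + len(bs) + len(cs) for bs, cs in factorized.values())
```

Even on this tiny instance the flat result stores 15 values against 8 in the factorization, and the gap grows with the number of matching tuples per join key; the represented tuples can still be enumerated with constant delay by expanding the products on the fly.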
Umut Oztok: Exhaustive DPLL for Model Counting and Knowledge Compilation. [abstract]
DPLL-based methods have played a crucial role in the success of modern SAT solvers, and it is also known that running DPLL-based methods to exhaustion can yield model counters and knowledge compilers. However, a clear semantics of exhaustive DPLL and a corresponding proof of correctness have been lacking, especially in the presence of techniques such as clause learning and component caching. This seems to have hindered progress on model counting and knowledge compilation, leading to a limited number of corresponding systems, compared to the variety of DPLL-based SAT solvers. In this talk, we will present an exhaustive DPLL algorithm with a formal semantics and a corresponding proof of correctness, showing how it can be used for both model counting and knowledge compilation. The presented algorithm is based on a formal framework that abstracts primitives used in SAT solvers in a manner that makes them suitable for use in an exhaustive setting. We will also introduce an upcoming open-source package that implements this framework, which aims to provide the community with a new basis for furthering the development of model counters and knowledge compilers based on exhaustive DPLL.
Joint work with Adnan Darwiche.
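For orientation, a minimal exhaustive DPLL model counter; it deliberately omits the clause learning and component caching whose semantics the talk addresses, so it illustrates only the bare exhaustive search:

```python
def count_models(clauses, variables):
    """Count the models of a CNF by exhaustive DPLL: pick a variable, branch
    on both truth values, simplify, and add up the counts of the branches.
    Literals are signed integers (v / -v). This sketch omits the clause
    learning and component caching discussed in the talk."""
    def simplify(cls, lit):
        out = []
        for clause in cls:
            if lit in clause:
                continue                       # clause satisfied, drop it
            reduced = [l for l in clause if l != -lit]
            if not reduced:
                return None                    # empty clause: conflict
            out.append(reduced)
        return out

    def count(cls, free):
        if cls is None:
            return 0                           # conflicting branch
        if not cls:
            return 2 ** len(free)              # all remaining assignments work
        v, rest = free[0], free[1:]
        return count(simplify(cls, v), rest) + count(simplify(cls, -v), rest)

    return count(clauses, list(variables))

# (x1 or x2) has three models over {x1, x2}.
```

Unlike a SAT solver, the search does not stop at the first model; it exhausts both branches at every decision, which is exactly where a formal semantics for learning and caching becomes delicate.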
Igor Razgon: On the relationship between Non-deterministic read-once branching programs and DNNFs. [abstract]
This talk consists of two parts. In the first part I will present a result published in (Razgon, IPEC 2014) stating that for each $k$ there is an infinite class of monotone 2-CNFs of primal graph treewidth at most $k$ for which the equivalent Non-deterministic Read-Once Branching Programs (NROBPs) require space $\Omega(n^{k/c})$ for some constant $c$. Then I will show that, essentially, replacing $k$ with $\log n$ we obtain a class of monotone 2-CNFs with pseudopolynomial space complexity of the equivalent NROBPs. Using a well-known result of Darwiche about fixed-parameter tractability of DNNF space for CNFs of bounded primal graph treewidth, it is easy to show that the space complexity of DNNFs on this class of CNFs is polynomial. Thus we obtain a pseudopolynomial separation between NROBPs and DNNFs.
In the second part of the talk I will show that the above separation is essentially tight. In particular, I will present a transformation of a DNNF of size $m$ with $n$ variables into an equivalent NROBP of size $O(m^{\log n+2})$. It follows from this transformation that an exponential lower bound on the space complexity of NROBPs for any class of functions implies an exponential lower bound for DNNFs for this class of functions. Since NROBPs are much better studied than DNNFs from the lower-bounds perspective, with many exponential lower bounds known, I believe this result is significant progress in our understanding of the complexity of DNNFs. The proposed transformation is an adaptation of the approach for transformation of a decision DNNF into an FBDD presented in (Beame et al., UAI 2013).
Laurent Simon: SAT and Knowledge Compilation: a Just-in-Time Approach. [abstract]
Knowledge Compilation (KC) principles rely on an off-line phase to rewrite the knowledge base in an appropriate form, ready to be efficiently queried. In our talk, we propose an alternative approach, built on top of an efficient SAT solver. Recent progress in the practical solving of SAT problems allows us to use solvers directly to answer the set of classical queries used in most KC works. We show that this very simple approach gives very good practical results. In addition, the learning mechanism is fully exploited from query to query, allowing previous calls to be amortized by speeding up the processing of new queries.
Friedrich Slivovsky: On Compiling CNFs into Structured Deterministic DNNFs. [abstract]
We show that the traces of recently introduced dynamic programming algorithms for #SAT can be used to construct structured deterministic DNNF (decomposable negation normal form) representations of propositional formulas in CNF (conjunctive normal form). This allows us to prove new upper bounds on the complexity of compiling CNF formulas into structured deterministic DNNFs in terms of parameters such as the treewidth and the clique-width of the incidence graph.
Joint work with Simone Bova, Florent Capelli, and Stefan Mengel.
Dan Suciu: Query Compilation: the View from the Database Side. [abstract]
We study knowledge compilation for Boolean formulas that are given as groundings of First Order formulas. This problem is motivated by probabilistic databases, where each record in the database is an independent probabilistic event, and the query is given by a SQL expression or, equivalently, a First Order formula. The query's probability can be computed in linear time in the size of the compilation representation, hence the interest in studying the size of such a representation. We consider the "data complexity" setting, where the query is fixed, and the input to the problem consists only of the database instance. We consider several compilation targets, of increasing expressive power: OBDDs, FBDDs, and decision-DNNFs (a subclass of d-DNNFs). For the case of OBDDs we establish a dichotomy theorem for queries in the restricted languages $FO(\exists, \wedge, \vee)$ and $FO(\forall, \wedge, \vee)$: for each such query the OBDD is either linear in the size of the database, or grows exponentially, and the complexity can be determined through a simple analysis of the query expression. For the other targets we describe a class of queries for which (a) the decision-DNNF is exponentially large in the size of the database, and (b) the probability of the query can be computed in polynomial time in the size of the database. This suggests that the compilation target decision-DNNF is too weak to capture all tractable cases of probabilistic inference. Our lower bound for decision-DNNFs relies on a translation into FBDDs, which is of independent interest.
Joint work with Paul Beame, Abhay Jha, Jerry Li, and Sudeepa Roy.
Gilles Audemard
Simone Bova
Guy van den Broeck
Florent Capelli
Ondrej Cepek
Hubie Chen
Jessica Davies
Adnan Darwiche
Eduard Eiben
Hélène Fargier
Johannes Fichte
Ronald de Haan
Frédéric Koriche
Stefan Kratsch
Oliver Kullmann
Jean-Marie Lagniez
Neha Lodha
Joao Marques-Silva
Pierre Marquis
Stefan Mengel
Alexandre Niveau
Dan Olteanu
Sebastian Ordyniak
Umut Oztok
Igor Razgon
Marco Schaerf
Laurent Simon
Friedrich Slivovsky
Zeynep Saribatur
Stefan Szeider
Here are some suggestions for hotels close to the symposium venue:
Clima Cityhotel (www.climacity-hotel.com)
Hotel Carlton Opera (www.carlton.at)
Hotel Erzherzog Rainer (www.schick-hotels.com)
Hotel Europa (www.austria-trend.at)
Hotel Astoria (www.austria-trend.at)
Wombats City Hostel (http://www.wombats-hostels.com)
For further information, please have a look at our visitors' guide.
On the small-scale structure of turbulence and its impact on the pressure field
Dimitar G. Vlaykov, Michael Wilczek
Journal: Journal of Fluid Mechanics / Volume 861 / 25 February 2019
Print publication: 25 February 2019
Understanding the small-scale structure of incompressible turbulence and its implications for the non-local pressure field is one of the fundamental challenges in fluid mechanics. Intense velocity gradient structures tend to cluster on a range of scales which affects the pressure through a Poisson equation. Here we present a quantitative investigation of the spatial distribution of these structures conditional on their intensity for Taylor-based Reynolds numbers in the range [160, 380]. We find that the correlation length of the second invariant of the velocity gradient is proportional to the Kolmogorov scale. It is also a good indicator for the spatial localization of intense enstrophy and strain-dominated regions, as well as the separation between them. We describe and quantify the differences in the two-point statistics of these regions and the impact they have on the non-locality of the pressure field as a function of the intensity of the regions. Specifically, across the examined range of Reynolds numbers, the pressure in strong rotation-dominated regions is governed by a dissipation-scale neighbourhood. In strong strain-dominated regions, on the other hand, it is determined primarily by a larger neighbourhood reaching inertial scales.
Acceleration statistics of tracer particles in filtered turbulent fields
Cristian C. Lalescu, Michael Wilczek
Journal: Journal of Fluid Mechanics / Volume 847 / 25 July 2018
Published online by Cambridge University Press: 29 May 2018, R2
Print publication: 25 July 2018
We present results from direct numerical simulations of tracer particles advected in filtered velocity fields to quantify the impact of the scales of turbulence on Lagrangian acceleration statistics. Systematically removing spatial scales reduces the frequency of extreme acceleration events, consistent with the notion that they are rooted in the small-scale structure of turbulence. We also find that acceleration variance and flatness as a function of filter scale closely resemble experimental results of neutrally buoyant, finite-sized particles, corroborating the picture that particle size determines the scale on which turbulent fluctuations are sampled.
New insights into the fine-scale structure of turbulence
Michael Wilczek
Journal: Journal of Fluid Mechanics / Volume 784 / 10 December 2015
Published online by Cambridge University Press: 11 November 2015, pp. 1-4
Print publication: 10 December 2015
In a recent study, Lawson & Dawson (J. Fluid Mech., vol. 780, 2015, pp. 60–98) present experimental results on the fine-scale structure of turbulence, which are obtained with a novel variant of particle image velocimetry, to elucidate the relation between the small-scale structure, dynamics and statistics of turbulence. The results are carefully validated against direct numerical simulation data. Their extensive study focuses on the mean structure of the velocity gradient and the pressure Hessian fields for various small-scale flow topologies. It thereby reveals the dynamical impact of turbulent strain and vorticity structures on the velocity gradient statistics through non-local interactions, and points out ways to improve low-dimensional closure models for the dynamics of small-scale turbulence.
Turbulent Rayleigh–Bénard convection described by projected dynamics in phase space
Johannes Lülff, Michael Wilczek, Richard J. A. M. Stevens, Rudolf Friedrich, Detlef Lohse
Journal: Journal of Fluid Mechanics / Volume 781 / 25 October 2015
Print publication: 25 October 2015
Rayleigh–Bénard convection, i.e. the flow of a fluid between two parallel plates that is driven by a temperature gradient, is an idealised set-up to study thermal convection. Of special interest are the statistics of the turbulent temperature field, which we are investigating and comparing for three different geometries, namely convection with periodic horizontal boundary conditions in three and two dimensions as well as convection in a cylindrical vessel, in order to determine the similarities and differences. To this end, we derive an exact evolution equation for the temperature probability density function. Unclosed terms are expressed as conditional averages of velocities and heat diffusion, which are estimated from direct numerical simulations. This framework lets us identify the average behaviour of a fluid particle by revealing the mean evolution of a fluid with different temperatures in different parts of the convection cell. We connect the statistics to the dynamics of Rayleigh–Bénard convection, giving deeper insights into the temperature statistics and transport mechanisms. We find that the average behaviour is described by closed cycles in phase space that reconstruct the typical Rayleigh–Bénard cycle of fluid heating up at the bottom, rising up to the top plate, cooling down and falling again. The detailed behaviour shows subtle differences between the three cases.
Spatio-temporal spectra in the logarithmic layer of wall turbulence: large-eddy simulations and simple models
Michael Wilczek, Richard J. A. M. Stevens, Charles Meneveau
Journal: Journal of Fluid Mechanics / Volume 769 / 25 April 2015
Published online by Cambridge University Press: 13 March 2015, R1
Print publication: 25 April 2015
Motivated by the need to characterize the spatio-temporal structure of turbulence in wall-bounded flows, we study wavenumber–frequency spectra of the streamwise velocity component based on large-eddy simulation (LES) data. The LES data are used to measure spectra as a function of the two wall-parallel wavenumbers and the frequency in the equilibrium (logarithmic) layer. We then reformulate one of the simplest models that is able to reproduce the observations: the random sweeping model with a Gaussian large-scale fluctuating velocity and with additional mean flow. Comparison with LES data shows that the model captures the observed temporal decorrelation, which is related to the Doppler broadening of frequencies. We furthermore introduce a parameterization for the entire wavenumber–frequency spectrum $E_{11}(k_{1},k_{2},\omega;z)$, where $k_{1}$, $k_{2}$ are the streamwise and spanwise wavenumbers, $\omega$ is the frequency and $z$ is the distance to the wall. The results are found to be in good agreement with LES data.
Large-eddy simulation study of the logarithmic law for second- and higher-order moments in turbulent wall-bounded flow
Richard J. A. M. Stevens, Michael Wilczek, Charles Meneveau
The logarithmic law for the mean velocity in turbulent boundary layers has long provided a valuable and robust reference for comparison with theories, models and large-eddy simulations (LES) of wall-bounded turbulence. More recently, analysis of high-Reynolds-number experimental boundary-layer data has shown that also the variance and higher-order moments of the streamwise velocity fluctuations $u^{\prime +}$ display logarithmic laws. Such experimental observations motivate the question whether LES can accurately reproduce the variance and the higher-order moments, in particular their logarithmic dependency on distance to the wall. In this study we perform LES of very high-Reynolds-number wall-modelled channel flow and focus on profiles of variance and higher-order moments of the streamwise velocity fluctuations. In agreement with the experimental data, we observe an approximately logarithmic law for the variance in the LES, with a 'Townsend–Perry' constant of $A_1\approx 1.25$. The LES also yields approximate logarithmic laws for the higher-order moments of the streamwise velocity. Good agreement is found between $A_p$, the generalized 'Townsend–Perry' constants for moments of order $2p$, from experiments and simulations. Both are indicative of sub-Gaussian behaviour of the streamwise velocity fluctuations. The near-wall behaviour of the variance, the ranges of validity of the logarithmic law and in particular possible dependencies on characteristic length scales such as the roughness length $z_0$, the LES grid scale $\Delta$, and subgrid scale mixing length $C_s\Delta$ are examined. We also present LES results on moments of spanwise and wall-normal fluctuations of velocity.
Pressure Hessian and viscous contributions to velocity gradient statistics based on Gaussian random fields
Michael Wilczek, Charles Meneveau
Understanding the non-local pressure contributions and viscous effects on the small-scale statistics remains one of the central challenges in the study of homogeneous isotropic turbulence. Here we address this issue by studying the impact of the pressure Hessian as well as viscous diffusion on the statistics of the velocity gradient tensor in the framework of an exact statistical evolution equation. This evolution equation shares similarities with earlier phenomenological models for the Lagrangian velocity gradient tensor evolution, yet constitutes the starting point for a systematic study of the unclosed pressure Hessian and viscous diffusion terms. Based on the assumption of incompressible Gaussian velocity fields, closed expressions are obtained as the results of an evaluation of the characteristic functionals. The benefits and shortcomings of this Gaussian closure are discussed, and a generalization is proposed based on results from direct numerical simulations. This enhanced Gaussian closure yields, for example, insights on how the pressure Hessian prevents the finite-time singularity induced by the local self-amplification and how its interaction with viscous effects leads to the characteristic strain skewness phenomenon.
On the velocity distribution in homogeneous isotropic turbulence: correlations and deviations from Gaussianity
Michael Wilczek, Anton Daitche, Rudolf Friedrich
Journal: Journal of Fluid Mechanics / Volume 676 / 10 June 2011
Print publication: 10 June 2011
We investigate the single-point probability density function of the velocity in three-dimensional stationary and decaying homogeneous isotropic turbulence. To this end, we apply the statistical framework of the Lundgren–Monin–Novikov hierarchy combined with conditional averaging, identifying the quantities that determine the shape of the probability density function. In this framework, the conditional averages of the rate of energy dissipation, the velocity diffusion and the pressure gradient with respect to velocity play a key role. Direct numerical simulations of the Navier–Stokes equation are used to complement the theoretical results and assess deviations from Gaussianity.
How far can you see in solar plasma at just under 1 solar radius?
I was trying to compose an answer to this question here, about touching the surface of a star, and I was going to mention that the popular depiction of the sun's surface as 'basically lava' (as depicted in Sunshine, for example) is wrong.
The usual definition of a star's radius is where the optical depth is 2/3. At this radius, the density of the sun is only a few ten-thousandths of that of air at sea level and can hardly be described as solid or liquid.
I wanted to get a sense of what this diffuse plasma would look like if you were in it and somehow impervious to the heat and radiation. Would it appear as a thin, glowing fog, through which you could still discern distant objects? Or would it appear as opaque as, say, a bonfire?
Rephrasing in a little more technical way, What is the extinction coefficient of solar plasma at ~1 Solar radius in the visible spectrum? What is the path length at which all light is effectively attenuated?
(Since it varies by wavelength, I mean specifically the region of the visible spectrum where the plasma is at its most transparent).
the-sun plasma-physics opacity
Ingolifs
$\begingroup$ It is of order a few 100 km. Do you need it more accurately than that? Then specify the wavelength. $\endgroup$ – ProfRob Oct 31 '18 at 20:21
$\begingroup$ So I'm looking out my window at the mountains, and can tell they're far away by how blue-tinted they appear. What you're saying is that the view would be just as good, if not better if the earth (sans atmosphere) were within the sun's photosphere? I mean briefly, of course. If you were able to provide calculations for 400 nm (which I suspect of being the wavelength of least attenuation) and turn that into an answer, I'll accept it. $\endgroup$ – Ingolifs Nov 1 '18 at 0:13
If you have a look at the top-left panel of Fig.11 in these lecture notes by Rob Rutten, you will see that the continuum opacity at optical wavelengths at the photosphere is about $10^{-6.7}$ cm$^{-1}$.
The inverse of this is the distance over which the optical depth reaches unity, about 50 km. You can see stuff through about 2-3 optical depths, so your "horizon", looking horizontally in the solar atmosphere, is about 100 km.
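For a rough numerical check (a Python sketch; the opacity is the $10^{-6.7}$ cm$^{-1}$ value read off Rutten's figure, and ~2.5 optical depths is taken as the visibility limit):

```python
# Horizontal visibility in the solar photosphere, from the continuum opacity.
alpha = 10 ** -6.7                # opacity in cm^-1, read off Rutten's figure

mfp_km = (1 / alpha) / 1e5        # distance for optical depth 1, in km
horizon_km = (2.5 / alpha) / 1e5  # ~2.5 optical depths: the visual "horizon"

print(mfp_km, horizon_km)         # roughly 50 km and 125 km
```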
However, the same plot shows that, because of the temperature and density variations with height, the photosphere becomes an order of magnitude more or less opaque at only 50km below or above the "photosphere" respectively.
What this means is that you can see about 100 km horizontally, but less than this (guesstimate 30km) looking downwards and more than this (guesstimate infinity - that is how the photosphere is defined!) looking upwards.
These numbers would all be revised downwards at the wavelengths of prominent absorption lines.
If you could look all around you, you would see a very asymmetric brightness/colour distribution. Overhead would be dimmer and cooler (redder); underneath, much brighter and hotter (more or less the same colour we see the photosphere).
ProfRob
I suspect part of the problem here is that cosmological optical depth is defined vertically (so far as I can discern from the wikipedia article), while you seem to be asking for the extinction coefficient for a horizontal path at the specified stellar radius. So you'll need to find out what the $\alpha(z)$ is at that radius, play some games with the integrated radiance due to the plasma in the horizontal line of sight to estimate the noise factor, and compare that with the putative radiance from the distant object you wish to see.
Carl Witthoft
Journal of the Korean Mathematical Society (대한수학회지)
Korean Mathematical Society (대한수학회)
This journal endeavors to publish significant research of broad interests in pure and applied mathematics. One volume is published each year, and each volume consists of six issues (January, March, May, July, September, November).
http://jkms.kms.or.kr/submission (indexed in KSCI, KCI, SCOPUS, SCIE)
EVOLUTION AND MONOTONICITY FOR A CLASS OF QUANTITIES ALONG THE RICCI-BOURGUIGNON FLOW
Daneshvar, Farzad;Razavi, Asadollah 1441
https://doi.org/10.4134/JKMS.j180525
In this paper we consider the monotonicity of the lowest constant ${\lambda}_a^b(g)$ under the Ricci-Bourguignon flow and the normalized Ricci-Bourguignon flow such that the equation $$-{\Delta}u+au\;{\log}\;u+bRu={\lambda}_a^b(g)u$$ with ${\int}_{M}u^2dV=1$, has positive solutions, where a and b are two real constants. We also construct various monotonic quantities under the Ricci-Bourguignon flow and the normalized Ricci-Bourguignon flow. Moreover, we prove that a compact steady breather which evolves under the Ricci-Bourguignon flow should be Ricci-flat.
NUMBER OF WEAK GALOIS-WEIERSTRASS POINTS WITH WEIERSTRASS SEMIGROUPS GENERATED BY TWO ELEMENTS
Komeda, Jiryo;Takahashi, Takeshi 1463
Let C be a nonsingular projective curve of genus ${\geq}2$ over an algebraically closed field of characteristic 0. For a point P in C, the Weierstrass semigroup H(P) is defined as the set of non-negative integers n for which there exists a rational function f on C such that the order of the pole of f at P is equal to n, and f is regular away from P. A point P in C is referred to as a weak Galois-Weierstrass point if P is a Weierstrass point and there exists a Galois morphism ${\varphi}:C{\rightarrow}{\mathbb{P}}^1$ such that P is a total ramification point of ${\varphi}$. In this paper, we investigate the number of weak Galois-Weierstrass points of which the Weierstrass semigroups are generated by two positive integers.
RIGIDITY CHARACTERIZATION OF COMPACT RICCI SOLITONS
Li, Fengjiang;Zhou, Jian 1475
In this paper, we firstly define the Ricci mean value along the gradient vector field of the Ricci potential function and show that it is non-negative on a compact Ricci soliton. Furthermore a Ricci soliton is Einstein if and only if its Ricci mean value is vanishing. Finally, we obtain a compact Ricci soliton $(M^n,g)(n{\geq}3)$ is Einstein if its Weyl curvature tensor and the Kulkarni-Nomizu product of Ricci curvature are orthogonal.
WEAK NORMAL PROPERTIES OF PARTIAL ISOMETRIES
Liu, Ting;Men, Yanying;Zhu, Sen 1489
This paper describes when a partial isometry satisfies several weak normal properties. Topics treated include quasi-normality, subnormality, hyponormality, p-hyponormality (p > 0), w-hyponormality, paranormality, normaloidity, spectraloidity, the von Neumann property and Weyl's theorem.
STRONG PRESERVERS OF SYMMETRIC ARCTIC RANK OF NONNEGATIVE REAL MATRICES
Beasley, LeRoy B.;Encinas, Luis Hernandez;Song, Seok-Zun 1503
A rank 1 matrix has a factorization as $uv^t$ for vectors u and v of some orders. The arctic rank of a rank 1 matrix is half the number of nonzero entries in u and v. A matrix of rank k can be expressed as the sum of k rank 1 matrices, a rank 1 decomposition. The arctic rank of a matrix A of rank k is the minimum of the sums of arctic ranks of the rank 1 matrices over all rank 1 decompositions of A. In this paper we obtain characterizations of the linear operators that strongly preserve the symmetric arctic ranks of symmetric matrices over the nonnegative reals.
EMBEDDING DISTANCE GRAPHS IN FINITE FIELD VECTOR SPACES
Iosevich, Alex;Parshall, Hans 1515
We show that large subsets of vector spaces over finite fields determine certain point configurations with prescribed distance structure. More specifically, we consider the complete graph with vertices as the points of $A{\subseteq}F^d_q$ and edges assigned the algebraic distance between pairs of vertices. We prove nontrivial results on locating specified subgraphs of maximum vertex degree at most t in dimensions $d{\geq}2t$.
EXISTENCE OF WEAK SOLUTIONS TO A CLASS OF SCHRÖDINGER TYPE EQUATIONS INVOLVING THE FRACTIONAL p-LAPLACIAN IN ℝN
Kim, Jae-Myoung;Kim, Yun-Ho;Lee, Jongrak 1529
We are concerned with the following elliptic equations: $$(-{\Delta})^s_pu+V (x){\mid}u{\mid}^{p-2}u={\lambda}g(x,u){\text{ in }}{\mathbb{R}}^N$$, where $(-{\Delta})_p^s$ is the fractional p-Laplacian operator with 0 < s < 1 < p < $+{\infty}$, sp < N, the potential function $V:{\mathbb{R}}^N{\rightarrow}(0,{\infty})$ is continuous, and $g:{\mathbb{R}}^N{\times}{\mathbb{R}}{\rightarrow}{\mathbb{R}}$ satisfies a Carathéodory condition. We show the existence of at least one weak solution for the problem above without the Ambrosetti and Rabinowitz condition. Moreover, we give a positive interval of the parameter ${\lambda}$ for which the problem admits at least one nontrivial weak solution when the nonlinearity g has the subcritical growth condition.
ON TOPOLOGICAL ENTROPY AND TOPOLOGICAL PRESSURE OF NON-AUTONOMOUS ITERATED FUNCTION SYSTEMS
Ghane, Fatemeh H.;Sarkooh, Javad Nazarian 1561
In this paper we introduce the notions of topological entropy and topological pressure for non-autonomous iterated function systems (or NAIFSs for short) on countably infinite alphabets. NAIFSs differ from the usual (autonomous) iterated function systems: they are given [32] by a sequence of collections of continuous maps on a compact topological space, where the maps are allowed to vary between iterations. Several basic properties of topological pressure and topological entropy of NAIFSs are provided. In particular, we generalize Bowen's classical result to NAIFSs, which ensures that the topological entropy is concentrated on the set of nonwandering points. Then, we define the notion of the specification property, under which NAIFSs have positive topological entropy and all points are entropy points. In particular, each NAIFS with the specification property is topologically chaotic. Additionally, the ${\ast}$-expansive property for NAIFSs is introduced. We will prove that the topological pressure of any continuous potential can be computed as a limit at a definite size scale whenever the NAIFS satisfies the ${\ast}$-expansive property. Finally, we study the NAIFSs induced by expanding maps. We prove that these NAIFSs have the specification and ${\ast}$-expansive properties.
COUNTING SUBRINGS OF THE RING ℤm × ℤn
Toth, Laszlo 1599
Let $m,n{\in}{\mathbb{N}}$. We represent the additive subgroups of the ring ${\mathbb{Z}}_m{\times}{\mathbb{Z}}_n$, which are also (unital) subrings, and deduce explicit formulas for $N^{(s)}(m,n)$ and $N^{(us)}(m,n)$, denoting the number of subrings of the ring ${\mathbb{Z}}_m{\times}{\mathbb{Z}}_n$ and its unital subrings, respectively. We show that the functions $(m,n){\mapsto}N^{(s)}(m,n)$ and $(m,n){\mapsto}N^{(us)}(m,n)$ are multiplicative, viewed as functions of two variables, and their Dirichlet series can be expressed in terms of the Riemann zeta function. We also establish an asymptotic formula for the sum $\sum_{m,n{\leq}x}N^{(s)}(m,n)$, the error term of which is closely related to the Dirichlet divisor problem.
STRONG SHELLABILITY OF SIMPLICIAL COMPLEXES
Guo, Jin;Shen, Yi-Huang;Wu, Tongsuo 1613
Imposing a strong condition on the linear order of shellable complexes, we introduce strong shellability. Basic properties, including the existence of dimension-decreasing strong shelling orders, are developed with respect to nonpure strongly shellable complexes. Meanwhile, pure strongly shellable complexes can be characterized by the corresponding codimension one graphs. In addition, we show that the facet ideals of pure strongly shellable complexes have linear quotients.
THE QUASI-NEUTRAL LIMIT OF THE COMPRESSIBLE MAGNETOHYDRODYNAMIC FLOWS FOR IONIC DYNAMICS
Kwon, Young-Sam 1641
In this paper we study the quasi-neutral limit of the compressible magnetohydrodynamic flows in the periodic domain ${\mathbb{T}}^3$ with well-prepared initial data. We prove that the weak solution of the compressible magnetohydrodynamic flows governed by the Poisson equation converges to the strong solution of the compressible magnetohydrodynamic flows as long as the latter exists.
α-TYPE HOCHSCHILD COHOMOLOGY OF HOM-ASSOCIATIVE ALGEBRAS AND BIALGEBRAS
Hurle, Benedikt;Makhlouf, Abdenacer 1655
In this paper we define a new type of cohomology for multiplicative Hom-associative algebras, which generalizes Hom-type Hochschild cohomology and fits with deformations of Hom-associative algebras, including deformation of the structure map ${\alpha}$. Moreover, we provide various observations and, similarly, a new type of cohomology for Hom-bialgebras, extending the Gerstenhaber-Schack cohomology for Hom-bialgebras and fitting with formal deformations, including deformations of the structure map.
SEMISTAR G-GCD DOMAIN
Gmiza, Wafa;Hizem, Sana 1689
Let ${\star}$ be a semistar operation on the integral domain D. In this paper, we prove that D is a $G-{\tilde{\star}}-GCD$ domain if and only if D[X] is a $G-{\star}_1-GCD$ domain if and only if the Nagata ring of D with respect to the semistar operation ${\tilde{\star}}$, $Na(D,{\star}_f)$ is a G-GCD domain if and only if $Na(D,{\star}_f)$ is a GCD domain, where ${\star}_1$ is the semistar operation on D[X] introduced by G. Picozza [12].
Philosophical Studies
pp 1–8 | Cite as
No escape from Allais: reply to Buchak
Johanna Thoma
Jonathan Weisberg
In Risk and Rationality, Buchak (Risk and rationality, Oxford University Press, Oxford, 2013) advertises REU theory as able to recover the modal preferences in the Allais paradox. In our Thoma and Weisberg (Philos Stud 174(9):2369–2384, 2017. https://doi.org/10.1007/s11098-017-0916-3) however, we pointed out that REU theory only applies in the "grand world" setting, where it actually struggles with the modal Allais preferences. Buchak (Philos Stud 174(9):2397–2414, 2017. https://doi.org/10.1007/s11098-017-0907-4) offers two replies. Here we enumerate a variety of technical and philosophical problems with each.
Decision theory Risk Allais paradox Expected utility Risk-weighted expected utility
In Allais' (1953) paradox, we face two choices. The first is between options 1A and 1B.
1A. \((\$1 \text{ million}, 1)\)
1B. \((\$0,\, .01;\; \$1 \text{ million},\,.89;\; \$5 \text{ million},\, .1)\)
The second choice is between options 2A and 2B.
2A. \((\$0,\, .9 ;\; \$5 \text{ million},\, .1)\)
2B. \(( \$0,\, .89;\; \$1 \text{ million},\, .11)\)
Most people prefer 1A over 1B, and 2A over 2B. The prospect of a guaranteed $1 million is tempting enough to win out against the risky shot at $5 million.1 But people are more venturesome when there is no safe option, as in the choice between 2A and 2B.
Expected utility theory famously forbids this sort of thing. If you're willing to take on an additional .01 risk of empty-handedness in exchange for a .10 chance at $5 million when there's no safe option, then you should be willing to make the same tradeoff even when there is a safe option.
REU theory is more permissive. Instead of maximizing expected utility, REU theory maximizes risk-weighted expected utility:
$$\begin{aligned} {\textit{REU}}(G) = u_1 + \sum _{i = 1}^{n-1} r(p(u \ge u_{i+1}))(u_{i+1} - u_i). \end{aligned}$$
Here \(G\) is a gamble with outcomes of utility \(u_i\), ordered from least desirable (\(u_1\)) to most desirable (\(u_n\)). The value \(p(u \ge u_i)\) is the probability of obtaining an outcome with utility at least \(u_i\). And \(r\) is a novel component of REU theory, the risk function: a non-decreasing map of probabilities into \([0,1]\), with \(r(0)=0\) and \(r(1)=1\).
The function \(r\) captures the agent's attitudes toward risk. If \(r(p) < p\) for \(p \in (0,1)\), then the agent is generally risk averse. Buchak's running example of such an \(r\) function is \(r(p) = p^2\). Given this \(r\) function and the utility values
$$\begin{aligned} \begin{aligned} u(\$0)&= 0,\\ u(\$1 \text{ million })&= 1,\\ u(\$5 \text{ million })&= 2, \end{aligned} \end{aligned}$$
we find that \(REU(\text{1A}) > REU(\text{1B})\) and \(REU(\text{2A})>REU(\text{2B})\). So REU theory seems well-placed to capture most people's choices in the Allais problem.
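These preferences can be checked numerically. Below is a minimal Python sketch of the REU formula, with gambles encoded as (utility, probability) pairs; the risk function \(r(p) = p^2\) and the utility assignments are Buchak's running example as given above:

```python
def reu(gamble, r):
    """Risk-weighted expected utility of a finite gamble.

    gamble: list of (utility, probability) pairs with probabilities summing to 1.
    r: risk function mapping [0, 1] into [0, 1].
    """
    outcomes = sorted(gamble)              # order outcomes u_1 <= ... <= u_n
    utils = [u for u, _ in outcomes]
    probs = [p for _, p in outcomes]
    total = utils[0]
    for i in range(1, len(outcomes)):
        p_at_least = sum(probs[i:])        # probability of utility >= utils[i]
        total += r(p_at_least) * (utils[i] - utils[i - 1])
    return total

r = lambda p: p ** 2                       # risk-averse risk function

# Allais gambles, with u($0) = 0, u($1M) = 1, u($5M) = 2:
g1a = [(1, 1.0)]
g1b = [(0, 0.01), (1, 0.89), (2, 0.10)]
g2a = [(0, 0.90), (2, 0.10)]
g2b = [(0, 0.89), (1, 0.11)]

print(reu(g1a, r), reu(g1b, r))  # 1.0 vs ~0.9901: 1A preferred
print(reu(g2a, r), reu(g2b, r))  # 0.02 vs ~0.0121: 2A preferred
```

So REU theory ranks 1A over 1B and 2A over 2B, matching the modal Allais preferences.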
2 Grand world models
The problem is that no choice is ever really safe or certain. Even if you take the guaranteed $1 million, your other plans and projects might go well or poorly, superbly or disastrously. Grand world decision problems model this background risk explicitly. Buchak, however, treats the Allais choices as simple problems, in which monetary gains and losses are final outcomes. In any realistic scenario, where there is background risk, this amounts to taking a small world perspective. But because risk-weighted expected utility is non-additive, it has to be applied to grand world problems, as Buchak acknowledges.
We showed that REU theory no longer accommodates the Allais preferences once background risk is modeled explicitly (Thoma and Weisberg 2017). We constructed a grand world model by replacing each small world outcome with a normal distribution, where the mean matches the small world utility and the height corresponds to its probability. For example, the small world version of gamble 1B is depicted in Fig. 1, its grand world counterpart in Fig. 2. On this model, REU theory fails to predict the usual Allais preferences, provided the normal distributions used are minimally spread out.
Small world 1B
Grand world 1B
Importantly, if we squeeze the normal distributions tight enough, the grand world problem collapses back into the small world problem. Then REU theory can recover the Allais preferences; Buchak's original, small world model would be adequate then.
But the normal distributions have to be squeezed absurdly tight to get this result. A small standard deviation like \(\sigma = .1\) lets REU theory recover the Allais preferences.2 But it also has outlandish consequences. For example, it entails virtual certainty that a windfall of $1 million will lead to a better life than the one you'd expect to lead without it. The probability of a life of utility at most 0, despite winning $1 million, would have to be smaller than \(1 \times 10^{-23}\).3 Yet the chances are massively greater than that of suffering life-ruining tragedy: illness, financial ruin, etc.
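The \(1 \times 10^{-23}\) bound is simply the cumulative density at zero of \({\mathscr {N}}(1,.1)\) (see footnote 3). It can be reproduced with the Python standard library, using \(\Phi(x) = \tfrac{1}{2}\,\mathrm{erfc}(-x/\sqrt{2})\) for the normal CDF:

```python
import math

def normal_cdf(x, mu, sigma):
    """CDF of the normal distribution N(mu, sigma), via the
    complementary error function."""
    return 0.5 * math.erfc(-(x - mu) / (sigma * math.sqrt(2)))

# Probability of a life of utility at most 0 despite the $1 million windfall,
# on the grand world model with sigma = .1:
print(normal_cdf(0, 1, 0.1))  # ~7.6e-24, i.e. below 1e-23
```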
This issue will prove pivotal for Buchak's first reply.
3 Buchak's first reply
Buchak's first response is to tweak our grand world model in two ways. First, the utility associated with winning $5 million is shifted down from 2 to 1.3. Second, all normal distributions are skewed with a shape parameter of 5: positive 5 for the $0 outcome, negative 5 for the other two. So, for example, the Allais gamble of Fig. 2 becomes that in Fig. 3.
Skewed grand world 1B
We'll focus on the second tweak here, the introduction of skew. It rests on a technical error, as we'll show momentarily. But it also wants for motivation.
3.1 Motivational problems
Why should the grand world model be skewed? And why in this particular way? Buchak writes:
receiving $1M makes the worst possibilities much less likely. Receiving $1M provides security in the sense of making the probability associated with lower utility values smaller and smaller. The utility of $1M is concentrated around a high mean with a long tail to the left: things likely will be great, though there is some small and diminishing chance they will be fine but not great. Similarly, the utility of $0 is concentrated around a low mean with a long tail to the right: things likely will be fine but not great, though there is some small and diminishing chance they will be great. In other words, $1M (and $5M) is a gamble with negative skew, and $0 is a gamble with positive skew (Buchak 2017, 2401)
But this passage never actually identifies any asymmetry in the phenomena we're modeling. True enough, "receiving $1M makes the worst possibilities much less likely." But it also makes the best possibilities much more likely. Likewise, "[r]eceiving $1M provides security in the sense of making the probability associated with lower utility values smaller and smaller." But $1 million also makes the probability associated with higher utility values larger. And so on.
The tendencies of large winnings to control bad outcomes and promote good ones was already captured in our original model. A normal distribution centered on utility 1 already admits "some small and diminishing chance that [things] will be fine but not great." It just also admits some small chance that things will be much better than great, since it's symmetric around utility 1. To motivate the skewed model, we would need some reason to think this symmetry should not hold. None has been given.
3.2 Technical difficulties
Setting motivation to one side, there is a technical fault in Buchak's presentation of the skewed model.
Introducing skew is supposed to make room for a reasonably large standard deviation, while still recovering the modal Allais preferences. Buchak advertises a standard deviation of \(\sigma = .17\) for the skewed model, but the true value is actually \(.106\)—essentially the same as the \(.1\) value Buchak concedes is implausibly small, and seeks to avoid by introducing skew.4
The effect of skew on standard deviation
Where does the \(.17\) figure come from then? It's the scale parameter of the skew normal distribution, often denoted \(\omega\). For an ordinary normal distribution, \(\omega\) famously coincides with the standard deviation \(\sigma\), and so we write \(\sigma\) for both.5 But when we skew a normal distribution, we tighten it; we shrink the standard deviation. Figure 4 illustrates: it shows two normal distributions with the same scale parameter, \(.17\). But the skewed one in yellow is much narrower.6
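Concretely, for a skew normal distribution with shape \(\alpha\) and scale \(\omega\), the standard deviation is \(\sigma = \omega\sqrt{1 - 2\delta^2/\pi}\), where \(\delta = \alpha/\sqrt{1+\alpha^2}\). A quick Python check with the parameters at issue (\(\alpha = 5\), \(\omega = .17\)):

```python
import math

def skewnorm_std(shape, scale):
    """Standard deviation of a skew normal distribution with the given
    shape (skew) and scale parameters."""
    delta = shape / math.sqrt(1 + shape ** 2)
    return scale * math.sqrt(1 - 2 * delta ** 2 / math.pi)

print(skewnorm_std(5, 0.17))  # ~0.106: skewing tightens the distribution
print(skewnorm_std(0, 0.17))  # 0.17: with no skew, scale = standard deviation
```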
3.3 Implications
Of course, what really matters isn't the value of the standard deviation itself, but the probabilities that result from whatever parameters we choose. And Buchak argues that her model avoids the implausible probabilities we cited in the introduction. How can this be?
Buchak says the skewed model has "more overlap in the utility that $0 and $1M might deliver":
there is a 0.003 probability that the $0 gamble will deliver more than 0.5 utils, and a 0.003 probability that the $1M gamble will deliver less than 0.5 utils. (Buchak 2017, 2402)
But this overlap is not the problematic quantity we raised in our critique and rehearsed in Sect. 2. The problem was, rather, that a small standard deviation like \(.1\) requires you to think it less than \(1 \times 10^{-23}\) likely that, despite a $1 million windfall, you will end up with a life no better than what you already expect.
On Buchak's model the corresponding probability is now \(3.4 \times 10^{-7}\), which is much better, but still absurdly small.7 For example, the probability that this paper's second author (Jonathan Weisberg) will die prematurely in the coming year is about \(4,000\) times greater than that.
Worse yet, the improvement here comes at a steep price: worsened probabilities on the other side. For example, the probability that the life you'll lead with $1 million will end up at least as good as the one you'd expect with $5 million is approximately \(5 \times 10^{-9}\) on Buchak's model.8 Without the skew Buchak introduces, the equivalent figure would have been a much more reasonable .0013.
4 Buchak's second reply
Buchak's second reply is that it wouldn't in fact be a problem if REU theory could only recover the Allais preferences in a "simple" setting. We should think of the Allais problem as a thought experiment: it asks us to abstract away from anything but the immediate rewards mentioned in the problem, and to consider them stand-ins for things that are of ultimate value. For the purposes of this thought experiment, then, the simple decision problem really is grand world: it is a grand world model of a very idealized hypothetical choice situation.
What her simple model shows, according to Buchak, is that REU theory can accommodate people's intuitions regarding such a thought experiment. And this is a success, because this establishes that the theory can accommodate a certain kind of reasoning that we all engage in. Buchak moreover concedes that it may well be a mistake for agents to think of the choices they actually face in such simple terms. But she claims this is no problem for her theory.
[I]f people 'really' face the simple choices, then their reasoning is correct and REU captures it. If people 'really' face the complex choices, then the reasoning in favor of their preferences is misapplied, and REU does not capture their preferences. Either way, the point still stands: REU-maximization rationalizes and formally reconstructs a certain kind of intuitive reasoning, as seen through REU theory's ability to capture preferences over highly idealized gambles to which this reasoning is relevant. (Buchak 2017, 2403)
The first thing to note here is that ordinary agents really do only face "complex" choices as we modeled them. Any reward from an isolated gamble an agent faces in her life really should itself be thought of as a gamble. This is not only true when the potential reward is something like money, which is only a means to something else. Even if the good in question is "ultimate," it just adds to the larger gamble of the agent's life she is yet to face. She might win a beautiful holiday in a prize draw, but she will still face 20 micromorts per day for the rest of her life. Even on our deathbeds we are unsure about how a lot of things we care about will play out after we're gone. REU theory makes this background risk relevant to the evaluation of any individual gamble.
This should make clear just how idealized a thought experiment that is appropriately modeled as a "simple" choice would have to be. It not only has to feature a gamble over ultimate goods, it also has to ask agents to imagine there isn't and never will be any uncertainty about anything else they value. In standard presentations of the Allais problem, this is at least not something that agents are explicitly asked to do. Rather, the task seems to be the "complex" one of evaluating the prospect of adding a particular sum of money to one's actual lifetime earnings. And then, as Buchak acknowledges, REU theory fails to rationalize the Allais preferences.
Still, presumably many of us would also display Allais preferences when considering an appropriately idealized, truly "simple" thought experiment. And REU theory can rationalize that. But why should we care about accommodating reasoning in such highly idealized decision contexts?
Buchak's original project was to rationally accommodate the ordinary decision-maker. But now what we are rationally accommodating are at best her responses to thought experiments that are very far removed from her real life. If our model gets things right, then REU theory still has to declare the ordinary decision-maker irrational if she acts in real life as she would in the thought experiment, as presumably ordinary decision-makers would. And then we haven't done very much to rationally accommodate her. In fact, as far as the project of accommodating ordinary risk aversion is concerned, the burden of proof is still on proponents of REU theory to show that there are any decisions commonly faced by real agents where REU theory comes to a significantly different assessment than expected utility theory. If there are not, then agents may as well just maximize expected utility, as it is easier to compute.
Even if the ordinary decision-maker cannot be accommodated either way, and even if expected utility theory should be used for practical purposes either way, it may of course be that intuitions about highly idealized thought experiments speak in favour of REU theory over expected utility theory as the true theory of rational choice. While we are open to this idea, the proponent of REU theory has a challenge to answer. Why should we trust our intuitive assessments of highly idealized choice situations when we are so prone to misapplying the same kind of reasoning in virtually any other context? Specifically, in the case of the Allais problem, presumably those who report Allais preferences report them whether they are really considering the simple thought experiment, or whether they are really facing the complex choice. What should make us confident in the rationality of their assessment of the thought experiment, when we all agree that their preferences are irrational in the "complex" case?
Ultimately, if Buchak's first reply fails, and all we can rely on is her second reply, we're left with no reason to abandon expected utility theory as an action-guiding theory in actual choice scenarios. Even if we grant that REU theory is a better theory of rational choice in hypothetical scenarios we never face, this is a much less exciting result than the one Risk and Rationality advertised.
Throughout, we mean the probabilities to correspond to the agent's subjective probabilities. Where probabilities are objectively given, e.g., through the experimental setup, we assume agents take these at face value. REU theory takes there to be no essential difference between that case (often referred to as choice under 'risk'), and the case where probabilities are not externally given (choice under 'uncertainty'), as long as subjective probabilities can be ascribed to the agent.
Though we need a slightly more severe risk function: \(r(p) = p^{2.05}\) instead of \(r(p) = p^2\).
To get this figure we calculate the cumulative density, at zero, of the normal distribution \({\mathscr {N}}(1,.1)\). In Mathematica: CDF[NormalDistribution[1, .1], 0].
StandardDeviation[SkewNormalDistribution[1, .17, -5]].
Unfortunately, Mathematica uses \(\sigma\) for the scale parameter even in skewed normal distributions. This gives the misleading impression that it's still the standard deviation.
Skewing also shifts the mean, we note.
In Mathematica, the relevant computation is: CDF[SkewNormalDistribution[1, .17, -5], .133], because .133 is approximately the mean of the skew normal distribution Buchak uses to represent a $0 outcome (location 0, scale .17, shape 5). Another natural value to consider is the mode, which is approximately 0.063. In that case the relevant probability is even more problematic, approximately \(3.5 \times 10^{-8}\).
Here we calculate the complement of the cumulative density at \(1.17\) of the skew normal distribution with location \(1\), scale \(.17\), and skew \(-5\): 1 - CDF[SkewNormalDistribution[1, .17, -5], 1.17], because 1.17 is the mean of the relevant skew normal distribution (location 1.3, scale .17, skew \(-5\)).
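For readers without Mathematica, the footnote computations can be reproduced in Python with scipy. This is a sketch, not part of the paper, and it assumes Mathematica's SkewNormalDistribution[μ, σ, α] corresponds to scipy's skewnorm with shape a=α, loc=μ, scale=σ:

```python
# Hedged sketch: reproduce the footnote quantities with scipy, assuming
# SkewNormalDistribution[mu, sigma, alpha] <-> skewnorm(a=alpha, loc=mu, scale=sigma).
from scipy.stats import norm, skewnorm

# CDF[NormalDistribution[1, .1], 0]: probability of a value at or below 0
p_zero = norm.cdf(0, loc=1, scale=0.1)

# StandardDeviation[SkewNormalDistribution[1, .17, -5]]
sd = skewnorm.std(-5, loc=1, scale=0.17)

# CDF[SkewNormalDistribution[1, .17, -5], .133]
p_tail = skewnorm.cdf(0.133, -5, loc=1, scale=0.17)

# 1 - CDF[SkewNormalDistribution[1, .17, -5], 1.17] (the survival function)
p_upper = skewnorm.sf(1.17, -5, loc=1, scale=0.17)

print(p_zero, sd, p_tail, p_upper)
```

All four quantities are tiny except the standard deviation, which comes out close to a tenth, matching the scale of Buchak's distributions.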
We are grateful to Lara Buchak and two anonymous referees for feedback on an earlier draft.
Allais, M. (1953). Le Comportement de l'Homme Rationnel devant le Risque: Critique des Postulats et Axiomes de l'Ecole Americaine. Econometrica, 21(4), 503–546.
Buchak, L. (2013). Risk and rationality. Oxford: Oxford University Press.
Buchak, L. (2017). Replies to commentators. Philosophical Studies, 174(9), 2397–2414. https://doi.org/10.1007/s11098-017-0907-4.
Thoma, J., & Weisberg, J. (2017). Risk writ large. Philosophical Studies, 174(9), 2369–2384. https://doi.org/10.1007/s11098-017-0916-3.
Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
1. London School of Economics, London, UK
2. University of Toronto, Toronto, Canada
Thoma, J. & Weisberg, J. Philos Stud (2019). https://doi.org/10.1007/s11098-019-01322-z
Publisher: Springer Netherlands
Primes P such that ((P-1)/2)!=1 mod P
I was looking at Wilson's theorem: If $P$ is a prime then $(P-1)!\equiv -1\pmod P$. I realized this implies that for primes $P\equiv 3\pmod 4$, $\left(\frac{P-1}{2}\right)!\equiv \pm1 \pmod P$.
Question: For which primes $P$ is $\left(\frac{P-1}{2}\right)!\equiv 1\pmod P$?
After convincing myself that it's not a congruence condition for $P,$ I found this sequence in OEIS. I'd appreciate any comments that shed light on the nature of such primes (for example, they appear to be of density 1/2 in all primes that are $3\bmod 4$).
nt.number-theory
Robin Houston
jacob
$\begingroup$ Given that there are no comments of any note on the sequence in the OEIS, there's a fair chance that little is known about your question. $\endgroup$ – Kevin Buzzard Feb 23 '10 at 10:28
$\begingroup$ For all p<=250000, p=3 mod 4, we have 5458 +1s and 5589 -1s. $\endgroup$ – Kevin Buzzard Feb 23 '10 at 10:56
I am a newcomer here. If $p > 3$ is congruent to 3 mod 4, there is an answer which involves only $p\pmod 8$ and $h\pmod 4$, where $h$ is the class number of $Q(\sqrt{-p})$. Namely one has $(\frac{p-1}{2})!\equiv 1 \pmod p$ if and only if either (i) $p\equiv 3 \pmod 8$ and $h\equiv 1 \pmod 4$ or (ii) $p\equiv 7\pmod 8$ and $h\equiv 3\pmod 4$.
The proof may not be original: since $p\equiv 3 \pmod 4$, one has to determine the Legendre symbol
$$\left(\frac{\left(\frac{p-1}{2}\right)!}{p}\right) =\prod_{x=1}^{(p-1)/2}\left(\frac{x}{p}\right)=\prod_{x=1}^{(p-1)/2}\left(\left(\left(\frac{x}{p}\right)-1\right)+1\right).$$ It is enough to know this modulo 4 since it is 1 or -1. By expanding, one gets $(p+1)/2+S \pmod 4$, where $$S=\sum_{x=1}^{(p-1)/2}\Bigl({x\over p}\Bigr).$$ By the class number formula, one has $(2-(2/p))h=S$ (I just looked up Borevich–Shafarevich, Number Theory), hence the result, since $\Bigl({2\over p}\Bigr)$ depends only on $p \pmod 8$.
Edit: For the correct answer see KConrad's post or Mordell's article.
Alexey Ustinov
$\begingroup$ That's very slick! $\endgroup$ – David E Speyer Feb 23 '10 at 13:02
$\begingroup$ Yes, very nice! My interpretation of the question is "do the primes for which the square root is 1 give a set of density 1/2?" and this at least gives some way of attacking the problem. $\endgroup$ – Kevin Buzzard Feb 23 '10 at 13:09
$\begingroup$ +1. Salut, et bienvenu ! $\endgroup$ – Chandan Singh Dalawat Feb 23 '10 at 13:58
$\begingroup$ In the paper emis.de/journals/EM/expmath/volumes/12/12.1/pp99_113.pdf they mention that Cohen proved modulo CL a conjecture of Hooley about a sum over $h(p)$. $\endgroup$ – Victor Miller Feb 23 '10 at 15:14
$\begingroup$ Interesting. In particular, that paper claims that the odd part of h(p), as p runs through primes, seems to have the same distribution as the odd part of h(D), as D runs through square free integers. That's something I didn't know. I point out, however, that this paper deals with real quadratic fields. $\endgroup$ – David E Speyer Feb 23 '10 at 15:42
There is some history to this question. Dirichlet observed (see p. 275 of ``History of the Theory of Numbers,'' Vol. 1) that since we already know $(\frac{p-1}{2})! \equiv \pm 1 \bmod p$, computing modulo squares gives $(\frac{p-1}{2})! \equiv (-1)^{n} \bmod p$, where $n$ is the number of quadratic nonresidues mod $p$ which lie between 1 and $(p-1)/2$.
Jacobi (pp. 275-276 in Dickson's book) determined $n \bmod 2$ in terms of the class number $h_p$ of ${\mathbf Q}(\sqrt{-p})$, for $p \equiv 3 \bmod 4$ and $p \not= 3$. By the class number formula, $$ \left(2-\left(\frac{2}{p}\right)\right)h_p = r-n, $$ where $r$ is the number of quadratic residues from 1 to $(p-1)/2$. Also $r + n = (p-1)/2$, so $$ 2n = \frac{p-1}{2} - \left(2 - \left(\frac{2}{p}\right)\right)h_p. $$ In particular, $h_p$ is odd when $p \equiv 3 \bmod 4$.
Taking cases if $p \equiv 3 \bmod 8$ and $p \equiv 7 \bmod 8$, we find both times that $n \equiv (h_p+1)/2 \bmod 2$, so $$ \left(\frac{p-1}{2}\right)! \equiv (-1)^{(h_p+1)/2} \bmod p. $$
This shows why getting precise statistics on when the congruence has 1 on the right side will be hard.
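A numerical check of this congruence (not part of the original answer): compute $h_p$ from the class number formula $(2 - (\frac{2}{p}))h_p = r - n$ and compare $(-1)^{(h_p+1)/2}$ with $(\frac{p-1}{2})! \bmod p$ for small primes:

```python
# Verify ((p-1)/2)! ≡ (-1)^((h_p+1)/2) (mod p) for small primes p ≡ 3 (mod 4).
def legendre(x, p):
    """Legendre symbol via Euler's criterion."""
    s = pow(x, (p - 1) // 2, p)
    return -1 if s == p - 1 else s

def class_number(p):
    """h_p from the class number formula (2 - (2/p)) h_p = r - n."""
    S = sum(legendre(x, p) for x in range(1, (p - 1) // 2 + 1))  # r - n
    return S // (2 - legendre(2, p))

def half_factorial(p):
    """((p-1)/2)! mod p."""
    acc = 1
    for k in range(2, (p - 1) // 2 + 1):
        acc = acc * k % p
    return acc

for p in [7, 11, 19, 23, 31, 43, 47, 59, 67, 71]:
    h = class_number(p)
    # (-1)^((h+1)/2) represented mod p: p-1 stands for -1
    predicted = p - 1 if (h + 1) // 2 % 2 else 1
    assert half_factorial(p) == predicted
```

The class numbers produced along the way (h(-23) = 3, h(-47) = 5, h(-71) = 7, ...) agree with the standard tables, and the congruence holds in every case tested.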
KConrad
The following is a relevant classical paper:
Mordell, L. J. The congruence $(p-1/2)!\equiv ±1$ $({\rm mod}$ $p)$. Amer. Math. Monthly 68 (1961), 145–146.
http://www.math.uga.edu/~pete/Mordell61.pdf
Put $((p-1)/2)!\equiv(-1)^a\ (\text{mod}\,p)$, where $p$ is a prime $\equiv 3\ (\text{mod}\,4)$. The author proves the following result. If $p\equiv 3\ (\text{mod}\,4)$ and $p>3$, then $$ a\equiv{\textstyle\frac 1{2}}\{1+h(-p)\}\quad(\text{mod}\,2), \tag1 $$ where $h(-p)$ is the class number of the quadratic field $k(\surd-p)$ [$\mathbb{Q}(\sqrt{-p})$ must be meant here. --PLC]. The author points out that (1) follows easily from a result of Dirichlet; also that Jacobi had conjectured an equivalent result before the class number formula was known. (MathReview by L. Carlitz)
Pete L. ClarkPete L. Clark
$\begingroup$ The notation k(\sqrt{-p}) for our Q(\sqrt{-p}) is "classical" and was used e.g. by Hilbert in his Bericht. The idea was that k(\sqrt{-p}) is the field k you get by adjoining a square root of -p to the rationals. $\endgroup$ – Franz Lemmermeyer Feb 23 '10 at 17:07
$\begingroup$ Hecke uses $K(\root l\of\mu;k)$ to denote $k(\root l\of\mu)$ in his Vorlesungen. $\endgroup$ – Chandan Singh Dalawat Feb 24 '10 at 1:06
This is an attempt to justify the answer $1/2$ based on the Cohen-Lenstra heuristics. There will be a lot of nonsensical steps, and I am not an expert, so this should be viewed with caution.
As is observed above, this is equivalent to determining $h(p) \mod 4$, where $h(p)$ is the class number of $\mathbb{Q}(\sqrt{-p})$. Since $p$ is odd and $3 \mod 4$, the only ramified prime in $\mathbb{Q}(\sqrt{-p})$ is the principal ideal $(\sqrt{-p})$. Thus, there is no $2$-torsion in the class group and $h(p)$ is odd.
For any odd prime $q$, let $a(q,p)$ be the power of $q$ which divides $h(p)$. We want to compute the average value of $$\prod_{q \equiv 3 \mod 4} (-1)^{a(q,p)}.$$
First nonsensical step: Let's pretend that the CL-heuristics work the same way for the odd part of the class group of $\mathbb{Q}(\sqrt{-p})$ as they do for the odd part of the class group of $\mathbb{Q}(\sqrt{-D})$. We just saw above that the fact that $p$ is prime constrains the $2$-part of the class group; this claim says that it does not affect the distribution of anything else.
Then we are supposed to have: $$P(a(q,p)=0) = \prod_{i=1}^{\infty} (1-q^{-i}) = 1-1/q +O(1/q^2),$$ $$P(a(q,p)=1) = \frac{1}{q-1} \prod_{i=1}^{\infty} (1-q^{-i}) = 1/q +O(1/q^2),$$ and $$P(a(q,p) \geq 2) = O(1/q^2).$$
If you believe all of the above, then the average value of $(-1)^{a(q,p)}$ is $1-2/q+O(1/q^2)$.
Second nonsensical step: Let's pretend that $a(q,p)$ and $a(q',p)$ are uncorrelated. Furthermore, let's pretend that everything converges to its average value really fast, to justify the exchange of limits I'm about to do.
Then $$E \left( \prod_{q \equiv 3 \mod 4} (-1)^{a(q,p)} \right) = \prod_{q \equiv 3 \mod 4} \left( 1- 2/q + O(1/q^2) \right).$$
The right hand side is zero, just as if $h(p)$ were equally likely to be $1$ or $3 \mod 4$.
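As a toy numerical illustration (mine, not part of the answer), the partial products of $(1 - 2/q)$ over primes $q \equiv 3 \bmod 4$ do drift steadily toward zero:

```python
# Partial products of (1 - 2/q) over primes q ≡ 3 (mod 4): they decrease
# toward 0, consistent with the heuristic equidistribution of h(p) mod 4.
def primes_up_to(n):
    sieve = bytearray([1]) * (n + 1)
    sieve[:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i, f in enumerate(sieve) if f]

partials = []
prod = 1.0
for q in primes_up_to(100000):
    if q % 4 == 3:
        prod *= 1 - 2 / q
        partials.append(prod)
print(partials[-1])  # small, and still decreasing
```

The product over all such $q$ diverges to 0 because $\sum 2/q$ diverges over primes in the residue class; the finite partial products only creep down at a log-log rate.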
David E Speyer
$\begingroup$ Most of the latex formulas don't parse on my browser (Firefox 3.5.8) (though some do). Does any one know why? $\endgroup$ – Anonymous Feb 23 '10 at 17:33
$\begingroup$ Might you be missing the jsmath fonts? math.union.edu/~dpvc/jsMath/users/fonts.html $\endgroup$ – David E Speyer Feb 23 '10 at 17:50
Apologies for repeating some information in my reply to question 121678, which I came across before seeing this one.
Several previous answers already explain the connection to the class number. It can be added that the value of $h(-p)$ was investigated by Louis C. Karpinski in his doctoral dissertation (Mathematischen und Naturwissenschaftlichen Facultät der Kaiser Wilhelms-Universität zu Strassburg, 1903), published as "Über die Verteilung der quadratischen Reste," Journal für die Reine und Angewandte Mathematik 127 (1904): 1–19. Karpinski proved a collection of formulae (all of which assume $p > 3$) involving sums over Legendre symbols, and showed that the most concise sums possible contain only $\lfloor p/6 \rfloor$ terms:
\begin{equation} \left\{ 2 - \left( \frac{2}{p} \right) \right\} h(-p) = \sum_{k=1}^{(p-1)/2} \left( \frac{k}{p} \right) \quad (p \equiv 3 \bmod{4}); \end{equation}
\begin{equation} \left\{ 3 - \left( \frac{3}{p} \right) \right\} h(-p) = 2 \sum_{k=1}^{\lfloor p/3 \rfloor} \left( \frac{k}{p} \right) \quad (p \equiv 3 \bmod{4}); \end{equation}
\begin{equation} \left\{ 2 - \left( \frac{2}{p} \right) \right\} h(-p) = \sum_{k=\lfloor p/4 \rfloor +1}^{(p-1)/2} \left( \frac{k}{p} \right) \quad (p \equiv 3 \bmod{8}); \end{equation}
\begin{equation} \left\{ 2 - \left( \frac{2}{p} \right) \right\} h(-p) = \sum_{k=1}^{\lfloor p/4 \rfloor} \left( \frac{k}{p} \right) \quad (p \equiv 7 \bmod{8}); \end{equation}
\begin{equation} \left\{ 1 + \left( \frac{2}{p} \right) + \left( \frac{3}{p} \right) - \left( \frac{6}{p} \right) \right\} h(-p) = 2 \sum_{k=1}^{\lfloor p/6 \rfloor} \left( \frac{k}{p} \right) \quad (p \equiv 7, 11, 23 \bmod{24}); \end{equation}
\begin{equation} \left\{ 1 + \left( \frac{2}{p} \right) + \left( \frac{3}{p} \right) - \left( \frac{6}{p} \right) \right\} h(-p) = -2p + 2 \sum_{k=1}^{\lfloor p/6 \rfloor} \left( \frac{k}{p} \right) \quad (p \equiv 19 \bmod{24}). \end{equation}
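The first two of Karpinski's formulae can be sanity-checked numerically against known class numbers (a sketch of mine, computing Legendre symbols via Euler's criterion):

```python
# Check Karpinski's formulae {2-(2/p)}h = sum_{k<=(p-1)/2}(k/p) and
# {3-(3/p)}h = 2 sum_{k<=floor(p/3)}(k/p) against tabulated h(-p).
def legendre(x, p):
    s = pow(x, (p - 1) // 2, p)
    return -1 if s == p - 1 else s

# Known class numbers of Q(sqrt(-p)) for small p ≡ 3 (mod 4)
known_h = {7: 1, 23: 3, 31: 3, 47: 5}

for p, h in known_h.items():
    lhs1 = (2 - legendre(2, p)) * h
    rhs1 = sum(legendre(k, p) for k in range(1, (p - 1) // 2 + 1))
    assert lhs1 == rhs1
    lhs2 = (3 - legendre(3, p)) * h
    rhs2 = 2 * sum(legendre(k, p) for k in range(1, p // 3 + 1))
    assert lhs2 == rhs2
```

Both identities hold exactly for each prime tested, as the assertions confirm.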
John Blythe Dobson
$\begingroup$ Interesting reference. I guess this answers one of my questions: mathoverflow.net/questions/106359/… $\endgroup$ – js21 Nov 27 '17 at 11:32
$\begingroup$ @js21, if you haven't already seen it, you may also be interested in the discussion of Karpinski's work in Wells Johnson and Kevin J. Mitchell, "Symmetries for sums of the Legendre symbol," Pacific Journal of Mathematics 69(1) (May 1977): 117-124, available online at msp.org/pjm/1977/69-1/pjm-v69-n1-p11-p.pdf. $\endgroup$ – John Blythe Dobson Dec 3 '17 at 2:10
Learning representation theory of Lie groups for someone who knows Lie algebras
I'd like to learn the representation theory of Lie groups. I have a good knowledge of semisimple Lie algebras and their representation theory as well as the basics of Lie groups.
To what extent are the representation theories of the groups and the corresponding algebras interrelated?
Is there a book you would recommend for someone who knows a lot about Lie algebras representations, but has only rather basic knowledge of Lie groups?
representation-theory lie-groups lie-algebras
Balerion_the_black
The representation theories of Lie groups and Lie algebras are very closely related. In fact, for a simply connected Lie group, the irreducible representations of the group are in bijection with the irreducible representations of its Lie algebra. For a connected Lie group, the irreducible representations of its Lie algebra are in bijection with the irreducible representations of its universal cover (which is a Lie group as well). If your Lie group is not connected, one can still use this correspondence by considering the connected component of the identity (your group modulo this component is a discrete, zero-dimensional group).
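An elementary numerical illustration of the passage from algebra to group (my example, not the answer's): the matrix exponential carries the Lie algebra $\mathfrak{so}(3)$ of skew-symmetric matrices into the rotation group $SO(3)$.

```python
import numpy as np
from scipy.linalg import expm

# A generator of so(3): the infinitesimal rotation about the z-axis
X = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 0.0]])

theta = 0.7
R = expm(theta * X)  # exponentiating the algebra element lands in the group

# R is orthogonal with determinant 1, i.e. an element of SO(3)
print(np.allclose(R.T @ R, np.eye(3)), np.isclose(np.linalg.det(R), 1.0))
```

In fact $R$ is exactly the rotation by angle $\theta$ about the z-axis, so its top-left entry is $\cos\theta$; representations of the simply connected cover arise by exponentiating algebra representations in just this way.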
Off the top of my head, I can give two good books that illustrate this correspondence very well: Representation Theory: A First Course (Fulton and Harris) and Introduction to Lie Groups and Lie Algebras (Kirillov Jr.). You can also learn a good deal by reading the first couple of chapters of Representations of Compact Lie Groups by Bröcker and tom Dieck. However, the rest of that book develops the theory almost entirely without any reference to Lie algebras.
Elliot
There is an important watershed between finite-dimensional and infinite-dimensional representations of both/either Lie algebras and Lie groups, especially for non-compact Lie groups, most of whose irreducible unitary representations are definitely not finite-dimensional.
Thus, @Elliot's recommendation of Fulton-Harris seems to be a popular one, connecting to finite-dimensional representation theory...
But all but the trivial finite-dimensional repns of infinite-dimensional [Edit! non-compact! :)] Lie groups, like $SL_2(\mathbb R)$, are not unitary. The unitary repns of $SL_2(\mathbb R)$ and such are infinite-dimensional.
For the latter, one of the most congenial paper books is Varadarajan's Cambridge-Press "Harmonic analysis on semi-simple Lie groups". Formal treatments are Knapp's (Princeton Press) and Wallach's (Academic Press).
Harish-Chandra discovered in the 1950s that the repn theory of semi-simple or reductive Lie groups, not only compact ones as considered by Weyl 20+ years earlier, was well-delineated by Lie algebra repns, maybe remembering also the repn theory restricted to a maximal compact. Thus, $(\mathfrak{g},K)$-modules.
Perhaps the questioner can clarify their needs...
$\begingroup$ "of infinite-dimensional Lie groups, like SL2(R)"? You mean noncompact? $\endgroup$ – darij grinberg Sep 12 '13 at 1:14
$\begingroup$ @darijgrinberg Whoa! Indeed! Thanks for alerting me! :) $\endgroup$ – paul garrett Sep 12 '13 at 2:05
I would suggest Knapp, Lie Groups Beyond an Introduction, and Knapp, Representation Theory of Semisimple Groups (An Overview Based on Examples). For a very simple introduction to basic facts about Lie groups, see Hall, Lie Groups, Lie Algebras, and Representations.
Dac0
DOI:10.1088/0305-4470/38/26/013
A Bohmian approach to quantum fractals
@article{Sanz2005ABA,
title={A Bohmian approach to quantum fractals},
author={{\'A}ngel S. Sanz},
journal={Journal of Physics A},
year={2005},
volume={38},
pages={6037-6049}
}
Á. S. Sanz
Mathematics, Physics
Journal of Physics A
A quantum fractal is a wavefunction with a real and an imaginary part continuous everywhere, but differentiable nowhere. This lack of differentiability has been used as an argument to deny the general validity of Bohmian mechanics (and other trajectory-based approaches) in providing a complete interpretation of quantum mechanics. Here, this assertion is overcome by means of a formal extension of Bohmian mechanics based on a limiting approach. Within this novel formulation, the particle dynamics…
Figures from this paper
Quantum Interference, Hidden Symmetries: Theory and Experimental Facts
The concept of quantum superposition is reconsidered and discussed from the viewpoint of Bohmian mechanics, the hydrodynamic formulation of quantum mechanics, in order to elucidate some physical…
Analyzing the quantum phase with quantum trajectories: A step towards the creation of a Bohmian thinking
(Dated: April 8, 2011) Standard quantum mechanics relies on a series of postulates which constitute its fundamental pillars. In Bohmian mechanics such postulates appear as a natural consequence of the…
Quantum carpets of a slightly relativistic particle
I. Marzoli, A. Kaplan, F. Saif, W. Schleich
We analyze the structures emerging in the spacetime representation of the probability density woven by a slightly relativistic particle caught in a one-dimensional box. In particular, we evaluate the…
Quantum phase analysis with quantum trajectories: A step towards the creation of a Bohmian thinking
Á. S. Sanz, S. Miret-Artés
We introduce a pedagogical discussion on Bohmian mechanics and its physical implications in connection with the important role played by the quantum phase in the dynamics of quantum processes. In…
Why isn't every physicist a Bohmian?
O. Passon
This note collects, classifies and evaluates common criticism against the de Broglie Bohm theory, including Ockham's razor, asymmetry in the de Broglie Bohm theory, the ``surreal trajectory''…
Boundary bound diffraction: a combined spectral and Bohmian analysis
J. Tounli, A. Alvarado, Á. S. Sanz
Physics, Mathematics
The diffraction-like process displayed by a spatially localized matter wave is here analyzed in a case where the free evolution is frustrated by the presence of hard-wall-type boundaries (beyond the…
A causal look into the quantum Talbot effect.
Medicine, Physics
The Journal of chemical physics
The authors provide an insightful picture of this nonlocal phenomenon as well as its classical limit in terms of Bohmian mechanics, also showing the causal reasons and conditions that explain its appearance.
Setting up tunneling conditions by means of Bohmian mechanics
Usually tunneling is established after imposing some matching conditions on the (time-independent) wave function and its first derivative at the boundaries of a barrier. Here an alternative scheme is…
Bound System Dynamics
Up to now we have considered quantum systems which are not subject to any external force except for open systems. Such systems are usually associated with translational properties. On the contrary,…
N-Slit Interference: Fractals in Near-Field Region, Bohmian Trajectories
V. Sbitnev
Scattering cold particles on an $N$-slit grating is shown to reproduce an interference pattern, that manifests itself in the near-field region as the fractal Talbot carpet. In the far-field region…
Incompleteness of trajectory-based interpretations of quantum mechanics
M. Hall
Trajectory-based approaches to quantum mechanics include the de Broglie?Bohm interpretation and Nelson's stochastic interpretation. It is shown that the usual route to establishing the validity of…
Time evolution of quantum fractals
Wójcik, Bialynicki-Birula, Zyczkowski
Physics, Medicine
A universal relation $D_t = 1 + D_x/2$ linking the dimensions $D_x$ of space cross sections and $D_t$ of time cross sections of the fractal quantum carpets is proved.
Quantum fractals in boxes
M. Berry
A quantum wave with probability density , confined by Dirichlet boundary conditions in a D-dimensional box of arbitrary shape and finite surface area, evolves from the uniform state . For almost all…
Fractal noise in quantum ballistic and diffusive lattice systems
E. Amanatidis, D. E. Katsanos, S. Evangelou
We demonstrate fractal noise in the quantum evolution of wave packets moving either ballistically or diffusively in periodic and quasiperiodic tight-binding lattices, respectively. For the ballistic…
Quantum equilibrium and the origin of absolute uncertainty
D. Dürr, S. Goldstein, N. Zanghí
The quantum formalism is a "measurement" formalism-a phenomenological formalism describing certain macroscopic regularities. We argue that it can be regarded, and best be understood, as arising from…
Derivation of the Schrodinger equation from Newtonian mechanics
Edward Nelson
We examine the hypothesis that every particle of mass $m$ is subject to a Brownian motion with diffusion coefficient $\frac{\hbar}{2m}$ and no friction. The influence of an external…
Unravelling quantum carpets: a travelling-wave approach
M. Hall, Martina S. Reineker, W. Schleich
Generic channel and ridge structures are known to appear in the time-dependent position probability distribution of a one-dimensional quantum particle confined to a box. These structures are shown to…
Quantum trajectories in atom-surface scattering with single adsorbates: the role of quantum vortices.
Á. S. Sanz, F. Borondo, S. Miret-Artés
In this work, a full quantum study of the scattering of He atoms off single CO molecules, adsorbed onto the Pt(111) surface, is presented within the formalism of quantum trajectories provided by…
Causal trajectories description of atom diffraction by surfaces
The method of quantum trajectories proposed by de Broglie and Bohm is applied to the study of atom diffraction by surfaces. As an example, a realistic model for the scattering of He off corrugated Cu…
Quantum trajectories in elastic atom-surface scattering: threshold and selective adsorption resonances.
Both threshold and selective adsorption resonances are explained by means of this quantum trapping considering different space and time scales, and a mapping between each region of the (initial) incoming plane wave and the different parts of the diffraction and resonance patterns can be easily established.
Number of atoms per unit cell
I don't understand the concept of a unit cell, or how to calculate the number of atoms per unit cell in a cubic lattice. For example, in the $\ce{fcc}$ lattice the number of atoms per unit cell is:
$$8\cdot\frac{1}{8} + 6\cdot\frac{1}{2}=4$$
What do the 8 and the 2 in the denominators stand for? And how does the total come out to 4?
crystal-structure solid-state-chemistry crystallography
Wafaa J. Gh
The denominator signifies the number of cubes that are needed to completely encompass the whole point. For example, a corner point can be thought of as a center of 8 whole cubes, while a face centre is enclosed by 2 cubes and an edge center by 4. Hence, only 1/8 of a corner atom is in a specific unit cell and so on and so forth.
Consequently, the total number of atoms in a unit cell (say a FCC) would be equal to -
(number of corners) × (fraction of each corner atom inside the cell) = $8 \cdot \frac{1}{8} = 1$

(number of face centres) × (fraction of each face-centre atom inside the cell) = $6 \cdot \frac{1}{2} = 3$

which adds up to $1 + 3 = 4$.
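The same bookkeeping extends to the other cubic cells; here is a small exact-fraction sketch (the site shares are the standard crystallographic fractions, and the cell contents are the usual textbook ones):

```python
# Count atoms per unit cell from the fraction of each site type that lies
# inside one cell: corner 1/8, edge 1/4, face 1/2, body 1.
from fractions import Fraction

SHARE = {"corner": Fraction(1, 8), "edge": Fraction(1, 4),
         "face": Fraction(1, 2), "body": Fraction(1, 1)}

def atoms_per_cell(contents):
    """contents: dict mapping site type -> number of such sites in the cell."""
    return sum(n * SHARE[site] for site, n in contents.items())

sc  = atoms_per_cell({"corner": 8})             # simple cubic
bcc = atoms_per_cell({"corner": 8, "body": 1})  # body-centred cubic
fcc = atoms_per_cell({"corner": 8, "face": 6})  # face-centred cubic
print(sc, bcc, fcc)  # 1 2 4
```

Using exact fractions keeps the arithmetic honest: each shared atom contributes precisely its interior fraction, and the totals come out as whole numbers.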
Ayushmaan
$\begingroup$ It should be noted that this answer (and the question!) only make sense for a unit cell which contains an atom in the corner and one on each symmetry-equivalent face. You might then translate the unit cell (or the atoms) to show how one can arrive at the end result of four in a different way. $\endgroup$ – Jan Oct 8 '17 at 15:59
Geometry and the imagination
Div, grad, curl and all this
Posted on May 26, 2014 by Danny Calegari
The title of this post is a nod to the excellent and well-known Div, grad, curl and all that by Harry Schey (and perhaps also to the lesser-known sequel to one of the more consoling histories of Great Britain), and the purpose is to explain how to generalize these differential operators (familiar to electrical engineers and undergraduates taking vector calculus) and a few other ones from Euclidean 3-space to arbitrary Riemannian manifolds. I have a complicated relationship with the subject of Riemannian geometry; when I reviewed Dominic Joyce's book Riemannian holonomy groups and calibrated geometry for SIAM reviews a few years ago, I began my review with the following sentence:
Riemannian manifolds are not primitive mathematical objects, like numbers, or functions, or graphs. They represent a compromise between local Euclidean geometry and global smooth topology, and another sort of compromise between precognitive geometric intuition and precise mathematical formalism.
Don't ask me precisely what I meant by that; rather observe the repeated use of the key word compromise. The study of Riemannian geometry is — at least to me — fraught with compromise, a compromise which begins with language and notation. On the one hand, one would like a language and a formalism which treats Riemannian manifolds on their own terms, without introducing superfluous extra structure, and in which the fundamental objects and their properties are highlighted; on the other hand, in order to actually compute or to use the all-important tools of vector calculus and analysis one must introduce coordinates, indices, and cryptic notation which trips up beginners and experts alike.
Actually, my complicated relationship began the first time I was introduced to vectors. It was 1986, I was at a training camp for Australian mathematics olympiad hopefuls, and Ben Robinson gave me a 2 minute introduction to the subject over lunch. I found the notation overwhelming, and there was no connection in my mind between the letters and subscripts on one side of the page and the squiggly arrows and parallelograms on the other side. By the time the subject came up again a few years later in high school, somehow the mystery had faded, and the vocabulary and meaning of vectors, inner products, determinants etc. was crystal clear. I think that the difference this time around was that I concentrated first on learning what vectors were, and only when I had gotten the point did I engage with the question of how to represent them or calculate with them. In a similar way, my introduction to div, grad and curl was equally painless, since we learned the subject in physics class (in the last couple of years of high school) in the context of classical electrodynamics. I might have been challenged to grasp the abstract idea of a "vector field" as it is introduced in some textbooks, but those little pictures of lines of force running from positive to negative charges made immediate and intuitive sense. In fact, the whole idea of describing a vector field as a partial differential operator such as $\sum_i a_i(x)\,\partial/\partial x_i$ obscures an enormous complexity; it's easy enough to compute with an expression like this, but as a mathematical object itself it is quite sophisticated, since even to define it we need not just one coordinate but an entire system of coordinates on some nearby smooth patch. Contrast this with the intuitive idea of a particle moving along a line of force, and being subjected to some influence which varies along the trajectory.
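To make the contrast concrete, here is a small sympy sketch of a vector field acting as a differential operator (the rotation field is my example, not the post's); note that even stating it requires a whole coordinate system:

```python
import sympy as sp

x, y = sp.symbols("x y")

# The rotation vector field X = -y ∂/∂x + x ∂/∂y, written as an operator on
# functions of the coordinates (x, y).
def X(f):
    return -y * sp.diff(f, x) + x * sp.diff(f, y)

# Applied to the squared radius it gives 0: rotation preserves circles,
# which is the coordinate-free content hiding behind the formula.
print(sp.simplify(X(x**2 + y**2)))  # 0
```

The intuitive picture — a particle swept along circular lines of force — is coordinate-free, while the operator description only exists once the chart $(x, y)$ has been chosen.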
I'm grateful to whoever designed the Melbourne high school science curriculum in the late 1980's for integrating the maths and physics curricula so successfully.
A few years later, as an undergraduate at the University of Melbourne, I was attending Marty Ross' reading group as we attempted to go through Cheeger and Ebin's Comparison theorems in Riemannian geometry, and the confusion was back. Noel Hicks' MathSciNet review calls this book a "tight, elegant, and delightful addition to the literature on global Riemannian geometry", although he remarks that the "tightness of the exposition and a few misprints leave the reader with some challenging work". Today I love this book, and recommend it to anyone; but at the time it was a terrible book to learn Riemannian geometry from for the first time (actually, since I was not a maths major, my confusion was amplified by many gaps in my intermediate education). Some aspects of the book I could appreciate — at least we were not drowning in indices, and the formulae were almost readable. But I was simply at a loss to understand the rules of the game — what sort of manipulations of formulae were allowed? how do you contract a vector field with a form? why am I allowed to choose coordinates at this point so that everything magically simplifies? how would anyone ever stumble on the formula for the Ricci curvature and see that it was invariant and had such nice properties? and so on.
And yet again, the duration of a couple of years made a world of difference. As a graduate student at Berkeley taking classes from Shoshichi Kobayashi and Sasha Givental, suddenly everything made sense (well, not everything, but at least the rudiments of Riemannian geometry). The difference again was that the notation and the calculations followed a discussion of what the objects were, and what information they contained and why you might want to use them or talk about them. And, crucially, this initial discussion was carried out first informally in words rather than by beginning with a formal definition or a formula.
So with this backstory in mind, I hope it might be useful to the graduate student out there who is struggling with the elements of the tensor calculus to go through a brief informal discussion of the meaning of some of the basic differential operators, which are the ingredients out of which much of the beauty of the subject can be synthesized.
Let's get down to brass tacks. We start with a smooth manifold M and a vector field X. What is a vector field? For me I always think of it dynamically as a flow: the manifold is something like a fluid, and an object in M will be swept along by this flow and moved along the flowlines, or integral curves of the vector field. On a smooth manifold without a metric it doesn't make sense to talk about whether the flow is moving "fast" or "slow", but it does make sense to look at the places where it is stationary (the zeros of the vector field) and see whether the zeros are isolated or not, stable or unstable, or come in families. If f is a smooth function on M, the value of f varies along the integral curves of the vector field, and we can look at the rate at which the value changes; this is the derivative of f in the direction X, and is denoted Xf. It is a smooth function on M; we can iterate this procedure and compute X(Xf), X(X(Xf)) and so on. The level sets of a smooth function f are (generically) smooth manifolds, and the whole idea of calculus is to approximate smooth things locally by linear things; thus generically through most points we can look at the level set of f through that point, and the tangent space to that level set. This is a hyperplane, and is spanned locally by the vector fields for which Xf is zero at the given point. More precisely, we can define a 1-form df just by setting df(X) = Xf; where df is nonzero, the kernel of df is the tangent space to the level set of f as described above.
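The identity df(X) = Xf is easy to check in coordinates; here is a minimal symbolic sketch (my own illustration, not from the post), using the rotation field on the plane:

```python
import sympy as sp

x, y = sp.symbols('x y')

# A vector field on the plane, written in coordinates as
# X = a(x,y) d/dx + b(x,y) d/dy; here the rotation field a = -y, b = x.
a, b = -y, x
f = x**2 + y**2   # a smooth function

# Xf: the rate of change of f along the flow of X
Xf = a*sp.diff(f, x) + b*sp.diff(f, y)

# df(X): pair the 1-form df = f_x dx + f_y dy with X
df_of_X = sp.diff(f, x)*a + sp.diff(f, y)*b

assert sp.simplify(Xf - df_of_X) == 0   # df(X) = Xf by definition
assert sp.simplify(Xf) == 0             # f is constant along the rotation flow
```

The second assertion reflects the picture above: the circles x² + y² = const are exactly the integral curves of the rotation field, so f does not change along the flow.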
Grad. Now we introduce a Riemannian metric, which is a smooth choice of inner product on the tangent space at each point. It does two things for us: first, it lets us talk about the speed of a flow generated by a vector field X (or equivalently, the size of the vectors); and second, it lets us measure the angle between two vectors at each point, in particular it lets us say what it means for vectors to be perpendicular. If f is a smooth function on a Riemannian manifold, we can do more than just construct the level sets of f; we can ask in which direction the value of f increases the fastest (and we can further ask how fast it increases in that direction). The answer to this question is the gradient; the gradient of f is a vector field which points always in the direction in which f increases the fastest, and with a magnitude proportional to the rate at which it increases there. In terms of the level sets of the function f, any vector field can be decomposed into a part which is tangent to the level sets (this is the part of the vector field whose flow keeps f unchanged) and a part which is perpendicular to it; the gradient is thus everywhere perpendicular to the level sets of f.
The inner product lets us give isomorphisms between vector fields and 1-forms called the sharp and flat isomorphisms. If $\alpha$ is a 1-form, and X is a vector field, we define the vector field $\alpha^\sharp$ and the 1-form $X^\flat$ by the formulae

$$\langle \alpha^\sharp, Y \rangle = \alpha(Y), \qquad X^\flat(Y) = \langle X, Y \rangle$$

for every vector field Y.
Sharp and flat are inverse operations. In words, a vector field and a 1-form are related by these operations if at each point they have the same magnitude, and the direction of the vector field is perpendicular to the kernel of the 1-form (i.e. the tangent space on which the 1-form vanishes). Using these isomorphisms, the gradient of a function f is just the vector field obtained by applying the sharp isomorphism to the 1-form df. In other words, it is the unique vector field $\mathrm{grad}(f)$ such that for any other vector field X there is an identity

$$\langle \mathrm{grad}(f), X \rangle = df(X) = Xf$$
The zeros of the gradient are the critical points of f; for instance, the gradient vanishes at the minimum and the maximum of f.
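Both defining properties of the gradient, that it is perpendicular to the level sets and that pairing it with X recovers Xf, can be verified symbolically. A small sketch (my own, in the flat Euclidean metric, where sharp acts as the identity on components):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 + 3*y**2

# In the flat Euclidean metric, grad f is the vector of partial derivatives.
grad_f = sp.Matrix([sp.diff(f, x), sp.diff(f, y)])

# A vector tangent to the level set {f = const}: rotate df by 90 degrees.
tangent = sp.Matrix([-sp.diff(f, y), sp.diff(f, x)])

perp = sp.simplify(grad_f.dot(tangent))
assert perp == 0   # the gradient is perpendicular to the level sets

# <grad f, X> = Xf for any vector field X (here X = d/dx + d/dy)
X = sp.Matrix([1, 1])
Xf = sp.diff(f, x) + sp.diff(f, y)
assert sp.simplify(grad_f.dot(X) - Xf) == 0
```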
Div. In Euclidean space of some dimension n, a collection of n linearly independent vectors forms the edges of a parallelepiped. The volume of the parallelepiped is the determinant of the matrix whose columns are the given vectors. Actually there is a subtlety here — we need to choose an ordering of the vectors to take the determinant. A permutation might change the determinant by a factor of -1 if the sign of the permutation is odd. On an oriented Riemannian n-manifold if we have n vectors at a point, we can convert them to 1-forms and wedge them together — the result is an n-form. On an n-dimensional vector space, any two n-forms are proportional. Wedging together the 1-forms associated to a basis of perpendicular vectors of length 1 (an orthonormal collection) gives an n-form at each point which we call the volume form, and denote it $\mathrm{dvol}$. For any other n-tuple of vectors the volume of the parallelepiped is equal to the ratio of the n-form they determine (by taking flat and wedging) and the volume form.
Now, there is an operator called Hodge star, denoted $\star$, which acts on differential forms as follows. A k-form $\alpha$ can be wedged with an (n-k)-form to make an n-form, and this n-form can be compared in size to the volume form. We define the (n-k)-form $\star\alpha$ to be the smallest form such that

$$\alpha \wedge \star\alpha = \langle \alpha, \alpha \rangle \, \mathrm{dvol}$$

In other words, $\star\alpha$ is perpendicular to the subspace of (n-k)-forms $\beta$ with $\alpha \wedge \beta = 0$. With this notation $\star \, \mathrm{dvol}$ is the constant function equal to 1 everywhere; conversely for any smooth function f we have $\star f = f \, \mathrm{dvol}$.
If X is a vector field, the flow generated by X carries along not just points, but tensor fields of all kinds. Contravariant tensor fields are pushed forward by the flow, covariant ones are pulled back. Thus a stationary observer at a point in M sees a one-parameter family of tensors of some fixed kind flowing through their point, and they may differentiate this family. The result is the Lie derivative of the tensor field, and is denoted $\mathcal{L}_X$. The divergence of a vector field X measures the extent to which the flow generated by X does or does not preserve volume. It is a function which vanishes where the field infinitesimally preserves volume, and is biggest where the flow expands volume the most and smallest where the flow compresses volume the most.
The Lie derivative of the volume form is an n-form; taking Hodge star gives a function, and this function is the divergence. Thus:

$$\mathrm{div}(X) = \star \, \mathcal{L}_X \mathrm{dvol}$$
In terms of the operators we have described above, applying flat to a vector field X gives a 1-form $X^\flat$. Applying Hodge star to this 1-form gives rise to an (n-1)-form $\star X^\flat$, then applying d gives an n-form, and this n-form (finally) is precisely $\mathcal{L}_X \mathrm{dvol}$. Thus,

$$\mathrm{div}(X) = \star \, d \star X^\flat$$
Gradient and divergence are "almost" dual to each other under Hodge star, in the following sense. Let's suppose we have some function f and some vector field X. We can take the gradient and form $\mathrm{grad}(f)$, and then we can look at the inner product of the gradient with X to obtain a function, and then integrate this function over the manifold. I.e.

$$\int_M \langle \mathrm{grad}(f), X \rangle \, \mathrm{dvol} = \int_M df \wedge \star X^\flat$$
If M is closed, the integral of an exact form over M is zero; since $d(f \star X^\flat) = df \wedge \star X^\flat + f \, d \star X^\flat$, we deduce that

$$\int_M \langle \mathrm{grad}(f), X \rangle \, \mathrm{dvol} = -\int_M f \, \mathrm{div}(X) \, \mathrm{dvol}$$
so that -div is a formal adjoint to grad.
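This adjointness survives discretization. A minimal numpy check (my own sketch, not from the post): on a periodic grid, a circle playing the role of a closed manifold, with summation standing in for integration, the forward-difference gradient and the backward-difference divergence satisfy exactly the identity above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
f = rng.standard_normal(n)   # a "function" on a periodic grid (a circle, hence closed)
X = rng.standard_normal(n)   # a "vector field" on the same grid

def grad(f):
    # forward difference with periodic boundary
    return np.roll(f, -1) - f

def div(X):
    # backward difference: the discrete divergence matching grad above
    return X - np.roll(X, 1)

# <grad f, X> = <f, -div X>, with summation playing the role of integration
lhs = np.dot(grad(f), X)
rhs = -np.dot(f, div(X))
assert np.allclose(lhs, rhs)
```

The pairing of forward and backward differences is forced: summing by parts on the circle moves the shift from f to X, which turns the forward difference into a backward one, exactly as integration by parts turns grad into -div.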
Laplacian. If f is a function, we can first apply the gradient and then the divergence to obtain another function; this composition (or rather its negative) is the Laplacian, and is denoted $\Delta$. In other words,

$$\Delta f = -\mathrm{div}(\mathrm{grad}(f))$$
Note that there are competing conventions here: it is common to denote the negative of this quantity (i.e. the composition div grad itself) as the Laplacian. But the convention above is also common, and has the advantage that the Laplacian is a non-negative self-adjoint operator. The Laplacian governs the flow of heat in the manifold; if we imagine our manifold is filled with some collection of microscopic particles buzzing around randomly at great speed and carrying kinetic energy around, then the temperature is a measure of the amount of energy per unit of volume. If the temperature is constant, then although the particles can move from point to point, on average for each particle that moves out of a small box, there will be another particle that moves in from the outside; thus the ensemble of particles is in "thermal equilibrium". However, if there is a local hot spot — i.e. a concentration of high energy particles — then these particles will have a tendency to spread out, in the sense that the average number of particles that leave the small hot box will exceed the number of particles that enter from neighboring cooler boxes. Thus, heat will tend to flow along the vector field which is the negative gradient of temperature, and where this vector field diverges, the heat will dissipate and the temperature will cool. In other words, if f is the temperature, then the derivative of temperature over time satisfies the heat equation $f_t = -\Delta f$. Actually, since heat can come in or out from any direction, what is important is how the heat at a point deviates from the average of the heat at nearby points. The stationary heat distributions — i.e. the functions f with $\Delta f = 0$ — are therefore the functions which satisfy an (infinitesimal) mean value property. These functions are called harmonic.
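A discrete sketch of this heat flow (my own illustration, assuming a periodic 1-D grid): a hot spot spreads out, the total heat is conserved, and the profile approaches thermal equilibrium.

```python
import numpy as np

# Temperature on a periodic 1-D grid (a circle), with a hot spot.
n = 200
f = np.zeros(n)
f[90:110] = 1.0
total_heat, initial_max = f.sum(), f.max()

def laplacian(f):
    # Discrete Laplacian with the sign convention above (non-negative operator):
    # twice the deviation of f from the average of its two neighbors.
    return 2*f - np.roll(f, 1) - np.roll(f, -1)

# Run the heat equation df/dt = -(Laplacian f) by explicit Euler steps.
dt = 0.1
for _ in range(10000):
    f = f - dt*laplacian(f)

assert abs(f.sum() - total_heat) < 1e-8   # heat is conserved
assert f.max() < initial_max              # the hot spot has spread out
assert f.std() < 0.1                      # approaching thermal equilibrium
```

Note that the discrete Laplacian here literally measures the failure of the mean value property, which is exactly the characterization of harmonic functions above.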
The erratic motion of the infinitesimal particles as they bump into each other and drift around is called Brownian motion, after the botanist Robert Brown, who is known to Australians for being the naturalist on the scientific voyage of the Investigator which sailed to Western Australia in 1801. Later, in 1827, he observed the jittery motion of minute particles ejected from pollen grains, and the phenomenon came to be named after him. Thus, a function on a Riemannian manifold is harmonic if its expected value stays constant under random Brownian motion, and the Laplacian describes the way that the expected value of the function changes under such motion.
Curl. After converting a vector field to a 1-form with the flat operator, one can apply the operator d to obtain a closed 2-form. On an arbitrary Riemannian manifold, this is more or less the end of the story, but on a 3-manifold, applying Hodge star to a 2-form gives back a 1-form, which can then be converted back to a vector field with the sharp operator. This composition is the curl of a vector field; i.e.

$$\mathrm{curl}(X) = (\star \, d X^\flat)^\sharp$$
Notice that this satisfies the identities

$$\mathrm{curl}(\mathrm{grad}(f)) = 0, \qquad \mathrm{div}(\mathrm{curl}(X)) = 0$$
Thus one of the functions of the curl operator is to give a necessary condition on a vector field to arise as the gradient of some function; such a function, if it exists, is called a potential for the vector field. Since a gradient flows from places where the function is small to where it is large, it does not recur or circulate; hence in a sense the curl measures the tendency of the vector field to circulate, or to form closed orbits. Actually there is a subtlety here which is that the curl will vanish precisely on vector fields which are locally the gradient of a smooth function. The topology of M — in particular its first homology group with real coefficients — parameterizes curl-free vector fields modulo those which are gradients of smooth functions.
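Both identities reduce to the equality of mixed partial derivatives, and are easy to verify symbolically in flat coordinates on R³ (a sketch of my own, not from the post):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = (x, y, z)

def grad(f):
    return sp.Matrix([sp.diff(f, c) for c in coords])

def curl(X):
    return sp.Matrix([sp.diff(X[2], y) - sp.diff(X[1], z),
                      sp.diff(X[0], z) - sp.diff(X[2], x),
                      sp.diff(X[1], x) - sp.diff(X[0], y)])

def div(X):
    return sum(sp.diff(X[i], coords[i]) for i in range(3))

f = sp.sin(x)*y**2 + sp.exp(z)*x            # an arbitrary smooth function
X = sp.Matrix([y*z, x**2, sp.cos(x*y)])     # an arbitrary vector field

assert sp.simplify(curl(grad(f))) == sp.zeros(3, 1)   # curl grad f = 0
assert sp.simplify(div(curl(X))) == 0                 # div curl X = 0
```

In the language of forms, both identities are instances of the single fact d² = 0, applied to df and to X♭ respectively.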
As mentioned above, the curl measures the tendency of the vector field to spiral around an axis (locally); the direction of this axis of spiraling is the direction of the vector field $\mathrm{curl}(X)$, and the magnitude is the rate of twisting. Another way to say this is that the magnitude of the curl measures the tendency of flowlines of the vector field to wind positively around each other. A vector field and its curl can be proportional; such vector fields are called Beltrami fields and they arise (up to rescaling) as the Reeb flows associated to contact structures.
On an arbitrary Riemannian n-manifold it is still possible to interpret the curl in terms of rotation or twisting. Using the sharp and flat isomorphisms, a 2-form determines at each point a skew-symmetric endomorphism of the tangent space. The endomorphism acts on a vector by first contracting it with the 2-form to produce a 1-form, then using the sharp operator to transform it back to a vector. The skew-symmetry of this endomorphism is equivalent to the alternating property of forms. Now, a skew-symmetric endomorphism of a vector space can be thought of as an infinitesimal rotation, since the Lie algebra of the orthogonal group consists precisely of skew-symmetric matrices. Thus a vector field X on a Riemannian manifold determines a field of infinitesimal rotations, and this field is one way of thinking of the 2-form $dX^\flat$. On a 3-manifold, a rotation has a unique axis, and this axis points in the direction of the vector field $\mathrm{curl}(X)$. On a Kähler manifold, the Kähler form determines a field of infinitesimal rotations which rotate the complex directions at constant speed.
Strain. Actually, the curl, the divergence, and a third operator called the strain can all be put on a uniform footing, as follows. We continue to think of a vector field X as a flow on a smooth manifold M. Tensor fields are pushed or pulled around by X, and an observer at a fixed point sees a 1-parameter family of tensors (of a fixed kind) evolving over time. But we would like to be able to study the effect of X on an object which is carried about and distorted by the flow; for example, we might have a curve or a submanifold in M, and we might want to understand how the geometry of this submanifold is preserved or distorted as it is carried along by the flow. Calculus takes place in a fixed vector space, and the flow is moving our object along the flowlines. We need some way to bring the object back along the flowline to a fixed reference frame so that we can understand how it is being transformed by the flow. On a Riemannian manifold there is a canonical way to move tensor fields along flowlines: we move them by parallel transport. There is a unique connection on the manifold called the Levi-Civita connection which preserves the metric, and is torsion-free. The first condition just means that parallel transport is an isometry from one tangent space to the other. The second condition is more subtle, and it means (roughly) that there is no "unnecessary twisting" of the tangent space as it is transported around (no yaw, in aviation terms). Think of a car moving down a straight freeway; the geometry of the car is (hopefully!) not distorted by its motion, and the occupants of the car are not unnecessarily rotated or twisted. When the car hits some ice, it begins to skid and twist; the occupants are still moved in roughly the same overall direction, and the geometry is still not distorted (until a collision, anyway), but there is unnecessary twisting — the "torsion" of the connection.
So on a Riemannian manifold, we can flow objects away by a vector field X, and then parallel transport them back along the flowlines with the Levi-Civita connection. Now "the same" tensor experiences the effect of the vector field X while staying in "the same" vector space, so that we can compute the derivative to determine the infinitesimal effect of the flow. This derivative is the operator denoted $A_X$ by Kobayashi–Nomizu, and it is easy to check that it is itself a tensor field for any fixed X, and therefore determines a section of the bundle of endomorphisms of the tangent bundle.
On a Riemannian manifold, the space of endomorphisms of the tangent space at each point is a module for the Lie algebra of the orthogonal group, and it makes sense to decompose an endomorphism into components which correspond to the irreducible factors. Said more prosaically, an endomorphism is expressed (in terms of an orthonormal basis) as a matrix, and we can decompose this matrix into an antisymmetric and a symmetric part. Further, the symmetric part can be decomposed into its trace (a diagonal matrix, up to scale) and a trace-free part.
In this language,
the divergence of X is the negative of the trace of $A_X$;
the curl of X is the skew-symmetric part of $A_X$; and
the strain of X is the trace-free symmetric part of $A_X$.
The strain measures the infinitesimal failure of flow by X to be conformal. Under a conformal transformation, lengths might change but angles are preserved. The strain measures the extent to which some directions are pushed and pulled by the flow of X more than others; in general relativity, this is expressed by talking about the tidal force of the gravitational field. An extreme example of tidal forces is the spaghettification experienced (briefly) by an observer falling into a black hole. In the theory of quasiconformal analysis, a Beltrami differential prescribes the strain of a smooth mapping between domains.
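In flat R³ the Levi-Civita connection is trivial, and (up to sign conventions, which I leave loose here) the operator A_X is just the Jacobian matrix of X at each point; the three-part decomposition above can then be checked symbolically. A sketch of my own:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = (x, y, z)
X = sp.Matrix([y*z + x, x**2 - 2*y + z**3, sp.sin(x*y) + z])

# In flat R^3, up to sign conventions, A_X is the Jacobian of X.
J = X.jacobian(sp.Matrix(coords))

trace_part  = (J.trace()/3) * sp.eye(3)   # the divergence piece
skew_part   = (J - J.T)/2                 # the curl piece
strain_part = (J + J.T)/2 - trace_part    # the trace-free symmetric piece

# The three irreducible pieces reassemble the endomorphism:
assert sp.simplify(trace_part + skew_part + strain_part - J) == sp.zeros(3, 3)
assert sp.simplify(strain_part.trace()) == 0

# The trace recovers div X, and the skew part packages curl X:
div_X = sum(sp.diff(X[i], coords[i]) for i in range(3))
assert sp.simplify(J.trace() - div_X) == 0
curl_X0 = sp.diff(X[2], y) - sp.diff(X[1], z)   # first component of curl X
assert sp.simplify(2*skew_part[2, 1] - curl_X0) == 0
```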
and so on. This is a far from exhaustive survey of some of the key players in Riemannian geometry, and yet strangely I am temporarily exhausted. It is hard work to unpack the telegraphic beauty of Levi-Civita's calculus into a collection of stories. And this is the undeniable advantage of the notational formalism — its concision. A geometric formula can (and often does) contain an enormous amount of information — much of it explicit, but some of it implicit, and depending on the reader to be familiar with a host of conventions, simplifications, abbreviations, and even ad hoc identifications which might depend on context. Maybe the trick is to learn to read more slowly. Or if you have a couple of years to spare, you can always do what I did, and go away and come back later when the material is ready for you. For the curious, I have a few notes on my webpage, including notes from a class on Riemannian geometry I taught in Spring 2013, and notes from a class on minimal surfaces that I'm teaching right now (much of this blog post is adapted from the introduction to the latter). Bear in mind that these notes are not very polished in places, and the minimal surface notes are very rudimentary and only cover a couple of topics as of this writing.
This entry was posted in 3-manifolds, Riemannian geometry and tagged curl, div, exposition, grad, Riemannian geometry, vector field. Bookmark the permalink.
12 Responses to Div, grad, curl and all this
Jon Awbrey says:
Here's a playground you may find some fun in —
Differential Logic and Dynamic Systems
Danny Calegari says:
AlexE says:
Do have a nice reference for the connection of the operator A_X with div, curl and strain, and also for the interpretation of the strain as the infinitesimal conformal change? I looked into the book of Kobayashi and Nomizu and only found the computation that the divergence is minus the trace of A_X. Thanks in advance.
Hi AlexE – no idea about a reference. Maybe Kobayashi says something about it in "Transformation groups in differential geometry"? By the way, this idea of obtaining natural differential operators from representation theory is just the tip of a very big iceberg. One piece of this iceberg is the Bernstein-Gelfand-Gelfand complex – there was an introductory article on this by Michael Eastwood in the Notices back in 1999:
http://www.ams.org/notices/199911/fea-eastwood.pdf
Siran Li says:
Prof. Calegari, Thanks for the nice blog! There is one point I failed to work out: in defining curl(X) on arbitrary dimensional manifold $M^n$, you mentioned that we can think of it as a 2-form, hence a rotation. It seems clear to me that a 2-form is an infinitesimal rotation, as e.g. $\Omega^2(M) \cong \Gamma (T^\ast M \otimes TM) \cong \Gamma (End (TM)) \cong \mathfrak{so}(n)$. But how to associate to a vector field X a 2-form curl (X)?
Hi Siran – thanks for your question! I assume you read the part where I said that from a vector field X we can get a 1-form by applying the flat operator, then take exterior d to get a 2-form. In dimension 3 we can then dualize with Hodge star and apply sharp to get back a vector field; but in other dimensions we should just stick with the 2-form, and think of it as a "rotation field". But maybe you were asking how to think about curl(X) as a 2-form. The way to think about 2-forms in general is to think about how they assign a number to any 2-dimensional plane. How does a vector field X assign a number to 2-dimensional planes? Imagine a toy boat being carried down a stream; as it moves forward, it might also spin around in an eddy, eg because the water is flowing faster on one side than the other. The rate at which the boat spins in each 2-dimensional plane is the curl (thought of as a 2-form) applied to that plane. In an arbitrary Riemannian manifold, we need to use the Levi-Civita connection to say what we mean by "the" plane in which we are measuring the rate of rotation (since these planes are in tangent spaces at different points).
Thank you very much Prof. Calegari! I've always thought that curl is just a 3-dim'l thing, but your interpretation as a 2-form is certainly quite natural!
For your last sentence ("In an arbitrary Riemannian manifold…"), I assume that you mean the following: the boat dashes in the vertical direction (i.e. as a vector field, X is vertical in the bundle TM), while we want to measure the rate of rotation in any horizontal 2-plane. Thus, we need the connection to specify the choice of horizontal directions. Am I correct?
You can also measure rotation in a 2-plane which is tangent to the direction of motion (here maybe the mental model is to imagine something "rolling" as it moves forward; there is rotation – relative to parallel transport – in the plane perpendicular to the axis on which it rolls); i.e. the curl doesn't have to vanish in 2-planes tangent to the flow. Think about a vector field in 2 dimensions: the curl is now just a number at every point, the rate of turning of the 2-dimensional tangent space under the flow, relative to parallel transport.
Oh I see–Thank you!!
Pingback: Why is force a 1-form?
Paul Masham says:
Dear Professor Calegari,
Many thanks for this very illuminating blog. Although I was aware that a differential geometric definition of grad, div and curl existed from my undergraduate days 40 years or so ago when Tom Willmore gave a course on DG at Durham University UK, I had never quite assimilated this until now. In the mean time I worked as an industrial mathematician until I retired a few years ago. I have since taken a keen interest again in pure maths.
Just a minor point, at the end of the first paragraph above under your description of div where you say '(by taking sharp and wedging)'- did you mean 'flat' (and not 'sharp')? (otherwise I am confused!).
Dear Paul – yes, I think you're right, it should be "flat".
PostBQP Postscripts: A Confession of Mathematical Errors
tl;dr: This post reveals two errors in one of my most-cited papers, and also explains how to fix them. Thanks to Piotr Achinger, Michael Cohen, Greg Kuperberg, Ciaran Lee, Ryan O'Donnell, Julian Rosen, Will Sawin, Cem Say, and others for their contributions to this post.
If you look at my Wikipedia page, apparently one of the two things in the world that I'm "known for" (along with algebrization) is "quantum Turing with postselection." By this, Wikipedia means my 2004 definition of the complexity class PostBQP—that is, the class of decision problems solvable in bounded-error quantum polynomial time, assuming the ability to postselect (or condition) on certain measurement outcomes—and my proof that PostBQP coincides with the classical complexity class PP (that is, the class of decision problems expressible in terms of whether the number of inputs that cause a given polynomial-time Turing machine to accept does or doesn't exceed some threshold).
To explain this a bit: even without quantum mechanics, it's pretty obvious that, if you could "postselect" on exponentially-unlikely events, then you'd get huge, unrealistic amounts of computational power. For example (and apologies in advance for the macabre imagery), you could "solve" NP-complete problems in polynomial time by simply guessing a random solution, then checking whether the solution is right, and shooting yourself if it happened to be wrong! Conditioned on still being alive (and if you like, appealing to the "anthropic principle"), you must find yourself having guessed a valid solution—assuming, of course, that there were any valid solutions to be found. If there weren't any, then you'd seem to be out of luck! (Exercise for the reader: generalize this "algorithm," so that it still works even if you don't know in advance whether your NP-complete problem instance has any valid solutions.)
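Since this "anthropic algorithm" is classical, it is easy to simulate: rejection sampling is exactly postselection. A toy sketch (my own illustration, not from the paper), with subset-sum as the NP problem; note that on a "no" instance the loop runs forever, which is precisely the issue flagged in the exercise:

```python
import random

def postselected_subset_sum(nums, target, rng):
    """Guess a random subset; "survive" only if it sums to the target."""
    while True:                      # condition on still being alive
        subset = [v for v in nums if rng.random() < 0.5]
        if subset and sum(subset) == target:
            return subset

rng = random.Random(42)
nums = [3, 34, 4, 12, 5, 2]
solution = postselected_subset_sum(nums, 9, rng)
assert sum(solution) == 9
```

Each trial succeeds with probability at least 2^-n when a solution exists, so the postselected "algorithm" needs only polynomially many guesses in expectation—conditioned on survival.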
So with the PostBQP=PP theorem, the surprise was not that postselection gives you lots of computational power, but rather that postselection combined with quantum mechanics gives you much more power even than postselection by itself (or quantum mechanics by itself, for that matter). Since P^PP = P^#P, the class PP basically captures the full difficulty of #P-complete counting problems—that is, not just solving an NP-complete problem, but counting how many solutions it has. It's not obvious that a quantum computer with postselection can solve counting problems, but that's what the theorem shows. That, in turn, has implications for other things: for example, I showed it can be used to prove classical facts about PP, like the fact that PP is closed under intersection (the Beigel-Reingold-Spielman Theorem), in a straightforward way; and it's also used to show the hardness of quantum sampling problems, in the work of Bremner-Jozsa-Shepherd as well as my BosonSampling work with Arkhipov.
I'm diffident about being "known for" something so simple; once I had asked the question, the proof of PostBQP=PP took me all of an hour to work out. Yet PostBQP ended up being a hundred times more influential for quantum computing theory than things on which I expended a thousand times more effort. So on balance, I guess I'm happy to call PostBQP my own.
That's why today's post comes with a special sense of intellectual responsibility. Within the last month, it's come to my attention that there are at least two embarrassing oversights in my PostBQP paper from a decade ago, one of them concerning the very definition of PostBQP. I hasten to clarify: once one fixes up the definition, the PostBQP=PP theorem remains perfectly valid, and all the applications of PostBQP that I mentioned above—for example, to reproving Beigel-Reingold-Spielman, and to the hardness of quantum sampling problems—go through just fine. But if you think I have nothing to be embarrassed about: well, read on.
The definitional subtlety came clearly to my attention a few weeks ago, when I was lecturing about PostBQP in my 6.845 Quantum Complexity Theory graduate class. I defined PostBQP as the class of languages L⊆{0,1}* for which there exists a polynomial-time quantum Turing machine M such that, for all inputs x∈{0,1}*,
M(x) "succeeds" (determined, say, by measuring its first output qubit in the {|0>,|1>} basis) with nonzero probability.
If x∈L, then conditioned on M(x) succeeding, M(x) "accepts" (determined, say, by measuring its second output qubit in the {|0>,|1>} basis) with probability at least 2/3.
If x∉L, then conditioned on M(x) succeeding, M(x) accepts with probability at most 1/3.
I then had to reassure the students that PostBQP, so defined, was a "robust" class: that is, that the definition doesn't depend on stupid things like which set of quantum gates we allow. I argued that, even though we're postselecting on exponentially-unlikely events, it's still OK, because the Solovay-Kitaev Theorem lets us approximate any desired unitary to within exponentially-small error, with only a polynomial increase in the size of our quantum circuit. (Here we actually need the full power of the Solovay-Kitaev Theorem, in contrast to ordinary BQP, where we only need part of the power.)
A student in the class, Michael Cohen, immediately jumped in with a difficulty: what if M(x) succeeded, not with exponentially-small probability, but with doubly-exponentially-small probability—say, exp(-2^n)? In that case, one could no longer use the Solovay-Kitaev Theorem to show the irrelevance of the gate set. It would no longer even be clear that PostBQP⊆PP, since the PP simulation might not be able to keep track of such tiny probabilities.
Thinking on my feet, I replied that we could presumably choose a set of gates—for example, gates involving rational numbers only—for which doubly-exponentially-small probabilities would never arise. Or if all else failed, we could simply add to the definition of PostBQP that M(x) had to "succeed" with probability at least 1/exp(n): after all, that was the only situation I ever cared about anyway, and the only one that ever arose in the applications of PostBQP.
But the question still gnawed at me: was there a problem with my original, unamended definition of PostBQP? If we weren't careful in choosing our gate set, could we have cancellations that produced doubly-exponentially-small probabilities? I promised I'd think about it more.
By a funny coincidence, just a couple weeks later, Ciaran Lee, a student at Oxford, emailed me the exact same question. So on a train ride from Princeton to Boston, I decided to think about it for real. It wasn't hard to show that, if the gates involved square roots of rational numbers only—for example, if we're dealing with the Hadamard and Toffoli gates, or the cos(π/8) and CNOT gates, or other standard gate sets—then every measurement outcome has at least 1/exp(n) probability, so there's no problem with the definition of PostBQP. But I didn't know what might happen with stranger gate sets.
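For gate sets like Hadamard+Toffoli, the reason is easy to see: after h Hadamards, every amplitude is an integer divided by sqrt(2)^h, so any nonzero outcome has probability at least 2^-h. Here's a minimal exact simulator illustrating that invariant for Hadamards alone (my own toy code, not anything from the paper):

```python
def apply_h(state, q):
    """Apply a Hadamard to qubit q, tracking only the integer numerators;
    the implicit denominator picks up one extra factor of sqrt(2)."""
    new = {}
    for basis, z in state.items():
        b0 = basis[:q] + (0,) + basis[q+1:]
        b1 = basis[:q] + (1,) + basis[q+1:]
        sign = -1 if basis[q] == 1 else 1   # H|1> = (|0> - |1>)/sqrt(2)
        new[b0] = new.get(b0, 0) + z
        new[b1] = new.get(b1, 0) + sign * z
    return new

state = {(0, 0): 1}     # start in |00>; amplitude = 1 / sqrt(2)^0
h = 0
for q in [0, 1, 0]:     # apply three Hadamards
    state = apply_h(state, q)
    h += 1

# Every amplitude is z / sqrt(2)^h with z an integer, so every nonzero
# outcome has probability z^2 / 2^h >= 1/2^h >= 1/exp(#gates).
for z in state.values():
    assert z == 0 or z * z >= 1
```

Adding Toffoli gates would preserve the invariant, since Toffoli merely permutes basis states without introducing new denominators.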
As is my wont these days—when parenting, teaching, and so forth leave me with almost no time to concentrate on math—I posted the problem to MathOverflow. Almost immediately, I got incisive responses. First, Piotr Achinger pointed out that, if we allow arbitrary gates, then it's easy to get massive cancellations. In more detail, let {a_n} be an extremely rapidly growing sequence of integers, say with a_{n+1} > exp(a_n). Then define
$$ \alpha = \sum_{n=1}^{\infty} 0.1^{a_n}. $$
If we write out α in decimal notation, it will consist of mostly 0's, but with 1's spaced further and further apart, like so: 0.1101000000000001000…. Now consider a gate set that involves α as well as 0.1 and -0.1 as matrix entries. Given n qubits, it's not hard to see that we can set up an interference experiment in which one of the paths leading to a given outcome E has amplitude α, and the other paths have amplitudes $$ -(0.1^{a_1}), -(0.1^{a_2}), \ldots, -(0.1^{a_k}), $$ where k is the largest integer such that a_k ≤ n. In that case, the total amplitude of E will be about $$0.1^{a_{k+1}},$$ which for most values of n is doubly-exponentially small in n. Of course, by simply choosing a faster-growing sequence {a_n}, we can cause an even more severe cancellation.
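The cancellation is easy to verify with exact rational arithmetic. A sketch with a short, made-up truncation of the sequence (the real construction needs a_{n+1} > exp(a_n); the exponents below are merely illustrative):

```python
from fractions import Fraction

a = [1, 3, 27]   # illustrative rapidly growing exponents a_1, a_2, a_3
# Truncation of alpha = sum over n of 0.1^(a_n):
alpha = sum(Fraction(1, 10) ** ai for ai in a)

# Suppose n satisfies a_2 <= n < a_3.  The interfering paths contribute
# -(0.1^(a_1)) and -(0.1^(a_2)), cancelling the head of alpha exactly:
residue = alpha - sum(Fraction(1, 10) ** ai for ai in a[:2])

assert residue == Fraction(1, 10) ** 27   # only the 0.1^(a_3) tail survives
```

With n around 3 and a residue of 10^-27, the surviving amplitude is already doubly-exponentially small in n; a faster-growing sequence makes it arbitrarily worse.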
Furthermore, by modifying the above construction to involve two crazy transcendental numbers α and β, I claim that we can set up a PostBQP computation such that deciding what happens is arbitrarily harder than PP (though still computable)—say, outside of exponential space, or even triple-exponential space. Moreover, we can do this despite the fact that the first n digits of α and β remain computable in O(n) time. The details are left as an exercise for the interested reader.
Yet even though we can engineer massive cancellations with crazy gates, I still conjectured that nothing would go wrong with "normal" gates: for example, gates involving algebraic amplitudes only. More formally, I conjectured that any finite set A=(a_1,…,a_k) of algebraic numbers is "tame," in the sense that, if p is any degree-n polynomial with integer coefficients at most exp(n) in absolute value, then p(a_1,…,a_k)≠0 implies |p(a_1,…,a_k)|≥1/exp(n). And indeed, Julian Rosen on MathOverflow found an elegant proof of this fact. I'll let you read it over there if you're interested, but briefly, it interprets the amplitude we want as one particular Archimedean valuation of a certain element of a number field, and then lower-bounds the amplitude by considering the product of all Archimedean and non-Archimedean valuations (the latter of which involves the p-adic numbers). Since this was a bit heavy-duty for me, I was grateful when Will Sawin reformulated the proof in linear-algebraic terms that I understood.
And then came the embarrassing part. A few days ago, I was chatting with Greg Kuperberg, the renowned mathematician and author of our climate-change parable. I thought he'd be interested in this PostBQP progress, so I mentioned it to him. Delicately, Greg let me know that he had recently proved the exact same results, for the exact same reason (namely, fixing the definition of PostBQP), for the latest revision of his paper How Hard Is It to Approximate the Jones Polynomial?. Moreover, he actually wrote to me in June to tell me about this! At the time, however, I regarded it as "pointless mathematical hairsplitting" (who cares about these low-level gate-set issues anyway?). So I didn't pay it any attention—and then I'd completely forgotten about Greg's work when the question resurfaced a few months later. This is truly a just punishment for looking down on "mathematical hairsplitting," and not a lesson I'll soon forget.
Anyway, Greg's paper provides yet a third proof that the algebraic numbers are tame, this one using Galois conjugates (though it turns out that, from a sufficiently refined perspective, Greg's proof is equivalent to the other two).
There remains one obvious open problem here, one that I noted in the MathOverflow post and in which Greg is also extremely interested. Namely, we now know that it's possible to screw up PostBQP using gates with amplitudes that are crazy transcendental numbers (closely related to the Liouville numbers). And we also know that, if the gates have algebraic amplitudes, then everything is fine: all events have at least 1/exp(n) probability. But what if the gates have not-so-crazy transcendental amplitudes, like 1/e, or (a bit more realistically) cos(2)? I conjecture that everything is still fine, but the proof techniques that worked for the algebraic case seem useless here.
Stepping back, how great are the consequences of all this for our understanding of PostBQP? Fortunately, I claim that they're not that great, for the following reason. As Adleman, DeMarrais, and Huang already noted in 1997—in the same paper that proved BQP⊆PP—we can screw up the definition even of BQP, let alone PostBQP, using a bizarre enough gate set. For example, suppose we had a gate G that mapped |0> to x|0>+y|1>, where y was a real number whose binary expansion encoded the halting problem (for example, y might equal Chaitin's Ω). Then by applying G more and more times, we could learn more and more bits of y, and thereby solve an uncomputable problem in the limit n→∞.
Faced with this observation, most quantum computing experts would say something like: "OK, but this is silly! It has no physical relevance, since we'll never come across a magical gate like G—if only we did! And at any rate, it has nothing to do with quantum computing specifically: even classically, one could imagine a coin that landed heads with probability equal to Chaitin's Ω. Therefore, the right way to deal with this is simply to define BQP in such a way as to disallow such absurd gates." And indeed, that is what's done today—usually without even remarking on it.
Now, it turns out that even gates that are "perfectly safe" for defining BQP, can turn "unsafe" when it comes to defining PostBQP. To screw up the definition of PostBQP, it's not necessary that a gate involve uncomputable (or extremely hard-to-compute) amplitudes: the amplitudes could all be easily computable, but they could still be "unsafe" because of massive cancellations, as in the example above involving α. But one could think of this as a difference of degree, rather than of kind. It's still true that there's a large set of gates, including virtually all the gates anyone has ever cared about in practice (Toffoli, Hadamard, π/8, etc. etc.), that are perfectly safe for defining the complexity class; it's just that the set is slightly smaller than it was for BQP.
The other issue with the PostBQP=PP paper was discovered by Ryan O'Donnell and Cem Say. In Proposition 3 of the paper, I claim that PostBQP = BQP^{PostBQP}_{||,classical}, where the latter is the class of problems solvable by a BQP machine that's allowed to make poly(n) parallel, classical queries to a PostBQP oracle. As Ryan pointed out to me, nothing in my brief argument for this depended on quantum mechanics, so it would equally well show that PostBPP = BPP^{PostBPP}_{||}, where PostBPP (also known as BPP_path) is the classical analogue of PostBQP, and BPP^{PostBPP}_{||} is the class of problems solvable by a BPP machine that can make poly(n) parallel queries to a PostBPP oracle. But BPP^{PostBPP}_{||} clearly contains BPP^{NP}_{||}, which in turn contains AM—so we would get AM⊆PostBPP, and therefore AM⊆PostBQP=PP. But Vereshchagin gave an oracle relative to which AM is not contained in PP. Since there was no nonrelativizing ingredient anywhere in my argument, the only possible conclusion is that my argument was wrong. (This, incidentally, provides a nice illustration of the value of oracle results.)
In retrospect, it's easy to pinpoint what went wrong. If we try to simulate BPP^{PostBPP}_{||} in PostBPP, our random bits will be playing a dual role: in choosing the queries to be submitted to the PostBPP oracle, and in providing the "raw material for postselection" used to compute the responses to those queries. But in PostBPP, we only get to postselect once. When we do, the two sets of random bits that we'd wanted to keep separate will get hopelessly mixed up, with the postselection acting on the "BPP" random bits, not just on the "PostBPP" ones.
How can we fix this problem? Well, when defining the class BQP^{PostBQP}_{||,classical}, suppose we require the queries to the PostBQP oracle to be not only "classical," but deterministic: that is, they have to be generated in advance by a P machine, and can't depend on any random bits whatsoever. And suppose we define BPP^{PostBPP}_{||,classical} similarly. In that case, it's not hard to see that the equalities BQP^{PostBQP}_{||,classical} = PostBQP and BPP^{PostBPP}_{||,classical} = PostBPP both go through. You don't actually care about this, do you? But Ryan O'Donnell and Cem Say did, and that's good enough for me.
I wish I could say that these are the only cases of mistakes recently being found in decade-old papers of mine, but alas, such is not the case. In the near future, my student Adam Bouland, MIT undergrad Mitchell Lee, and Singapore's Joe Fitzsimons will post to the arXiv a paper that grew out of an error in my 2005 paper Quantum Computing and Hidden Variables. In that paper, I introduced a hypothetical generalization of the quantum computing model, in which one gets to see the entire trajectory of a hidden variable, rather than just a single measurement outcome. I showed that this generalization would let us solve problems somewhat beyond what we think we can do with a "standard" quantum computer. In particular, we could solve the collision problem in O(1) queries, efficiently solve Graph Isomorphism (and all other problems in the Statistical Zero-Knowledge class), and search an N-element list in only ~N^{1/3} steps, rather than the ~N^{1/2} steps of Grover's search algorithm. That part of the paper remains fine!
On the other hand, at the end of the paper, I also gave a brief argument to show that, even in the hidden-variable model, ~N^{1/3} steps are required to search an N-element list. But Mitchell Lee and Adam Bouland discovered that that argument is wrong: it fails to account for all the possible ways that an algorithm could exploit the correlations between the hidden variable's values at different moments in time. (I've previously discussed this error in other blog posts, as well as in the latest edition of Quantum Computing Since Democritus.)
If we suitably restrict the hidden-variable theory, then we can correctly prove a lower bound of ~N^{1/4}, or even (with strong enough assumptions) ~N^{1/3}; and we do that in the forthcoming paper. Even with no restrictions, as far as we know an ~N^{1/3} lower bound for search with hidden variables remains true. But it now looks like proving it will require a major advance in our understanding of hidden-variable theories: for example, a proof that the "Schrödinger theory" is robust to small perturbations, which I'd given as the main open problem in my 2005 paper.
As if that weren't enough, in my 2003 paper Quantum Certificate Complexity, I claimed (as a side remark) that one could get a recursive Boolean function f with an asymptotic gap between the block sensitivity bs(f) and the randomized certificate complexity RC(f). However, two and a half years ago, Avishay Tal discovered that this didn't work, because block sensitivity doesn't behave nicely under composition. (In assuming it did, I was propagating an error introduced earlier by Wegener and Zádori.) More broadly, Avishay showed that there is no recursively-defined Boolean function with an asymptotic gap between bs(f) and RC(f). On the other hand, if we just want some Boolean function with an asymptotic gap between bs(f) and RC(f), then Raghav Kulkarni observed that we can use a non-recursive function introduced by Xiaoming Sun, which yields bs(f)≈N^{3/7} and RC(f)≈N^{4/7}. This is actually a larger separation than the one I'd wrongly claimed.
Now that I've come clean about all these things, hopefully the healing can begin at last.
Posted in Announcements, Complexity, Embarrassing Myself, Quantum | 70 Comments »
Lens of Computation on the Sciences
This weekend, the Institute for Advanced Study in Princeton hosted a workshop on the "Lens of Computation in the Sciences," which was organized by Avi Wigderson, and was meant to showcase theoretical computer science's imperialistic ambitions to transform every other field. I was proud to speak at the workshop, representing CS theory's designs on physics. But videos of all four of the talks are now available, and all are worth checking out:
Computational Phenomena in Biology, by Leslie Valiant
Computational Phenomena in Economics, by Tim Roughgarden
Computational Phenomena in Social Science, by Jon Kleinberg
Computational Phenomena in Physics, by me
Unfortunately, the videos were slow to buffer when I last tried them. While you're waiting, you could also check out my PowerPoint slides, though they overlap considerably with my previous talks. (As always, if you can't read PowerPoint, then go ask another reader of this blog to convert the file into a format you like.)
Thanks so much to Avi, and everyone else at IAS, for organizing an awesome workshop!
Posted in Adventures in Meatspace, Complexity, CS/Physics Deathmatch, Quantum | 60 Comments »
Kuperberg's parable
Sunday, November 23rd, 2014
Recently, longtime friend-of-the-blog Greg Kuperberg wrote a Facebook post that, with Greg's kind permission, I'm sharing here.
A parable about pseudo-skepticism in response to climate science, and science in general.
Doctor: You ought to stop smoking, among other reasons because smoking causes lung cancer.
Patient: Are you sure? I like to smoke. It also creates jobs.
D: Yes, the science is settled.
P: All right, if the science is settled, can you tell me when I will get lung cancer if I continue to smoke?
D: No, of course not, it's not that precise.
P: Okay, how many cigarettes can I safely smoke?
D: I can't tell you that, although I wouldn't recommend smoking at all.
P: Do you know that I will get lung cancer at all no matter how much I smoke?
D: No, it's a statistical risk. But smoking also causes heart disease.
P: I certainly know smokers with heart disease, but I also know non-smokers with heart disease. Even if I do get heart disease, would you really know that it's because I smoke?
D: No, not necessarily; it's a statistical effect.
P: If it's statistical, then you do know that correlation is not causation, right?
D: Yes, but you can also see the direct effect of smoking on lungs of smokers in autopsies.
P: Some of whom lived a long time, you already admitted.
D: Yes, but there is a lot of research to back this up.
P: Look, I'm not a research scientist, I'm interested in my case. You have an extended medical record for me with X-rays, CAT scans, blood tests, you name it. You can gather more data about me if you like. Yet you're hedging everything you have to say.
D: Of course, there's always more to learn about the human body. But it's a settled recommendation that smoking is bad for you.
P: It sounds like the science is anything but settled. I'm not interested in hypothetical recommendations. Why don't you get back to me when you actually know what you're talking about. In the meantime, I will continue to smoke, because as I said, I enjoy it. And by the way, since you're so concerned about my health, I believe in healthy skepticism.
Posted in Procrastination, Rage Against Doofosity, The Fate of Humanity | 168 Comments »
What does the NSA think of academic cryptographers? Recently-declassified document provides clues
Brighten Godfrey was one of my officemates when we were grad students at Berkeley. He's now a highly-successful computer networking professor at the University of Illinois Urbana-Champaign, where he studies the wonderful question of how we could get the latency of the Internet down to the physical limit imposed by the finiteness of the speed of light. (Right now, we're away from that limit by a factor of about 50.)
Last week, Brighten brought to my attention a remarkable document: a 1994 issue of CryptoLog, an NSA internal newsletter, which was recently declassified with a few redactions. The most interesting thing in the newsletter is a trip report (pages 12-19 in the newsletter, 15-22 in the PDF file) by an unnamed NSA cryptographer, who attended the 1992 EuroCrypt conference, and who details his opinions on just about every talk. If you're interested in crypto, you really need to read this thing all the way through, but here's a small sampling of the zingers:
Three of the last four sessions were of no value whatever, and indeed there was almost nothing at Eurocrypt to interest us (this is good news!). The scholarship was actually extremely good; it's just that the directions which external cryptologic researchers have taken are remarkably far from our own lines of interest.
There were no proposals of cryptosystems, no novel cryptanalysis of old designs, even very little on hardware design. I really don't see how things could have been any better for our purposes. We can hope that the absentee cryptologists stayed away because they had no new ideas, or even that they've taken an interest in other areas of research.
Alfredo DeSantis … spoke on "Graph decompositions and secret-sharing schemes," a silly topic which brings joy to combinatorists and yawns to everyone else.
Perhaps it is beneficial to be attacked, for you can easily augment your publication list by offering a modification.
This result has no cryptanalytic application, but it serves to answer a question which someone with nothing else to think about might have asked.
I think I have hammered home my point often enough that I shall regard it as proved (by emphatic enunciation): the tendency at IACR meetings is for academic scientists (mathematicians, computer scientists, engineers, and philosophers masquerading as theoretical computer scientists) to present commendable research papers (in their own areas) which might affect cryptology at some future time or (more likely) in some other world. Naturally this is not anathema to us.
The next four sessions were given over to philosophical matters. Complexity theorists are quite happy to define concepts and then to discuss them even though they have no examples of them.
Don Beaver (Penn State), in another era, would have been a spellbinding charismatic preacher; young, dashing (he still wears a pony-tail), self-confident and glib, he has captured from Silvio Micali the leadership of the philosophic wing of the U.S. East Coast cryptanalytic community.
Those of you who know my prejudice against the "zero-knowledge" wing of the philosophical camp will be surprised to hear that I enjoyed the three talks of the session better than any of that ilk that I had previously endured. The reason is simple: I took along some interesting reading material and ignored the speakers. That technique served to advantage again for three more snoozers, Thursday's "digital signature and electronic cash" session, but the final session, also on complexity theory, provided some sensible listening.
But it is refreshing to find a complexity theory talk which actually addresses an important problem!
The other two talks again avoided anything of substance. [The authors of one paper] thought it worthwhile, in dealing [with] the general discrete logarithm problem, to prove that the problem is contained in the complexity classes NP and co-AM, but is unlikely to be in co-NP.
And Ueli Maurer, again dazzling us with his brilliance, felt compelled, in "Factoring with an Oracle" to arm himself with an Oracle (essentially an Omniscient Being that complexity theorists like to turn to when they can't solve a problem) while factoring. He's calculating the time it would take him (and his Friend) to factor, and would like also to demonstrate his independence by consulting his Partner as seldom as possible. The next time you find yourself similarly equipped, you will perhaps want to refer to his paper.
The conference again offered an interesting view into the thought processes of the world's leading "cryptologists." It is indeed remarkable how far the Agency has strayed from the True Path.
Of course, it would be wise not to read too much into this: it's not some official NSA policy statement, but the griping of a single, opinionated individual somewhere within the NSA, who was probably bored and trying to amuse his colleagues. All the same, it's a fascinating document, not only for its zingers about people who are still very much active on the cryptographic scene, but also for its candid insights into what the NSA cares about and why, and for its look into the subculture within cryptography that would lead, years later, to Neal Koblitz's widely-discussed anti-provable-security manifestos.
Reading this document drove home for me that the "provable security wars" are a very simple matter of the collision of two communities with different intellectual goals, not of one being right and the other being wrong. Here's a fun exercise: try reading this trip report while remembering that, in the 1980s—i.e., the decade immediately preceding the maligned EuroCrypt conference—the "philosophic wing" of cryptography that the writer lampoons actually succeeded in introducing revolutionary concepts (interactive proofs, zero-knowledge, cryptographic pseudorandomness, etc.) that transformed the field, concepts that have now been recognized with no fewer than three Turing Awards (to Yao, Goldwasser, and Micali). On the other hand, it's undoubtedly true that this progress was of no immediate interest to the NSA. On the third hand, the "philosophers" might reply that helping the NSA wasn't their goal. The best interests of the NSA don't necessarily coincide with the best interests of scientific advancement (not to mention the best interests of humanity—but that's a separate debate).
Posted in Complexity, Nerd Interest | 45 Comments »
Der Quantencomputer
Those of you who read German (I don't) might enjoy a joint interview of me and Seth Lloyd about quantum computing, which was conducted in Seth's office by the journalist Christian Meier, and published in the Swiss newspaper Neue Zürcher Zeitung. Even if you don't read German, you can just feed the interview into Google Translate, like I did. While the interview covers ground that will be forehead-bangingly familiar to regular readers of this blog, I'm happy with how it turned out; even the slightly-garbled Google Translate output is much better than most quantum computing articles in the English-language press. (And while Christian hoped to provoke spirited debate between me and Seth by interviewing us together, we surprised ourselves by finding very little that we actually disagreed about.) I noticed only one error, when I'm quoted talking about "the discovery of the transistor in the 1960s." I might have said something about the widespread commercialization of transistors (and integrated circuits) in the 1960s, but I know full well that the transistor was invented at Bell Labs in 1947.
Posted in Complexity, Quantum, Speaking Truth to Parallelism | 64 Comments »
Interstellar's dangling wormholes
Update (Nov. 15): A third of my confusions addressed by reading Kip Thorne's book! Details at the bottom of this post.
On Saturday Dana and I saw Interstellar, the sci-fi blockbuster co-produced by the famous theoretical physicist Kip Thorne (who told me about his work on this movie when I met him eight years ago). We had the rare privilege of seeing the movie on the same day that we got to hang out with a real astronaut, Dan Barry, who flew three shuttle missions and did four spacewalks in the 1990s. (As the end result of a project that Dan's roboticist daughter, Jenny Barry, did for my graduate course on quantum complexity theory, I'm now the coauthor with both Barrys on a paper in Physical Review A, about uncomputability in quantum partially-observable Markov decision processes.)
Before talking about the movie, let me say a little about the astronaut. Besides being an inspirational example of someone who's achieved more dreams in life than most of us—seeing the curvature of the earth while floating in orbit around it, appearing on Survivor, and publishing a Phys. Rev. A paper—Dan is also a passionate advocate of humanity's colonizing other worlds. When I asked him whether there was any future for humans in space, he answered firmly that the only future for humans was in space, and then proceeded to tell me about the technical viability of getting humans to Mars with limited radiation exposure, the abundant water there, the romantic appeal that would inspire people to sign up for the one-way trip, and the extinction risk for any species confined to a single planet. Hearing all this from someone who'd actually been to space gave Interstellar, with its theme of humans needing to leave Earth to survive (and its subsidiary theme of the death of NASA's manned space program meaning the death of humanity), a special vividness for me. Granted, I remain skeptical about several points: the feasibility of a human colony on Mars in the foreseeable future (a self-sufficient human colony on Antarctica, or under the ocean, strikes me as plenty hard enough for the next few centuries); whether a space colony, even if feasible, cracks the list of the top twenty things we ought to be doing to mitigate the risk of human extinction; and whether there's anything more to be learned, at this point in history, by sending humans to space that couldn't be learned a hundred times more cheaply by sending robots. On the other hand, if there is a case for continuing to send humans to space, then I'd say it's certainly the case that Dan Barry makes.
OK, but enough about the real-life space traveler: what did I think about the movie? Interstellar is a work of staggering ambition, grappling with some of the grandest themes of which sci-fi is capable: the deterioration of the earth's climate; the future of life in the universe; the emotional consequences of extreme relativistic time dilation; whether "our" survival would be ensured by hatching human embryos in a faraway world, while sacrificing almost all the humans currently alive; to what extent humans can place the good of the species above family and self; the malleability of space and time; the paradoxes of time travel. It's also an imperfect movie, one with many "dangling wormholes" and unbalanced parentheses that are still generating compile-time errors in my brain. And it's full of stilted dialogue that made me giggle—particularly when the characters discussed jumping into a black hole to retrieve its "quantum data." Also, despite Kip Thorne's involvement, I didn't find the movie's science spectacularly plausible or coherent (more about that below). On the other hand, if you just wanted a movie that scrupulously obeyed the laws of physics, rather than intelligently probing their implications and limits, you could watch any romantic comedy. So sure, Interstellar might make you cringe, but if you like science fiction at all, then it will also make you ponder, stare awestruck, and argue with friends for days afterward—and enough of the latter to make it more than worth your while. Just one tip: if you're prone to headaches, do not sit near the front of the theater, especially if you're seeing it in IMAX.
For other science bloggers' takes, see John Preskill (who was at a meeting with Steven Spielberg to brainstorm the movie in 2006), Sean Carroll, Clifford Johnson, and Peter Woit.
In the rest of this post, I'm going to list the questions about Interstellar that I still don't understand the answers to (yes, the ones still not answered by the Interstellar FAQ). No doubt some of these are answered by Thorne's book The Science of Interstellar, which I've ordered (it hasn't arrived yet), but since my confusions are more about plot than science, I'm guessing that others are not.
SPOILER ALERT: My questions give away basically the entire plot—so if you're planning to see the movie, please don't read any further. After you've seen it, though, come back and see if you can help with any of my questions.
1. What's causing the blight, and the poisoning of the earth's atmosphere? The movie is never clear about this. Is it a freak occurrence, or is it human-caused climate change? If the latter, then wouldn't it be worth some effort to try to reverse the damage and salvage the earth, rather than escaping through a wormhole to another galaxy?
2. What's with the drone? Who sent it? Why are Cooper and Murph able to control it with their laptop? Most important of all, what does it have to do with the rest of the movie?
3. If NASA wanted Cooper that badly—if he was the best pilot they'd ever had and NASA knew it—then why couldn't they just call him up? Why did they have to wait for beings from the fifth dimension to send a coded message to his daughter revealing their coordinates? Once he did show up, did they just kind of decide opportunistically that it would be a good idea to recruit him?
4. What was with Cooper's crash in his previous NASA career? If he was their best pilot, how and why did the crash happen? If this was such a defining, traumatic incident in his life, why is it never brought up for the rest of the movie?
5. How is NASA funded in this dystopian future? If official ideology holds that the Apollo missions were faked, and that growing crops is the only thing that matters, then why have the craven politicians been secretly funneling what must be trillions of dollars to a shadow-NASA, over a period of fifty years?
6. Why couldn't NASA have reconnoitered the planets using robots—especially since this is a future where very impressive robots exist? Yes, yes, I know, Matt Damon explains in the movie that humans remain more versatile than robots, because of their "survival instinct." But the crew arrives at the planets missing extremely basic information about them, like whether they're inhospitable to human life because of freezing temperatures or mile-high tidal waves. This is information that robotic probes, even of the sort we have today, could have easily provided.
7. Why are the people who scouted out the 12 planets so limited in the data they can send back? If they can send anything, then why not data that would make Cooper's mission completely redundant (excepting, of course, the case of the lying Dr. Mann)? Does the wormhole limit their transmissions to 1 bit per decade or something?
8. Rather than wasting precious decades waiting for Cooper's mission to return, while (presumably) billions of people die of starvation on a fading earth, wouldn't it make more sense for NASA to start colonizing the planets now? They could simply start trial colonies on all the planets, even if they think most of the colonies will fail. Yes, this plan involves sacrificing individuals for the greater good of humanity, but NASA is already doing that anyway, with its slower, riskier, stupider reconnaissance plan. The point becomes even stronger when we remember that, in Professor Brand's mind, the only feasible plan is "Plan B" (the one involving the frozen human embryos). Frozen embryos are (relatively) cheap: why not just spray them all over the place? And why wait for "Plan A" to fail before starting that?
9. The movie involves a planet, Miller, that's so close to the black hole Gargantua that every hour spent there corresponds to seven years on earth. There was an amusing exchange on Slate, where Phil Plait made the commonsense point that a planet that deep in a black hole's gravity well would presumably get ripped apart by tidal forces. Plait later had to issue an apology, since, in conceiving this movie, Kip Thorne had made sure that Gargantua was a rapidly rotating black hole—and it turns out that the physics of rotating black holes is sufficiently different from that of non-rotating ones to allow such a planet in principle. Alas, this clever explanation still leaves me unsatisfied. Physicists, please help: even if such a planet existed, wouldn't safely landing a spacecraft on it, and getting it out again, require a staggering amount of energy—well beyond what the humans shown in the movie can produce? (If they could produce that much acceleration and deceleration, then why couldn't they have traveled from Earth to Saturn in days rather than years?) If one could land on Miller and then get off of it using the relatively conventional spacecraft shown in the movie, then the amusing thought suggests itself that one could get factor-of-60,000 computational speedups, "free of charge," by simply leaving one's computer in space while one spent some time on the planet. (And indeed, something like that happens in the movie: after Cooper and Anne Hathaway return from Miller, Romilly—the character who stayed behind—has had 23 years to think about physics.)
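A quick sanity check on that factor, using nothing but the movie's stated ratio (the arithmetic here is mine, not the film's): one hour on Miller per seven Earth-years works out to a dilation factor of about 61,000, which is where the "factor-of-60,000" speedup comes from.

```python
# Time-dilation factor on Miller's planet: 1 hour there = 7 years on Earth.
HOURS_PER_YEAR = 365.25 * 24  # Julian year in hours

def dilation_factor(planet_hours: float, earth_years: float) -> float:
    """Ratio of Earth time elapsed to planet time elapsed."""
    return (earth_years * HOURS_PER_YEAR) / planet_hours

print(round(dilation_factor(planet_hours=1, earth_years=7)))  # 61362
```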
10. Why does Cooper decide to go into the black hole? Surely he could jettison enough weight to escape the black hole's gravity by sending his capsule into the hole, while he himself shared Anne Hathaway's capsule?
11. Speaking of which, does Cooper go into the black hole? I.e., is the "tesseract" something he encounters before or after he crosses the event horizon? (Or maybe it should be thought of as at the event horizon—like a friendlier version of the AMPS firewall?)
12. Why is Cooper able to send messages back in time—but only by jostling books around, moving the hands of a watch, and creating patterns of dust in one particular room of one particular house? (Does this have something to do with love and gravity being the only two forces in the universe that transcend space and time?)
13. Why does Cooper desperately send the message "STAY" to his former self? By this point in the movie, isn't it clear that staying on Earth means the death of all humans, including Murph? If Cooper thought that a message could get through at all, then why not a message like: "go, and go directly to Edmunds' planet, since that's the best one"? Also, given that Cooper now exists outside of time, why does he feel such desperate urgency? Doesn't he get, like, infinitely many chances?
14. Why is Cooper only able to send "quantum data" that saves the world to the older Murph—the one who lives when (presumably) billions of people are already dying of starvation? Why can't he send the "quantum data" back to the 10-year-old Murph, for example? Even if she can't yet understand it, surely she could hand it over to Professor Brand. And even if this plan would be unlikely to succeed: again, Cooper now exists outside of time. So can't he just keep going back to the 10-year-old Murph, rattling those books over and over until the message gets through?
15. What exactly is the "quantum data" needed for, anyway? I gather it has something to do with building a propulsion system that can get the entire human population out of the earth's gravity well at a reasonable cost? (Incidentally, what about all the animals? If the writers of the Old Testament noticed that issue, surely the writers of Interstellar could.)
16. How does Cooper ever make it out of the black hole? (Maybe it was explained and I missed it: once he entered the black hole, things got extremely confusing.) Do the fifth-dimensional beings create a new copy of Cooper outside the black hole? Do they postselect on a branch of the wavefunction where he never entered the black hole in the first place? Does Murph use the "quantum data" to get him out?
17. At his tearful reunion with the elderly Murph, why is Cooper totally uninterested in meeting his grandchildren and great-grandchildren, who are in the same room? And why are they uninterested in meeting him? I mean, seeing Murph again has been Cooper's overriding motivation during his journey across the universe, and has repeatedly been weighed against the survival of the entire human race, including Murph herself. But seeing Murph's kids—his grandkids—isn't even worth five minutes?
18. Speaking of which, when did Murph ever find time to get married and have kids? Since she's such a major character, why don't we learn anything about this?
19. Also, why is Murph an old woman by the time Cooper gets back? Yes, Cooper lost a few decades because of the time dilation on Miller's planet. I guess he lost the additional decades while entering and leaving Gargantua? If the five-dimensional beings were able to use their time-travel / causality-warping powers to get Cooper out of the black hole, couldn't they have re-synced his clock with Murph's while they were at it?
20. Why does Cooper need to steal a spaceship to get to Anne Hathaway's planet? Isn't Murph, like, the one in charge? Can't she order that a spaceship be provided for Cooper?
21. Astute readers will note that I haven't yet said anything about the movie's central paradox, the one that dwarfs all the others. Namely, if humans were going to go extinct without a "wormhole assist" from the humans of the far future, then how were there any humans in the far future to provide the wormhole assist? And conversely, if the humans of the far future find themselves already existing, then why do they go to the trouble to put the wormhole in their past (which now seems superfluous, except maybe for tidying up the story of their own origins)? The reason I didn't ask about this is that I realize it's supposed to be paradoxical; we're supposed to feel vertigo thinking about it. (And also, it's not entirely unrelated to how PSPACE-complete problems get solved with polynomial resources, in my and John Watrous's paper on computation with closed timelike curves.) My problem is a different one: if the fifth-dimensional, far-future humans have the power to mold their own past to make sure everything turned out OK, then what they actually do seems pathetic compared to what they could do. For example, why don't they send a coded message to the 21st-century humans (similar to the coded messages that Cooper sends to Murph), telling them how to avoid the blight that destroys their crops? Or just telling them that Edmunds' planet is the right one to colonize? Like the God of theodicy arguments, do the future humans want to use their superpowers only to give us a little boost here and there, while still leaving us a character-forming struggle? Even if this reticence means that billions of innocent people—ones who had nothing to do with the character-forming struggle—will die horrible deaths? If so, then I don't understand these supposedly transcendently-evolved humans any better than I understand the theodical God.
Anyway, rather than ending on that note of cosmic pessimism, I guess I could rejoice that we're living through what must be the single biggest month in the history of nerd cinema—what with a sci-fi film co-produced by a great theoretical physicist, a Stephen Hawking biopic, and the Alan Turing movie coming out in a few weeks. I haven't yet seen the latter two. But it looks like the time might be ripe to pitch my own decades-old film ideas, like "Radical: The Story of Évariste Galois."
Update (Nov. 15): I just finished reading Kip Thorne's interesting book The Science of Interstellar. I'd say that it addresses (doesn't always clear up, but at least addresses) 7 of my 21 confusions: 1, 4, 9, 10, 11, 15, and 19. Briefly:
1. Thorne correctly notes that the movie is vague about what's causing the blight and the change to the earth's atmosphere, but he discusses a bunch of possibilities, which are more in the "freak disaster" than the "manmade" category.
4. Cooper's crash was supposed to have been caused by a gravitational anomaly, as the bulk beings of the far future were figuring out how to communicate with 21st-century humans. It was another foreshadowing of those bulk beings.
9. Thorne notices the problem of the astronomical amount of energy needed to safely land on Miller's planet and then get off of it—given that this planet is deep inside the gravity well of the black hole Gargantua, and orbiting Gargantua at a large fraction of the speed of light. Thorne offers a solution that can only be called creative: namely, while nothing about this was said in the movie (since Christopher Nolan thought it would confuse people), it turns out that the crew accelerated to relativistic speed and then decelerated using a gravitational slingshot around a second, intermediate-mass black hole, which just happened to be in the vicinity of Gargantua at precisely the right times for this. Thorne again appeals to slingshots around unmentioned but strategically-placed intermediate-mass black holes several more times in the book, to explain other implausible accelerations and decelerations that I hadn't even noticed.
10. Thorne acknowledges that Cooper didn't really need to jump into Gargantua in order to jettison the mass of his body (which is trivial compared to the mass of the spacecraft). Cooper's real reason for jumping, he says, was the desperate hope that he could somehow find the quantum data there needed to save the humans on Earth, and then somehow get it out of the black hole and back to the humans. (This being a movie, it of course turns out that Cooper was right.)
11. Yes, Cooper encounters the tesseract while inside the black hole. Indeed, he hits it while flying into a singularity that's behind the event horizon, but that isn't the black hole's "main" singularity—it's a different, milder singularity.
15. While this wasn't made clear in the movie, the purpose of the quantum data was indeed to learn how to manipulate the gravitational anomalies in order to decrease Newton's constant G in the vicinity of the earth—destroying the earth but also allowing all the humans to escape its gravity with the rocket fuel that's available. (Again, nothing said about the poor animals.)
19. Yes, Cooper lost the additional decades while entering Gargantua. (Furthermore, while Thorne doesn't discuss this, I guess he must have lost them only when he was still with Anne Hathaway, not after he separates from her. For otherwise, Anne Hathaway would also be an old woman by the time Cooper reaches her on Edmunds' planet, contrary to what's shown in the movie.)
Posted in Adventures in Meatspace, Nerd Interest, Procrastination, The Fate of Humanity
You are currently browsing the Shtetl-Optimized weblog archives for November, 2014.
From the standpoint of absorption, the drinking of tobacco juice and the interaction of the infusion or concoction with the small intestine is a highly effective method of gastrointestinal nicotine administration. The epithelial area of the intestines is incomparably larger than the mucosa of the upper tract including the stomach, and the small intestine represents the area with the greatest capacity for absorption (Levine 1983:81-83). As practiced by most of the sixty-four tribes documented here, intoxicated states are achieved by drinking tobacco juice through the mouth and/or nose…The large intestine, although functionally little equipped for absorption, nevertheless absorbs nicotine that may have passed through the small intestine.
The evidence? Found helpful in reducing bodily twitching in myoclonus epilepsy, a rare disorder, but otherwise little studied. Mixed evidence from a study published in 1991 suggests it may improve memory in subjects with cognitive impairment. A meta-analysis published in 2010 that reviewed studies of piracetam and other racetam drugs found that piracetam was somewhat helpful in improving cognition in people who had suffered a stroke or brain injury; the drugs' effectiveness in treating depression and reducing anxiety was more significant.
The chemicals he takes, dubbed nootropics from the Greek "noos" for "mind", are intended to safely improve cognitive functioning. They must not be harmful, have significant side-effects or be addictive. That means well-known "smart drugs" such as the prescription-only stimulants Adderall and Ritalin, popular with swotting university students, are out. What's left under the nootropic umbrella is a dizzying array of over-the-counter supplements, prescription drugs and unclassified research chemicals, some of which are being trialled in older people with fading cognition.
An expert in legal and ethical issues surrounding health care technology, Associate Professor Eric Swirsky suggested that both groups have valid arguments, but that neither group is asking the right questions. Prof Swirsky is the clinical associate professor of biomedical and health information sciences in the UIC College of Applied Health Sciences.
The smart pill industry has popularized many herbal nootropics. Most of them first appeared in Ayurveda and traditional Chinese medicine. Ayurveda is a branch of natural medicine originating from India. It focuses on using herbs as remedies for improving quality of life and healing ailments. Evidence suggests our ancestors were on to something with this natural approach.
If you're suffering from blurred or distorted vision or you've noticed a sudden and unexplained decline in the clarity of your vision, do not try to self-medicate. It is one thing to promote better eyesight from an existing and long-held baseline, but if you are noticing problems with your eyes, then you should see an optician and a doctor to rule out underlying medical conditions.
Nootropics, also known as 'brain boosters,' 'brain supplements' or 'cognitive enhancers' are made up of a variety of artificial and natural compounds. These compounds help in enhancing the cognitive activities of the brain by regulating or altering the production of neurochemicals and neurotransmitters in the brain. It improves blood flow, stimulates neurogenesis (the process by which neurons are produced in the body by neural stem cells), enhances nerve growth rate, modifies synapses, and improves cell membrane fluidity. Thus, positive changes are created within your body, which helps you to function optimally irrespective of your current lifestyle and individual needs.
In this large population-based cohort, we saw consistent robust associations between cola consumption and low BMD in women. The consistency of pattern across cola types and after adjustment for potential confounding variables, including calcium intake, supports the likelihood that this is not due to displacement of milk or other healthy beverages in the diet. The major differences between cola and other carbonated beverages are caffeine, phosphoric acid, and cola extract. Although caffeine likely contributes to lower BMD, the result also observed for decaffeinated cola, the lack of difference in total caffeine intake across cola intake groups, and the lack of attenuation after adjustment for caffeine content suggest that caffeine does not explain these results. A deleterious effect of phosphoric acid has been proposed (26). Cola beverages contain phosphoric acid, whereas other carbonated soft drinks (with some exceptions) do not.
This would be a very time-consuming experiment. Any attempt to combine this with other experiments by ANOVA would probably push the end-date out by months, and one would start to be seriously concerned that changes caused by aging or environmental factors would contaminate the results. A 5-year experiment with 7-month intervals will probably eat up 5+ hours to prepare <12,000 pills (active & placebo); each switch and test of mental functioning will probably eat up another hour for 32 hours. (And what test maintains validity with no practice effects over 5 years? Dual n-back would be unusable because of improvements to WM over that period.) Add in an hour for analysis & writeup, that suggests >38 hours of work, and 38 × $7.25 = $275.50. 12,000 pills is roughly $12.80 per thousand or $154; 120 potassium iodide pills is ~$9, so (365.25 / 120) × $9 × 5 = $137.
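The arithmetic above can be reproduced mechanically; every input figure below is quoted from the preceding paragraph, and the $7.25 hourly rate is the valuation implied there:

```python
# Reproducing the time and dollar estimates for the 5-year self-experiment.
prep_hours = 5        # capping the active & placebo pills
switch_hours = 32     # one hour per switch/test of mental functioning
writeup_hours = 1     # analysis & writeup

hourly_rate = 7.25    # $/hour valuation of the experimenter's time
time_cost = (prep_hours + switch_hours + writeup_hours) * hourly_rate

pill_cost = 154       # 12,000 pills at roughly $12.80 per thousand
bottle_cost = 365.25 / 120 * 9 * 5  # $9 per 120-pill bottle, daily, 5 years

print(time_cost)           # 275.5
print(round(bottle_cost))  # 137
```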
After I ran out of creatine, I noticed the increased difficulty, and resolved to buy it again at some point; many months later, there was a Smart Powders sale so bought it in my batch order, $12 for 1000g. As before, it made Taekwondo classes a bit easier. I paid closer attention this second time around and noticed that as one would expect, it only helped with muscular fatigue and did nothing for my aerobic issues. (I hate aerobic exercise, so it's always been a weak point.) I eventually capped it as part of a sulbutiamine-DMAE-creatine-theanine mix. This ran out 1 May 2013. In March 2014, I spent $19 for 1kg of micronized creatine monohydrate to resume creatine use and also to use it as a placebo in a honey-sleep experiment testing Seth Roberts's claim that a few grams of honey before bedtime would improve sleep quality: my usual flour placebo being unusable because the mechanism might be through simple sugars, which flour would digest into. (I did not do the experiment: it was going to be a fair amount of messy work capping the honey and creatine, and I didn't believe Roberts's claims for a second - my only reason to do it would be to prove the claim wrong but he'd just ignore me and no one else cares.) I didn't try measuring out exact doses but just put a spoonful in my tea each morning (creatine is tasteless). The 1kg lasted from 25 March to 18 September or 178 days, so ~5.6g & $0.11 per day.
Another classic approach to the assessment of working memory is the span task, in which a series of items is presented to the subject for repetition, transcription, or recognition. The longest series that can be reproduced accurately is called the forward span and is a measure of working memory capacity. The ability to reproduce the series in reverse order is tested in backward span tasks and is a more stringent test of working memory capacity and perhaps other working memory functions as well. The digit span task from the Wechsler (1981) IQ test was used in four studies of stimulant effects on working memory. One study showed that d-AMP increased digit span (de Wit et al., 2002), and three found no effects of d-AMP or MPH (Oken, Kishiyama, & Salinsky, 1995; Schmedtje, Oman, Letz, & Baker, 1988; Silber, Croft, Papafotiou, & Stough, 2006). A spatial span task, in which subjects must retain and reproduce the order in which boxes in a scattered spatial arrangement change color, was used by Elliott et al. (1997) to assess the effects of MPH on working memory. For subjects in the group receiving placebo first, MPH increased spatial span. However, for the subjects who received MPH first, there was a nonsignificant opposite trend. The group difference in drug effect is not easily explained. The authors noted that the subjects in the first group performed at an overall lower level, and so, this may be another manifestation of the trend for a larger enhancement effect for less able subjects.
First off, overwhelming evidence suggests that smart drugs actually work. A meta-analysis by researchers at Harvard Medical School and Oxford showed that Modafinil has significant cognitive benefits for those who do not suffer from sleep deprivation. The drug improves their ability to plan and make decisions and has a positive effect on learning and creativity. Another study, by researchers at Imperial College London, showed that Modafinil helped sleep-deprived surgeons become better at planning, redirecting their attention, and being less impulsive when making decisions.
The Defense Department reports rely on data collected by the private real estate firms that operate base housing in partnership with military branches. The companies' compensation is partly determined by the results of resident satisfaction surveys. I had to re-read this sentence like 5 times to make sure I understood it correctly. I just can't even. Seriously, in what universe did anyone think that this would be a good idea?
Vinh Ngo, a San Francisco family practice doctor who specializes in hormone therapy, has become familiar with piracetam and other nootropics through a changing patient base. His office is located in the heart of the city's tech boom and he is increasingly sought out by young, male tech workers who tell him they are interested in cognitive enhancement.
(We already saw that too much iodine could poison both adults and children, and of course too little does not help much - iodine would seem to follow a U-curve like most supplements.) The listed doses at iherb.com often are ridiculously large: 10-50mg! These are doses that seems to actually be dangerous for long-term consumption, and I believe these are doses that are designed to completely suffocate the thyroid gland and prevent it from absorbing any more iodine - which is useful as a short-term radioactive fallout prophylactic, but quite useless from a supplementation standpoint. Fortunately, there are available doses at Fitzgerald 2012's exact dose, which is roughly the daily RDA: 0.15mg. Even the contrarian materials seem to focus on a modest doubling or tripling of the existing RDA, so the range seems relatively narrow. I'm fairly confident I won't overshoot if I go with 0.15-1mg, so let's call this 90%.
Another interpretation of the mixed results in the literature is that, in some cases at least, individual differences in response to stimulants have led to null results when some participants in the sample are in fact enhanced and others are not. This possibility is not inconsistent with the previously mentioned ones; both could be at work. Evidence has already been reviewed that ability level, personality, and COMT genotype modulate the effect of stimulants, although most studies in the literature have not broken their samples down along these dimensions. There may well be other as-yet-unexamined individual characteristics that determine drug response. The equivocal nature of the current literature may reflect a mixture of substantial cognitive-enhancement effects for some individuals, diluted by null effects or even counteracted by impairment in others.
Caveats aside, if you do want to try a nootropic, consider starting with something simple and pretty much risk-free, like aromatherapy with lemon essential oil or frankincense, which can help activate your brain, Barbour says. You could also sip on "golden milk," a sweet and anti-inflammatory beverage made with turmeric, or rosemary-infused water, she adds.
Ashwagandha has been shown to improve cognition and motivation, by means of reducing anxiety [46]. It has been shown to significantly reduce stress and anxiety. As measured by cortisol levels, anxiety symptoms were reduced by around 30% compared to a placebo-controlled (double-blind) group [47]. And it may have neuroprotective effects and improve sleep, but these claims are still being researched.
Racetams, such as piracetam, oxiracetam, and aniracetam, which are often marketed as cognitive enhancers and sold over-the-counter. Racetams are often referred to as nootropics, but this property is not well established.[31] The racetams have poorly understood mechanisms, although piracetam and aniracetam are known to act as positive allosteric modulators of AMPA receptors and appear to modulate cholinergic systems.[32]
First look by the Yutu-2 rover at the deep subsurface structure at the lunar farside
Jialong Lai ORCID: orcid.org/0000-0002-2157-27671,2,
Yi Xu ORCID: orcid.org/0000-0001-8894-525X1,
Roberto Bugiolacchi ORCID: orcid.org/0000-0002-1748-63721,3,
Xu Meng1,4,
Long Xiao1,5,
Minggang Xie ORCID: orcid.org/0000-0001-7397-201X1,6,
Bin Liu7,
Kaichang Di7,
Xiaoping Zhang1,
Bin Zhou ORCID: orcid.org/0000-0002-5576-09348,9,
Shaoxiang Shen8,9 &
Luyuan Xu ORCID: orcid.org/0000-0003-4708-74411
Nature Communications volume 11, Article number: 3426 (2020) Cite this article
Rings and moons
The unequal distribution of volcanic products between the Earth-facing lunar side and the farside is the result of a complex thermal history. To help unravel the dichotomy, for the first time a lunar landing mission (Chang'e-4, CE-4) has targeted the Moon's farside, landing on the floor of Von Kármán crater (VK) inside the South Pole-Aitken (SPA) basin. We present the first deep subsurface stratigraphic structure based on data collected by the ground-penetrating radar (GPR) onboard the Yutu-2 rover during the initial nine-month exploration phase. The radargram reveals several strata interfaces beneath the surveying path: buried ejecta is overlaid by at least four layers of distinct lava flows that probably occurred during the Imbrium Epoch, with thicknesses ranging from 12 m up to about 100 m, providing direct evidence of multiple lava-infilling events that occurred within the VK crater. The average loss tangent of mare basalts is estimated at 0.0040–0.0061.
Unraveling the shallow subsurface structure of the lunar mare offers the key to a better understanding of the local history of basaltic volcanism, an important process coupled to the Moon's thermal evolution1. The thickness and surface area of basalt layers can be used to constrain lava eruption volumes. A range of remote-sensing data including the study of impact craters morphology2,3, the analysis of high-resolution gravity data4, and the reflectance spectra of crater ejecta deposits5,6,7,8 have contributed to developing the current model of lunar evolution.
Ground-penetrating radars on the lunar surface and radar sounders onboard orbiting spacecraft have helped to investigate the physical properties of the subsurface materials and their possible stratigraphy. The Apollo Lunar Sounder Experiment, part of the Apollo 17 mission, was the first instrument to detect deep subsurface reflectors corresponding to the interface between mare and bedrock9 at average apparent depths of 1–1.6 km in Mare Serenitatis, Mare Crisium, and Oceanus Procellarum9,10,11. The apparent depth is defined as the propagation depth of a radar signal with the speed of light in the vacuum. Later, the Lunar Radar Sounder onboard the Kaguya spacecraft (SELENE) observed relatively shallow reflectors interpreted as subsurface boundaries between distinct basaltic rock layers in the nearside maria at apparent depths in the range of hundreds of meters12,13,14.
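To make the "apparent depth" definition concrete, here is a minimal sketch (ours, not the mission software; the two-way travel time and permittivity are illustrative values) converting a radar echo's two-way travel time into apparent and physical depths:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def apparent_depth(two_way_time_s: float) -> float:
    """Depth obtained by assuming the signal traveled at c the whole way."""
    return C * two_way_time_s / 2

def physical_depth(two_way_time_s: float, eps_r: float) -> float:
    """In a dielectric the wave slows to c/sqrt(eps_r), shrinking the depth."""
    return apparent_depth(two_way_time_s) / eps_r ** 0.5

# Illustrative reflector at 2 microseconds two-way time in basalt (eps_r ~ 6.5):
t = 2e-6
print(round(apparent_depth(t)))       # ~300 m apparent
print(round(physical_depth(t, 6.5)))  # ~118 m physical
```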
Compared with the spaceborne radar experiment, the lunar-penetrating radar (LPR) onboard Chang'e-3 (CE-3) and Chang'e-4 (CE-4) rover have a much higher range resolution (1–2 m in the mare basalt for the 60 MHz channel), thus offering a unique opportunity to survey in greater detail the shallow subsurface of both the lunar nearside and farside15,16,17. CE-4 landed in the South Pole-Aitken (SPA) Basin, the largest known impact structure on the Moon and a key region ideally suited to address several outstanding geological questions as the impact might have even penetrated the entire lunar crust18,19. The radargrams produced from the data acquired by the CE-4 instrument reveal the basalt layer thickness of each lava eruption and the time sequence of surface modification events that occurred in the Von Kármán (VK) crater (Supplementary Fig. 1). In a broader context, this new information adds to our limited understanding of the igneous history of the SPA Basin, which is thought to have been significantly shorter and less extensive than its equivalent on the nearside1,20. The reason for the asymmetric distribution between the lunar sides is understood to relate either to differences in crustal thickness, to the abundance of radioactive elements, or to the geological consequences of the large SPA-forming impact itself21,22,23,24,25,26,27.
In this work, we report the LPR results for the first 9 months derived from channel one (CH-1, 60 MHz) data and test our interpretations using LPR simulation. A stratigraphic model of the surveying area (landing coordinates based on Lunar Reconnaissance Orbiter terrain data: 177.5885°E, 45.4561°S, −5927 m28,29) was generated from the extracted reflectors profile, which suggests possible lava flows sources and a potentially complex buried topography. The local geological history of the CE-4 landing site is inferred based on the revealed stratigraphy.
Geologic settings
VK crater (171 km) lies within the SPA basin, an impact crater about 2500 km in diameter. The thermal history of the crater and its neighborhood thus should be interpreted within an atypical geological context19,20,21,22,23,24,25. During the Late Heavy Bombardment (LHB30) period, several giant impacts, including Imbrium on the nearside and VK's northern neighbor, crater Leibnitz (245 km in diameter), were produced. Post LHB, the region underwent a relatively prolonged phase of lava infill, which lasted about 200–600 Ma21,31, with the youngest flows estimated between 3.15 and 3.6 Ga19,21. However, there is currently no direct evidence from the volcanic history of VK crater to indicate whether the mare deposits were formed by a single episode of basaltic volcanism (as suggested by the uniform reflectance spectral characteristics) or by multiple lava-infilling events19. LPR can provide first-hand data to disclose the subsurface stratigraphy and constrain the thermal history.
VK's neighboring region is geologically highly complex: the map I-104731 and the inset32 (Supplementary Fig. 1) show a superposition of impact morphologies spanning from the pre-Nectarian to the Copernican epochs. The neighboring impacts produced ejecta materials that punctuated the infill and post-infill phase of the VK crater. The time sequence of these craters is relevant to the interpretation of the stratigraphy at the CE-4 exploration path, which is analyzed in Supplementary Note 1.
Lunar-penetrating radar results
The penetrating depth of LPR CH-1 can reach up to ~330 m (Supplementary Note 2 and Supplementary Fig. 2), although the top section of the radar signals becomes saturated due to the strong coupling effects from the electromagnetic interaction with the metal in the rover. However, channel two (CH-2) of the LPR data, the center frequency of which is 500 MHz and which can penetrate up to ~35 m17, can be employed to complement the profile of the close-to-surface section17 (Fig. 1c). Here we focus on the LPR data analysis between 52 and 328 m.
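As a rough cross-check on that ~330 m figure, the standard low-loss dielectric approximation relates the loss tangent range quoted in the abstract (0.0040–0.0061) to per-meter attenuation at 60 MHz. This sketch is ours, not the paper's inversion method; the permittivity of 6.5 is the value the authors use for depths below 52 m. It yields a few hundredths of a dB per meter, i.e. tens of dB over a two-way path to 330 m, which is not implausible for a detectable echo.

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def attenuation_db_per_m(freq_hz: float, eps_r: float, tan_delta: float) -> float:
    """Low-loss approximation: alpha = pi * f * sqrt(eps_r) * tan_delta / c (Np/m)."""
    alpha_np = math.pi * freq_hz * math.sqrt(eps_r) * tan_delta / C
    return 8.686 * alpha_np  # nepers to decibels

for tan_d in (0.0040, 0.0061):
    per_m = attenuation_db_per_m(60e6, 6.5, tan_d)
    two_way = per_m * 2 * 330  # two-way path to a 330 m deep reflector
    print(f"tan_delta={tan_d}: {per_m:.3f} dB/m, {two_way:.0f} dB round trip")
```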
Fig. 1: Chang'e-4 radargram for the 60 MHz channel.
a LPR CH-1 radargram of CE-4 landing site using an Automatic Gain Control (AGC) method49 for amplitude compensation. The yellow line represents the aggregated data traces to show the enhanced subsurface echoes A–E. This approach also minimizes "anomalous" signal points along the travel path that might arise from the random distribution of rocks and debris among more heterogeneous layers. b Interpreted LPR CH-1 radargram of CE-4 landing site using image enhancement techniques. The cumulative length of the traversed path is 284.6 m. Yellow lines represent enhanced subsurface echoes A–E (a); light blue lines are subtle boundaries denoting differences in "stripe" directions and sharpness; dashed lines denote higher uncertainty in location. The two red vertical lines indicate waypoints 37 and 42, respectively. Please note that LPR data were not collected at a fixed speed. For example, the jump from 180 to 240 m at the end of the X axis occurs because LPR CH-1 collected much fewer data at the end of the traverse path than at the beginning stage, when the rover traveled around a small crater. The enlarged images of the end of the traverse are given in Supplementary Fig. 3. c The interpreted stratigraphy structure inferred from the LPR results. ε = 4.5 (≤52 m) and 6.5 (>52 m) are used for time-to-depth conversion.
The prominent and continuous subsurface reflectors A–E at depths of (A) 51.8 ± 1.1 m, (B) 63.2 ± 1.2 m, (C) 96.2 ± 3.2 m, (D) 130.2 ± 3.7 m, and (E) 225.8 ± 5.5 m can be observed both in the processed radar image and in the aggregated data traces displayed in terms of signal strength (dB, yellow line) (Fig. 1a). The reflectors run roughly parallel to the surface (see Fig. 1), except for reflector D, which shows a gradual rise of 7.1 m toward the right end. This is probably due to a change in subsurface topography, e.g., a crater at a depth of 130 m (see simulation results in Supplementary Fig. 4). From around waypoint 42, reflector D becomes flat because the rover conducted a local exploration mission to collect other scientific data at the end of the nine-month exploration, with consequently little variation in subsurface topography. Nonetheless, this localized and repeated sampling phase helps to constrain the consistency and reliability of the data-gathering process.
The materials between the most prominent horizontal reflectors are rather uniform, and strong radar echoes are rare (e.g., A–B, B–C, C–D in Fig. 1b); however, a few subtle features stand out in the bottom region of the radargram after image enhancement. Some relatively short lines appear in the D–E strata, and more continuous ones occur below reflector E; we interpret both as ejecta at different scales. As the D–E stratum is about 100 m thick, it may have formed through multiple lava eruption events interposed with small-scale ejecta deposits or thin regolith built up during lull periods: this geologically complex admixture would be reflected in the scattered features evident in Fig. 1b. Alternatively, the large-scale ejecta layers at depths over 200 m may produce relatively continuous signal discontinuities, but with more pronounced fluctuations than a well-defined interface.
Simulation results
To test our geological interpretation of the radar data, several subsurface models were designed for LPR simulation with various sets of loss tangent and permittivity values.
The average loss tangent (the ratio of the imaginary to the real part of the permittivity, tanδ) at the CE-4 site is inverted with three types of geometric spreading corrections, as shown in Fig. 2: for the R2 correction, tanδ = 0.0060 ± 0.0001; for R3, tanδ = 0.0051 ± 0.0001; for R4, tanδ = 0.0041 ± 0.0001. In the rough-interface case (R3 correction), the estimate agrees with the value of 0.005 inferred from the penetrating depth of LPR CH-1 (see Supplementary Note 3). The first model (Fig. 3a) adopts the derived loss tangent value and represents the subsurface structure underneath a short path, including regolith, ejecta from nearby craters, basalt layers admixed with small-scale ejecta, and large-scale ejecta, stacked in time sequence from recent to remote. The simulation results show a clear boundary between the regolith and the mare basalts (Fig. 3b). Both small- and large-scale ejecta produce clear radar echoes; in particular, larger debris at depths over 200 m produces flatter horizontal lines rather than large hyperbola-shaped signals. The echoes of small-scale ejecta appear shorter and less continuous, comparable to the LPR observations in the D–E section. Overall, the simulation shows characteristics similar to the CE-4 LPR radargram, confirming the plausibility of the subsurface model. The other two models (Fig. 3c, d) illustrate how different loss tangent values affect the radargram: with tanδ = 0.001 (Fig. 3c) the penetrating depth is 6400 ns, much deeper than the 4900 ns obtained with tanδ = 0.005, and reflections from the ejecta are clearly visible; with tanδ = 0.009 (Fig. 3d) the signal attenuates faster and the reflections become weak beneath 3000 ns. Simulation results with different permittivity values for the mare basalt are given in Supplementary Note 4 and Supplementary Fig. 5.
Fig. 2: Depth vs. signal power profile.
R2, R3, R4 backscatter/spreading corrections are applied, respectively. The best-fit lines are used to calculate the attenuation η (dB/m). ε = 6.5 is used for time-to-depth conversion.
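Under the standard low-loss approximation, the fitted attenuation η (dB/m) in Fig. 2 maps to a loss tangent through α ≈ π f √ε tanδ / c (one-way attenuation in Np/m, with η ≈ 8.686 α). A minimal sketch of this inversion; the low-loss formula is textbook electromagnetics, but the function name, the one-way convention, and the check values here are ours, not from the paper's pipeline:

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def loss_tangent_from_attenuation(eta_db_per_m: float, f_hz: float, eps_r: float) -> float:
    """Invert the low-loss attenuation alpha = pi*f*sqrt(eps_r)*tan_delta/c.

    eta_db_per_m is the ONE-WAY attenuation in dB/m (halve a two-way fit first).
    """
    alpha = eta_db_per_m / 8.686            # dB/m -> Np/m
    return alpha * C / (math.pi * f_hz * math.sqrt(eps_r))

# Forward check: tan_delta = 0.005 at 60 MHz in eps_r = 6.5 basalt,
# then invert the resulting attenuation back.
alpha = math.pi * 60e6 * math.sqrt(6.5) * 0.005 / C
eta = 8.686 * alpha
print(round(loss_tangent_from_attenuation(eta, 60e6, 6.5), 4))  # → 0.005
```

This round-trip only illustrates the dependence of tanδ on the fitted slope; the paper's values additionally depend on which geometric spreading correction (R2/R3/R4) is removed before fitting.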
Fig. 3: Radar simulation results.
a An 80 m-long path simulation model. b Simulation LPR result with loss tangent tanδ = 0.005. c Loss tangent tanδ = 0.001. d Loss tangent tanδ = 0.009. The color bar in a shows the permittivity value used for each layer. From top to bottom, the simulation model contains a vacuum, regolith layer, ejecta, and multiple basalt layers with interlayered regolith. In a, both small and large amount of ejecta is modeled at depth ranges of 120–220 m and 220–300 m, respectively.
Trend surface analysis
The CE-4 exploration path crosses old surfaces scattered with craters of different sizes, as revealed in the surface Digital Elevation Model (Fig. 4). The subsurface echoes marked A–E are derived by aggregating all the repeated data tracks collected at each waypoint. Their depth profiles reflect lateral variations of the subsurface structure along the traveling path. The topographical variation within each subsurface layer is no larger than 8 m, averaging about 4 m. The trend surface analysis method is used to estimate the relatively large-scale systematic variation of each subsurface layer (Fig. 4). The arrows show that layers A, B, and D rise toward the same direction, indicating possible sources for the lava flows located west of the lander, whereas layer C tends to rise toward the east, implying a potential source for an ejecta layer in that direction. The E stratum also shows a prevailing rise at the western end, probably because it tends to thicken in a westerly direction.
Fig. 4: Trend surface analysis results.
Top image shows the surface DEM. A–E are the trend surface analyses of the subsurface structure of the CE-4 surveying area, corresponding to Fig. 1. The color bars indicate the elevation variation within each layer and the arrows show the rising direction of the layer. The coordinate system is based on the lander location, marked by the red triangle. The black dots represent the rover's path during the initial nine months. Surface DEM data were derived from a Lunar Reconnaissance Orbiter (LRO) Narrow Angle Camera (NAC) stereo pair image28. The positive direction of the Y axis points north and the positive direction of the X axis points east.
The data from LPR CH-2 reveal that the uppermost 38 m-thick section could consist of three distinct layers of fine-grained regolith, coarse ejecta, and fractured basalt. The bottom layer probably also features low-calcium pyroxene-bearing ejecta materials from neighboring craters, e.g., Finsen crater (Table 1), and autochthonous high-calcium pyroxene-bearing materials19.
Table 1 Estimations of the thickness of ejecta deposits.
The first recognized reflector A in the LPR CH-1 radargram (Fig. 1a) is found at a depth of 52 m; a uniform basalt layer may be present between 38 and 52 m. Alternatively, any boundary within this range may be narrower than the CH-1 resolution, thus escaping detection.
Reflectors B, C, and D probably represent interfaces between basalt layers from different periods, caused by the high permittivity contrast between solid mare basalt and high-porosity deposits formed during quiescent periods of lava activity. In such intervals, a stratum of regolith would have had sufficient time to develop through surface weathering, admixed with random ejecta deposits12. The trend surface analysis suggests that layer C rises toward the east, unlike layers A, B, and D. This could be due to ejecta deposits delivered from the east, plausibly originating from crater Alder.
The 100 m-thick D–E stratum is interpreted as a layer formed by an undefined number of intermittent lava flows and small-scale ejecta deposits, possibly interposed with shallow regolith layers. This interpretation agrees with a prior study using small craters close to the CE-4 landing site, which found evidence of mare basalts at depths between ~30 and 90 m32.
The occurrence of large-scale, multiple lava flooding events within the VK crater is also revealed by several geomorphological features relating to a prominent dome located west of the crater. The elevation cross-sections of the 40 km-wide mound/dome structure (Fig. 5b, c) suggest that it represents the last, less voluminous, and possibly more viscous lava flows, which accumulated relatively close to the vent/s, probably located at the foot of the crater terrace. Three finger-shaped flow lobes have heights of around 110 m (section a–b) with a slope of about 4.6°, whereas to the south, close to the rim, the flow also drops relatively abruptly by 100 m (section 1) with a 2.6° slope, which is comparable to the thickness estimation of the D–E stratum. These heights are substantial even in comparison with the well-studied Mare Imbrium lobes, which range between 40 and 65 m33,34. This suggests that the infill history of the basin was punctuated and probably prolonged in time. However, much of VK's crater floor has been considerably scarred by countless secondary impact events since its formation, some very recent judging by the secondaries' size and freshness, rendering any attempt to estimate ages from crater size-frequency distribution surveys arduous at best. Nevertheless, given that such sharp geological boundaries have a long but limited lifespan on the lunar surface, this offers the intriguing possibility that the eruptive activity continued into the Eratosthenian epoch, comparable to the 'young' Mare Imbrium flows35. Pasckert et al.21, based on Neukum fits of the size-frequency distribution of craters on the VK crater floor, derived an interval between 3.75 and 3.15 Ga for the last eruptive phase. Using the same technique, extrusive events within the SPA basin are estimated to have peaked in the Late Imbrian period, ~3.74–3.71 Ga36, ending about \(3.6_{ - 0.2}^{ + 0.09}\) Ga19.
Fig. 5: Western Von Kármán flow structure.
Lobate fronts and other morphological details of the flow structure are shown. Cross-section altimetry data were generated from SLDEM2015 + LOLA data maps50. Center lat-lon coordinates for image a are 45.45°S, 174.10°E. Arrows point at prominent flow fronts. LROC Wide Angle Camera (WAC)51 image mosaic.
The ejecta of the nearby fresh crater Zhinyu, with a diameter of 3.8 km, sample target materials down to about 300 m, and the radial variation in estimated olivine content suggests the existence of at least three distinct layers with different olivine abundances (Fig. 6). The excavation process of an impact produces an inverse stratigraphy of ejected materials, with those closest to the crater rim representing the deepest part of the excavation. The spectrally derived olivine abundances (wt%)37 reveal at least three types of potentially heterogeneous composition, as shown by the concentric circles marked in Fig. 6. Spectrally derived compositional data of shocked materials should, of course, be interpreted with caution; however, the heterogeneous concentric pattern associated with Zhinyu is uncommon, making it more likely to reflect actual compositional/petrological differences with depth.
Fig. 6: Olivine abundance (wt%) map mosaic of the VK crater.
Crater Zhinyu (3.8 km) displays an extended ejecta apron characterized by three different spectral signatures interpreted as relating to different olivine content of the radially distributed materials37.
The deepest strata seen by the LPR represent large-scale ejecta deposits located at a depth of about 230 m, with a thickness of more than 116 m, which is within the range of estimated ejecta deposits from several source candidates (Table 1). The subsurface structure indicates that the ejecta might come from the west, as the deposit thickens toward the right end of the radargram series (Fig. 1). Based on the analysis of the timing sequence of impacts, the estimated thickness values, and the direction of the ejecta source, one possibility is that stratum E was emplaced by the Imbrium impact event, or by a mix of both Imbrium and Orientale. Based on broader geological considerations18, the fact that the CE-4 landing site coincides with the antipodal position of the Imbrium impact suggests that the geologic unit Ig (Supplementary Fig. 1d) might be the product of its ejecta surge. However, VK was heavily impacted by many later events, and ejecta distribution is notoriously difficult to associate with a distal impact unless it is deposited within a clear ballistic path. Therefore, we cannot exclude other possible sources of the large-scale ejecta.
Based on surface age derivations from the size-frequency distribution of impact craters in the SPA basin, the volcanic activity appears to have peaked in the Late Imbrian. However, the start time of volcanism in the region is not well constrained; the eruptive activity may have begun as early as several million years after the VK impact event (3.97 Ga). Later, large impact events such as Ingenii (~3.91 Ga), Leibnitz (~3.88 Ga), Imbrium, and Orientale together produced over 200 m of ejecta in the CE-4 landing site region (Table 1).
The mare infill of the basin probably followed this main deposition phase. This stage was punctuated by the arrival of small-scale ejecta from other distant impact craters or nearby relatively small craters. For example, the ejecta from Alder crater (Table 1) in the east might be buried by mare basalts at the depth of 96 m. In this scenario, the thickness of mare basalt would lie beyond the LPR CH-1 detection limit.
Using the excavation depths of the largest impacts that did not penetrate to the crater basement, the maximum thickness of the mare infill has been estimated at about 200 m21 and possibly over 300 m. A higher estimate of 310 m was derived from the spectral characteristics of the ejecta of crater Zhinyu19, which lies 32 km west of the CE-4 landing site. Our LPR-based findings also align with an overall basalt layer thickness larger than ~300 m.
Overall, the LPR data lead to an interpretative model of the local stratigraphy comparable to that inferred from reflectance spectra of crater ejecta31. The main difference between the methodologies relates to the depth of the proposed layers, which in the LPR results are consistently deeper than previous estimations19,21. Another new insight is that the volcanism within VK was punctuated and prolonged, with at least four major infill events interpretable from both the radargrams and geological considerations. The radargram provides direct evidence of multiple lava-infilling events within the VK crater, resulting in 12, 33, 34, and 96 m-thick lava layers at the CE-4 site. It also shows that the large-scale, multiple lava flooding events were punctuated by the arrival of ejecta from impacts of different sizes and origins. Finally, the average loss tangent of mare basalts on the farside is inferred as 0.0040–0.0061.
Lunar-penetrating radar data
CH-1 of the LPR operates at a center frequency of 60 MHz with a 40 MHz bandwidth. The monopole antennas (12 mm in diameter and 1150 mm in length) are mounted at the back of the rover, standing about 60 cm above the ground. In this work, we analyze the LPR CH-1 data (the file name list is given in Supplementary Table 1) collected during the first 9 months by the Yutu-2 rover along its 284.6 m-long exploration journey (see the exploration path in Fig. 7).
Fig. 7: The routing path of the rover.
The yellow dots represent waypoints. The exploration phase from waypoint 42 to 49 is highlighted in the cartoon inset. The base image is Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) image M1311886645RC.
The radargram from the LPR CH-1 data was derived after removing repetitive data and background noise and applying filtering and amplitude compensation. Further processing details can be found in Supplementary Note 5 and Supplementary Figs. 6 and 7. The actual depth of a subsurface reflector, d, is converted from the two-way travel time t at the reflector and the relative permittivity of lunar basalt, ε, using
$${d} = \frac{{ct}}{{2\sqrt \varepsilon }}$$
The permittivity of the Apollo regolith and basalt samples17,38, ε = 4.5 (≤52 m) and 6.5 (>52 m), measured at 60 MHz, is adopted in this work.
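The conversion above is direct to apply. A minimal sketch (function and variable names are ours, not from the mission processing chain):

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def reflector_depth(t_ns: float, eps_r: float) -> float:
    """Depth d = c*t / (2*sqrt(eps_r)) for a two-way travel time t in nanoseconds."""
    t = t_ns * 1e-9  # ns -> s
    return C * t / (2.0 * math.sqrt(eps_r))

# Example: a two-way delay of ~884 ns in basalt (eps_r = 6.5)
# corresponds to a depth of roughly 52 m, the level of reflector A.
print(round(reflector_depth(884, 6.5), 1))  # → 52.0
```

Note the factor of 2: the recorded delay covers the down-and-back path, so only half of it maps to depth.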
To identify the subsurface reflectors, we used not only the radargrams of LPR data collected while the rover was in motion: in addition, a data trace for each waypoint (49 waypoints in total, shown as yellow dots in Fig. 7) was generated by aggregating all the repeated acquisitions (~400–1000 tracks) at the same location, to further reduce random noise and increase the signal-to-noise ratio. Furthermore, the reflectors identified in the CE-4 LPR CH-1 data were compared with those derived from CE-3 data (Supplementary Note 6 and Supplementary Figs. 8–10) to rule out signal artifacts caused by inherent system noise39.
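Aggregating repeated traces at a stationary waypoint is, in essence, coherent stacking: averaging N traces leaves the repeatable reflection unchanged while uncorrelated noise shrinks by roughly √N. A schematic illustration with synthetic data (not the mission processing code):

```python
import numpy as np

rng = np.random.default_rng(0)
n_traces, n_samples = 800, 2048                  # ~400-1000 repeats per waypoint
signal = np.sin(np.linspace(0, 40, n_samples))   # stand-in for the true echo
traces = signal + rng.normal(0.0, 1.0, (n_traces, n_samples))

stacked = traces.mean(axis=0)                    # coherent stack

noise_single = np.std(traces[0] - signal)
noise_stacked = np.std(stacked - signal)
print(noise_single / noise_stacked)              # ~ sqrt(800) ≈ 28
```

The √N gain only holds while the rover is truly stationary; traces collected in motion sample different subsurface columns and cannot be stacked this way.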
The trend surface analysis was performed on the reflector locations at each waypoint using low-order polynomial fitting40.
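Trend surface analysis here amounts to fitting a low-order polynomial z(x, y) to the reflector depths at the waypoints and reading off the gradient direction. A first-order (planar) sketch with made-up coordinates; the waypoint positions and layer values below are illustrative only:

```python
import numpy as np

# waypoint coordinates (m) and reflector elevations (m) -- illustrative values
x = np.array([0.0, 50.0, 100.0, 150.0, 200.0, 250.0])
y = np.array([0.0, 10.0, 5.0, 20.0, 15.0, 25.0])
z = -63.2 + 0.01 * x - 0.02 * y   # synthetic layer surface around reflector B

# least-squares fit of z = a + b*x + c*y
A = np.column_stack([np.ones_like(x), x, y])
a, b, c = np.linalg.lstsq(A, z, rcond=None)[0]

# the fitted trend rises along the in-plane gradient direction (b, c)
rise_azimuth = np.degrees(np.arctan2(c, b))
print(round(b, 3), round(c, 3))   # ≈ 0.01, -0.02
```

Higher-order surfaces follow the same pattern with extra columns (x², xy, y², …) in the design matrix.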
Radar signal simulation
To evaluate our geological interpretations of the LPR radargram, the proposed subsurface stratigraphy model was simulated in the transverse electric mode with a two-dimensional finite-difference time-domain method using gprMax41. Gaussian noise, set at the average signal level below the LPR CH-1 detection limit, was included in the simulation. The detailed model and the permittivity value of each layer are shown in Fig. 3 and Supplementary Note 4.
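For orientation, a gprMax input file for this kind of layered model takes roughly the following form. This is a 2D toy fragment: the domain size, discretization, materials, and source placement are illustrative assumptions, not the configuration used for Fig. 3:

```
#domain: 80 300 0.1
#dx_dy_dz: 0.05 0.05 0.1
#time_window: 7e-6
#material: 6.5 0 1 0 basalt
#material: 4.5 0 1 0 regolith
#box: 0 0 0 80 290 0.1 basalt
#box: 0 290 0 80 300 0.1 regolith
#waveform: ricker 1 60e6 src_pulse
#hertzian_dipole: z 40 299 0 src_pulse
#rx: 41 299 0
```

In gprMax, a 2D run is expressed as a single-cell-thick 3D domain (here the z extent equals one cell); ejecta bodies would be added as further `#box`/`#cylinder` objects with their own materials.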
Ejecta deposition estimation
Large volumes of ejecta were delivered to the CE-4 landing site by several impacts42. The ejecta thickness was estimated using
$$T = 0.068R_{\mathrm{{t}}}\left( {r/R_{{\mathrm{{at}}}}} \right)^{ - 3}$$
where r is the distance from the crater center to the landing site, taking the curvature of the Moon into account, and Rat is the radius of the transient cavity at the preimpact surface in meters43,44. For complex craters,
$$R_{{\mathrm{{at}}}} = 0.4906\left( {2R} \right)^{0.85},\quad R \, > \, 9.5\,{\mathrm{{km}}}$$
where R is the rim-to-rim radius of the final crater45. For the Imbrium and Orientale basins, which formed after the VK crater, Rat values were obtained from Miljković et al.46. The apparent radius of the Ingenii crater is 114 km47. The thicknesses of ejecta delivered to the landing site are listed in Table 1.
The cratering efficiency (μ) is the ratio between the thickness of local material excavated by the impacting ejecta and the thickness of the ejecta itself; \({\mu} = 0.0092r_{{\mathrm{{gc}}}}^{0.87}\) is adopted from Petro and Pieters48, where rgc is the great-circle distance. The thickness of the ejecta deposits, including both the ejecta and the locally excavated material, is then \(h = T \times (1 + \mu )\).
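The scaling relations above chain together as follows. Several points are hedged assumptions on our part: we read the leading radius in the thickness law as Rat (the text defines only Rat), the scaling coefficients are dimensional so all lengths below are taken in kilometers, and the basin-scale Rat and distance in the example are illustrative stand-ins, not entries of Table 1:

```python
def transient_radius(R: float) -> float:
    """Rat = 0.4906*(2R)^0.85 for complex craters (R > 9.5 km); km assumed."""
    return 0.4906 * (2.0 * R) ** 0.85

def ejecta_thickness(Rat: float, r: float) -> float:
    """T = 0.068 * Rat * (r/Rat)^-3 (leading radius read as Rat; see lead-in)."""
    return 0.068 * Rat * (r / Rat) ** -3

def deposit_thickness(Rat: float, r_gc: float) -> float:
    """h = T*(1+mu) with mu = 0.0092 * r_gc^0.87 (r_gc assumed in km)."""
    mu = 0.0092 * r_gc ** 0.87
    return ejecta_thickness(Rat, r_gc) * (1.0 + mu)

# Illustrative: a basin-scale transient cavity (Rat ~ 340 km, our stand-in
# value) delivering ejecta to a site ~5800 km away.
print(round(deposit_thickness(340.0, 5800.0) * 1000, 1))  # thickness in m
```

The steep (r/Rat)⁻³ fall-off is why only the largest basins contribute tens of meters at such distances, while μ grows with distance because distal ejecta mostly churn up local material.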
CE-3 LPR data and CE-4 LPR data are available at Data Publishing and Information Service System of China's Lunar Exploration Program (http://moon.bao.ac.cn/). All the LPR data IDs are listed in Supplementary Table 1. Data for Figs. 1 and 2 are available at https://doi.org/10.5281/zenodo.3763355. Data sources of Figs. 4–6 are given in the captions. Additional data related to this paper are available from the corresponding author upon reasonable request.
Yingst, R. A. & Head, J. W. Volumes of lunar lava ponds in South Pole-Aitken and Orientale Basins: implications for eruption conditions, transport mechanisms, and magma source regions. J. Geophys. Res. Planets 102, 10909–10931 (1997).
De Hon, R. A. Thickness of mare material in the Tranquillitatis and Nectaris basins. In 5th Lunar Sci. Conf. Proc. Houston, TX, 18–22 March 1974. Vol. 1 (A75-39540 19-91) 53–59 (Pergamon Press. Inc., New York, 1974).
De Hon, R. A. Thickness of the western mare basalts. In 10th Lunar Planet. Sci. Conf. Houston, TX, 19–23 March 1979. Vol. 3 (A80-23677 08-91) 2935–2955 (Pergamon Press, Inc., New York, 1979).
Gong, S. et al. Thicknesses of mare basalts on the Moon from gravity and topography. J. Geophys. Res. Planets 121, 854–870 (2016).
Budney, C. J. & Lucey, P. G. Basalt thickness in Mare Humorum: the crater excavation method. J. Geophys. Res. Planets 103, 16855–16870 (1998).
Gillis, J. J. & Spudis, P. D. Geology of the Smythii and Marginis region of the Moon: using integrated remotely sensed data. J. Geophys. Res. Planets 105, 4217–4233 (2000).
Heather, D. & Dunkin, S. A stratigraphic study of southern Oceanus Procellarum using Clementine multispectral data. Planet. Space Sci. 50, 1299–1309 (2002).
Thomson, B. J., Grosfils, E. B., Bussey, D. B. J. & Spudis, P. D. A new technique for estimating the thickness of mare basalts in Imbrium Basin. Geophys. Res. Lett. 36, 1–5 (2009).
Cooper, B. L., Carter, J. L. & Sapp, C. A. New evidence for graben origin of Oceanus Procellarum from lunar sounder optical imagery. J. Geophys. Res. 99, 3799 (1994).
Peeples, W. J. et al. Orbital radar evidence for lunar subsurface layering in Maria Serenitatis and Crisium. J. Geophys. Res. Solid Earth 83, 3459–3468 (1978).
Phillips, R. J. et al. in Apollo 17 Preliminary Science Report (NASA SP-330) 22-1:22–26 (National Aeronautics and Space Administration, 1973).
Ono, T. et al. Lunar radar sounder observations of subsurface layers under the nearside maria of the Moon. Science 323, 909–912 (2009).
Oshigami, S. et al. Distribution of the subsurface reflectors of the western nearside maria observed from Kaguya with Lunar Radar Sounder. Geophys. Res. Lett. 36, L18202 (2009).
Oshigami, S. et al. Mare volcanism: reinterpretation based on Kaguya Lunar Radar Sounder data. J. Geophys. Res. Planets 119, 1037–1045 (2014).
Xiao, L. et al. A young multilayered terrane of the northern Mare Imbrium revealed by Chang'E-3 mission. Science 347, 1226 (2015).
Lai, J., Xu, Y., Zhang, X. & Tang, Z. Structural analysis of lunar subsurface with Chang׳E-3 lunar penetrating radar. Planet. Space Sci. 120, 96–102 (2016).
Lai, J. et al. Comparison of dielectric properties and structure of lunar Regolith at Chang'e‐3 and Chang'e‐4 landing sites revealed by ground penetrating radar. Geophys. Res. Lett. 46, 12783–12793 (2019).
Stuart-Alexander, D. E. Geologic Map of the Central Far Side of the Moon I-1047, https://doi.org/10.3133/i1047 (US Geological Survey, 1978).
Huang, J. et al. Geological characteristics of Von Kármán Crater, Northwestern South Pole-Aitken Basin: Chang'E-4 landing site region. J. Geophys. Res. Planets 123, 1684–1700 (2018).
Head, J. W. Lunar volcanism in space and time. Rev. Geophys. 14, 265 (1976).
Pasckert, J. H., Hiesinger, H. & van der Bogert, C. H. Lunar farside volcanism in and around the South Pole–Aitken basin. Icarus 299, 538–562 (2018).
Head, J. W. & Wilson, L. Generation, ascent and eruption of magma on the Moon: New insights into source depths, magma supply, intrusions and effusive/explosive eruptions (Part 2: Predicted emplacement processes and observations). Icarus 283, 176–223 (2017).
Wieczorek, M. A. & Phillips, R. J. The "Procellarum KREEP Terrane": implications for mare volcanism and lunar evolution. J. Geophys. Res. Planets 105, 20417–20430 (2000).
Wieczorek, M. A., Zuber, M. T. & Phillips, R. J. The role of magma buoyancy on the eruption of lunar basalts. Earth Planet. Sci. Lett. 185, 71–83 (2001).
Wieczorek, M. A. et al. The crust of the Moon as seen by GRAIL. Science 339, 671–675 (2013).
Wilson, L. & Head, J. W. Generation, ascent and eruption of magma on the Moon: new insights into source depths, magma supply, intrusions and effusive/explosive eruptions (Part 1: theory). Icarus 283, 146–175 (2017).
Laneuville, M., Wieczorek, M. A., Breuer, D. & Tosi, N. Asymmetric thermal evolution of the Moon. J. Geophys. Res. Planets 118, 1435–1452 (2013).
Topographic Map of the Chang'e 4 Site. Available at: http://www.lroc.asu.edu/posts/1100. Accessed date: June 20, 2020.
Liu, J. et al. Descent trajectory reconstruction and landing site positioning of Chang'E-4 on the lunar farside. Nat. Commun. 10, 4229 (2019).
Cohen, B. A. Support for the lunar cataclysm hypothesis from lunar meteorite impact melt ages. Science 290, 1754–1756 (2000).
Ling, Z. et al. Composition, mineralogy and chronology of mare basalts and non-mare materials in Von Kármán crater: landing site of the Chang'E−4 mission. Planet. Space Sci. 179, 104741 (2019).
Qiao, L., Ling, Z., Fu, X. & Li, B. Geological characterization of the Chang'e-4 landing area on the lunar farside. Icarus 333, 37–51 (2019).
Schaber, G. G., Boyce, J. M. & Moore, H. J. The scarcity of mappable flow lobes on the lunar maria: unique morphology of the Imbrium flows. In 7th Lunar Planet. Sci. Conf. Proc. Houston, TX, 15–19 March 1976. Vol. 3 (A77-34651 15-91) 2783–2800 (Pergamon Press, Inc., New York, 1976).
Garry, W. B., Robinson, M. S. & Team, L. Observations of flow lobes in the phase I Lavas, Mare Imbrium, the Moon. In 41st Lunar Planet. Sci. Conf. 1–5 March 2010, Woodlands, TX. LPI Contribution No. 1533, 2278 (2010).
Schaber, G. G. Lava flows in Mare Imbrium: geologic evaluation from Apollo orbital photography. In 4th Lunar Sci. Conf. 5–8 March 1973, Houston, TX. 4, 73 (1973).
Yingst, R. A., Chuang, F. C., Berman, D. C. & Mest, S. C. Geologic Mapping of the Planck Quadrangle of the Moon (LQ-29). In 48th Lunar Planet. Sci. Conf. 20–24 March 2017, Woodlands, TX. LPI Contribution No. 1964, 1680 (2017).
Lemelin, M., Lucey, P. G., Gaddis, L. R., Hare, T. & Ohtake, M. Global map products from the Kaguya Multiband Imager at 512 ppd: Minerals, FeO and OMAT. In 47th Lunar Planet. Sci. Conf. 21–25 March 2016, Woodlands, TX. LPI Contribution No. 1903, 2994 (2016).
Carrier, W. D., Olhoeft, G. R. & Mendell, W. Physical properties of the lunar surface. Lunar Sourcebook 522–530 (Lunar and Planetary Institute, 1991).
Li, C. et al. Pitfalls in GPR data interpretation: false reflectors detected in lunar radar cross sections by Chang'e-3. IEEE Trans. Geosci. Remote Sens. 56, 1325–1335 (2018).
Davis, J. C. Statistics and Data Analysis in Geology (Wiley, 1986).
Warren, C., Giannopoulos, A. & Giannakis, I. gprMax: open source software to simulate electromagnetic wave propagation for Ground Penetrating Radar. Comput. Phys. Commun. 209, 163–170 (2016).
Lin, H. et al. Olivine-norite rock detected by the lunar rover Yutu-2 likely crystallized from the SPA impact melt pool. Natl. Sci. Rev. 7, 913–920 (2020).
Pike, R. J. Ejecta from large craters on the Moon: Comments on the geometric model of McGetchin et al. Earth Planet. Sci. Lett. 23, 265–271 (1974).
Xie, M. & Zhu, M.-H. Estimates of primary ejecta and local material for the Orientale basin: Implications for the formation and ballistic sedimentation of multi-ring basins. Earth Planet. Sci. Lett. 440, 71–80 (2016).
Xie, M., Liu, T. & Xu, A. Ballistic sedimentation of impact crater ejecta: implication for resurfacing and provenance of lunar samples. J. Geophys. Res. Planets 125, e2019JE006113 (2020).
Miljković, K. et al. Asymmetric distribution of lunar impact basins caused by variations in target properties. Science 342, 724–726 (2013).
Miljković, K. et al. Subsurface morphology and scaling of lunar impact basins. J. Geophys. Res.: Planets 121, 1695–1712, https://doi.org/10.1002/2016je005038 (2016).
Petro, N. E. & Pieters, C. M. Modeling the provenance of the Apollo 16 regolith. J. Geophys. Res. 111, E09005 (2006).
Rabiner, L. R. & Gold, B. Theory and Application of Digital Signal Processing (Prentice-Hall, 1975).
Barker, M. K. et al. A new lunar digital elevation model from the Lunar Orbiter Laser Altimeter and SELENE Terrain Camera. Icarus 273, 346–355 (2016).
Robinson, M. S. et al. Lunar Reconnaissance Orbiter Camera (LROC) instrument overview. Space Sci. Rev. 150, 81–124 (2010).
Scientific data of Chang'e missions are provided by the China National Space Administration (CNSA). We are grateful for the support from the team members of the Ground Application and Research System (GRAS), who contributed to data receiving and preprocessing. This study is supported by the Science and Technology Development Fund (FDCT) of Macau (Grants 0042/2018/A2, 0089/2018/A3, 005/2017/A1, and 0079/2019/A2), the Pre-research Project on Civil Aerospace Technologies of CNSA (D020101), the Science and technology project of Jiangxi education department (Grant GJJ180489), and the Scientific Research Starting Foundation for scholars from Jiangxi University of Science and Technology (Grant jxxjbs18017).
State Key Laboratory of Lunar and Planetary Sciences, Macau University of Science and Technology, Macau, China
Jialong Lai, Yi Xu, Roberto Bugiolacchi, Xu Meng, Long Xiao, Minggang Xie, Xiaoping Zhang & Luyuan Xu
School of Science, Jiangxi University of Science and Technology, Ganzhou, China
Jialong Lai
University College London, Earth Sciences, London, UK
Roberto Bugiolacchi
School of Civil Engineering, Guangzhou University, Guangzhou, China
Xu Meng
Planetary Science Institute, School of Earth Sciences, China University of Geosciences, Wuhan, China
Long Xiao
College of Science, Guilin University of Technology, Guilin, China
Minggang Xie
State Key Laboratory of Remote Sensing Science, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing, China
Bin Liu & Kaichang Di
Key Laboratory of Electromagnetic Radiation and Detection Technology, Chinese Academy of Sciences, Beijing, China
Bin Zhou & Shaoxiang Shen
Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing, China
J.L.L., Y.X., and R.B. designed the research and wrote the paper. L.X. and X.P.Z. helped with the geologic analysis. J.L.L., X.M., and M.G.X. performed the calculations. B.L. and K.C.D. generated the Digital Elevation Model (DEM) of the Yutu-2 surveying path and helped with the NAC data processing. B.Z. and S.X.S. designed the instrument. L.Y.X. helped with data calibration and mapping.
Correspondence to Yi Xu.
Peer review information Nature Communications thanks Roberto Orosei and David Stillman for their contribution to the peer review of this work. Peer reviewer reports are available.
Lai, J., Xu, Y., Bugiolacchi, R. et al. First look by the Yutu-2 rover at the deep subsurface structure at the lunar farside. Nat Commun 11, 3426 (2020). https://doi.org/10.1038/s41467-020-17262-w
Accepted: 16 June 2020
What percentage of a spiral galaxy is the center/bulge?
Is there a credible source that can tell me how big the bulge of a spiral galaxy is compared to the rest of the galaxy? Unfortunately, I could not find any.
galaxy galaxy-center galactic-center
xabdax
The term you're looking for is called the bulge-to-disk (size) ratio. Sort of by definition, the answer depends on the morphology of the galaxy, i.e. how "late-type" spiral it is. "Sa" spirals are the ones that resemble ellipticals the most, and hence have large size ratios (of order, but below, unity), whereas bulges in Sc galaxies are (less than) one-tenth the size of the disk.
(EDIT: In fact, the way you phrase your question, you're looking for bulge-to-total, rather than bulge-to-disk, but there's a 1:1 correspondence between. Also, in retrospect I think you're interested in the mass ratio, but I interpreted "how big" as the size ratio. Peter Erwin's answer discusses the masses.)
The ratio is obtained by fitting the luminosity distributions of the bulge and the disk separately. Typically both are fitted as exponentials, with scale lengths $R_\mathrm{b}$ and $R_\mathrm{d}$, respectively, but other forms (e.g. Sérsic profiles) are also used; in that case, the effective radius $R_\mathrm{eff}$ is used, i.e. the radius inside which half the light is emitted. Furthermore, the answer will depend on the band in which you observe the galaxy (i.e. IR, optical, UV, …)
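The two size measures are related by a fixed factor for a given profile: for a pure exponential disk, the half-light (effective) radius is $R_\mathrm{eff} \approx 1.678\,R_\mathrm{d}$, which is easy to verify numerically. A quick check (our own sketch, not from the cited paper):

```python
import math

def enclosed_fraction(x: float) -> float:
    """Light fraction of an exponential disk I(R)=I0*exp(-R/Rd) inside R = x*Rd.

    Integrating 2*pi*R*I(R) gives L(<x)/L_tot = 1 - (1+x)*exp(-x).
    """
    return 1.0 - (1.0 + x) * math.exp(-x)

# bisect enclosed_fraction(x) = 0.5 on [0, 10]
lo, hi = 0.0, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if enclosed_fraction(mid) < 0.5:
        lo = mid
    else:
        hi = mid

print(round(lo, 3))  # R_eff / Rd ≈ 1.678
```

So quoting $R_\mathrm{eff,b}/R_\mathrm{d}$ (as in the plot below) mixes a half-light radius for the bulge with a scale length for the disk; the 1.678 factor lets you convert if you want both on the same footing.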
The following figure (from Möllenhoff 2004) shows the ratio $R_\mathrm{eff,b}/R_\mathrm{d}$ as a function of "Hubble type", going from 1 (Sa), to 3 (Sb), to 5 (Sc). I've annotated examples of galaxies of Hubble type 1, 3, and 5.
You see that the ratio goes from $\sim0.5\text{–}1$ for early-type spirals, to $\sim0.01\text{–}0.2$ for late-type spirals.
The different symbols correspond to different filters going from the I band (infrared) denoted by squares, to the U band (ultraviolet) denoted by circles; they have been offset slightly along the $x$ axis for visualization purposes — you see that the ratio decreases slightly for shorter wavelengths. In other words, the bulge is less prominent, the bluer the light you consider.
pela
$\begingroup$ How about our own galaxy, the Milky Way? $\endgroup$
$\begingroup$ Hey thanks for that answer. Can you also tell me how the amount of stars and the mass compare between arms and bulge? $\endgroup$
– xabdax
$\begingroup$ @barrycarter The Milky Way has a bulge radius of roughly 2 kpc, and a disk radius of ~16 kpc, so Rb/Rd ~ 0.12. This is consistent with its Hubble type being close to an Sbc, or 4 in the diagram, where the ratio lies around ~0.1, depending on the band. $\endgroup$
– pela
$\begingroup$ @xabdax The stellar mass density is a factor of 2-3 higher in the arms than in between them (e.g. Rix & Rieke 1993). In the bulge, I think the stellar mass density is more like 5 times higher; I can't find any data right now, but the MW bar has a density roughly 5 times higher than outside the bar (Portail et al. 2017). In the most central regions the density is even higher. $\endgroup$
$\begingroup$ Note that R_eff ("effective radius" = radius within which half of the light of the bulge is found) is not the same thing as the exponential scale length! (The Möllenhoff 2004 plots you show used Sérsic profiles for the bulges, not exponentials.) $\endgroup$
– Peter Erwin
To answer the title question, what you want to know is the bulge-to-total ($B/T$) ratio: the fraction of a spiral galaxy's light (and thus, approximately, of its stars) which is in the bulge. This ranges from 1 (it's all bulge, nothing else there -- i.e., it's an elliptical galaxy) to 0 (no bulge at all).
Consistent with pela's answer about sizes, the answer depends on what type of spiral galaxy you're talking about; traditionally, part of the definition of the Hubble sequence was how much extra light appeared to be in the central region of the galaxy, which is roughly the same as $B/T$.
These days, the answer is actually rather uncertain, because astronomers are in a debate about just what constitutes a "bulge": there are "classical bulges" (kind of like mini-elliptical galaxies and more or less what you are probably thinking about), "pseudobulges", "boxy/peanut-shaped bulges", and possibly other things, all of which are "extra light/stars" in the central region of the galaxy, but which have different shapes, dynamics, and origins. For example, the Milky Way definitely has a boxy/peanut-shaped bulge (what you see sticking up out of the disk), which is really part of its bar; it appears to have a "nuclear disk" (a.k.a. "disky pseudobulge"), which is a dense, bright disk of stars extending to 150 parsecs or so in radius; but it may not have a "classical bulge" at all.
To give you something to look at that has $B/T$ values, here's a figure from Laurikainen et al. (2010), which is based on moderately sophisticated analysis of near-infrared images (less confused by dust and recent star-formation than optical images). The small symbols are individual-galaxy measurements, the large filled circles are median values for each Hubble type, and the open circles are from an earlier study. This plot includes S0/lenticular galaxies (Hubble type < 0; these have disks, but no spiral arms) as well as actual spiral galaxies (Hubble types >= 0). Note that the $B/T$ axis is on a logarithmic scale. Early-type spirals (e.g., Sa galaxies) have $B/T \sim 0.3$; Sc and later spirals typically have $B/T < 0.1$.
Peter Erwin
$\begingroup$ Yes… You're right of course, the quantity asked for is bulge-to-total and not bulge-to-disk. And I think you're right that the OP was looking for mass ratios — I interpreted "how big" as referring to size. $\endgroup$
Internet Medical Association
Promoting the safe and responsible practice of medicine online since 1996.
New evidence on the Affordable Care Act: coverage impacts of early Medicaid expansions.
Health Aff (Millwood). 2014 Jan;33(1):78-87
Authors: Sommers BD, Kenney GM, Epstein AM
The Affordable Care Act expands Medicaid in 2014 to millions of low-income adults in states that choose to participate in the expansion. Since 2010 California, Connecticut, Minnesota, and Washington, D.C., have taken advantage of the law's option to expand coverage earlier to a portion of low-income childless adults. We present new data on these expansions. Using administrative records, we documented that the ramp-up of enrollment was gradual and linear over time in California, Connecticut, and D.C. Enrollment continued to increase steadily for nearly three years in the two states with the earliest expansions. Using survey data on the two earliest expansions, we found strong evidence of increased Medicaid coverage in Connecticut (4.9 percentage points) and positive but weaker evidence of increased coverage in D.C. (3.7 percentage points; p = 0.08). Medicaid enrollment rates were highest among people with health-related limitations. We found evidence of some crowd-out of private coverage in Connecticut (30-40 percent of the increase in Medicaid coverage), particularly for healthier and younger adults, and a positive spillover effect on Medicaid enrollment among previously eligible parents.
PMID: 24395938 [PubMed - in process]
A novel genomic signature predicting FDG uptake in diverse metastatic tumors
Aurora Crespo-Jara, Maria Carmen Redal-Peña, Elena Maria Martinez-Navarro, Manuel Sureda, Francisco Jose Fernandez-Morejon, Francisco J. Garcia-Cases, Ramon Gonzalez Manzano† (email author) and Antonio Brugarolas†
†Contributed equally
EJNMMI Research (2018) 8:4
Received: 10 October 2017
Accepted: 27 December 2017
Building a universal genomic signature predicting the intensity of FDG uptake in diverse metastatic tumors may allow us to better understand the biological processes underlying this phenomenon and their requirements for glucose uptake.
A balanced training set (n = 71) of metastatic tumors, including some of the most frequent histologies, with matched PET/CT quantification measurements and whole human genome gene expression microarrays, was used to build the signature. Microarray features were selected exclusively on the basis of their strong association with FDG uptake (as measured by SUVmean35) by means of univariate linear regression. A thorough bioinformatics study of these genes was performed, and multivariable models were built by fitting several state-of-the-art regression techniques to the training set for comparison.
The 909 probes with the strongest association with the SUVmean35 (comprising 742 identifiable genes and 62 probes not matched to a symbol) were used to build the signature. Partial least squares using three components (PLS-3) was the best-performing model in the training dataset cross-validation (root mean square error, RMSE = 0.443) and was validated further in an independent validation dataset (n = 13), obtaining a performance within the 95% CI of that obtained in the training dataset (RMSE = 0.645). Significantly overrepresented biological processes correlating with the SUVmean35 were identified beyond glycolysis, such as ribosome biogenesis and DNA replication (correlating with a higher SUVmean35) and cytoskeleton reorganization and autophagy (correlating with a lower SUVmean35).
PLS-3 is a signature predicting accurately the intensity of FDG uptake in diverse metastatic tumors. FDG-PET might help in the design of specific targeted therapies directed to counteract the identified malignant biological processes more likely activated in a tumor as inferred from the SUVmean35 and also from its variations in response to antineoplastic treatments.
FDG uptake
Genomic signature
Gene expression microarray
2-[18F]Fluoro-2-deoxy-D-glucose (FDG) positron emission tomography (PET) is a metabolic imaging technique commonly used in the clinic to evaluate the extension of primary or metastatic tumors prior to therapy. Another use of this technique that is gaining acceptance in oncology is the assessment of early metabolic response to antineoplastic agents in advanced and metastatic tumors [1, 2].
At the molecular level, FDG uptake has been related mainly to aerobic glycolysis, but a full picture of the different biological pathways involved in this process is currently lacking. While the core molecular machinery of glycolysis is widespread in all tumors, the intensity of FDG uptake is quite variable among different tumor histologies and even among the same tumor histotypes according to specific tumor characteristics [3, 4]. Some studies have correlated tumor FDG uptake with the expression of essential glycolytic enzymes such as hexokinase-2 or related proteins like the glucose transporters Glut1-3 [5–9]. However, a good correlation between these biomarkers and the intensity of FDG uptake is not always found in all tumor types [9]. Preclinical studies in cancer cell lines have also shown that other biological processes can be concomitantly upregulated in the presence of higher FDG uptake in tumors, as happens upon activation of oncogenic pathways such as KRAS, PI3K, and c-MYC [10–12]. All these studies have focused on a limited number of selected genes and on specific tumor types, gathering thus a limited view of the biology of FDG uptake.
As metastases are the main cause of cancer-related death, a growing interest in metastatic cancer has recently been spurred by a more thorough characterization of the genomic landscape of these tumors [13]. Hence, we reasoned that a better understanding of the biological processes involved in FDG uptake could be glimpsed by studying a representative sample of diverse human metastatic tumors, accounting thus for a greater tumor heterogeneity while retaining a number of common processes underlying the biology of FDG uptake beyond glycolysis.
The purpose of the present study was to build a genomic signature able to predict FDG uptake intensity in a diverse population of metastatic tumors, using an unbiased gene expression profiling not limited to a predefined set of genes, but rather using whole human genome gene expression microarrays. To achieve this goal, a methodology different from that used previously in other signatures, which were trained on a single tumor type [14, 15], was required. Individual genes were selected exclusively by their strong association with FDG uptake by means of univariate linear regression. These selected genomic features were then used to build and validate the signature, chosen by comparing several state-of-the-art predictive regression methods. The selected genes would also allow us to delve into the overrepresented biological processes and signaling pathways common to glucose uptake in different metastatic tumors, as well as into the potential protein-protein interaction (PPI) subnetworks found among the selected features.
A deeper knowledge of the metabolic pathways beyond glycolysis involved in FDG uptake might contribute to establish the usefulness of FDG PET/CT in indications such as the evaluation of early metabolic response with different targeted therapies.
The conditions that patients had to meet to enter this study were (a) a diagnosis of metastatic tumor (all were solid except a single patient with non-Hodgkin lymphoma) with a baseline FDG-PET/CT performed to evaluate the extent of disease and, at a later point, treatment response; (b) a fresh-frozen tumor biopsy, taken from the same metastatic location in which FDG uptake was measured, for a gene expression microarray performed within a maximum interval of 8 weeks of the FDG-PET/CT; (c) no chemotherapy treatment in the 3 weeks prior to inclusion in the study; and (d) an active tumor identifiable by FDG-PET (patients without one were excluded). Seventy-one cancer patients, seen between July 2010 and July 2015 at Hospital Quironsalud Torrevieja (Alicante, Spain), met these requirements and were retrospectively evaluated. In 3 of these patients, more than one microarray study had been performed several months apart, but only the first one was included in this study. No other restrictions applied to the patients entering the study on the basis of sex, age, tumor histology, or previous treatments. Informed consent for obtaining the diagnostic-therapeutic biopsy and for undergoing FDG-PET/CT was obtained from the patients included in this study. Approval of this study by the Institutional Review Board of Hospital Quironsalud Torrevieja (Alicante, Spain) was also obtained.
These 71 patients comprised the training set used to build the predictive genomic signature. A balanced proportion of some of the most frequent tumor histologies (eight tumor types comprising from 5 to 9 patients), along with a group of miscellaneous tumor types, constituted this training set (see Table 2). The hypothesis was that we would be able to capture the underlying common biological processes related to the intensity of FDG uptake shared by different solid tumors by selecting the microarray probes most strongly correlated with FDG uptake (by means of univariate linear regression) in order to build a predictive signature (see Additional file 1 for full details). In addition, 14 patients seen at our institution after July 2015 were evaluated prospectively to validate (external validation set) the predictive signature generated with the training set. The signature first underwent an internal validation (tenfold cross-validation × 5 in the training set) to choose the best-performing model among the four tested, as well as to estimate the performance of the signature in a dataset unseen by the model [16], and then an external validation in an independent dataset (not used to build the model) to test it further. One patient was excluded from the external validation set as he was considered a clear-cut outlier, presenting extremely high values of FDG uptake; outliers have detrimental effects both in the generation and in the validation of the model. An outlier was defined as a measurement value of FDG uptake that, taking as reference the values of the training set, was either:
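The internal-validation loop (k-fold cross-validation repeated several times, scored by pooled RMSE over the held-out folds) can be sketched generically; ordinary least squares and the synthetic data below are hypothetical stand-ins for PLS-3 and the microarray data, purely to make the sketch runnable:

```python
import numpy as np

def repeated_kfold_rmse(X, y, fit, predict, k=10, repeats=5, seed=0):
    """Repeated k-fold CV: reshuffle, split into k folds, fit on k-1
    folds, score on the held-out fold; return the pooled RMSE."""
    rng = np.random.default_rng(seed)
    sq_errs = []
    for _ in range(repeats):
        idx = rng.permutation(len(y))
        for fold in np.array_split(idx, k):
            train = np.setdiff1d(idx, fold)
            model = fit(X[train], y[train])
            sq_errs.extend((predict(model, X[fold]) - y[fold]) ** 2)
    return float(np.sqrt(np.mean(sq_errs)))

# Ordinary least squares standing in for the PLS-3 model
def ols_fit(X, y):
    A = np.hstack([X, np.ones((len(X), 1))])  # add intercept column
    return np.linalg.lstsq(A, y, rcond=None)[0]

def ols_predict(beta, X):
    return np.hstack([X, np.ones((len(X), 1))]) @ beta

# Hypothetical near-linear data with small noise
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=100)
cv_rmse = repeated_kfold_rmse(X, y, ols_fit, ols_predict)
```

The pooled RMSE over all held-out folds is the figure of merit used to compare candidate models on the training set.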
≤ P1 − 1.5 × (P3 − P1); or
≥ P3 + 1.5 × (P3 − P1)
where P1 was the 25% percentile and P3 was the 75% percentile.
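This Tukey-style rule translates directly into code; a minimal sketch (the percentile estimator is our assumption, as the text does not specify one):

```python
def is_outlier(value, training_values):
    """The paper's outlier screen: flag a value that falls at or below
    P1 - 1.5*(P3 - P1), or at or above P3 + 1.5*(P3 - P1), with P1/P3
    the 25th/75th percentiles of the training-set uptake values."""
    s = sorted(training_values)
    n = len(s)

    def pct(p):
        # Linear-interpolation percentile (an assumption of this sketch)
        k = (n - 1) * p
        f = int(k)
        c = min(f + 1, n - 1)
        return s[f] + (k - f) * (s[c] - s[f])

    p1, p3 = pct(0.25), pct(0.75)
    iqr = p3 - p1
    return value <= p1 - 1.5 * iqr or value >= p3 + 1.5 * iqr
```

With the same constant (1.5) this is the usual interquartile-range fence used for box plots.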
Among the remaining 13 patients of the validation set, one had a value of FDG uptake just below that of the patient with the lowest value in the training set, although it was not a low outlier as defined here. We called this patient sample an "influential observation." This term is borrowed from the regression argot and used here with a similar meaning: how different a model prediction would be if we were to exclude this observation. After logarithmic transformation, this observation was just slightly outside the low-outlier boundary. Nevertheless, despite the limitations of including such a patient (with an uptake value below the prediction range of the training set and a borderline low outlier) for achieving an accurate prediction, we did not exclude her from the validation set, in order to study how excluding this influential observation affected the predictive accuracy of the signature.
FDG-PET/CT imaging and quantification
All patients fasted for at least 6 h prior to imaging, and pre-examination blood glucose levels were obtained. Patients were injected with 444 MBq (12 mCi) of pyrogen-free 18F-FDG. Imaging was performed 90 (± 10) min later on a Biograph 6 Hi-Rez (Siemens Medical Solutions). Whole-body PET/CT scans were acquired in accordance with the HQT PET protocol. CT data were used for attenuation correction (120 mAs Care Dose; 110 kV, slice 5 mm), and X-ray contrast medium was injected (65 ml ULTRAVIST®, rate 1.6–1.8 ml/s, delay 50 s). All images were iteratively reconstructed using post-emission transmission attenuation-corrected datasets (size 168; zoom 1; full width at half maximum (FWHM) 5.0 mm; iterations 4; subsets 8).
FDG uptake in the biopsied location was quantified. Individual tumor VOIs (volumes of interest) were drawn automatically based on a threshold, one for each patient. A standard VOI analysis tool provided with the scanner was used to calculate the different quantitative parameters obtained (Leonardo workstation; TRUE D Syngo MMWP 2009B). We did not correct for partial volume effects, given the resolution of our Siemens FDG-PET scanner (< 5 mm) and considering that the minimum diameter of all the lesions studied was at least three times the FWHM (> 1.5 cm). The following parameters of FDG quantitation were obtained (as defined below): SUVmax, SUVmean35, SUL, SUVglu, MTV (metabolic tumor volume), TLG (total lesion glycolysis), and tumor-to-background index (T/B).
Microarray processing and statistical methodology
The protocol followed for obtaining the matched biopsies is the usual one at our institution and has been published previously [17]. Total RNA extraction was done with RNAeasy columns (QIAGEN), and the amount obtained was measured with a Nanodrop spectrophotometer (ND-1000). Quality of the RNA was measured with an Agilent 2100 Bioanalyzer. Microarray processing and the statistical methodology used to build and validate the signature are described in Additional file 1.
PET quantification parameters
SUVmean35 was defined, based on our previous study (unpublished data), as the SUV mean in a thresholded VOI (3D isocontour at 35% of the maximum pixel value). To calculate the T/B index, two identical circular ROIs (regions of interest), 50% the size of the corresponding VOIs, were centered on the area of maximum tumor uptake and on a tumor-free neighboring area, respectively.
SUL (SUV normalized to lean body mass) was calculated as follows:

$$ \mathrm{SUL}=\mathrm{LBM}\times \frac{\mathrm{SUVmean35}}{\mathrm{patient\ weight\ (kg)}} $$

where LBM (lean body mass) was calculated according to the formula of Janmahasatian et al. [18]:

$$ \mathrm{LBM}_{\mathrm{male}}=\frac{9270\times \mathrm{patient\ weight\ (kg)}}{6680+216\times \mathrm{BMI}} $$

$$ \mathrm{LBM}_{\mathrm{female}}=\frac{9270\times \mathrm{patient\ weight\ (kg)}}{8780+244\times \mathrm{BMI}} $$

BMI (body mass index): weight/height$^2$ (kg/m$^2$).

SUVglu (SUV corrected for the blood glucose level) was obtained as follows [19]:

$$ \mathrm{SUVglu}=\frac{\mathrm{SUVmean35}\times \mathrm{basal\ glucose\ (mg/dl)}}{100\ \mathrm{mg/dl}} $$

MTV was calculated as the tumor volume in cubic centimeters contained in the 35% thresholded VOI. TLG was calculated as (SUV mean) × (MTV).
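The quantification formulas above translate directly into code; a minimal sketch (the function names are ours, not from the paper):

```python
def lbm_kg(weight_kg, height_m, male):
    """Janmahasatian lean body mass (kg); BMI = weight / height**2."""
    bmi = weight_kg / height_m ** 2
    denom = 6680 + 216 * bmi if male else 8780 + 244 * bmi
    return 9270 * weight_kg / denom

def sul(suvmean35, weight_kg, height_m, male):
    """SUV normalized to lean body mass."""
    return lbm_kg(weight_kg, height_m, male) * suvmean35 / weight_kg

def suvglu(suvmean35, basal_glucose_mg_dl):
    """SUV corrected for the blood glucose level (reference 100 mg/dl)."""
    return suvmean35 * basal_glucose_mg_dl / 100.0

def tlg(suv_mean, mtv_cm3):
    """Total lesion glycolysis = SUV mean times metabolic tumor volume."""
    return suv_mean * mtv_cm3
```

For example, for a hypothetical 80 kg, 1.80 m male the Janmahasatian formula gives an LBM of roughly 62 kg, so SUL scales the SUV by about 62/80.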
Selection of a representative FDG uptake value for the predictive signature
Among the different FDG quantification parameters mentioned above, a thorough descriptive analysis was carried out in the 71 patients belonging to the training set. This preliminary analysis showed that SUVmax, SUVmean35, and SUVglu behaved roughly linearly and had a data distribution close to normal, as demonstrated by normality tests (Shapiro–Wilk and Kolmogorov–Smirnov) and Q-Q plots. The remaining parameters obtained (SUL, MTV, T/B, and TLG) followed neither a normal distribution nor a linear behavior. Since some methods making use of principal components are known to fit such data better, we preferred a representative parameter that was approximately normal and linear, i.e., SUVmax, SUVmean35, or SUVglu as the continuous dependent variable (the response or outcome variable). As expected, these three parameters were strongly correlated; the Pearson correlation coefficient was highest and most significant between SUVmean35 and SUVmax (r = 0.976, p < 0.001). Given this concordance, and to avoid redundancy, SUVmean35 was chosen: the calculation of SUVmax carries a higher intrinsic uncertainty, whereas SUVmean35 has shown better inter- and intra-observer reproducibility in our experience (unpublished data), in agreement with reports recommending the use of the SUV mean in quantifying the biological effects on tumor response [20, 21]. SPSS software version 15.0 for Windows was used for the descriptive statistics.
To achieve a better fit of the SUVmean35 to a normal distribution, and also to bring it into a range similar to that of the predictors (probes), the SUVmean35 underwent a base-2 logarithmic transformation. The transformed data were used in the elaboration of the predictive model. It is important to note that the log-transformed SUVmean35 values of the 71 patients in the training set did not contain any outlier. To improve readability, the term SUV instead of SUVmean35 is used throughout the manuscript.
Feature selection for building the genomic signature
A key factor in the good performance of predictive models containing more features (probes in our case) than observations (patient samples) (i.e., p ≫ N) is the selection of the features most relevant to the response. The algorithm of supervised principal components suggested by Hastie et al. [22] was followed with some modifications. In brief, first, with the predictors standardized, the univariate regression coefficient for the outcome (the SUV) was obtained for each one of the 22,814 filtered probes. Second, reduced matrices were formed including only those features whose regression coefficients exceeded a certain absolute threshold, and the first three principal components of these matrices were calculated; these principal components were then used in a regression model to predict the SUV. The absolute regression coefficient threshold and the number of principal components were chosen by tenfold cross-validation (CV). The functions superpc.cv and superpc.plotcv from the superpc library (created by one of the authors of [22]) of the R statistical environment were used for this purpose. Three principal components and a regression coefficient threshold of ± 1 were selected. The selected threshold included 909 probes, corresponding to 742 genes with a gene symbol and 62 probes without one. As these 62 probes without a symbol might also make an important contribution to the performance of the signature, they were kept as well.
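A minimal NumPy sketch of this supervised principal-components step (the actual analysis used the R superpc package; the synthetic usage data below are illustrative only):

```python
import numpy as np

def supervised_pc_selection(X, y, coef_threshold, n_components=3):
    """Supervised principal components (after Hastie et al.): keep the
    probes whose univariate regression coefficient on the outcome
    exceeds `coef_threshold` in absolute value, then take the leading
    principal components of the reduced, standardized matrix."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)  # standardize probes
    yc = y - y.mean()                                  # center the outcome
    # Univariate least-squares slope of the outcome on each probe
    coefs = Xs.T @ yc / (Xs ** 2).sum(axis=0)
    keep = np.abs(coefs) >= coef_threshold
    Xr = Xs[:, keep]
    # Leading principal components of the reduced matrix via SVD
    U, S, _ = np.linalg.svd(Xr - Xr.mean(axis=0), full_matrices=False)
    scores = U[:, :n_components] * S[:n_components]
    return keep, scores
```

The returned component scores would then feed a regression model for the SUV, with the threshold and number of components tuned by cross-validation as described above.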
All 909 selected probes were included to build the predictive model, considering each one of them individually as a predictor (independent variable). Thus, the statistical models tested were allowed to assign to each probe the most appropriate coefficients (for three of the methods used in this study; see Additional file 1) or proximity measures (for random forest, the fourth method tested, as shown in Additional file 1), with the intention of increasing the overall accuracy of the tested models. Also, bias related to any form of summarization of the probes is avoided in the comparison of the models tested. The full list of the 909 probes along with their regression coefficients is shown in Additional file 2: Table S4. As a measure of the importance of the 909 selected probes, variable importance in projection (VIP) values were calculated using the R library plsVarSel.
Bioinformatics analysis of the selected probes
Hierarchical clustering with the selected 909 probes was performed with the function hclust of R, using the Spearman correlation coefficient as distance metric (more precisely 1—correlation coefficient) and complete linkage. A heatmap was generated with the gplots library from R.
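The same recipe (distance = 1 − Spearman correlation between samples, complete linkage) can be sketched outside R; the naive agglomerative loop below is illustrative only and assumes untied ranks, which is typical for continuous microarray intensities:

```python
import numpy as np

def spearman_distance_matrix(expr):
    """1 - Spearman correlation between samples (rows of `expr`).
    Spearman correlation = Pearson correlation of the rank-transformed
    rows; ranks via double argsort (assumes no ties)."""
    ranks = np.argsort(np.argsort(expr, axis=1), axis=1)
    return 1.0 - np.corrcoef(ranks)

def complete_linkage_clusters(dist, n_clusters):
    """Naive agglomerative clustering with complete linkage: repeatedly
    merge the two clusters whose farthest members are closest."""
    clusters = [[i] for i in range(len(dist))]
    while len(clusters) > n_clusters:
        best = (None, None, np.inf)
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = max(dist[i, j] for i in clusters[a] for j in clusters[b])
                if d < best[2]:
                    best = (a, b, d)
        a, b, _ = best
        clusters[a] += clusters[b]
        del clusters[b]
    labels = np.empty(len(dist), dtype=int)
    for lab, members in enumerate(clusters):
        labels[members] = lab
    return labels
```

The quadratic inner loop is fine at this scale (71 samples); production code would use an optimized routine such as R's hclust or scipy.cluster.hierarchy.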
For a correct interpretation of the results presented in this work, it is worth noting that no patient in this study had an SUV = 0 (or "negative"). Thus, when we speak below (and throughout this work) about genes (or biological processes) positively or negatively correlated with the SUV, what is implied is that positive refers to a higher and negative to a lower SUV. In other words, all the genes and biological processes studied here have a clear relationship with the SUV.
The DAVID Bioinformatics Resources 6.7 (https://david.ncifcrf.gov/) was used to study the biological processes overrepresented among the signature selected genes. To include additional biological processes less represented in DAVID, another public resource used was the Consensus Pathway Database, release 31 (http://ConsensusPathDB.org) as a complement.
For the study and identification of potential protein-protein interaction (PPI) networks among the genes selected for the predictive signature, all the genes in each subnetwork (positive and negative correlation with the SUV) were first mapped to their respective protein products using the bioinformatics resource STRING 10.0 (https://string-db.org). The threshold used to establish the edges (interactions) among the nodes (proteins) of the PPI networks was 0.7 ("high confidence"). Two subnetworks were studied separately: one built using the genes with positive and another with those with negative correlation with the SUV (according to the sign of their regression coefficients). In addition, hierarchical clustering using a fastgreedy algorithm (done separately on the two subnetworks) was carried out with the library STRINGdb (http://www.bioconductor.org), used as an API to the STRING database, and igraph from R, in order to assign membership in the two subnetworks obtained. The study of network characteristics, such as those related to centrality and connectivity, was done with the igraph library from R in the two subnetworks obtained.
Gene Set Enrichment Analysis (GSEA) was done using the method single-sample GSEA (ssGSEA) as implemented in the library GSVA (function gsva, method = "ssgsea") from R. Default parameters of this method were used as described by Barbie et al. [23]. This method was applied to all the normalized and filtered microarray intensity data after summarization of the 22,814 probes in the training set (retaining only the maximum intensity value for those genes represented by more than one probe and eliminating those probes without a gene symbol). The C2 subset (curated gene sets) from the Molecular Signatures Database (MSigDB) v5.1 maintained by the Broad Institute (http://software.broadinstitute.org/gsea/msigdb/collections.jsp) was used. The scores obtained with ssGSEA for each patient and each signature used in the training set were then pairwise correlated independently with the corresponding transformed SUV values of each patient (Pearson correlation), and the corresponding correlation coefficients and probabilities were obtained for each signature of the C2 subset. To gain further insight into some specific findings obtained with the C2 subset of MSigDB v5.1, other subsets from this database were also used such as the H, C5, and C6 subsets. Using the same ssGSEA methodology described, we also used the 10 genesets containing highly selective and specific genes for 10 different cell populations. These genesets have been validated extensively in thousands of different human solid tumors (> 19,000) to estimate the abundance of immune and non-immune cells and have also been shown to have a good correlation with immunohistochemistry [24].
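The last step above, Pearson-correlating each gene set's per-patient enrichment scores with the transformed SUV values, is compact enough to sketch directly (the ssGSEA scoring itself is not reproduced here, and the data in the usage example are hypothetical):

```python
import numpy as np

def correlate_with_suv(scores, suv):
    """Pearson r between each gene set's per-patient enrichment scores
    (rows of `scores`, shape n_genesets x n_patients) and the
    log2-transformed SUV values (length n_patients)."""
    s = scores - scores.mean(axis=1, keepdims=True)
    v = suv - suv.mean()
    return (s @ v) / (np.linalg.norm(s, axis=1) * np.linalg.norm(v))
```

Gene sets with strongly positive r would then be read as processes associated with a higher SUV, and strongly negative r with a lower SUV.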
FDG-PET quantification
The characteristics of the patients (demographics and quantification data) and biopsies are shown in Tables 1 and 2. The detailed tumor histologies can be found in Additional file 3: Table S6. No statistically significant differences were found between the training and validation sets (Mann-Whitney U test, p > 0.05 for all variables shown). The only difference from the training set was the inclusion of two aggressive locally advanced primary tumors in the validation set: a patient with a pancreatic adenocarcinoma and another with a bile duct carcinoma.
Table 1. Demographics and quantification data in the training and validation sets; mean and range values are given.

                                   Training set (n = 71)    Validation set (n = 13)
Females/males
Age (years)                        52.3 (29.5–81.8)
Baseline blood glucose (mg/dl)     100.8 (66–149)
Injected dose (mCi)                11.5 (9.9–13.4)          11.3 (10.0–12.9)
PET quantification data:
Diameter of the lesion (cm)
SUVmax
SUVmean35                          5.1 (2.3–8.8)
SUVglu                             6.7 (2–14.9)
MTV (cm3) (a)                      45.2 (0.7–434)           197.4 (2.1–1009)
TLG (a)                            358.7 (2.3–3958.1)       1784 (4.1–9058.1)
T/B

Abbreviations: LBM lean body mass, SUVmax maximum standard uptake value, SUVmean35 mean standard uptake value in the 35% thresholded VOI, SUVglu standard uptake value corrected for plasma glucose levels, SUL standard uptake value normalized by lean body mass, MTV metabolic tumor volume, TLG total lesion glycolysis, T/B tumor-to-background ratio

(a) Missing data: 3 in the training set and 1 in the validation set
Table 2. Tumor histologies and locations of the biopsies obtained for microarray analysis of the patients in the training set (total n = 71; values given as counts and percentages). Histology categories included, among others, genitourinary tumors and carcinoma of unknown primary (CUP); biopsy locations included retroperitoneal, lymphadenopathy, head and neck mucosa, pleural, and mediastinum.
Hierarchical clustering with the probes selected for the elaboration of the signature
Hierarchical clustering was performed to check whether the selected probes (the 909 most strongly correlated with FDG uptake as measured by the SUV) were able to discriminate groups of patients in the training set according to SUV values rather than to other clinical or pathological data. Figure 1 shows a heatmap of the hierarchical clustering of patient samples with the 909 probes (as described in the "Methods" section). Five main clusters could easily be distinguished (C1 to C5 in Fig. 1). The average SUV values of the samples in the five clusters (Table 3a) differed significantly by one-way ANOVA (p = 0.001). These differences were driven by the higher average SUV of C1 samples versus the remaining clusters (C1 vs C2, C1 vs C3, C1 vs C4, and C1 vs C5; p < 0.05 for all comparisons by Student's t test). The C2 vs C5 comparison was close to significance (t test, p = 0.076). This unsupervised methodology is therefore able to discriminate clusters of patients with significantly different average SUV values. Furthermore, none of the major tumor types in this series (for example breast, colorectal, genitourinary, ovarian, or lung cancers, or soft tissue sarcomas) was grouped in a single cluster. As a group, lung cancers (7 patients) had a significantly higher average SUV value than most other major tumor types in our training set, yet they were evenly distributed across three different clusters. The remaining well-represented tumor types had average SUV values that did not differ significantly among them, and they were nevertheless distributed across two or more clusters.
In addition, no statistically significant differences were found between the average SUV values of metastatic tumors located in the liver (liver biopsies) and those of the remaining metastatic locations (t test, p = 0.34). Likewise, the samples from liver metastases were widely distributed among the five clusters. Overall, these results point to the suitability of these genes as building blocks of a multivariable model to predict the SUV. As a control, hierarchical clustering with the same methodology was also applied to the training set using all the filtered, unselected probes to identify five clusters. However, the SUV averages of the clusters identified with the unselected probes were not significantly different by one-way ANOVA (p = 0.357), as shown in Table 3b.
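The cluster-then-test procedure used here (cut the dendrogram into five clusters, then compare the clusters' mean SUVs by one-way ANOVA) can be sketched as below. This is a hedged illustration only: the paper's exact clustering settings are in its "Methods" section, and synthetic data stand in for the 909-probe expression matrix, so the p value here is not meaningful.

```python
# Minimal sketch of the cluster-then-ANOVA check with synthetic data.
# Ward linkage is an assumption for illustration; the paper's actual
# clustering parameters are described in its "Methods" section.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
n_patients, n_probes = 71, 909
X = rng.normal(size=(n_patients, n_probes))        # standardized probe matrix (synthetic)
suv = rng.uniform(2, 15, n_patients)               # SUV per patient (synthetic)

Z = linkage(X, method="ward")                      # hierarchical clustering of samples
labels = fcluster(Z, t=5, criterion="maxclust")    # cut the dendrogram into 5 clusters

groups = [suv[labels == k] for k in range(1, 6)]
f_stat, p_value = f_oneway(*groups)                # one-way ANOVA of SUV across clusters
```

With the real selected probes this test gave p = 0.001, versus p = 0.357 with all 22,814 unselected probes.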
Hierarchical clustering and heatmap of samples in the training set with the 909 probes of the signature. Microarray samples of the 71 patients in the training set are in columns and standardized probes in rows. The five sample clusters obtained are denoted by C1 to C5 in the upper part of the dendrogram
SUVmean35 (SUV) averages, standard deviations (SD), minimum and maximum values of the samples of each of the five clusters identified using the indicated number of probes in the training set
Average SUV
Maximum SUV
a) 909 selected probes
One way ANOVA, p = 0.001
b) 22,814 unselected probes
Biological processes related to the selected genes
Tables 4 and 5 show the top 20 most significantly overrepresented biological processes related to the genes with positive and negative correlation with the SUV, respectively. Among the processes positively correlated with the SUV, RNA processing, ncRNA processing, RNA splicing, ribosome biogenesis, and protein amino acid N-linked glycosylation via asparagine were noteworthy. All of these are preliminary, required steps leading to the synthesis and processing of proteins; cellular growth rate is directly proportional to the number of new ribosomes formed in a cell [25]. Among the processes negatively correlated with the SUV, cell adhesion, actin cytoskeleton organization and its regulation, regulation of the glycogen biosynthetic process, and ruffle organization were noticeable.
Biological processes overrepresented in the genes with positive correlation with the SUV (from DAVID Bioinformatics Resources 6.7)
Benjamini
GO:0006396~RNA processing
1.87E−08
GO:0022613~ribonucleoprotein complex biogenesis
GO:0034470~ncRNA processing
GO:0034660~ncRNA metabolic process
GO:0046148~pigment biosynthetic process
GO:0008380~RNA splicing
GO:0016071~mRNA metabolic process
GO:0018279~protein amino acid N-linked glycosylation via asparagine
GO:0018196~peptidyl-asparagine modification
GO:0042254~ribosome biogenesis
GO:0042440~pigment metabolic process
GO:0006397~mRNA processing
GO:0009101~glycoprotein biosynthetic process
GO:0070085~glycosylation
GO:0006486~protein amino acid glycosylation
GO:0043413~biopolymer glycosylation
GO:0065003~macromolecular complex assembly
GO:0006487~protein amino acid N-linked glycosylation
GO:0008033~tRNA processing
GO:0000375~RNA splicing, via transesterification reactions
Biological processes overrepresented in the genes with negative correlation with the SUV (from DAVID Bioinformatics Resources 6.7)
GO:0007160~cell-matrix adhesion
GO:0031589~cell-substrate adhesion
GO:0030029~actin filament-based process
GO:0007155~cell adhesion
GO:0022610~biological adhesion
GO:0007015~actin filament organization
GO:0030036~actin cytoskeleton organization
GO:0051493~regulation of cytoskeleton organization
GO:0051017~actin filament bundle formation
GO:0005979~regulation of glycogen biosynthetic process
GO:0032885~regulation of polysaccharide biosynthetic process
GO:0010962~regulation of glucan biosynthetic process
GO:0048771~tissue remodeling
GO:0032881~regulation of polysaccharide metabolic process
GO:0031529~ruffle organization
GO:0043244~regulation of protein complex disassembly
GO:0043255~regulation of carbohydrate biosynthetic process
GO:0008015~blood circulation
GO:0003013~circulatory system process
GO:0035150~regulation of tube size
We also checked the Consensus Pathway database with the same genes (see Additional file 4: Table S1). Other processes of potential interest not identified by DAVID were noted (all with q ≤ 0.1). For the genes positively correlated with the SUV, this database revealed biological processes such as scavenging by class A receptors, DNA replication, and its regulation. Other relevant processes were related to the immune system: PD1 signaling, antigen processing and presentation, CD4 T cell receptor signaling, downstream TCR signaling, and phosphorylation of CD3 and TCR zeta chains, among others. Previous reports have shown that a high glucose uptake is required for T cell activation [26].
Although less statistically significant than the aforementioned biological processes, those related to the energetic metabolism of carbohydrates were apparent: glycolysis, the pentose phosphate cycle, and insulin-mediated glucose transport. In common with DAVID, protein processing and N-linked glycosylation were also apparent. As for the genes negatively correlated with the SUV, a deeper biological insight could be obtained from the Consensus Pathway database (all with q < 0.05). In common with DAVID, regulation of the actin cytoskeleton scored high. A relevant contribution to this regulation can be envisaged in the identified pathways related to the small GTPases RHO, RAC1, and CDC42, known controllers of dynamic processes affecting the cytoskeleton such as the formation of stress fibers (RHO), lamellipodia (RAC1), and filopodia (CDC42), as well as membrane ruffling (RAC1). E-cadherin signaling, integrin and integrin-linked kinase signaling, and focal adhesions also seem relevant to cell adhesion processes. Muscle and smooth muscle contraction processes cannot be overlooked, as they may correlate with some of the cytoskeleton changes and with motility. Eukaryotic translation termination is also worth mentioning. Another potentially relevant group comprises processes related to common downstream signaling by different growth factors, particularly through RAS and the RAF/MAPK cascade, and, last but not least, signaling by the VEGFA-VEGFR2 pathway. Several of the processes mentioned may in fact occur in the tumor microenvironment, like those related to angiogenesis.
Identification of protein-protein interaction (PPI) subnetworks among the selected genes
To identify, in each subnetwork (positive and negative correlation with the SUV), modules of relevant functional and/or physical interactions, assigning membership to each of the interacting proteins, we applied a fastgreedy clustering algorithm, disregarding proteins with no interactions. We then selected the clusters containing at least three proteins in each of the two subnetworks: 10 such clusters were isolated in the subnetwork of proteins positively correlated with the SUV and 16 in the subnetwork of those negatively correlated. All the isolated clusters were highly significant in their PPI enrichment value, as defined by the authors of [27] (PPI enrichment p values ranged from 0.00277 to < 5 × 10−16). This means that there were more interactions among the proteins in each cluster than would be expected by chance in a random set of proteins of similar size drawn from the genome, suggesting functional cooperation among them. Some potentially relevant clusters are shown in Additional file 5: Figure S1. Among the clusters obtained from the subnetwork of genes positively correlated with the SUV, clusters 1, 4, 6, and 8 are shown. Cluster 1 contains proteins related to the folding and processing of proteins (HSP90B1, DNAJA1, CALR), protein transport (CLPB), proteins with special relevance in hypoxia such as HYOU1 (the gene encoding ORP150, which is overexpressed in many tumors and tightly correlated with invasion and tumor progression [28–30]), and proteins involved in protein glycosylation (RPN1, STT3A, MAGT1, and DDOST). Overall, this cluster has to do with different stages of protein processing. Cluster 6 shows ribonucleoproteins with roles in the different stages of preparation of the pre-mRNA, such as assembly (NHP2L1 and HNRNPL), elongation (EFTUD2), and splicing (SUGP1, TXNL4A).
Cluster 4 contains mainly proteins related to the biogenesis of ribosomes (PES1, RRP1, RRP1B, BMS1, EBNA1BP2), and cluster 8 shows glycolytic enzymes (GPI, PFKP, and HK3). Among the clusters obtained from the subnetwork of genes negatively correlated with the SUV in Additional file 5: Figure S1, cluster 11 contains proteins associated with cell adhesion and reorganization of the cytoskeleton (MAPK3, RHOC, RHOA, RAC1, MYL, CTNNB1, and CTNNA1, among others) and also with angiogenesis (TEK, FGFR1, HGF); some of these processes occur in the tumor microenvironment. Cluster 16 contains large ribosomal proteins (RPL9, RPL12, RPL14, RPL19, RPL21, RPL24) and a translation termination factor (ETF1). Of particular interest is cluster 19, containing proteins related to autophagy (ATG5, ATG7, SQSTM1), a homeostatic process that can operate in both tumor and stromal cells under conditions of low nutrient input; in tumors, it has also been related to therapeutic resistance to some antineoplastic agents such as tyrosine kinase inhibitors [31]. It is also worth underlining that autophagy was not identified by any other method used in this study.
Gene Set Enrichment Analysis (GSEA)
To further characterize the biological and signaling pathways related to the uptake of FDG as measured by the SUV, we performed single-sample GSEA (ssGSEA) on our whole filtered training dataset, as explained in the "Methods" section, with the C2 subset of the Molecular Signatures Database (MSigDB) v5.1. The significant results obtained (p < 0.05) are reported in Additional file 6: Table S2. ssGSEA can identify coordinated changes of the genes belonging to a gene set more sensitively than over-representation methods (more centered on individual genes) such as those used above, which could miss some signaling or biological pathways; we therefore used it as a complement to them. The results were nonetheless consistent with those of the DAVID, Consensus Pathways, and STRING PPI databases. To mention just a few, pathways involved in motility (KEGG_vascular_smooth_muscle_contraction, r = − 0.3827), reorganization of the cytoskeleton (KEGG_regulation_of_actin_cytoskeleton, r = − 0.269), cell adhesion (st_integrin_signaling_pathway, r = − 0.3382), and angiogenesis (pid_lymph_angiogenesis_pathway, r = − 0.25) were identified in common as negatively correlated with the SUV in a statistically significant manner. Another significant gene set worth mentioning, positively correlated with the SUV, is reactome_facilitative_na_independent_glucose_transporters (r = 0.2496). ssGSEA also identified statistically significant signaling pathways missed by the other methods. Particularly relevant, activation of c-MYC is positively correlated with the SUV in several of the genesets of the C2 subset of MSigDB (dang_regulated_by_myc_up, r = 0.2839; coller_myc_targets_down, r = − 0.274; and dang_myc_targets_up, r = 0.2436).
To strengthen this relationship, we also used ssGSEA with the C6 (oncogenic signatures) and H (hallmark genesets) subsets of MSigDB v5.1. The single geneset of the C6 subset related to upregulation of MYC, MYC_UP.V1_UP, was significantly associated with the SUV (r = 0.25, p = 0.031), and HALLMARK_MYC_TARGETS_V2 was borderline significantly associated with the SUV (r = 0.23, p = 0.052). The hallmark subset contains just one additional MYC geneset, HALLMARK_MYC_TARGETS_V1, which, although not significant, was also positively correlated with the SUV (r = 0.12). Overall, these in silico results suggest that MYC targets are positively associated with higher SUV levels, although experimental confirmation would be required for specific tumor types. MYC upregulation is related to metabolic reprogramming in cancer cells, influencing a variety of its aspects [32]. Overall, the biological processes mentioned above seem to fit the hallmark of several of the stress phenotypes of cancer [33].
As mentioned above in the section "Biological processes related to the selected genes," several biological processes related to the immune system were found significantly associated with higher SUV values in the Consensus Pathway database. We explored whether these processes were related to a change in the abundance of specific immune cell populations in association with the SUV. For this purpose, we likewise used ssGSEA with the 10 genesets reported in [24]. A statistically significant association of the abundance of cytotoxic lymphocytes with higher SUV values was found (r = 0.29, p = 0.013). Cytotoxic lymphocytes comprise T cells and NK (natural killer) cells, and this correlation suggests a trend towards recruitment of these cells in tumors with a higher uptake of FDG. The other finding of interest was a borderline statistically significant association of the abundance of endothelial cells with lower SUV values (r = − 0.23, p = 0.055). This result agrees with the findings reported above linking angiogenesis with lower SUV values.
Building a signature to predict the SUV
After the selection of features and the study of their biological meaning, we fitted and compared four different models: partial least squares (PLS), principal components regression (PCR), support vector machine (SVM), and random forest (RF), on the training set (n = 71) by tenfold cross-validation (CV) repeated five times (tenfold CV × 5), selecting for each model the parameters that minimized the RMSE and maximized R2. Fifty resamples per model were generated, with their respective RMSE and R2 values as performance metrics. A summary of the results is shown in Table 6. For pairwise comparisons of the RMSE and R2 values between models, a t test and a Wilcoxon test (both with Bonferroni correction) were used, respectively; the results are shown in Table 7. PLS and PCR were the models with the best performance (lower RMSE and higher R2), with no statistically significant differences between them. For the final selection of the model, we also took into account the number of components needed to achieve the lowest RMSE between the two best performing models. Hence, PLS requiring three components (PLS-3) was preferred over PCR (which required 18) on the basis of the statistical principle of parsimony.
Summary statistics of metrics RMSE and R2 in the four models tested in the training set (50 resamples)
CI (95%)
RMSE
± 0.234
(0.035–0.886)
Results of pairwise comparisons between methods
(t test adjusted p values)
(Wilcoxon test adjusted p values)
PLS vs PCR
PLS vs RF
5.3E−07
PLS vs SVM
PCR vs RF
2.087E−11
5.925E−5
PCR vs SVM
RF vs SVM
Additional file 5: Figure S2 shows the goodness of fit of the predictions of the SUV made by PLS-3 in the training set, as this fit would be used to validate the model in our independent validation set (n = 13). As a more realistic estimate of the goodness of fit in an independent validation set, we performed a tenfold CV in the training set (Additional file 5: Figure S2c).
Measurement of performance of the PLS-3 signature in an independent validation set
The characteristics of the patients of the validation set, along with the measured and predicted (by PLS-3) SUV values, are shown in Additional file 7: Table S3. As mentioned in the "Patients and Methods" section, one patient in this validation set can be considered an influential observation, as her measured SUV was below the range of SUVs measured in the training set (patient 9 in Additional file 7: Table S3). The measured SUV in this patient was 1.96, whereas the prediction made by PLS-3 was suboptimal: 6.43 (i.e., 3.28-fold higher than the measured value). Taking this patient into account, the RMSE obtained in the validation set (n = 13) was 0.645, which is nevertheless within the 95% confidence interval of the tenfold CV × 5 used to select the best model (see Table 6). Excluding this patient, the RMSE of the validation set (n = 12) would be 0.454, quite similar to the mean RMSE estimated by tenfold CV × 5 (0.443, as seen in Table 6). Therefore, the estimates of performance obtained by tenfold CV × 5 accurately predicted the performance of the signature in our independent test set, and the inclusion of the influential observation worsens the performance of the signature. Using as a benchmark the RMSE values of the validation set with (n = 13) and without (n = 12) the influential observation for the 909 probes of the original signature, we next tested the performance and stability of the PLS-3 signature by both reducing and increasing the number of probes.
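The effect of a single influential observation on the RMSE is easy to illustrate. The sketch below uses entirely synthetic SUV-like values, with one large residual mimicking the influential patient; the numbers are not the paper's (which are on a transformed SUV scale).

```python
# Illustration of how one influential observation inflates the RMSE.
# All values are synthetic; index 8 plays the role of the outlier patient.
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error of a set of predictions."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

y_true = np.array([5, 6, 4, 7, 8, 5, 6, 9, 2, 4, 7, 6, 5], dtype=float)
residuals = np.array([0.3, -0.4, 0.2, -0.3, 0.5, -0.2, 0.4, -0.5,
                      4.5,                      # large residual: influential observation
                      0.3, -0.2, 0.4, -0.3])
y_pred = y_true + residuals

full = rmse(y_true, y_pred)                      # all 13 "patients"
mask = np.arange(13) != 8                        # drop the influential observation
trimmed = rmse(y_true[mask], y_pred[mask])       # full > trimmed
```

This mirrors the gap between the reported n = 13 and n = 12 validation RMSEs.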
In addition to the 149 and 249 probe signatures (only probes with positive regression coefficients) and the 201 and 301 probe signatures (only probes with negative regression coefficients) commented on in Additional file 1, we also tested signatures containing probes with an absolute regression coefficient above thresholds varying by 0.1 intervals (i.e., including probes with both positive and negative regression coefficients). Figure 2 shows the RMSE results of the different PLS-3 signatures tested in the validation set with (n = 13) and without (n = 12) the influential observation. All signatures were first trained by PLS-3 in the training set, and the resulting model was then tested in the validation set.
RMSE values in the validation set (with or without influential observation) of PLS-3 signatures with different numbers of probes ((+) only probes with positive regression coefficients, (mix) probes with both positive and negative regression coefficients, and (−) only probes with negative regression coefficients)
It was noticeable that all tested signatures performed worse in the full validation set containing the influential observation (n = 13) than in the same validation set without it (n = 12); it is therefore not advisable to include such observations in future testing of the predictive signature. Furthermore, a trend towards degradation of performance with an increasing number of probes is apparent in the full validation set. A clearer picture emerges from the RMSE results of Fig. 2 when the influential observation is omitted (n = 12). The signatures containing exclusively probes with positive regression coefficients (149 and 249 probes) performed worse than all the rest (RMSE = 0.59 and 0.57, respectively). The signatures containing exclusively probes with negative regression coefficients (201 and 301 probes) performed best (RMSE = 0.39 and 0.38, respectively), better than the signatures containing a mixture of probes with positive and negative regression coefficients. Moreover, the performance of the original signature (909 probes) was not degraded by either increasing (the 0.8 and 0.9 threshold signatures) or reducing (the 1.1 and 1.2 threshold signatures) the number of probes: the RMSE for the original PLS-3 signature (1.0 threshold) was approximately the same (between 0.45 and 0.46) as for the 0.8 (2248 probes), 0.9 (1461 probes), 1.1 (547 probes), and 1.2 (323 probes) threshold signatures. Only the 1.3 threshold signature (174 probes) performed worse (RMSE = 0.52).
Evaluation of the importance of the probes comprising the PLS-3 signature by using the variable importance of projection (VIP)
As a measurement of the importance of the PLS-3 probes, we estimated the VIP values of the third component of our PLS model, which evaluate the relative contribution of each probe to the model. Additional file 8: Table S5 lists the 320 probes of the signature with VIP values ≥ 1, a threshold frequently chosen to select the features that contribute most to PLS models; the higher the VIP value of a predictor (in our case, a probe), the more relevant it is for the PLS model. In Additional file 8: Table S5, 14 probes not matched to a symbol have VIP values ≥ 1, and 2 of them are among the top 10 probes with the highest VIP values. Interestingly, the proportion of probes not matched to a symbol in the whole signature is not significantly different from that proportion among the selected probes with VIP values ≥ 1 (p = 0.137, Fisher exact test). The selection of probes by VIP value is thus not enriched in probes matched to a symbol relative to the whole 909-probe signature; unmatched probes therefore appear to contribute to the PLS-3 model as much as matched probes with similar VIP values.
Just to mention a few putative relevant genes among the probes matched to a symbol, TEK (also called TIE2) scores high (VIP = 2.06). TEK is a kinase that is expressed in endothelial cells and is involved in angiogenesis. Other selected probes are also related to signaling pathways involving angiogenesis such as NRP2, HGF, and FGFR1. EDNRA and EDNRB have also been found expressed in endothelial cells in different human tumors and a role in angiogenesis and cancer metastasis has been reported [34, 35]. It is also interesting to find RHOJ that is known to be enriched in tumor endothelial cells and to be involved in their motility and in tumor progression, and it is also considered a selective antiangiogenic target [36]. Several genes related to autophagy (ATG7, SQSTM1, ULK2), the glycolytic enzyme HK3 and cytoskeleton organization like RAC1 among others were also selected.
One of the most relevant contributions of this study is the generation and validation of a novel genomic signature, using regression methods not previously reported, for the prediction of FDG uptake in diverse metastatic tumors. We reasoned that by predicting the intensity of FDG uptake, we could better understand the glucose requirements of the biological processes operating in different metastatic tumors. The best performing model in our dataset was PLS-3. We acknowledge that the small size of our independent validation set (n = 13) may be a limitation of this study. Nevertheless, the PLS-3 signature was also thoroughly validated by tenfold CV × 5 in a balanced training set (n = 71), and tenfold CV × 5 accurately estimated the performance of the signature in the independent validation set. The PLS-3 signature is highly stable, as shown by its similar performance in the validation set over a wide range of probe numbers (from 323 to 2248; see Fig. 2, without the influential observation). Using a reduced version of the signature with fewer probes therefore also appears feasible.
It was also of interest that PLS-3 explained 89% of the SUV variance. It must be taken into account that the SUV has some known and definite sources of irreducible error. These include methodological aspects related to the preparation of the patient and his/her own physiology, as well as those related to the quantification and processing of the PET study, which together can account for up to 20% of the variation in the acquired SUV value [1]. The interobserver reproducibility of the SUV (≈ 10%) is also a known issue [37].
It was of interest to note that biological processes previously described as relevant to FDG uptake, like glycolysis and glucose transport, were identified in the bioinformatics analysis carried out in the present work. These findings lend credence to the methodological approach followed in this study. Moreover, another relevant contribution of this study is the identification in a variety of metastatic tumors of multiple common biological processes beyond glycolysis correlated with different SUV values (higher or lower) through a bioinformatics analysis of the signature genes. The knowledge acquired in this study could be of use to design specific targeted therapies based on SUV values or on its variation in response to antineoplastic treatments.
As could be expected, the different bioinformatics methods used in this study (DAVID, the Consensus Pathway and STRING databases, and ssGSEA) show some overlap in the biological processes identified, but they are complementary, as each also finds relevant biological processes not identified by the others; their combined use is therefore worthwhile for a thorough view of the biological landscape of FDG uptake. Among the biological processes correlating with lower SUV values, there seems to be a preponderance of processes related to the tumor microenvironment and its interaction with tumor cells (such as cell adhesion, cell motility, cytoskeleton reorganization, autophagy, and lymphangiogenesis). Of special interest are the processes related to angiogenesis and specifically neolymphangiogenesis. Recently, it has been shown that in primary melanomas, distant lymph nodes and organs may increase their lymphatic vessel density as a pre-metastatic niche that favors and promotes distant metastases through tumor secretion of midkine (encoded by MDK) [38]. Consistent with these data, MDK is weakly correlated with lower SUV values in our training dataset, and this heparin-binding factor is known to be secreted by several cancer types (e.g., pancreatic carcinoma) [38]. Hence, it could be inferred that neolymphangiogenesis occurs not only at the pre-metastatic stage but also at the initial stages of the metastatic process, when apparently a lower uptake of glucose is required and/or available.
Among the biological processes correlated with higher SUV values, the predominant biological processes have to do mainly with the tumor compartment (like ribosome biogenesis, DNA replication, and RNA processing and splicing) and also with the immune system. These data suggest that as metastatic tumors evolve through the acquisition of new mutations towards more advanced stages of the metastatic process, which seem to require a higher glucose uptake, they also generate neoantigens capable of inducing effective T cell responses which may compete with the tumor for a higher glucose uptake.
To the best of our knowledge, only a few previous reports have characterized signatures predicting FDG uptake from microarray data [14, 15, 39]. In common with our study, Palaskas et al. [39] used samples (clinical and cell lines) from different histological origins finding in all, enrichment of glucose metabolic pathways (e.g., glycolysis/gluconeogenesis, pentose phosphate pathway) in samples with "high" versus "low" FDG uptake. They also elaborated a classifier by weighted gene voting [40] using as training set 11 primary breast cancer patients (5 with "high" and 6 with "low" FDG uptake) and tested it in 7 breast cancer cell lines. Although with different methodology, we also found among our samples from a diverse variety of metastatic tumors, most of the same top enriched metabolic pathways, including upregulation of MYC.
The other two studies on signatures predicting FDG uptake in non-small cell lung cancer (NSCLC), which included regression analyses, were from the same group [14, 15]. In [14], models were built to predict radiologic image features (114 features plus PET SUVmax) in terms of 56 metagenes (defined as the first principal component of each of the 56 most homogeneous clusters of coexpressed genes) derived from matched microarray and CT imaging data. Developing this model further to predict 14 FDG uptake features specifically, Nair et al. [15] trained a linear regression model in their study cohort of NSCLC patients (n = 25) using the metagenes most significantly associated with the FDG uptake features. The range of accuracies reported in their study cohort (as defined in [14]) was 0.725 to 0.875; with this metric, higher accuracy values indicate better model performance (maximum around 1). Using the same metric (accuracy) in our dataset instead of the RMSE, we obtained values of 0.95 in our training set and 0.78 in our validation set, which compare favorably with those reported by Nair et al. [15].
Only one of the signatures reported in [15] was found statistically significant in a multivariable Cox regression model in their external cohort (n = 63), but not in their validation cohort (n = 84). This signature predicted the SUVmax by means of a linear regression of 15 metagenes, comprising 508 genes.
We wondered whether the genes selected for the elaboration of our predictive signature were able, in an unsupervised manner, to separate patients with different prognoses. For this purpose, we performed hierarchical clustering in a previously published large microarray series of primary breast cancer patients (n = 850) with available distant metastasis-free survival (DMFS) data [41], using the same clustering technique as in the training set of this study. Statistically significant differences in DMFS were found among the clusters identified with the signature probes (log-rank test, p = 0.001). However, in agreement with Nair et al. [15] in NSCLC, a multivariable Cox regression analysis adjusted for known prognostic factors in breast cancer failed to show an independent prognostic value in this primary breast cancer series (data not shown).
In summary, we obtained and validated PLS-3, a model that accurately predicts FDG uptake intensity in different metastatic tumors. The PLS-3 genes allowed us to better understand the biological processes underlying the different requirements for FDG uptake in such tumors. FDG-PET might therefore help in the design of specific targeted therapies directed at counteracting the malignant biological processes most likely activated in a tumor, as inferred from the SUV and from its variations in response to antineoplastic treatments.
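As a rough illustration of the kind of model behind PLS-3 (a partial least squares regression with three components predicting SUV from expression data), here is a minimal single-response PLS fitted with the NIPALS algorithm. The data dimensions, probe effects, and noise level are placeholders, and this sketch is not the study's implementation.

```python
import numpy as np

def pls1_fit(X, y, n_components=3):
    """Single-response PLS regression via the NIPALS algorithm."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xk, yk = X - x_mean, y - y_mean
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = Xk.T @ yk                    # covariance-maximizing weight vector
        w /= np.linalg.norm(w)
        t = Xk @ w                       # latent component scores
        tt = t @ t
        p = Xk.T @ t / tt                # X loadings
        q = (yk @ t) / tt                # y loading (scalar)
        Xk = Xk - np.outer(t, p)         # deflate before the next component
        yk = yk - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    B = W @ np.linalg.solve(P.T @ W, Q)  # coefficients mapped back to probe space
    return B, x_mean, y_mean

def pls1_predict(X, B, x_mean, y_mean):
    return (X - x_mean) @ B + y_mean

# Placeholder data: 60 "tumors" x 50 "probes"; SUV driven by two probes.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 50))
suv = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.05 * rng.normal(size=60)
B, xm, ym = pls1_fit(X, suv, n_components=3)
pred = pls1_predict(X, B, xm, ym)
r = float(np.corrcoef(suv, pred)[0, 1])
rmse = float(np.sqrt(np.mean((pred - suv) ** 2)))
```

Reporting both the Pearson correlation of measured vs predicted SUV and the RMSE, as in the goodness-of-fit assessment of Figure S2, follows naturally from the fitted coefficients `B`.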
Ramon Gonzalez Manzano and Antonio Brugarolas contributed equally to this work.
We acknowledge the previous generous contribution of Fundacion Tedeca for the acquisition of the microarray platform used in this study. The authors also acknowledge the payment by Fundacion Tedeca of the article processing charges of this study.
ACJ carried out PET studies, calculated the FDG uptake measurements, collaborated in part of the statistical analysis related to the FDG uptake measurements, and participated in the design and conception of the study and in drafting the manuscript. MCRP carried out the PET studies and calculated FDG uptake measurements. EMMN participated in carrying out the gene expression microarrays. MS contributed patients to the study and helped to draft the manuscript. FJFM performed the pathological evaluation of the suitability of the biopsies used for microarray analysis. FJGC participated in the calculation of FDG uptake measurements. RGM participated in carrying out the gene expression microarrays, performed microarray statistical analysis, elaborated the genomic signature from microarray data and carried out the related statistical analysis, participated in the design and conception of the study, and helped to draft the manuscript. AB contributed patients, participated in the design and conception of the study, and helped to draft the manuscript. All authors read and approved the final manuscript.
All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. Informed consent was obtained from all individual participants included in the study.
Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Additional file 1: Supplementary Methods. (DOCX 29 kb)
Additional file 2: Table S4. Complete list of the 909 probes selected for the generation of the multivariable model along with their correspondent regression coefficient. (DOCX 110 kb)
Additional file 3: Table S6. Detailed tumor histologies of the patients in the training and validation datasets. (DOCX 17 kb)
Additional file 4: Table S1. Biological processes related to the signature genes (Consensus Pathway Database, release 31 (http://ConsensusPathDB.org)). (DOCX 33 kb)
Additional file 5: Figure S1. Selected clusters identified in the Protein-Protein Interaction (PPI) subnetworks obtained from the signature genes in the STRING 10.0 PPI database. Figure S2. Goodness of fit of PLS-3 in the training set. a) Goodness of fit, including Pearson correlation of measured vs predicted SUV values. b) Residuals of the third component; no pattern is apparent in the residuals distribution. c) Estimated goodness of fit after 10-fold CV. (ZIP 242 kb)
Additional file 6: Table S2. Correlation coefficient (CC) on SUV of ssGSEA scores with the C2 subset from the MSigDB v5.1 in the training dataset (p < 0.05) (DOCX 31 kb)
Additional file 7: Table S3. Characteristics of the patients in the validation set along with their measured and predicted (SUVPLS) SUV values. (DOCX 13 kb)
Additional file 8: Table S5. List of PLS-3 probes with VIP values equal or greater than 1 along with their regression coefficients. (XLSX 34 kb)
Plataforma de Oncologia, Hospital Quironsalud Torrevieja, Pda. La Loma s/n, 03184 Torrevieja, Alicante, Spain
Catedra Oncologia Multidisciplinar, Universidad Catolica de Murcia, Murcia, Spain
Shankar LK, Hoffman JM, Bacharach S, Graham MM, Karp J, Lammertsma AA, et al. Consensus recommendations for the use of 18F-FDG PET as an indicator of therapeutic response in patients in National Cancer Institute Trials. J Nucl Med. 2006;47(6):1059–66.
Wahl RL, Jacene H, Kasamon Y, Lodge MA. From RECIST to PERCIST: evolving considerations for PET response criteria in solid tumors. J Nucl Med. 2009;50(Suppl 1):122S–50S.
Adler LP, Crowe JP, al-Kaisi NK, Sunshine JL. Evaluation of breast masses and axillary lymph nodes with [F-18] 2-deoxy-2-fluoro-D-glucose PET. Radiology. 1993;187(3):743–50.
Avril N, Menzel M, Dose J, Schelling M, Weber W, Janicke F, et al. Glucose metabolism of breast cancer assessed by 18F-FDG PET: histologic and immunohistochemical tissue analysis. J Nucl Med. 2001;42(1):9–16.
Bos R, van Der Hoeven JJ, van Der Wall E, van Der Groep P, van Diest PJ, Comans EF, et al. Biologic correlates of (18)fluorodeoxyglucose uptake in human breast cancer measured by positron emission tomography. J Clin Oncol. 2002;20(2):379–87.
Higashi T, Saga T, Nakamoto Y, Ishimori T, Mamede MH, Wada M, et al. Relationship between retention index in dual-phase (18)F-FDG PET, and hexokinase-II and glucose transporter-1 expression in pancreatic cancer. J Nucl Med. 2002;43(2):173–80.
Kurokawa T, Yoshida Y, Kawahara K, Tsuchida T, Okazawa H, Fujibayashi Y, et al. Expression of GLUT-1 glucose transfer, cellular proliferation activity and grade of tumor correlate with [F-18]-fluorodeoxyglucose uptake by positron emission tomography in epithelial tumors of the ovary. Int J Cancer. 2004;109(6):926–32.
Mamede M, Higashi T, Kitaichi M, Ishizu K, Ishimori T, Nakamoto Y, et al. [18F]FDG uptake and PCNA, Glut-1, and Hexokinase-II expressions in cancers and inflammatory lesions of the lung. Neoplasia. 2005;7(4):369–79.
van Berkel A, Rao JU, Kusters B, Demir T, Visser E, Mensenkamp AR, et al. Correlation between in vivo 18F-FDG PET and immunohistochemical markers of glucose uptake and metabolism in pheochromocytoma and paraganglioma. J Nucl Med. 2014;55(8):1253–9.
Alvarez JV, Belka GK, Pan TC, Chen CC, Blankemeyer E, Alavi A, et al. Oncogene pathway activation in mammary tumors dictates FDG-PET uptake. Cancer Res. 2014;74(24):7583–98.
Iwamoto M, Kawada K, Nakamoto Y, Itatani Y, Inamoto S, Toda K, et al. Regulation of 18F-FDG accumulation in colorectal cancer cells with mutated KRAS. J Nucl Med. 2014;55(12):2038–44.
Morani F, Phadngam S, Follo C, Titone R, Aimaretti G, Galetto A, et al. PTEN regulates plasma membrane expression of glucose transporter 1 and glucose uptake in thyroid cancer cells. J Mol Endocrinol. 2014;53(2):247–58.
Robinson DR, Wu YM, Lonigro RJ, Vats P, Cobain E, Everett J, et al. Integrative clinical genomics of metastatic cancer. Nature. 2017;548(7667):297–303.
Gevaert O, Xu J, Hoang CD, Leung AN, Xu Y, Quon A, et al. Non-small cell lung cancer: identifying prognostic imaging biomarkers by leveraging public gene expression microarray data—methods and preliminary results. Radiology. 2012;264(2):387–96.
Nair VS, Gevaert O, Davidzon G, Napel S, Graves EE, Hoang CD, et al. Prognostic PET 18F-FDG uptake imaging features are associated with major oncogenomic alterations in patients with resected non-small cell lung cancer. Cancer Res. 2012;72(15):3725–34.
James G, Witten D, Hastie T, Tibshirani R. K-fold cross-validation. In: James G, Witten D, Hastie T, Tibshirani R, editors. An introduction to statistical learning with applications in R. Springer texts in statistics. New York: Springer-Verlag; 2013. p. 181–4.
Rebollo J, Sureda M, Martinez EM, Fernandez-Morejon FJ, Farre J, Munoz V, et al. Gene expression profiling of tumors from heavily pretreated patients with metastatic cancer for the selection of therapy: a pilot study. Am J Clin Oncol. 2017;40(2):140–5.
Janmahasatian S, Duffull SB, Ash S, Ward LC, Byrne NM, Green B. Quantification of lean bodyweight. Clin Pharmacokinet. 2005;44(10):1051–65.
Lee SM, Kim TS, Lee JW, Kim SK, Park SJ, Han SS. Improved prognostic value of standardized uptake value corrected for blood glucose level in pancreatic cancer using F-18 FDG PET. Clin Nucl Med. 2011;36(5):331–6.
de Langen AJ, Vincent A, Velasquez LM, van Tinteren H, Boellaard R, Shankar LK, et al. Repeatability of 18F-FDG uptake measurements in tumors: a metaanalysis. J Nucl Med. 2012;53(5):701–8.
Frings V, de Langen AJ, Smit EF, van Velden FH, Hoekstra OS, van Tinteren H, et al. Repeatability of metabolically active volume measurements with 18F-FDG and 18F-FLT PET in non-small cell lung cancer. J Nucl Med. 2010;51(12):1870–7.
Hastie T, Tibshirani R, Friedman J. High dimensional problems: p >> N. In: Hastie T, Tibshirani R, Friedman J, editors. The elements of statistical learning: data mining, inference and prediction. Springer series in statistics; 2009. p. 677–83.
Barbie DA, Tamayo P, Boehm JS, Kim SY, Moody SE, Dunn IF, et al. Systematic RNA interference reveals that oncogenic KRAS-driven cancers require TBK1. Nature. 2009;462(7269):108–12.
Becht E, Giraldo NA, Lacroix L, Buttard B, Elarouci N, Petitprez F, et al. Estimating the population abundance of tissue-infiltrating immune and stromal cell populations using gene expression. Genome Biol. 2016;17(1):218.
Montanaro L, Trere D, Derenzini M. Nucleolus, ribosomes, and cancer. Am J Pathol. 2008;173(2):301–10.
Frauwirth KA, Riley JL, Harris MH, Parry RV, Rathmell JC, Plas DR, et al. The CD28 signaling pathway regulates glucose metabolism. Immunity. 2002;16(6):769–77.
Szklarczyk D, Franceschini A, Wyder S, Forslund K, Heller D, Huerta-Cepas J, et al. STRING v10: protein-protein interaction networks, integrated over the tree of life. Nucleic Acids Res. 2015;43(Database issue):D447–52.
Kusaczuk M, Cechowska-Pasko M. Molecular chaperone ORP150 in ER stress-related diseases. Curr Pharm Des. 2013;19(15):2807–18.
Slaby O, Sobkova K, Svoboda M, Garajova I, Fabian P, Hrstka R, et al. Significant overexpression of Hsp110 gene during colorectal cancer progression. Oncol Rep. 2009;21(5):1235–41.
Stojadinovic A, Hooke JA, Shriver CD, Nissan A, Kovatich AJ, Kao TC, et al. HYOU1/Orp150 expression in breast cancer. Med Sci Monit. 2007;13(11):BR231–9.
Zou Y, Ling YH, Sironi J, Schwartz EL, Perez-Soler R, Piperdi B. The autophagy inhibitor chloroquine overcomes the innate resistance of wild-type EGFR non-small-cell lung cancer cells to erlotinib. J Thorac Oncol. 2013;8(6):693–702.
Stine ZE, Walton ZE, Altman BJ, Hsieh AL, Dang CV. MYC, metabolism, and cancer. Cancer Discov. 2015;5(10):1024–39.
Luo J, Solimini NL, Elledge SJ. Principles of cancer therapy: oncogene and non-oncogene addiction. Cell. 2009;136(5):823–37.
Kandalaft LE, Facciabene A, Buckanovich RJ, Coukos G. Endothelin B receptor, a new target in cancer immune therapy. Clin Cancer Res. 2009;15(14):4521–8.
Nie S, Zhou J, Bai F, Jiang B, Chen J, Zhou J. Role of endothelin A receptor in colon cancer metastasis: in vitro and in vivo evidence. Mol Carcinog. 2014;53(Suppl 1):E85–91.
Kim C, Yang H, Fukushima Y, Saw PE, Lee J, Park JS, et al. Vascular RhoJ is an effective and selective target for tumor angiogenesis and vascular disruption. Cancer Cell. 2014;25(1):102–17.
FDG-PET/CT Technical Committee. FDG-PET/CT as an imaging biomarker measuring response to cancer therapy, version 1.05, publicly reviewed version. Quantitative Imaging Biomarkers Alliance: RSNA.ORG/QIBA; 2013.
Olmeda D, Cerezo-Wallis D, Riveiro-Falkenbach E, Pennacchi PC, Contreras-Alcalde M, Ibarz N, et al. Whole-body imaging of lymphovascular niches identifies pre-metastatic roles of midkine. Nature. 2017;546(7660):676–80.
Palaskas N, Larson SM, Schultz N, Komisopoulou E, Wong J, Rohle D, et al. 18F-fluorodeoxy-glucose positron emission tomography marks MYC-overexpressing human basal-like breast cancers. Cancer Res. 2011;71(15):5164–74.
Golub TR, Slonim DK, Tamayo P, Huard C, Gaasenbeek M, Mesirov JP, et al. Molecular classification of cancer: class discovery and class prediction by gene expression monitoring. Science. 1999;286(5439):531–7.
Haibe-Kains B, Desmedt C, Loi S, Culhane AC, Bontempi G, Quackenbush J, et al. A three-gene model to robustly identify breast cancer molecular subtypes. J Natl Cancer Inst. 2012;104(4):311–25.
NCGR is proud of our long history of collaborative research, reflected in the diverse areas in which our team has published throughout the life of the organization.
G. Bauchet, K. Bett, C. Cameron, J. Campbell, E. Cannon, S. Cannon, J. Carlson, A. Chan, A. Cleary, T. Close, D. Cook, A. Cooksey, C. Coyne, S. Dash, R. Dickstein, A. Farmer, D. Fernández-Baca, S. Hokin, E. Jones, Y. Kang, M. Monteros, M. Muñoz-Amatriaín, K. Mysore, C. Pislariu, C. Richards, A. Shi, C. Town, M. Udvardi, E. Wettberg, N. Young and P. Zhao.
The future of legume genetic data resources: Challenges, opportunities, and priorities
Legume Science (e16 LEG3-2019-052.R1), n/a(n/a), e16, 2019
Legume Information System Legume Federation PeanutBase Medicago HapMap Project
B. Llamas, G. Narzisi, V. Schneider, P. Audano, E. Biederstedt, L. Blauvelt, P. Bradbury, X. Chang, C. Chin, A. Fungtammasan, W. Clarke, A. Cleary, J. Ebler, J. Eizenga, J. Sibbesen, C. Markello, E. Garrison, S. Garg, G. Hickey, G. Lazo, M. Lin, M. Mahmoud, T. Marschall, I. Minkin, J. Monlong, R. Musunuri, S. Sagayaradj, A. Novak, M. Rautiainen, A. Regier, F. Sedlazeck, J. Siren, Y. Souilmi, J. Wagner, T. Wrightsman, T. Yokoyama, Q. Zeng, J. Zook, B. Paten and B. Busby.
A strategy for building and using a human reference pangenome
F1000Research, 8(1751), 1751, 2019
M. Chadiarakou, A. Sundararajan, I. Lindquist, G. DeFrancesca, M. Kwicklis, D. Lighthall, N. Farmer, M. Shuster and J. Mudge.
MRSA in the NICU: Outbreak or Coincidence?
National Center for Case Study Teaching in Science, http://sciencecases.lib.buffalo.edu/cs/collection/detail.asp?case_id=1002&id=1002, 2018
INBRE Science Tools in the Classroom
S. Hokin and A. Cleary.
Disease Classification with Pan-Genome Frequented Regions and Machine Learning
Gordon Research Conference, 2019
Pan-genomic analysis of complex human diseases Pangenomic Algorithms
S. Lo, M. Munoz-Amatriain, S. Hokin, N. Cisse, P. Roberts, A. Farmer, S. Xu and T. Close.
A genome-wide association and meta-analysis reveal regions associated with seed size in cowpea [Vigna unguiculata (L.) Walp]
Theor. Appl. Genet. (PMID: 31367839), 2019
Legume Information System Legume Federation
S. Scroggs, N. Grubaugh, J. Sena, A. Sundararajan, F. Schilkey, D. Smith, G. Ebel and K. Hanley.
Endless Forms: Within-Host Variation in the Structure of the West Nile Virus RNA Genome during Serial Passage in Bird Hosts
mSphere, 4(3), 2019
INBRE
S. Lonardi, M. Muñoz-Amatriaín, Q. Liang, S. Shu, S. Wanamaker, S. Lo, J. Tanskanen, A. Schulman, T. Zhu, M. Luo, H. Alhakami, R. Ounit, A. Hasan, J. Verdier, P. Roberts, J. Santos, A. Ndeve, J. Doležel, J. Vrána, S. Hokin, A. Farmer, S. Cannon and T. Close.
The genome of cowpea (Vigna unguiculata [L.] Walp.)
The Plant Journal (PMID: 31017340), 98(5), 767--782, 2019
D. Bertioli, J. Jenkins, J. Clevenger, O. Dudchenko, D. Gao, G. Seijo, S. Leal-Bertioli, L. Ren, A. Farmer, M. Pandey, S. Samoluk, B. Abernathy, G. Agarwal, C. Ballén-Taborda, C. Cameron, J. Campbell, C. Chavarro, A. Chitikineni, Y. Chu, S. Dash, M. Baidouri, B. Guo, W. Huang, K. Kim, W. Korani, S. Lanciano, C. Lui, M. Mirouze, M. Moretzsohn, M. Pham, J. Shin, K. Shirasawa, S. Sinharoy, A. Sreedasyam, N. Weeks, X. Zhang, Z. Zheng, Z. Sun, L. Froenicke, E. Aiden, R. Michelmore, R. Varshney, C. Holbrook, E. Cannon, B. Scheffler, J. Grimwood, P. Ozias-Akins, S. Cannon, S. Jackson and J. Schmutz.
The genome sequence of segmental allotetraploid peanut Arachis hypogaea
Nature Genetics (PMID: 31043755), 51(5), 877--884, 2019
Legume Information System Legume Federation PeanutBase
K. Whitney, J. Mudge, D. Natvig, A. Sundararajan, W. Pockman, J. Bell, S. Collins and J. Rudgers.
Experimental drought reduces genetic diversity in the grassland foundation species Bouteloua eriopoda
Oecologia (PMID: 30850884), 189(4), 1107--1120, 2019
X. Sun, W. Chen, S. Ivanov, A. MacLean, H. Wight, T. Ramaraj, J. Mudge, M. Harrison and Z. Fei.
Genome and evolution of the arbuscular mycorrhizal fungus Diversispora epigaea (formerly Glomus versiforme) and its bacterial endosymbionts
New Phytologist (PMID: 30368822), 221(3), 1556--1573, 2019
Medicago HapMap Project
J. Sena, G. Galotto, N. Devitt, M. Connick, J. Jacobi, P. Umale, L. Vidali and C. Bell.
Unique Molecular Identifiers reveal a novel sequencing artefact with implications for RNA-Seq based gene expression analysis
Sci Rep (PMID: 30177820), 8(1), 13121, 2018
Single cell transcriptomics in plant systems
A. Sundararajan, H. Rane, T. Ramaraj, J. Sena, A. Howell, S. Bernardo, F. Schilkey and S. Lee.
Cranberry-derived proanthocyanidins induce a differential transcriptomic response within Candida albicans urinary biofilms
PLOS ONE (PMID: 30089157), 13(8), e0201969, 2018
S. DeVore, C. Young, G. Li, A. Sundararajan, T. Ramaraj, J. Mudge, F. Schilkey, A. Muth, P. Thompson and B. Cherrington.
Histone Citrullination Represses MicroRNA Expression, Resulting in Increased Oncogene mRNAs in Somatolactotrope Cells
Molecular and Cellular Biology (PMID: 29987187), 38(19), 2018
P. Kianian, M. Wang, K. Simons, F. Ghavami, Y. He, S. Dukowic-Schulze, A. Sundararajan, Q. Sun, J. Pillardy, J. Mudge, C. Chen, S. Kianian and W. Pawlowski.
High-resolution crossover mapping reveals similarities and differences of male and female recombination in maize
Nature Communications (PMID: 29915302), 9(1), 2018
H. Tran, T. Ramaraj, A. Furtado, L. Lee and R. Henry.
Use of a draft genome of coffee (Coffea arabica) to identify SNPs associated with caffeine content
Plant Biotechnology Journal (PMID: 29509991), 16(10), 1756--1766, 2018
K. Konganti, F. Guerrero, F. Schilkey, P. Ngam, J. Jacobi, P. Umale, A. Leon and D. Threadgill.
A Whole Genome Assembly of the Horn Fly, Haematobia irritans, and Prediction of Genes with Roles in Metabolism and Sex Determination
G3: Genes|Genomes|Genetics (PMID: 29602812), 8(5), 1675--1686, 2018
A. Cleary, T. Ramaraj, I. Kahanda, J. Mudge and B. Mumey.
Exploring frequented regions in pan-genomic graphs
IEEE/ACM transactions on computational biology and bioinformatics (PMID: 30106690), 2018
H. Castillo, X. Li, F. Schilkey and G. Smith.
Transcriptome analysis reveals a stress response of Shewanella oneidensis deprived of background levels of ionizing radiation
H. Tsujimoto, K. Hanley, A. Sundararajan, N. Devitt, F. Schilkey and I. Hansen.
Correction: Dengue virus serotype 2 infection alters midgut and carcass gene expression in the Asian tiger mosquito, Aedes albopictus
E. Keshishian, H. Hallmark, T. Ramaraj, L. Plačková, A. Sundararajan, F. Schilkey, O. Novák and A. Rashotte.
Salt and oxidative stresses uniquely regulate tomato cytokinin levels and transcriptomic response
Plant Direct (PMID: 31245735), 2(7), e00071, 2018
A. Cleary and A. Farmer.
Genome Context Viewer: visual exploration of multiple annotated genomes using microsynteny
Bioinformatics (PMID: 29194466), 34(9), 1562--1564, 2017
A. Mosbach, D. Edel, A. Farmer, S. Widdison, T. Barchietto, R. Dietrich, A. Corran and G. Scalliet.
Anilinopyrimidine Resistance in Botrytis cinerea Is Linked to Mitochondrial Function
Frontiers in Microbiology (PMID: 29250050), 8, 2017
C. Grover, M. Arick, J. Conover, A. Thrash, G. Hu, W. Sanders, C. Hsu, R. Naqvi, M. Farooq, X. Li, L. Gong, J. Mudge, T. Ramaraj, J. Udall, D. Peterson and J. Wendel.
Comparative Genomics of an Unusual Biogeographic Disjunction in the Cotton Tribe (Gossypieae) Yields Insights into Genome Downsizing
Genome Biology and Evolution (PMID: 29194487), 9(12), 3328--3344, 2017
J. Singh, S. Kalberer, V. Belamkar, T. Assefa, M. Nelson, A. Farmer, W. Blackmon and S. Cannon.
A transcriptome-SNP-derived linkage map of Apios americana (potato bean) provides insights about genome re-organization and synteny conservation in the phaseoloid legumes
Theoretical and Applied Genetics (PMID: 29071392), 131(2), 333--351, 2017
Legume Information System
S. Buddenborg, L. Bu, S. Zhang, F. Schilkey, G. Mkoji and E. Loker.
Transcriptomic responses of Biomphalaria pfeifferi to Schistosoma mansoni: Investigation of a neglected African snail that supports more S. mansoni transmission than any other snail species
PLOS Neglected Tropical Diseases (PMID: 29045404), 11(10), e0005984, 2017
K. Moll, P. Zhou, T. Ramaraj, D. Fajardo, N. Devitt, M. Sadowsky, R. Stupar, P. Tiffin, J. Miller, N. Young, K. Silverstein and J. Mudge.
Strategies for optimizing BioNano and Dovetail explored through a second reference quality assembly for the legume model, Medicago truncatula
BMC Genomics (PMID: 28778149), 18(1), 2017
D. Lightfoot, D. Jarvis, T. Ramaraj, R. Lee, E. Jellen and P. Maughan.
Single-molecule sequencing and Hi-C-based proximity-guided assembly of amaranth (Amaranthus hypochondriacus) chromosomes provide insights into genome evolution
BMC Biology (PMID: 28854926), 15(1), 2017
R. Barrero, F. Guerrero, M. Black, J. McCooke, B. Chapman, F. Schilkey, A. León, R. Miller, S. Bruns, J. Dobry, G. Mikhaylenko, K. Stormo, C. Bell, Q. Tao, R. Bogden, P. Moolhuijzen, A. Hunter and M. Bellgard.
Gene-enriched draft genome of the cattle tick Rhipicephalus microplus: assembly by the hybrid Pacific Biosciences/Illumina approach enabled analysis of the highly repetitive genome
International Journal for Parasitology (PMID: 28577881), 47(9), 569--583, 2017
J. Miller, P. Zhou, J. Mudge, J. Gurtowski, H. Lee, T. Ramaraj, B. Walenz, J. Liu, R. Stupar, R. Denny, L. Song, N. Singh, L. Maron, S. McCouch, W. McCombie, M. Schatz, P. Tiffin, N. Young and K. Silverstein.
Hybrid assembly with long and short reads improves discovery of gene family expansions
D. Neupane, B. Jacquez, A. Sundararajan, T. Ramaraj, F. Schilkey and E. Yukl.
Zinc-Dependent Transcriptional Regulation in Paracoccus denitrificans
P. Zhou, K. Silverstein, T. Ramaraj, J. Guhlin, R. Denny, J. Liu, A. Farmer, K. Steele, R. Stupar, J. Miller, P. Tiffin, J. Mudge and N. Young.
Exploring structural variation and gene family architecture with De Novo assemblies of 15 Medicago genomes
Legume Federation Medicago HapMap Project
M. Munoz-Amatriain, H. Mirebrahim, P. Xu, S. Wanamaker, M. Luo, H. Alhakami, M. Alpert, I. Atokple, B. Batieno, O. Boukar, S. Bozdag, N. Cisse, I. Drabo, J. Ehlers, A. Farmer, C. Fatokun, Y. Gu, Y. Guo, B. Huynh, S. Jackson, F. Kusi, C. Lawley, M. Lucas, Y. Ma, M. Timko, J. Wu, F. You, N. Barkley, P. Roberts, S. Lonardi and T. Close.
Genome resources for climate-resilient cowpea, an essential crop for food security
Plant J. (PMID: 27775877), 89(5), 1042--1054, 2017
Dengue virus serotype 2 infection alters midgut and carcass gene expression in the Asian tiger mosquito, Aedes albopictus
K. Bayha, N. Ortell, C. Ryan, K. Griffitt, M. Krasnec, J. Sena, T. Ramaraj, R. Takeshita, G. Mayer and R. Schilkey.
Crude oil impairs immune function and increases susceptibility to pathogenic bacteria in southern flounder
L. Chaney, R. Mangelson, T. Ramaraj, E. Jellen and P. Maughan.
The Complete Chloroplast Genome Sequences for FourAmaranthusSpecies (Amaranthaceae)
Applications in Plant Sciences (PMID: 27672525), 4(9), 1600063, 2016
R. Isaza, C. Diaz-Trujillo, B. Dhillon, A. Aerts, J. Carlier, C. Crane, T. Jong, I. Vries, R. Dietrich, A. Farmer, C. Fereira, S. Garcia, M. Guzman, R. Hamelin, E. Lindquist, R. Mehrabi, O. Quiros, J. Schmutz, H. Shapiro, E. Reynolds, G. Scalliet, M. Souza, I. Stergiopoulos, T. Lee, P. Wit, M. Zapater, L. Zwiers, I. Grigoriev, S. Goodwin and G. Kema.
Combating a Global Threat to a Clonal Crop: Banana Black Sigatoka Pathogen Pseudocercospora fijiensis (Synonym Mycosphaerella fijiensis) Genomes Reveal Clues for Disease Control
PLOS Genetics (PMID: 27512984), 12(8), e1005876, 2016
R. Horn, T. Ramaraj, N. Devitt, F. Schilkey and D. Cowley.
De novo assembly of a tadpole shrimp (Triops newberryi) transcriptome and preliminary differential gene expression analysis
Molecular Ecology Resources (PMID: 27292122), 17(2), 161--171, 2016
N. Vishwanathan, A. Bandyopadhyay, H. Fu, M. Sharma, K. Johnson, J. Mudge, T. Ramaraj, G. Onsongo, K. Silverstein, N. Jacob, H. Le, G. Karypis and W. Hu.
Augmenting Chinese hamster genome assembly by identifying regions of high confidence
Biotechnology Journal (PMID: 27374913), 11(9), 1151--1157, 2016
S. Dukowic-Schulze, A. Sundararajan, T. Ramaraj, S. Kianian, W. Pawlowski, J. Mudge and C. Chen.
Novel Meiotic miRNAs and Indications for a Role of PhasiRNAs in Meiosis
Frontiers in Plant Science (PMID: 27313591), 7, 2016
S. Deschamps, J. Mudge, C. Cameron, T. Ramaraj, A. Anand, K. Fengler, K. Hayes, V. Llaca, T. Jones and G. May.
Characterization, correction and de novo assembly of an Oxford Nanopore genomic dataset from Agrobacterium tumefaciens
Scientific Reports (PMID: 27350167), 6(1), 2016
J. He, A. Sundararajan, N. Devitt, F. Schilkey, T. Ramaraj and C. Melançon.
Complete Genome Sequence of Streptomyces venezuelae ATCC 15439, Producer of the Methymycin/Pikromycin Family of Macrolide Antibiotics, Using PacBio Technology
Genome Announcements (PMID: 27151802), 4(3), 2016
R. Penmetsa, N. Carrasquilla-Garcia, E. Bergmann, L. Vance, B. Castro, M. Kassa, B. Sarma, S. Datta, A. Farmer, J. Baek, C. Coyne, R. Varshney, E. Wettberg and D. Cook.
Multiple post-domestication origins of kabuli chickpea through allelic variation in a diversification-associated transcription factor
S. Jansky and D. Fajardo.
Amylose content decreases during tuber development in potato
Journal of the Science of Food and Agriculture (PMID: 26931799), 96(13), 4560--4564, 2016
T. Zuo, J. Zhang, A. Lithio, S. Dash, D. Weber, R. Wise, D. Nettleton and T. Peterson.
Genes and Small RNA Transcripts Exhibit Dosage-Dependent Expression Pattern in Maize Copy-Number Alterations
Genetics (PMID: 27129738), 203(3), 1133--1147, 2016
D. Bertioli, S. Cannon, L. Froenicke, G. Huang, A. Farmer, E. Cannon, X. Liu, D. Gao, J. Clevenger, S. Dash, L. Ren, M. Moretzsohn, K. Shirasawa, W. Huang, B. Vidigal, B. Abernathy, Y. Chu, C. Niederhuth, P. Umale, A. Araújo, A. Kozik, K. Kim, M. Burow, R. Varshney, X. Wang, X. Zhang, N. Barkley, P. Guimarães, S. Isobe, B. Guo, B. Liao, H. Stalker, R. Schmitz, B. Scheffler, S. Leal-Bertioli, X. Xun, S. Jackson, R. Michelmore and P. Ozias-Akins.
The genome sequences of Arachis duranensis and Arachis ipaensis, the diploid ancestors of cultivated peanut
B. Schlautman, G. Covarrubias-Pazaran, D. Fajardo, S. Steffan and J. Zalapa.
Discriminating power of microsatellites in cranberry organelles for taxonomic studies in Vaccinium and Ericaceae
Genetic Resources and Crop Evolution, 64(3), 451--466, 2016
H. Smith, C. Foreman, T. Akiyama, M. Franklin, N. Devitt and T. Ramaraj.
Genome Sequence of Janthinobacterium sp. CG23_2, a Violacein-Producing Isolate from an Antarctic Supraglacial Stream
S. Abdel-Ghany, M. Hamilton, J. Jacobi, P. Ngam, N. Devitt, F. Schilkey, A. Ben-Hur and A. Reddy.
A survey of the sorghum transcriptome using single-molecule long reads
Nature communications (PMID: 27339290), 7, 11706, 2016
R. Stamler, D. Vereecke, Y. Zhang, F. Schilkey, N. Devitt and J. Randall.
Complete genome and plasmid sequences for Rhodococcus fascians D188 and draft sequences for Rhodococcus isolates PBTS 1 and PBTS 2
Genome Announc. (PMID: 27284129), 4(3), e00495--16, 2016
V. Hansen, F. Schilkey and R. Miller.
Transcriptomic changes associated with pregnancy in a marsupial, the gray short-tailed opossum Monodelphis domestica
J. Clouse, D. Adhikary, J. Page, T. Ramaraj, M. Deyholos, J. Udall, D. Fairbanks, E. Jellen and P. Maughan.
The Amaranth Genome: Genome, Transcriptome, and Physical Map Assembly
The Plant Genome (PMID: 27898770), 9(1), 0, 2016
S. Dash, E. Cannon, S. Kalberer, A. Farmer and S. Cannon.
PeanutBase and Other Bioinformatic Resources for Peanut
PeanutBase
A. Sundararajan, S. Dukowic-Schulze, M. Kwicklis, K. Engstrom, N. Garcia, O. Oviedo, T. Ramaraj, M. Gonzales, Y. He, M. Wang, Q. Sun, J. Pillardy, S. Kianian, W. Pawlowski, C. Chen and J. Mudge.
Gene Evolutionary Trajectories and GC Patterns Driven by Recombination in Zea mays
Front Plant Sci (PMID: 27713757), 7, 1433, 2016
S. Dash, J. Campbell, E. Cannon, A. Cleary, W. Huang, S. Kalberer, V. Karingula, A. Rice, J. Singh, P. Umale, N. Weeks, A. Wilkey, A. Farmer and S. Cannon.
Legume information system (LegumeInfo.org): a key component of a set of federated data resources for the legume family
Nucleic Acids Res. (PMID: 26546515), 44(D1), D1181--1188, 2016
V. Shah, S. Lambeth, T. Carson, J. Lowe, T. Ramaraj, J. Leff, L. Luo and C. Bell.
Composition Diversity and Abundance of Gut Microbiome in Prediabetes and Type 2 Diabetes
Journal of Diabetes and Obesity (PMID: 26756039), 2(2), 108--114, 2015
D. Ramírez-Gordillo, T. Powers, J. Velkinburgh, C. Trujillo-Provencio, F. Schilkey and E. Serrano.
RNA-Seq and microarray analysis of the Xenopus inner ear transcriptome discloses orthologous OMIM® genes for hereditary disorders of hearing and balance
BMC Research Notes (PMID: 26582541), 8(1), 2015
J. Li, S. Dukowic-Schulze, I. Lindquist, A. Farmer, B. Kelly, T. Li, A. Smith, E. Retzel, J. Mudge and C. Chen.
The plant-specific protein FEHLSTART controls male meiotic entry, initializing meiotic synchronization in Arabidopsis
D. Livingstone, S. Royaert, C. Stack, K. Mockaitis, G. May, A. Farmer, C. Saski, R. Schnell, D. Kuhn and J. Motamayor.
Making a chocolate chip: development and evaluation of a 6K SNP array for Theobroma cacao
DNA Research (PMID: 26070980), 22(4), 279--291, 2015
Y. Ogasawara, N. Torrez-Martinez, A. Aragon, B. Yackley, J. Weber, A. Sundararajan, T. Ramaraj, J. Edwards and C. Melançon.
High-Quality Draft Genome Sequence of Actinobacterium Kibdelosporangium sp. MJ126-NF4, Producer of Type II Polyketide Azicemicins, Using Illumina and PacBio Technologies
J. Duitama, A. Silva, Y. Sanabria, D. Cruz, C. Quintero, C. Ballen, M. Lorieux, B. Scheffler, A. Farmer, E. Torres, J. Oard and J. Tohme.
Whole Genome Sequencing of Elite Rice Cultivars as a Comprehensive Information Resource for Marker Assisted Selection
E. Schirtzinger, C. Andrade, N. Devitt, T. Ramaraj, J. Jacobi, F. Schilkey and K. Hanley.
Repertoire of virus-derived small RNAs produced by mosquito and mammalian cells in response to dengue virus infection
Virology (PMID: 25528416), 476, 54--60, 2015
R. Chopra, G. Burow, A. Farmer, J. Mudge, C. Simpson, T. Wilkins, M. Baring, N. Puppala, K. Chamberlin and M. Burow.
Next-generation transcriptome sequencing, SNP discovery and validation in four market classes of peanut, Arachis hypogaea L.
Molecular Genetics and Genomics (PMID: 25663138), 290(3), 1169--1180, 2015
B. Schlautman, D. Fajardo, T. Bougie, E. Wiesman, J. Polashock, N. Vorsa, S. Steffan and J. Zalapa.
Development and Validation of 697 Novel Polymorphic Genomic and EST-SSR Markers in the American Cranberry (Vaccinium macrocarpon Ait.)
Molecules (PMID: 25633331), 20(2), 2001--2013, 2015
J. Rudd, K. Kanyuka, K. Hassani-Pak, M. Derbyshire, A. Andongabo, J. Devonshire, A. Lysenko, M. Saqi, N. Desai, S. Powers, J. Hooper, L. Ambroso, A. Bharti, A. Farmer, K. Hammond-Kosack, R. Dietrich and M. Courbot.
Transcriptome and Metabolite Profiling of the Infection Cycle of Zymoseptoria tritici on Wheat Reveals a Biphasic Interaction with Plant Immunity Involving Differential Pathogen Chromosomal Contributions and a Variation on the Hemibiotrophic Lifestyle Definition
Plant Physiology (PMID: 25596183), 167(3), 1158--1185, 2015
H. Smith, C. Foreman and T. Ramaraj.
Draft Genome Sequence of a Metabolically Diverse Antarctic Supraglacial Stream Organism, Polaromonas sp. Strain CG9_12, Determined Using Pacific Biosciences Single-Molecule Real-Time Sequencing Technology
R. Chopra, G. Burow, A. Farmer, J. Mudge, C. Simpson and M. Burow.
Comparisons of De Novo Transcriptome Assemblers in Diploid and Polyploid Species Using Peanut (Arachis spp.) RNA-Seq Data
PLoS ONE (PMID: 25551607), 9(12), e115055, 2014
N. Gujaria-Verma, S. Vail, N. Carrasquilla-Garcia, R. Penmetsa, D. Cook, A. Farmer, A. Vandenberg and K. Bett.
Genetic mapping of legume orthologs reveals high conservation of synteny between lentil species and the sequenced genomes of Medicago and chickpea
V. Belamkar, N. Weeks, A. Bharti, A. Farmer, M. Graham and S. Cannon.
Comprehensive characterization and RNA-Seq profiling of the HD-Zip transcription factor family in soybean (Glycine max) during dehydration and salt stress
BMC Genomics (PMID: 25362847), 15, 950, 2014
T. Ramaraj, S. Matyi, A. Sundararajan, I. Lindquist, N. Devitt, F. Schilkey, R. Lamichhane-Khadka, P. Hoyt, J. Mudge and J. Gustafson.
Draft Genome Sequences of Vancomycin-Susceptible Staphylococcus aureus Related to Heterogeneous Vancomycin-Intermediate S. aureus
S. Shrestha, J. Hu, R. Fryxell, J. Mudge and K. Lamour.
SNP markers identify widely distributed clonal lineages of Phytophthora colocasiae in Vietnam, Hawaii and Hainan Island, China
Mycologia (PMID: 24895424), 106(4), 676--685, 2014
S. Matyi, T. Ramaraj, A. Sundararajan, I. Lindquist, N. Devitt, F. Schilkey, R. Lamichhane-Khadka, P. Hoyt, J. Mudge and J. Gustafson.
Draft Genomes of Heterogeneous Vancomycin-Intermediate Staphylococcus aureus Strain MM66 and MM66 Derivatives with Altered Vancomycin Resistance Levels
M. Mukherjee, P. Kakarla, S. Kumar, E. Gonzalez, J. Floyd, M. Inupakutika, A. Devireddy, S. Tirrell, M. Bruns, G. He, I. Lindquist, A. Sundararajan, F. Schilkey, J. Mudge and M. Varela.
Comparative genome analysis of non-toxigenic non-O1 versus toxigenic O1 Vibrio cholerae
Genomics Discovery (PMID: 25722857), 2(1), 1, 2014
S. Dukowic-Schulze, A. Sundararajan, J. Mudge, T. Ramaraj, A. Farmer, M. Wang, Q. Sun, J. Pillardy, S. Kianian, E. Retzel, W. Pawlowski and C. Chen.
The transcriptome landscape of early maize meiosis
BMC Plant Biology (PMID: 24885405), 14(1), 118, 2014
M. Schleiss, S. McAllister, A. Armién, N. Hernandez-Alvarado, C. Fernández-Alarcón, J. Zabeli, T. Ramaraj, J. Crow and M. McVoy.
Molecular and Biological Characterization of a New Isolate of Guinea Pig Cytomegalovirus
Viruses (PMID: 24473341), 6(2), 448--475, 2014
R. He, F. Salvato, J. Park, M. Kim, W. Nelson, T. Balbuena, M. Willer, J. Crow, G. May, C. Soderlund, J. Thelen and D. Gang.
A systems-wide comparison of red rice (Oryza longistaminata) tissues identifies rhizome specific genes and proteins that are targets for cultivated rice improvement
BMC Plant Biology (PMID: 24521476), 14(1), 46, 2014
L. Santoferrara, S. Guida, H. Zhang and G. McManus.
De novo transcriptomes of a mixotrophic and a heterotrophic ciliate from marine plankton
PLoS One (PMID: 24983246), 9(7), e101418, 2014
X. Li, Y. Han, Y. Wei, A. Acharya, A. Farmer, J. Ho, M. Monteros and E. Brummer.
Development of an alfalfa SNP array and its use to evaluate patterns of population structure and linkage disequilibrium
PLoS One (PMID: 24416217), 9(1), e84329, 2014
H. Kudapa, S. Azam, A. Sharpe, B. Taran, R. Li, B. Deonovic, C. Cameron, A. Farmer, S. Cannon and R. Varshney.
Comprehensive transcriptome assembly of chickpea (Cicer arietinum L.) using Sanger and next generation sequencing platforms: development and applications
P. Keeling, F. Burki, H. Wilcox, B. Allam, E. Allen, L. Amaral-Zettler, E. Armbrust, J. Archibald, A. Bharti, C. Bell, et al.
The Marine Microbial Eukaryote Transcriptome Sequencing Project (MMETSP): illuminating the functional diversity of eukaryotic life in the oceans through transcriptome sequencing
PLoS biology (PMID: 24959919), 12(6), e1001889, 2014
K. Dziewanowska, M. Settles, S. Hunter, I. Linquist, F. Schilkey and P. Hartzell.
Phase variation in Myxococcus xanthus yields cells specialized for iron sequestration
L. Meinhardt, G. Costa, D. Thomazella, P. Teixeira, M. Carazzolle, S. Schuster, J. Carlson, M. Guiltinan, P. Mieczkowski, A. Farmer, et al.
Genome and secretome analysis of the hemibiotrophic fungal pathogen, Moniliophthora roreri, which causes frosty pod rot disease of cacao: mechanisms of the biotrophic and necrotrophic phases
BMC Genomics (PMID: 24571091), 15(1), 164, 2014
S. Dukowic-Schulze, A. Sundararajan, T. Ramaraj, J. Mudge and C. Chen.
Sequencing-based large-scale genomics approaches with small numbers of isolated maize meiocytes
Frontiers in plant science (PMID: 24611068), 5, 57, 2014
K. Lamour, J. Hu, V. Lefebvre, J. Mudge, A. Howden and E. Huitema.
Illuminating the Phytophthora capsici Genome
Springer, 2014
M. Schleiss, N. Hernandez-Alvarado, T. Ramaraj and J. Crow.
Genome Sequence of a Novel, Newly Identified Isolate of Guinea Pig Cytomegalovirus, the CIDMTR Strain
Genome Announcements (PMID: 24371200), 1(6), 2013
M. Bonhomme, O. André, Y. Badis, J. Ronfort, C. Burgarella, N. Chantret, J. Prosperi, R. Briskine, J. Mudge, F. Debéllé, H. Navier, H. Miteul, A. Hajri, A. Baranger, P. Tiffin, B. Dumas, M. Pilet-Nayel, N. Young and C. Jacquet.
High-density genome-wide association mapping implicates an F-box encoding gene in Medicago truncatula resistance to Aphanomyces euteiches
J. Yoder, R. Briskine, J. Mudge, A. Farmer, T. Paape, K. Steele, G. Weiblen, A. Bharti, P. Zhou, G. May, N. Young and P. Tiffin.
Phylogenetic Signal Variation in the Genomes of Medicago (Fabaceae)
Systematic Biology (PMID: 23417680), 62(3), 424--438, 2013
S. Kumar, I. Lindquist, A. Sundararajan, C. Rajanna, J. Floyd, K. Smith, J. Andersen, G. He, R. Ayers, J. Johnson, J. Werdann, A. Sandoval, N. Mojica, F. Schilkey, J. Mudge and M. Varela.
Genome Sequence of Non-O1 Vibrio cholerae PS15
R. Varshney, C. Song, R. Saxena, S. Azam, S. Yu, A. Sharpe, S. Cannon, J. Baek, B. Rosen, B. Tar'an, T. Millan, X. Zhang, L. Ramsay, A. Iwata, Y. Wang, W. Nelson, A. Farmer, P. Gaur, C. Soderlund, R. Penmetsa, C. Xu, A. Bharti, W. He, P. Winter, S. Zhao, J. Hane, N. Carrasquilla-Garcia, J. Condie, H. Upadhyaya, M. Luo, M. Thudi, C. Gowda, N. Singh, J. Lichtenzveig, K. Gali, J. Rubio, N. Nadarajan, J. Dolezel, K. Bansal, X. Xu, D. Edwards, G. Zhang, G. Kahl, J. Gil, K. Singh, S. Datta, S. Jackson, J. Wang and D. Cook.
Draft genome sequence of chickpea (Cicer arietinum) provides a resource for trait improvement
Nature Biotechnology, 31(3), 240, 2013
S. Singer, J. Schwarz, C. Manduca, S. Fox, E. Iverson, B. Taylor, S. Cannon, G. May, S. Maki, A. Farmer and J. Doyle.
Keeping an Eye on Biology
Science (PMID: 23349282), 339(6118), 408--409, 2013
X. Shi, S. Gupta, I. Lindquist, C. Cameron, J. Mudge and A. Rashotte.
Transcriptome Analysis of Cytokinin Response in Tomato Leaves
A. Roulin, P. Auer, M. Libault, J. Schlueter, A. Farmer, G. May, G. Stacey, R. Doerge and S. Jackson.
The fate of duplicated genes in a polyploid plant genome
N. McNulty, M. Wu, A. Erickson, C. Pan, B. Erickson, E. Martens, N. Pudlo, B. Muegge, B. Henrissat, R. Hettich, et al.
Effects of diet on resource utilization by a model human gut microbiota containing Bacteroides cellulosilyticus WH2, a symbiont with an extensive glycobiome
F. Francis, J. Kim, T. Ramaraj, A. Farmer, M. Rush and J. Ham.
Comparative genomic analysis of two Burkholderia glumae strains from different geographic origins reveals a high degree of plasticity in genome structure associated with genomic islands
Molecular genetics and genomics (PMID: 23563926), 288(3-4), 195--203, 2013
M. Sugawara, B. Epstein, B. Badgley, T. Unno, L. Xu, J. Reese, P. Gyaneshwar, R. Denny, J. Mudge, A. Bharti, et al.
Comparative genomics of the core and accessory genomes of 48 Sinorhizobium strains comprising five genospecies
Genome biology (PMID: 23425606), 14(2), R17, 2013
S. Gupta, X. Shi, I. Lindquist, N. Devitt, J. Mudge and A. Rashotte.
Transcriptome profiling of cytokinin and auxin regulation in tomato root
Journal of experimental botany (PMID: 23307920), 64(2), 695--704, 2013
J. Stanton-Geddes, T. Paape, B. Epstein, R. Briskine, J. Yoder, J. Mudge, A. Bharti, A. Farmer, P. Zhou, R. Denny, et al.
Candidate genes and genetic architecture of symbiotic and agronomic traits revealed by whole-genome, sequence-based association genetics in Medicago truncatula
C. Chen and E. Retzel.
Analyzing the meiotic transcriptome using isolated meiocytes of Arabidopsis thaliana
V. Pravosudov, T. Roth, M. Forister, L. Ladage, R. Kramer, F. Schilkey and A. Van.
Differential hippocampal gene expression is associated with climate-related natural variation in memory and the hippocampus in food-caching chickadees
Molecular ecology (PMID: 23205699), 22(2), 397--408, 2013
H. Kudapa, A. Bharti, S. Cannon, A. Farmer, B. Mulaosmanovic, R. Kramer, A. Bohra, N. Weeks, J. Crow, R. Tuteja, T. Shah, S. Dutta, D. Gupta, A. Singh, K. Gaikwad, T. Sharma, G. May, N. Singh and R. Varshney.
A Comprehensive Transcriptome Assembly of Pigeonpea (Cajanus cajan L.) using Sanger and Second-Generation Sequencing Platforms
Molecular Plant (PMID: 22241453), 5(5), 1020--1028, 2012
T. Parchman, Z. Gompert, J. Mudge, F. Schilkey, C. Benkman and C. Buerkle.
Genome-wide association genetics of an adaptive trait in lodgepole pine
Molecular Ecology (PMID: 22404645), 21(12), 2991--3005, 2012
N. Young and A. Bharti.
Genome-enabled insights into legume biology
Annual review of plant biology (PMID: 22404476), 63, 2012
A. Ryvkin, H. Ashkenazy, L. Smelyanski, G. Kaplan, O. Penn, Y. Weiss-Ottolenghi, E. Privman, P. Ngam, J. Woodward, G. May, C. Bell, T. Pupko and J. Gershoni.
Deep panning: steps towards probing the IgOme
R. Varshney and G. May.
Next-generation sequencing technologies: opportunities and obligations in plant genomics
Briefings in Functional Genomics (PMID: 22345600), 11(1), 1--2, 2012
G. Peiffer, K. King, A. Severin, G. May, S. Cianzio, S. Lin, N. Lauter and R. Shoemaker.
Identification of candidate genes underlying an iron efficiency quantitative trait locus in soybean
M. Thudi, Y. Li, S. Jackson, G. May and R. Varshney.
Current state-of-art of sequencing technologies for plant genomics research
Briefings in Functional Genomics (PMID: 22345601), 11(1), 3--11, 2012
L. Nfonsam, C. Cano, J. Mudge, F. Schilkey and J. Curtiss.
Analysis of the transcriptomes downstream of Eyeless and the Hedgehog, Decapentaplegic and Notch signaling pathways in Drosophila melanogaster
R. He, M. Kim, W. Nelson, T. Balbuena, R. Kim, R. Kramer, J. Crow, G. May, J. Thelen, C. Soderlund, et al.
Next-generation sequencing-based transcriptomic and proteomic analysis of the common reed, Phragmites australis (Poaceae), reveals genes involved in invasiveness and rhizome specificity
American Journal of Botany (PMID: 22301892), 99(2), 232--247, 2012
D. Kuhn, D. Livingstone, D. Main, P. Zheng, C. Saski, F. Feltus, K. Mockaitis, A. Farmer, G. May, R. Schnell, et al.
Identification and mapping of conserved ortholog set (COS) II sequences of cacao and their conversion to SNP markers for marker-assisted selection in Theobroma cacao and comparative genomics studies
Tree genetics & genomes, 8(1), 97--111, 2012
S. Azam, V. Thakur, P. Ruperao, T. Shah, J. Balaji, B. Amindala, A. Farmer, D. Studholme, G. May, D. Edwards, et al.
Coverage-based consensus calling (CbCC) of short sequence reads and comparison of CbCC results to identify SNPs in chickpea (Cicer arietinum; Fabaceae), a crop species without a reference genome
American journal of botany, 99(2), 186--192, 2012
D. Ilut, J. Coate, A. Luciano, T. Owens, G. May, A. Farmer and J. Doyle.
A comparative transcriptomic study of an allotetraploid and its diploid progenitors illustrates the unique advantages and challenges of RNA-seq in plant species
J. Silva, B. Scheffler, Y. Sanabria, C. De, D. Galam, A. Farmer, J. Woodward, G. May and J. Oard.
Identification of candidate genes in rice for resistance to sheath blight disease by whole genome sequencing
Theoretical and applied genetics (PMID: 21901547), 124(1), 63--74, 2012
X. Li, A. Acharya, A. Farmer, J. Crow, A. Bharti, R. Kramer, Y. Wei, Y. Han, J. Gou, G. May, et al.
Prevalence of single nucleotide polymorphism among 27 diverse alfalfa genotypes as assessed by transcriptome sequencing
BMC genomics, 13(1), 568, 2012
P. Hiremath, A. Kumar, R. Penmetsa, A. Farmer, J. Schlueter, S. Chamarthi, A. Whaley, N. Carrasquilla-Garcia, P. Gaur, H. Upadhyaya, et al.
Large-scale development of cost-effective SNP marker assays for diversity assessment and genetic mapping in chickpea and comparative mapping in legumes
Plant biotechnology journal (PMID: 23103470), 10(6), 716--732, 2012
J. Li, A. Farmer, I. Lindquist, S. Dukowic-Schulze, J. Mudge, T. Li, E. Retzel and C. Chen.
Characterization of a set of novel meiotically-active promoters in Arabidopsis
K. Lamour, J. Mudge, D. Gobena, O. Hurtado-Gonzales, J. Schmutz, A. Kuo, N. Miller, B. Rice, S. Raffaele, L. Cano, et al.
Genome sequencing and mapping reveal loss of heterozygosity as a mechanism for rapid adaptation in the vegetable pathogen Phytophthora capsici
Molecular Plant-Microbe Interactions (PMID: 22712506), 25(10), 1350--1360, 2012
R. Varshney, W. Chen, Y. Li, A. Bharti, R. Saxena, J. Schlueter, M. Donoghue, S. Azam, G. Fan, A. Whaley, et al.
Draft genome sequence of pigeonpea (Cajanus cajan), an orphan legume crop of resource-poor farmers
Nature Biotechnology (PMID: 22057054), 30(1), 83, 2012
N. Young, F. Debellé, G. Oldroyd, R. Geurts, S. Cannon, M. Udvardi, V. Benedito, K. Mayer, J. Gouzy, H. Schoof, Y. Peer, S. Proost, D. Cook, B. Meyers, M. Spannagl, F. Cheung, S. Mita, V. Krishnakumar, H. Gundlach, S. Zhou, J. Mudge, A. Bharti, J. Murray, M. Naoumkina, B. Rosen, K. Silverstein, H. Tang, S. Rombauts, P. Zhao, P. Zhou, V. Barbe, P. Bardou, M. Bechner, A. Bellec, A. Berger, H. Bergès, S. Bidwell, T. Bisseling, N. Choisne, A. Couloux, R. Denny, S. Deshpande, X. Dai, J. Doyle, A. Dudez, A. Farmer, S. Fouteau, C. Franken, C. Gibelin, J. Gish, S. Goldstein, A. González, P. Green, A. Hallab, M. Hartog, A. Hua, S. Humphray, D. Jeong, Y. Jing, A. Jöcker, S. Kenton, D. Kim, K. Klee, H. Lai, C. Lang, S. Lin, S. Macmil, G. Magdelenat, L. Matthews, J. McCorrison, E. Monaghan, J. Mun, F. Najar, C. Nicholson, C. Noirot, M. O'Bleness, C. Paule, J. Poulain, F. Prion, B. Qin, C. Qu, E. Retzel, C. Riddle, E. Sallet, S. Samain, N. Samson, I. Sanders, O. Saurat, C. Scarpelli, T. Schiex, B. Segurens, A. Severin, D. Sherrier, R. Shi, S. Sims, S. Singer, S. Sinharoy, L. Sterck, A. Viollet, B. Wang, K. Wang, M. Wang, X. Wang, J. Warfsmann, J. Weissenbach, D. White, J. White, G. Wiley, P. Wincker, Y. Xing, L. Yang, Z. Yao, F. Ying, J. Zhai, L. Zhou, A. Zuber, J. Dénarié, R. Dixon, G. May, D. Schwartz, J. Rogers, F. Quétier, C. Town and B. Roe.
The Medicago genome provides insight into the evolution of rhizobial symbioses
Nature (PMID: 22089132), 480(7378), 520--524, 2011
A. Branca, T. Paape, P. Zhou, R. Briskine, A. Farmer, J. Mudge, A. Bharti, J. Woodward, G. May, L. Gentzbittel, C. Ben, R. Denny, M. Sadowsky, J. Ronfort, T. Bataillon, N. Young and P. Tiffin.
Whole-genome nucleotide diversity, recombination, and linkage disequilibrium in the model legume Medicago truncatula
Proc. Natl. Acad. Sci. U.S.A. (PMID: 21949378), 108(42), E864--870, 2011
T. Kumar, J. Crow, T. Wennblom, M. Abril, P. Letcher, M. Blackwell, R. Roberson and D. McLaughlin.
An ontology of fungal subcellular traits
Am. J. Bot. (PMID: 21875969), 98(9), 1504--1510, 2011
R. Mehta, T. Yamada, B. Taylor, K. Christov, M. King, D. Majumdar, F. Lekmine, C. Tiruppathi, A. Shilkaitis, L. Bratescu, A. Green, C. Beattie and T. Das.
A cell penetrating peptide derived from azurin inhibits angiogenesis and tumor growth by inhibiting phosphorylation of VEGFR-2, FAK and Akt
Angiogenesis (PMID: 21667138), 14(3), 355--369, 2011
J. Ma, T. Chang, H. Yasue, A. Farmer, J. Crow, K. Eyer, H. Hiraiwa, T. Shimogiri, S. Meyers, J. Beever, L. Schook, E. Retzel, C. Beattie and W. Liu.
A high-resolution comparative map of porcine chromosome 4 (SSC4)
Anim. Genet. (PMID: 21749428), 42(4), 440--444, 2011
L. Jia, G. Gorman, L. Coward, P. Noker, D. McCormick, T. Horn, J. Harder, M. Muzzio, B. Prabhakar, B. Ganesh, T. Das and C. Beattie.
Preclinical pharmacokinetics, metabolism, and toxicity of azurin-p28 (NSC745104) a peptide inhibitor of p53 ubiquitination
Cancer Chemother. Pharmacol. (PMID: 21085965), 68(2), 513--524, 2011
A. Dubey, A. Farmer, J. Schlueter, S. Cannon, B. Abernathy, R. Tuteja, J. Woodward, T. Shah, B. Mulasmanovic, H. Kudapa, N. Raju, R. Gothalwal, S. Pande, Y. Xiao, C. Town, N. Singh, G. May, S. Jackson and R. Varshney.
Defining the transcriptome assembly and its use for genome dynamics and transcriptome profiling studies in pigeonpea (Cajanus cajan L.)
DNA Res. (PMID: 21565938), 18(3), 153--164, 2011
P. Hiremath, A. Farmer, S. Cannon, J. Woodward, H. Kudapa, R. Tuteja, A. Kumar, A. BhanuPrakash, B. Mulaosmanovic, N. Gujaria, L. Krishnamurthy, P. Gaur, P. KaviKishor, T. Shah, R. Srinivasan, M. Lohse, Y. Xiao, C. Town, D. Cook, G. May and R. Varshney.
Large-scale transcriptome analysis in chickpea (Cicer arietinum L.), an orphan legume crop of the semi-arid tropics of Asia and Africa
Plant Biotechnology Journal (PMID: 21615673), 9(8), 922--931, 2011
N. Gujaria, A. Kumar, P. Dauthal, A. Dubey, P. Hiremath, A. Bhanu, A. Farmer, M. Bhide, T. Shah, P. Gaur, H. Upadhyaya, S. Bhatia, D. Cook, G. May and R. Varshney.
Development and use of genic molecular markers (GMMs) for construction of a transcript map of chickpea (Cicer arietinum L.)
Theor. Appl. Genet. (PMID: 21384113), 122(8), 1577--1589, 2011
A. Bohra, A. Dubey, R. Saxena, R. Penmetsa, K. Poornima, N. Kumar, A. Farmer, G. Srivani, H. Upadhyaya, R. Gothalwal, S. Ramesh, D. Singh, K. Saxena, P. Kishor, N. Singh, C. Town, G. May, D. Cook and R. Varshney.
Analysis of BAC-end sequences (BESs) and development of BES-SSR markers for genetic mapping and hybrid purity assessment in pigeonpea (Cajanus spp.)
BMC Plant Biol. (PMID: 21447154), 11, 56, 2011
T. Chang, Y. Yang, H. Yasue, A. Bharti, E. Retzel and W. Liu.
The expansion of the PRAME gene family in Eutheria
A. Bizzarri, S. Santini, E. Coppari, M. Bucciantini, S. Di, T. Yamada, C. Beattie and S. Cannistraro.
Interaction of an anticancer peptide fragment of azurin with p53 and its isolated domains studied by atomic force spectroscopy
Int J Nanomedicine (PMID: 22162658), 6, 3011--3019, 2011
C. Bell, D. Dinwiddie, N. Miller, S. Hateley, E. Ganusova, J. Mudge, R. Langley, L. Zhang, C. Lee, F. Schilkey, V. Sheth, J. Woodward, H. Peckham, G. Schroth, R. Kim and S. Kingsmore.
Carrier testing for severe childhood recessive diseases by next-generation sequencing
Sci Transl Med (PMID: 21228398), 3(65), 65ra4, 2011
J. Woody, A. Severin, Y. Bolon, B. Joseph, B. Diers, A. Farmer, N. Weeks, G. Muehlbauer, R. Nelson, D. Grant, J. Specht, M. Graham, S. Cannon, G. May, C. Vance and R. Shoemaker.
Gene expression patterns are correlated with genomic and genic structure in soybean
Genome (PMID: 21217801), 54(1), 10--18, 2011
Y. Yang, T. Chang, H. Yasue, A. Bharti, E. Retzel and W. Liu.
ZNF280BY and ZNF280AY: autosome derived Y-chromosome gene families in Bovidae
BMC Genomics (PMID: 21214936), 12, 13, 2011
M. Schmidt, W. Barbazuk, M. Sandford, G. May, Z. Song, W. Zhou, B. Nikolau and E. Herman.
Silencing of soybean seed storage proteins results in a rebalanced protein composition preserving seed protein content without major collateral changes in the metabolome and transcriptome
Plant Physiology (PMID: 21398260), 156(1), 330--345, 2011
C. Chen, A. Farmer, R. Langley, J. Mudge, J. Crow, G. May, J. Huntley, A. Smith and E. Retzel.
Meiosis-specific gene discovery in plants: RNA-Seq applied to isolated Arabidopsis male meiocytes
BMC Plant Biol. (PMID: 21167045), 10, 280, 2010
H. Kallankari, T. Kaukola, M. Ojaniemi, R. Herva, M. Perhomaa, R. Vuolteenaho, S. Kingsmore and M. Hallman.
Chemokine CCL18 predicts intraventricular hemorrhage in very preterm infants
Ann. Med. (PMID: 20608885), 42(6), 416--425, 2010
R. Nelson, S. Avraham, R. Shoemaker, G. May, D. Ware and D. Gessler.
Applications and methods utilizing the Simple Semantic Web Architecture and Protocol (SSWAP) for bioinformatics resource discovery and disparate data and service integration
BioData Mining (PMID: 20525377), 3(1), 2010
S. Baranzini, J. Mudge, J. Velkinburgh, P. Khankhanian, I. Khrebtukova, N. Miller, L. Zhang, A. Farmer, C. Bell, R. Kim, G. May, J. Woodward, S. Caillier, J. McElroy, R. Gomez, M. Pando, L. Clendenen, E. Ganusova, F. Schilkey, T. Ramaraj, O. Khan, J. Huntley, S. Luo, P. Kwok, T. Wu, G. Schroth, J. Oksenberg, S. Hauser and S. Kingsmore.
Genome, epigenome and RNA sequences of monozygotic twins discordant for multiple sclerosis
Nature (PMID: 20428171), 464(7293), 1351--1356, 2010
S. Glickman, C. Cairns, R. Otero, C. Woods, E. Tsalik, R. Langley, J. Velkinburgh, L. Park, L. Glickman, V. Fowler, S. Kingsmore and E. Rivers.
Disease progression in hemodynamically stable patients presenting to the emergency department with sepsis
Acad Emerg Med (PMID: 20370777), 17(4), 383--390, 2010
N. Jacob, A. Kantardjieff, F. Yusufi, E. Retzel, B. Mulukutla, S. Chuah, M. Yap and W. Hu.
Reaching the depth of the Chinese hamster ovary cell transcriptome
Biotechnol. Bioeng. (PMID: 19882695), 105(5), 1002--1009, 2010
J. Vogel, D. Garvin, T. Mockler, J. Schmutz, D. Rokhsar, M. Bevan, K. Barry, S. Lucas, M. Harmon-Smith, K. Lail, H. Tice, J. Schmutz, J. Grimwood, N. McKenzie, M. Bevan, N. Huo, Y. Gu, G. Lazo, O. Anderson, J. Vogel, F. You, M. Luo, J. Dvorak, J. Wright, M. Febrer, M. Bevan, D. Idziak, R. Hasterok, D. Garvin, E. Lindquist, M. Wang, S. Fox, H. Priest, S. Filichkin, S. Givan, D. Bryant, J. Chang, T. Mockler, H. Wu, W. Wu, A. Hsia, P. Schnable, A. Kalyanaraman, B. Barbazuk, T. Michael, S. Hazen, J. Bragg, D. Laudencia-Chingcuanco, J. Vogel, D. Garvin, Y. Weng, N. McKenzie, M. Bevan, G. Haberer, M. Spannagl, K. Mayer, T. Rattei, T. Mitros, D. Rokhsar, S. Lee, J. Rose, L. Mueller, T. York, T. Wicker, J. Buchmann, J. Tanskanen, A. Schulman, H. Gundlach, J. Wright, M. Bevan, A. Oliveira, L. Maia, W. Belknap, Y. Gu, N. Jiang, J. Lai, L. Zhu, J. Ma, C. Sun, E. Pritham, J. Salse, F. Murat, M. Abrouk, G. Haberer, M. Spannagl, K. Mayer, R. Bruggmann, J. Messing, F. You, M. Luo, J. Dvorak, N. Fahlgren, S. Fox, C. Sullivan, T. Mockler, J. Carrington, E. Chapman, G. May, J. Zhai, M. Ganssmann, S. Gurazada, M. German, B. Meyers, P. Green, J. Bragg, L. Tyler, J. Wu, Y. Gu, G. Lazo, D. Laudencia-Chingcuanco, J. Thomson, J. Vogel, S. Hazen, S. Chen, H. Scheller, J. Harholt, P. Ulvskov, S. Fox, S. Filichkin, N. Fahlgren, J. Kimbrel, J. Chang, C. Sullivan, E. Chapman, J. Carrington, T. Mockler, L. Bartley, P. Cao, K. Jung, M. Sharma, M. Vega-Sanchez, P. Ronald, C. Dardick, S. De, W. Verelst, D. Inze, M. Heese, A. Schnittger, X. Yang, U. Kalluri, G. Tuskan, Z. Hua, R. Vierstra, D. Garvin, Y. Cui, S. Ouyang, Q. Sun, Z. Liu, A. Yilmaz, E. Grotewold, R. Sibout, K. Hematy, G. Mouille, H. Hofte, T. Michael, J. Pelloux, D. O'Connor, J. Schnable, S. Rowe, F. Harmon, C. Cass, J. Sedbrook, M. Byrne, S. Walsh, J. Higgins, M. Bevan, P. Li, T. Brutnell, T. Unver, H. Budak, H. Belcram, M. Charles, B. Chalhoub and I. Baxter.
Genome sequencing and analysis of the model grass Brachypodium distachyon
M. Libault, A. Farmer, L. Brechenmacher, J. Drnevich, R. Langley, D. Bilgin, O. Radwan, D. Neece, S. Clough, G. May and G. Stacey.
Complete transcriptome of the soybean root hair cell, a single-cell model, and its alteration in response to Bradyrhizobium japonicum infection
Plant Physiol. (PMID: 19933387), 152(2), 541--552, 2010
A. Severin, J. Woody, Y. Bolon, B. Joseph, B. Diers, A. Farmer, G. Muehlbauer, R. Nelson, D. Grant, J. Specht, M. Graham, S. Cannon, G. May, C. Vance and R. Shoemaker.
RNA-Seq Atlas of Glycine max: A guide to the soybean transcriptome
R. R..
Pigeonpea genomics initiative (PGI): an international effort to improve crop productivity of pigeonpea (Cajanus cajan L.)
Molecular Breeding (PMID: 20976284), 26(3), 393--408, 2010
D. Hyten, S. Cannon, Q. Song, N. Weeks, E. Fickus, R. Shoemaker, J. Specht, A. Farmer, G. May and P. Cregan.
High-throughput SNP discovery through deep resequencing of a reduced representation library to anchor and orient scaffolds in the soybean whole genome sequence
BMC Genomics (PMID: 20078886), 11(1), 38, 2010
J. Schmutz, S. Cannon, J. Schlueter, J. Ma, T. Mitros, W. Nelson, D. Hyten, Q. Song, J. Thelen, J. Cheng, D. Xu, U. Hellsten, G. May, Y. Yu, T. Sakurai, T. Umezawa, M. Bhattacharyya, D. Sandhu, B. Valliyodan, E. Lindquist, M. Peto, D. Grant, S. Shu, D. Goodstein, K. Barry, M. Futrell-Griggs, B. Abernathy, J. Du, Z. Tian, L. Zhu, N. Gill, T. Joshi, M. Libault, A. Sethuraman, X. Zhang, K. Shinozaki, H. Nguyen, R. Wing, P. Cregan, J. Specht, J. Grimwood, D. Rokhsar, G. Stacey, R. Shoemaker and S. Jackson.
Genome sequence of the palaeopolyploid soybean
Nature, 463(7278), 178, 2010
E. Tsalik, D. Jones, B. Nicholson, L. Waring, O. Liesenfeld, L. Park, S. Glickman, L. Caram, R. Langley, J. Velkinburgh, C. Cairns, E. Rivers, R. Otero, S. Kingsmore, T. Lalani, V. Fowler and C. Woods.
Multiplex PCR to diagnose bloodstream infections in patients admitted from the emergency department with sepsis
J. Clin. Microbiol. (PMID: 19846634), 48(1), 26--33, 2010
R. Buggs, S. Chamala, W. Wu, L. Gao, G. May, P. Schnable, D. Soltis, P. Soltis and W. Barbazuk.
Characterization of duplicate gene evolution in the recent natural allopolyploid Tragopogon miscellus by next-generation sequencing and Sequenom iPLEX MassARRAY genotyping
Molecular ecology (PMID: 20331776), 19, 132--146, 2010
M. Libault, A. Farmer, L. Brechenmacher, G. May and G. Stacey.
Soybean root hairs: a valuable system to investigate plant biology at the cellular level
Plant signaling & behavior (PMID: 20339317), 5(4), 419--421, 2010
T. Joshi, Z. Yan, M. Libault, D. Jeong, S. Park, P. Green, D. Sherrier, A. Farmer, G. May, B. Meyers, et al.
Prediction of novel miRNAs and associated target genes in Glycine max
BMC bioinformatics (PMID: 20122185), 11(1), S14, 2010
Y. Bolon, B. Joseph, S. Cannon, M. Graham, B. Diers, A. Farmer, G. May, G. Muehlbauer, J. Specht, Z. Tu, et al.
Complementary genetic and genomic approaches help characterize the linkage group I seed protein QTL in soybean
S. Deschamps, M. Rota, J. Ratashak, P. Biddle, D. Thureen, A. Farmer, S. Luck, M. Beatty, N. Nagasawa, L. Michael, et al.
Rapid genome-wide single nucleotide polymorphism discovery in soybean and rice via deep resequencing of reduced representation libraries with the Illumina genome analyzer
The Plant Genome, 3(1), 53--68, 2010
S. Cannon, D. Ilut, A. Farmer, S. Maki, G. May, S. Singer and J. Doyle.
Polyploidy did not predate the evolution of nodulation in all legumes
M. Libault, A. Farmer, T. Joshi, K. Takahashi, R. Langley, L. Franklin, J. He, D. Xu, G. May and G. Stacey.
An integrated transcriptome atlas of the crop model Glycine max, and its use in comparative analyses in plants
The Plant Journal (PMID: 20408999), 63(1), 86--99, 2010
A. Severin, G. Peiffer, W. Xu, D. Hyten, B. Bucciarelli, J. O'Rourke, Y. Bolon, D. Grant, A. Farmer, G. May, et al.
An integrative approach to genomic introgression mapping
Plant physiology (PMID: 20656899), 154(1), 3--12, 2010
D. Brackney, J. Scott, F. Sagawa, J. Woodward, N. Miller, F. Schilkey, J. Mudge, J. Wilusz, K. Olson, C. Blair, et al.
C6/36 Aedes albopictus cells have a dysfunctional antiviral RNA interference response
PLoS neglected tropical diseases (PMID: 21049065), 4(10), e856, 2010
R. Varshney, M. Thudi, G. May and S. Jackson.
Legume genomics and breeding
Plant breeding reviews, 33, 257--304, 2010
S. Singer, S. Maki, A. Farmer, D. Ilut, G. May, S. Cannon and J. Doyle.
Venturing beyond beans and peas: what can we learn from Chamaecrista?
Plant Physiol. (PMID: 19755538), 151(3), 1041--1047, 2009
R. Varshney, S. Nayak, G. May and S. Jackson.
Next-generation sequencing technologies and their implications for crop genetics and breeding
Trends Biotechnol. (PMID: 19679362), 27(9), 522--530, 2009
A. Zaas, M. Chen, J. Varkey, T. Veldman, A. Hero, J. Lucas, Y. Huang, R. Turner, A. Gilbert, R. Lambkin-Williams, N. Øien, B. Nicholson, S. Kingsmore, L. Carin, C. Woods and G. Ginsburg.
Gene expression signatures diagnose influenza and other symptomatic respiratory viral infections in humans
Cell Host Microbe (PMID: 19664979), 6(3), 207--217, 2009
J. Kim, Y. Ju, H. Park, S. Kim, S. Lee, J. Yi, J. Mudge, N. Miller, D. Hong, C. Bell, H. Kim, I. Chung, W. Lee, J. Lee, S. Seo, J. Yun, H. Woo, H. Lee, D. Suh, S. Lee, H. Kim, M. Yavartanoo, M. Kwak, Y. Zheng, M. Lee, H. Park, J. Kim, O. Gokcumen, R. Mills, A. Zaranek, J. Thakuria, X. Wu, R. Kim, J. Huntley, S. Luo, G. Schroth, T. Wu, H. Kim, K. Yang, W. Park, H. Kim, G. Church, C. Lee, S. Kingsmore and J. Seo.
A highly annotated whole-genome sequence of a Korean individual
F. Salsbury, M. Crowder, S. Kingsmore and J. Huntley.
Molecular dynamic simulations of the metallo-beta-lactamase from Bacteroides fragilis in the presence and absence of a tight-binding inhibitor
J Mol Model (PMID: 19039608), 15(2), 133--145, 2009
T. Kaukola, J. Tuimala, R. Herva, S. Kingsmore and M. Hallman.
Cord immunoproteins as predictors of respiratory outcome in preterm infants
Am. J. Obstet. Gynecol. (PMID: 19026401), 200(1), 1--8, 2009
F. Stephen, J. Mudge and G. May.
Generation II DNA sequencing technologies
Drug Discovery, 61, 2009
S. Cannon, G. May and S. Jackson.
Three sequenced legume genomes and many crop species: rich opportunities for translational genomics
C. Woods, R. Feezor and S. Kingsmore.
Sepsis and the Genomic Revolution
E. Wang, R. Sandberg, S. Luo, I. Khrebtukova, L. Zhang, C. Mayr, S. Kingsmore, G. Schroth and C. Burge.
Alternative isoform regulation in human tissue transcriptomes
S. Kingsmore, N. Kennedy, H. Halliday, J. Van, S. Zhong, V. Gabriel, J. Grant, W. Beavis, V. Tchernev, L. Perlee, S. Lejnine, B. Grimwade, M. Sorette and J. Edgar.
Identification of diagnostic biomarkers for infection in premature neonates
Mol. Cell Proteomics (PMID: 18622029), 7(10), 1863--1875, 2008
M. Mian, Y. Zhang, Z. Wang, J. Zhang, X. Cheng, L. Chen, K. Chekhovskiy, X. Dai, C. Mao, F. Cheung, X. Zhao, J. He, A. Scott, C. Town and G. May.
Analysis of tall fescue ESTs representing different abiotic stresses, tissue types and developmental stages
BMC Plant Biol. (PMID: 18318913), 8, 27, 2008
D. Sugarbaker, W. Richards, G. Gordon, L. Dong, A. De, G. Maulik, J. Glickman, L. Chirieac, M. Hartman, B. Taillon, L. Du, P. Bouffard, S. Kingsmore, N. Miller, A. Farmer, R. Jensen, S. Gullans and R. Bueno.
Transcriptome sequencing of malignant pleural mesothelioma tumors
Proc. Natl. Acad. Sci. U.S.A. (PMID: 18303113), 105(9), 3521--3526, 2008
S. Kingsmore, I. Lindquist, J. Mudge, D. Gessler and W. Beavis.
Genome-wide association studies: progress and potential for drug discovery and development
Nat Rev Drug Discov (PMID: 18274536), 7(3), 221--230, 2008
J. Mudge, N. Miller, I. Khrebtukova, I. Lindquist, G. May, J. Huntley, S. Luo, L. Zhang, J. Velkinburgh, A. Farmer, S. Lewis, W. Beavis, F. Schilkey, S. Virk, C. Black, M. Myers, L. Mader, R. Langley, J. Utsey, R. Kim, R. Roberts, S. Khalsa, M. Garcia, V. Ambriz-Griffith, R. Harlan, W. Czika, S. Martin, R. Wolfinger, N. Perrone-Bizzozero, G. Schroth and S. Kingsmore.
Genomic convergence analysis of schizophrenia: mRNA sequencing reveals altered synaptic vesicular transport in post-mortem cerebellum
PLoS ONE (PMID: 18985160), 3(11), e3625, 2008
N. Miller, S. Kingsmore, A. Farmer, R. Langley, J. Mudge, J. Crow, A. Gonzalez, F. Schilkey, R. Kim, J. Van and others.
Management of high-throughput DNA sequencing projects: Alpheus
Journal of computer science and systems biology (PMID: 20151039), 1, 132, 2008
R. Pendleton, S. Kitchen, J. Mudge and E. McArthur.
Origin of the flax cultivar 'Appar' and its position within the Linum perenne complex
International journal of plant sciences, 169(3), 445--453, 2008
R. Pendleton, S. Kitchen, E. McArthur and J. Mudge.
The 'Appar' Flax Release: origin, distinguishing characteristics, and use; and a native alternative
Native plants journal, 9(1), 18--24, 2008
M. Naoumkina, I. Torres-Jerez, S. Allen, J. He, P. Zhao, R. Dixon and G. May.
Analysis of cDNA libraries from developing seeds of guar (Cyamopsis tetragonoloba (L.) Taub)
S. Kingsmore, I. Lindquist, J. Mudge and W. Beavis.
Genome-wide association studies: progress in identifying genetic biomarkers in common, complex diseases
Biomarker insights (PMID: 19662211), 2, 117727190700200019, 2007
W. Beavis, F. Schilkey and S. Baxter.
Translational bioinformatics: at the interface of genomics and quantitative genetics
Crop Science, 47(Supplement 3), S-32, 2007
E. Retzel, J. Johnson, J. Crow, A. Lamblin and C. Paule.
Legume resources: MtDB and Medicago.org
R. Dixon, J. Bouton, B. Narasimhamoorthy, M. Saha, Z. Wang and G. May.
Beyond structural genomics for plant science
Advances in Agronomy, 95, 77--161, 2007
C. Joslyn, K. Verspoor and D. Gessler.
Knowledge Integration in OpenWorlds: Utilizing the Mathematics of Hierarchical Structure
M. Saha, J. Cooper, M. Mian, K. Chekhovskiy and G. May.
Tall fescue genomic SSR markers: development and transferability across multiple grass species
F. Cheung, B. Haas, S. Goldberg, G. May, Y. Xiao and C. Town.
Sequencing Medicago truncatula expressed sequenced tags using 454 Life Sciences technology
BMC Genomics (PMID: 17062153), 7, 272, 2006
S. Baxter, S. Day, J. Fetrow and S. Reisinger.
Scientific software development is not an oxymoron
PLoS Comput. Biol. (PMID: 16965174), 2(9), e87, 2006
B. McKinney, D. Reif, M. Rock, K. Edwards, S. Kingsmore, J. Moore and J. Crowe.
Cytokine expression patterns associated with systemic adverse events following smallpox immunization
J. Infect. Dis. (PMID: 16845627), 194(4), 444--453, 2006
G. Stacey, M. Libault, L. Brechenmacher, J. Wan and G. May.
Genetics and functional genomics of legume nodulation
Curr. Opin. Plant Biol. (PMID: 16458572), 9(2), 110--121, 2006
S. Kingsmore.
Multiplexed protein measurement: technologies and applications of protein and antibody arrays
D. Gessler, C. Dye, P. Farmer, M. Murray, T. Navin, R. Reves, T. Shinnick, P. Small, T. Yates and G. Simpson.
Public health. A National Tuberculosis Archive
Science (PMID: 16513968), 311(5765), 1245--1246, 2006
T. Kaukola, R. Herva, M. Perhomaa, E. Paakko, S. Kingsmore, L. Vainionpaa and M. Hallman.
Population cohort associating chorioamnionitis, cord inflammatory cytokines and neurologic outcome in very preterm, extremely low birth weight infants
Pediatr. Res. (PMID: 16492993), 59(3), 478--483, 2006
S. Zuev, S. Kingsmore and D. Gessler.
Sepsis progression and outcome: a dynamical model
Theor Biol Med Model (PMID: 16480490), 3, 8, 2006
K. Gajendran, M. Gonzales, A. Farmer, E. Archuleta, J. Win, M. Waugh and S. Kamoun.
Phytophthora functional genomics database (PFGD): functional genomics of phytophthora-plant interactions
Nucleic Acids Res. (PMID: 16381913), 34(Database issue), D465--470, 2006
S. Cannon, L. Sterck, S. Rombauts, S. Sato, F. Cheung, J. Gouzy, X. Wang, J. Mudge, J. Vasdewani, T. Schiex and others.
Legume genome evolution viewed through the Medicago truncatula and Lotus japonicus genomes
Proceedings of the National Academy of Sciences (PMID: 17003129), 103(40), 14959--14964, 2006
S. Jackson, R. Wing, G. Stacey, G. May, R. Shoemaker and others.
SoyMap: an integrated map of soybean for resolution and dissection of multiple genome duplication events.
Soybean genetics newsletter, 33, 177, 2006
D. Gessler, C. Joslyn, K. Verspoor and S. Schmidt.
Deconstruction, Reconstruction, and Ontogenesis for Large, Monolithic, Legacy Ontologies in Semantic Web Service Applications
Los Alamos: National Center for Genome Research, 2006
C. Joslyn, D. Gessler, S. Schmidt and K. Verspoor.
Distributed representations of bio-ontologies for semantic web services
B. Munneke, K. Schlauch, K. Simonsen, W. Beavis and R. Doerge.
Adding confidence to gene expression clustering
T. Yan, D. Yoo, T. Berardini, L. Mueller, D. Weems, S. Weng, J. Cherry and S. Rhee.
PatMatch: a program for finding patterns in peptide and nucleotide sequences
Nucleic Acids Res. (PMID: 15980466), 33(Web Server issue), W262--266, 2005
H. Kader, V. Tchernev, E. Satyaraj, S. Lejnine, G. Kotler, S. Kingsmore and D. Patel.
Protein microarray analysis of disease activity in pediatric inflammatory bowel disease demonstrates elevated serum PLGF, IL-7, TGF-beta1, and IL-12p40 levels in Crohn's disease and ulcerative colitis patients in remission versus active disease
Am. J. Gastroenterol. (PMID: 15667502), 100(2), 414--423, 2005
T. Kaukola, R. Herva, M. Perhomaa, E. Pääkkö, S. Kingsmore, L. Vainionpää and others.
Chorioamnionitis and cord serum proinflammatory cytokines: lack of association with brain damage and neurologic outcome in very preterm infants
Pediatr Res, 58, 1--6, 2005
K. Bailey, L. Ciannelli, N. Bond, A. Belgrano and N. Stenseth.
Recruitment of walleye pollock in a physically and biologically complex ecosystem: a new perspective
Progress in Oceanography, 67(1-2), 24--42, 2005
M. Gonzales, E. Archuleta, A. Farmer, K. Gajendran, D. Grant, R. Shoemaker, W. Beavis and M. Waugh.
The Legume Information System (LIS): an integrated information resource for comparative legume biology
T. Berardini, S. Mundodi, L. Reiser, E. Huala, M. Garcia-Hernandez, P. Zhang, L. Mueller, J. Yoon, A. Doyle, G. Lander, N. Moseyko, D. Yoo, I. Xu, B. Zoeckler, M. Montoya, N. Miller, D. Weems and S. Rhee.
Functional annotation of the Arabidopsis genome using controlled vocabularies
A. Arpat, M. Waugh, J. Sullivan, M. Gonzales, D. Frisch, D. Main, T. Wood, A. Leslie, R. Wing and T. Wilkins.
Functional genomics of cell elongation in developing cotton fibers
Plant Mol. Biol. (PMID: 15604659), 54(6), 911--929, 2004
E. Huitema, J. Bos, M. Tian, J. Win, M. Waugh and S. Kamoun.
Linking sequence to phenotype in Phytophthora-plant interactions
Trends Microbiol. (PMID: 15051070), 12(4), 193--200, 2004
A. Belgrano.
Emergent properties of complex marine systems: a macroecological perspective
Marine Ecology Progress Series, 273, 227--227, 2004
A. Farina and A. Belgrano.
The eco-field: a new paradigm for landscape ecology
Ecological Research, 19(1), 107--110, 2004
A. Belgrano, M. Lima and N. Stenseth.
Non-linear dynamics in marine-phytoplankton population systems
Marine Ecology Progress Series (PMID: 2330029), 273, 281--289, 2004
S. Pavía, C. Biles, M. Waugh, K. Waugh, G. Alvarado and C. Liddell.
Characterization of southern New Mexico Phytophthora capsici Leonian isolates from pepper (Capsicum annuum L.)
Revista Mexicana de Fitopatología, 22(1), 82--89, 2004
M. Sawkins, A. Farmer, D. Hoisington, J. Sullivan, A. Tolopko, Z. Jiang and J. Ribaut.
Comparative map and trait viewer (CMTV): an integrated bioinformatic tool to construct consensus maps and compare QTL and functional genomics data across genomes and experiments
Plant molecular biology (PMID: 15604756), 56(3), 465--480, 2004
A. Thro, W. Parrott, J. Udall and W. Beavis.
Genomics and plant breeding: the experience of the initiative for future agricultural and food systems
Crop Sci, 44, 1893--1919, 2004
D. Weems, N. Miller, M. Garcia-Hernandez, E. Huala and S. Rhee.
Design, implementation and maintenance of a model organism database for Arabidopsis thaliana
Comparative and functional genomics (PMID: 18629167), 5(4), 362--369, 2004
R. Jansen, J. Jannink and W. Beavis.
Mapping quantitative trait loci in plant breeding populations
Crop Science, 43(3), 829--834, 2003
S. Rhee, W. Beavis, T. Berardini, G. Chen, D. Dixon, A. Doyle, M. Garcia-Hernandez, E. Huala, G. Lander, M. Montoya and others.
The Arabidopsis Information Resource (TAIR): a model organism database providing a centralized, curated gateway to Arabidopsis biology, research materials and community
Nucleic Acids Research (PMID: 12519987), 31(1), 224--228, 2003
C. Tuggle, J. Green, C. Fitzsimmons, R. Woods, R. Prather, S. Malchenko, B. Soares, T. Kucaba, K. Crouch, C. Smith and others.
EST-based gene discovery in pig: virtual expression patterns and comparative mapping to human
Mammalian Genome (PMID: 12925889), 14(8), 565--579, 2003
M. Wilkinson, D. Gessler, A. Farmer and L. Stein.
The BioMOBY project explores open-source, simple, extensible protocols for enabling biological database interoperability
D. Wlodek and M. Gonzales.
Decreased energy levels can cause and sustain obesity
Journal of theoretical biology (PMID: 14559057), 225(1), 33--44, 2003
W. Beavis.
QTL mapping in plant breeding populations
Google Patents, 2002
N. Stenseth, A. Mysterud, G. Ottersen, J. Hurrell, K. Chan and M. Lima.
Ecological effects of climate fluctuations
Science, 297(5585), 1292--1296, 2002
D. Gessler.
ISYS (Integrated SYStem): a platform for integrating heterogeneous bioinformatic resources
International Journal of Genomics (PMID: 18628843), 3(2), 169--175, 2002
M. Lee, N. Sharopova, W. Beavis, D. Grant, M. Katt, D. Blair and A. Hallauer.
Expanding the genetic map of maize with the intermated B73 × Mo17 (IBM) population
Plant molecular biology (PMID: 11999829), 48(5-6), 453--461, 2002
Y. Vigouroux, J. Jaqueth, Y. Matsuoka, O. Smith, W. Beavis, J. Smith and J. Doebley.
Rate and pattern of mutation at microsatellite loci in maize
Molecular Biology and Evolution (PMID: 12140237), 19(8), 1251--1260, 2002
G. Ottersen, B. Planque, A. Belgrano, E. Post, P. Reid and N. Stenseth.
Ecological effects of the North Atlantic oscillation
Oecologia, 128(1), 1--14, 2001
C. Bell, R. Dixon, A. Farmer, R. Flores, J. Inman, R. Gonzales, M. Harrison, N. Paiva, A. Scott, J. Weller and G. May.
The Medicago Genome Initiative: a model legume database
A. Siepel, A. Farmer, A. Tolopko, M. Zhuang, P. Mendes, W. Beavis and B. Sobral.
ISYS: a decentralized, component-based approach to the integration of heterogeneous bioinformatics resources
Bioinformatics (PMID: 11222265), 17(1), 83--94, 2001
C. Harger, G. Chen, A. Farmer, W. Huang, J. Inman, D. Kiphart, F. Schilkey, M. Skupski and J. Weller.
The genome sequence DataBase
Nucleic Acids Res. (PMID: 10592174), 28(1), 31--32, 2000
G. Seluja, A. Farmer, M. McLeod, C. Harger and P. Schad.
Establishing a method of vector contamination identification in database sequences
Bioinformatics (PMID: 10089195), 15(2), 106--110, 1999
About NCGR
The National Center for Genome Resources is a not-for-profit research institute that innovates, collaborates, and educates in the field of genomic data science. As leaders in DNA sequence analysis, we partner with government, industry, and academia to drive biological discovery in all kingdoms of life. We deliver value through expertise in experimental design, software, computation, data integration and training a skilled workforce.
[email protected]
National Center for Genome Resources
2935 Rodeo Park Dr E
© 2019 National Center for Genome Resources. Privacy Policy | Terms of Use
The National Center for Genome Resources, a New Mexico nonprofit corporation ("Company," "NCGR," or "we"), in the operation of this website, www.ncgr.org (the "Website") is committed to protecting the privacy of its online visitors (collectively, "our users" or, as applicable to yourself and your organization, "you").
We believe that maintaining privacy on the Web is very important and hope you will read this Privacy Policy carefully so that you will clearly understand both our commitment to you and your privacy and our method of collecting and using information.
This Privacy Policy only applies to transactions made, and data gathered, on this Website and associated email service, and does not apply to any other website.
By using this Website, communicating with us by email, or by otherwise submitting personal information to us, you agree to the terms of this Privacy Policy and give your consent to the collection, storage and use of personal information as explained in this Privacy Policy.
The term "personal information" refers to non-public information that personally relates to or identifies you, such as your name, password, age, gender, email address, postal mailing address, zip code, home/mobile telephone number, Social Security number and/or taxpayer identification number, and other similar information. If we combine or associate information from other sources with personal information that you provide directly to us through or in connection with our services, we will treat the combined information as personal information in accordance with this Privacy Policy.
Other information not treated as personal that may be collected on the Website may include, without limitation, website pages viewed, sites visited before visiting this Website, frequency of visits, clickstream data, browser type, operating system, organization name, articles, internet connection speed, presentations viewed, time spent viewing pages of our website or using certain features of our website, demographic data such as server locations, location services, cookies existing on your computer, search criteria used and results, date and time of access or visits to our website, and other information which does not specifically identify you.
We use commercially reasonable best efforts to maintain the confidentiality, integrity and security of our users' personal information. Keeping user information secure, and using it only as our users agree, are matters of principle for NCGR. With this in mind, here is our commitment to each user:
We will restrict employee access to user information to those who need to know in order to provide services to you;
We will maintain commercial reasonable and customary security standards and procedures to protect information about you; and
We will respond quickly to your request to correct inaccurate information.
UPDATES AND CHANGES TO PRIVACY POLICY
We may revise our Privacy Policy at any time. In the event of a change in this Privacy Policy, a revised Privacy Policy will promptly be posted to our Website, and an "Updated" date will be provided. Please revisit this page to familiarize yourself with changes to the Privacy Policy. You agree to accept posting of a revised Privacy Policy as described herein as actual notice to you of such revised Privacy Policy. Your continued use of our Website after such posting constitutes consent to the collection and use of your information as described in the then-current Privacy Policy.
HOW AND WHY WE GATHER INFORMATION
If you register for an account for use of this Website, we collect certain personal information from you to open an account, transact business, communicate with you, verify your identity and fulfill legal and regulatory requirements. From time to time, we may request additional information (e.g., through surveys) to help us further assess your needs and preferences. If you choose to provide such information, during registration or otherwise, you are giving us permission to collect, store and use it consistent with this Privacy Policy.
We may also obtain your personal information from your transactions with us or other users through the services provided through this Website, or from third parties.
We may send email newsletters to our registered users. Registered users may subscribe or unsubscribe to an email newsletter at any time by changing their email preferences. On occasion, we may send emails to individuals who have provided us with their email address, informing them of events or promotions we think might be of interest to them. Email recipients may always opt out of any email category at any time by following the unsubscribe instructions included in the email message.
Emailing Us: We try to respond to email messages requiring a response in accordance with our internal policies. If you email us, your message and email address will be forwarded to the appropriate individual within NCGR. We may choose to save this information. We are pleased to hear from you. However, any message, material, information, ideas, concepts or other information sent to us by email will be treated as non-confidential and non-proprietary, and we will not be liable for delays or omissions in receiving or responding to email.
COOKIES AND BEACONS
When you visit the Website, whether or not you register for an account, we may send one or more cookies. "Cookies" are small text files containing a string of alphanumeric characters that may be placed on your web browser. Cookies make it easier for you to navigate our Website by, among other things, "remembering" your identity so that you do not have to input your password multiple times as you navigate between webpages on the Website and/or as you access the Website. This use of cookies for authentication (i.e., verifying that you are who you say you are) is an essential component of site security. You can set your web browser to inform you when cookies are set or to prevent cookies from being set.
We may also utilize web beacons and server logs. A "web beacon" is typically a transparent graphic image (usually 1 pixel x 1 pixel) that is placed on a site or in an email which allows the website to record the simple actions of the user opening the page that contains the beacon. "Server logs" can be either a single log file or several log files automatically created and maintained by a server of activity performed by the server, which can include information about any transaction you conduct with the server.
Please note that if you decline to use cookies, you may experience reduced functionality or slower site response times. Declining to use our authentication-related cookies may prevent you from using the Website altogether. You may also clear cookies from your computer via your web browser settings. You may also wish to use a Google Analytics opt-out web browser add-on. Information on this option is available at: http://support.google.com/analytics/bin/answer.py?hl=en&answer=2700409.
We or our service providers may also collect web surfing data related to your use of our services (e.g., information regarding which of our web pages you access, the frequency of such access, and your product and service preferences). This may be accomplished by using cookies, web beacons, page tags or similar tools. Such web surfing data may include your Internet Protocol (IP) address, browser type, internet service provider (ISP), referring or exit pages, click stream data, operating system and the dates and times that you visit the Website. Web surfing data and similar information may be used: for administrative purposes; to assess the usage, value and performance of our online products and services; and to improve your experience with our services.
HOW INFORMATION HELPS BOTH YOU AND US
Information about you plays a key role in our ability to succeed in our activities. It also helps us service your accounts and administer our activities. For example, we may use information about you to:
Provide you with the services available through the Website;
Establish and set up your account, issue an account number, and maintain your transaction history;
Respond more accurately and efficiently to your requests;
Identify opportunities to give you more convenience and control by developing new services that may benefit you;
Verify your identity and contact information to help protect you and us from fraud;
Comply with certain legal and regulatory requirements.
We generally use the information that you provide or that we collect to establish and enhance our relationship with you, and to operate, maintain, enhance, and provide our services and the Website. We may use such information to periodically provide you with information, newsletters or offers about features and services available through our services that may be of interest to you. We may also send information or offers to groups of Users on behalf of other Users or third parties. However, you will have the option to opt out of receiving such information.
We use aggregated information provided by or collected from our users to understand and analyze the usage trends and preferences of our users, to improve the way our services and the Website work and look, and to create new features and functionality. We may use aggregated information about our Users to generate statistics and maps that we deem helpful in the provision of our services or any part thereof. We do not and will not sell analytics that identify personal information of our users without written consent from the user.
You may be able to log in to the Website using services such as Linkedin or Facebook Connect. These services will authenticate your identity and provide you the option to share certain personal information with us such as your name and email address to pre-populate our sign up form. Services like Facebook Connect give you the option to post information about your activities on this Website to your profile page to share with others within your network.
Our Website may also include Social Media Features such as the Facebook Like button, Twitter, and share buttons. These Features are interactive mini-programs and may collect your IP address, which page you are visiting on our site, and may set a cookie to enable the feature to function properly. Social media features and widgets are either hosted by a third party or hosted directly on our Website. Your interactions with these Features are governed by the Privacy Policy of the company providing them.
HOW AND WHY OUR COMPANY DISCLOSES YOUR INFORMATION TO THIRD PARTIES
We limit the sharing of personal information outside our Company. We do not sell, license, lease, or otherwise disclose your personal information to third parties, except as noted below:
We may disclose your personal information when such disclosure is legally required or appropriate pursuant to any court orders, subpoenas or any regulations, including responding to court orders and subpoenas, cooperating with government agencies, other regulatory bodies, and law enforcement officials, performing background checks, resolving disputes or performing risk-management functions.
We may share personal information with other companies and organizations who perform work for us under contract or sell products or services that complement our products and services.
To help us improve our services, we may engage third parties to help us to carry out certain internal functions such as information management, account processing, client services, or other data collection relevant to our activities. Examples of the types of outside companies with which we may share information include companies that perform services for us, such as data processing, and companies that perform services on our behalf. Use of any personal information we share with these third parties is limited to the performance of the task we request. The third parties with which we share personal information are required to protect it in a manner similar to the way we protect your personal information.
We may make certain automatically collected information about your interactions and activities on our services, such as the areas of interest to you, and your past activities, projects or proposals, publicly available on or through the services. Any such publicly available information will be accessible by other users.
In the event that our Company is acquired by or merged with a third-party entity, we reserve the right to transfer or assign the information that we have collected from users as part of such merger, acquisition, sale, or other change of control. In such cases, we will provide you notice before your personal information is transferred to and becomes subject to the Privacy Policy of a different entity.
We further reserve the right to disclose any of your personal information as we believe appropriate or necessary to take precautions against liability, to investigate and defend against any third party claims or allegations, to assist government enforcement agencies, to protect the security or integrity of the services, or to protect the rights, property or personal safety of our Company, our users, or others.
We may also use and disclose personally identifiable information and non-personally identifiable information: to investigate and help prevent potentially unlawful activity or activities that threaten the integrity of our website or network; to protect and defend our rights or property or the rights or property of others; as required by courts or administrative agencies; and in connection with a financing, sale, merger, or reorganization of our activities or assets.
Any successor in interest to our activities would acquire the information we maintain, including personally identifiable information, and may alter the terms of this Privacy Policy.
We do not and will not sell any of your information, including non-personally identifiable or personally identifiable information, to any third party for purposes of advertising, soliciting, or telemarketing. We may use a third party service to collect anonymous visitor information like IP addresses, browser types (such as Firefox), referring pages, pages visited and time spent on a particular service or feature. We collect this information for statistical analysis of web page traffic patterns; to administer our service and servers; to allow for auditing of our services by some third parties who have that right; and for internal purposes to make marketing decisions.
OUR PRIVACY POLICY DOES NOT APPLY TO THIRD-PARTY ACTIVITIES OR SITES
The Website may provide links to third-party websites for your convenience and information. If you access those links, you will leave our Website. Any information submitted by you to a third party, will be controlled by that third party's Privacy Policy, which may differ from our own. This Privacy Policy does not cover the collection of information by cookies or other methods by such third party services or other third parties. We do not control how these third party services or third parties collect information or by what means such third party services or third parties may use their own cookies to collect information about you. We do not endorse, screen, or approve, and are not responsible for the privacy practices or the content of, other websites or services. We encourage you to review the privacy policy of any company before submitting your personal information.
KEEPING INFORMATION SAFE
We limit access to your personal information to those employees who have a legitimate need to access such information to provide you with our services. In keeping with industry standards and practices, we maintain commercially reasonable physical, electronic and procedural safeguards and controls to protect your information. The Website is built upon a secure infrastructure with multiple layers of protection and we use industry standard encryption technologies to safeguard your information.
We have security standards and procedures in place designed to prevent unauthorized access to your accounts and personal information. A key part of this process helps ensure that all information we have about you is accurate and up-to-date. If you ever discover inaccuracies in our data or if your personal information changes, notify us immediately.
The transmission of information via the Internet is not completely secure, and for this reason we cannot guarantee the security of information sent to us electronically. If we learn of a security systems breach, we may attempt to notify you electronically so that you can take appropriate protective steps. We may also post a notice on or through the Website in the event of a security breach. Depending on where you live, you may have a legal right to receive notice of a security breach in writing.
We may at times send you e-mail communications with marketing or promotional materials. If you prefer not to receive such marketing or promotional e-mails from us, you may unsubscribe completely by emailing us at: [email protected].
Please note that opt-out requests may take up to twenty-four (24) hours to process. Please also note that at times we may need to send you e-mail communications that are transactional in nature such as service or termination announcements or payment confirmations which are communications you will not be able to opt-out of.
If you would like us to remove your Personally Identifiable Information from our database, please send a request to: [email protected].
We are not responsible for removing your personal information from the lists of any third-party services or other third party who has previously been provided your information in accordance with this notice.
REVIEWING, CHANGING OR CORRECTING INFORMATION
You are solely responsible for helping us to maintain the accuracy and completeness of your personal and other information. We urge you to review your information regularly to ensure that it is correct and complete. If you believe that any of your information is incorrect, or if you have any questions regarding this Privacy Policy, please contact us.
You may, of course, decline to share any or all of your personal information with us or ask us to delete your personal information from our systems at any time, in which case we may not be able to provide to you some of the features and functionality found on or through the Website.
We do not knowingly collect personal information on or through our services from persons under 13 years-of-age without their legal guardian's consent. If we learn that personal information has been collected from persons under 13 years-of-age through our services without their legal guardian's consent, then we will take the appropriate steps to delete this information. If you are a parent or guardian and discover that your child under the age of 13 has a registered account with the services or we have received any personally identifiable information of that child without your consent, you may alert us by contacting us.
Our services may be accessed and/or used by users located outside the United States in accordance with our Terms of Use and other policies and procedures posted on the Website. If you choose to use the services from the European Union or other regions of the world with laws governing data collection and use that may differ from U.S. law, then please note that you are transferring your personal information outside of those regions to the United States and by providing your personal information on or through the Website you consent to that transfer.
This Privacy Notice shall be governed by, construed and enforced in accordance with the laws of the State of New Mexico, United States of America, applicable to contracts deemed to be made within such state, without regard to choice of law or conflict of law provisions thereof. All disputes with respect to this Privacy Notice shall be brought and heard either in the state or federal courts in and for the state of New Mexico, United States of America. You consent to the in personam jurisdiction and venue of such courts. YOU HEREBY WAIVE YOUR RIGHT TO A TRIAL BY JURY WITH RESPECT TO ANY CLAIM, ACTION OR PROCEEDING, DIRECTLY OR INDIRECTLY, ARISING OUT OF, OR RELATING TO, THIS AGREEMENT TO THE FULLEST EXTENT PERMITTED BY LAW.
No data transmissions over the Internet can be guaranteed to be 100% secure. Consequently, we cannot ensure or warrant the security of any information you transmit to us and you understand that any information that you transfer to our Company is done at your own risk.
We make reasonable efforts to ensure security on our systems. We have put in place appropriate physical, electronic and administrative procedures to safeguard and secure the information we collect online. However, please note that this is not a guarantee that such information may not be accessed, disclosed, altered or destroyed by breach of our systems.
If we learn of a security systems breach we may attempt to notify you electronically so that you can take appropriate protective steps. By using the service or providing personally identifiable information to us you agree that we can communicate with you electronically regarding security, privacy and administrative issues relating to your use of our services. We may post a notice on our Webpage if a security breach occurs. We may also send an email to you at the email address you have provided to us in these circumstances. Depending on where you live, you may have a legal right to receive notice of a security breach in writing.
If you have any questions regarding this Privacy Policy, please contact: [email protected]
The National Center for Genome Resources ("NCGR") website (the "Website") is owned and managed by The National Center for Genome Resources, a New Mexico nonprofit corporation ("Owner," or "we" or "us").
THE PURPOSE OF THE WEBSITE IS TO SHARE INFORMATION WITH A PROFESSIONAL AND BUSINESS AUDIENCE CONCERNING GENOMIC DATA SCIENCE.
These Terms of Use are an agreement between Owner and each person who accesses or uses the Website (a "User," or "you"). By accessing, browsing, or using this service, User agrees to be bound by these Terms of Use. Owner may at any time revise these Terms of Use by updating this posting. User's continued use of the service following the posting of notice of a change will confirm User's acceptance of the change. IF YOU DO NOT ACCEPT THESE TERMS & CONDITIONS IN THEIR ENTIRETY PLEASE LEAVE THE SITE NOW.
Owner undertakes to provide accurate and up-to-date information it posts on the Website. However, each User should understand that all information obtained using the Website is subject to change, does not constitute professional advice or opinions, has not been tested or proven, and is provided "AS IS." Accordingly:
THE USE OF THE INFORMATION AND ANY SUBMISSIONS MADE AVAILABLE ON THE WEBSITE IS AT YOUR OWN RISK;
Owner shall not be responsible or liable for the accuracy, usefulness or availability of any information transmitted or made available via the Website, and User is solely responsible for any decisions made based on such information.
The passage of time can render information contained in, or linked to, the Website stale. Owner is not responsible for any misimpressions which may result from the use of dated material. Owner does not undertake any duty to verify, update, supplement, correct, comment upon or modify any information contained in the Website or any other site to which it is linked.
You may restrict access to this site using content filtering software or by changing your computer settings (e.g. browser or operating system), as appropriate.
RULES AND RESTRICTIONS ON SUBMISSIONS
1. The Website may contain interactive services, including discussion groups, news groups, bulletin boards, chat rooms, blogs and other social networking features, such as the display of linked content from other sites, which may allow you to post, transmit or submit information, including writings, images, illustrations, audio recordings, and video recordings ("Submissions"). Users should exercise common sense and courtesy with all Submissions. Inappropriate Submissions, which Owner prohibits, would include, for example, comments or materials that:
Are false or untrue in any material particular, or that, irrespective of falsity, directly, or by ambiguity, omission, or inference, or by the addition of irrelevant, scientific or technical matter, tend to create a misleading impression;
Abuse, harass, stalk, threaten or otherwise violate the legal rights (such as rights of privacy and publicity) of others;
Are obscene or indecent;
Upload or attach files that contain software or other material protected by intellectual property laws (or by rights of privacy or publicity) unless you own or control the rights thereto or have received all necessary consents;
Upload or attach files that contain viruses, corrupted files, or any other similar software or programs that may damage the operation of another's computer;
Delete any author attributions, legal notices or proprietary designations or labels in any file that is uploaded;
Falsify the origin or source of software or other material contained in a file that is uploaded; or
Violate applicable federal, state or local laws or regulations.
2. Submissions may not include content or materials that violate the copyrights, trademark rights or other intellectual property rights of third parties.
3. Submissions may not contain unauthorized disclosures of proprietary or confidential information. DO NOT POST PERSONALLY IDENTIFIABLE INFORMATION (AS IMAGES OR TEXT) TO PUBLICLY VIEWABLE AREAS UNLESS YOU HAVE THE RIGHT TO DO SO AND CONSENT TO THE PUBLICATION OF THAT INFORMATION.
4. Users may not use the Website in a manner or for a purpose that could violate federal or state laws.
5. The purpose of the Website is strictly informational. Submissions may not include advertisements, promotions, solicitations, "spam," chain letters, surveys, marketing arrangements or the like.
6. Submissions may not include false or misleading representations of affiliation with any other person or entity. A User may not employ false identifiers to impersonate any person or entity or to misrepresent or disguise the true origin of any content.
7. Owner believes that anonymous postings are neither necessary for purposes of the Website, nor consistent with the professional level of dialogue expected. Therefore, the User's name (which must be provided) will be posted with the Submission.
8. Owner does not endorse, recommend or guarantee any Submission or any opinion, recommendation or advice expressed therein.
9. Users may not interfere with or disrupt the Website, networks or servers connected to the Website, such as by attempting to probe, scan or test the vulnerability of a system or network or to breach security or authentication measures, attempting to interfere with service to any user, host or network, such as by overloading, "flooding," "spamming," "mailbombing" or "crashing," sending unsolicited email, including promotions and/or advertising of products or services, or forging any TCP/IP packet header or any part of the header information in any email or newsgroup posting, or otherwise violating the regulations, policies or procedures of such networks or servers.
10. Users may not employ any type of bots that can disrupt the normal flow of dialogue, cause a screen to "scroll" faster than other users of the Website are able to type, show multiple screens, or otherwise act in a manner that negatively affects other users' ability to engage in real time exchanges.
11. Users may not use or attempt to use any engine, software, tool, agent or other device or mechanism (including browsers, spiders, robots, avatars or intelligent agents) to navigate or search the Website other than the search engine and search agents which Owner makes available on the Website and generally available third-party web browsers.
OWNER'S RIGHT TO MONITOR AND ADMINISTER THE WEBSITE
Owner reserves the right to monitor and administer the Website and, in its sole discretion, to block or remove any content posted to the Website. Owner reserves the right to disallow the use of any particular screen name or e-mail address, or to suspend or terminate any User's access or posting privileges at any time, including individual and group postings. While Owner reserves the right to do so, Owner gives no assurance it does or will do so, as it is not Owner's intention to edit or control content generally.
ACCOUNTS AND SECURITY
To use certain areas of the Website or related features, you may be asked to create an account with a username and password. You agree that the account information you provide us will be and remain accurate. You are responsible for maintaining the strict confidentiality of your account username and password. You are responsible for any activity using your account and password.
Owner does not review all of the sites linked to its Website ("Linked Sites") and is not responsible for the contents of any such Linked Site ("Linked Content"). The inclusion of any link does not imply endorsement by Owner of the Linked Site. Use of any such Linked Site or Linked Content is at the User's own risk.
To use some of the functionality of the Website you may be required to establish an account with a username and password with Linked Sites. As these are unaffiliated sites, we are not responsible for any username, password, or other information these sites may collect. If you are unable to establish accounts on these Linked Sites for any reason, you may not be able to fully utilize the functionality provided by the Site.
In addition to these Terms of Use, if Linked Content is displayed on the Website, the use thereof may be subject to separate terms of use provided by the Linked Site.
OWNERSHIP OF THE WEBSITE AND ITS CONTENTS AND ASSOCIATED TRADEMARKS
The brands, logos and names of specific products or services provided by third parties are owned by those third parties.
The content of the Website, including postings originated by Owner or its providers, and related artwork, graphics and photographic or audiovisual works, are protected by copyright and owned by Owner or third parties who provide such content to Owner. Except as permitted under U.S. Copyright laws, the Website and its contents may not be copied, reproduced, republished or sold, posted, transmitted, distributed, modified, or used for the creation of derivative works without Owner's or applicable third parties' prior written consent.
Owner, to the extent it owns such rights, grants User a non-exclusive, non-transferable, limited license to access, display and/or download a limited and temporary amount of content and pages available within the Website, solely on User's computer and for User's individual use.
Except for the permission expressly granted above to access, display, copy and download the Owner content and pages available within the Website, Owner reserves all rights in the Website and its contents. User may not "mirror" any content or information contained on the Website without Owner's prior written consent. User may not create links to the Website from other sites without Owner's prior written consent and compliance with all applicable laws.
Owner does not claim ownership of or copyrights in User Submissions. User understands that Submissions are not confidential and Owner will be free (without compensation to User) to use or disseminate such Submissions on an unrestricted basis for any purpose. User agrees that Submissions may be published, displayed, copied, distributed, downloaded, or transmitted by Owner or other Website participants, and User grants Owner and all other Users of the Website an irrevocable, unrestricted, perpetual, worldwide, royalty-free license to use, copy, reproduce, display, publish, distribute, transmit, adapt, modify or use for the creation of derivative works (including in digital form) such Submissions.
OWNER MAKES NO REPRESENTATION OR WARRANTY RELATING TO THE WEBSITE OR ANY INFORMATION AVAILABLE ON THE WEBSITE; ALL EXPRESS OR IMPLIED WARRANTIES OF ANY KIND, INCLUDING WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, ARE HEREBY DISCLAIMED.
OWNER MAKES NO WARRANTY THAT THE WEBSITE, OR ANY COMPUTER, SERVER, DEVICE, SOFTWARE, OR OTHER TECHNOLOGY ASSOCIATED WITH THE WEBSITE, IS FREE OF VIRUSES, WORMS, OR OTHER ELEMENTS OR CODES THAT MANIFEST CONTAMINATING OR DESTRUCTIVE PROPERTIES. SOME JURISDICTIONS DO NOT ALLOW THE EXCLUSION OF WARRANTIES, SO THE ABOVE EXCLUSIONS MAY NOT APPLY TO YOU.
NEITHER OWNER NOR ITS OFFICERS, DIRECTORS, EMPLOYEES, AGENTS, OR REPRESENTATIVES SHALL BE RESPONSIBLE FOR ANY LOSS OR DAMAGE OF ANY KIND ARISING FROM OR IN ANY WAY RELATING TO (A) THE USE OF OR INABILITY TO USE THE WEBSITE, (B) ERRORS IN OR OMISSIONS FROM WEBSITE CONTENT OR ANY LINKED CONTENT, (C) ANY THIRD PARTY WEB SITES OR CONTENT THEREIN DIRECTLY OR INDIRECTLY ACCESSED THROUGH LINKS IN ANY WEBSITE CONTENT OR LINKED CONTENT, (D) THE UNAVAILABILITY OF THE WEBSITE OR ANY LINKED SITE, OR (E) ANY USE OF THE WEBSITE OR ANY LINKED SITE OR RELIANCE BY THE USER ON ANY INFORMATION OR CONTENT CONTAINED THEREIN.
OWNER SHALL NOT BE LIABLE TO ANY USERS OR ANY OTHER PERSON FOR ANY DIRECT, SPECIAL, INDIRECT, INCIDENTAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES ARISING FROM OR IN ANY WAY RELATING TO THE FOREGOING, EVEN IF THEY HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES OCCURRING.
IN NO EVENT MAY USER BRING ANY CLAIM OR CAUSE OF ACTION AGAINST OWNER MORE THAN ONE (1) YEAR AFTER SUCH CLAIM OR CAUSE OF ACTION ARISES.
REPRESENTATIONS BY USERS; INDEMNIFICATION
USER IS SOLELY RESPONSIBLE FOR THE CONTENTS OF HIS/HER SUBMISSIONS TO THE WEBSITE. USER REPRESENTS THAT HE/SHE HAS ALL RIGHTS NECESSARY TO POST THE INFORMATION, CONTENT OR MATERIALS SUBMITTED TO THE WEBSITE WITHOUT VIOLATING THE COPYRIGHTS OR ANY OTHER INTELLECTUAL PROPERTY RIGHTS OF THIRD PARTIES, AND THAT NO SUBMISSION BY THE USER WILL VIOLATE APPLICABLE LAWS OR THE RIGHTS OF THIRD PERSONS. USER HEREBY INDEMNIFIES AND AGREES TO HOLD HARMLESS OWNER AND ITS OFFICERS, DIRECTORS, SHAREHOLDERS, EMPLOYEES, AGENTS AND REPRESENTATIVES FROM ANY AND ALL CLAIMS ASSERTED AGAINST THEM AND ANY LIABILITY, LOSS, DAMAGE, COSTS OR EXPENSES (INCLUDING REASONABLE ATTORNEYS' FEES) INCURRED OR SUFFERED BY THEM IN CONNECTION WITH OR ARISING OUT OF USER'S ACTS OR OMISSIONS OR THE MATERIALS OR INFORMATION USER SUBMITS TO THE WEBSITE.
CHOICE OF LAW AND CONSENT TO FORUM
This Agreement will be interpreted and governed in all respects by the laws of the State of New Mexico, United States of America, without regard to its choice of laws rules. Any dispute arising under these Terms of Use or in connection with the User's use of the Website shall be subject to the exclusive jurisdiction of either the state or federal courts in and for New Mexico, and User and Owner hereby consent to the personal jurisdiction of such courts over them.
RESTRICTION, SUSPENSION AND TERMINATION
We may restrict, suspend, or terminate your access to the Site and/or your ability to avail of any of the services on the Website, including interactive services, at any time if we believe that you have breached these Terms of Use. Any such restriction, suspension, or termination will be without prejudice to any rights that we may have against you in respect of your breach of these Terms of Use. We may also remove the Website as a whole or any sections or features of the Website at any time.
Please note that we have the ability to trace your IP address and if necessary contact your internet service provider in the event of a suspected breach of these Terms of Use.
PROCEDURES FOR REQUESTING THE REMOVAL OF INFRINGING MATERIAL
Owner respects the intellectual property of others. If a person or organization (a "Copyright Owner") believes that his, her or its work has been copied on the Website in a way that constitutes copyright infringement, the Copyright Owner should provide Owner with the written information specified below. Please note that this procedure is exclusively for the Copyright Owner to notify Owner that copyrighted material of the Copyright Owner has been infringed. If a Copyright Owner believes that its own copyrighted work has been copied in a way that constitutes copyright infringement and is accessible via the Website, the Copyright Owner may notify Owner's copyright agent in accordance with the following procedures and the Digital Millennium Copyright Act of 1998 (DMCA).
For a complaint to be valid under the DMCA, the Copyright Owner must provide the following information when providing notice of the claimed copyright infringement: (1) a physical or electronic signature of a person authorized to act on behalf of the Copyright Owner; (2) identification of the copyrighted work or other intellectual property that the Copyright Owner owns and claims to have been infringed; (3) identification of the material that the Copyright Owner claims is infringing as well as information reasonably sufficient to permit Owner to locate the material on the Website; (4) the address, telephone number, and e-mail address of the Copyright Owner or its designee; (5) a statement by the Copyright Owner that the Copyright Owner as the complaining party has a good faith belief that use of the material in the manner complained of is not authorized by the Copyright Owner, its agent, or the law; and (6) a statement, made under penalty of perjury, that the information in the notification is accurate, and that the complaining party is authorized to act on behalf of the Copyright Owner of an exclusive right that is allegedly infringed. The foregoing information must be submitted as a written notification to the following Designated Agent:
Site: www.ncgr.org
Designated agent for infringement claims: Callum Bell, President
2935 Rodeo Park Drive East
Telephone of designated agent: 505-982-7840
E-mail of designated agent: [email protected]
WE CAUTION YOU THAT UNDER FEDERAL LAW, IF YOU KNOWINGLY MISREPRESENT THAT ONLINE MATERIAL IS INFRINGING, YOU MAY BE SUBJECT TO CIVIL PENALTIES. THESE INCLUDE MONETARY DAMAGES, COURT COSTS, AND ATTORNEYS' FEES INCURRED BY US, BY ANY COPYRIGHT OWNER, OR BY ANY COPYRIGHT OWNER'S LICENSEE THAT IS INJURED AS A RESULT OF OUR RELYING UPON YOUR MISREPRESENTATION. YOU MAY ALSO BE SUBJECT TO CRIMINAL PROSECUTION FOR PERJURY.
This information should not be construed as legal advice. For further details on the information required for valid DMCA notifications, see 17 U.S.C. 512(c)(3).
NOTE: This information is provided exclusively for notifying the service providers referenced above that the Copyright Owner's own copyrighted material(s) might have been infringed. All other inquiries, including technical requests, reports of e-mail abuse and third party reports of piracy, will not receive a response through this process.
HOW TO CONTACT OWNER
Questions regarding the Terms of Use or any other aspect of the Website can be submitted in writing to: [email protected]
CellEngine Bar, Line and Box Plots
Bar and Line Charts, Box Plots and Heatmaps¶
These statistics plots use a setup interface similar to pivot tables. However, unlike pivot tables, any number of FCS files can be displayed per pivot position—ideal for summarizing replicate data.
Bar and line charts and heatmaps show the arithmetic mean of all matching files, along with either the standard deviation or standard error of the mean (see replicate data).
Box plots show the median line flanked by the first and third quartile, with whiskers spanning ±1.5 times the inter-quartile range.
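The quartile and whisker conventions above can be sketched in a few lines (an illustrative computation, not CellEngine's internal code; `box_plot_stats` is a hypothetical helper name):

```python
import statistics

def box_plot_stats(values):
    """Box-plot summary statistics: median, first and third quartiles,
    and whiskers spanning +/- 1.5 times the inter-quartile range (IQR)."""
    q1, median, q3 = statistics.quantiles(values, n=4)  # quartile cut points
    iqr = q3 - q1
    return {
        "q1": q1,
        "median": median,
        "q3": q3,
        "whisker_low": q1 - 1.5 * iqr,
        "whisker_high": q3 + 1.5 * iqr,
    }

stats = box_plot_stats([2.0, 3.5, 4.0, 4.5, 5.0, 6.5, 9.0])
print(stats["median"])  # 4.5
```

Note that several quartile conventions exist; `statistics.quantiles` defaults to the "exclusive" method, which may differ slightly from the one a given plotting tool uses.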
Zero and indeterminate values
An FCS file will be omitted from a plot in the following scenarios:
When showing channel statistics (e.g., means or medians) and the file has no events in the selected gate.
When showing the geometric mean and the file has events with negative values or zeros, in which case the geometric mean cannot be calculated.
Additionally, box plots omit FCS files when the plot is log-scaled and the statistic is negative or zero. Bar and line charts and heatmaps include zeros and negatives when calculating the mean of multiple files, but the mean will not be displayed if it is negative or zero.
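To see why the geometric mean forces such files to be omitted, here is a minimal sketch (illustrative only, not CellEngine's implementation): the geometric mean is the exponential of the mean of the logs, and the log is undefined for zeros and negative values.

```python
import math

def geometric_mean(values):
    """Geometric mean via exp(mean(log(x))); undefined if any x <= 0."""
    if any(v <= 0 for v in values):
        return None  # such a file would be omitted from the plot
    return math.exp(sum(math.log(v) for v in values) / len(values))

print(geometric_mean([1.0, 10.0, 100.0]))  # 10.0 (up to float rounding)
print(geometric_mean([5.0, 0.0, 20.0]))    # None: a zero makes log undefined
```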
Box plot¶
This example shows the frequencies of several cell types for five different species, based on several dozen biological replicates per species. A single dot is shown for each donor. The box shows the lower quartile, median and upper quartile. The whisker spans ±1.5 times the interquartile range.
Insert a box plot using the button in the toolbar.
Set up the pivot table dimensions as described in the pivoting model. In the example above, the settings are as follows:
Axis Labels Populations: pDCs, CD14+ Monocytes, CD16+ Monocytes, cDCs, NK Cells, CD4+ T cells, CD8+ T cells
Legend Entries (Data Series) Species: African green monkey, Cyno, Human, Mouse, Rhesus
Select the statistic and scaling in the sidebar. In the example above, "Percent of" and "Singlets" are selected, and scaled by Log10 to improve visibility of the large range of values.
Normalized heatmap¶
This example shows the degree of change in phospho-P38 in response to four stimuli in six cell types.
Insert a heatmap using the button in the toolbar.
Columns Populations: CD4+ T cells, CD8+ T cells, CD19+, CD33+, NKs, pDCs
Rows Condition: IL2/GMCSF, IL10, PMA, LPS, unstim
Select the statistic and scaling in the sidebar. In this example:
Statistic Median
Channel pP38
Scaling Scaled difference. This is a common normalization method with CyTOF data akin to fold-change in fluorescence data.
Normalize to Top Row. This makes the top row have the value 0; all other rows are normalized relative to the top row.
See Scaling and Normalized Plotting for more information about the last two settings.
Scaling and Normalized Plotting¶
When looking at changes in data, such as cell signaling before and after treatment or changes in population frequencies over time, you can display normalized data in heatmaps, bar charts and line charts.
Set up your plot as described in the pivoting model. For example, to look at signaling markers under various stimulation conditions in a heatmap, set the row annotations to your cell signaling readouts and the column annotations to your stimulation conditions.
Select a normalization method from the scaling & normalization selector. For fluorescence-based signaling experiments, Log2 Ratio is common. See the table of scaling and normalization methods below for more information.
Select values to which to normalize the visualization. For example, if your unstimulated condition is in the left-most column, select Left Column. You may need to manually adjust the sorting order of your annotations so that your normalize-to values are in an appropriate position.
The possible scaling and normalization methods are as follows:
| Method | Formula | Description and Use Cases |
| --- | --- | --- |
| Raw | \( x \) | The unmodified value. Commonly used for population frequencies (event counts or percentages). |
| Raw Fold | \( \frac{x}{c} \) | Fold change without scaling. |
| Raw Difference | \( x - c \) | Use instead of raw fold when the control value is near zero, in which case dividing by a small number amplifies the experimental value. |
| Scaled | \( \operatorname{Scale}(x) \) | For channel statistics only. Uses the channel's scale. This shows the value on the same scale used in flow plots (e.g. gating) and may thus be more approachable. |
| Scaled Difference | \( \operatorname{Scale}(x) - \operatorname{Scale}(c) \) | For channel statistics only. Uses the channel's scale. Commonly used instead of log2 ratio for CyTOF signaling experiments because unstimulated signaling markers tend to be near zero. |
| Scaled Ratio | \( \operatorname{Scale}(x) / \operatorname{Scale}(c) \) | For channel statistics only. Uses the channel's scale. |
| Log2 | \( \log_2{x} \) | |
| Log2 Ratio | \( \log_2{\left(\frac{x}{c}\right)} \) | Commonly used for signaling experiments because it makes the control value zero, increased values positive and decreased values negative. |
| Log10 | \( \log_{10}{x} \) | Commonly used when visualizing a large range of data, in which case a linear scale would make changes at the low end of the scale difficult to see. |
| Log10 Ratio | \( \log_{10}{\left(\frac{x}{c}\right)} \) | |
where \( x \) is the experimental value and \( c \) is the control value.
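As a hedged sketch of how a few of these formulas behave numerically (the `normalize` function below is illustrative, not a CellEngine API):

```python
import math

def normalize(x, c, method):
    """Apply a scaling/normalization formula to experimental value x
    given control value c. Formulas follow the table above."""
    if method == "raw_fold":
        return x / c
    if method == "raw_difference":
        return x - c
    if method == "log2_ratio":
        return math.log2(x / c)
    if method == "log10_ratio":
        return math.log10(x / c)
    raise ValueError(f"unknown method: {method}")

# A stimulated readout of 8.0 against an unstimulated control of 2.0:
print(normalize(8.0, 2.0, "raw_fold"))    # 4.0
print(normalize(8.0, 2.0, "log2_ratio"))  # 2.0 (the control itself maps to 0)
```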
Replicate Data, Variability, Error and Error Bars¶
When replicate values are present, the mean of the values will be displayed along with the standard deviation (SD) or standard error of the mean (SEM). Bar charts, line charts and heatmaps show the variability or error in the hover text. Variability or error can also be displayed as error bars in bar and line charts.
The standard deviation (SD) is an estimate of the variability of the entire population based on the representative set of samples in your data set. This value does not necessarily get smaller with larger sample sizes. This value should be used when you wish to describe the variability of a population.
The standard error of the mean (SE or SEM) is an estimate of how precisely you have determined the mean with your experiment. This value gets smaller with larger sample sizes, as it is defined as the standard deviation divided by the square root of the number of samples. This value should be used when you wish to compare between different groups of samples.
Regardless of your choice, you should always report which metric you are showing.
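A minimal numeric illustration of the difference (standard textbook formulas, not CellEngine code): the sample SD describes spread and stays roughly constant, while the SEM shrinks as replicates are added.

```python
import math
import statistics

def sd_and_sem(values):
    """Sample standard deviation and standard error of the mean."""
    sd = statistics.stdev(values)      # sample SD (n - 1 denominator)
    sem = sd / math.sqrt(len(values))  # SEM shrinks with sample size
    return sd, sem

small = [10.0, 12.0, 14.0]
large = small * 4  # similar spread, four times as many samples
sd_s, sem_s = sd_and_sem(small)
sd_l, sem_l = sd_and_sem(large)
print(round(sem_s, 3), round(sem_l, 3))  # SEM is smaller for the larger set
```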
How the SD or SEM is calculated further depends on the selected scaling and normalization, as described in the table below. These formulas propagate measurement uncertainty through the scaling and normalization equations.
| Scaling \ Normalization | Raw | Ratio | Difference |
| --- | --- | --- | --- |
| raw | absolute error (\( \sigma \)) | \( \lvert \frac{x}{c} \rvert \sqrt{\left(\frac{\sigma_x}{x}\right)^2 + \left(\frac{\sigma_c}{c}\right)^2} \) | \( \sqrt{\sigma_x^2 + \sigma_c^2} \) |
| log2 | \( \frac{\sigma}{x \ln 2} \) | \( \sqrt{\left( \frac{\sigma_x}{x \ln 2} \right)^2 + \left( \frac{\sigma_c}{c \ln 2} \right)^2} \) | not applicable |
| log10 | \( \frac{\sigma}{x \ln 10} \) | \( \sqrt{\left( \frac{\sigma_x}{x \ln 10} \right)^2 + \left( \frac{\sigma_c}{c \ln 10} \right)^2} \) | not applicable |
| scaled | not supported | not supported | not supported |
where \( \sigma \) is the SD or SEM, \( x \) is the experimental value (mean, median, count, etc.), \( c \) is the control value, \( \sigma_x \) is the SD or SEM of the experimental value and \( \sigma_c \) is the SD or SEM of the control value.
All of these formulas are estimates and make assumptions, including that the experimental and control conditions are uncorrelated (i.e. that there is no systematic bias) and that the error is relatively small.
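As an illustrative sketch of the log2-ratio row of the table (standard uncertainty propagation, not CellEngine's code):

```python
import math

def log2_ratio_error(x, sigma_x, c, sigma_c):
    """Propagate uncertainty through log2(x / c), per the table above:
    sigma = sqrt((sigma_x / (x ln 2))^2 + (sigma_c / (c ln 2))^2)."""
    ln2 = math.log(2)
    return math.sqrt((sigma_x / (x * ln2)) ** 2 + (sigma_c / (c * ln2)) ** 2)

# Stimulated mean 8.0 +/- 1.0 over control mean 2.0 +/- 0.5:
value = math.log2(8.0 / 2.0)  # 2.0
err = log2_ratio_error(8.0, 1.0, 2.0, 0.5)
print(value, round(err, 3))
```

Note that, per the assumptions above, this treats the experimental and control uncertainties as uncorrelated.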
Cumming G., Fidler F., Vaux D.L. Error bars in experimental biology at https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2064100/.
Baird D.C. Experimentation: an introduction to measurement theory and experiment design at https://books.google.com/books?id=LicvAAAAIAAJ&pg=PA48. See sections 3.3 through 3.8.
Propagation of Uncertainty at https://en.wikipedia.org/wiki/Propagation_of_uncertainty.
Copyright © 2021 CellCarta
Journal of Computational Neuroscience
April 2013, Volume 34, Issue 2, pp 345–366
Cooperation of intrinsic bursting and calcium oscillations underlying activity patterns of model pre-Bötzinger complex neurons
Choongseok Park
Jonathan E. Rubin
Activity of neurons in the pre-Bötzinger complex (pre-BötC) within the mammalian brainstem drives the inspiratory phase of the respiratory rhythm. Experimental results have suggested that multiple bursting mechanisms based on a calcium-activated nonspecific cationic (CAN) current, a persistent sodium (NaP) current, and calcium dynamics may be incorporated within the pre-BötC. Previous modeling works have incorporated representations of some or all of these mechanisms. In this study, we consider a single-compartment model of a pre-BötC inspiratory neuron that encompasses particular aspects of all of these features. We present a novel mathematical analysis of the interaction of the corresponding rhythmic mechanisms arising in the model, including square-wave bursting and autonomous calcium oscillations, which requires treatment of a system of differential equations incorporating three slow variables.
Keywords: Respiration · Pre-Bötzinger complex · Multiple bursting mechanisms · Bifurcation analysis
Action Editor: David Terman
This work was partially supported by the National Science Foundation award DMS 1021701. The authors thank Natalia Toporikova for many helpful conversations relating to this work.
Appendix: Return maps
We first computed the durations of the active phase and silent phase by treating hp as a parameter. With initial condition \(hp_0\), we integrated the averaged equations of hp (Eqs. (11) and (12)) over the silent phase duration and the active phase duration sequentially. The value of hp at the end of the active phase, \(F(hp_0)\) in Eq. (17), is the map that we used.
For quick reference, we first rewrite the governing equation of hp,
$$ \frac{d \,hp}{dt} = \frac{hp_{\infty}(V) - hp}{\tau_{h} (V)} = \frac{hp_{\infty}(V)}{\tau_{h} (V)} - \frac{1}{\tau_{h} (V)} hp. $$
We regard hp as a parameter to compute the duration of the silent phase, say SP(hp), and the active phase, say AP(hp), of the bursting solution for each fixed hp between 0.2 and 0.8. With hp fixed, the activity of the solution of the full system is purely driven by the dendritic Ca\(^{2+}\) dynamics and the projection of the solution onto (\(g_{{\rm CAN}_{Tot}}\), hp)-space is a horizontal line segment. Now we average the right-hand side of Eq. (10) over time duration AP(hp). The averaged equation over the active phase is given as
$$ \begin{array}{rll} \frac{d \,hp^{A}_{av}}{dt}&=& \left [ \frac{1}{T^A_F-T^A_I}\int_{T^A_I}^{T^A_F} \frac{hp_{\infty}(V)}{\tau_{h}(V)} dt \right ]\\ && - \left [\frac{1}{T^A_F-T^A_I}\int_{T^A_I}^{T^A_F} \frac{1}{\tau_{h}(V)} dt \right ] hp^A_{av} \end{array} $$
where \(T^A_I (T^A_F)\) denotes the initial (final) time of the active phase AP(hp). Since $\tau_h(V)$ is sensitive to the range of voltage V within the silent phase, we also average the right-hand side of Eq. (10) over the time duration SP(hp) to get
$$ \begin{array}{rll} \frac{d \,hp^{S}_{av}}{dt}&=& \left [ \frac{1}{T^S_F-T^S_I}\int_{T^S_I}^{T^S_F} \frac{hp_{\infty}(V)}{\tau_{h}(V)} dt \right ]\\ &&- \left [\frac{1}{T^S_F-T^S_I}\int_{T^S_I}^{T^S_F} \frac{1}{\tau_{h}(V)} dt \right ] hp^S_{av} \end{array} $$
where \(T^S_I (T^S_F)\) denotes the initial (final) time of the silent phase SP(hp). The averaged hp variable over the active phase, \(hp^A_{av}\), has a tendency to approach the fixed point of Eq. (11), call it ${\rm FP}_A(hp)$:
$${\rm FP}_A(hp) = \int_{T^A_I}^{T^A_F} \frac{hp_{\infty}(V)}{\tau_{h}(V)} dt / \int_{T^A_I}^{T^A_F} \frac{1}{\tau_{h}(V)} dt $$
Similarly, \(hp^S_{av}\) tends to approach ${\rm FP}_S(hp)$, which is given as
$${\rm FP}_S(hp) = \int_{T^S_I}^{T^S_F} \frac{hp_{\infty}(V)}{\tau_{h}(V)} dt / \int_{T^S_I}^{T^S_F} \frac{1}{\tau_{h}(V)} dt $$
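The two averaged fixed points above are just ratios of time averages, so they can be approximated directly from a sampled voltage trace. The sketch below uses hypothetical Butera-style forms for $hp_{\infty}(V)$ and $\tau_h(V)$ with made-up parameter values, purely to illustrate the computation, not to reproduce the paper's numbers.

```python
import numpy as np

# Hypothetical Butera-style steady-state and time-constant functions
# for hp; the parameter values are illustrative assumptions, not the
# values used in the paper.
def hp_inf(V, theta=-48.0, sigma=6.0):
    return 1.0 / (1.0 + np.exp((V - theta) / sigma))

def tau_h(V, tau_max=1.0e4, theta=-48.0, sigma=12.0):
    return tau_max / np.cosh((V - theta) / (2.0 * sigma))

def averaged_fixed_point(V):
    # FP = <hp_inf/tau_h> / <1/tau_h>, averaged over a uniformly
    # sampled voltage trace (the uniform time step cancels in the ratio)
    return np.mean(hp_inf(V) / tau_h(V)) / np.mean(1.0 / tau_h(V))

# Synthetic voltage trace standing in for one phase of the solution
t = np.linspace(0.0, 1000.0, 2001)
V = -50.0 + 10.0 * np.sin(2.0 * np.pi * t / 100.0)
FP = averaged_fixed_point(V)   # a weighted average of hp_inf, in (0, 1)
```

For a constant-voltage trace the average collapses to $hp_{\infty}(V)$ itself, which is a convenient sanity check on the weighting.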
We computed ${\rm FP}_A(hp)$ and ${\rm FP}_S(hp)$ over hp values from 0.2 to 0.8 with stepsize 0.05 and [IP3] values from 0.95 to 1.4 with stepsize 0.01. The results are shown in Fig. 13(a), where the solid curves correspond to the mean values of ${\rm FP}_A$ (lower curve) and ${\rm FP}_S$ (upper curve) averaged over the [IP3] range, and the standard deviation from this mean over the range of [IP3] is indicated with error bars. From the small size of the error bars, we see that the mean values of ${\rm FP}_A$ and ${\rm FP}_S$ show only slight dependence on [IP3] values. This observation is consistent with our earlier claim that the hp values at transitions between phases are relatively independent of [IP3].
Fig. 13
Return maps to estimate the minimal values of hp over bursting solutions. (a) Fixed points of averaged Eqs. (11) and (12), ${\rm FP}_A(hp)$ and ${\rm FP}_S(hp)$, are computed over hp values from 0.2 to 0.8 and [IP3] from 0.95 to 1.4. The upper (lower, resp.) solid curve with error bars denotes the mean value of ${\rm FP}_S(hp)$ (${\rm FP}_A(hp)$, resp.) averaged over the [IP3] range and the standard deviation from this mean. (b) Examples of the return map F(hp) when IP3 = 1 (black, upper) and 1.3 (black, lower), relative to the identity line (gray). (c) Collection of stable fixed points of the return map (Eq. (17)) (gray) with the results from numerical simulation (black)
Now, suppose that the silent phase begins at t = 0 with initial condition $hp_0$. Let ${\rm F}_S(hp_0)$ be the value of hp at the end of the silent phase predicted by integrating the averaged equation of hp for the duration of the silent phase, ${\rm SP}(hp_0)$. Then we have
$$ \begin{array}{rll} {\rm F}_S(hp_0) &=& {\rm FP}_S(hp_0) \\&&+\, (hp_0 - {\rm FP}_S(hp_0))\exp(-k_S{\rm SP}(hp_0)) \end{array} $$
$$ k_S=\frac{1}{T^S_F-T^S_I}\int_{T^S_I}^{T^S_F} \frac{1}{\tau_{h}(V)} dt. $$
Now, we integrate Eq. (11) again for the duration of the active phase, ${\rm AP}(hp_0)$, with initial condition ${\rm F}_S(hp_0)$. Let ${\rm F}(hp_0)$ be the resulting value of hp, which is the desired map, given as
$$ \begin{array}{rll} {\rm F}(hp_0)&=& {\rm FP}_A(hp_0) + ({\rm F}_S(hp_0) \\ &&-\, {\rm FP}_A(hp_0))\exp(-k_A{\rm AP}(hp_0)) \end{array} $$
$$ k_A=\frac{1}{T^A_F-T^A_I}\int_{T^A_I}^{T^A_F} \frac{1}{\tau_{h}(V)} dt $$
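The return map ${\rm F}(hp_0)$ is thus the composition of a silent-phase relaxation toward ${\rm FP}_S$ and an active-phase relaxation toward ${\rm FP}_A$, and its stable fixed point can be found by simple iteration. The sketch below uses toy phase-duration functions and averaged constants (all illustrative assumptions, not model values) only to show the structure of the map.

```python
import numpy as np

# Toy ingredients standing in for the quantities above: phase durations
# SP, AP and the averaged targets/rates are assumptions chosen so the
# map has a stable fixed point, not values from the model.
def SP(hp):  return 2000.0 * (1.0 - hp)      # silent-phase duration
def AP(hp):  return 1500.0 * hp              # active-phase duration
FP_S, FP_A = 0.8, 0.3                        # averaged fixed points
k_S, k_A   = 1.0e-3, 1.0e-3                  # averaged 1/tau_h rates

def F(hp0):
    # relax toward FP_S during the silent phase ...
    hp_s = FP_S + (hp0 - FP_S) * np.exp(-k_S * SP(hp0))
    # ... then toward FP_A during the active phase
    return FP_A + (hp_s - FP_A) * np.exp(-k_A * AP(hp0))

# Iterate to the stable fixed point (cf. Fig. 13(b, c))
hp = 0.5
for _ in range(200):
    hp = F(hp)
```

Because the map is a contraction near the fixed point, the iterates settle quickly; plotting F against the identity line reproduces the qualitative picture in Fig. 13(b).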
Two examples of maps with a line of identity (gray diagonal line) are shown in Fig. 13(b) when [IP3] = 1 (upper) and 1.3 (lower). In each map, there is a stable fixed point. Figure 13(c) shows the collection of these stable fixed points (gray), which matches well with the numerical results (black). Thus, Eq. (17) captures the decrease, as [IP3] increases, in the hp value at which the silent and active phase durations balance, namely the hp value at the end of the active phase.
© Springer Science+Business Media New York 2012
1. Department of Mathematics and Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, USA
Park, C. & Rubin, J.E. J Comput Neurosci (2013) 34: 345. https://doi.org/10.1007/s10827-012-0425-5
Statistics can't be a function of a parameter - but isn't the sample a function of the parameter?
I have a question that relates to this post:
Can a statistic depend on a parameter?
But there, the discussion focuses mostly on the t-statistic given as an example by the asker. My doubt, in a broader sense, is this:
Let $\{X_1, ..., X_n\}$ be a random sample of size $n$ from a population. $T(x_1, ..., x_n)$ is a real-valued function. The random variable $Y = T(X_1, ..., X_n)$ is called a statistic.
The statistic can't be a function of any parameter. But the random sample $\{X_1, ..., X_n\}$ depends on some parameter $\theta$. So, if the statistic is a function of the random sample, and the random sample is a function of a parameter, doesn't that make the (random) statistic a function of the parameter as well?
I understand that when we are calculating a t-statistic, say, we aren't using the real parameter of the population anywhere. But we're using a sample mean. And this sample mean depends on the population mean, doesn't it? So the (random) statistic depends, in some sense, on the population mean.
Then, $T(\textbf{X}) = T(\textbf{X}(\theta))$. But that goes against the fact that the statistic can't be a function of any parameter. That just doesn't make sense to me when I think of the random counterpart of the statistic.
There must be something wrong with my line of thought but I just can't find it. Any thoughts?
mathematical-statistics sampling inference
kjetil b halvorsen
jpugliese
Let $T=T(X)=T(X_1,X_2, \dotsc, X_n)$ be a statistic, and assume we have some statistical model for the random variable $X$ (the data), say that $X$ is distributed according to the distribution $f(x;\theta)$; $f$ is then a model function (often a density or probability mass function) which is known only up to the parameter $\theta$, which is unknown.
Then the statistic $T$ has a distribution which depends upon the unknown parameter $\theta$, but $T$, as a function of the data $X$, does not depend upon $\theta$. That only says that you can calculate the realized value of $T$ from some observed data without knowing the value of the parameter $\theta$. That is good, because you do not know $\theta$, so if you needed $\theta$ to calculate $T$, you would not be able to calculate $T$. That would be bad, because you could not even start your statistical analysis!
But, still the distribution of $T$ depends upon the value of $\theta$. That is good, because it means that observing the realized value of $T$ you can guess something about $\theta$, maybe calculate a confidence interval for $\theta$. If the distribution of $T$ was the same for all possible values of $\theta^\P$, then observing the value of $T$ would not teach us anything about $\theta$!
So, this boils down to: you must distinguish between $T$ as a function of the data and the distribution of the random variable $T(X)$. The first does not depend upon $\theta$; the second does.
$\P$: Such a statistic is called ancillary. It might be useful, just not directly, alone for inference about $\theta$.
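To make the distinction concrete, here is a small simulation sketch (not part of the original answer): the statistic $T$ is defined without any reference to $\theta$, yet its sampling distribution shifts with $\theta$. The normal model and the sample size are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def T(x):
    # The statistic: a plain function of the data. Note that theta
    # does not appear anywhere in this definition.
    return x.mean(axis=-1)

def sampling_distribution(theta, n=50, reps=10_000):
    # theta enters only through the data-generating distribution:
    # X_1, ..., X_n ~ Normal(theta, 1)
    x = rng.normal(loc=theta, scale=1.0, size=(reps, n))
    return T(x)

t0 = sampling_distribution(theta=0.0)   # centred near 0
t2 = sampling_distribution(theta=2.0)   # same T, centred near 2
```

The same function `T` is applied in both cases; only the location of its sampling distribution changes with $\theta$, which is exactly what makes $T$ informative about $\theta$.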
kjetil b halvorsen
While the other answers (so far) are quite to the point and valid, I would like to add another direction to the discussion that relates to both fiducial inference (Fisher's pet theory) and a form of sampling called "perfect sampling" (or "sampling from the past").
Since a random variable is a measurable function from a (probability) space $(\Omega,\mathbb{P})$ to $\mathbb{R}$ (or $\mathbb{R}^n$), $X:\Omega\to\mathbb{R}$, the function may itself depend on a parameter $\theta$ if its distribution depends on $\theta$, in the sense that $X=\Psi(\omega,\theta)$. For instance, if $F_\theta$ denotes the cdf of $X$, we can write $X=F_\theta^{-1}(U)$ where $U$ is a uniform $\mathcal{U}(0,1)$ random variable. In this sense, $X$ (and the sample $(X_1,\ldots,X_n)$ as well) can be written as a [known] function of the [unknown] parameter $\theta$ and a fixed distribution [unobserved] random vector $\xi$, $$(X_1,\ldots,X_n)=\Psi(\xi,\theta)$$ This representation is eminently useful for simulation, either for the production of (pseudo-) samples from $F_\theta$ as in the inverse cdf approach, or for achieving "perfect" simulation. And for conducting inference by inverting the equation in $\theta$ into a distribution as in fiducial inference. Called by Efron Fisher's biggest blunder.
To relate with the previous answers, this [distributional] dependence of $X$ on [the true value of] $\theta$ does not imply that one can build a statistic that depends on $\theta$ because in the above equation both $\theta$ and $\xi$ are unobserved. Which is the whole point for conducting inference.
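A minimal sketch of the representation $X=\Psi(\xi,\theta)$ via the inverse cdf, using an exponential distribution as an assumed example: the same fixed realization of the uniform randomness $\xi$ yields different samples as $\theta$ varies.

```python
import numpy as np

rng = np.random.default_rng(1)

def Psi(u, theta):
    # Inverse cdf of the Exponential(rate=theta) distribution:
    # X = F_theta^{-1}(U) = -log(1 - U) / theta
    return -np.log(1.0 - u) / theta

# One fixed realization of the "unobserved" uniform randomness xi ...
xi = rng.uniform(size=100_000)

# ... pushed through Psi at two different parameter values
x1 = Psi(xi, theta=1.0)   # mean near 1/theta = 1
x2 = Psi(xi, theta=4.0)   # mean near 1/theta = 0.25
```

This is the inverse-cdf simulation device mentioned above: the randomness $\xi$ is held fixed while the known function $\Psi$ carries all of the dependence on $\theta$.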
Xi'an
The confusion here stems from conflating a random variable with its distribution. To be clear about the issue, a random variable is not a function of the model parameters, but its distribution is.
Taking things back to their foundations, you have some probability space that consists of a sample space $\Omega$, a class of subsets on that space, and a class of probability measures $\mathbb{P}_\theta$ indexed by a model parameter $\theta$. Now, the random variable $X: \Omega \rightarrow \mathbb{R}$ is just a mapping defined on the domain $\Omega$. The random variable itself does not depend in any way on the parameter $\theta$, and so it is wrong to write it as a function $X(\theta)$. It is of course true that the probability distribution of $X$ depends on $\theta$, since the latter affects the probability measure over the sample space. However, it does not affect the sample space itself.$\dagger$
Consequently, when you are dealing with a statistic, which is just a function of the observed random variables, this also does not depend on $\theta$, but its distribution usually does. (If not, it is an ancillary statistic.)
$\dagger$ This treatment has taken $\theta$ as an index for the probability measure, but the same result occurs under a Bayesian treatment where $\theta$ is regarded as a random variable on $\Omega$ and the behaviour of $X$ is treated conditionally on the parameter.
Article | Open | Published: 01 November 2017
Direct estimation of de novo mutation rates in a chimpanzee parent-offspring trio by ultra-deep whole genome sequencing
Shoji Tatsumoto, Yasuhiro Go (ORCID: 0000-0003-4581-0325), Kentaro Fukuta, Hideki Noguchi, Takashi Hayakawa, Masaki Tomonaga, Hirohisa Hirai, Tetsuro Matsuzawa, Kiyokazu Agata & Asao Fujiyama
Scientific Reports volume 7, Article number: 13561 (2017)
Genome evolution
Rare variants
Mutations generate genetic variation and are a major driving force of evolution. Examining mutation rates and modes is therefore essential for understanding the genetic basis of the physiology and evolution of organisms. Here, we aim to identify germline de novo mutations through a whole-genome survey of Mendelian inheritance error sites (MIEs), sites not inherited in a Mendelian manner from either parent, using ultra-deep whole genome sequences (>150-fold) from a chimpanzee parent-offspring trio. We identified 889 such MIEs and classified them into four categories based on the pattern of inheritance and the sequence read depth: [i] de novo single nucleotide variants (SNVs), [ii] copy number neutral inherited variants, [iii] hemizygous deletion inherited variants, and [iv] de novo copy number variants (CNVs). From the de novo SNV candidates, we estimated a germline de novo SNV mutation rate of 1.48 × 10−8 per site per generation, or 0.62 × 10−9 per site per year. In summary, this study demonstrates the significance of ultra-deep whole genome sequencing not only for the direct estimation of mutation rates but also for discerning various mutation modes, including de novo allelic conversion and de novo CNVs, by identifying MIEs through the transmission of genomes from parents to offspring.
Estimation of mutation rates and identification of mutation modes are important for better understanding the molecular mechanisms underlying an organism's physiology and the species' evolutionary history. Advances in high-throughput next-generation sequencing (NGS) technologies and their application to whole genome sequencing (WGS) of large numbers of human genomes have revealed the mutation spectrum, genetic diversity, and population history of human beings1,2,3. As for the mutation spectrum, recent studies utilizing WGS data from multiple human parent-offspring trios or quartets (pedigree-based approach) estimated germline de novo mutation rates for single nucleotide variants [de novo single nucleotide variants (SNVs)] at around 0.97–1.20 × 10−8 per site per generation, or approximately 0.38–0.48 × 10−9 per site per year assuming a 25-year generation time4,5,6,7,8.
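The conversion between the two ways of quoting these rates is simple arithmetic, sketched below; the generation times are those assumed in the cited human studies and implied by the estimates quoted in this paper's abstract.

```python
# Per-year rates follow from per-generation rates divided by the
# assumed generation time (25 years in the cited human studies).
human_per_gen = (0.97e-8, 1.20e-8)
human_per_year = tuple(r / 25.0 for r in human_per_gen)  # ~0.39-0.48e-9

# Conversely, the chimpanzee estimates in this paper's abstract
# (1.48e-8 per generation, 0.62e-9 per year) imply an assumed
# generation time of roughly 24 years.
chimp_gen_time = 1.48e-8 / 0.62e-9
```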
One traditional method to estimate mutation rate is a phylogenetic approach that uses the sequence divergence between two species and their ancestral effective population size. Many studies have reported a typical value of 1 × 10−9 per site per year as the so-called "phylogenetic mutation rate9,10,11", based on the sequence divergence of 1.23–1.37% between humans and chimpanzees11,12,13 and an assumed sequence divergence time of approximately 6–7 million years ago (Ma); however, this method is sensitive to uncertain factors such as the extent of ancestral polymorphism, effective population size, generation time, and rate heterogeneity within and between the genomes of species14,15.
To overcome these difficulties and to estimate the mutation rates more directly, we performed WGS on the genomes of a chimpanzee parent-offspring trio and then identified de novo SNVs and other structural alterations. The chimpanzee parent-offspring trio used in this study has been participating in a wide variety of comparative cognitive research since 197816,17. Because the frequency of de novo SNVs and structural alterations found within a single generation should be very low4,5,6,7,8, we took a straightforward strategy to identify such events through ultra-deep WGS to compensate for statistical variation and sequencing errors. In total, we acquired 150-fold coverage of the sequences of all individuals. To the best of our knowledge, this is the first study to conduct such an ultra-deep WGS of a given mammalian parent-offspring trio. In addition to the identification of the de novo SNV sites, we were able to detect and identify de novo copy number variation sites (CNVs) among the trio by comparing the depth of sequence read coverage in a given region. Moreover, although little is known about the biological significance of de novo allelic conversion (known as interallelic gene conversion), we succeeded in quantifying the rate of genome-wide de novo allelic conversion events.
Comprehensive and highly accurate identification of structural variants through ultra-deep whole genome sequencing
To understand the mechanism of the structural changes of genomes and to estimate their rates of occurrence from parents to offspring, it is essential to detect with the highest possible accuracy the structural changes in the genome of each parent-offspring member. In the present study, we sequenced the genomes of a mother-father-offspring (male) chimpanzee trio reared at the Primate Research Institute, Kyoto University (Methods). We acquired the raw DNA sequences of 575 gigabases (Gb) with 194.6-fold genome coverage against the total number of non-N bases of the chimpanzee reference sequence (CHIMP2.1.4 or panTro4), 463 Gb with 157.8-fold coverage, and 468 Gb with 158.3-fold coverage of the father, mother, and son, respectively (Fig. 1A, Supplementary Table S1). The distributions of the read depth of the chimpanzee trio are shown in Fig. 1B. The raw data were processed to extract high-quality reads and mapped to the chimpanzee reference genome to identify the positions of structural variant candidates as an initial dataset (Fig. 1A; Methods).
Whole-genome sequencing (WGS) and workflow of variant discovery. (A) Pipeline for mapping and variant detection. The offspring's data are shown in the box. (B) Distribution of the read-depth within the datasets from the chimpanzee trio. Lower and upper read-depths shown in each histogram indicates ± 3σ from the mean, and the reads present in the outlier regions were excluded from the following analyses.
In total, we detected approximately 3.67 million SNVs and 585 thousand insertion/deletions (indels) over 89.16% of the reference genome for the trio [(vii) in Fig. 1A, Table 1]. These initially obtained candidate sites were further examined to minimize systematic errors and false positives (FPs). For SNVs, for example, we excluded low-complexity or repetitive regions from the alignment using the following filters: (i) read depth at each nucleotide position, (ii) balance between forward and reverse reads at a particular site, (iii) indels, (iv) allelic and strand biases, and (v) positions flanking gaps (see Methods and Supplementary Method for details). In addition, we only considered autosomes. Table 1 shows the total number of SNVs after filtering. The frequency of SNVs for each trio member exhibits almost the same value (0.118%), and the autosomal heterozygosity was 0.076% in every individual (Table 1, Supplementary Table S2), which coincided with the reported values for the Western chimpanzee, 0.08%13 or 0.077–0.084%18, and even that of human, 0.0765%19. The ratio of transitions to transversions (Ti/Tv) is 1.98 for the trio (Table 1), close to the 2.0–2.1 reported for the human genome20.
Table 1 Summary of SNVs.
Identification of de novo SNVs in the genome of the offspring
The main purpose of this study is to identify and analyze the genetic signature of mutations in the framework of WGS of a chimpanzee parent-offspring trio. To achieve this goal, we aimed to identify the sites that were not inherited from either parent in a Mendelian manner, referred to as Mendelian inheritance errors (MIEs)4. Using the total set of SNV calls obtained in the initial analyses (Table 1), we analyzed inheritance in the trio and identified 2,405 sites in the genome of the offspring as MIEs. We excluded those located in repetitive regions, such as LINE/SINEs, simple repeats, and LTRs, to improve accuracy (Supplementary Table S3), and the remaining 889 MIEs were further classified into four categories based on the pattern of inheritance and the coverage depth of the mapped reads in the corresponding region: [i] de novo SNVs, [ii] copy number neutral inherited variants (CNIVs), [iii] hemizygous deletion inherited variants (HDIVs), and [iv] de novo CNVs, as shown in Fig. 2.
Classification of the MIEs. When the variant alleles were identified only in the offspring, they were classified as [i] de novo SNVs. Inherited MIEs are classified into [ii] copy-number neutral inherited variants (CNIVs), [iii] hemizygous-deletion inherited variants (HDIVs), and [iv] de novo CNVs, according to the relative depth of the read-coverage among the trio's sequences. Black circles indicate the sites of SNVs. The vertical columns in the right panel represent schematics of the read-coverage and their relative ratios.
Finally, we identified 45 de novo SNVs among 889 MIEs ([i] in Fig. 2, Supplementary Figure S1 and Table S4). Out of the 45 de novo SNVs, 20, 24, and 1 SNVs were found in intergenic, intronic, and exonic regions, respectively. This is consistent with the rate of de novo SNVs reported for the human exome (0.92 de novo SNVs on average in exonic regions)21.
Characterization of copy number neutral inherited variants and hemizygous deletion inherited variants
Other than the de novo SNVs, we found that 476 and 318 MIEs were classified as CNIVs and HDIVs, respectively, based on the relative read depth among the trio (Fig. 2, Supplementary Figure S1). Since we sequenced each chimpanzee genome with more than 150-fold genome coverage, we were able to detect, distinguish, and quantify CNIVs and HDIVs by comparing the read depth at each candidate site (see Methods for detail). When the read depth showed similar extent among the trio throughout the corresponding genomic regions, as shown in [ii] of Fig. 2, we assumed that an allelic conversion occurred through the transmission of genomes from either parent to the offspring. In contrast, when the read depth varied considerably among the trio (either the father/offspring or the mother/offspring had half the read depth at a given site/region, as shown in [iii] of Fig. 2), we assumed the deletion of one allele (known as a hemizygous state) in the father/offspring or in the mother/offspring at the given site/region, as described in ref.2. Therefore, we defined these MIEs as inherited variants and not de novo SNVs.
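A minimal sketch of the read-depth classification scheme of Fig. 2: the depth thresholds and tolerance below are illustrative assumptions (a real pipeline would also use the genotype calls), but they show how relative depth separates copy-number neutral sites from hemizygous deletions.

```python
# Bin an MIE site by relative read depth in the trio, following the
# scheme of Fig. 2: similar depth in all three members suggests a
# copy-number neutral variant; roughly half depth in the offspring and
# one parent suggests a hemizygous deletion inherited from that parent.
def classify_mie(d_father, d_mother, d_offspring, expected_depth=150):
    half = 0.5 * expected_depth

    def near(d, target, tol=0.2):
        # tolerance is a made-up 20% of the expected depth
        return abs(d - target) <= tol * expected_depth

    if all(near(d, expected_depth) for d in (d_father, d_mother, d_offspring)):
        return "CNIV"   # copy-number neutral inherited variant
    if near(d_offspring, half) and (near(d_father, half) or near(d_mother, half)):
        return "HDIV"   # hemizygous-deletion inherited variant
    return "other"      # de novo SNVs/CNVs need genotype evidence too
```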
When multiple CNIVs are closely located on the genome, they could have been generated by a single allelic conversion event. If we suppose that the 476 CNIVs occurred at random over the target genomic regions (1.17 × 109 bp in this study), the expected mean distance between two adjacent CNIVs is 2.46 × 106 bp, and the 99% confidence interval of the distance ranges from 1.75 × 104 bp to 1.25 × 107 bp based on 10,000 bootstrap resampling simulations. We therefore assumed a single allelic conversion event when two adjacent CNIVs were located significantly close to each other, setting the criterion to the lower bound of the 99% confidence interval (1.75 × 104 bp). Indeed, more than half of the CNIVs have an adjacent CNIV within 1.75 × 104 bp (238/476), and 128 and 192 CNIVs have an adjacent CNIV within 100 bp and 1,000 bp, respectively (Supplementary Figures S1B, S2 and S3), strongly suggesting that most of the CNIVs are closely clustered and are likely the products of single allelic conversion events. As a result, we identified 306 such events from the 476 CNIVs (Supplementary Table S4) and estimated the genome-wide de novo allelic conversion rate to be on the order of 10−7 per site per generation. However, the true allelic conversion rate could be higher than this estimate because conversion events cannot be identified when the two alleles share long identical DNA sequences, leaving no marker to distinguish them. A more precise estimation of the allelic conversion rate will require much more variation data and informative markers from multiple family trios, as in recently reported studies22,23.
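The distance-based clustering criterion can be sketched in code. This is a simplified illustration: the function names and example positions below are hypothetical, and the real analysis used the genomic coordinates in Supplementary Table S4.

```python
def cluster_by_distance(positions, threshold):
    """Group sorted genomic positions into clusters: adjacent sites closer
    than `threshold` bp are assumed to derive from one allelic-conversion
    event."""
    clusters = []
    for pos in sorted(positions):
        if clusters and pos - clusters[-1][-1] < threshold:
            clusters[-1].append(pos)
        else:
            clusters.append([pos])
    return clusters

def expected_mean_distance(n_variants, target_size):
    """Expected mean distance between adjacent variants if n_variants fall
    uniformly at random on a target of target_size bp."""
    return target_size / n_variants

# Values from the text: 476 CNIVs on 1.17e9 bp of target regions
print(expected_mean_distance(476, 1.17e9))  # ~2.46e6 bp
# Two hypothetical CNIV pairs, clustered with the 1.75e4 bp criterion
print(cluster_by_distance([100, 500, 90_000, 90_300], 17_500))
# [[100, 500], [90000, 90300]] -> two putative conversion events
```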
Similarly, we found 318 HDIVs located within 84 regions (Supplementary Figure S1). A typical example of an HDIV cluster can be seen on chromosome 6, extending 71 kb from position 55,271,096 bp to 55,342,281 bp, in which both the mother and the offspring carry only one copy of the allele. Across this region, the genotypes of the offspring are identical to those of the father because only the paternal allele was transmitted to the offspring (Supplementary Table S4).
Characterization of de novo CNVs
The final category in Fig. 2 [iv] is de novo CNVs, where only the offspring has half the read depth. We detected nine such sites in this study (Supplementary Figure S1). In all cases, one allele was lost from the offspring, and most were caused by microdeletions shorter than 6 kb. The remaining one was relatively large, covering approximately 11 kb on chromosome 22, and was located adjacent to a 35 kb hemizygous deletion region (Fig. 3), where the depth of coverage for both the mother (red line) and the offspring (green line) was approximately half of the mean. Although their frequency is relatively low, de novo CNVs may have a larger influence than de novo SNVs because they affect a larger extent of sequence.
Representative region of hemizygous deletion and a de novo CNV on chromosome 22. Blue, red, and green lines represent the average depth of the read coverage for the corresponding regions in the father, mother, and offspring, respectively.
In the present study, we searched for de novo CNVs within ±3σ of the mean depth (from 34× to 201× coverage) in the genome of the offspring (Fig. 1B). Because we filtered out highly repetitive regions from our analyses, we were unable to exclude the possibility of high-copy-number de novo CNVs; however, we believe this is unlikely because all the de novo CNVs detected showed decreased copy number within the range used in this study.
Identification of germline de novo SNVs
The de novo SNVs we initially identified in the offspring (Fig. 2) may have resulted from mutations that occurred either in germline cells in the parents or somatic cells in the offspring or both. To distinguish germline de novo SNVs from the somatic ones, we analyzed another DNA sample obtained from hair follicles of the offspring [mesoderm (blood) vs. ectoderm (hair follicle) comparison]. Because somatic mutations, if any, should occur independently in the genomes of stem cells during the development and aging processes of the offspring, they should thus produce different SNV profiles, whereas germline mutations or mutations that occurred in the early developmental stages should be retained commonly among the DNA from the tissues of different cell lineages.
Primers for polymerase chain reaction (PCR) were designed for all 45 de novo SNVs, and we obtained 40 PCR products across the parent-offspring trio. Subsequent genotyping of the offspring using Sanger sequencing showed a difference between the genotypes of the blood and hair follicle DNAs in only one case (Supplementary Table S5). Thus, almost all the de novo SNVs (31/32) detected in the present study are germline mutations (Fig. 4A), except for one somatic de novo mutation (Fig. 4B).
Representative Sanger sequencing electropherograms at the positions of de novo SNVs. (A) An example of a germline de novo SNV identified on chromosome 12, where the parents' genotypes are homozygous and those of the blood and hair follicle DNAs of the offspring are heterozygous (red arrow). (B) A somatic de novo SNV identified on chromosome 3, where only the blood-derived DNA of the offspring shows a heterozygous genotype (red arrow).
Estimation of false positive and false negative rates during the process of de novo SNV identification
It is also important to estimate the extent of false positive (FP) and false negative (FN) calls and to discriminate germline de novo SNVs from somatic ones among our identified de novo SNVs. To estimate FP calls, we compared the Sanger sequencing data collected in the previous section with the corresponding NGS data to detect inconsistencies in the genotypes. As a result, we found eight FP calls in the 40 genotypes, yielding an FP rate of 0.2 (8/40) (Table 2, Supplementary Table S5).
Table 2 False positive SNVs identified from the DNAs of blood and hair follicle cells using NGS and Sanger sequencing.
For the estimation of FN calls in our analysis, we ran a likelihood-based program, DeNovoGear24, on the same data set for comparison. DeNovoGear is a program designed to detect de novo mutations using NGS data, as we have done in this study. With the posterior probability threshold for the detection of de novo SNVs set to 0.99, DeNovoGear reported 61 sites as de novo SNVs, 26 of which were not identified by our analysis (Supplementary Figure S4). To examine the sensitivity and specificity of the two methods, we performed resequencing with PCR and Sanger sequencing of the parent-offspring DNA samples and confirmed the genotypes of the candidate de novo SNV sites that were inconsistent between the two methods. Of the 26 sites called as de novo SNVs by DeNovoGear but not by our analysis, seven were successfully genotyped by PCR and Sanger resequencing. None of them were de novo SNVs, and we therefore regarded all of them as true negatives (Table 3, Supplementary Figure S4, Supplementary Table S6). We could not properly genotype the remaining candidate sites (26 − 7 = 19) because of multiple PCR products; we speculate that these sites originate from duplicated regions that are omitted from the present chimpanzee reference sequence. From these results, we estimated the FN rate of our procedure to be close to zero. In conclusion, we estimated the numbers of FP and FN calls to be 9 (45 × 0.2) and 0 (45 × 0), respectively, and identified one somatic de novo SNV (Fig. 4B) among the 45 candidate de novo SNV sites.
Table 3 De novo SNVs identified only by DeNovoGear and genotypes determined by NGS and Sanger sequencing.
Estimation of the paternity and maternity of the de novo SNVs
According to previous studies on humans, approximately 73–80% of de novo SNVs originate from the father25. In this study, we acquired abundant paired-end sequences that enabled us to determine the parental origin of de novo SNVs using information on nearby heterozygous SNV sites covered by the same read pair. We assigned 11 and four of the 45 candidate de novo SNVs, and seven and two of the 31 validated de novo SNVs, to the father and mother, respectively, showing that 73–78% of the de novo SNVs were of paternal origin. However, we should also consider the effect of the father's age at conception, because the number of germ-cell divisions in a human male is approximately 35, 380, and 840 at ages 15, 30, and 50, respectively26.
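The read-backed assignment can be sketched roughly as follows. This is a toy model: read pairs are represented as position-to-base dictionaries, which abstracts away the actual BAM processing, and all names and positions are hypothetical.

```python
def parent_of_origin(read_pairs, dnm_site, dnm_allele, het_site,
                     paternal_allele, maternal_allele):
    """Vote over read pairs that span both the de novo site and a nearby
    informative heterozygous site: if a read carrying the de novo allele
    also carries the paternal allele at the het site, the mutation sits on
    the paternal haplotype (and likewise for the maternal allele)."""
    votes = {"father": 0, "mother": 0}
    for bases in read_pairs:  # each pair: {position: observed base}
        if bases.get(dnm_site) == dnm_allele and het_site in bases:
            if bases[het_site] == paternal_allele:
                votes["father"] += 1
            elif bases[het_site] == maternal_allele:
                votes["mother"] += 1
    if votes["father"] == votes["mother"]:
        return None  # uninformative or conflicting evidence
    return max(votes, key=votes.get)

# Hypothetical read pairs: the de novo 'T' at position 100 co-occurs with
# the paternal 'A' at the heterozygous site 150
pairs = [{100: "T", 150: "A"}, {100: "T", 150: "A"}, {100: "C", 150: "G"}]
print(parent_of_origin(pairs, 100, "T", 150, "A", "G"))  # father
```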
Estimation of the rate of de novo SNVs in germline cells
The rate of germline de novo SNVs per haploid genome can be calculated as follows:
$$[{\rm{number}}\,{\rm{of}}\,{\rm{germline}}\,{\rm{de}}\,{\rm{novo}}\,{\rm{SNVs}}]/[{\rm{target}}\,{\rm{genomic}}\,{\rm{size}}\times 2]$$
From the numbers of FP and FN calls (nine and zero, respectively) and the experimental confirmation that almost all the de novo SNVs detected in this study are germline mutations except for one somatic de novo SNV, we estimated the number of germline de novo SNVs in this chimpanzee trio to be 35 [45 − 9 (FP calls) − 1 (somatic de novo SNV)], with a range from 31, if all unvalidated de novo SNVs are assumed to be false positives, to 36, if all of them are true positives. Since the target genomic regions used in this study cover 1.182 × 109 bp as described above, the rate per haploid genome is calculated as follows:
$$35/[1.182\times {10}^{9}\times 2]=1.48\times {10}^{-8}/{\rm{site}}/{\rm{generation}}$$
The range is from 1.31 × 10−8 to 1.52 × 10−8 when the minimum and maximum numbers of germline de novo SNVs are assumed to be 31 and 36, respectively.
According to the records, both the father and the mother were estimated to be 24 years old when the offspring was born. Therefore, we estimate that germline de novo SNVs occur at a frequency of 0.62 × 10−9 per site per year, which is slightly higher than the pedigree-based rates for humans and chimpanzees27.
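The arithmetic above can be reproduced in a few lines, with all values taken directly from the text:

```python
# Germline de novo SNV rate, reproducing the arithmetic in the text
candidates = 45                  # candidate de novo SNVs
fp_rate = 8 / 40                 # 8 of 40 Sanger-validated genotypes were FPs
fp_calls = round(candidates * fp_rate)       # 9 expected false positives
somatic = 1                      # one somatic (non-germline) de novo SNV
germline = candidates - fp_calls - somatic   # 35 germline de novo SNVs
target_size = 1.182e9            # analyzed target genomic regions (bp)
rate_per_gen = germline / (target_size * 2)  # per site per generation (haploid)
parental_age = 24                # both parents were 24 years at conception
rate_per_year = rate_per_gen / parental_age
print(f"{rate_per_gen:.2e}")     # 1.48e-08
print(f"{rate_per_year:.2e}")    # 6.17e-10, i.e. ~0.62e-9
```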
Low-coverage WGS studies (10–20-fold) have difficulty calling heterozygous SNVs properly due to a large variance of allelic mapping bias20. We also demonstrate that even relatively high coverage (around 90-fold) is not sufficient for proper genotyping. Specifically, we made three down-sampled data sets (one-fourth, half, and three-fourths of the reads) and compared them with the full data set in terms of sensitivity and specificity. Since we obtained around 120× coverage from the parent-offspring trio after quality filtering (father 142.0×, mother 116.9×, offspring 120.3×) for variant detection (Supplementary Table S1), for simplicity we call the data sets '30×', '60×', '90×', and '120×', respectively. Regarding the effect of coverage depth on the sensitivity and specificity of de novo SNV detection, the low (30×) and middle (60×) coverage data sets yielded many non-shared de novo SNV candidates (Fig. 5A), and most of this inconsistency was due to miscalling of heterozygous SNVs caused by the relatively shallow read depth and the resulting loss of statistical power (Supplementary Table S7). Even in the 90× data set, nine de novo SNV candidates were not shared with the 120× data set, and all of them were revealed to be false positives by Sanger sequencing validation (Fig. 5B), showing again that even relatively high coverage (90×) is not enough for accurate de novo SNV identification. Deep sequencing coverage for all family members is therefore important for reliably calling variants at heterozygous sites and for identifying de novo SNVs with minimal FPs and FNs.
Number of candidate de novo SNV sites among four different sequencing coverage data sets (30×, 60×, 90×, 120×). (A) Venn diagram of shared de novo SNVs among the four data sets. The low- and middle-coverage data sets (30× and 60×) have many non-shared de novo SNVs. (B) Comparison of shared and specific de novo SNVs between the 90× and 120× data sets. The result shows that 90× coverage is not enough for accurate de novo SNV calls.
Another advantage of deep sequencing is the effective detection of CNVs based on the comparison of read depth among the genomes of the offspring and the parents. When the offspring inherits a hemizygous allele from its parents, its genotype is inevitably homozygous because of the loss of one allele. Conversely, with adequate consideration of CNVs, we can effectively identify such hemizygous deletion events when the depth of read coverage of the father or mother, and of the offspring, is half the average depth (i.e., loss of an allele) [see details in Fig. 2, hemizygous deletion inherited variants (HDIVs)]. Moreover, when both parents have two alleles and only the offspring has lost one allele, the MIE can be assigned as a de novo CNV in the offspring. We found nine such de novo CNVs, and most of them (8/9) are less than 6 kb in size. Microarray analysis, known as array CGH (comparative genomic hybridization), can also detect de novo CNVs; however, because of the limited probe density, it can mostly identify only de novo CNVs stretching over several kb. Because NGS data include read depth information for each site, CNV detection using NGS is more sensitive than that using microarrays.
A recent study characterizing de novo structural changes in the human genome reported a de novo CNV rate of 0.16 per generation28, which differs markedly from our result (nine de novo CNVs in this study). However, that study used shallow sequence depth (14.5×), so its result probably contains false negatives (unidentified de novo CNVs) due to a lack of statistical power. Moreover, longer-read sequencing technologies have revealed novel and complex structural variations in the human genome29, and the actual rate of de novo CNVs might be higher than the currently reported rate (0.16 per generation).
In this study, we were also able to show the presence of other modes of de novo variants. For example, our analyses revealed 476 sites representing inherited variants with no depth variation among the trio (Fig. 2 [ii]). These sites tend to be highly clustered and are mainly distributed within < 10 kb (Supplementary Figures S1, S2, S3). We annotated these variants as copy number neutral inherited variants (CNIVs) and speculated that they were generated through allelic conversion. When an allelic conversion occurs via homologous recombination between sister chromatids, one of the alleles is converted to the other, resulting in loss of heterozygosity. Hence, SNVs subjected to allelic conversion generate MIEs (Fig. 2 [ii]). However, definitive and reliable detection of such allelic conversions is very difficult because the density of SNVs (0.0012, an average of one SNV per 833 bp of the genome) is considerably lower than that required to reliably detect conversion events, given the short mean conversion tract length of 55–290 bp30. Moreover, CNIVs can also arise through uniparental isodisomy (UPID), in which a single chromosome or part of a chromosome from one parent is inherited and duplicated via malsegregation during meiosis or post-zygotic mitosis. UPID is reportedly involved in certain human disorders, including Prader-Willi and Angelman syndromes, which are caused by malsegregation of imprinted genes on chromosome 15q, although the resulting loss of heterozygosity generally extends from several hundred kb to the entire chromosome. In this study, the longest losses of heterozygous tracts extended only several kb; therefore, we assume that most of the events identified here were caused by allelic conversions.
In any case, we have efficiently identified the structural dynamics of copy number alterations at the whole genome level using ultra-deep sequencing data, which is difficult through conventional cytogenetic and/or microarray analyses.
Of the 45 candidate de novo SNVs, we showed that approximately 35 are germline de novo SNVs and estimated the mutation rate to be 1.48 × 10−8 per site per generation. This rate is approximately 23–54% greater than the human mutation rate of 0.96–1.20 × 10−8 per site per generation. The difference may be explained, in part, by the richness of SNV and structural-variant information available for humans. Most human studies exclude SNVs registered in the dbSNP database and those residing within known segmental duplication regions. We agree with excluding SNVs within known segmental duplication regions because of the higher probability of mapping errors for NGS-derived short reads; indeed, we removed known low-complexity regions, such as LINEs/SINEs, from the analysis. However, we believe that excluding the SNVs registered in dbSNP is not appropriate because dbSNP contains many hypermutable CpG sites. At CpG sites, independent and recurrent mutations are expected owing to the deamination of 5-methylcytosine into thymine. In fact, Besenbacher et al. reported that 3.5% (18/508) of the germline de novo SNVs in their multiple human trio genome analysis were already present in dbSNP and that half of these sites were located at CpG sites31. They concluded that these overlaps were due to recurrent mutations, particularly at the hypermutable CpG sites. Our chimpanzee study also revealed that 29% (9/31) of the germline de novo SNVs are at CpG sites. These observations do not support the exclusion of SNVs registered in dbSNP.
Regardless, if we exclude the SNVs located inside known chimpanzee segmental duplication regions32, two de novo SNVs (chr2A: 102577476, chr22: 22163245) are removed from the list, which results in a de novo SNV rate of 1.45 × 10−8 per site per generation: [43 (45 − 2) − 9 (FP calls at the 0.2 FP rate)]/{1.170 × 109 [(original analyzed region) − (total dbSNP sites) − (segmental duplication regions)] × 2 (per haploid)}. Since there is an order-of-magnitude difference in accumulated information on SNVs and structural variants between humans and chimpanzees, more chimpanzee variation data may narrow the gap between the de novo mutation rates of the two species.
Another cause of the different mutation rates between the two species might be the difference in germline cell cycles. For example, one cycle of spermatogonial stem cell division takes 16 days in humans and 14 days in chimpanzees33,34, suggesting a higher number of mutation events in chimpanzees than in humans over a given time interval (approximately 23 and 26 cell divisions per year in humans and chimpanzees, respectively). This difference in cell cycles may account, in part, for the discrepancy.
An additional possibility for the inconsistency of the mutation rates may come from the uncertainty of the parameters used in phylogenetic approaches, or from inaccuracies in NGS-based analyses. Phylogenetic analyses commonly incorporate the genetic divergence between humans and chimpanzees (d = 0.012), generation time (g = 20), divergence time (t = 6 Ma), and common ancestral population size (Ne = 10,000) to estimate the rate as 1.88 × 10−8 per site per generation or 0.94 × 10−9 per site per year. However, if the actual divergence time of humans and chimpanzees is greater, the average number of years per generation is >20, or the ancestral Ne is greater than the assumed value of 10,000, the phylogenetic mutation rate becomes similar to the pedigree-based mutation rate. Indeed, if we assume that the human-chimpanzee common ancestral Ne is ten times higher than the assumed value (10,000), in line with a theoretical study35, the phylogenetic mutation rate becomes 1.20 × 10−8 per site per generation or 0.60 × 10−9 per site per year36. Conversely, NGS analyses of pedigrees are somewhat immature because of the lack of a robust framework to identify FPs and FNs, the inability to sequence through repetitive sequences, and a bias against GC-rich DNA, suggesting that the pedigree-based mutation rate represents a lower bound37. Interestingly, population-genetic studies that examine gene sequence data have estimated an intermediate mutation rate (1.3–1.8 × 10−8 per site per generation)38,39, suggesting the appropriate value lies within this range.
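The phylogenetic estimate can be reproduced with the standard relation μ(per year) = d / [2(t + 2·Ne·g)], in which divergence accrues on both lineages and ancestral polymorphism adds ~2·Ne generations of coalescence time. This is a sketch under that simple model; the cited studies may use more elaborate ones.

```python
def phylogenetic_rate(d, t, g, ne):
    """Per-year and per-generation mutation rates from sequence divergence d,
    species divergence time t (years), generation time g (years), and
    ancestral effective population size ne. Sequence divergence predates the
    species split by ~2*ne generations on average, and mutations accumulate
    on both lineages (hence the factor of 2)."""
    per_year = d / (2 * (t + 2 * ne * g))
    return per_year, per_year * g

# Parameters quoted in the text
print(phylogenetic_rate(0.012, 6e6, 20, 10_000))   # ~(0.94e-9/yr, 1.88e-8/gen)
print(phylogenetic_rate(0.012, 6e6, 20, 100_000))  # ~(0.60e-9/yr, 1.20e-8/gen)
```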
Using data obtained from six chimpanzee offspring, the germline de novo SNV rate was previously estimated to be approximately 1.2 × 10−8 per site per generation (mean coverage approximately 28×)27, which is consistent with the mutation rate of the human genome and lower than the rate obtained in this study. One possible explanation for the difference is the father's age: the study cited above used offspring with relatively younger fathers (mean 18.9 years; range 14.6–23.9 years) than the father in this study (24 years). The effect of age may partially explain the elevated mutation rate reported here. Nevertheless, more data covering a wider age range (particularly of fathers) are required to define the evolutionary transition of mutation rates in hominoid genomes and the effect of parental age on the offspring.
The chimpanzee parent-offspring trio and animal welfare and care
The chimpanzee parent-offspring trio used in this study, the father Akira [ID: 0435 in the Great Ape Information Network (GAIN), http://www.shigen.nig.ac.jp/gain/], the mother Ai [ID: 0434], and the offspring Ayumu [ID: 0608], are western African chimpanzees (Pan troglodytes verus) reared in the Primate Research Institute, Kyoto University, Japan. The parents were wild-born, and the offspring was born by artificial insemination. They live in a social group with nine other chimpanzees in a semi-natural, enriched outdoor compound (770 m2) and two interconnected cages. Blood DNA samples were used for constructing genomic libraries. To minimize suffering, blood was not collected for the purpose of the present study but as part of routine health examinations. The blood DNA was extracted using DNeasy Blood & Tissue kits (QIAGEN GmbH, Hilden, Germany). For validation of the de novo mutation analysis, DNA representing a cell lineage other than blood cells (mesoderm) was obtained from hair follicle cells (ectoderm) of the offspring. QIAamp DNA Investigator kits (QIAGEN GmbH) were used to extract hair follicle DNA from approximately 0.5 mm of the whole hair root.
All experiments were performed according to the Guidelines for Care and Use of Nonhuman Primates Versions 2 and 3 of the Primate Research Institute, Kyoto University (2002, 2010). The Animal Welfare and Animal Care Committee (Monkey Committee) of the Primate Research Institute approved the experiments (2010-002, 2011-063, 2012-014, 2012-124, 2013-118, 2013-175, 2014-097).
Genome library construction and sequencing
Genomic libraries were prepared using Illumina TruSeq DNA Sample Prep kits (Illumina, Inc., CA, US) without an amplification step. Two types of paired-end libraries were generated using different insert sizes (300 bp and 500 bp) and were sequenced with 2 × 101 cycles for each member of the trio. All libraries were sequenced on an Illumina HiSeq 2000 following the manufacturer's protocols.
Mapping reads to the chimpanzee reference sequence
Adaptor sequences and low-quality bases were removed using an in-house script before mapping (Fig. 1A; step [ii]). Low-quality sequences were defined by the averaged quality value (QV) <20 for a given base ±1 adjacent nucleotide and were marked. If a marked position was located at either the 5′- or 3′-end or both, these bases were trimmed. Finally, only high-quality paired-end (PE) reads with ≥20 nucleotides were selected. Overall, we obtained 509 Gb, 417 Gb, and 435 Gb of sequences of the father, mother, and offspring, respectively. The Burrows-Wheeler aligner (BWA; version 0.6.1)40 was used to align the reads, using default parameters, to the chimpanzee reference genome sequence CHIMP2.1.4 assembly from Ensembl (http://www.ensembl.org/) (Fig. 1A; step [iii]).
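The trimming rule can be sketched as follows. This is a minimal reimplementation of the described logic (mark bases whose QV, averaged with the ±1 neighbours, falls below 20; trim marked ends; keep reads of ≥20 nt), not the in-house script itself.

```python
def trim_read(seq, quals, min_qv=20, min_len=20):
    """Mark bases whose quality value (QV), averaged over the base and its
    +/-1 adjacent nucleotides, is below min_qv; trim marked runs from the
    5' and 3' ends; discard the read (return None) if fewer than min_len
    bases remain."""
    n = len(seq)
    marked = []
    for i in range(n):
        window = quals[max(0, i - 1):min(n, i + 2)]
        marked.append(sum(window) / len(window) < min_qv)
    start, end = 0, n
    while start < n and marked[start]:
        start += 1
    while end > start and marked[end - 1]:
        end -= 1
    trimmed = seq[start:end]
    return trimmed if len(trimmed) >= min_len else None

# Hypothetical 40 nt read with low-quality bases at both ends
quals = [5, 5] + [30] * 36 + [5, 5]
print(trim_read("A" * 40, quals))  # trimmed to 36 nt
```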
Alignments were converted from sequence alignment/map (SAM) format to sorted, indexed binary alignment/map (BAM) files (SAMtools; version 0.1.19)41, and the Picard tool (version 1.93) was used to remove duplicate reads (Fig. 1A; step [iv]). Using the sorted BAM files, we generated genotype calls with SAMtools; the "mpileup" command was used to identify SNVs (http://samtools.sourceforge.net/mpileup.shtml). A variant call format (VCF) file for the trio was used to determine common and unique SNVs among the members. GATK software tools42 (version 2.1-9) were used to improve the initial mapping results and to call and refine genotypes using the recommended parameters20,43 (http://www.broadinstitute.org/gatk/guide/best-practices). BAM files were realigned using the GATK IndelRealigner, and base quality scores were recalibrated using the GATK base quality recalibration tool with known variant data (common variants among the trio generated using samtools mpileup) (Fig. 1A; step [v]). Properly paired mapping results were selected independently for each read by discarding inconsistent pairs (two reads on the same chromosome with incorrect orientations or insert size) or singletons (one of the reads unaligned). We used only unique best alignments; specific tags generated by BWA after alignment, X0 (number of best hits) and X1 (number of suboptimal hits), were used to extract unique alignments (SAM tags X0:i:1 and X1:i:0) (Fig. 1A; step [vi]).
The detailed analysis pipeline and commands for mapping and variant calling are shown in the Supplementary Method.
Calling SNVs and indels
The BAM files produced above were used for calling SNVs and indels with UnifiedGenotyper, implemented in GATK software tools42 (version 2.1-9), applying the following parameters for each individual: -stand_call_conf 50, -stand_emit_conf 10, and -dcov max_depth. Subsequent filtering of SNVs discarded low-quality variants according to the phred-scaled likelihoods (PL) calculated by UnifiedGenotyper: calls with (second most likely PL) − (most likely PL) < 200 for heterozygous SNVs, or < 100 for homozygous SNVs, were discarded to reduce FPs (Fig. 1A; step [vii]).
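The PL-gap criterion can be sketched as a small predicate. This is a simplified reading of the filter: in GATK output PLs are phred-scaled (smaller means more likely) and the most likely genotype is normalized to PL = 0, so the gap to the second-best PL measures call confidence.

```python
def keep_variant(pls, zygosity):
    """Keep a call only if the gap between the best (smallest) and
    second-best phred-scaled genotype likelihood reaches the threshold:
    200 for heterozygous calls, 100 for homozygous calls."""
    best, second = sorted(pls)[:2]
    threshold = 200 if zygosity == "het" else 100
    return second - best >= threshold

print(keep_variant([0, 250, 900], "het"))  # True:  gap of 250 >= 200
print(keep_variant([0, 150, 900], "het"))  # False: gap of 150 < 200
```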
Filtering by read depth, allele balance, and identification of uncertain read mapped regions
To detect authentic variants and to minimize FPs, we defined target regions with high confidence for variant calling by excluding genomic regions according to the following filter criteria:
(i) Read depth
To filter out read-depth outliers, the mean and standard deviation of read depth were calculated for each individual. We calculated the mean and standard deviation of the trio's read depths after restricting depth to a proper range, where the lower bound is the minimum read depth (father 15×, mother 15×, offspring 18×) and the upper bound is 512×, because regions of unusually low or high coverage (e.g., regions covered by >100 K reads) confound accurate calculation of the mean and standard deviation. Using the calculated mean and standard deviation, the read depth range was set to ±3σ for each individual (Fig. 1B). This filter removed 163,077,725 bp (6.28%).
(ii) Mapped-read balance
To account for allelic balance in read mapping, only genomic regions covered by at least 10 forward and 10 reverse reads were used. This filter removed 190,025,818 bp (7.32%).
The next three filters identify uncertainly mapped regions and exclude low-complexity regions as unreliable for variant calling.
(iii) Indels
Indel calling using NGS is highly challenging, with a high probability of FPs. The indels annotated using GATK software tools (UnifiedGenotyper), together with the adjacent 50 bp, were excluded from the target genomic regions. This filter removed 86,308,113 bp (3.33%).
(iv) Allelic and strand bias
Allelic and strand bias effects on variant calling have been described previously20. We retained only the variant sites covered by at least one read each on the reference forward strand (RF), reference reverse strand (RR), alternative forward allele (AF), and alternative reverse allele (AR). For example, we retained SNVs A (20 forward reads, 12 reverse reads) and G (15 forward reads, 18 reverse reads) but discarded SNVs A (18 forward reads, 0 reverse reads) and G (19 forward reads, 22 reverse reads). All biased SNVs and the adjacent 10 bp were excluded from the genomic target regions. This filter removed 7,455,688 bp (0.29%).
(v) Gaps
All variant sites located, on average, within 10 bp of the end of a read were excluded from the genomic target regions, together with the adjacent 10 bp. This filter was intended to exclude uncertain variants adjacent to relatively large contig/scaffold gaps, as well as low-quality variants at read termini, where base quality tends to be lower. This filter removed 12,527,318 bp (0.48%).
We removed 281,326,851 bp (10.84%) using these filters and ultimately defined the target genomic regions that were shared among the trio, covering 89.16% of the chimpanzee reference genome (Tables 1 and 2).
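Two of the filters above, the depth filter (i) and the strand-bias filter (iv), lend themselves to short sketches. These are illustrative only; the real pipeline operated on per-site pileups, and the thresholds follow the text.

```python
import statistics

def depth_pass(depths, min_depth, max_depth=512):
    """Filter (i): compute mean and SD of read depth over sites inside a
    sane range (extreme pileups confound the statistics), then flag sites
    within mean +/- 3*SD as passing."""
    sane = [d for d in depths if min_depth <= d <= max_depth]
    mu, sigma = statistics.mean(sane), statistics.pstdev(sane)
    return [mu - 3 * sigma <= d <= mu + 3 * sigma for d in depths]

def strand_pass(rf, rr, af, ar):
    """Filter (iv): retain a variant only if every strand/allele combination
    has at least one supporting read (reference/alternative x forward/reverse)."""
    return min(rf, rr, af, ar) >= 1

# Examples from the text:
print(strand_pass(20, 12, 15, 18))  # True:  A 20F/12R, G 15F/18R
print(strand_pass(18, 0, 19, 22))   # False: no reverse reads supporting A
```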
The detailed analysis pipeline and commands for filtering low-quality variants are shown in the Supplementary Method.
Identification of candidate Mendelian Inheritance Error (MIE) sites and classification of MIEs into inherited variants and de novo SNVs
All the variant sites annotated using the variant calling method described above were investigated as potential de novo SNVs in the trio. MIEs were identified when the pattern of alleles observed in the offspring was inconsistent with the assortment of the parental alleles. Among the identified MIEs, if an allele was absent in both parents and newly emerged (mutated) in the offspring, the site was classified as a de novo SNV (Fig. 2). If each allele in the offspring was present in either or both parents, we classified the site as an inherited variant. Focusing on the depth variation among the trio, inherited variants can be classified into two classes: (a) copy number neutral inherited variants (CNIVs), with no depth variation among the trio, and (b) hemizygous deletion inherited variants (HDIVs), in which either parent and the offspring show half the average read depth (Fig. 2). Moreover, if depth variation occurs only in the offspring, we classified the site as a de novo CNV.
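The genotype-based part of this classification can be sketched as follows (genotypes as allele tuples; a simplification that ignores the read-depth comparison the text then uses to subdivide inherited variants):

```python
def classify_site(father, mother, child):
    """Classify a trio genotype at one site. Genotypes are (allele, allele)
    tuples. Returns 'consistent' (one allele from each parent),
    'de_novo_snv' (a child allele absent from both parents), or
    'inherited_mie' (all child alleles present in a parent, but the
    assortment is non-Mendelian; read depth then separates CNIVs, HDIVs,
    and de novo CNVs)."""
    a, b = child
    if (a in father and b in mother) or (a in mother and b in father):
        return "consistent"
    if any(x not in father and x not in mother for x in child):
        return "de_novo_snv"
    return "inherited_mie"

print(classify_site(("A", "A"), ("A", "A"), ("A", "G")))  # de_novo_snv
print(classify_site(("A", "G"), ("A", "A"), ("G", "G")))  # inherited_mie
```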
Identification and quantification of read depth variations across the trio
To detect deviations from the mean depth across the trio, we used VarScan (ver. 2.3.5)44, a program for detecting copy number changes from short NGS reads, comparing father-offspring and mother-offspring in a pairwise manner. If there were no copy number changes (i.e., Offspring = Father = Mother), the sites were classified as CNIVs (Fig. 2 [ii]). If copy number changes were detected in one pairwise comparison (i.e., Offspring = Father and Offspring < Mother, or Offspring < Father and Offspring = Mother), they were categorized as HDIVs (Fig. 2 [iii]). In the last category, where copy number changes were found in both pairwise comparisons (i.e., Offspring < Father and Offspring < Mother), we classified the variants as de novo CNVs in the offspring; we found one relatively large such de novo CNV (11,284 bp) on chromosome 22 (Supplementary Table S4), and all sites in this region are homozygous (i.e., loss of heterozygosity, LOH).
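The depth-based classification reduces to a small decision rule over the two pairwise comparisons. The sketch below uses hypothetical 'equal'/'loss' labels standing in for VarScan-style pairwise copy-number calls:

```python
def classify_depth_pattern(child_vs_father, child_vs_mother):
    """Classify an MIE by pairwise copy-number comparison of the offspring
    against each parent ('equal' or 'loss' of one copy in the offspring)."""
    pair = (child_vs_father, child_vs_mother)
    if pair == ("equal", "equal"):
        return "CNIV"         # copy number neutral inherited variant
    if pair == ("loss", "loss"):
        return "de_novo_CNV"  # the deletion arose in the offspring
    if "loss" in pair and "equal" in pair:
        return "HDIV"         # hemizygous-deletion inherited variant
    return "unclassified"

print(classify_depth_pattern("equal", "loss"))  # HDIV
print(classify_depth_pattern("loss", "loss"))   # de_novo_CNV
```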
PCR and Sanger sequencing
The de novo SNV candidates were genotyped by Sanger sequencing to validate the NGS variant calls, to estimate FPs and FNs, and to identify germline de novo SNVs. Blood DNA from the trio was used for this validation. Moreover, DNA from mesoderm-derived hair follicles of the offspring was used to determine whether each de novo SNV occurred in the germline or in somatic cell lineages. The variants were genotyped by PCR amplification of 2.5 ng of DNA with KAPA2G Robust DNA polymerase (Kapa Biosystems Inc., Woburn, MA, USA), followed by Sanger sequencing on an ABI 3730 automatic genetic analyzer. The sequence reads were analyzed using the Sequencer software package and compared with the results generated from the HiSeq data.
All sequence reads were deposited in the DDBJ Sequence Read Archive (SRA) under accession number DRA003107. SNV information used in this study is available at http://map4.nig.ac.jp/cgi-bin/gb2/gbrowse/chimpanzee/.
The 1000 Genomes Project Consortium. A map of human genome variation from population-scale sequencing. Nature 467, 1061–1073 (2010).
Koboldt, D. C. et al. The next-generation sequencing revolution and its impact on genomics. Cell 155, 27–38 (2013).
Goodwin, S., McPherson, J. D. & McCombie, W. R. Coming of age: ten years of next-generation sequencing technologies. Nat Rev Genet 17, 333–351 (2016).
Roach, J. C. et al. Analysis of genetic inheritance in a family quartet by whole-genome sequencing. Science 328, 636–639 (2010).
Conrad, D. F. et al. Variation in genome-wide mutation rates within and between human families. Nat Genet 43, 712–714 (2011).
Kong, A. et al. Rate of de novo mutations and the importance of father's age to disease risk. Nature 488, 471–475 (2012).
Campbell, C. D. et al. Estimating the human mutation rate using autozygosity in a founder population. Nat Genet 44, 1277–1281 (2012).
Michaelson, J. J. et al. Whole-genome sequencing in autism identifies hot spots for de novo germline mutation. Cell 151, 1431–1442 (2012).
Locke, D. P. et al. Comparative and demographic analysis of orang-utan genomes. Nature 469, 529–533 (2011).
Prüfer, K. et al. The bonobo genome compared with the chimpanzee and human genomes. Nature 486, 527–531 (2012).
Scally, A. et al. Insights into hominid evolution from the gorilla genome sequence. Nature 483, 169–175 (2012).
Fujiyama, A. et al. Construction and analysis of a human-chimpanzee comparative clone map. Science 295, 131–134 (2002).
Chimpanzee Sequencing and Analysis Consortium. Initial sequence of the chimpanzee genome and comparison with the human genome. Nature 437, 69–87 (2005).
Langergraber, K. E. et al. Generation times in wild chimpanzees and gorillas suggest earlier divergence times in great ape and human evolution. Proc Natl Acad Sci 109, 15716–15721 (2012).
Scally, A. & Durbin, R. Revising the human mutation rate: implications for understanding human evolution. Nat Rev Genet 13, 745–753 (2012) and erratum in 13, 824.
Matsuzawa, T. The Ai project: historical and ecological contexts. Anim Cogn 6, 199–211 (2003).
Matsuzawa, T., Tomonaga, M. & Tanaka, M. Cognitive Development in Chimpanzees. Tokyo: Springer-Verlag Tokyo (2006).
Prado-Martinez, J. et al. Great ape genetic diversity and population history. Nature 499, 471–475 (2013).
The International SNP Map Working Group. A map of human genome sequence variation containing 1.42 million single nucleotide polymorphisms. Nature 409, 928–933 (2001).
Depristo, M. A. et al. A framework for variation discovery and genotyping using next-generation DNA sequencing data. Nat Genet 43, 491–498 (2011).
Neale, B. M. et al. Patterns and rates of exonic de novo mutations in autism spectrum disorders. Nature 485, 242–245 (2012).
Williams, A. L. et al. Non-crossover gene conversions show strong GC bias and unexpected clustering in humans. Elife 4, e04637 (2015).
Palamara, P. F. et al. Leveraging distant relatedness to quantify human mutation and gene-conversion rates. Am J Hum Genet 97, 775–789 (2015).
Ramu, A. et al. DeNovoGear: de novo indel and point mutation discovery and phasing. Nat Methods 10, 985–987 (2013).
Campbell, C. D. & Eichler, E. E. Properties and rates of germline mutations in humans. Trends Genet 29, 575–584 (2013).
Crow, J. F. The origins, patterns and implications of human spontaneous mutation. Nat Rev Genet 1, 40–47 (2000).
Venn, O. et al. Strong male bias drives germline mutation in chimpanzees. Science 344, 1272–1275 (2014).
Kloosterman, W. P. et al. Characteristics of de novo structural changes in the human genome. Genome Res 25, 792–801 (2015).
Chaisson, M. J. et al. Resolving the complexity of the human genome using single-molecule sequencing. Nature 517, 608–611 (2015).
Jeffreys, A. J. & May, C. A. Intense and highly localized gene conversion activity in human meiotic crossover hot spots. Nat Genet 36, 151–156 (2004).
Besenbacher, S. et al. Novel variation and de novo mutation rates in population-wide de novo assembled Danish trios. Nat Commun 6, 5969 (2015).
Cheng, Z. et al. A genome-wide comparison of recent chimpanzee and human segmental duplications. Nature 437, 88–93 (2005).
Smithwick, E. B., Young, L. G. & Gould, K. G. Duration of spermatogenesis and relative frequency of each stage in the seminiferous epithelial cycle of the chimpanzee. Tissue Cell 28, 357–366 (1996).
Hermann, B. P., Sukhwani, M., Hansel, M. C. & Orwig, K. E. Spermatogonial stem cells in higher primates, are there differences from those in rodents? Reproduction 139, 479–493 (2010).
Takahata, N., Satta, Y. & Klein, J. Divergence time and population size in the lineage leading to modern humans. Theor Popul Biol 48, 198–221 (1995).
Keightley, P. D. Rates and fitness consequences of new mutations in humans. Genetics 190, 295–304 (2012).
Veeramah, K. R. & Hammer, M. F. The impact of whole-genome sequencing on the reconstruction of human population history. Nat Rev Genet 15, 149–162 (2014).
Lynch, M. Rate, molecular spectrum, and consequences of human mutation. Proc Natl Acad Sci 107, 961–968 (2010).
Nelson, M. R. et al. An abundance of rare functional variants in 202 drug target genes sequenced in 14,002 people. Science 337, 100–104 (2012).
Li, H. & Durbin, R. Fast and accurate short read alignment with Burrows-Wheeler transform. Bioinformatics 25, 1754–1760 (2009).
Li, H. et al. The Sequence Alignment/Map format and SAMtools. Bioinformatics 25, 2078–2079 (2009).
McKenna, A. et al. The Genome Analysis Toolkit: a MapReduce framework for analyzing next-generation DNA sequencing data. Genome Res 20, 1297–1303 (2010).
Van der Auwera, G. A. et al. From FastQ Data to High-Confidence Variant Calls: The Genome Analysis Toolkit Best Practices Pipeline. Current Protocols in Bioinformatics 43, 11.10.1–11.10.33 (2013).
Koboldt, D. C. et al. VarScan 2: somatic mutation and copy number alteration discovery in cancer by exome sequencing. Genome Res 22, 568–576 (2012).
We thank Dr. Atsushi Toyoda and Dr. Yukiko Yamazaki of the National Institute of Genetics (NIG), and Dr. Osamu Nishimura of the RIKEN Center for Developmental Biology (CDB) for assistance with experiments, creating the genome browser, and computational advice. We thank the members of the Center for Human Evolution Modeling Research and the Section of Language and Intelligence, Primate Research Institute, Kyoto University, for the daily care of the chimpanzees. This study was a collaborative effort of the Primate Research Institute and Global COE Program, Kyoto University; the National Institute of Genetics; and the Center for Novel Science Initiatives, National Institutes of Natural Sciences funded by JSPS KAKENHI (Grant-in-Aid for Scientific Research) for Innovative Areas "Genome Science" (JP221S0002); JSPS KAKENHI Grant Numbers JP22770240, JP24113511, JP25711027, JP16H06531, JP16H04849 to Y.G., JP12J04270 to T.H., JP22650053, JP23220006, JP15H05709 to M.T., JP22247037 to H.H., JP20002001, JP24000001, JP16H06283 to T.M., and JP25250024 to A.F.; a JSPS grant for the Leading graduate program (U04) to T.M; a JSPS grant for core-to-core program CCSN to T.M.; a grant from the Global COE Program (A06) of Kyoto University to K.A.; grants from the NIG Collaborative Research Program (2010-A53, 2011-A82, 2012-B7, 2012-B8, 2013-A71) and the Cooperation Research Program of Primate Research Institute, Kyoto University; and grants from the Inamori Foundation to Y.G. Computations were partially performed on the NIG supercomputer at ROIS National Institute of Genetics.
Shoji Tatsumoto and Yasuhiro Go contributed equally to this work.
Department of Brain Sciences, Center for Novel Science Initiatives, National Institutes of Natural Sciences, Okazaki, Aichi, 444-8585, Japan
Shoji Tatsumoto
& Yasuhiro Go
Department of System Neuroscience, National Institute for Physiological Sciences, Okazaki, Aichi, 444-8585, Japan
Yasuhiro Go
Department of Physiological Sciences, School of Life Science, SOKENDAI (The Graduate University for Advanced Studies), Okazaki, Aichi, 484-8585, Japan
Center for Genome Informatics, Joint Support-Center for Data Science Research, Research Organization of Information and Systems, Mishima, Shizuoka, 411-8540, Japan
Kentaro Fukuta
, Hideki Noguchi
& Asao Fujiyama
Advanced Genomics Center, National Institute of Genetics, Mishima, Shizuoka, 411-8540, Japan
Department of Wildlife Science (Nagoya Railroad Co., Ltd.), Primate Research Institute, Kyoto University, Inuyama, Aichi, 484-8506, Japan
Takashi Hayakawa
, Masaki Tomonaga
& Tetsuro Matsuzawa
Japan Monkey Centre, Inuyama, Aichi, 484-0081, Japan
Language and Intelligence Section, Department of Cognitive Sciences, Primate Research Institute, Kyoto University, Inuyama, Aichi, 484-8506, Japan
Masaki Tomonaga
Molecular Biology Section, Department of Cellular and Molecular Biology, Primate Research Institute, Kyoto University, Inuyama, Aichi, 484-8506, Japan
Hirohisa Hirai
Institute of Advanced Study, Kyoto University, Kyoto, 606-8501, Japan
Laboratory for Biodiversity, Global COE Program, Graduate School of Science, Kyoto University, Kyoto, 606-8502, Japan
Kiyokazu Agata
Laboratory for Molecular Developmental Biology, Graduate School of Science, Kyoto University, Kyoto, 606-8502, Japan
Graduate Course in Life Science, Gakushuin University, Tokyo, 171-8585, Japan
Department of Genetics, School of Life Science, SOKENDAI (The Graduate University for Advanced Studies), Mishima, Shizuoka, 411-8540, Japan
Asao Fujiyama
Y.G., K.A. and A.F. designed the study. Y.G., T.H., M.T., H.H. and T.M. participated in sample collections. Y.G., T.H. and A.F. performed the experiment. S.T., Y.G., K.F., H.N. and A.F. analyzed the data. S.T., Y.G. and A.F. wrote the manuscript. All of the authors discussed the results and commented on the manuscript.
Corresponding authors
Correspondence to Yasuhiro Go or Asao Fujiyama.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
https://doi.org/10.1038/s41598-017-13919-7
8.6: Fixed point theorem and Picard's theorem again
Jiří Lebl
Associate Professor (Mathematics) at Oklahoma State University
In this section we prove a fixed point theorem for contraction mappings. As an application we prove Picard's theorem. We have proved Picard's theorem without metric spaces in . The proof we present here is similar, but the proof goes a lot smoother by using metric space concepts and the fixed point theorem. For more examples on using Picard's theorem see .
Let \((X,d)\) and \((X',d')\) be metric spaces. \(F \colon X \to X'\) is said to be a contraction (or a contractive map) if it is a \(k\)-Lipschitz map for some \(k < 1\), i.e. if there exists a \(k < 1\) such that \[d'\bigl(F(x),F(y)\bigr) \leq k d(x,y) \ \ \ \ \text{for all } x,y \in X.\]
If \(T \colon X \to X\) is a map, \(x \in X\) is called a fixed point if \(T(x)=x\).
[Contraction mapping principle or Fixed point theorem] [thm:contr] Let \((X,d)\) be a nonempty complete metric space and let \(T \colon X \to X\) be a contraction. Then \(T\) has a unique fixed point.
Note that the words complete and contraction are necessary. See .
Pick any \(x_0 \in X\). Define a sequence \(\{ x_n \}\) by \(x_{n+1} := T(x_n)\). Then \[d(x_{n+1},x_n) = d\bigl(T(x_n),T(x_{n-1})\bigr) \leq k d(x_n,x_{n-1}) \leq \cdots \leq k^n d(x_1,x_0) .\] Now let \(m \geq n\). Then \[\begin{split} d(x_m,x_n) & \leq \sum_{i=n}^{m-1} d(x_{i+1},x_i) \\ & \leq \sum_{i=n}^{m-1} k^i d(x_1,x_0) \\ & = k^n d(x_1,x_0) \sum_{i=0}^{m-n-1} k^i \\ & \leq k^n d(x_1,x_0) \sum_{i=0}^{\infty} k^i = k^n d(x_1,x_0) \frac{1}{1-k} . \end{split}\] As \(k < 1\), we have \(k^n \to 0\), so in particular the sequence is Cauchy. Since \(X\) is complete, we let \(x := \lim_{n\to \infty} x_n\) and claim that \(x\) is our unique fixed point.
Fixed point? Note that \(T\) is continuous because it is a contraction. Hence \[T(x) = \lim T(x_n) = \lim x_{n+1} = x .\]
Unique? Let \(y\) be a fixed point. Then \[d(x,y) = d\bigl(T(x),T(y)\bigr) \leq k d(x,y) .\] As \(k < 1\) this means that \(d(x,y) = 0\) and hence \(x=y\). The theorem is proved.
Note that the proof is constructive. Not only do we know that a unique fixed point exists. We also know how to find it. Let us use the theorem to prove the classical Picard theorem on the existence and uniqueness of ordinary differential equations.
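Before moving on, here is a minimal numerical sketch of that constructive procedure: the iteration \(x_{n+1} = T(x_n)\) run directly for a contraction on \(\mathbb{R}\). The helper name is ours, not part of the text.

```python
def fixed_point(T, x0, tol=1e-12, max_iter=10_000):
    """Banach fixed-point iteration: repeat x <- T(x) until the step is tiny."""
    x = x0
    for _ in range(max_iter):
        x_next = T(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("no convergence; is T a contraction on a complete space?")

# T(x) = x/2 + 1 is a (1/2)-contraction on R; its fixed point solves x = x/2 + 1, so x = 2.
print(fixed_point(lambda x: x / 2 + 1, x0=0.0))
```

The error contracts by the factor \(k\) at every step, which is exactly the geometric bound used in the proof.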
Consider the equation \[\frac{dx}{dt} = F(t,x) .\] Given some \(t_0, x_0\) we are looking for a function \(f(t)\) such that \(f(t_0) = x_0\) and such that \[f'(t) = F\bigl(t,f(t)\bigr) .\] There are some subtle issues. Look at the equation \(x' = x^2\), \(x(0)=1\). Then \(x(t) = \frac{1}{1-t}\) is a solution. While \(F\) is a reasonably "nice" function and in particular exists for all \(x\) and \(t\), the solution "blows up" at \(t=1\).
Let \(I, J \subset {\mathbb{R}}\) be compact intervals and let \(I_0\) and \(J_0\) be their interiors. Suppose \(F \colon I \times J \to {\mathbb{R}}\) is continuous and Lipschitz in the second variable, that is, there exists \(L \in {\mathbb{R}}\) such that \[\left\lvert {F(t,x) - F(t,y)} \right\rvert \leq L \left\lvert {x-y} \right\rvert \ \ \ \text{ for all $x,y \in J$, $t \in I$} .\] Let \((t_0,x_0) \in I_0 \times J_0\). Then there exists \(h > 0\) and a unique differentiable \(f \colon [t_0 - h, t_0 + h] \to {\mathbb{R}}\), such that \(f'(t) = F\bigl(t,f(t)\bigr)\) and \(f(t_0) = x_0\).
Without loss of generality assume \(t_0 =0\). Let \(M := \sup \{ \left\lvert {F(t,x)} \right\rvert : (t,x) \in I\times J \}\). As \(I \times J\) is compact, \(M < \infty\). Pick \(\alpha > 0\) such that \([-\alpha,\alpha] \subset I\) and \([x_0-\alpha, x_0 + \alpha] \subset J\). Let \[h := \min \left\{ \alpha, \frac{\alpha}{M+L\alpha} \right\} .\] Note \([-h,h] \subset I\). Define the set \[Y := \{ f \in C([-h,h]) : f([-h,h]) \subset [x_0-\alpha,x_0+\alpha] \} .\] Here \(C([-h,h])\) is equipped with the standard metric \(d(f,g) := \sup \{ \left\lvert {f(x)-g(x)} \right\rvert : x \in [-h,h] \}\). With this metric we have shown in an exercise that \(C([-h,h])\) is a complete metric space.
Show that \(Y \subset C([-h,h])\) is closed.
Define a mapping \(T \colon Y \to C([-h,h])\) by \[T(f)(t) := x_0 + \int_0^t F\bigl(s,f(s)\bigr)~ds .\]
Show that \(T\) really maps into \(C([-h,h])\).
Let \(f \in Y\) and \(\left\lvert {t} \right\rvert \leq h\). As \(F\) is bounded by \(M\) we have \[\begin{split} \left\lvert {T(f)(t) - x_0} \right\rvert &= \left\lvert {\int_0^t F\bigl(s,f(s)\bigr)~ds} \right\rvert \\ & \leq \left\lvert {t} \right\rvert M \leq hM \leq \alpha . \end{split}\] Therefore, \(T(Y) \subset Y\). We can thus consider \(T\) as a mapping of \(Y\) to \(Y\).
We claim \(T\) is a contraction. First, for \(t \in [-h,h]\) and \(f,g \in Y\) we have \[\left\lvert {F\bigl(t,f(t)\bigr) - F\bigl(t,g(t)\bigr)} \right\rvert \leq L\left\lvert {f(t)- g(t)} \right\rvert \leq L \, d(f,g) .\] Therefore, \[\begin{split} \left\lvert {T(f)(t) - T(g)(t)} \right\rvert &= \left\lvert {\int_0^t F\bigl(s,f(s)\bigr) - F\bigl(s,g(s)\bigr)~ds} \right\rvert \\ & \leq \left\lvert {t} \right\rvert L \, d(f,g) \\ & \leq h L\, d(f,g) \\ & \leq \frac{L\alpha}{M+L\alpha} \, d(f,g) . \end{split}\] We can assume \(M > 0\) (why?). Then \(\frac{L\alpha}{M+L\alpha} < 1\) and the claim is proved.
Now apply the fixed point theorem () to find a unique \(f \in Y\) such that \(T(f) = f\), that is, \[f(t) = x_0 + \int_0^t F\bigl(s,f(s)\bigr)~ds .\] By the fundamental theorem of calculus, \(f\) is differentiable and \(f'(t) = F\bigl(t,f(t)\bigr)\).
We have shown that \(f\) is the unique function in \(Y\). Why is it the unique continuous function \(f \colon [-h,h] \to J\) that solves \(T(f)=f\)? Hint: Look at the last estimate in the proof.
Suppose \(X = X' = {\mathbb{R}}\) with the standard metric. Let \(0 < k < 1\), \(b \in {\mathbb{R}}\). a) Show that the map \(F(x) = kx + b\) is a contraction. b) Find the fixed point and show directly that it is unique.
Suppose \(X = X' = [0,\nicefrac{1}{4}]\) with the standard metric. a) Show that the map \(F(x) = x^2\) is a contraction, and find the best (largest) \(k\) that works. b) Find the fixed point and show directly that it is unique.
[exercise:nofixedpoint] a) Find an example of a contraction of non-complete metric space with no fixed point. b) Find a 1-Lipschitz map of a complete metric space with no fixed point.
Consider \(x' =x^2\), \(x(0)=1\). Start with \(f_0(t) = 1\). Find a few iterates (at least up to \(f_2\)). Prove that the limit of \(f_n\) is \(\frac{1}{1-t}\).
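A numerical sketch of this exercise, running the Picard iterates \(f_{n+1}(t) = 1 + \int_0^t f_n(s)^2\,ds\) with the trapezoidal rule on a uniform grid over \([0, \nicefrac{1}{2}]\). The function name and grid parameters are our assumptions; the iterates should approach \(\frac{1}{1-t}\).

```python
def picard_iterates(n_iters=20, t_max=0.5, n_pts=2001):
    """Run the Picard iteration f <- 1 + cumulative integral of f(s)^2
    for x' = x^2, x(0) = 1, using the trapezoidal rule on [0, t_max]."""
    h = t_max / (n_pts - 1)
    ts = [i * h for i in range(n_pts)]
    f = [1.0] * n_pts                      # f_0(t) = 1
    for _ in range(n_iters):
        g = [1.0] * n_pts
        acc = 0.0
        for i in range(1, n_pts):          # trapezoidal cumulative integral of f^2
            acc += 0.5 * h * (f[i - 1] ** 2 + f[i] ** 2)
            g[i] = 1.0 + acc
        f = g
    return ts, f

ts, f = picard_iterates()
# Compare with the exact solution 1/(1-t) at the right endpoint t = 1/2.
print(abs(f[-1] - 1.0 / (1.0 - ts[-1])))
```

Working the first iterates by hand gives \(f_1(t) = 1+t\) and \(f_2(t) = 1+t+t^2+\frac{t^3}{3}\), the partial expansions of \(\frac{1}{1-t}\).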
Low Resolution Geodesic Distance
by Jayati Sood, Uriel Martínez, Miles Silberling-Cook and Josue Perez
The geodesic distance between two points x and y on a mesh is the length of the shortest path from x to y, along the surface. In the first and final weeks of SGI, under the mentorship of Professor Justin Solomon and Professor Nester Guillen, we worked on finding numerical approximations of geodesic distance functions that are low in accuracy and hence faster to compute.
Given a set \(S\) of points on the surface \(\mathcal{M}\), we can define a function \(f_S\colon\mathcal{M}\to\mathbb{R}^+\) such that
\[f_S(x) = d(x, S) = \min_{y\in S} d(x,y),\]
where \(d\) is the geodesic distance function on \(\mathcal{M}\). The function \(f_S\) satisfies the eikonal equation, i.e.,
\[\|\nabla f_S(x)\|_2=1.\]
Solving the eikonal equation with \(f_{S}(x)=0\) for \(x \in S\) gives us the geodesic distance function from the given set of points on the mesh/manifold to all other points on the mesh. However, the eikonal equation is a non-linear PDE and hard to solve. To speed up the process, we employ Perron's method, solving for \(f_{S}\) by finding the maximum of the subsolutions of the above boundary value problem, which leads to the following convex optimization problem:
\begin{align*} \max_f & \;\; \int_{\mathcal{M}}f(x)dx \\ \text{subject to } & \;\; f(x) \leq 0\;\forall\;x\in S \\ & \;\;|\nabla f(x)| \leq 1\; \forall\;x \in \mathcal{M}\hspace{3.3cm} \end{align*}
Discretizing the problem
Our first task was to reformulate this optimization problem for triangle meshes. By restricting paths to the edges of the mesh we arrive at the following. Take our triangle mesh to be an edge-weighted, undirected graph \(G\).
If \(S\subset G\) is the set of vertices to which we are calculating a geodesic, then the distance function \(x\mapsto d(x, S)\) is the unique solution of the following second-order cone program (SOCP):
\[ \begin{align*} \max_f& \;\; \sum \limits_{i=1}^N f(x_i) \\ \text{subject to } & \;\; f(x_i) = 0\;\forall\;x_i\in S \\ & \;\;f(x_i)-f(x_j) \leq \ell_{e} \text{ anytime } e := (x_i,x_j) \in E \end{align*} \]
We can observe from the formulation of the problem that we have two types of constraints. The number of constraints of the first kind depends directly on the cardinality of \(S\), while the number of constraints of the second type depends on the number of edges in the mesh. Hence, when \(S\) contains every vertex, an upper bound on the number of constraints of this particular problem is \(|S|+|E| \leq \frac{|S|(|S|+1)}{2}\), since a graph on \(|S|\) vertices has at most \(\frac{|S|(|S|-1)}{2}\) edges. In other words, the number of constraints in our problem grows quadratically in the number of vertices of the mesh. Furthermore, there is no good reason why \(f\) should be linear in general, which makes solving this problem even more computationally taxing.
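As a sanity check on the discrete program: its unique maximizer is exactly the multi-source shortest-path distance on the graph, so for small meshes it can be computed directly with Dijkstra's algorithm instead of a cone solver. A sketch assuming an adjacency-list representation (function name is ours):

```python
import heapq

def graph_distance(adj, sources):
    """Multi-source Dijkstra: returns d(x, S) for every vertex x.

    adj maps each vertex to a list of (neighbor, edge_length) pairs. The result
    is the unique maximizer of the program above: f = 0 on S and
    f(x_i) - f(x_j) <= l_e on every edge.
    """
    dist = {v: float("inf") for v in adj}
    heap = []
    for s in sources:
        dist[s] = 0.0
        heapq.heappush(heap, (0.0, s))
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist[v]:
            continue               # stale heap entry
        for w, ell in adj[v]:
            if d + ell < dist[w]:
                dist[w] = d + ell
                heapq.heappush(heap, (dist[w], w))
    return dist

# Unit square as a graph: vertex 3 is two unit edges away from vertex 0.
square = {0: [(1, 1.0), (2, 1.0)], 1: [(0, 1.0), (3, 1.0)],
          2: [(0, 1.0), (3, 1.0)], 3: [(1, 1.0), (2, 1.0)]}
print(graph_distance(square, [0]))  # d(0)=0, d(1)=d(2)=1, d(3)=2
```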
We found that this formulation of the problem (with paths restricted to edges) resulted in poor-quality geodesics with negligible increase in speed. For the remainder of this project we used a form closer to the original, and approximated the gradient with finite differences. Our observations about the number of constraints still apply.
The approach we took on this project to solve these inconveniences proceeds twofold.
On a first instance, we represent \(f\) in terms of a convenient linear basis, but only for building an approximate notion of distance that takes into account only the basis elements that most influence the values of \(f\); in other words, we build a low resolution geodesic. To select this linear basis we start by computing the Laplacian of the mesh. Then, we select a fixed number of eigenvectors of this matrix, giving preference to those with lower eigenvalues.
On a second instance, we require the solution of our problem to be of low resolution. We do this by imposing new linear constraints. This could be thought to be contradictory to the goal of the project; however, with the proper use of some standard tools of convex optimization, for example the active set method, in theory we can actually reduce the number of constraints in the original problem. A consequence we observed by using this approach on reasonably behaved meshes is that the number of active constraints in the problem depends on the size of the sampled basis, yielding a method that is almost independent of the cardinality of \(S\).
Throughout the project, we used cvx in MATLAB to solve our convex program for different sets \(S\) of vertices on the mesh, concluding with \(S\) being the set of all vertices on the mesh, which gave us the distance function for all pairs of vertices. Over the course of the project, we employed various techniques to increase the efficiency of our solver, and were successful in obtaining an all-pairs geodesic distance function on the Spot mesh.
Geodesic distance function heat map
This heat map is a plot of the approximate all-pairs geodesic function for a vertex randomly sampled from Spot. The colour values represent the estimated geodesic distance from the sample vertex (red).
All-pairs geodesic distance function on Spot
Error map
This heat map represents the change in accuracy of our approximate all-pairs geodesic distance function, with change in the number of basis vectors (vertical), and number of faces sampled (horizontal) from the Spot mesh. The colour values represent the mean relative error between the approximated function and the ground truth geodesic function, with the lightest and darkest blues being assigned to the error values "0" and "1.5", normalized according to the observed distribution of the error values obtained. Starting with 5 faces and 5 basis vectors, and taking intervals of 5 for both, we observe that the highest accuracy is obtained along the diagonal, i.e, when the number of faces sampled equals the number of basis vectors.
Neural Implicit Boundary Representations
By SGI Fellow Xinwen Ding and Ahmed Elhag
During the third and fourth week of SGI, Xinwen Ding, Ahmed A. A. Elhag and Miles Silberling-Cook (week 3 only) worked under the guidance of Benjamin Jones and Prof. Adriana Schulz to explore methods that can define a continuous relaxation of CAD geometry.
In general, there are two ways to represent shapes: explicit representations and implicit representations. Explicit representations are easier to model and allow local differentiable parameterizations. CAD geometry, stored in an explicit form called parametric boundary representations (B-reps), is one example, while triangle mesh is another typical example.
However, just as each triangle facet in a triangle mesh has its own independent parameterization, it is hard to represent a surface using one single function under an explicit representation. We call this property being discrete at the global scale. This discreteness forces us to capture continuous changes using discontinuous shape parameterizations, which results in artifacts. For example, explicit representations can be incompatible with some gradient-based methods and some neural network techniques on a non-local scale.
One possible fix to this issue is to use implicit shape representations, such as signed distance field (SDF). SDFs are global functions that are continuously differentiable almost everywhere in the domain, which addresses the issues caused by explicit representations. Motivated by this, we want to play the same trick by defining a continuous relaxation of CAD geometry.
To define this continuous relaxation of CAD geometry, we need to find a continuous relaxation of the boundary element type. Consider a simple case where the CAD data define a geometry that only contains two types; lines and circles. While it is natural that we map lines to 0 and circles to 1, there is no type defined in the CAD geometry as the pre-image of (0,1). So, we want to define these intermediate states between lines and circles.
The task contains two parts. First, we need to learn the SDF and thus obtain the implicit shape representation of the CAD geometry. As an alignment to the input data type, we want to convert the type-blended geometry to CAD data. So next, we want to convert the SDF back to valid boundary representation by recovering the parameters of the elements we encoded in the SDF and then blending their element type.
To make the reconstruction task easier, we decided to learn multiple SDFs, one for each type of geometry. According to these learned SDFs, we can step into the process of recovering the geometries based on their types. Now, let us consider a concrete example. If we have a CAD shape that consists of two types of geometries, say lines and circles, we need to learn two SDFs: one for edges (parts of lines) and another for arcs (parts of circles). With these learned SDFs, we hope to recover all the lines that appear in the input shape from the line SDF, and a similar expectation applies to the circle SDF.
Before jumping into detailed implementations, we want to acknowledge Miles Silberling-Cook for bringing up the multi-SDF idea. Due to the time limitation at SGI, we only tested this method for edges in 2D. We start with the CAD data defining a shape in Figure 1. All the results we show later are based on this geometry.
Figure 1: Input geometry defined by CAD data.
Learned SDF
Our goal is to learn a function that maps the coordinates of a query point \((x,y) \in \mathbb{R^2}\) to a signed distance in \(\mathbb{R}\) from \((x,y)\) to the surface. The output of this function is positive if \((x,y)\) is outside the surface, negative if \((x,y)\) is enclosed by the surface, and zero if \((x,y)\) lies on the surface. Thus, our neural network is defined as \(f: \mathbb{R} ^2 \to \mathbb{R}\). For Figure 1, we learned two neural networks: the first maps \((x,y)\) to its distance from the line edge, and the second maps this point to its distance from the circle edge. For this task, we use a decoder network (a multi-layer perceptron, MLP) and optimize it using gradient descent until convergence. Our dataset was created from a grid of appropriate dimensions, whose points serve as our 2D query points. Then, for each point in the grid, we calculate its distance from the line edge and the circle edge.
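The ground-truth distances used to build such a dataset have closed forms. A sketch with hypothetical helper names, using a full circle rather than an arc for simplicity:

```python
import math

def dist_to_segment(p, a, b):
    """Unsigned distance from point p to segment ab (ground truth for the edge SDF)."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    vx, vy = bx - ax, by - ay
    t = ((px - ax) * vx + (py - ay) * vy) / (vx * vx + vy * vy)
    t = max(0.0, min(1.0, t))                   # clamp the projection to the segment
    return math.hypot(px - (ax + t * vx), py - (ay + t * vy))

def dist_to_circle(p, center, radius):
    """Unsigned distance from point p to a circle (ground truth for the arc SDF,
    ignoring the arc's endpoints for simplicity)."""
    return abs(math.hypot(p[0] - center[0], p[1] - center[1]) - radius)
```

Evaluating these on every grid point yields the supervision targets for the two networks.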
We compare the images from the learned SDFs and the ground truth in Figure 2. It clearly shows that we can overfit and learn both networks, for line and circle edges.
Figure 2: The network learned the SDF with respect to the edges (first row) and arcs (second row) of the input geometry displayed in Figure 1. We compare the learned result (left column) with the ground truth (right column).
After obtaining the learned line SDF model, we need to analytically recover the edges and arcs. To define an edge, we need nothing but a starting point, a direction, and a length. So, we begin the recovery by randomly seeding thousands of points and assigning each point a random direction. Furthermore, we accept only those points whose SDF values are close to zero (see Figure 3), which enhances the success rate of finding an edge that is part of the shape boundary.
Figure 3: we iteratively generate points until 6000 of them are accepted (i.e. SDF value small enough). The accepted points are plotted in red.
Then, we need to tell which lines are most likely to define the boundary of our CAD shape and reject the ones that are unlikely to be on the boundary. To guarantee a fair selection, we fix the length of the randomly generated edges and pick the ones whose line integral of the learned line SDF is small enough. Moreover, to save time, we approximate the integral by a finite sum, summing the SDF values at a fixed number of sample points along every edge. Stopping here, we have a pool of candidate boundary edges. We visualize them in terms of their starting points and directions using a quiver plot in Figure 4.
Figure 4: Starting points (red dots) and proper directions (black arrows) that define potential edges.
In the next step, we want to extend the candidate edges as long as possible, as our goal is to reconstruct the whole boundary. The extension ends once the SDF value of some extended point exceeds some threshold. After the extension, we cluster the extended edges using the mean shift algorithm. We adopt this clustering algorithm since it does not need to pre-determine the number of clusters. As shown in Figure 5, the algorithm successfully predicts the correct number of edge clusters after carefully tuning the parameters.
Figure 5: Clustered edges. Each color represents one cluster. After carefully tuning parameters, the optimal number of clusters found by the mean shift algorithm reflects the actual number of edges in the geometry.
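This step can be sketched with scikit-learn's MeanShift (synthetic stand-ins for the extended-edge parameters; the bandwidth value is a hypothetical tuning choice):

```python
import numpy as np
from sklearn.cluster import MeanShift

# synthetic stand-in for extended edge parameters: (x, y, angle)
rng = np.random.default_rng(0)
edges = np.concatenate([
    rng.normal([0.0, 0.0, 0.0], 0.05, size=(40, 3)),
    rng.normal([1.0, 1.0, 1.6], 0.05, size=(40, 3)),
    rng.normal([2.0, 0.0, 3.1], 0.05, size=(40, 3)),
])

# mean shift needs no preset cluster count, only a bandwidth
ms = MeanShift(bandwidth=0.5).fit(edges)
n_clusters = len(np.unique(ms.labels_))  # recovers the 3 groups
```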
Finally, we want to extract the lines that best define the shape boundary. As we set a threshold in the extension process, we simply need to choose the longest edge from each cluster and name it a boundary edge. The three boundary edges in our example, one in each color, appear in Figure 6.
Figure 6: The longest edges, one from each cluster, with parameters known.
To sum up, during the project's two-week active period, we managed to complete the following items:
We set up a neural network to learn multiple SDFs. The model learns the SDF for edge and arc components on the boundary of a 2D input shape.
We developed and implemented a sequence of procedures to reconstruct the lines from the trained line SDF model.
Even though we showed the results we achieved during the two weeks, there is more to improve in the future. First, we need to reconstruct the arcs in 2D and ensure the whole procedure succeeds on more complicated 2D geometries. Second, we would like to generalize the whole process to 3D. More importantly, we are interested in establishing a way to smoothly and continuously characterize shape transfer after the reconstruction. Finally, we need to transfer the continuous shape representation back to a CAD geometry.
Tags CAD data, Implicit Shape Representation, Machine Learning, optimization
Mesh denoising in flatland
Post author By hector.chahuara
No Comments on Mesh denoising in flatland
by Hector Chahuara, Anna Krokhine and Elshadai Tegegn
Controlling caustics is a difficult task, as any change to the specular surface can have large effects on the caustic image. In this post, we address a building block of the optimization framework for computing the shape of refractive objects presented in Schwartzburg et al. (2014) and propose an improvement to the part of its formulation that performs mesh reconstruction. The proposed improvement was tested on flatland meshes and performs reasonably well in the presence of noise.
Regularization is a technique often needed in optimization for enforcing certain characteristics such as sparsity or stability in the solution. In this particular case, Schwartzburg et al (2014) apply a generalized Tikhonov regularization to achieve a stable solution. Given the optimization-based formulation of Schwartzburg et al (2014), it is possible to isolate this problem and to see that this is in fact mesh denoising. In addition, it is important to mention that Tikhonov regularization is usually outperformed by other methods, among which total variation (TV) denoising distinguishes itself by its reasonable computational cost and good results.
In the following, we apply the framework described in Zhang et al. (2015), which applies TV to the noisy normalized normals \(N_0\) of a mesh. The optimization problem to solve becomes

\( \min_N \|N-N_0\|^2 + \lambda\,\mathrm{WVTV}(N), \)

where WVTV is a weighted vectorial TV, adapted for flatland from the one described in Zhang et al. (2015), defined as

\( \mathrm{WVTV}(N) = \sum_e \omega_e \left( l_e (D_x N)^2 + (D_y N)^2 \right)^{1/2}. \)

Here \(l_e\) is the length of edge \(e\), and the weights \(\omega_e\), which depend on the difference of two consecutive normals \(N_i\) and \(N_{i+1}\) (i.e., normals of adjacent edges), are defined as

\( \omega_e = \exp\!\left(-\|N_i - N_{i+1}\|^4\right) \)
to penalize sharp features (large differences between consecutive normals) less than smooth ones (similar consecutive normals). It is important to mention that TV denoising is a non-smooth optimization problem, so a solver that relies solely on gradient or Hessian information may not reach the optimum. Using iteratively reweighted least squares (IRLS), a well-known optimization method (Wolke et al., 1988), it is straightforward to build an algorithm to solve this problem.
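The post's solver was written in MATLAB; as a rough illustration of the IRLS idea (a 1D TV-denoising sketch, not the paper's weighted mesh formulation), each iteration replaces the non-smooth absolute value by a weighted quadratic and solves the resulting linear system exactly:

```python
import numpy as np

def tv_denoise_irls(y, lam=0.5, n_iter=50, eps=1e-6):
    # minimize ||x - y||^2 + lam * sum_i |(Dx)_i| via IRLS:
    # |t| ~ t^2 / sqrt(t^2 + eps) turns each step into least squares
    n = len(y)
    D = np.diff(np.eye(n), axis=0)  # forward-difference operator
    x = y.copy()
    for _ in range(n_iter):
        w = 1.0 / np.sqrt((D @ x) ** 2 + eps)         # reweighting
        A = np.eye(n) + lam * D.T @ (w[:, None] * D)  # normal equations
        x = np.linalg.solve(A, y)
    return x

# a noisy step signal: TV keeps the jump while flattening the noise
rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(50), np.ones(50)])
noisy = clean + 0.1 * rng.standard_normal(100)
denoised = tv_denoise_irls(noisy)
```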
The described approach was implemented in MATLAB R2021b. Visual results can be observed in the following animations that show the denoising process of two figures corrupted by noise: a square and a circle.
Denoising procedure of a square mesh
Denoising procedure of a circle mesh
The implemented method yields reasonable results for the presented cases. These exploratory experiments indicate that the method has the potential to improve results if embedded in the general caustics framework. Nonetheless, more experiments are needed to confirm this and to assess the impact on the quality of the result.
Y. Schwartzburg, R. Testuz, A. Tagliasacchi, and M. Pauly, "High-contrast computational caustic design," ACM Trans. Graph., vol. 33, no. 4, 2014
H. Zhang, C. Wu, J. Zhang and J. Deng, "Variational Mesh Denoising Using Total Variation and Piecewise Constant Function Space," IEEE Transactions on Visualization and Computer Graphics, vol. 21, no. 7, pp. 873-886, July 2015
R. Wolke and H. Schwetlick, "Iteratively Reweighted Least Squares: Algorithms, Convergence Analysis, and Numerical Comparisons," SIAM Journal on Scientific and Statistical Computing, vol. 9, pp. 907-921, 1988
L. Condat, "Discrete total variation: New definition and minimization," SIAM Journal on Imaging Sciences, vol. 10, no. 3, pp. 1258-1290, 2017
How to Train Your Dragon Using a SIREN
Post author By Ahmed Elhag
No Comments on How to Train Your Dragon Using a SIREN
A theoretical walkthrough of INR methods for unoriented point clouds
By Alisia Lupidi, Ahmed Elhag and Krishnendu Kar
Hello SGI Community, 👋
We are the Implicit Neural Representation (INR) team, Alisia Lupidi, Ahmed Elhag, and Krishnendu Kar, under the supervision of Dr. Dena Bazazian and Shaimaa Monem Abdelhafez. In this article, we would like to introduce you to the theoretical background for our project.
We based our work on two papers: one about Sinusoidal Representation Network (SIREN[1] ) and one about Divergence Guided Shape Implicit Neural Representation for unoriented point clouds (DiGS[2]), but before presenting these we added a (simple and straightforward) prerequisites section where you can find all the theoretical background needed to understand the papers.
0. Prerequisites
0.1 Implicit Neural Representation (INR)
Various paradigms have been used to represent 3D objects predicted by neural networks, including voxels, point clouds, and meshes. All of these methods are discrete and therefore pose several limitations – for instance, they require a lot of memory, which limits the resolution of the predicted 3D object.
The concept of implicit neural representation (INR) for 3D representation has recently been introduced and uses a signed distance function (SDF) to represent 3D objects implicitly. It is a type of regression problem and consists of encoding a continuous differentiable signal within a neural network. With INR, we can represent a 3D object at high resolution with a lower memory footprint, avoiding the limitations of discrete methods.
0.2 Signed Distance Function (SDF)
The signed distance function[5] of a set Ω in a metric space determines the distance of a given point 𝓍 from the boundary of Ω.
The function has positive values for ∀𝓍 ∈ Ω, and this value decreases as 𝓍 approaches the boundary of Ω. On the boundary, the signed distance function is exactly 0 and it takes negative values outside of Ω.
0.3 Rectified Linear Unit (ReLU)
The Rectified Linear Unit (ReLU, Fig. 1) is the most commonly used activation function in deep learning models. The function returns 0 if it receives any negative input, but for any positive value 𝓍, it returns that value. It is defined by f(𝓍) = max(0, 𝓍), where 𝓍 is an input to a neuron.
Figure 1: ReLU Plot[3]
ReLU pros:
Extremely easy to implement (just a max function), unlike the Tanh and the Sigmoid activation functions that require exponential calculations
The rectifier function behaves like a linear activation function
Can output a true zero value

ReLU cons:
Dying ReLU: for all negative inputs, ReLU gives 0 as output – a ReLU neuron is "dead" if it is stuck on the negative side (not recoverable)
Non-differentiable at zero
Not zero-centered and unbounded
0.4 Multilayer Perceptron (MLP)
A multilayer perceptron (MLP)[6] is a fully connected class of feedforward artificial neural network (ANN) that utilizes backpropagation for training (supervised learning). It has at least three layers of nodes: an input layer, a hidden layer and an output layer, and except for the first layer, each node is a neuron that uses a nonlinear activation function.
A perceptron is a linear classifier that computes 𝓎 = ∑ᵢ Wᵢ𝓍ᵢ + b, where 𝓍 is the feature vector, W the weights, and b the bias. This output, in some cases such as SIREN, is passed through an activation function (see equation 5).
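As a tiny numpy illustration of this forward pass (the numbers are arbitrary):

```python
import numpy as np

def perceptron(x, W, b, activation=np.tanh):
    # affine transform followed by a nonlinear activation
    return activation(W @ x + b)

x = np.array([1.0, 2.0])        # feature vector
W = np.array([[0.5, -0.25]])    # weights
b = np.array([0.1])             # bias
y = perceptron(x, W, b)         # tanh(0.5*1 - 0.25*2 + 0.1) = tanh(0.1)
```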
1. SIREN
1.1 Sinusoidal Representation Network (SIREN)
SIRENs[1] are neural networks (NNs) used for INRs of signals. In comparison with other network architectures, SIRENs can represent a signal's spatial and temporal derivatives, thus conveying a greater level of detail when modeling a signal, thanks to their sine-based activation function.
1.2 How does SIREN work?
We want to learn a function Φ that satisfies the following equation and is limited by a set of constraints[1],
The above is the formulation of an implicit neural representation, where F is our NN architecture; it takes as input the spatio-temporal coordinates 𝓍 ∈ ℝm and the derivatives of Φ with respect to 𝓍. This feasibility problem can be summarized as[1],
As we want to learn Φ, it makes sense to cast this problem in a loss function to see how well we accomplish our goal. This loss function penalizes deviations from the constraints on their domain Ωm[1],
Here we have the indicator function 1Ωm= 1 if 𝓍 ∈ Ωm and otherwise equal to 0.
All points 𝓍ᵢ are mapped to Φ(𝓍) in a way that minimizes its deviation from the sampled value of the constraint aᵢ(𝓍). The dataset D = {(𝓍ᵢ, aᵢ(𝓍))}ᵢ is sampled dynamically at training time using Monte Carlo integration. The aim of doing this is to improve the approximation of the loss L as the number of samples grows.
To calculate Φ, we use θ parameters and solve the resulting optimization problem using gradient descent.[1]
Here \(\theta_i : \mathbb{R}^{M_i} \to \mathbb{R}^{N_i}\) is the \(i\)th layer of the network. It consists of the affine transform defined by the weight matrix \(W_i \in \mathbb{R}^{N_i \times M_i}\) and the biases \(b_i \in \mathbb{R}^{N_i}\) applied to the input \(x_i \in \mathbb{R}^{M_i}\). The sine acts as the activation function, chosen to achieve greater resolution: a sine activation ensures that all derivatives are nonzero regardless of their order (the sine's derivative is a cosine, whereas a polynomial's derivatives vanish after a number of differentiations related to its degree). A polynomial would thus lose higher-order frequencies and render a model with less information.
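A minimal PyTorch sketch of one such sine layer (our illustration: the ω₀ = 30 frequency factor and the uniform initialization bounds follow the scheme described in [1]; the layer sizes are arbitrary):

```python
import math
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    # one SIREN layer: x -> sin(omega_0 * (W x + b))
    def __init__(self, in_features, out_features, omega_0=30.0, is_first=False):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_features, out_features)
        with torch.no_grad():
            # first layer spans the input range; later layers are scaled
            # down by omega_0 to keep activations well distributed
            bound = (1.0 / in_features if is_first
                     else math.sqrt(6.0 / in_features) / omega_0)
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))

# a tiny SIREN mapping 2D coordinates to a scalar (e.g. an SDF)
siren = nn.Sequential(
    SineLayer(2, 64, is_first=True),
    SineLayer(64, 64),
    nn.Linear(64, 1),
)
out = siren(torch.rand(8, 2))  # one value per query point
```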
1.3 Examples and Experiments
In the paper[1], the authors compared SIREN's performance to that of the most popular network architectures, such as Tanh, ReLU, and Softplus. Conducting experiments on different signals (pictures, sound, video, etc.), they show that SIREN outperforms all the other methods by a significant margin (i.e., it reconstructs the signal with higher accuracy, Fig. 2).
Figure 2: SIREN performance compared to other INRs.[1]
To appreciate SIREN's full potential, it is worth looking at the experiments the authors ran on an oriented point cloud representing a 3D statue and on shape representation with differentiable signed distance functions (Fig. 3).
Figure 3: SIREN reconstruction of a high detailed statue compared to ReLU ones.[5]
By applying SIREN to this problem, they were able to reconstruct the whole object while accurately reproducing even the finer details on the elephant and woman's bodies.
The whole scene is stored in the weights of a single 5-layer NN.
Compared to recent procedures (like combining voxel grids with neural implicit representations), with SIREN we have no 2D or 3D convolutions and fewer parameters.
What gives SIREN an edge over other architectures is that it does not compute SDFs using supervised ground-truth SDF or occupancy values. Instead, it requires supervision in the gradient domain; even though this is a harder problem to solve, it results in a better model and a more efficient approach.
1.4 Conclusion on SIREN
SIREN is a new and powerful method in the INR world that allows 3D image reconstructions with greater precision, an increased amount of details, and smaller memory usage.
All of this has been possible thanks to a smart choice of activation function: using the sine gives an edge over other networks (Tanh, ReLU, Sigmoid, etc.), as its derivatives are never null and it retains information even at higher frequency levels (higher-order derivatives).
2. DiGS
2.1 Divergence Guided Shape Implicit Neural Representation for unoriented point clouds (DiGS)
Now that we know what SIREN is, we present DiGS[2]: a method to reconstruct surfaces from point clouds that have no normals.
Why are normals important for INR? When normal vectors are available for each point in the cloud, a higher fidelity representation can be learned. Unfortunately, most of the time normals are not provided with the raw data. DiGS is significant because it manages to render objects with high accuracy in all cases, thus bypassing the need for normals as a prior.
2.2 Training SIRENs without normal vectors
The problem has now changed into training a SIREN without supplying the shape's normals beforehand.
Looking at the gradient vector field of the signed distance function produced by the network, we can see that it has low divergence nearly everywhere except for sink points – like in the center of the circle in Fig. 4. By incorporating this geometric prior as a soft constraint in the loss function, DiGS reliably orients gradients to match the unknown normals at each point and in some cases this produces results that are even better than those produced by approaches that use directly GT normals.
Figure 4: DiGS applied to a 2D example. [2]
To do this, DiGS requires a network architecture with continuous second derivatives, such as SIREN, and an initial zero-level set that is approximately spherical. To achieve the latter, two new geometric initialization methods are proposed: initialization to a sphere and the Multi-Frequency Geometric Initialization (MFGI). The MFGI improves on the first because plain sphere initialization keeps all activations within the first period of the sinusoidal activation function and therefore cannot generate high-frequency outputs, losing some detail. The MFGI instead introduces controlled high frequencies into the first layer of the NN; even though it is a noisier method, it solves the aforementioned problem.
To train SIRENs with DiGS, we first need a precise geometric initialization. Initializing the shape to a sphere biases the function to start with an SDF that is positive away from the object and negative in the center of the object's bounding box, while keeping the model's ability to have high frequencies (in a controllable manner).
Progressively, finer and finer details are added into consideration, passing from a smooth surface to a coarse shape with edges. This step-by-step approach allows the model to learn a function that has smoothly changing normals and that is interpolated as much as possible with the original point cloud samples.
The original SIREN loss function includes:
A manifold constraint: points on the surface manifold should be on the function's zero-level set[2],
A non-manifold penalization constraint that penalizes off-surface points whose predicted SDF is close to 0[2],
An Eikonal term that constrains all gradients to have unit length[2],
A normal term that forces the gradients' directions of points on the surface manifold to match the GT normals' directions[2],
The original SIREN's loss function is[2]
With DiGS, the normal term is replaced by a Laplacian one that imposes a penalty on the magnitude of the divergence of the gradient vector field.[2]
This leads to a new loss function, which is equivalent to regularizing the learned function:[2]
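Both differential terms can be evaluated with automatic differentiation. As a sanity check (our illustration, not the DiGS code), apply them to the analytic field f(x) = ‖x‖, the exact distance to the origin: its gradient has unit norm, so the Eikonal term vanishes, while the divergence of its gradient in 3D equals 2/‖x‖:

```python
import torch

def grad_and_divergence(f, x):
    # gradient of the scalar field f at points x, plus the divergence
    # of that gradient field (the Laplacian), both via autograd
    x = x.requires_grad_(True)
    y = f(x)
    (g,) = torch.autograd.grad(y.sum(), x, create_graph=True)
    div = torch.zeros(x.shape[0])
    for i in range(x.shape[1]):
        (gi,) = torch.autograd.grad(g[:, i].sum(), x, create_graph=True)
        div = div + gi[:, i]
    return g, div

x = torch.rand(16, 3) + 0.5            # keep points away from the origin
g, div = grad_and_divergence(lambda p: p.norm(dim=1), x)
eikonal_loss = ((g.norm(dim=1) - 1.0) ** 2).mean()  # ~0 for a true SDF
divergence_penalty = div.abs().mean()
```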
Figure 5: Reconstructing a dragon statue with DiGS and SIREN at iterations 0, 50, 2000, and 9000.[7]
In Fig. 5, both SIREN and DiGS were tasked with the reconstruction of a highly detailed statue of a dragon for which no normals were provided. The outputs were color mapped: in red, you can see the approximated solution, and in blue the reconstructed points match exactly with the ground truth.
At iteration 0, we can appreciate the spherical geometrical initialization required by DiGS. At first, it seems that SIREN converges faster and better to the solution: after 50 iterations, in fact, SIREN already has the shape of the dragon whereas DiGS has a potato-like object.
But as the reconstruction progresses, SIREN gets stuck on certain areas and cannot recover them. This phenomenon is known as "ghost geometries" (see Fig. 6): no matter how many iterations we run, INR architectures cannot reconstruct these areas, so the results will always have unrecoverable holes. DiGS is slower but can overcome ghost geometries: after 9k iterations, it manages to recover data for the whole statue, including the areas SIREN leaves empty.
Figure 6: Highlighting ghost geometries. [7]
2.4 Conclusion on DiGS
In general, DiGS performs as well as (if not better than) other state-of-the-art methods that use normal supervision, and it can also overcome ghost geometries and deal with unoriented point clouds.
Being able to reconstruct shapes from unoriented point clouds expands our ability to deal with incomplete data, which often occurs in real life. As point cloud datasets without normals are more common than those with normals, the advancements championed by SIREN and DiGS will greatly facilitate 3D model reconstruction.
We want to thank our supervisor, Dr. Dena Bazazian, and our TA, Shaimaa Monem, for guiding us through this article. We also want to thank Dr. Ben-Shabat for his kindness in answering all our questions about his paper.
[1] Sitzmann, V., Martel, J., Bergman, A., Lindell, D. and Wetzstein, G., 2020. Implicit neural representations with periodic activation functions. Advances in Neural Information Processing Systems, 33, pp.7462-7473.
[2] Ben-Shabat, Y., Koneputugodage, C.H. and Gould, S., 2022. DiGS: Divergence guided shape implicit neural representation for unoriented point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 19323-19332.
[3] Liu, D., 2017, A Practical Guide to ReLU [online], https://medium.com/@danqing/a-practical-guide-to-relu-b83ca804f1f7 [Accessed on 26 July 2022].
[4] Sitzmann, V., 2020, Implicit Neural Representations with Periodic Activation Functions [online], https://www.vincentsitzmann.com/siren/, [Accessed 27 July 2022].
[5] Chan, Tony, and Wei Zhu. "Level set based shape prior segmentation." 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05). Vol. 2. IEEE, 2005.
[6] Hastie, Trevor, et al. The elements of statistical learning: data mining, inference, and prediction. Vol. 2. New York: springer, 2009.
[7] anucvml, 2022, CVPR 2022 Paper: Divergence Guided Shape Implicit Neural Representation for Unoriented Point Clouds [online], https://www.youtube.com/watch?v=bQWpRyM9wYM, Available from 31 May 2022 [Accessed 27 July 2022].
Tags 3D shape representation, DIGS, Implicit Neural Representation, INR, SIREN
Frame Averaging for Invariant Point Cloud Classification
Post author By xyzhang17
No Comments on Frame Averaging for Invariant Point Cloud Classification
By Ruyu Yan, Xinyi Zhang
Invariant Point Cloud Classification
We experimented with incorporating SE(3) invariance into a point cloud classification model based on our prior study on Frame Averaging for Invariant and Equivariant Network Design. We used a simple PointNet architecture, dropping the input and feature transformations, as our backbone.
Method and Implementation
Similar to the normal estimation example provided in the reference paper, we defined the frames to be
\(\mathcal{F}(X)=\{([\alpha_1v_1, \alpha_2v_2, \alpha_3v_3], t)|\alpha_i\in\{-1, 1\}\}\subset E(3) \)
where \(t\) is the centroid and \(v_1, v_2, v_3\) are the principal components of the point cloud \(X\). Then, we have the frame operations
\(\rho_1(g)=\begin{pmatrix}R &t \\0^T & 1\end{pmatrix}, R=\begin{bmatrix}\alpha_1v_1 &\alpha_2v_2&\alpha_3v_3\end{bmatrix} \)
With that, we can symmetrize the classification model \(\phi:V \rightarrow \mathbb{R}\) by
\(\langle \phi \rangle \mathcal{F}(X)=\frac{1}{|\mathcal{F}(X)|}\sum_{g\in\mathcal{F}(X)}\phi(\rho_1(g)^{-1}X) \)
Here we show the pseudo-code of our implementation of the symmetrization algorithm in the forward propagation through the neural network. Please refer to our first blog about this project for details on get_frame and apply_frame functions.
def forward(self, pnt_cloud):
    # compute frames by PCA
    frame, center = self.get_frame(pnt_cloud)
    # apply frame operations to re-centered point cloud
    pnt_cloud_framed = self.apply_frame(pnt_cloud - center, frame)
    # extract features of framed point cloud with PointNet
    pnt_net_feature = self.pnt_net(pnt_cloud_framed)
    # predict likelihood of classification to each category
    pred_scores = self.classify(pnt_net_feature)
    # take the average of prediction scores over the 8 frames
    pred_scores_averaged = pred_scores.mean(dim=1)
    return pred_scores_averaged
Experiment and Comparison
We chose Vector Neurons, a recent framework for invariant point cloud processing, as our experiment baseline. By extending neurons from 1-dimensional scalars to 3-dimensional vectors, the Vector Neuron Networks (VNNs) enable a simple mapping of SO(3) actions to latent spaces and provide a framework for building equivariance in common neural operations. Although VNNs could construct rotation-equivariant learnable layers of which the actions will commute with the rotation of point clouds, VNNs are not very compatible with the translation of point clouds.
In order to build a neural network that commutes with the actions of the SE(3) group (rotation + translation) instead of the actions of the SO(3) group (rotation only), we will use Frame Averaging to construct the equivariant autoencoders that are efficient, maximally expressive, and therefore universal.
Following the implementation described above, we trained a classification model on 5 selected classes (bed, chair, sofa, table, toilet) in the ModelNet40 dataset. We obtained a Vector Neurons classification model with the same PointNet backbone (input and feature transformation included) pre-trained on the complete ModelNet40 dataset. We tested the classification accuracy on both models with point cloud data of the 5 selected classes randomly transformed by rotation-only or rotation and translation. The results are shown in the table below.
Original Point Cloud of Chair
Rotation-only
Rotation and Translation
SO(3) Test Instance Accuracy SE(3) Test Instance Accuracy
Vector Neurons 89.6% 6.1%
Frame Averaging 80.7% 80.3%
The Vector Neurons model was trained for a more difficult classification task, but it was also more fine-tuned, with a much longer training time and a larger data size. The Frame Averaging model in our experiment, however, was trained with relatively larger learning steps and a shorter training time. Although it is not fair to make a direct comparison between the two models, we can still see a conclusive result. As expected, the Vector Neurons model has good SO(3) invariance but tends to fail when the point cloud is translated. The Frame Averaging model, on the other hand, performs similarly well on both the SO(3) and SE(3) invariance tests.
If the Frame Averaging model were trained with the same setting as the Vector Neurons model, we believe that its accuracy on both the SO(3) and SE(3) tests would be comparable with Vector Neurons' SO(3) accuracy, because of its maximal expressive power. Due to limited resources, we cannot prove our hypothesis by experiment, but we provide more theoretical analysis in the next section.
Discussion: Expressive Power of Frame Averaging
Besides the good performance in maintaining SE(3) invariance in classification tasks, we want to further discuss the advantage of frame averaging, namely its ability to preserve the expressive power of any backbone architecture.
The Expressive Power of Frame Averaging
Let \(\phi: V \rightarrow \mathbb{R}\) and \(\Phi: V \rightarrow W\) be some arbitrary functions, e.g. neural networks where \(V, W\) are normed linear spaces. Let \({G}\) be a group with representations \(\rho_1: G \rightarrow GL(V)\) and \(\rho_2: G \rightarrow GL(W)\) that preserve the group structure.
A frame is bounded over a domain \(K \subset V\) if there exists a constant \(c > 0\) so that \(||\rho_2(g)|| \leq c\) for all \(g \in F(X)\) and all \(X \in K\) where \(||\cdot||\) is the operator norm over \(W\).
A domain \(K \subset V\) is frame finite if for every \(X \in K\), \(F(X)\) is a finite set.
A major advantage of Frame Averaging is its preservation of the expressive power of the base models. For any class of neural networks, we can see them as a collection of functions \(H \subset C(V,W)\), where \(C(V,W)\) denotes all the continuous functions from \(V\) to \(W\). We denote \(\langle H \rangle = \{ \langle \phi \rangle | \phi \in H\}\) as the transformed \(H\) after applying the frame averaging. Intuitively, the expressive power tells us the approximation power of \(\langle H \rangle\) in comparison to \(H\) itself. The following theorem demonstrates the maximal expressive power of Frame averaging.
If \(F\) is a bounded \(G\)-equivariant frame over a frame-finite domain \(K\), then for any equivariant function \(\psi \in C(V,W)\), the following inequality holds
\(\inf_{\phi \in H}\|\psi - \langle \phi \rangle_F\|_{K,W} \leq c \inf_{\phi \in H}\|\psi - \phi\|_{K_F,W} \)

where \(K_F = \{ \rho_1(g)^{-1}X \mid X \in K, g \in F(X)\}\) is the set of points sampled by the FA operator and \(c\) is the constant from the definition above.
With the theorem above, we can therefore prove the universality of the FA operator. Let \(H\) be any collection of functions that are universal set-equivariant, i.e., for arbitrary continuous set function \(\psi\) we have \(\inf_{\phi \in H} ||\psi – \phi||_{\Omega,W} = 0\) for arbitrary compact sets \(\Omega \subset V\). For bounded domain \(K \subset V\), \(K_F\) defined above is also bounded and contained in some compact set \(\Omega\). Therefore, we can conclude with the following corollary.
Corollary
Frame Averaging results in a universal SE(3) equivariant model over bounded frame-finite sets, \(K \subset V\).
Exploring Frame Averaging for Invariant and Equivariant Network Design
Post author By ry233
No Comments on Exploring Frame Averaging for Invariant and Equivariant Network Design
written by Nursena Koprucu and Ruyu Yan
In the first week of project SE(3) Invariant and Equivariant Neural Network for Geometry Processing, we studied the research approach frame averaging that tackles the challenge of representing intrinsic shape properties with neural networks. Here we introduce the paper Frame Averaging for Invariant and Equivariant Network Design[1] which grounded the theoretical analysis of generalizing frame averaging with different data types and symmetry groups.
Deep neural networks are useful for approximating unknown functions with combinations of linear and non-linear units. Its application, especially in geometry processing, involves learning functions invariant or equivariant to certain symmetries of the input data. Formally, for a function \(f\) to be
invariant to transformation \(A\), we have \(f(Ax)=f(x)\);
equivariant to transformation \(A\), we have \(f(Ax)=Af(x)\).
This paper introduced Frame Averaging (FA), a systematic framework for incorporating invariance or equivariance into existing architecture. It was derived from the observation in group theory that
Arbitrary functions \(\phi:V\rightarrow\mathbb{R}, \Phi:V\rightarrow W\), where \(V, W\) are some vector spaces, can be made invariant or equivariant by symmetrization, that is averaging over the group.
Instead of averaging over the entire group \(G\) with potentially large cardinality, where exact averaging is intractable, the paper proposed an alternative of averaging over a carefully selected subset \(\mathcal{F}(X)\subset G\) such that the cardinality \(|\mathcal{F}(X)|\) is small. Computing the average over \(\mathcal{F}(X)\) can efficiently retain both the expressive power and the exact invariance/equivariance.
Let \(\phi\) and \(\Phi\) be arbitrary functions, e.g. neural networks.
Let \({G}\) be a group with representation \({p_i}\)'s that preserves the group structure by satisfying \({p_i}(gh)= {p_i}(g){p_i}(h)\) for all \({g}, {h}\) in \({G}\).
We want to make
\(\phi\) into an invariant function, i.e. \(\phi(p_1(g){X})= \phi({X})\); and
\(\Phi\) into an equivariant function, i.e. \(\Phi(p_1(g){X})= {p_2(g)}\Phi({X})\).
Instead of averaging over the entire group each time, we take the average over a carefully chosen subset of the group elements – called a frame.
A frame is defined as a set valued function \(\mathcal{F}:V\rightarrow\ {2^G \setminus\emptyset}\).
A frame is \(G\)-equivariant if \(F(\rho_1(g)X) = gF(X)\) for every \(X \in V\) and \(g \in G\), where \(gF(X)= \{gh \mid h \in F(X)\}\) and the equality means that the two sets are equal.
Why are the equivariant frames useful?
Figure 1: Frame equivariance (sphere shape represents the group \(G\); square represents \(V\)).
If the equivariant frame is easy to compute and its cardinality is not too large, then averaging over the frame provides the required function symmetrization (for both invariance and equivariance).
Let \(\mathcal{F}\) be a \({G}\) equivariant frame, and \(\phi:V\rightarrow\mathbb{R}, \Phi:V\rightarrow W\) be some functions.
Then, \(\langle \phi \rangle _F\) is \(G\) invariant, while \(\langle \Phi \rangle _F\) is \(G\) equivariant.
Incorporating G as a second symmetry
Here, we have \(\phi, \Phi\) that are already invariant/equivariant w.r.t. some symmetry group \(H\). We want to make them invariant/equivariant to \(H \times G\).
Frame Average second symmetry
Assume \(F\) is \(H\)-invariant and \(G\)-equivariant. Then,
If \(\phi:V\rightarrow\mathbb{R}\) is \(H\) invariant, and \(ρ_1,τ_1\) commute, then \(\langle \phi \rangle _F\) is \(G×H\) invariant.
If \(\Phi:V\rightarrow W\) is \(H\) equivariant and \(ρ_i, τ_i, i = {1,2}\), commute then \(\langle \Phi \rangle _F\) is \(G×H\) equivariant.
Efficient calculation of invariant frame averaging
We can come up with a more efficient form of FA in the invariant case.
The stabilizer of an element \(X \in V\) is a subgroup of \(G\) defined by \(G_X = \{g \in G \mid \rho_1(g)X = X\}\). \(G_X\) naturally induces an equivalence relation \(\sim\) on \(F(X)\), with \(g \sim h \iff hg^{-1} \in G_X\). The equivalence classes (orbits) are \([g] = \{h \in F(X) \mid g \sim h\} = G_X g \subset F(X)\) for \(g \in F(X)\), and the quotient set is denoted \(F(X)/G_X\).
Equivariant frame \(F(X)\) is a disjoint union of equal size orbits, \([g] ∈ F(X)/G_X\).
Approximation of invariant frame averaging
When \(F(X)/G_X\) is hard to enumerate, we can draw a random element from \(F(X)/G_X\) with uniform probability.
Let \(F(X)\) be an equivariant frame, and \(g \in F(X)\) be a uniform random sample. Then \([g] \in F(X)/G_X\) is also uniform.
Hence, we get an efficient approximation strategy that is averaging over uniform samples.
Model Instances
An instance of the FA framework contains:
The symmetry group \(G\), representations \(\rho_1,\rho_2\) , and the underlying frame \(\mathcal{F}\).
The backbone architecture for \(\phi\) (invariant) or \(\Phi\) (equivariant).
Example: Normal Estimation with Point Cloud
Here we use normal estimation with point cloud data as an example. The goal is to incorporate \(E(d)\) equivariance to the existing backbone architecture (e.g. PointNet and DGCNN).
Mathematical Formulation
The paper proposed the frame \(\mathcal{F}(X)\) based on Principal Component Analysis (PCA). Formally, we denote the centroid of \(X\) by
$$t=\frac{1}{n}X^T\mathbf{1}\in\mathbb{R}^d $$
and covariance matrix computed after centering the data is
$$C=(X-\mathbf{1}t^T)^T(X-\mathbf{1}t^T)\in\mathbb{R}^{d\times d} $$
Then, we define the frame by
$$\mathcal{F}(X)=\{([\alpha_1v_1, \dots, \alpha_dv_d], t)\mid\alpha_i\in\{-1, 1\}\}\subset E(d) $$
where \(v_1, \dots, v_d\) are the eigenvectors of \(C\). From these, we can form the representation of the frames as transformation matrices:
$$\rho(g)=\begin{pmatrix}R & t \\\mathbf{0}^T & 1\end{pmatrix}, \quad R=\begin{bmatrix}\alpha_1v_1 &\dots&\alpha_dv_d\end{bmatrix} $$
We show part of the implementation of normal estimation for point cloud in PyTorch. For more details, please refer to the open-source code provided by the authors of [1].
We first perform PCA with the input point cloud.
# Compute the centroid
center = pnts.mean(1,True)
# Center the points
pnts_centered = pnts - center
# Compute covariance matrix
R = torch.bmm(pnts_centered.transpose(1,2),pnts_centered)
# Eigen-decomposition (torch.symeig is deprecated in recent PyTorch;
# torch.linalg.eigh is the modern equivalent)
lambdas,V_ = torch.symeig(R.detach().cpu(),True)
# Reshape the eigenvectors
F = V_.to(R).unsqueeze(1).repeat(1,pnts.shape[1],1,1)
Then, we can construct the rotation matrices by
# Enumerate all sign combinations of alpha
ops = torch.tensor([
[1,1,1],
[1,1,-1],
[1,-1,1],
[1,-1,-1],
[-1,1,1],
[-1,1,-1],
[-1,-1,1],
[-1,-1,-1]
]).unsqueeze(1).to(point_cloud)
F_ops = ops.unsqueeze(0).unsqueeze(2) * F.unsqueeze(1)
Finally, we perform the preprocessing to the point cloud data by centering and computing the frame average.
pnts_centered = point_cloud - center
# Apply inverse of the frame operation
framed_input = torch.einsum('bopij,bpj->bopi',F_ops.transpose(3,4),pnts_centered)
# feed forward to the backbone neural network
# e.g. point net
outs = self.nn_forward(framed_input)
# Apply frame operation
outs = torch.einsum('bopij,bopj->bopi',F_ops,outs)
# Take the average
outs = outs.mean(1)
For a more detailed example application of frame averaging, please see the next blog post by our group: Frame Averaging for Invariant Point Cloud Classification.
[1] Omri Puny, Matan Atzmon, Heli Ben-Hamu, Ishan Misra, Aditya Grover, Edward J. Smith, and Yaron Lipman. Frame averaging for invariant and equivariant network design. In ICLR, 2022.
Tags frame averaging, invariance and equivariance, neural network
Geometric Modeling for Isogeometric Analysis
Post author By Tiago Fernandes
No Comments on Geometric Modeling for Isogeometric Analysis
by Denisse Garnica and Tiago Fernandes
In industry, CAD (Computer-Aided Design) representation for 3D models is widely used, and it is estimated that about 80% of overall analysis time is devoted to mesh generation in the automotive, aerospace and shipbuilding industries. However, the geometric approximation can lead to accuracy problems, and the construction of the finite element geometry is costly, time-consuming and creates inaccuracies.
Isogeometric Analysis (IGA) is a technique that integrates FEA (Finite Element Analysis) into CAD (Computer-Aided Design) models, enabling the model to be designed and tested using the same domain representation. IGA is based on NURBS (Non-Uniform Rational B-Splines), a standard technology employed in CAD systems. One of the benefits of using IGA is that there is no approximation error, as the model representation is exact, unlike conventional FEA approaches, which need to discretize the model to perform the simulations.
For our week 4 project we followed the pipeline described in Yuxuan Yu's paper [1], to convert conventional triangle meshes into a spline based geometry, which is ready for IGA, and we ran some simulations to test it.
[1] [2011.14213] HexGen and Hex2Spline: Polycube-based Hexahedral Mesh Generation and Spline Modeling for Isogeometric Analysis Applications in LS-DYNA (arxiv.org)
HexGen and Hex2Spline Pipeline
The general pipeline consists of two parts: HexGen, which transforms a CAD model into an all-hex mesh, and Hex2Spline, which transforms the all-hex mesh into a volumetric spline, ready for IGA. The code can be found here: GitHub – SGI-2022/HexDom-IGApipeline.
Following the pipeline
For testing the pipeline, we reproduced the steps with our own models. LS-PrePost was used to visualize and edit the 3D models. We started with two surface triangle meshes:
Penrose triangle mesh
SGI triangle mesh
The first step of the pipeline is using a surface segmentation algorithm based on CVT (Centroidal Voronoi Tessellation), using the Segmentation.exe script. It was used to segment the triangle mesh into 6 regions, one for each of the three principal normal vectors and their opposite normals (±𝑋, ±𝑌, ±𝑍).
Penrose initial segmentation
SGI initial segmentation
The initial segmentation only generates 6 clusters, and for more complex shapes we need a better segmentation to later build a polycube structure. Therefore, a more detailed segmentation is done, trying to better divide the mesh into regions that can be represented by polycubes. This step is done manually.
Penrose segmentation
SGI segmentation
Now the faces of the polycube structure are clearly visible. We can apply a linearization operation on the segmentation, using the Polycube.exe script.
Penrose linearized segmentation
SGI linearized segmentation
After that, we join the faces to create the cubes and finally obtain our polycube structure. This step is also done manually. The image on the right shows the cubes shrunk in volume for better visualization.
Penrose polycube structure
SGI polycube structure
Then we want to build a parametric mapping between polycube and CAD model, which takes as input the triangle mesh and the polycube structure, to create an all-hex mesh. For that, we use the ParametricMapping.exe script.
Penrose all-hex mesh
SGI all-hex mesh
And in the last step, we use the Hex2Spline.exe script to generate the splines.
Penrose spline
SGI spline
For testing the splines, we performed a modal analysis (a simple eigenvalue problem) on each of them, using the LS-DYNA simulator. An example for a specific displacement can be seen below:
Penrose modal analysis
SGI modal analysis
Improvements for the pipeline
The pipeline still has two manual steps: the segmentation of the mesh, and building the polycube structure given the linear surface segmentation. During the week, we discussed possible approaches to automate the second step. Given the external faces of the polycube structure, we need to recover the volumetric information. We came up with a simple approach using graphs: representing the polycubes as vertices, the internal faces as edges, and using the external faces to build the connectivity between the polycubes.
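A toy sketch of that graph idea, assuming for illustration that the polycubes are axis-aligned unit cubes identified by integer corner coordinates (the real pipeline deals with general polycubes):

```python
def cube_graph(cubes):
    """Build the connectivity graph of unit cubes: vertices are cubes,
    and two cubes are joined by an edge iff they share a face,
    i.e. their integer corners differ by 1 along exactly one axis."""
    index = {c: i for i, c in enumerate(cubes)}
    edges = set()
    for (x, y, z), i in index.items():
        # scanning only the +1 direction records each shared face once
        for dx, dy, dz in ((1, 0, 0), (0, 1, 0), (0, 0, 1)):
            j = index.get((x + dx, y + dy, z + dz))
            if j is not None:
                edges.add((i, j))
    return edges

# cube 0 touches cubes 1 and 2, which do not touch each other
print(cube_graph([(0, 0, 0), (1, 0, 0), (0, 1, 0)]))
```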
However, this approach doesn't solve all cases; for example, when a polycube consists mostly of internal faces, the structure is not uniquely determined. And even if we automated this step, the manual segmentation process would still be a huge bottleneck in the pipeline. One solution would be to generate the polycube structure directly from the mesh, without a segmentation, but it is a hard problem to automatically build a structure that uses few polycubes while still representing the geometry and topology of the mesh well.
Tags isogeometric analysis, splines
Minimal Currents for Line Drawing Vectorization
Post author By Abraham Kassahun Negash
No Comments on Minimal Currents for Line Drawing Vectorization
By Mariem Khlifi, Abraham Kassahun Negash, Hongcheng Song, and Qi Zhang
For two weeks, we were involved in a project that looked at an interesting way of extracting lines from a line drawing raster image, mentored by Mikhail Bessmeltsev and Edward Chien and assisted by Zhehao Li.
Despite its advent in the 90s, automatic line extraction from drawings is still a topic of research today. Due to the unreliability of current tools, artists and engineers often trace lines by hand rather than deal with the results of automatic vectorization tools.
Most past algorithms treat junctions incorrectly, which results in incorrect topology and can lead to problems when applying other tools to the vectorized drawing. Bessmeltsev and Solomon [1] looked at junction detection as well as vectorizing line drawings using frame fields. Frame fields assign two directions to each point: one direction tangent to the curve, and the other roughly aligned with neighboring frame field directions. This second direction is important at junctions, as it aligns with the tangent of another incoming curve.
We tried to build on this work by additionally looking at minimal currents. Currents are functionals over the space of differential forms on a smooth manifold. They can be thought of as integrations of differential forms over submanifolds. Wang and Chern [2] represented surfaces bounded by curves as differential forms, leading to a formulation of the Plateau problem whose optimization is convex. This representation is common in geometric measure theory, which studies the geometric properties of sets in some space using measure theory. We used this procedure to find curves in our 2D space with a given metric that minimized the distance between two points. The formulation for a flat 2D space is similar to that of the paper, but some modifications had to be made in order to introduce a narrow band metric or frame fields (i.e. to obtain a curve that follows the dark lines while trying to minimize distance).
Vectorization using Frame Fields
Frame fields help guide the vectorized line in the direction of the drawn line. First, a gradient of the drawing is defined based on the intensity of the pixel. One of the frame field directions is optimized to be as close as possible to the direction perpendicular to this gradient while being smooth. The second direction is determined by a compromise between a few rules we want to enforce. First, this direction wants to be perpendicular to the tangent direction. Second, it wants to be smooth. For example, at a junction where we have a non-perpendicular intersection, the second direction starts to deviate from the perpendicular before the intersection so that it aligns with the incoming curve at the intersection.
Fig 1: Frame field example [1]
Bessmeltsev and Solomon used these frame fields to trace curves along the tangent direction and bundle the extracted curves into strokes. The curves were obtained using Euler's method by tracing along the tangent direction from each point up to a certain step size.
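The tracing step can be sketched as a forward Euler integrator through the tangent direction field (a simplified illustration, not the authors' implementation):

```python
import numpy as np

def trace_curve(x0, tangent_dir, n_steps=50, h=0.1):
    """Forward-Euler tracing: starting at x0, repeatedly step a
    distance h along the (normalized) tangent direction field."""
    pts = [np.asarray(x0, dtype=float)]
    for _ in range(n_steps):
        d = tangent_dir(pts[-1])
        pts.append(pts[-1] + h * d / np.linalg.norm(d))
    return np.array(pts)

# constant horizontal field: the trace is a straight segment
curve = trace_curve([0.0, 0.0], lambda p: np.array([2.0, 0.0]), n_steps=10)
print(curve[-1])  # close to [1. 0.]
```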
We, on the other hand, looked at defining a metric at each frame that encodes the cost of moving in each direction. We then move through the frame field, trying to minimize the distance between our start and end points. In this manner the whole drawing can be vectorized, given the junction points and the frame field.
What are differential forms?
The main focus of our project was to understand and implement the algorithm proposed by Wang and Chern for the determination of minimal surfaces bounded by a given curve. We reduce the dimension to suit our application and add a metric so that the space is no longer Euclidean.
The core idea of their method is to represent surfaces using differential forms. Differential forms can be thought of as infinitesimal oriented lengths, areas or volumes defined at each point in the space. They can also be thought of as functions that take in vectors, or higher dimensional analog of vectors, and output scalars. In this interpretation, they can be thought of as measuring the vector or vector analog.
A k-form can be defined on an n-dimensional manifold, where k = 0,1,2,…,n. A 0-form is just a function defined over the space that outputs a scalar. 1-forms also known as covectors, which from linear algebra can be recalled to live inside the dual space of a vector space, map vectors to scalars. Generally, a k-form takes in a k-vector and outputs a scalar. These k-vectors are higher dimensional extensions of vectors that can be described by a set of k vectors. They have a magnitude defined by the volume spanned by the vectors. Additionally, they have an orientation, thus in general they can be thought of as signed k-volumes.
k-forms can be visualized as the k-vectors they measure, and the measurement operation can be thought of as taking the projection of the k-vector onto the k-form. But it is important to realize these are two different concepts, even if they behave similarly; their difference is clearly seen in curved spaces. The musical isomorphisms relate k-vectors and k-forms by taking advantage of this similarity: \(\flat\) takes a k-vector to a k-form, while \(\sharp\) takes a k-form to a k-vector.
We can define the wedge product denoted by \(\wedge\) to describe the spanning operation described earlier. In particular, a k-vector spanned by vectors x1, x2, x3,…,xk can be expressed as \(x_1 \wedge x_2 \wedge . . . \wedge x_k\).
An operation called the Hodge star denoted by \(\star\) can also be defined on forms. On an n-dimensional space, the Hodge star maps a k-form to an n-k-form. In this way, it gives a complementary form.
Reformulation of the plateau problem
Plateau's problem can be stated as follows: given a closed curve \(\Gamma\) inside a 3-dimensional compact space \(M\), find the oriented surface \(\Sigma\) in \(M\) of minimal area.
Wang and Chern represent the surface \(\Sigma\) and curve \(\Gamma\) using Dirac-delta forms \(\delta_\Sigma\) and \(\delta_\Gamma\). These are a one- and a two-form, respectively, having infinite magnitudes at the surface and 0 everywhere else. They can be understood as the forms used to measure line and surface integrals of vector fields on the curve and surface. Then they take the mass norm of this Dirac-delta form which is equivalent to the area of the surface. Thus minimizing this mass norm is the same problem as minimizing the area of the surface. The area can be expressed as:
$$Area(\Sigma) = \sup_{\omega \in \Omega^2(M), ||\omega||_{max} \leq 1} \int_M \omega \wedge \delta_\Sigma = ||\delta_\Sigma||_{mass}$$
Furthermore, the boundary constraint can also be written as a differential and hence linear relationship between forms:
$$\partial \Sigma = \Gamma \Leftrightarrow d\delta_\Sigma = \delta_\Gamma$$
Then the problem can be stated as:
$$\min_{\delta_\Sigma \in \Omega^1(M): d\delta_\Sigma = \delta_\Gamma} ||\delta_\Sigma||_{mass}$$
We may allow any 1-form \(\eta\) in the place of \(\delta_\Sigma\):
$$\min_{\eta \in \Omega^1(M):d\eta = \delta_\Gamma} ||\eta||_{mass}$$
Now, it happens that solving a Poisson equation features in our solution algorithm, and a Fast Fourier Transform speeds this step up. To use the FFT, however, we need a periodic boundary condition. This introduces solutions that we might not want, belonging to a different homology class than our desired solution. To exclude these, Wang and Chern impose an additional constraint that ties the cohomology class of the optimal 1-form to that of the initial 1-form.
$$\min_{\eta \in \Omega^1(M):\, d\eta = \delta_\Gamma,\ \int_{M} \vartheta_i \wedge \star \eta = \Lambda_i} ||\eta||_{mass}$$
Finally, the space of 1-forms can be decomposed into coexact, harmonic and exact parts by the Helmholtz-Hodge decomposition, which simplifies all the constraints into a single constraint. After some rearranging, we arrive at the following final form.
Given an initial guess \(\eta_0 \in \Omega^1 (M)\) satisfying \(d\eta_0 = \delta_\Gamma\) and \(\int_{M} \vartheta_i \wedge \star \eta_0 = \Lambda_i\), solve
$$\min_{\varphi \in \Omega^0(M)} ||\eta_0 + d\varphi||_{mass}$$
Even though we are working on a 2-dimensional space, the final form of the Plateau problem is identical in our case as well. Unlike Wang and Chern, who were working in 3D, we didn't need to worry as much about the initial conditions as we can initialize plausible one-forms by defining an arbitrary curve between two points and taking the one form to be perpendicular to the curve at each vertex and oriented consistently.
Algorithm 1 (Main Algorithm)
Our goal is to solve the Plateau problem \(\min_{\eta \in \eta_0 + im(d^0)} ||\eta||_{mass}\). This boils down to adjusting an initial guess: the initial guess already gives a good estimate, and the optimization corrects it. In this sense, our final solution is the initial guess plus a component \(d\varphi\):
$$\min_{\varphi \in \Omega^0(M)} ||\eta_0 + d\varphi||_{mass}$$.
The algorithm then tries to solve \(\min_{\varphi \in \mathbb{R}^V, X \in \mathbb{R}^{V \times 3}} ||X||_{L^1}\), subject to \(D\varphi - X = -X_0\), where \(X\) is the discrete form of the one-form \(\eta\) and \(V\) is the number of vertices in the space after discretizing.
Lagrange multipliers are introduced to enforce the constraints; they relate the gradient of the \(X\) variable and the constraints since, in general, their gradients point in the same direction. Usually the optimization is done in one go. However, since we are solving a minimization problem with two unknown components \(X\) and \(\varphi\), we resort to the Alternating Direction Method of Multipliers (ADMM), which adopts a divide-and-conquer approach that decouples the two variables and solves for each of them in an alternating fashion until \(X\) converges [3]:
$$\varphi = argmin_\varphi \langle\langle\hat{\lambda}, D\varphi\rangle\rangle_{L^2} + \frac{\tau}{2}||D\varphi-\hat{X}+X_0||_{L^2}^2$$
$$X = argmin_X ||X||_{L^1} - \langle\langle\hat{\lambda}, X\rangle\rangle_{L^2} + \frac{\tau}{2}||D\varphi-X+X_0||_{L^2}^2$$
For the initialization, we have created our own 2D path initializer which takes two points and creates an L-shaped path. The corresponding 1-form consists of the normals perpendicular to the path and \(X_0\) is obtained through integration over the finite region surrounding each vertex. More complex paths can be created by adding the 1-forms related to multiple L-shaped segments.
Throughout the paper, the convention used for integration (to evaluate \(X\) from \(\eta\)) and for differentiation (to define the differentiation matrices for the Poisson solver) was the midpoint approach. We noticed, however, that forward or backward differences produced more accurate results in our case.
Algorithm 2 (Poisson Solver)
To solve for \(\varphi\) we reduce the following equation we have from the ADMM optimization.
After reducing the L2 norms and inner products and setting the derivative of the resulting expression to zero, we have:
$$h^3 D^T \hat{\lambda} + \tau h^3 D^T (D\varphi – \hat{X} + X_0) = 0$$
$$\varphi = \Delta^{-1}(D^T(\hat{X}-X_0)-\frac{1}{\tau}D^T\hat{\lambda})$$
where \(\Delta=D^TD\), and \(D\) is the finite difference differential operator using central differences. Because the Fast Fourier Transform (FFT) makes derivative computations in our PDE cheap, via a clean closed-form symbol, we use the FFT to solve the Poisson equation. Our FFT Poisson solver projects the right-hand-side signal from the spatial domain onto a finite frequency basis, where the solution is simply a scalar multiplication of each frequency coefficient. The final solution in the spatial domain is recovered by the inverse FFT.
To test the validity of the FFT Poisson solver, we use \(z = exp(x+\frac{y}{2})\) as the original function and compute its Laplacian as the right-hand-side signal. The final solution is identical up to a constant value (-2.1776), which is easy to check at the point (0, 0).
Fig 2: Validation of the Poisson solver
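A minimal NumPy sketch of the same kind of FFT Poisson solver. Note we validate it with a smooth periodic test function instead of the exponential, since the FFT assumes periodic boundary conditions; the free additive constant is fixed by zeroing the mean mode:

```python
import numpy as np

n, L = 64, 2 * np.pi
x = np.linspace(0, L, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")

phi_true = np.sin(X) * np.cos(2 * Y)   # periodic test solution
f = -5.0 * phi_true                    # its analytic Laplacian

k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers
KX, KY = np.meshgrid(k, k, indexing="ij")
denom = -(KX**2 + KY**2)
denom[0, 0] = 1.0                      # avoid 0/0 for the mean mode

phi_hat = np.fft.fft2(f) / denom       # divide by the Laplacian symbol
phi_hat[0, 0] = 0.0                    # fix the free constant: zero mean
phi = np.real(np.fft.ifft2(phi_hat))

print(np.max(np.abs(phi - phi_true)))  # ~ machine precision here
```

Because the test solution is band-limited, the spectral solve is exact up to rounding.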
Algorithm 4 (Shrinkage)
Now for the minimization of X, we remember the formula we obtained earlier.
$$X = argmin_{X_v} ||X||_{L^1} - \langle\langle\hat{\lambda}, X\rangle\rangle_{L^2} + \frac{\tau}{2}||D\varphi-X+X_0||_{L^2}^2$$
After reducing the norms, we obtain:
$$X = argmin_{X_v} \sum_{v \in V} h^3 (|X_v| -\hat{\lambda}_v \cdot X_v+\frac{\tau}{2}|(D\varphi)_v - X_v + (X_0)_v|^2)$$
This is equivalent to minimizing the terms corresponding to each individual vertex \(v\).
$$X = argmin_{X_v} |X_v| -\hat{\lambda}_v \cdot X_v+\frac{\tau}{2}|(D\varphi)_v - X_v + (X_0)_v|^2$$
This statement is equivalent to the pointwise shrinkage problem which has the following solution.
$$X_v = SHRINKAGE_{\frac{1}{\tau}}(z) = max(1-\frac{1}{\tau|z|},0)z$$
Where \(z = \tau \hat{\lambda}_v + (D\varphi)_v + (X_0)_v\)
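The shrinkage map itself is a pointwise soft-thresholding of the vector magnitude; a NumPy sketch (with the threshold passed explicitly as eps):

```python
import numpy as np

def shrinkage(z, eps):
    """max(1 - eps/|z|, 0) * z, applied to each row vector of z."""
    norm = np.linalg.norm(z, axis=-1, keepdims=True)
    return np.maximum(1.0 - eps / np.maximum(norm, 1e-30), 0.0) * z

# |z| = 5 > eps: scaled by (1 - 1/5); |z| = 0.5 < eps: killed to zero
print(shrinkage(np.array([[3.0, 4.0]]), 1.0))  # [[2.4 3.2]]
print(shrinkage(np.array([[0.3, 0.4]]), 1.0))  # [[0. 0.]]
```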
This minimizes the Euclidean distance, which is only helpful for drawing straight lines. To adjust the formulation to our case, where the defined metric differs from the Euclidean one, we first update the minimization problem to include the new metric and then solve it to obtain the new formulation of the shrinkage map.
After we tinkered with the algorithm, we felt that it was time that we sent it to kindergarten to learn a few tricks.
For this example, we define two points as our boundary and take a non-optimal path (sequence of segments) between them as our initial guess. Once this path is determined, we find its equivalent 1-form which extracts the normals at every vertex of our grid with respect to the path. The metric used in this example is the Euclidean metric, i.e., it's constant everywhere, so the optimal path will be a line connecting the boundary points.
Fig 3: Initial one-form(top left) pseudo-inverse of the minimizing one-form (top-right) level-set obtained from the pseudo-inverse(bottom-left) minimizing one-form (bottom-right)
Note that to obtain the curve, we take the pseudo-inverse of the final one-form \(X\). This gives a 0-form which can be visualized as a surface. This surface will have a big jump over the curve we are looking for.
Tracing a circle
Instead of relying on the Euclidean metric, we can define our own metric to obtain the connecting line within a certain narrow band, rather than the straight line connecting two points. A narrow band lets us prioritize certain pixels over others: it imposes a penalty when the algorithm passes through a section outside the narrow band by making those pixels more expensive than pixels inside it.
We need to update our shrinkage problem. If \(c\) is the cost associated with each pixel, then the equation for \(X\) becomes:
$$X = \min_{X_v} c|X_v| -\langle \hat{\lambda}_v,X_v\rangle + \frac{\tau}{2}|| (D\varphi)_v-X_v+(X_0)_v||^2$$
$$X = \min_{X_v} |X_v| -\langle \hat{\lambda}_v / c,X_v\rangle + \frac{\tau}{2c}|| (D\varphi)_v-X_v+(X_0)_v||^2$$
Replacing \(\hat{\lambda}\) by \(\hat{\lambda}/c\) and \(\tau\) by \(\tau/c\), our shrinkage problem boils down to:
$$X_v=SHRINK_{\frac{c}{\tau}}(\tau \hat{\lambda_v}/c^2+(D\varphi)_v+(X_0)_v)$$.
Fig 4: Superimposition of the initial one-form(green normals), final one-form(red normals) and narrow band(yellow)
Same boundary, multiple paths
The algorithm provides a unique solution to the problem, meaning that different initial solutions for the same boundary yield the same final solution. We verified this by defining two curves having the same boundary (in 2D, our boundary is two points).
Fig 5: The initializing one-form as a Superposition of two one-forms corresponding to two curves between the boundaries.
Fig 6: minimizing one-form(top left) pseudo-inverse of the minimizing one-form (top-right) level-set obtained from the pseudo-inverse(bottom-left)
We defined two intersecting paths between points on the circle. The obtained paths connected points that were not connected in the initialization. This shows the importance of running the optimization on point pairs separately to obtain results that match the drawing's complexity.
Fig 7: minimizing one-form(top left) pseudo-inverse of the minimizing one-form (top-right) level-set obtained from the pseudo-inverse(bottom-left) initial one-form superimposed on the narrow band (bottom-right)
Cross-Shaped Narrow Band
We define crossing paths similar to the point above but we change the shape of the narrow band to a cross. The result yields connections between the closest points in the narrow band, hence the hyperbolic appearance.
Fig 8: Initial one-form(top left) initial one-form superimposed on the narrow band (top-right) level-set obtained from the pseudo-inverse(bottom-left) minimizing one-form (bottom-right)
We have successfully used an algorithm proposed by Wang and Chern [2] for curves instead of surfaces to find minimal connections between 2D boundaries while also incorporating metric information from narrow bands to obtain various shapes of these connections. We have enjoyed working on this project which combined many topics that were new to us and we hope that you have similarly enjoyed going through the blog post!
[1] M. Bessmeltsev and J. Solomon, "Vectorization of Line Drawings via PolyVector Fields," ACM Trans. Graph., vol. 38, no. 1, pp. 1–12, Sep. 2018.
[2] S. Wang and A. Chern, "Computing Minimal Surfaces with Differential Forms," ACM Trans. Graph., vol. 40, no. 4, Aug. 2021.
[3] A. Aich, "A Brief Review of Alternating Direction Method Of Multipliers (ADMM)."
SIREN Architecture: Colab Experiments
No Comments on SIREN Architecture: Colab Experiments
In this blog, we explain how to train the SIREN architecture on a 3D point cloud (the Dragon object) from The Stanford 3D Scanning Repository. This work was done during the project "Implicit Neural Representation (INR) based on the Geometric Information of Shapes" at SGI 2022, with Alisia Lupidi and Krishnendu Kar, under the guidance of Dr. Dena Bazazian and Shaimaa Monem Abdelhafez.
The SIREN architecture is a neural network with a periodic activation function that has been proposed to reconstruct 3D objects, represented as a signed distance function (SDF). We train this network in a Colab notebook to reconstruct a Dragon object from The Stanford 3D Scanning Repository. Below are the instructions to reproduce our experiments.
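For intuition, a single SIREN layer is just a linear map followed by \(\sin(\omega_0 \cdot)\). A NumPy sketch with the initialization scheme from the SIREN paper (\(\omega_0 = 30\) is the paper's default; names here are illustrative):

```python
import numpy as np

def init_siren_layer(in_f, out_f, omega0=30.0, is_first=False, rng=None):
    """SIREN init: first layer U(-1/in_f, 1/in_f), later layers
    U(-sqrt(6/in_f)/omega0, sqrt(6/in_f)/omega0)."""
    rng = rng or np.random.default_rng(0)
    bound = 1.0 / in_f if is_first else np.sqrt(6.0 / in_f) / omega0
    W = rng.uniform(-bound, bound, size=(out_f, in_f))
    b = rng.uniform(-bound, bound, size=out_f)
    return W, b

def sine_layer(x, W, b, omega0=30.0):
    """Forward pass of one SIREN layer: sin(omega0 * (x W^T + b))."""
    return np.sin(omega0 * (x @ W.T + b))

W, b = init_siren_layer(3, 5, is_first=True)
y = sine_layer(np.random.default_rng(1).standard_normal((4, 3)), W, b)
print(y.shape)  # (4, 5)
```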
Note: you have to use a GPU for this experiment. If you use Google Colab, you just set your runtime to GPU.
Instructions to run our experiments
First, you have to clone the SIREN repository in your notebook using the code below,
git clone https://github.com/vsitzmann/siren
After cloning the repository, install the required libraries by:
pip install sk-video
pip install cmapy
pip install ConfigArgParse
pip install plyfile
Then, you can download the Dragon object from The Stanford 3D Scanning Repository (you can also try another 3D object). The 3D object has to be converted to xyz format, for which you can use MeshLab.
The next step is to train the neural network (SIREN) to reconstruct the 3D object. You can achieve this task by running the following script:
python experiments_scripts/train_sdf.py --model_type=sine --point_cloud_path=<path_to_the_Dragon_in_xyz_format> --batch_size=25000 --experiment_name=experiment_1
Finally, we can test the trained model and use it to reconstruct our Dragon by running,
python experiments_scripts/test_sdf.py --checkpoint_path=<path_to_the_checkpoint_of_the_trained_model> --experiment_name=experiment_1_rec --resolution=512
The reconstructed point cloud file will be saved in the folder "experiment_1_rec". Below is a visualization of the reconstructed Dragon (in gray) with respect to the original one (in brown) using MeshLab, where you can notice that the reconstructed version has a larger scale.
The reconstructed Dragon (in gray) wrt to the original one (in brown) using MeshLab.
Intuition for Lebesgue integration
I have started doing Lebesgue integration and I just want to clarify one thing to start with.
Very loosely speaking, with Riemann integration we partition the domain into $n$ intervals, and then we calculate the area of the $n$ rectangles, each with a base of width $(b-a)/n$ and a height given by where the rectangle 'hits' the function. Summing these rectangles gives us an approximation of the area under the function. Then as we let $n \to \infty$ this approximation increases in accuracy and equals the area in the limit.
Now with Lebesgue integration do we follow the same process of partitioning (the range this time) into $n$ intervals and then letting $n \to \infty$, giving us smaller and smaller intervals, which implies the approximation to the function keeps improving? Leaving aside the concept of having sets that are measurable on the domain... I am simply wondering: is the process of considering intervals of decreasing size the same as with Riemann integration?
calculus real-analysis integration lebesgue-integral
csss
The essence can be better understood in a two-dimensional setting. The Riemann integral of a function $(x,y)\mapsto f(x,y)$ over the square $Q:=[-1,1]^2$ involves cutting up the square into small rectangles $Q_{ij}:=[x_{i-1},x_i]\times[y_{j-1},y_j]$ and then arguing about "Riemann sums" of the form $$\sum_{i, j} f(\xi_i,\eta_j)\mu(Q_{ij})\ ,\tag{1}$$ where $\mu(Q_{ij})=(x_i-x_{i-1})(y_j-y_{j-1})$ is the elementary euclidean area of $Q_{ij}$. This is simple and intuitive, and you get the linearity of the integral for free. But it is somewhat rigid.
Contrasting this, Lebesgue integration involves cutting up the square into subsets defined by the level curves of $f$, like so:
The integral then is the limit $N\to \infty$ of sums of the form $$\sum_{k=-\infty}^\infty {k\over N}\mu(S_{N,k})\ ,\tag{2}$$ where $S_{N,k}$ is the zone in the figure where $${k\over N}\leq f(x,y)<{k+1\over N}\ .$$ It should be intuitively clear that the sums $(2)$ are in the same way an approximation to the volume of the cake defined by $f$ over $Q$ as are the sums $(1)$. But the approach $(2)$ is much more flexible and allows for more powerful "limit theorems". On the other hand such "simple" things as linearity now require proof.
Christian Blatter
Suppose that $f$ is a positive bounded function defined on an interval $[a,b]$ with $0 < f \le M$.
If you partition the domain using $a = t_0 < t_1 < t_2 < \cdots < t_n = b$ you select points $x_k \in [t_{k-1},t_k]$ and form the Riemann sum $\displaystyle \sum_{k=1}^n f(x_k) \ell([t_{k-1},t_k])$ where $\ell$ is the length of the interval $[t_{k-1},t_k]$.
If instead you partition the range using $0 = M_0 < M_1 < M_2 < \cdots < M_n = M$, you can form an analogous "Lebesgue sum" by defining $E_k = \{x \in [a,b] : M_{k-1} < f(x) \le M_k\}$. Select points $x_k \in E_k$ and use the sum $\displaystyle \sum_{k=1}^n f(x_k) \ell(E_k)$.
There are two things going on that are much different than in the Riemann sum. First, the sets $E_k$ aren't necessarily intervals so you have to be careful in what is meant by $\ell(E_k)$. This is where Lebesgue measure is needed. Second, even if the $E_k$ are intervals, their lengths don't necessarily shrink to zero as the mesh of the partition of $[0,M]$ decreases to zero. So, the process of considering intervals of decreasing size is not the same as with Riemann integration.
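To see the two constructions side by side numerically, here is a small Python sketch for $f(x) = x^2$ on $[0,1]$ (so the integral is $1/3$), where $\ell(E_k)$ is estimated by the fraction of a fine grid that lands in $E_k$:

```python
import numpy as np

f = lambda x: x**2
a, b, M = 0.0, 1.0, 1.0
xs = np.linspace(a, b, 200001)          # fine grid to estimate measures
dx = (b - a) / (len(xs) - 1)
vals = f(xs)

def riemann_sum(n):
    """Partition the DOMAIN; left-endpoint rectangles."""
    t = np.linspace(a, b, n + 1)
    return sum(f(t[k]) * (t[k + 1] - t[k]) for k in range(n))

def lebesgue_sum(n):
    """Partition the RANGE [0, M]; sum lower height * measure of E_k."""
    levels = np.linspace(0.0, M, n + 1)
    total = 0.0
    for k in range(n):
        Ek = (vals > levels[k]) & (vals <= levels[k + 1])
        total += levels[k] * Ek.sum() * dx   # height times ell(E_k)
    return total

print(riemann_sum(1000), lebesgue_sum(1000))  # both approach 1/3
```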
Umberto P.
November 2017, 16(6): 2105-2123. doi: 10.3934/cpaa.2017104
Multiple solutions for a fractional nonlinear Schrödinger equation with local potential
Wulong Liu 1,, and Guowei Dai 2,
School of Science, Jiangxi University of Science and Technology, Ganzhou, Jiangxi 341000, China
School of Mathematical Sciences, Dalian University of Technology, Dalian, 116024, China
Received December 2016 Revised April 2017 Published July 2017
Fund Project: The first author is partially supported by the Youth Science Foundation of Jiangxi Provincial Department of Education (GJJ14460), the NSFC Grant(61364015), the Foundation of the Jiangxi University of Science and Technology (NSFJ2015-G25). The second author is supported by NNSF of China (No. 11261052,11401477) and the Fundamental Research Funds for the Central Universities (No. DUT15RC(3)018, DUT17LK05)
Using penalization techniques and the Ljusternik-Schnirelmann theory, we establish the multiplicity and concentration of solutions for the following fractional Schrödinger equation
$\left\{\begin{aligned} &\varepsilon^{2\alpha}(-\Delta)^{\alpha}u+V(x)u=f(u), &&x\in\mathbb{R}^{N}, \\ &u\in H^{\alpha}(\mathbb{R}^{N}),\quad u>0, &&x\in\mathbb{R}^{N}, \end{aligned}\right.$
where $0<\alpha<1$, $N>2\alpha$, $\varepsilon>0$ is a small parameter, $V$ satisfies the local condition, and $f$ is a superlinear and subcritical nonlinearity. We show that this equation has at least $\text{cat}_{M_{\delta}}(M)$ single spike solutions.
Keywords: Penalization techniques, Ljusternik-Schnirelmann theory, fractional Schrödinger equation, multiple solutions, single spike solutions.
Mathematics Subject Classification: Primary: 35A15, 35J60; Secondary: 35S15.
Citation: Wulong Liu, Guowei Dai. Multiple solutions for a fractional nonlinear Schrödinger equation with local potential. Communications on Pure & Applied Analysis, 2017, 16 (6) : 2105-2123. doi: 10.3934/cpaa.2017104
A&A, Volume 527, March 2011, article A138
Astrophysical processes
https://doi.org/10.1051/0004-6361/201015568
The baroclinic instability in the context of layered accretion
Self-sustained vortices and their magnetic stability in local compressible unstratified models of protoplanetary disks
W. Lyra1,2 and H. Klahr1
1 Max-Planck-Institut für Astronomie, Königstuhl 17, 69117 Heidelberg, Germany
2 American Museum of Natural History, Department of Astrophysics, Central Park West at 79th Street, New York, NY 10024-5192, USA
e-mail: [email protected]
Context. Turbulence and angular momentum transport in accretion disks remain a topic of debate. With the realization that dead zones are robust features of protoplanetary disks, the search for hydrodynamical sources of turbulence continues. A possible source is the baroclinic instability (BI), which has been shown to exist in unmagnetized non-barotropic disks.
Aims. We aim to verify the existence of the baroclinic instability in 3D magnetized disks, as well as its interplay with other instabilities, namely the magneto-rotational instability (MRI) and the magneto-elliptical instability.
Methods. We performed local simulations of non-isothermal accretion disks with the Pencil Code. The entropy gradient that generates the baroclinic instability is linearized and included in the momentum and energy equations in the shearing box approximation. The model is compressible, so excitation of spiral density waves is allowed and angular momentum transport can be measured.
Results. We find that the vortices generated and sustained by the baroclinic instability in the purely hydrodynamical regime do not survive when magnetic fields are included. The MRI by far supersedes the BI in growth rate and strength at saturation. The resulting turbulence is virtually identical to an MRI-only scenario. We measured the intrinsic vorticity profile of the vortex, finding little radial variation in the vortex core. Nevertheless, the core is disrupted by an MHD instability, which we identify with the magneto-elliptic instability. This instability has nearly the same range of unstable wavelengths as the MRI, but has higher growth rates. In fact, we identify the MRI as a limiting case of the magneto-elliptic instability, when the vortex aspect ratio tends to infinity (pure shear flow). We isolated its effect on the vortex, finding that a strong but unstable vertical magnetic field leads to channel flows inside the vortex, which stretch it apart. When the field is decreased or resistivity is used, we find that the vortex survives until the MRI develops in the box. The vortex is then destroyed by the strain of the surrounding turbulence. Constant azimuthal fields and zero net flux fields also lead to vortex destruction. Resistivity quenches both instabilities when the magnetic Reynolds number of the longest vertical wavelength of the box is near unity.
Conclusions. We conclude that vortex excitation and self-sustenance by the baroclinic instability in protoplanetary disks is viable only in low ionization, i.e., the dead zone. Our results are thus in accordance with the layered accretion paradigm. A baroclinicly unstable dead zone should be characterized by the presence of large-scale vortices whose cores are elliptically unstable, yet sustained by the baroclinic feedback. Since magnetic fields destroy the vortices and the MRI outweighs the BI, the active layers are unmodified.
Key words: accretion, accretion disks / hydrodynamics / instabilities / magnetohydrodynamics (MHD) / turbulence / methods: numerical
© ESO, 2011
Turbulence is the preferred mechanism for enabling accretion in circumstellar disks, and the magneto-rotational instability (MRI, Balbus & Hawley 1991) is the preferred route to turbulence. However, the MRI requires sufficient ionization since the magnetic field and the gas must be coupled, so it should not be expected to occur in regions of low ionization such as the "dead zone" (Gammie 1996; Turner & Drake 2009). Therefore, the search for hydrodynamical sources of turbulence continues, if only to provide some residual accretion in the dead zone. A distinct possibility is the baroclinic instability (BI; Klahr & Bodenheimer 2003; Klahr 2004), the interest in which has been recently rekindled (Petersen et al. 2007a,b; Lesur & Papaloizou 2010).
A baroclinic flow is one where the pressure depends on both density and temperature, as opposed to a barotropic flow where the pressure only depends on density. In such a flow, the non-axisymmetric misalignment between surfaces of constant density ρ (isopycnals) and surfaces of constant pressure p (isobars) generates vorticity. Mathematically, this translates into a non-zero baroclinic vector, ∇ × (−ρ-1∇p) = ρ-2∇p × ∇ρ. Baroclinicity has long been known in atmospheric dynamics to be responsible for turbulent patterns on planets and for weather patterns of Rossby waves (planetary waves), cyclones, and anticyclones on Earth.
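A small numeric check of this statement, assuming nothing beyond the formula above: for a barotropic field $p=p(\rho)$ the baroclinic vector vanishes (up to truncation error), while adding a temperature dependence makes it nonzero. The specific fields below are arbitrary illustrations, not the paper's setup:

```python
import numpy as np

# 2D grid; the z-component of the baroclinic vector rho^-2 (grad p x grad rho)
# is (1/rho^2) * (dp/dx * drho/dy - dp/dy * drho/dx).
n = 256
x = np.linspace(-1, 1, n)
X, Y = np.meshgrid(x, x, indexing="ij")
dx = x[1] - x[0]

rho = 1.0 + 0.3 * np.exp(-(X**2 + Y**2))     # a density bump

def baroclinic_z(p, rho, dx):
    dpdx, dpdy = np.gradient(p, dx, dx)
    drdx, drdy = np.gradient(rho, dx, dx)
    return (dpdx * drdy - dpdy * drdx) / rho**2

p_barotropic = rho**1.4                      # p = p(rho): isobars follow isopycnals
T = 1.0 + 0.5 * X                            # an imposed temperature gradient
p_baroclinic = rho * T                       # p depends on rho *and* T

print(np.abs(baroclinic_z(p_barotropic, rho, dx)).max())  # ~ 0 (truncation only)
print(np.abs(baroclinic_z(p_baroclinic, rho, dx)).max())  # clearly nonzero
```

In the barotropic case ∇p is everywhere parallel to ∇ρ, so the cross product cancels analytically; the residual is finite-difference error only.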
The difference between the baroclinic instability of weather patterns in planetary atmospheres and the baroclinic instability in accretion disks is that the former is linear, whereas the latter is nonlinear (Klahr 2004; Lesur & Papaloizou 2010). This is because in accretion disks the disturbances have to overcome the strong Keplerian shear, which keeps perturbations heavily dominated by restoring forces at all Reynolds numbers.
Table 1. Simulation suite parameters for nonmagnetic runs and results.
The nature of the instability was clarified in the work of Petersen et al. (2007a,b), who highlighted the importance of finite thermal inertia. When the thermal time is comparable to the eddy turnover time, the vortex is able to establish an entropy gradient around itself that compensates the large-scale entropy gradient that created it. This entropy gradient back reacts on the eddy, generating more vorticity via buoyancy. This in turn reinforces the gradient. A positive feedback has been established, and the eddy grows. This, in a nutshell, is the baroclinic instability: the sole result of an eddy trying to counter the background entropy gradient that established it, and reinforcing itself by doing so.
The 3D properties of the instability have been studied by Lesur & Papaloizou (2010). They find that the vortices produced are not destroyed when baroclinicity is present, although they are unstable to the elliptical instability (Kerswell 2002; Lesur & Papaloizou 2009). The saturated state of the instability is dominated by the presence of large-scale 3D, self-sustained vortices with weakly turbulent cores. The study of Petersen et al. (2007a,b) and most of that of Lesur & Papaloizou (2010) were done with spectral codes, which filter sound waves. Vortices, however, have the interesting property of radiating inertial-acoustic waves, which are known to transport angular momentum (Heinemann & Papaloizou 2009). Lesur & Papaloizou (2010) performed a compressible, yet 2D, simulation, with a resulting Shakura-Sunyaev-like α value of 10⁻³.
These results are intriguing, and a major question to ask is what their significance is in the context of the layered accretion paradigm. Vortices have been described in the literature as devoid of radial shear (Klahr & Bodenheimer 2006), so in principle they could form and survive in the midst of MRI turbulence, as the simulations of Fromang & Nelson (2005) suggest. Moreover, if the baroclinic instability is able to produce and sustain vortices when magnetization is present, synergy with the MRI is an interesting possibility, potentially leading to higher accretion rates than hitherto achieved in previous works. On the other hand, elliptical streamlines can be heavily destabilized by magnetic fields (Lebovitz & Zweibel 2004; Mizerski & Bajer 2009). This magneto-elliptic instability may either be stabilized by baroclinicity, as the elliptic instability was shown to be (Lesur & Papaloizou 2010), or completely break the vortices apart thus rendering the baroclinic instability meaningless in the presence of magnetization. We address these open questions in this work.
The paper is structured as follows. In Sect. 2 we introduce the model equations of the compressible shearing box, modified to include the contribution from the large-scale background entropy gradient. In view of the controversy aroused by the baroclinic instability in the literature, we found it prudent to establish the reliability of the numerics, as well as to provide an independent confirmation of the 2D results. This is done in Sect. 3. The 3D results are presented in Sect. 4. Tables 1 and 2 contain summaries of the simulations performed for this study, referring to the sections and figures in which they are described. Conclusions are given in Sect. 5.
Table 2. Simulation suite parameters for magnetic runs.
We model a local patch of the disk following the shearing box approximation. The reader is referred to Regev & Umurhan (2008) for an extensive discussion of the limitations of the approximation. To include the baroclinic term, we consider a large-scale radial pressure gradient following a power law of index ξ, $\bar{p} = p_0 (r/R_0)^{-\xi}$, where r is the cylindrical radius and R0 is a reference radius. The overbar indicates that this quantity is time-independent. The total pressure is $\bar{p} + p$, where p is the local fluctuation. The linearization of this gradient is done in the same way as the large-scale Keplerian shear is linearized in the shearing box. This leads to extra terms in the equations involving the radial pressure gradient. We quote the modified shearing box equations below. A derivation of the extra terms is presented in Appendix A. In the equations above, u is the velocity, A the magnetic vector potential, B = ∇ × A the magnetic field, J = ∇ × B/μ0 the current (where μ0 is the magnetic permeability), η the resistivity, T the temperature, s the entropy, and K the radiative conductivity. The operator (5) represents the Keplerian derivative of a fluid parcel. It is the only place where the Keplerian flow appears explicitly. The advection is made Galilean-invariant by means of the SAFI algorithm (Johansen et al. 2009), which speeds up performance. The simulations were done with the Pencil Code.
We work with entropy as the main thermodynamical quantity. This is a natural choice when dealing with baroclinicity. Considering the polytropic equation of state, $p = k\rho^{1+1/n}$ (6), and the definition of entropy, $s = c_V \ln\left(p\,\rho^{-\gamma}\right) + \mathrm{const.}$ (7), we immediately recognize s = cVln(k/k0) in the case n = 1/(γ − 1); i.e., up to a constant, entropy is the proportionality factor in the polytropic equation of state. That means that any spatial gradient of entropy translates into a departure from barotropic conditions.
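A quick numeric check of this identity (the polytropic and entropy expressions written out below are reconstructions consistent with the statement, not quoted from the paper):

```python
import numpy as np

# Check: for p = k * rho**gamma (the polytrope with n = 1/(gamma-1)),
# s = c_V * ln(p * rho**(-gamma)) depends only on k, so a spatial entropy
# gradient is exactly a spatially varying polytropic "constant" k.
gamma, cV = 1.4, 1.0
rho = np.linspace(0.5, 2.0, 50)           # arbitrary density profile

k = 0.7                                   # uniform k
p = k * rho**gamma
s = cV * np.log(p * rho**(-gamma))
print(np.allclose(s, cV * np.log(k)))     # True: s uniform despite varying rho

k_var = np.linspace(0.7, 0.8, 50)         # spatially varying k
p_var = k_var * rho**gamma
s_var = cV * np.log(p_var * rho**(-gamma))
print(s_var.max() - s_var.min())          # ln(0.8/0.7): a genuine entropy gradient
```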
The third term on the right-hand side of the entropy equation is an artificial thermal relaxation term, which drives the temperature back to the initial temperature T0 on a pre-specified thermal timescale τc. The temperature is $T = c_s^2/\left[c_p(\gamma-1)\right]$, where cs is the sound speed, γ = cp/cV the adiabatic index, and cV and cp are the heat capacities at constant volume and constant pressure, respectively.
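The relaxation term acts as Newtonian cooling toward T0; a minimal sketch of its effect (a simple explicit-Euler toy model, not the paper's numerical scheme):

```python
import numpy as np

# Newtonian cooling toward T0 on timescale tau_c: dT/dt = -(T - T0)/tau_c.
T0, tau_c = 1.0, 2.0
dt = 1e-3
nsteps = int(10 * tau_c / dt)     # integrate for ten cooling times

T = 1.5                           # start with a perturbed temperature
for _ in range(nsteps):
    T += -dt * (T - T0) / tau_c

# The perturbation decays exponentially: after 10 tau_c it is down by ~e^-10.
print(abs(T - T0))
```

Very short τc pins T to T0 (isothermal behavior); very long τc leaves perturbations untouched (adiabatic behavior), which is exactly the trade-off explored in Sect. 3.3.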
Fig. 1. Snapshots of the fiducial 2D run with ξ = 2, τ = 2π, H = 0.1, and resolution 256². A vortex is formed and establishes a local entropy gradient that counteracts the global entropy gradient that caused it in the first place. Moderate cooling times keep the surfaces of constant density and constant pressure misaligned, leading to more vortex growth. In the positive feedback that ensues, giant anti-cyclonic vortices grow to the sonic scale. The initial condition was free of enstrophy; this vorticity growth was due purely to baroclinic effects.
After defining entropy, we also define the Brunt-Väisälä frequency N, the frequency associated with buoyant structures (8), where we have assumed axisymmetry (∂φ = 0) and no vertical stratification (∂z = 0) between the steps. In our setup, there is no large-scale density gradient, so the first term inside the parentheses cancels. As dp/dr = − pξ/r, we have, at r = R0, Eq. (9); i.e., the Brunt-Väisälä frequency is always imaginary. However, the flow is convectively stable, since the epicyclic frequency squared (10) is far higher than − N², so that the Solberg-Høiland criterion (11) is always satisfied. In Eq. (10), j = Ωr² is the specific angular momentum.
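Plugging in the quoted parameters gives a concrete check. The closed forms used below for N² and κ² are the standard expressions for this setup (constant density, power-law pressure, Keplerian rotation) and are a reconstruction, not quoted from the paper:

```python
# Parameters quoted in the text: xi = 2, c_s = 0.1, Omega_0 = R_0 = rho_0 = 1,
# gamma = 1.4, and no large-scale density gradient.
xi, cs, Omega, R0, gamma = 2.0, 0.1, 1.0, 1.0, 1.4

# Radial buoyancy frequency with rho = const and p ~ r^{-xi}:
#   N^2 = -(1/(gamma*rho)) (dp/dr) d/dr ln(p rho^{-gamma})
#       = -xi^2 cs^2 / (gamma^2 r^2),   using cs^2 = gamma p / rho.
N2 = -xi**2 * cs**2 / (gamma**2 * R0**2)

# Keplerian epicyclic frequency: kappa = Omega.
kappa2 = Omega**2

print(N2)                 # negative: N is imaginary, as stated in the text
print(kappa2 + N2 > 0)    # Solberg-Hoiland criterion kappa^2 + N^2 > 0 holds
```

With these numbers N² ≈ −0.02 while κ² = 1, so the stabilizing rotation outweighs the destabilizing entropy gradient by nearly two orders of magnitude.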
We add explicit sixth-order hyperdiffusion fD(ρ), hyperviscosity fν(u,ρ), and hyperresistivity fη(A) to the mass, momentum, and induction equations as specified in Lyra et al. (2008). Hyper heat conductivity fK(s) is added to the entropy equation as in Lyra et al. (2009) and Oishi & Mac Low (2009). All simulations use cp = R0 = Ω0 = ρ0 = μ0 = 1, γ = 1.4, and cs = 0.1.
Although the most interesting results for the effects of the baroclinic generation of vorticity should be given by 3D models, the existence of the baroclinic instability has been strongly contested even in two dimensions. We therefore consider it important to present 2D results confirming its excitation. In Fig. 1 we present a fiducial 2D run where the evolution of the baroclinic instability is followed. It corresponds to run A in Table 1. The slope of the entropy gradient is ξ = 2, which corresponds to a Richardson number. As initial condition, we seed the box with noise at small wavenumbers only, following Eq. (12). The phase 0 < φ < 1 determines the randomness. The subscripts underscore that the phase is the same for all grid points, only changing with wavenumber. The constant C sets the strength of the perturbation. As stressed by Lesur & Papaloizou (2010), the baroclinic instability is subcritical, and therefore a finite initial amplitude is needed to trigger growth. We set the constant C to yield Σrms = 0.05. The entropy is then initialized such that p = p0 ≡ const. in the sheet.
The rationale behind this unorthodox initial condition is that this noise is independent of resolution. The usual Gaussian noise distributes power through all wavelengths, so the wavelengths from k < 10 are assigned increasingly less power as the resolution increases. We stress that it is not vital for the instability to be seeded with resolution-independent noise, nor are we missing important physics by not exciting the small scales.
We do not seed noise in the velocity field. The initial condition is strictly nonvortical. Since in 2D the stretching term is absent, any increase in vorticity can only be a baroclinic effect.
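The idea of resolution-independent noise can be sketched as follows: draw one random phase per wavenumber rather than per grid point, so that refining the grid reproduces the same physical field. This is a hypothetical stand-in for Eq. (12); the paper's exact prescription may differ:

```python
import numpy as np

def lowk_noise(n, kmax=5, amp=0.05, seed=42):
    # One random phase per wavenumber pair, independent of the grid, so the
    # same field is recovered at any resolution (cf. Eq. (12) in the text).
    rng = np.random.default_rng(seed)
    x = np.linspace(0, 2 * np.pi, n, endpoint=False)
    X, Y = np.meshgrid(x, x, indexing="ij")
    field = np.zeros((n, n))
    for kx in range(1, kmax):
        for ky in range(1, kmax):
            phi = 2 * np.pi * rng.random()        # phase depends on (kx, ky) only
            field += np.sin(kx * X + ky * Y + phi)
    return amp * field / np.sqrt(np.mean(field**2))   # normalize rms to amp

f_coarse = lowk_noise(64)
f_fine = lowk_noise(128)
# The coarse grid is a strict subsample of the fine one: the fields agree.
print(np.allclose(f_fine[::2, ::2], f_coarse))    # True
```

Gaussian white noise, by contrast, would put power into ever smaller scales as the grid is refined, making the initial amplitude at k < 10 resolution-dependent.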
3.1. Baroclinic production of vorticity
Fig. 2. Left: baroclinic enstrophy growth with thermal relaxation, where a gas parcel returns to the initial temperature on a timescale τ. A stable entropy gradient can only be maintained between the extremes of too fast a relaxation (isothermal behavior) and too slow a relaxation (adiabatic behavior). Optimal growth occurs when τ is comparable to the dynamical time. Right: baroclinic enstrophy growth with thermal diffusion, where heat diffuses over a scale height on a timescale τ. Optimal growth occurs on longer timescales when compared to the thermal relaxation case.
The baroclinic term, in 2D, is given by Eq. (13). The first two terms are local, whereas the third comes from the large-scale gradient. There would be a fourth if we considered a large-scale density gradient as well. This third term generates vorticity out of any azimuthal perturbation in the density, much in the same way as the locally isothermal approximation does in global disks. This term is paramount, since it is the only source term that will generate net enstrophy in the flow. In the beginning, this term dominates, generating enstrophy out of the initial density perturbations. The enstrophy is then amplified by the local baroclinic vector via the positive feedback described in the introduction.
We witness the same general phenomena as Petersen et al. (2007a), even though the details of implementing the entropy gradient are different. In global simulations, the vortex swings gas parcels back and forth from cold to hot, which causes baroclinicity and vortex growth. Here the initial temperature all over the box is the same, so the vortex does not automatically swing gas from cold to hot. Instead, it swings it up and down the hard-coded pressure (= temperature = entropy) gradient. The second-to-last term on the right-hand side of the entropy equation comes from the linearization of the advection term in the presence of an entropy gradient. It embodies how the relative entropy of a fluid parcel with respect to the background entropy changes as the parcel moves up or down the entropy gradient, as is clear from the dependence on ux. Because motion along the vortex streamlines includes a ux component, this term increases or decreases the entropy within the gas parcel depending on the sign of ux. Of course, this is just the same physical effect in a different frame of reference.
We show a time series of the flow in Fig. 1 as snapshots of density, pressure, entropy, and vorticity. From these snapshots we see that the density and the pressure are very correlated. One would therefore expect that the amount of baroclinicity produced is tiny or vanishingly small. However, looking at the snapshots of entropy, we see that appearances are deceiving: the vortex generates a strong radial entropy gradient around itself. This is what we described qualitatively in the introduction, and what Petersen et al. (2007a,b) called a "sandwich pattern". Notice that the sign is indeed reversed with respect to the global gradient (higher at negative values of x). This pattern of a local entropy gradient developed by the vortex is a constant feature throughout the simulations.
Fig. 3. Enstrophy and the resulting alpha-viscosity for the fiducial 2D run. The two quantities are quite well correlated, since the angular momentum transport is the result of inertial-acoustic waves, which in turn are driven by vorticity.
3.2. Angular momentum transport
A very important question to ask is what the strength of the angular momentum transport of the resulting baroclinically unstable flow is. We measured the kinetic alpha values in the simulations following Eq. (14), where the stress in the numerator is the Reynolds stress. The value measured is α ≈ 5 × 10⁻³, indicating good transport of angular momentum. The temporal variation of alpha (Fig. 3) matches that of the enstrophy.
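As a sketch of such a measurement, using one common shearing-box convention for α (not necessarily the paper's exact Eq. (14)) applied to synthetic, partially correlated fluctuations:

```python
import numpy as np

# A common shearing-box convention: alpha = <rho * ux * duy> / <rho * cs^2>,
# where duy is the azimuthal velocity fluctuation about the Keplerian shear.
# The fluctuation fields below are synthetic stand-ins, not simulation data.
rng = np.random.default_rng(0)
n = 10_000
cs = 0.1
rho = 1.0 + 0.05 * rng.standard_normal(n)

ux = 1e-2 * rng.standard_normal(n)
duy = 0.5 * ux + 1e-2 * rng.standard_normal(n)   # correlated with ux

alpha = np.mean(rho * ux * duy) / np.mean(rho * cs**2)
print(alpha)   # positive: net outward angular momentum transport
```

Only the correlated part of ux and δuy contributes on average, which is why vortex-driven waves, and not the axisymmetric shear itself, carry the measured stress.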
This correlation is understood in light of the shear-vortex wave coupling. The angular momentum transport does not come from the vortex itself, but is instead caused by the inertial-acoustic waves that are driven by vorticity. For a detailed explanation, see Mamatsashvili & Chagelishvili (2007), Heinemann & Papaloizou (2009), and Tevzadze et al. (2010). The same production of shear waves and associated angular momentum transport are seen in the 2D compressible runs of Lesur & Papaloizou (2010). It should be kept in mind that the quoted angular momentum transport may be overestimated because of using a shearing box. As pointed out by Regev & Umurhan (2008), the shearing box approximation may lead to wrong results because it has a limited spatial scale, excessive symmetries, and uses periodic boundaries. In the particular case of a vortex in a box, the periodic boundaries enforce interaction between the vortex and the strain field of its own images, which may lead to spurious generation of Reynolds stress.
We underscore again that the initial condition was nonvortical. The finite-amplitude perturbations are turned into vortical patches by the global baroclinic term, which may then grow owing to the local baroclinic feedback.
Another point worth highlighting is that the instability grows slowly, on timescales of the order of a hundred orbits. The saturated state is only weakly compressible, with the rms density fluctuation ⟨ ρ2 ⟩ − ⟨ ρ ⟩ 2 at a modest 0.05.
All our simulations are compressible, so their timestep is limited by the presence of sound waves. The viscosity and heat diffusion are explicit, so they enter the Courant condition, which further limits the timestep. We calculated the fiducial model for 1000 orbits, but the other runs, which are intended to explore the parameter space, were only calculated for 500. These are shown in the next sections.
3.3. Thermal time
The fiducial simulation had a constant thermal relaxation time equal to one orbit, τc = τorb in Eq. (7), where τorb = 2π/Ω. This is more representative of the very outer disk, ≈ 100 AU, but at 10 AU the disk is optically thick. The thermal time is therefore expected to be much longer, and it is instructive to examine the behavior of the baroclinic instability in such a regime. We ran simulations with thermal times of 10 and 100 orbits, shown in Fig. 2. These correspond to runs C and D in Table 1. The extreme cases of an adiabatic run (run E) and a nearly isothermal one (τc = 0.1 orbit, run B) are also presented. In agreement with Petersen et al. (2007a), the runs with longer thermal times allow for a stronger increase in enstrophy in the first orbits, also seen in the adiabatic case. This is because the initial thermal perturbations disperse slowly without thermal relaxation, thus remaining tight (strong gradients) and allowing for a stronger baroclinic amplification.
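The regimes probed by these runs can be made concrete with a back-of-the-envelope sketch: Newtonian cooling, dT′/dt = −T′/τc, damps a temperature perturbation by a factor exp(−1/τ̃c) per orbit when τ̃c is the relaxation time measured in orbits. A minimal sketch, with τc values following runs B, A, C, and D:

```python
import numpy as np

# Newtonian thermal relaxation dT'/dt = -T'/tau_c damps a perturbation
# by exp(-P_orb/tau_c) = exp(-1/tau_orbits) per orbital period.
def damping_per_orbit(tau_c_orbits):
    return np.exp(-1.0 / tau_c_orbits)

# tau_c in orbits for runs B (0.1), A (1), C (10), D (100)
factors = {tau: damping_per_orbit(tau) for tau in (0.1, 1.0, 10.0, 100.0)}
# tau_c = 0.1 orbit erases perturbations almost instantly (near-isothermal),
# while tau_c = 100 orbits barely touches them (near-adiabatic).
```

This makes explicit why the short-τc runs behave nearly isothermally and the long-τc runs nearly adiabatically in the initial phase.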
After the first eddies appear, the establishment of a baroclinic feedback needs a fast cooling time to lead to the reverted entropy gradient seen in the fiducial run. The most vigorous enstrophy growth in this phase is indeed seen to be the one with τc equal to one orbit. For a cooling time of 10 orbits, sustained growth of enstrophy only happens at later times (between 20 and 50 orbits, as opposed to 10 orbits for τc = τorb), and leads to 5 times less (grid-averaged) enstrophy at 150 orbits. The isothermal case and the adiabatic cases, as expected, can never establish the counter entropy gradient needed for the baroclinic feedback and do not experience enstrophy growth past the initial phase.
3.4. Diffusion
Thermal relaxation is one of the ways of changing the internal energy of a gas parcel. Another way is of course diffusion. Petersen et al. (2007a) and Lesur & Papaloizou (2010) report sustained baroclinic growth using thermal diffusion, which is also why the 3D simulations of Klahr & Bodenheimer (2003) with flux-limited diffusion experienced baroclinic growth. In the 2D simulations by Klahr & Bodenheimer (2003), the thermal relaxation was numerical, a result of low resolution and a dispersive numerical scheme.
We assess the effect of diffusion by setting τc = ∞ (i.e., shutting down the thermal relaxation) and adding a non-zero radiative conductivity K to the entropy equation (Eq. (7)). The thermal diffusivity kth is related to the radiative conductivity by kth ≡ K/(cpρ). As we choose dimensionless units such that cp = ρ0 = 1, then kth ≈ K in our simulations. The thermal diffusivity, like any diffusion coefficient, has dimension L2T-1, where L is length and T is time. We use L = H, where H = cs/Ω is the scale height, and write kth = H2/τdiff so that the heat diffuses over one scale height within a time τdiff. We assess τdiff = 1, 10, and 100 orbits. These are runs F, G, and H in Table 1. We see that now too short a diffusion time (1 orbit) does not lead to growth, whereas vigorous growth occurs for 100 orbits. The rationale is the same as for the radiative case. Too-fast diffusion disperses temperature gradients and weakens the baroclinic feedback. Slow diffusion works towards keeping the gradients tight and leads to vigorous growth.
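A minimal sketch of the conversion from a target diffusion time to a conductivity. The scale height H = 0.1 in code units is our assumption (consistent with the box height Lz = 2H = 0.2 quoted in Sect. 4), with Ω = 1:

```python
import numpy as np

# Radiative conductivity from a target diffusion time, k_th = H^2/tau_diff,
# so heat diffuses across one scale height H = cs/Omega in tau_diff.
# In the code units of the text (cp = rho0 = 1), K ~ k_th.
# H = 0.1 and Omega = 1 are assumptions, not values quoted in this section.
Omega, H = 1.0, 0.1
P_orb = 2.0 * np.pi / Omega

def k_th(tau_diff_orbits):
    """Thermal diffusivity for a diffusion time given in orbits."""
    return H**2 / (tau_diff_orbits * P_orb)

# runs F, G, H: tau_diff = 1, 10, 100 orbits
conductivities = {n: k_th(n) for n in (1, 10, 100)}
```

The three values scale inversely with τdiff, so run H uses a conductivity a hundred times smaller than run F.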
It is curious that the optimal diffusion time for growth is longer than in the thermal relaxation case. The difference between them is that relaxation is proportional to the temperature, whereas diffusion is proportional to the Laplacian of the temperature; that is, relaxation operates equally across the whole spectrum, while diffusion mostly affects high wavenumbers. As such, stronger diffusion (compared to relaxation) should be needed to affect longer wavelengths. At present, we can offer no explanation as to why this is not the case, though we notice that Lesur & Papaloizou (2010) also find that the optimal diffusion time for the baroclinic feedback is substantially longer than the vortex turnover time.
Dependence on resolution. The low resolution run fails to develop vortices, reaffirming that aliasing is not occurring in our models. The middle and high resolution runs saturate at the same level of enstrophy, which suggests convergence.
Dependence on Grid Reynolds number. The hyperviscosities shown correspond to Re = 0.002, 0.02, and 0.2 with respect to the velocity shear introduced by the Keplerian flow, calculated on the grid scale. The initial phase of growth occurs at Re < 1, where it is seen that the amount of growth depends on the Reynolds number. Upon saturation, all simulations converge to the same level of enstrophy. A heavily aliased solution occurs for ν(3) = 10-21, where even a simulation seeded only with noise develops vortices. The same does not occur for the hyperviscosities shown, where finite amplitude perturbations were required. We usually use ν(3) = 10-17.
Evolution of vorticity (upper panels) and entropy (lower panels) due to the baroclinic instability in 3D.
3.5. Resolution
To investigate the effect of resolution, we compare runs using 128, 256, and 512 grid zones in the x-axis, and cells of unity aspect ratio (meaning four times more resolution in the y-axis than in the fiducial run). These are runs J, K, and L in Table 1. They use the default values of τc = 1 orbit for the thermal relaxation time, and ξ = 2 for the entropy gradient.
As seen in Fig. 4, the run with resolution 128 × 512 (low resolution) fails to sustain vortex growth, in contrast to the runs with resolution 256 × 1024 (middle resolution) and 512 × 2048 (high resolution). That the low-resolution run does not lead to enstrophy growth is a salutary reassurance that aliasing is not spuriously injecting vorticity in the box. The high-resolution run shows a slightly higher initial enstrophy production (from 0–30 orbits), yet it saturates to the same level as the middle resolution run, which suggests convergence.
3.6. Grid Reynolds number
As for the grid Reynolds number, we check three runs, with hyper-viscosities ν(3) = 10-16 (run M), 10-17 (run A), and 10-18 (run N), meaning grid (hyper-)Reynolds numbers of 2 × 10-3, 2 × 10-2, and 2 × 10-1 with respect to the velocity shear introduced by the Keplerian flow, Re = (3/2)Ω Δx2/ν, where ν = ν(3)(π/Δx)4.
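The quoted grid Reynolds numbers can be reproduced with a short sketch. The grid spacing Δx = 0.4/256 is our assumption (a radial box size of 4H with H = 0.1 and 256 zones, consistent with the code units used elsewhere in the text), with Ω = 1:

```python
import numpy as np

# Grid Reynolds number of the Keplerian shear against hyperviscosity:
# Re = (3/2) Omega dx^2 / nu, with the effective grid-scale viscosity
# nu = nu3 * (pi/dx)^4.  dx = 0.4/256 and Omega = 1 are assumptions.
Omega = 1.0
dx = 0.4 / 256

def grid_reynolds(nu3):
    nu = nu3 * (np.pi / dx) ** 4   # effective viscosity at the grid scale
    return 1.5 * Omega * dx**2 / nu

# runs M, A, N: nu3 = 1e-16, 1e-17, 1e-18
res = {nu3: grid_reynolds(nu3) for nu3 in (1e-16, 1e-17, 1e-18)}
```

With these assumptions the three hyperviscosities indeed give Re ≈ 2 × 10-3, 2 × 10-2, and 2 × 10-1.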
By increasing the Reynolds number, the initial enstrophy amplification becomes stronger. Upon saturation, the mean enstrophy in all simulations converges to the same value. It should be noticed that although the grid Reynolds number upon saturation is greater than 1, the initial phase of growth occurs below this number – so growth cannot be due to aliasing. A heavily aliased solution is only attained when the hyperviscosity is decreased to ν(3) = 10-21, so that the initial phase of growth occurs at very high Reynolds numbers (1000). At this Reynolds number, vortex growth occurs when the simulation is seeded with Gaussian noise, which is a sign that the growth was numerical, given the nonlinear nature of the baroclinic instability. In contrast, none of the simulations shown in the figure develop vortices when seeded with noise only. We usually use ν(3) = 10-17, which yields a good compromise between not leading to aliasing and not affecting the timestep too much.
Having examined the behavior of the baroclinic instability in 2D, we now turn to 3D simulations. We only study the unstratified case, because the stratified case needs a modification of the evolution equations, replacing p0 in Eqs. (2) and (4) by p0f(z), where f(z) is the stratification function. We use a box of length (4 × 16 × 2) H, with resolution 256 × 256 × 128. Unlike in Lesur & Papaloizou (2010), our simulations are compressible, which limits the timestep and makes it impractical to follow a 3D computation for many hundreds of orbits. For this reason, we follow it for 250 orbits, which was seen to be the beginning of saturation in 2D runs. The parameters of the simulation are shown in Table 1 as run O.
Evolution of enstrophy, kinetic stresses, and vertical velocities in a 3D baroclinic simulation. The evolution is very similar to the 2D case up to 120 orbits. At that time the vortex goes elliptically unstable, and the kinetic energy of vertical motions increases by 10 orders of magnitude in less than 10 orbits, but remains three orders of magnitude lower than the radial rms velocity. This 3D elliptical turbulence is very subsonic, and the vortex is not destroyed. The level of enstrophy and angular momentum transport remain similar to that of a 2D simulation.
In Fig. 6 we show snapshots of enstrophy and entropy, and in Fig. 7 we plot the time series of box-averaged enstrophy, alpha value, and rms vertical velocity. As seen from these figures, the 3D baroclinic instability evolves very similarly to its 2D counterpart. After 200 orbits the instability begins to saturate as vortices merge and the remaining giant vortex grows to the sonic scale. The sandwich pattern of entropy perturbations sustaining the vortex is also very similar. The saturated state also displays similar values of enstrophy (of the order of 10-2) and angular momentum transport (α ≈ 5 × 10-3).
The difference is in the excitation of the elliptical instability (Kerswell 2002; Lesur & Papaloizou 2009). As seen in the lower panel of Fig. 7, the growth of this instability is very rapid, with the rms of the vertical velocity rising by ten orders of magnitude in less than 10 orbits. As in Lesur & Papaloizou (2010), the instability leads to turbulence in the core of the vortex, but it is not powerful enough to break its coherence. This is because the elliptic destruction caused by the vortex stretching term is compensated by vorticity production through the baroclinic term.
We follow the evolution of the vortex for 130 more orbits, without seeing any decay in the rms vertical velocity. In Fig. 8 we plot vertical slices of the z-vorticity and z-velocity, taken at the y-position of the vorticity minimum, at t = 250 orbits. The snapshots reveal the vertical motions at the vortex core. The motion seems turbulent, only weakly compressible, with maximum velocities reaching 10% of the sound speed.
We stress again that the alpha value is around 5 × 10-3 at saturation and positive. Lesur & Papaloizou (2010) report a much lower (of the order of 10-5) and negative angular momentum transport. This is because of the anelastic approximation, as the authors themselves point out. In that case, the angular momentum "transport" is solely due to the 3D instability that taps energy from the vortical motion. Compressibility allows for the excitation of spiral density waves, which enable positive angular momentum transport.
Vertical slice of the elliptically unstable vortex core showing vertical vorticity (left panel) and vertical velocity (right panel). The motion in the core constitutes a subsonic turbulence at maximum speeds reaching 10% that of sound.
4.1. Magnetic fields. Interaction of the MRI and the BI
The baroclinic instability demonstrated in the past sections seems to be able to drive angular momentum transport in accretion disks. As such, it could be thought of as an alternative to the MRI. Nevertheless, an important question to ask is how the two instabilities interplay. What happens if a magnetic field is introduced into the simulation?
To answer this question, we take a snapshot of the quantities at 200 orbits and add a constant vertical magnetic field to it, of strength B = 5 × 10-3. We assume ideal MHD, i.e., perfect coupling of the field to the gas (run P in Table 2). The same setup in a barotropic box leads to MRI turbulence with alpha values of the order of α ≈ 5 × 10-2. When the field is introduced into the MRI-unstable box, the Maxwell stress immediately starts to grow, saturating after ≈ 3 orbits, as expected from the MRI. The Reynolds stress due to the MRI exceeds the stresses due to the BI by one order of magnitude (see Fig. 9). The pattern is the same as with an MRI-only box. The conclusion is immediate: the BI plays little or no role in the angular momentum transport when magnetic fields are well coupled to the gas. This was intuitively expected, since the BI has weak angular momentum transport as well as slow growth rates. The MRI is faster by one order of magnitude and much stronger.
Angular momentum transport with only the baroclinic instability in a 3D run and the baroclinic and magneto-rotational instabilities, after 200 orbits. The pattern after including the magnetic field is equal to that generated by MRI-only, from which we conclude that the BI is irrelevant if magnetic fields are well-coupled to the gas.
In Fig. 10 we show the evolution of energies, enstrophy, and temperature, before and after including the magnetic field (at 200 orbits). The magnetic energy behaves as expected from the MRI, growing fast and saturating after ≈ 3 orbits, with most of the energy stored in the azimuthal field. The kinetic energy of the turbulence increases by one order of magnitude and is more isotropic, also as expected from the MRI. The temperature increases by a factor of ≈ 2 in 15 orbits. This is because the MRI turbulence heated the box faster than thermal relaxation could bring the temperature back to T0.
With this experiment we aimed to assess the possibility of synergy between the instabilities but, as far as we can tell, none is observed, because the MRI alone dictates the evolution.
Fig. 10
Evolution of box-average quantities (clockwise: kinetic energy, magnetic energy, enstrophy and temperature) before and after inserting the magnetic field. The MRI quickly takes over, on its characteristic short timescale. No evidence of synergy between the two instabilities is observed. The saturated state of the combined baroclinic+MRI resembles an MRI-only scenario.
Evolution of vorticity (upper panels) and magnetic energy (lower panels) in 3D. As the MRI develops, the vortex is destroyed by the magnetic field. In a nonmagnetic run, the vortex survives indefinitely.
Time evolution of the 1D spatial average of enstrophy (upper panels) and azimuthal magnetic field (middle panels) for the runs in Table 2. The lower panels refer to the magnetic field attained in control runs, where ξ = 0. In all runs, the field is seen to grow first in the vortex, then in the surrounding flow. This shows that the growth rates of the magneto-elliptic instability are faster than those of the MRI. Vortex destruction is apparent in these plots as loss of spatial coherence in the enstrophy plots, and occurred in all simulations. The length of the time axis is the same for all simulations, except the control run for run S.
4.1.1. Vortex destruction by the magnetic field
If the evolution of the box-averaged quantities brings no surprises, the same cannot be said of their spatial distribution. In Fig. 11 we plot vorticity at three consecutive orbits after insertion of the magnetic field. Magnetic energy is also shown. The vortex, which in a nonmagnetic run retains its coherence indefinitely, is torn apart when magnetic fields are included. In Fig. 12 we plot 1D spatial averages against time of the vertical enstrophy (upper panel) and magnetic energy of the azimuthal field (middle panel). A control run where ξ = 0 is also shown (lower panel). The figure also shows other simulations (discussed later). The run in question is the leftmost one, labeled "P". It is apparent from the enstrophy plot that the vortex bulges, then gets destroyed as the magnetic energy grows.
To understand this behavior, we examined the state of the vortex prior to inserting the field. In Fig. 13 we measure the vorticity profile of the vortex. The figure shows a slice at the midplane, where we define a box of size 8H × 2H centered on the vorticity minimum. We used elliptical coordinates such that the radius is rV = √(xc² + yc²/χ²), where χ = a/b is the vortex aspect ratio (a is the semimajor and b the semiminor axis). The coordinates xc and yc are rotated by a small angle to account for the off-axis tilt of the vortex, (xc,yc) = R(x − x0,y − y0), where R is the rotation matrix, and (x0,y0) are the coordinates of the vortex center, found by plotting ellipses and looking for a best fit. We find that χ = 4 and a rotation of 3° best fit the vortex core. Two such ellipses are plotted in the upper panel of Fig. 13.
We then measured the vertical vorticity within the box, averaged all vertical measurements for a given radius, and box-plotted the z-averaged measurements against rV. The box plot uses a bin of ΔrV = 0.01. The result is seen in the lower panel of Fig. 13, with the radius of the ellipses drawn in the upper panel. It is seen that the vortex core (inside the inner ellipse) has a vorticity profile that is well approximated by a Gaussian, |ωz| = ω0 exp[−rV²/(2r0²)], where ω0 = 0.62, r0 = 0.1, and the radii are in elliptical coordinates.
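The elliptical-coordinate binning described above can be sketched as follows. The exact definition of the elliptical radius is our assumption (semimajor axis along y, aspect ratio χ = a/b), and the synthetic Gaussian vortex at the bottom only checks that the procedure returns the input profile:

```python
import numpy as np

def elliptical_profile(x, y, wz, x0, y0, chi=4.0, tilt_deg=3.0, dr=0.01):
    """Bin-averaged |vorticity| against an assumed elliptical radius.

    Rotates (x, y) into the vortex frame by tilt_deg, forms the
    elliptical radius r_V (constant on ellipses of aspect ratio chi,
    semimajor axis along y -- an assumed convention), and averages
    |wz| in annuli of width dr.
    """
    t = np.deg2rad(tilt_deg)
    xc = np.cos(t) * (x - x0) + np.sin(t) * (y - y0)
    yc = -np.sin(t) * (x - x0) + np.cos(t) * (y - y0)
    rV = np.sqrt(xc**2 + (yc / chi) ** 2)   # assumed form of the radius
    bins = np.arange(0.0, rV.max() + dr, dr)
    idx = np.digitize(rV.ravel(), bins)
    prof = np.array([np.abs(wz.ravel()[idx == i]).mean()
                     for i in range(1, len(bins))
                     if np.any(idx == i)])
    return prof

# synthetic check: a Gaussian vortex should give back a Gaussian profile
x, y = np.meshgrid(np.linspace(-0.4, 0.4, 200), np.linspace(-1.0, 1.0, 200),
                   indexing="ij")
rV = np.sqrt(x**2 + (y / 4.0) ** 2)
wz = -0.62 * np.exp(-rV**2 / (2 * 0.1**2))
prof = elliptical_profile(x, y, wz, 0.0, 0.0, tilt_deg=0.0)
```

For the synthetic vortex, the first bin recovers the peak amplitude 0.62 and the profile decays monotonically, as expected.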
Vorticity profile of the vortex, prior to the insertion of the magnetic field. We measure the vertical vorticity in the midplane of the simulation against the elliptical radius, in the grid points boxed by the thin black line as shown in the upper panel. The modulus of the vorticity is plotted in the lower panel. The conclusion is that the vortex core has an angular velocity profile close to uniform, with shear where it couples to the Keplerian flow. The dashed lines in the lower panel mark the position of the dotted ellipses in the upper panel. They have an aspect ratio χ = 4, and mark elliptical radii of rV = 0.065 and rV = 0.13. The inner one encloses the vortex core.
We conclude that the vorticity in the core is close to uniform (as a Gaussian is very flat near the peak amplitude). Because the vorticity is finite and close to uniform, so is the angular momentum, and thus little radial shear should be present in the core. As the MRI feeds on shear, one can expect that a patch of constant (or nearly constant) angular momentum should be stable. Nevertheless, examining the vorticity 2 orbits after the insertion of the field (upper middle panel of Fig. 11), we notice that the core did become unstable. This seems to be a signature of the magneto-elliptic instability (Lebovitz & Zweibel 2004; Mizerski & Bajer 2009), which we consider in the next section.
4.1.2. Magneto-elliptic instability
The elliptic instability has been a topic of extensive study in fluid mechanics (see the review by Kerswell 2002). First studied in the absence of background rotation (Bayly 1986; Pierrehumbert 1986), the effect of the Coriolis force was studied by Miyazaki (1993), followed by the effect of magnetic fields by Lebovitz & Zweibel (2004). The general case, in which both background rotation and magnetic fields are present, has recently been studied by Mizerski & Bajer (2009).
These studies have unveiled two regimes of operation, which may as well be seen as two different instabilities. The first destabilization mechanism is through resonances between the frequency of inertial waves and harmonics of the vortex turnover frequency. This instability is three-dimensional, existing for θ > 0 (the angle θ being the angle between the wavevector of the perturbations and that of the vortex motion). Lebovitz & Zweibel (2004) show that this instability persists in the presence of magnetic fields, and that the effect of the field is twofold. While it lowers the growth rates of the elliptically unstable modes, the excitation of MHD waves allows for destabilization of whole new families of resonances.
The second destabilizing mechanism occurs only when the Coriolis force is included (Miyazaki 1993). This instability is nonresonant in nature and exists only for θ = 0 modes, i.e., oscillations in the same plane as the motion of the vortex. Because this plane is associated with the "horizontal" (xy) plane (thus kz modes), this destabilizing mechanism has been called the "horizontal instability". As shown by Lesur & Papaloizou (2009), this nonresonant instability results in exponential drift of epicyclic disturbances. It can thus be regarded as an analog of the Rayleigh instability, but for elliptical streamlines. For a vortex embedded in a Keplerian disk, the modified epicyclic frequency becomes unstable in the range of aspect ratios 3/2 < χ < 4.
Mizerski & Bajer (2009) present the analysis of the general case, including both the Coriolis and Lorentz forces. They confirm the previous effects of the Coriolis and Lorentz forces in isolation, and find that the horizontal instability, when present, dominates over all other modes. They also find that the magnetic field widens the range of existence of the horizontal instability to an unbounded interval of aspect ratios when b² < −4/Ro (15), where Ro = ΩVδ/ΩK (16) is the Rossby number, δ = (χ² + 1)/(2χ) (17) is another measure of the ellipticity, and b = kvA/(ΩVδ) (18) is a dimensionless parametrization of the magnetic field, with k the wavenumber. We can also write b = q/Ro, where q = kvA/ΩK (19) is a more usual dimensionless parametrization of the magnetic field, based on the Balbus-Hawley wavelength λBH = 2πvA/ΩK (Balbus & Hawley 1991; Hawley & Balbus 1991). The analysis of Mizerski & Bajer (2009) also assumes that the vortex flow is of the type u = ΩV(−χy, x/χ, 0) (20). When there is a magnetic field but the criterion posed by Eq. (15) is not fulfilled, Mizerski & Bajer (2009) find that the field has an overall stabilizing effect on the resonant modes of the classical (hydro) elliptic instability.
We can rewrite Eq. (15) in more familiar terms by isolating the wavenumber and expressing the criterion in terms of ΩK and Ro, kvA < 2ΩK√(−Ro) (21). We estimate the Rossby number of the vortex in Fig. 13. Assuming the elliptic flow of Eq. (20), the vorticity is ωT = 2δΩV. The subscript T stands for "total". This distinction is necessary because the sheared flow amounts to a vorticity of ωbox = − 3ΩK/2. The total vorticity is ωT = ωV + ωbox, where ωV is the vortex's intrinsic vorticity. By isolating ΩVδ and dividing by ΩK, we have Ro = ωT/(2ΩK) = ωV/(2ΩK) − 3/4 (22). In the absence of a vortex, the Rossby number is still finite, Ro = − 3/4, because of the vorticity of the shear flow. In this limit, Eq. (21) becomes kvA < √3 ΩK, which is the criterion for the MRI (Balbus & Hawley 1991). As we shall see, the growth rate in this limit also matches that of the MRI. This suggests that the MRI is a particular case of the magneto-elliptic instability.
Shear as the common ground between the magneto-rotational and magneto-elliptical instabilities. The distance between two points in uniform rotation does not increase if the streamlines are circular, i.e., the rotation is rigid (upper figure). However, in elliptic streamlines the distance between the two points does increase even if the rotation is uniform (lower figure). A magnetic field connecting the two points will resist this shear, leading to instability depending on the field strength.
Since we measured ωV/ΩK ≈ − 0.6, the Rossby number is approximately Ro ≈ − 1. These results are compatible with the Kida solution (Kida 1981), ωV = − (3/2)ΩK(χ + 1)/[χ(χ − 1)] (23), from which we derive ωV/(2ΩK) = − (3/4)(χ + 1)/[χ(χ − 1)] (24), and thus Ro = − (3/4)(χ² + 1)/[χ(χ − 1)] (25). For χ = 4, the expressions above yield ωV/ΩK = −5/8 = −0.625, which matches well the vorticity plateau measured in Fig. 13, and Rossby number Ro = −17/16 ≈ −1. Since the Rossby number is Ro ≈ −1, Eq. (21) implies that the horizontal instability in the vortex is present when q ≲ 2.
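The Rossby-number estimate can be packaged in a short sketch. The expressions below are our reconstruction of the Kida relation and of the q < 2√(−Ro) form of the stability boundary, so treat them as an inference from the quoted numbers (−5/8, −17/16, q ≲ 2, the MRI limit) rather than the definitive formulas:

```python
import numpy as np

def rossby_kida(chi):
    """Rossby number of a Kida vortex of aspect ratio chi.

    Uses the reconstructed Kida vorticity
    omega_V/Omega_K = -(3/2)(chi+1)/(chi(chi-1)) and
    Ro = omega_V/(2 Omega_K) - 3/4 (assumptions, not quoted verbatim).
    """
    w_ratio = -1.5 * (chi + 1.0) / (chi * (chi - 1.0))
    return 0.5 * w_ratio - 0.75

def q_crit(chi):
    """Critical q = k*vA/Omega_K for the horizontal instability."""
    return 2.0 * np.sqrt(-rossby_kida(chi))

Ro4 = rossby_kida(4.0)   # the chi = 4 vortex of the text
```

At χ = 4 this returns Ro = −17/16 and a critical q just above 2; in the large-χ (pure shear) limit, Ro tends to −3/4 and q_crit to √3, the MRI criterion.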
In dimensionless units, we use Lz = 0.2, and a resolution of Nz = 128 points in the z-direction, so the wavenumbers present in the box are k0 < k < kNy, where k0 = 2π/Lz = 31 is the largest scale, and kNy = π/Δz = 2011 is the Nyquist scale. The Pencil Code needs eight points to resolve a wavelength without significant numerical dissipation, so for practical purposes, the maximum wavenumber of the inertial range is kNy/4 = 503. Also in dimensionless units, μ0 = ρ0 = ΩK = 1, so for B0 = vA = 5 × 10-3, the condition posed by Eq. (21) is k < 400, well within the range captured by our box.
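The wavenumber bookkeeping of this paragraph can be verified directly. The k < 2Ω/vA form of the stability boundary assumes the Ro ≈ −1 estimate discussed above:

```python
import numpy as np

# Wavenumber budget of the 3D box: largest scale k0 = 2*pi/Lz, Nyquist
# scale kNy = pi/dz, and the ~8-points-per-wavelength rule of thumb for
# the Pencil Code, giving kNy/4 as the practical maximum.
Lz, Nz = 0.2, 128
Omega, vA = 1.0, 5e-3            # code units; vA = B0 with mu0 = rho0 = 1

k0 = 2.0 * np.pi / Lz            # largest scale, ~31
kNy = np.pi / (Lz / Nz)          # Nyquist scale, ~2011
k_max_useful = kNy / 4.0         # practical maximum, ~503
k_unstable = 2.0 * Omega / vA    # q < 2  =>  k < 2*Omega/vA = 400 (Ro ~ -1)
```

All unstable wavenumbers thus sit comfortably inside the well-resolved range of the box.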
As for growth rates, Mizerski & Bajer (2009) do not provide a simple closed-form expression; the solution has to be computed numerically. The growth rate is a function of χ, q, and the angle θ between the z-axis and the wavevector of the perturbation. Technically, the Rossby number is also a free parameter, but the Kida solution ties the Rossby number to the aspect ratio. We present the growth rates for χ = 4 in the q-θ plane in the left panel of Fig. 15.
Left. Numerically calculated growth rates of the magneto-elliptic instability for elliptic streamlines of aspect ratio χ = 4 as a function of the dimensionless magnetic field strength q = kvA/ΩK and the angle θ between the wavevector of disturbances and the vertical axis. Pure kz modes are the most unstable ones, having a critical wavelength near that predicted by Eq. (21). Weaker destabilization exists at intermediate θ (3D disturbances) for shorter wavelengths. Pure planar disturbances (θ = π/2) are stable. Right. Growth rates of the kz modes for different aspect ratios. For χ = 2 and χ = 3 the purely hydrodynamical elliptical instability is seen as finite (and high) growth rates as q tends to zero. For χ = 4 onwards, the instability is magnetic and has a most unstable wavelength near q = 1. The χ = 100 curve represents an approach to the limit of pure shear flow; its calculated growth rate curve matches that of the MRI.
It is seen that the most unstable modes are those of the horizontal instability (θ = 0, or kz modes). The right panel of Fig. 15 shows the growth rates of these modes for a series of aspect ratios. For χ = 2 and χ = 3, we are in the range of existence of the classical (hydro) horizontal instability, noted by the fact that fast exponential growth exists for vA = 0. For χ = 4 onwards, the instability does not exist or is too weak in the hydro regime, and the most unstable wavelength is found in the vicinity of q = 1. Although a critical wavelength exists for kz modes, 3D resonant instability exists for an unbounded range of wavenumbers, albeit with slower growth rates. In the next sections, unless otherwise stated, whenever we mention "magneto-elliptic instability" we mean the horizontal, nonresonant, magneto-elliptic modes.
We also calculate the growth rate in the limit of pure shear flow, which we approximate numerically by χ = 100. In this case, there are no 3D unstable modes, since there is no finite vortex turnover frequency to establish resonances. The only instability present is the horizontal (kz) one, which we also show in the right panel of Fig. 15. The critical wavelength corresponds to q ≈ √3, the most unstable wavelength to q ≈ 1, and the growth rate is σ ≈ 0.75ΩK. One immediately recognizes these as the properties of the MRI. That the MRI is a limiting case of the magneto-elliptic instability makes sense, because Eq. (20) with the Kida solution reduces to a Keplerian sheared flow when χ tends to infinity. Physically, the destabilization of kz modes of the elliptic and magneto-elliptic instabilities means exponential drift of epicyclic disturbances. The equivalent of epicyclic disturbances for χ → ∞ are radial perturbations in a sheared flow. The magneto-rotational and magneto-elliptic are in essence the same instability. Another way of seeing the common ground between the instabilities is by realizing that, although constant angular momentum means rigid rotation in circular streamlines, it does not mean so when it comes to elliptical streamlines. Figure 14 illustrates this point. The length of a line connecting two points is conserved in uniform circular motion, but not in uniform elliptical motion. In other words, uniform elliptical motion contains shear. A magnetic field connecting the two points will resist that shear, leading to instability depending on the field strength.
Judging from Fig. 15, the growth rates of the magneto-elliptic instability at the Balbus-Hawley wavelength q = 1 seem to be well reproduced by the fit σ = (3/4)ΩK(χ + 1)/χ (26), i.e., the MRI growth rate scaled by a factor (χ + 1)/χ. We hereafter refer to this χ = 100 ≫ 1 curve as the MRI limit.
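The fit can be checked against the quoted growth rates. The explicit form σ = (3/4)ΩK(χ + 1)/χ is our reconstruction from the scaling factor (χ + 1)/χ applied to the MRI maximum growth rate; it reproduces the quoted ≈ 0.95ΩK at χ = 4 and 0.75ΩK in the shear limit:

```python
def sigma_fit(chi, Omega_K=1.0):
    """Reconstructed growth-rate fit at q ~ 1: the MRI maximum rate
    (3/4) Omega_K scaled by (chi + 1)/chi.  Treat the exact form as an
    inference from the numbers quoted in the text."""
    return 0.75 * Omega_K * (chi + 1.0) / chi

s4 = sigma_fit(4.0)        # 15/16 = 0.9375, close to the quoted ~0.95
s_shear = sigma_fit(1e6)   # tends to 0.75, the MRI maximum growth rate
```

The fit decreases monotonically with χ, interpolating smoothly between the vortex and pure-shear regimes.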
4.2. Isolating the vortex magnetic action
As seen in Fig. 15, the wavelength range of the magneto-rotational and (horizontal) magneto-elliptic instabilities are almost the same for the aspect ratio of interest, leaving only a narrow range where one instability is captured but not the other. However, the growth rates differ, and we can explore this fact. The maximum growth rate for χ = 4 is σ ≈ 0.95ΩK. While the MRI is amplified a millionfold in three orbits, the magneto-elliptic instability is amplified by more than a billionfold in the same time interval. We study in this section limiting cases where the instabilities do not grow as fast as in Fig. 11, thus allowing us to better study their behavior. Because the magneto-rotational and magneto-elliptic instabilities will both be present in the simulations, we loosely refer to them collectively as "the MHD instabilities" or just "the instabilities" in the next sections.
4.2.1. Increasing the field strength – Stabilization of elliptic instability and channel flows
We add to the box a vertical field of strength Bz = 6 × 10-2. Since the smallest wavenumber of the box is k0 = 31, the critical wavenumber for the MRI gives k/k0 = 0.9, and thus the box is MRI-stable. The critical wavenumber for the magneto-elliptic instability, according to Eq. (21), gives k/k0 = 1.13 and is thus in principle resolved. We thereby aim to explore the window where the MRI is suppressed but the horizontal magneto-elliptic instability is not.
We follow the evolution of box-average quantities in Fig. 16. The run in question is shown in that plot as green dot-dot-dot-dashed lines, and corresponds to run Q1 in Table 2.
After insertion of the field, we immediately see a decrease in the box average of the vertical velocities. The vertical vorticity is unchanged. Radial and azimuthal fields decay along with the vertical velocity. A weak vertical magnetic field of rms β = 1000 is sustained. Even though the analysis provides an elliptical wavelength shorter than the box length, we do not seem to witness the magneto-elliptic instability in operation. In fact, we are in the range of stable Rossby numbers, evidence of which is that the elliptic instability was stabilized after inserting the field. This is not really worrisome considering that the derived critical wavenumber was so close to k0, and we made some approximations. It is curious, though, that we do not see growth in the unstable resonant modes. For wavelengths encompassing the vortex core (λy = 2H or λx = H/2; each well resolved with 32 points), the maximum growth rate is in the vicinity of σ = 0.33ΩK, yielding a millionfold amplification in less than 7 orbits. We ran this simulation for 30 orbits after inserting the field. At present, we can offer no explanation as to why these modes did not become unstable.
We also test a run with a slightly weaker field, Bz = 3.75 × 10-2 (run Q2). In that case, the magneto-rotational and magneto-elliptic instabilities have critical wavenumbers of k/k0 = 1.47 and k/k0 = 1.78, respectively, so both instabilities ought to be resolved. The largest scale of the box corresponds to q = 1.18, close to the maximum growth rate of both instabilities. The magneto-elliptic instability has a faster growth rate, so it should be seen first.
What we witness is quite revealing. The vortex is destroyed in four orbits, while the MRI is still growing in the box. Magnetic energy grows within the vortex at a very fast pace. The vortex is destroyed while still in the phase of linear growth of the instability, owing to the development of a conspicuous and strong channel flow (Fig. 17). Because the flow in different layers occurs in different directions, the vortex is stretched apart and loses its vertical coherence.
Isolating the vortex magnetic action. The lines show the magnetic runs with the MHD instabilities (magneto-elliptic and magneto-rotational) resolved in ideal MHD (P, black solid); unresolved with strong field (Q1, green dot-dot-dot-dashed); most unstable wavelength under-resolved with weak field (R, red dashed); and quenched with resistivity (S, blue dot-dashed). The nonmagnetic 3D hydro run is shown as dotted line in the lower panels. Run Q1 shows that strong magnetic fields have a stabilizing effect on the elliptical instability. Run R shows the magneto-elliptic instability seeming to saturate (at 203 orbits) before the MRI takes over (at 207 orbits). In runs Q2 and S2, the vortex survives until a channel flow develops in the box.
We notice that, prior to the excitation of the channel flow, the elliptical instability in the core was suppressed, which is also obvious from comparing the snapshots at t = 200 and 201 orbits in Fig. 17.
The run is also shown in Fig. 12 (run Q). We see that the growth of magnetic energy occurs earlier in the vortex when compared to the surrounding flow, as expected.
4.2.2. Decreasing the field strength – vortex MHD turbulence
The effect of a strong unstable vertical magnetic field in the vorticity column. The field is added at t = 200. At first, the effect of the field is to stabilize the elliptic turbulence, which is seen in the subsequent snapshots. The disappearance of the vortex at later times is caused by the development of a strong channel flow that stretches the column and destroys its vertical coherence. If the initial vertical field is stable, the strength of the channel does not grow and the vortex survives indefinitely.
Next we checked the behavior of the flow when adding weak magnetic fields. The goal was to slow the MHD instabilities by not resolving their most unstable wavelengths, q ≈ 1. Both instabilities then operate at a slower pace, which stretches the time interval during which one (the magneto-elliptic in the vortex) is saturating while the other (the magneto-rotational in the box) is still growing.
Time series of enstrophy and plasma beta for run R, where the instabilities grow at lower growth rates than in run P (Fig. 11). The magneto-elliptic instability grows faster than the MRI, which is seen as the strong turbulence that develops in the core, while the underlying Keplerian flow is still laminar. Once the MRI saturates, the strain of its turbulence destroys the vortex spatial coherence. It is not conclusive if the vortex would have survived the magneto-elliptic instability had the MRI not destroyed it first.
Time series of enstrophy and plasma beta for runs S, where the instabilities are quenched with resistivity. The upper panels correspond to a high-resistivity run, where even the longest wavelength of the box is damped. The simulation is similar to a nonmagnetized run, the vortex surviving indefinitely. In the lower panels we used lower resistivity with Elsässer number Λ = 1. The longest wavelength of the box thus has a magnetic Reynolds number of 6, so its growth is not quenched. The magneto-elliptic instability grows in the vortex core in a conspicuous kz/k0 = 2 mode. Part of the field generated is diffused away due to the high resistivity. Channel flows eventually develop, destroying the vortex.
The cell size in the z-direction is 1.6 × 10-3. We add a field of strength Bz = 1.5 × 10-3. The Balbus-Hawley wavenumber is kBH = 667, thus resolved, but within the viscous range. The first properly resolved wavenumber is k ≈ 500, which corresponds to q ≈ 0.75. The run is shown in Fig. 16 as a dashed red line, and labeled R in Table 2 and Fig. 12.
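These numbers are reproduced under the same code-unit assumptions as before (Ω = ρ = μ0 = 1), and reading q as the wavenumber normalized by the Balbus-Hawley wavenumber, q = k vA/Ω; that reading of q is an inference from the quoted values, not a definition stated in this passage:

```python
Omega = 1.0
Bz = 1.5e-3
vA = Bz                    # code units: v_A = B / sqrt(mu0*rho), rho = 1

k_BH = Omega / vA          # Balbus-Hawley wavenumber
print(f"k_BH = {k_BH:.0f}")        # ~667, as quoted

k_resolved = 500.0         # first properly resolved wavenumber (from the text)
q = k_resolved * vA / Omega
print(f"q = {q:.2f}")              # ~0.75, as quoted
```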
It is seen that the MRI in the Keplerian flow is suppressed, yet an instability is present. We identify it with the magneto-elliptic instability, as it coincides with the vortex core going unstable, as shown in the snapshots of Fig. 18.
The vortex is magneto-elliptically unstable, yet it does not seem to lose its spatial coherence. The magnetic field is mostly confined to the vortex, which shows up as a region of high Alfvén speeds while the surrounding Keplerian flow is still laminar. The instability is violent, making the vortex bulge. This is apparent in Fig. 18, where the vortex seems to have grown radially from t = 203 to t = 206 orbits. During this period, however, the box averages of kinetic energy and enstrophy are nearly constant (Fig. 16), so it is not clear whether this magneto-elliptic turbulence would have led to vortex destruction, or whether it would have reached a steady state. The process just outlined is well illustrated in Fig. 12 (run R). One orbit later, the MRI starts to develop in the surrounding Keplerian flow (notice the difference between these time scales and those of Fig. 11), which corresponds to the increase in box-average quantities in Fig. 16 at that time. No strong channel flow is excited. The level of vorticity due to the MRI is nonetheless higher than that of the vortex. The latter eventually becomes inconspicuous in the midst of the box turbulence.
We also tested a weaker field, of strength Bz = 6 × 10-4. The wavenumbers of the analysis above were then scaled by 2.5, so the first resolved wavenumber corresponds to q = 0.3. In this case, no significant action was seen. After ten orbits, the intensity of the magnetic energy was only 4 × 10-9, accompanied by only a slight increase in the kinetic energy of the vertical velocities (the in-plane velocities remained unchanged). The minimum plasma beta was still as high as 10^4.
4.2.3. Resistivity
To test the last case, we used a resistivity high enough that the longest unstable wavelength present in the box has a magnetic Reynolds number of unity. This wavelength is of course Lz, the vertical length of the box. The resistivity then is such that ReM = LzvA/η = 1; for a field of strength Bz = 5 × 10-3, this magnetic Reynolds number corresponds to η = 10-3. This is the same field that was used in the fiducial MHD run (Fig. 11), of kBH = 200, so the instabilities are resolved in the absence of resistivity. The run is labeled S in Table 2. The results are shown in the upper panels of Fig. 19.
The simulation is not very different from a purely hydro run. The damped magnetic field has only a slight stabilizing effect on the elliptical instability. A small amount of the kinetic energy of the core turbulence is converted into magnetic energy, which then diffuses away. The vortex becomes, at later times, less magnetized than its surroundings.
The situation should change when the resistivity is lowered slightly, allowing some unstable wavelengths to have ReM > 1, yet still quenching the most unstable wavelengths. For that, we set the Elsässer number Λ = λMRIvA/η to unity. The Elsässer number is equivalent to the magnetic Reynolds number ReM = LU/η taking the length L as the MRI wavelength, and the velocity U as the Alfvén velocity. As such, it is the quantity governing the behavior of the MRI (e.g., Pessah 2010). Having Λ = 1 corresponds to η = 2πvA²/Ω, or η ≃ 1.6 × 10-4 in dimensionless units. The magnetic Reynolds number of the longest wavelength is thus ≈ 6. The results are shown in the lower panels of Fig. 19.
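The quoted resistivity numbers can be cross-checked, assuming code units (Ω = ρ = μ0 = 1) and the definition the text describes, with L the MRI wavelength λ = 2πvA/Ω and U the Alfvén velocity; the box height Lz below is inferred from run S, where ReM = LzvA/η = 1 with η = 10-3:

```python
import math

Omega = 1.0
Bz = 5e-3
vA = Bz                         # code units: v_A = B / sqrt(mu0*rho)

# Run S: Re_M = Lz*vA/eta = 1 with eta = 1e-3 implies Lz = 0.2
eta_S = 1e-3
Lz = eta_S / vA
print(f"Lz = {Lz}")

# Run S2: Lambda = 2*pi*vA**2/(eta*Omega) = 1 gives eta ~ 1.6e-4
eta_S2 = 2.0 * math.pi * vA**2 / Omega
print(f"eta(Lambda=1) = {eta_S2:.2e}")

# Magnetic Reynolds number of the longest wavelength, quoted as ~6
ReM = Lz * vA / eta_S2
print(f"Re_M(Lz) = {ReM:.1f}")
```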
The vertical field again has a stabilizing effect on the elliptical turbulence. This is seen as a weakening of the vertical kinetic energy in Fig. 16, which lasts for two orbits. The difference between this run and the more resistive one is that thanks to the excitation of magneto-elliptic modes, radial and azimuthal fields grow inside the vortex core, and a conspicuous k/k0 = 2 vertical mode appears (lower panels of Fig. 19). The field gets looped around the vortex, initially making the vorticity patch a region of higher Alfvén speeds. Owing to the high resistivity, however, the field diffuses away (the time for the field to diffuse over a scale height is t = H2/η ≈ 10 orbits). The radial field gets sheared into azimuthal by the Keplerian flow. After a few orbits, strong magnetic fields are seen in the vortex spiral waves. At later times, the exponential growth of radial and azimuthal fields, as well as the excited z-velocities, are seen in these waves. This process is illustrated in Fig. 20.
A look at the induction equation illustrates the process. Under incompressibility and elliptical motion (Eq. (20)), the equations for the in-plane field perturbations under the influence of a vertical magnetic field and resistivity are

∂t bx = Bz ∂z ux − χΩV by + η∇²bx,

∂t by = Bz ∂z uy + (ΩV/χ) bx − qΩ bx + η∇²by.

The first term in both equations generates field out of velocity perturbations. This is the only source term for the in-plane field. Under waning velocity perturbations, the generation of fields dies out as well. The second term is also a stretching term, but under the vortical motion, thereby turning radial fields into azimuthal and vice versa, at the vortex frequency. Its effect is to wrap the fields generated by the first term around the vortex. The field then diffuses away due to the resistivity. The radial field is sheared into azimuthal by the third term in the azimuthal field equation.
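A minimal toy integration can illustrate two of the effects described in the prose: the vortical term rotates (bx, by) into each other at the vortex frequency, while resistivity damps both. The coefficients below (vortex frequency, aspect ratio, vertical wavenumber) are illustrative assumptions, and the velocity source and shear terms are omitted for clarity, so this is a sketch of the mechanism, not the paper's equations:

```python
import numpy as np

Omega_V, chi = 0.5, 4.0      # assumed vortex frequency and aspect ratio
eta, kz = 1.6e-4, 62.8       # resistivity and an assumed k_z (mode kz/k0 = 2)
d = eta * kz**2              # resistive decay rate of this mode

b = np.array([1.0, 0.0])     # start with a purely radial field
dt, steps = 1e-3, 20000      # integrate to t = 20 (about 3 orbits)
for _ in range(steps):
    bx, by = b
    dbx = -chi * Omega_V * by - d * bx      # vortical rotation + decay
    dby = (Omega_V / chi) * bx - d * by     # vortical rotation + decay
    b = b + dt * np.array([dbx, dby])

# With no source term, the wound-up field simply decays away.
print(np.hypot(*b))
```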
In this simulation, the magnetic Reynolds number of the longest wavelength is ReM = LzvA/η = 6, so even though the most unstable wavelength of the MRI is suppressed, slower-growing wavelengths are present. Since they amplify the field, strong channel flows eventually appear in the simulation. At 10 orbits the azimuthal field of the channel reaches the same strength as the field in the vortex. The box went turbulent at 15 orbits (Fig. 12), slightly after the destruction of the vortex by the channel flow. We note, however, that in the control run for this simulation, the MRI grew more slowly, only becoming noticeable at ≈ 20 orbits (notice the larger range of the time axis for the control run). It appears that the field produced by the magneto-elliptic instability in the vortex, and then diffused into the box, led to the faster growth compared to the ξ = 0 control run.
A simulation where the Reynolds number of the longest wavelength was three also shows the same qualitative behavior, albeit on longer timescales. We followed a simulation with Reynolds number two for the same time, and no growth was seen. The timescale for growth in this case may be infinite (stable) or just impractically long. We conclude that resistivity suppresses the magneto-elliptic instability when the longest unstable wavelength has a magnetic Reynolds number of the order of unity, as intuitively expected.
Time series of enstrophy and azimuthal field in the midplane for run S2, of moderate resistivity. By action of the magneto-elliptic instability, the field initially grows inside the vortex. Due to the resistivity, it then diffuses away from the vortex, coupling to the waves excited by it. At later times, exponential growth of the field is seen in the wake. The vortex itself appears unmagnetized.
4.3. Constant azimuthal field and zero net flux field
The analysis of the magneto-elliptic instability by Mizerski & Bajer (2009) was done for a system threaded by a uniform constant magnetic field. We seek here to establish the effect of a zero-net-flux field. As it turns out, the vortex is quite unstable to such configurations as well.
We add a field whose initial value is , where k0z = 2π/Lz, and B0 = 10-2. The run is labeled U in Table 2 and Fig. 12. The most unstable wavelength for the MRI has k = 100, hence well-resolved. In a barotropic box, this field led to saturated turbulence after 3 orbits. The critical wavelength for the magneto-elliptic instability has k = 200, also well resolved. As shown in Fig. 12, the vortex becomes unstable well before the box turbulence starts. Within one orbit of inserting the field, the vortex column has already lost its coherence.
As for an azimuthal field, Kerswell (1994) studied the effect of toroidal field on elliptical streamlines, finding only a slight stabilizing adjustment of the growth rates of the elliptical instability. The analysis, however, only holds for the limit of nearly circular streamlines (χ → 1). Given the stark difference in the behavior of vertical fields in different configurations, there is reason to believe the same should apply to azimuthal fields. We add a field , with B0 = 0.03 (run T in Table 2). Once again, the vortex is quickly destroyed, as seen in Fig. 12.
5. Conclusions

We model for the first time the evolution of the baroclinic instability in 3D, including compressibility and magnetic fields. We find that the amount of angular momentum transport due to the inertial-acoustic waves launched by unmagnetized vortices is at the level of α ≈ 5 × 10-3, positive, and compatible with the value found in 2D calculations.
When magnetic fields are included and well-coupled to the gas, an MHD instability destroys the vortex in short timescales. We find that the vortices display a core of nearly uniform angular velocity, as claimed in the literature (e.g., Klahr & Bodenheimer 2006), so this instability is not the MRI. We identify it with the magneto-elliptic instability studied by Lebovitz & Zweibel (2004) and Mizerski & Bajer (2009).
Though Lebovitz & Zweibel (2004) report that the magneto-elliptic instability has lower growth rates than the MRI, our simulations show the vortex core going unstable faster than the box goes turbulent. That is because the presence of background Keplerian rotation allows for destabilization of kz modes (horizontal instability), which have higher growth rates. We also show that the stability criterion and growth rates for the magneto-elliptic instability derived by Mizerski & Bajer (2009), when taken in the limit of infinite aspect ratio (no vortex) and with shear, coincide with those of the MRI. Both instabilities have a similar most unstable wavelength, yet the growth rates of the magneto-elliptic instability in the range of aspect ratios 4 < χ < 10 are approximately 3 times faster than for the MRI.
After the vortex is destroyed, the saturated state of the MRI+BI simulation resembles an MRI-only simulation. The same box-averaged values of α, enstrophy, kinetic, and magnetic energies are measured in the two cases. The conclusion is that the background entropy gradient plays only a small role when magnetic fields are present and well-coupled to the gas. The enstrophy produced by the BI is four orders of magnitude lower than that produced by the MRI.
We performed a series of numerical experiments to determine the behavior of the magneto-elliptic instability in limiting cases. First, we increased the field so that the critical MRI wavelength is bigger than the box. In that case, the elliptical turbulence dies out almost immediately after inserting the field. We take it as evidence of the stabilizing effect of strong magnetic fields on the classical (hydro) elliptical instability. When the field is slightly decreased so that some unstable wavelengths are resolved, the magnetic field inside the vortex core grows rapidly, leading to channel flows that soon break the spatial coherence of the vortex column.
Second, we slowed down the instabilities to better study the magneto-elliptic instability in isolation. Decreasing the growth rate by a factor x stretches the time period between their saturations by the same factor. We thus decreased the field so that the most unstable wavelength in the box is under-resolved. In this case, we witnessed the development of magneto-elliptic turbulence in the vortex core only. This turbulence was violent, but it is not clear whether it would have led to the destruction of the vortex. After seven orbits, longer MRI-unstable wavelengths in the box led to turbulence. The vortex was destroyed by the strain of that turbulence, bulging and losing coherence, and was eventually lost in the turbulent vorticity field of the box. Decreasing the field further led to quenching of the magneto-elliptic instability as well.
Third, the instabilities were suppressed with physical resistivity, setting the Elsässer number to unity. In this case, there is a slight decrease in the kinetic energy of the elliptic turbulence, which lasts for two orbits. Meanwhile, in-plane magnetic fields develop inside the vortex, loop around it, and get diffused away. Vortex destruction happens when longer wavelengths in the box, for which the magnetic Reynolds number is bigger than one, go MRI-unstable. Channel flows develop, and the vortex is stretched apart. By increasing the resistivity, the instabilities are quenched when the longest wavelength of the box has a magnetic Reynolds number ReM ≲ 2.
In addition to uniform vertical fields, we also performed simulations with azimuthal fields and vertical zero net flux fields. These different field configurations also led to magneto-elliptic instability in the vortex.
In view of these results, it is curious that the vortex seen in the zero net flux MRI simulations of Fromang & Nelson (2005) is stable over hundreds of orbits, a fact left without explanation to date. We speculate that this may come from a lack of resolution in the global simulation to capture the magneto-elliptic modes in the core.
We conclude that the baroclinic instability is important only when magnetic fields are too weakly coupled to the gas. Otherwise, the vortices it generates are destroyed by the magneto-elliptic instability, by channel flows, or by the strain of the surrounding MRI turbulence. We thus underscore that our results fit neatly into the general picture of the layered accretion paradigm in protoplanetary disks. If the MRI supersedes the BI, it thus remains the main source of turbulence in the active zones where ionization is abundant. The active layers are unmodified, whereas the dead zone, if baroclinically unstable, is endowed with large-scale vortices and an associated weak but positive accretion rate of α ≈ 5 × 10-3. This value has to be revised by global simulations in view of the limited spatial scale of the shearing box. If confirmed, it might be sufficient for a steady state to be achieved (Terquem 2008), as long as the borders of the dead zone are stable (Lyra et al. 2009). It remains to be studied what the conditions are when vertical stratification is included, what the precise transition is between a BI-dominated dead zone and the MRI-active radii/layers, and how the accumulation of solids will proceed inside elliptically turbulent vortex cores.
See http://www.nordita.org/software/pencil-code/
Actually, this is such a useful insight that some authors prefer to define entropy as S = p/ργ. The reader should then keep in mind that what we call entropy is actually s = cVln(S/S0) in that definition. Here we prefer to use the definition Eq. (7) as it comes from thermodynamics; i.e., Tds = de + pdv, where e = cVT is the internal energy and v = 1/ρ. It also enables the Brunt-Väisälä frequency to be written in a more compact form (Eq. (8)).
A script to calculate the growth rates was kindly provided by Mizerski.
As pointed out by the referee, this is clearly seen when one writes the shear stress Ssh = ∂xuy + ∂yux and substitutes the Kida solution. It yields Ssh = − ΩV(χ2 − 1)/χ, which is only zero for χ = 1.
Simulations were performed at the PIA cluster of the Max-Planck-Institut für Astronomie. We acknowledge useful discussions with K. Mizerski, C. McNally, J. Maron, M.-M. Mac Low, G. Lesur, and A. Johansen. The authors thank the anonymous referee for the many comments that helped improve the manuscript.
Balbus, S., & Hawley, J. 1991, ApJ, 376, 214
Bayly, B. J. 1986, PhRvL, 57, 2160
Fromang, S., & Nelson, R. P. 2005, MNRAS, 364, 81
Gammie, C. F. 1996, ApJ, 457, 355
Hawley, J. F., & Balbus, S. A. 1991, ApJ, 376, 223
Heinemann, T., & Papaloizou, J. C. B. 2009, MNRAS, 397, 64
Johnson, B. M., & Gammie, C. F. 2005, ApJ, 635, 149
Johansen, A., Youdin, A., & Klahr, H. 2009, ApJ, 697, 1269
Kerswell, R. R. 1994, J. Fluid Mech., 274, 194
Kerswell, R. R. 2002, AnRFM, 34, 83
Kida, S. 1981, J. Phys. Soc. Jpn, 50, 3517
Klahr, H. 2004, ApJ, 606, 1070
Klahr, H. H., & Bodenheimer, P. 2003, ApJ, 582, 869
Klahr, H., & Bodenheimer, P. 2006, ApJ, 639, 432
Lebovitz, N., & Zweibel, E. 2004, ApJ, 609, 301
Lesur, G., & Papaloizou, J. C. B. 2009, A&A, 498, 1
Lesur, G., & Papaloizou, J. C. B. 2010, A&A, 513, 60
Lyra, W., Johansen, A., Klahr, H., & Piskunov, N. 2008, A&A, 479, 883
Lyra, W., Johansen, A., Zsom, A., Klahr, H., & Piskunov, N. 2009, A&A, 497, 869
Mamatsashvili, G. R., & Chagelishvili, G. D. 2007, MNRAS, 381, 809
Miyazaki, T. 1993, PhFl, 5, 2702
Mizerski, K. A., & Bajer, K. 2009, J. Fluid Mech., 632, 401
Oishi, J., & Mac Low, M.-M. 2009, ApJ, 704, 1239
Petersen, M. R., Julien, K., & Stewart, G. R. 2007a, ApJ, 658, 1236
Petersen, M. R., Stewart, G. R., & Julien, K. 2007b, ApJ, 658, 1252
Pierrehumbert, R. T. 1986, PhRvL, 57, 2157
Regev, O., & Umurhan, O. M. 2008, A&A, 481, 21
Shen, Y., Stone, J. M., & Gardiner, T. A. 2006, ApJ, 653, 513
Terquem, C. E. J. M. L. J. 2008, ApJ, 689, 532
Tevzadze, A. G., Chagelishvili, G. D., Bodo, G., & Rossi, P. 2010, MNRAS, 401, 901
Turner, N. J., & Drake, J. F. 2009, ApJ, 703, 2152
Appendix A: Linearization of the large-scale pressure gradient
Given a pressure profile following a power law (A.1), we linearize it by considering r = R0 + x, with R0 ≫ x, as in (A.2). The total pressure includes this time-independent part, plus a local fluctuation p′ (A.3), where (A.4) defines the fluctuation and the superscript "(0)" denotes the part of the pressure that has no information on the radial gradient. The pressure gradient is therefore (A.5), and the momentum equation then becomes (A.6). However, this equation would not work because it is not in equilibrium: the second term would continuously inject momentum into the box. This is a reflection of the fact that the pressure gradient modifies the effective gravity, and the rotation curve accordingly. In a global disk, the disk settles into sub-Keplerian centrifugal equilibrium. We add to the equation a term that embodies this equilibrium (A.7). The superscript "(0)" is dropped in Eq. (2). The same procedure applies to the energy equation, albeit with a small caveat, because the energy equation includes not only the pressure gradient but also the pressure itself. In adiabatic form, the energy equation is (A.8), where E = cVρT is the internal energy. The term on the right-hand side is the pdV work. The modifications to the energy equation come from this term and from the radial advection term. Since E = p/(γ − 1), and p is given by Eq. (A.3), we have
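As a sketch of the linearization step (A.1)–(A.2), the power-law profile and its first-order Taylor expansion about R0 plausibly take the form below; the exponent is written here as a generic β, an assumption, since the paper's own symbol for it is not recoverable from this text:

```latex
% Power-law pressure profile and its first-order expansion,
% with r = R0 + x and |x| << R0 (beta is a placeholder exponent).
\begin{align}
  p(r) &= p_0 \left(\frac{r}{R_0}\right)^{-\beta}, \tag{A.1}\\
  p(x) &\simeq p_0 \left(1 - \beta\,\frac{x}{R_0}\right). \tag{A.2}
\end{align}
```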
so (A.9) follows. We now drop the x-dependent term in the pdV work. The main reason is that the term is not periodic, so its inclusion in the shearing box can potentially lead to long-term effects in the simulation, e.g., uneven heating due to the dissipation of waves. It also creates a boundary in the box, and waves would be subject to refraction and reflection upon reaching it.
Although discarding the term is not physically motivated, the loss is not as crucial as it may seem at first, because the term is not important for the instability cycle; that is, the pdV work is not acting on the buoyancy itself. A vortical flow in geostrophic balance is divergenceless, so the pdV term is zero anyway. Thus, neglecting the x-dependent term does not affect the baroclinic instability. If included, the term only affects the wave pattern, and only slightly: waves are subject to more heating (cooling) upon compression (expansion) on one side of the box than on the other. In fact, this could also be expressed by a radially varying adiabatic coefficient γ. A test simulation was run with the non-periodic term included, in order to assess the impact of discarding it. It was seen that the term effectively creates, over a long time, a small entropy jump in the box. Nonetheless, the development of the baroclinic instability progressed unhindered, leading to no significantly different results for vortex formation. Even the large-scale sound waves have the same statistical properties. Growth rates and saturation levels of the turbulence are indistinguishable from those attained in the fiducial run.
We are of course losing consistency by dropping a term from the equations. The consistent ways of modeling the problem are either to use the Boussinesq approximation (as in Lesur & Papaloizou 2010), which filters out acoustics and eliminates the pdV term, or to resort to global disk calculations, where the radial dependencies are only given in the initial condition and are free to evolve in time. This is admittedly a limitation of the model, but the benefit of the high resolution that can be achieved in the local box, at the expense of a term that is not important for the instability cycle, was judged worth the loss in consistency.
The simplified energy equation reads as (A.10); i.e., the box has the same temperature all over. The dependency on ux provides the heating/cooling that a gas parcel would experience in a global model. Because of it, a gas parcel heats up when climbing the temperature gradient, and cools down when descending it.
From the definition of entropy (Eq. (7)), we write (A.11), and taking the derivative we obtain (A.12), where we made use of the continuity equation. Multiplying the whole equation by E, we have (A.13), so the pdV term cancels when converting the energy equation into an equation for entropy. We substitute Eq. (A.10) into Eq. (A.13) to find (A.14). The superscripts "(0)" are dropped in Eq. (7).
Appendix B: Testing for aliasing
One critical task to perform before any attempt to quantify baroclinic growth of vorticity is to test how well the code reproduces the analytical results of shear wave theory. Compressible and incompressible modes have well-defined analytical solutions that may be used to assess the presence and quantify the amount of aliasing introduced by the scheme.
Aliasing is a feature of finite-difference codes, which occurs when a shear wave swings from leading (kx < 0) to trailing (kx > 0). As the radial wavelength of the wave approaches zero, it becomes shorter than the width of a grid cell. Once the signal drops below the Nyquist scale, information is lost owing to the phase degeneracy that is established: there are an infinite number of possible sinusoids of varying amplitude and phase that fit the sampled points (all "aliases" of the correct solution), so spurious power can be introduced in the wave. This extra power is then transferred from the trailing to the leading wave, which will again be swing-amplified.
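The phase degeneracy at the heart of aliasing is easy to demonstrate: on a discrete grid, a mode above the Nyquist limit is indistinguishable, at the grid points, from a longer-wavelength alias. The grid size and mode numbers below are arbitrary illustrative choices:

```python
import numpy as np

N, L = 32, 1.0
x = np.arange(N) * L / N          # grid points

m = 20                            # mode number above the Nyquist mode N/2 = 16
k_high = 2 * np.pi * m / L        # under-resolved wavenumber
k_alias = 2 * np.pi * (m - N) / L # its alias: mode m - N = -12

# Both sinusoids take identical values at every grid point.
assert np.allclose(np.sin(k_high * x), np.sin(k_alias * x))
print("mode m = 20 is indistinguishable from its alias m = -12 on the grid")
```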
Because of this, it is possible that aliasing by itself may lead to spurious vortex growth. In 2D, the energy spuriously generated at the aliased swing-amplification has no option but to undergo an inverse cascade, the end of which is coherent vortices. This is a particular danger for Pencil, because the code is both finite-difference and high-order. The high-order nature is in most cases a plus, of course, since it leads to little overall numerical dissipation. In this case, however, it means that the spuriously added power will not be discarded. Lower order codes can be diffusive enough that the energy introduced by aliasing may be immediately dissipated. Indeed, Shen et al. (2006) highlight that they do not see any aliasing happening, and suggest that this is due to the high degree of numerical diffusivity in their code. This is of course a case of two negative features canceling each other.
We combine the best of both worlds by using hyperviscosity. It makes the code dissipative only where it is needed (on the grid scale), with the extra benefit of being able to control how much dissipation is added to the solution. We note that Oishi & Mac Low (2009), also using the Pencil code, tested the numerical evolution of incompressible 2D as well as compressive 2D and 3D disturbances against their analytical solutions. They found aliasing unimportant when using hyperviscosity. We repeat here the incompressible test (shown in Fig. B.1), and refer to Oishi & Mac Low (2009) and Johansen et al. (2009) for the other tests.
Fig. B.1
Testing the numerical scheme for aliasing, a feature of finite difference methods, which spuriously increases the energy of a wave that swings from leading to trailing. At the resolutions and mesh-Reynolds numbers we use, this effect is successfully suppressed. The wave has a wavenumber kx,0 = − 16π/Lx, which means 4, 8, 16, and 32 points per wavelength at resolutions 32, 64, 128, and 256, respectively. The wiggling is due to excitation of compressible modes not present in the analytical solution.
The analytical solution of the incompressible shear wave is (Johnson & Gammie 2005) (B.1), where kx(t) = kx,0 + qΩkyt, ky ≡ const., and (B.2). The condition of incompressibility kiδui = 0 thus dictates that δuy = − kxδux/ky. The solenoidality of the wave is guaranteed by initializing the velocity field through a streamfunction ψ, with ux = ∂yψ and uy = −∂xψ. We use the setup of Shen et al. (2006), kx,0 = − 16π/Lx, ky = 4π/Ly, with Lx = Ly = 0.5, A = 10-4, cs = Ω = ρ0 = 1, γ = 7/5, and q = 3/2.
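With the Shen et al. (2006) parameters quoted above, the swing time from leading to trailing follows from the standard shearing-sheet relation kx(t) = kx,0 + qΩkyt (assumed here, since the displayed definition was lost in this extraction):

```python
import numpy as np

q, Omega = 1.5, 1.0
Lx = Ly = 0.5
kx0 = -16 * np.pi / Lx     # initial (leading) radial wavenumber
ky = 4 * np.pi / Ly        # constant azimuthal wavenumber

# kx(t) = kx0 + q*Omega*ky*t crosses zero (leading -> trailing) at:
t_swing = -kx0 / (q * Omega * ky)
print(f"t_swing = {t_swing:.2f}")   # ~2.67 in units of 1/Omega
```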
We follow the evolution of the x-velocity at a point of the grid, and plot the result in Fig. B.1, checking for differences due to resolution and initial mesh Reynolds number Re, defined by the hyperviscosity coefficient (B.3). We see that no aliasing is detected at Re = 0.01: the signal is viscously damped before it can be swing-amplified. At Re = 0.1, aliasing occurs at resolution 32². However, the added signal suffers severe (hyper-)viscous damping, so that the next swing amplification has little left to work with. When increasing the Reynolds number to Re = 1, aliasing now happens also at resolution 64², but the solution is decaying. Aliasing is not detected at any Reynolds number for resolution 256². A run with Re = ∞ at resolution 32² (not shown) kept periodic intervals of aliasing without dissipation but without net growth, at least until t = 1000.
The panels in Fig. B.1 are for ξ = 0. When we use nonzero ξ, the aliased solutions show an increase in amplitude. The same does not happen for the solutions at low Reynolds numbers. The wiggling seen at higher resolution in Fig. B.1 is accompanied by changes in the density, which leads us to conclude that it comes from the excitation of compressible modes not present in the analytical solution. We use a resolution of 256² for the 2D runs. At this resolution, we can rest assured that aliasing is not happening for the considered Reynolds numbers.
A&A 497, 869-888 (2009)
Political Analysis
The Wald Test of Common Factors in Spatial Model Specification Search Strategies
Published online by Cambridge University Press: 16 November 2020
Sebastian Juhl*
University of Mannheim, Collaborative Research Center 884, B6, 30–32, Room 324, 68159 Mannheim, Germany. Email: [email protected]
Corresponding author: Sebastian Juhl
Distinguishing substantively meaningful spillover effects from correlated residuals is of great importance in cross-sectional studies. Both forms of spatial dependence hold different implications not only for the choice of an unbiased estimator but also for the validity of inferences. To guide model specification, different empirical strategies involve the estimation of an unrestricted spatial Durbin model and subsequently use the Wald test to scrutinize the nonlinear restriction of common factors implied by pure error dependence. However, the Wald test's sensitivity to algebraically equivalent formulations of the null hypothesis receives scant attention in the context of cross-sectional analyses. This article shows analytically that the noninvariance of the Wald test to such reparameterizations stems from the application of a Taylor series expansion to approximate the restriction's sampling distribution. While this approximation is asymptotically valid, Monte Carlo simulations reveal that alternative formulations of the common factor restriction frequently produce conflicting conclusions in finite samples. An empirical example illustrates the substantive implications of this problem. Consequently, researchers should either base inferences on bootstrap critical values for the Wald statistic or use the likelihood ratio test, which is invariant to such reparameterizations, when deciding on the model specification that adequately reflects the spatial process generating the data.
Keywords: spillover effects, common factors, Wald test, spatial econometrics
Political Analysis, Volume 29, Issue 2, April 2021, pp. 193-211
DOI: https://doi.org/10.1017/pan.2020.23
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
© The Author(s), 2020. Published by Cambridge University Press on behalf of the Society for Political Methodology
The correct specification of the inherently unknown spatial process generating observable patterns of interrelatedness among the units of analysis constitutes a considerable challenge in cross-sectional studies. In particular, distinguishing substantively meaningful indirect spillover effects from spatially correlated random shocks is imperative, as there is a serious risk of making incorrect inferences when estimating a misspecified model (e.g., LeSage and Pace 2009; Darmofal 2015). Unfortunately, while unfocused tests for spatial autocorrelation commonly applied in empirical research, like Moran's I (e.g., Cliff and Ord 1981), help to detect spatial clustering in model residuals, they do not provide guidance on the exact process generating these dependencies. Since spatial regression models differ with respect to the implied pathways of dependence, these simple diagnostic tools do not allow researchers to identify the adequate model specification.
To address this specification problem, many empirical model selection strategies proposed in the literature feature the estimation of the spatial Durbin model (SDM) as an unrestricted nesting model and utilize the Wald test to scrutinize nonlinear common factor restrictions implied by pure error dependence (e.g., Burridge 1981).[Footnote 1] Since the Wald test is asymptotically equivalent to the likelihood ratio (LR) and the Lagrange multiplier (LM) tests, the two alternative likelihood-based specification tests, the choice of a test statistic is often motivated by convenience or familiarity (e.g., LeSage and Pace 2009, 55). However, while previous studies in the field of spatial econometrics report notable differences with respect to their finite sample properties (e.g., Mur and Angulo 2006; 2009), the Wald test's sensitivity to algebraically equivalent alternative formulations of the null hypothesis is somewhat overlooked. Given that this result is well established in a time-series context (e.g., Gregory and Veall 1985; Lafontaine and White 1986; Breusch and Schmidt 1988; Dagenais and Dufour 1991; de Paula Ferrari and Cribari-Neto 1993; Goh and King 1996), and that distinguishing spillover effects from residual correlation is crucial for substantive inferences, this neglect is startling.
By remedying this omission, the present study evaluates the Wald test's appropriateness for differentiating between alternative mechanisms that cause spatial clustering in cross-sectional data structures. It discusses the substantive and econometric implications of alternative spatial processes and shows analytically that the Wald test's lack of invariance to reparameterizations of nonlinear common factor restrictions stems from the necessity to approximate the restrictions' sampling distributions. While this approximation is asymptotically valid, Monte Carlo experiments demonstrate that it frequently leads to misleading inferences concerning the presence of spillover effects across a wide range of parameter settings in finite samples. An empirical example further illustrates the severity of this problem for applied research aiming to assess the support for distinct theoretical mechanisms against possible alternative explanations. Given that a misspecification of the process generating cross-sectional dependencies can bias substantive inferences, the results suggest that, irrespective of the specification search strategy employed, researchers should not base inferences on the Wald statistic's asymptotic $\chi ^{2}$ distribution. Instead, simulation techniques such as bootstrap methods allow researchers to use estimated critical values as an alternative to their asymptotic counterparts. The LR test also offers a valuable alternative procedure that is invariant to reparameterizations of the null hypothesis.
2 Substantive and Residual Dependence in Cross-Sectional Models
In regression analyses utilizing cross-sectional data, three different types of interaction effects can be distinguished that generate spatial autocorrelation in the dependent variable. First, endogenous interaction effects occur whenever the units' outcomes are intertwined. In these situations, the actions, decisions, or behaviors of the units are simultaneously determined and responsive to the other units' outcomes. Second, exogenous interaction effects cause spatial clustering by linking the response of each unit to the covariates of other units. Finally, cross-sectional dependencies can be a product of spatially correlated model residuals (e.g., Elhorst 2014b; Halleck Vega and Elhorst 2015). While endogenous and exogenous interaction effects are part of the regression's systematic component, correlation among the error terms is confined to the model residuals and does not affect the expectation of the outcome conditional on the regressors.
Despite their close correspondence (see e.g., Gibbons and Overman 2012), the distinction between the different mechanisms causing spatial dependencies in the data has far-reaching implications for the estimation and interpretation of the regression coefficients (LeSage and Pace 2009; Rüttenauer 2019). Importantly, substantively meaningful indirect spillover effects, loosely defined as the impact of changes in one unit's covariates on the other units' outcomes, only exist if cross-unit interactions are part of the regression's systematic component (Elhorst 2010; Darmofal 2015; Halleck Vega and Elhorst 2015). In these instances, the cross-partial derivative of unit i's outcome $y_{i}$ with respect to unit j's covariate $x_{j}$ is nonzero, signifying a systematic relationship between the units (e.g., LeSage and Pace 2009, 38). Otherwise, the regression model imposes the restriction of no spillovers by assumption.[Footnote 2] To detect the existence of these spillover effects, different model specification search strategies suggest utilizing the unrestricted SDM model as a general model featuring substantive as well as residual dependence and subsequently testing several parameter restrictions (e.g., Mur and Angulo 2006; LeSage and Pace 2009; Elhorst 2010).
2.1 An Illustrative Example of the Different Spatial Processes
Before outlining the alternative spatial model specifications, it is useful to contrast the different spatial processes with respect to their substantive implications for empirical political science research.
Spillover effects occur whenever the behavior (endogenous interactions) or certain characteristics (exogenous interactions) of one unit—be it a country, a (coalition) government, a political party, or any other entity of interest—affect adjacent units (Darmofal 2015, 5).[Footnote 3] For example, the municipalities' income tax revenues are directly related to their local economic performance. At the same time, the economic prosperity of its neighbors also exerts a positive impact on a municipality's income tax revenues as its residents might commute to work. These dependencies in income tax revenues produce spillovers between the municipalities: the effect of a change in one unit's characteristics propagates to its neighbors (e.g., LeSage and Pace 2009). Hence, adequately understanding the phenomenon of interest—cross-sectional variation in income tax revenues—necessitates the consideration of these exogenous interaction effects among the municipalities that generate substantively meaningful and theoretically relevant indirect spillover effects.
In contrast, cross-sectional dependencies in the disturbances constitute another spatial process that has different substantive implications and requires an alternative model specification. Several circumstances can cause correlation between the units' residuals, for example spatial clustering in measurement errors. Alternatively, omitting a spatially dependent explanatory variable that is part of the true data-generating process (DGP) from the regression equation leads to correlated errors (Elhorst 2014b; Darmofal 2015). With regard to the example given above, it is reasonable to expect that several unobservable but spatially dependent characteristics, like a region's general appeal as a place of residence, affect a municipality's revenue from income taxation as well. In contrast to the theorized exogenous interaction effects, these omitted characteristics merely affect the model residuals, and there are no relevant indirect effects present in the process that generates the data. Since exactly modeling the true dependence structure is almost impossible (e.g., Juhl 2020b), omitting relevant variables that are spatially correlated can create linkages among the units' disturbances.
2.2 Common Factors in the Spatial Durbin Model
While the previous example illustrates the crucial differences in the substantive implications of the underlying process that links the observations to one another, model misspecification may cause severe econometric problems as well. In fact, neglecting cross-sectional interdependencies can induce correlation between the regressors and the residuals, resulting in the canonical endogeneity problem (e.g., Gibbons and Overman 2012; Betz, Cook, and Hollenbach 2019).
In order to illustrate the problem of omitted spillover effects, consider the stylized DGP in which a dependent variable $\boldsymbol {y}$ is entirely determined by two uncorrelated regressors, denoted $\boldsymbol {x}$ and $\boldsymbol {z}$, such that $\boldsymbol {y}=\boldsymbol {x}\beta +\boldsymbol {z}$.[Footnote 4] Assume that $\boldsymbol {z}$ is unobservable and follows a spatial autoregressive process such that $\boldsymbol {z}=\rho \boldsymbol {Wz}+\boldsymbol {u}_{1}$, where $\rho $ is a scalar parameter, $\boldsymbol {W}$ is an exogenously defined connectivity matrix, and $\boldsymbol {u}_{1}$ is a vector of independent and identically distributed normal disturbances with zero mean and a fixed variance.[Footnote 5] This scenario leads to the spatial error model (SEM) specification:
(1) $$ \begin{align} \boldsymbol{y} &= \boldsymbol{x}\beta+\underbrace{(\boldsymbol{I}_{n}-\rho\boldsymbol{W})^{-1}\boldsymbol{u}_{1}}_{\boldsymbol{z}}. \end{align} $$
Due to the uncorrelatedness of $\boldsymbol {x}$ and $\boldsymbol {z}$, omitting the spatially autocorrelated variable does not lead to endogeneity concerns. Furthermore, the relationship depicted in Equation (1) implies no indirect spillover effects since the cross-unit interactions are confined to the residuals. Consequently, even a nonspatial OLS model specification would provide unbiased but inefficient parameter estimates (Lacombe and LeSage 2015).
Now consider a slightly modified scenario in which the included and the omitted regressors are no longer independent from one another but correlated. To induce correlation between the variables, suppose that the random variable $\boldsymbol {u}_{1}$ in Equation (1) is replaced by $\boldsymbol {u}_{2}$ which is an additive linear function of $\boldsymbol {x}$ and a stochastic disturbance term $\boldsymbol {v}$ such that $\boldsymbol {u}_{2}=\boldsymbol {x}\gamma +\boldsymbol {v}$. In addition to the spatial autocorrelation, the unobserved covariate $\boldsymbol {z}$ is now also correlated with $\boldsymbol {x}$ and the scalar $\gamma \in (0,1]$ as well as the dispersion of $\boldsymbol {v}$ ( $\sigma _{v}^{2}$) jointly determine the strength of the correlation. In this slightly modified scenario, the true DGP becomes
(2) $$ \begin{align} \boldsymbol{y} & = \boldsymbol{x}\beta+\underbrace{(\boldsymbol{I}_{n}-\rho\boldsymbol{W})^{-1}(\boldsymbol{x}\gamma+\boldsymbol{v})}_{\boldsymbol{z}}. \end{align} $$
In contrast to the DGP shown in Equation (1), the correlation between the included regressor $\boldsymbol {x}$ and the spatially clustered variable $\boldsymbol {z}$ causes an endogeneity problem if $\boldsymbol {z}$ is omitted from the regression's systematic part. Importantly, the effect of a change in regressor $\boldsymbol {x}$ on the outcome $\boldsymbol {y}$ in Equation (2) is more complex, and both a nonspatial OLS model as well as a SEM model do not yield unbiased estimates since they ignore indirect effects produced by the spatial patterning of the correlated omitted variable. Consequently, the effect of a change in $x_{i}$ is not confined to unit i's outcome $y_{i}$ but rather pertains to all nonisolated units in the entire system through indirect spillover and instantaneous feedback effects (e.g., LeSage and Pace 2009; Betz et al. 2019).
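The contrast between the two DGPs can be illustrated with a small simulation. The sketch below is a simplified illustration; the ring-shaped $\boldsymbol{W}$, the sample size, and the parameter values are arbitrary choices, not those of the article's Monte Carlo analysis. It generates data from Equations (1) and (2) and compares the OLS estimate of $\beta$ in both cases:

```python
import numpy as np

def ring_w(n):
    """Row-standardized connectivity matrix of a ring lattice (two neighbors each)."""
    W = np.zeros((n, n))
    for i in range(n):
        W[i, (i - 1) % n] = 0.5
        W[i, (i + 1) % n] = 0.5
    return W

rng = np.random.default_rng(42)
n, reps = 400, 50
rho, beta, gamma = 0.5, 1.0, 0.5
W = ring_w(n)
A_inv = np.linalg.inv(np.eye(n) - rho * W)   # (I_n - rho*W)^{-1}

b_sem, b_corr = [], []
for _ in range(reps):
    x = rng.standard_normal(n)
    # Equation (1): z uncorrelated with x -> OLS is unbiased but inefficient
    y1 = x * beta + A_inv @ rng.standard_normal(n)
    # Equation (2): z correlated with x -> omitting z biases OLS
    y2 = x * beta + A_inv @ (x * gamma + rng.standard_normal(n))
    b_sem.append((x @ y1) / (x @ x))    # simple OLS slope (no intercept)
    b_corr.append((x @ y2) / (x @ x))

print(np.mean(b_sem))   # close to beta = 1
print(np.mean(b_corr))  # biased upward
```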
In order to address the endogeneity problem and to identify meaningful spillover effects, the unrestricted SDM model plays an important role. In fact, it serves as a general nesting model in many specification search strategies since it comprises several simpler spatial regression models frequently employed in empirical studies (e.g., Mur and Angulo 2009; Elhorst 2010; Angulo and Mur 2011).[Footnote 6] By allowing researchers to test different parameter restrictions, the SDM model facilitates the specification of an econometric model that reflects the spatial process generating the data most appropriately. For the hypothetical scenario with one regressor $\boldsymbol {x}$ and the (possibly correlated) unobserved variable $\boldsymbol {z}$ discussed here, the SDM model to be estimated takes the following form:
(3) $$ \begin{align} \boldsymbol{y} & = \rho\boldsymbol{Wy} + \boldsymbol{x}\beta + \boldsymbol{Wx}\theta + \boldsymbol{v}. \end{align} $$
While it is easy to verify that the SDM model reduces to the popular spatial autoregressive (SAR) model, also known as the spatial lag model (e.g., Elhorst 2014b, 5), that features global spillover effects if $\theta = 0$, and to the spatial lag of X (SLX) model featuring local spillovers when $\rho = 0$, it also subsumes the SEM model that rules out any substantive indirect effects by assumption.[Footnote 7] To illuminate the relationship between these models, it is useful to restate the SEM DGP displayed in Equation (1) by multiplying both sides of the equation by $(\boldsymbol {I}_{n}-\rho \boldsymbol {W})$ and rearranging terms, which results in the following structural form (Burridge 1981; Anselin 2003):
(4) $$ \begin{align} (\boldsymbol{I}_{n}-\rho\boldsymbol{W})\boldsymbol{y} & = (\boldsymbol{I}_{n}-\rho\boldsymbol{W})\boldsymbol{x}\beta + \boldsymbol{u}_{1}\nonumber\\ \boldsymbol{y} - \rho\boldsymbol{Wy} & =(\beta\boldsymbol{I}_{n}-\rho\beta\boldsymbol{W})\boldsymbol{x} + \boldsymbol{u}_{1} \nonumber\\ \boldsymbol{y} & = \rho\boldsymbol{Wy} + \boldsymbol{x}\beta + \boldsymbol{Wx}(-\rho\beta) + \boldsymbol{u}_{1}. \end{align} $$
Equation (4) elucidates that the SEM process imposes a nonlinear common factor restriction on the parameter associated with $\boldsymbol {Wx}$, which can be assessed using estimates from the unrestricted SDM model depicted in Equation (3).[Footnote 8] More precisely, if the estimates from the unrestricted SDM model satisfy the common factor restriction $\theta =-\rho \beta $, the model can be simplified to a SEM model specification because there are no discernible substantive spillover effects in the data. Given that $E(\hat {\beta })-\beta =0$ in Equation (4), the common factor restriction holds for SEM processes.
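The equivalence derived in Equation (4) can also be checked numerically: data generated from the SEM DGP in Equation (1) satisfy the SDM form with the lagged-covariate coefficient fixed at $-\rho\beta$ exactly. A minimal sketch (the small ring-shaped W and the parameter values are arbitrary, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho, beta = 5, 0.4, 2.0
# Row-standardized connectivity matrix of a small ring
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5

x = rng.standard_normal(n)
u = rng.standard_normal(n)
# SEM DGP, Equation (1): y = x*beta + (I - rho*W)^{-1} u
y = x * beta + np.linalg.solve(np.eye(n) - rho * W, u)
# SDM form, Equation (4): y = rho*W y + x*beta + W x*(-rho*beta) + u
y_sdm = rho * W @ y + x * beta + W @ x * (-rho * beta) + u
print(np.allclose(y, y_sdm))  # True: the common factor restriction holds exactly
```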
However, if $\boldsymbol {x}$ and $\boldsymbol {z}$ are correlated in the true DGP, an endogeneity problem occurs since the variable $\boldsymbol {z}$ is unobserved and the common factor restriction no longer holds. Restating the DGP in Equation (2) that features this correlation in a similar fashion yields:
(5) $$ \begin{align} (\boldsymbol{I}_{n}-\rho\boldsymbol{W})\boldsymbol{y} & = (\boldsymbol{I}_{n}-\rho\boldsymbol{W})\boldsymbol{x}\beta + \boldsymbol{x}\gamma + \boldsymbol{v} \nonumber\\ \boldsymbol{y} & = \rho\boldsymbol{Wy} + \boldsymbol{x}(\beta+\gamma) + \boldsymbol{Wx}(-\rho\beta) + \boldsymbol{v}. \end{align} $$
Given the discussion above, Equation (5) illustrates that, in the presence of an omitted variable that is (i) spatially clustered and (ii) correlated with an included regressor, the true DGP resembles the SDM specification. Although the SDM model shown in Equation (3) provides consistent estimates for $E(\hat {\rho })=\rho $ and $E(\hat {\theta })=-\rho \beta $, the estimate of $\beta $ is asymptotically biased since $E(\hat {\beta }) = \beta + \gamma $. Based on these model estimates, the common factor restriction does not hold because of the endogeneity bias present in $E(\hat {\beta })$. Hence, a violation of the common factor restriction is indicative of the existence of indirect spillover effects that need to be included in the systematic part of the regression model.
As this discussion suggests, the spatially lagged exogenous variables in the SDM model specification can also be understood as instruments for omitted variables that are correlated with included regressors (e.g., Elhorst 2014b, 18). At the same time, as Gibbons and Overman (2012) emphasize, this strategy provides only weak identification of the model parameters and crucially depends on the exogeneity and the assumed perfect knowledge of $\boldsymbol {W}$. Consequently, the SDM model does not provide a general solution to the omitted variables problem that would allow researchers to identify causal effects. Just like in any other observational study, doing so requires the application of appropriate research designs (see also Betz et al. 2019; Rüttenauer 2019). Moreover, while the SDM specification features global spillover effects, local spillovers would require a different model specification like the spatial Durbin error model (e.g., Halleck Vega and Elhorst 2015).
3 The Wald Test of Nonlinear Restrictions
Given the estimates from an unrestricted model, the Wald test is flexible enough to scrutinize several different and possibly nonlinear parameter constraints within the same model. Intuitively, the Wald test assesses the distance between the observed estimates and the restrictions imposed. As the distance grows, the restrictions become less likely. In contrast to the LM and the LR tests, which constitute prominent and asymptotically equivalent alternative specification tests, the Wald test does not require the estimation of restricted alternative models (e.g., Burridge 1981).
Despite its advantages, a major drawback of the Wald test of nonlinear restrictions is its lack of invariance to algebraically equivalent expressions of the null hypothesis. Since the asymptotic distribution of a nonlinear restriction needs to be approximated by a Taylor series expansion, seemingly identical functional representations may produce different test statistics. Although this undesirable property is a well-known feature of the Wald test (e.g., Gregory and Veall 1985; Lafontaine and White 1986; Breusch and Schmidt 1988; Dagenais and Dufour 1991; de Paula Ferrari and Cribari-Neto 1993; Goh and King 1996), its consequences for empirical spatial model search strategies have been neglected so far.
3.1 Analytical Derivation and Asymptotic Distribution of the Wald Statistic
Consider a situation in which a test needs to be constructed in order to evaluate a single nonlinear restriction $H_{0}: g(\boldsymbol {\lambda }) = 0$, where $\boldsymbol {\lambda }$ is a parameter vector and $g(\cdot )$ is some function that is continuously differentiable in a neighborhood of $\boldsymbol {\lambda }$. For this general case, the Wald statistic is defined by
(6) $$ \begin{align} w=g(\hat{\boldsymbol{\lambda}})\big[\widehat{V(g(\hat{\boldsymbol{\lambda}}))}\big]^{-1}g(\hat{\boldsymbol{\lambda}}), \end{align} $$
where $\widehat {V(g(\boldsymbol {\hat {\lambda }}))}$ is the estimated variance of $g(\boldsymbol {\hat {\lambda }})$. Under the null hypothesis, w asymptotically follows a $\chi ^{2}$ distribution with the number of degrees of freedom equal to the number of restrictions imposed.
The only complication involved here is that obtaining w necessitates knowledge about the sampling distribution of a nonlinear function. While it is straightforward to compute the value of $g(\boldsymbol {\hat {\lambda }})$ at the parameter estimates, deriving the Wald statistic in Equation (6) additionally requires information about its variability, which depends on the estimator $\boldsymbol {\hat {\lambda }}$ and the restriction $g(\cdot )$. However, given the restriction's nonlinearity, exact distributional results become inapplicable (Greene 2012). Instead, the delta method provides an approximation of the restriction's asymptotic distribution.[Footnote 9] It is based on a first-order Taylor series expansion of $g(\boldsymbol {\hat {\lambda }})$ around the true parameter vector $\boldsymbol {\lambda }$. Assuming that $\boldsymbol {\hat {\lambda }}$ is a consistent estimator with a limiting distribution defined by ${\sqrt {n}(\boldsymbol {\hat {\lambda }} - \boldsymbol {\lambda }) \overset {d}{\rightarrow } \mathcal {N}(\boldsymbol {0}, \boldsymbol {\Sigma })}$ and that the standard regularity conditions hold (see e.g., Newey and McFadden 1994), the delta method implies that
(7) $$ \begin{align} \sqrt{n}(g(\boldsymbol{\hat{\lambda}}) - g(\boldsymbol{\lambda})) & \overset{d}{\rightarrow} \mathcal{N}\big(0,\boldsymbol{G}(\boldsymbol{\lambda})\boldsymbol{\Sigma}\boldsymbol{G}{(\boldsymbol{\lambda})'}\big), \end{align} $$
where $\boldsymbol {G}(\boldsymbol {\lambda }) = \partial g(\boldsymbol {\lambda })/ \partial {\boldsymbol {\lambda }'}$ is a row vector of partial derivatives. It follows that the asymptotic distribution of the restriction under $H_{0}$ is $g(\boldsymbol {\hat {\lambda }}) \overset {a}{\sim } \mathcal {N}\big (g(\boldsymbol {\lambda }), \boldsymbol {G}(\boldsymbol {\lambda })n^{-1}\boldsymbol {\Sigma }\boldsymbol {G}{(\boldsymbol {\lambda })'}\big )$. By using the consistent estimates obtained from an unrestricted model, the restriction's sampling variability derived from the delta method is given by
(8) $$ \begin{align} \widehat{V(g(\boldsymbol{\hat{\lambda}}))} = \boldsymbol{G}(\boldsymbol{\hat{\lambda}})\widehat{\boldsymbol{\Sigma}}\boldsymbol{G}{(\boldsymbol{\hat{\lambda}})'}, \end{align} $$
with $\boldsymbol {G}(\boldsymbol {\hat {\lambda }})$ evaluated at $\boldsymbol {\hat {\lambda }}$ and $\widehat {\boldsymbol {\Sigma }} = n^{-1}\boldsymbol {\Sigma }$ being a consistent estimator of the symmetric, positive definite asymptotic variance–covariance matrix. By substituting Equation (8) into Equation (6), the Wald test statistic can be calculated. Asymptotically, since $\text {plim}_{n\to \infty }\boldsymbol {\hat {\lambda }}=\boldsymbol {\lambda }$, the function $g(\boldsymbol {\hat {\lambda }})$ converges in distribution to $g(\boldsymbol {\lambda })$ with a mean given by $\text {plim}_{n\to \infty }g(\boldsymbol {\hat {\lambda }})=g(\boldsymbol {\lambda })$. At the same time, the necessity to estimate the nonlinear restriction's sampling variability can cause a mismatch between the Wald statistic's asymptotic $\chi ^{2}$ distribution and its finite sample distribution, which has considerable consequences for hypothesis testing (e.g., Lafontaine and White 1986; Phillips and Park 1988).
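Combining Equations (6) and (8), the Wald statistic for a single nonlinear restriction reduces to a few lines of code. The sketch below evaluates the common factor restriction $g(\boldsymbol{\lambda}) = \theta + \rho\beta$ with gradient $\boldsymbol{G}(\boldsymbol{\lambda}) = [\beta, \rho, 1]$; the estimates and the covariance matrix are invented numbers for illustration, not empirical results:

```python
import numpy as np

def wald_stat(g_val, grad, Sigma_hat):
    """Wald statistic of Equation (6) with the delta-method variance of Equation (8)."""
    var_g = grad @ Sigma_hat @ grad   # G(lambda) Sigma G(lambda)'
    return g_val ** 2 / var_g

# Hypothetical SDM estimates lambda_hat = (rho, beta, theta) and covariance matrix
rho, beta, theta = 0.5, 1.0, -0.3
Sigma_hat = np.diag([0.04, 0.01, 0.09])

g = theta + rho * beta                 # restriction g(lambda) = theta + rho*beta
G = np.array([beta, rho, 1.0])         # partial derivatives w.r.t. (rho, beta, theta)
w = wald_stat(g, G, Sigma_hat)
print(round(w, 4))  # 0.3019
```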
3.2 The Wald Test of Common Factors in Spatial Models
The analytical results derived above are directly applicable to the empirical assessment of the common factor restriction in spatial regression models because the null hypothesis can be expressed as a nonlinear function of the estimates obtained from an unrestricted SDM model. In order to calculate the test statistic, it is necessary to determine the functional representation of the null hypothesis of common factors. Yet, there are numerous algebraically equivalent alternative parameterizations that satisfy $g(\boldsymbol {\hat {\lambda }})=0$, where $\boldsymbol {\hat {\lambda }}=[\hat {\rho },\hat {\beta },\hat {\theta }]$ are the estimates obtained from the SDM model in Equation (3).
Table 1, for example, lists the four alternative expressions of the null hypothesis considered by Gregory and Veall (1986) in a time-series context. While all of the alternative statements declare the same restriction, they produce distinct test statistics and p-values in finite samples because the approximation of the restriction's sampling variability is based on the partial derivatives of the parameter estimates. As the right part of Table 1 shows, the vectors of partial derivatives obtained from these nonlinear functions differ depending on the exact representation of the null hypothesis.
Table 1. Algebraically identical formulations of the common factor hypothesis.
As a result, alternative expressions of the common factor hypothesis use different estimators for the nonlinear restriction's sampling variability, which yields distinct test statistics and causes them to converge to the asymptotic $\chi ^{2}$ distribution at individual rates. In large samples, this circumstance is unproblematic as the accuracy of the Taylor series approximation increases in sample size while the contribution of the restriction's estimated variability to the test statistic becomes negligible. In finite samples, however, the differences between the alternatives can be substantial (e.g., Gregory and Veall 1985). At worst, alternative and algebraically identical functional representations of the parameter restriction can indicate opposing conclusions regarding its validity despite the fact that the same model estimates are used to calculate the test statistic.
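This lack of invariance is easy to reproduce. Using one set of hypothetical estimates and an invented covariance matrix, the algebraically equivalent restrictions $g_1(\boldsymbol{\lambda}) = \theta + \rho\beta$ and $g_2(\boldsymbol{\lambda}) = \beta + \theta/\rho$ (a reparameterization of the type listed in Table 1, valid for $\rho \neq 0$) produce different Wald statistics because their gradient vectors differ:

```python
import numpy as np

# Hypothetical SDM estimates (rho, beta, theta) and their covariance matrix
rho, beta, theta = 0.5, 1.0, -0.3
Sigma = np.diag([0.04, 0.01, 0.09])

def wald(g, grad):
    return g ** 2 / (grad @ Sigma @ grad)

# Restriction 1: g1 = theta + rho*beta, gradient [beta, rho, 1]
w1 = wald(theta + rho * beta, np.array([beta, rho, 1.0]))
# Restriction 2: g2 = beta + theta/rho, gradient [-theta/rho**2, 1, 1/rho]
w2 = wald(beta + theta / rho, np.array([-theta / rho**2, 1.0, 1.0 / rho]))

print(w1, w2)  # same hypothesis, different statistics
```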
Importantly, while many statistical software packages used to estimate spatial regression models, such as those available in Stata or R, report results from a Wald test by default, they do not test for common factors. Instead, the null hypothesis these packages evaluate is cross-sectional independence, that is, $\hat {\rho }=0$. Since this is a linear restriction, the Wald statistic's noninvariance problem does not arise, and the tests these packages perform are only indicative of the presence of nonrandom spatial clustering. They do not permit any inferences regarding the spatial process at work. Hence, scrutinizing the common factor restriction requires researchers to amend the Wald test's null hypothesis.
3.3 Modifications of the Wald Test
Since the application of the Taylor series expansion results in distinct Wald statistics for algebraically equivalent formulations of the null hypothesis, the asymptotic $\chi ^{2}$ distribution might constitute an inappropriate approximation of the statistic's finite sample distribution for some parameterizations. Therefore, modifications proposed in the literature that attempt to address the Wald test's noninvariance problem primarily focus on adjusting the statistic's reference distribution.
Phillips and Park (1988), for example, show that an Edgeworth expansion of the Wald statistic provides additional information on the statistic's distribution which can be used to obtain corrected critical values and modified test statistics for each functional representation of the null hypothesis (e.g., de Paula Ferrari and Cribari-Neto 1993; King and Goh 2002). Besides these corrections, simulation techniques allow researchers to generate the empirical distribution under the null hypothesis for each specification of the common factor restriction and base inferences on these reference distributions (e.g., Lafontaine and White 1986; Goh and King 1996). In particular, bootstrap methods provide a way to estimate critical values and use them as an alternative to the (corrected) asymptotic critical values which can be unreliable in finite samples (Godfrey and Veall 1998). Using the following procedure, it is straightforward to derive bootstrap critical values to test for common factors in spatial regression models:
1. Use the estimates from an unrestricted SDM model and calculate the observed Wald statistic w according to Equation (6) for a given restriction.
2. Estimate the restricted SEM model. With these estimates and the DGP shown in Equation (1), generate 100 bootstrap samples by resampling with replacement from the residual vector to obtain bootstrap disturbances.
3. Repeat step 1 for each bootstrap sample and store the Wald statistic in vector $\boldsymbol {\tilde {w}}$.
4. Sort $\boldsymbol {\tilde {w}}$ in ascending order. The value with rank $(1-\alpha )\times 100+1$ is the estimated bootstrap critical value, $\chi ^{2}_{boot}$, corresponding to a predefined $\alpha $-level (e.g., $\alpha =0.05$).
By comparing w calculated in step 1 to the corresponding bootstrap critical value $\chi ^{2}_{boot}$ from step 4, its statistical significance can be assessed. This procedure can be repeated for any functional representation of the common factor restriction in order to obtain individual bootstrap critical values for each restriction and a given region of the parameter space. Thereby, researchers can base inferences on the empirical distribution under the null hypothesis instead of relying on the asymptotic $\chi ^{2}$ distribution. This is especially important since the performance of the Wald test depends not only on the specific expression of the common factor hypothesis but also on the particular region of the parameter space. In fact, previous research shows that there is no single formulation of the restriction that consistently outperforms all alternatives (e.g., Gregory and Veall Reference Gregory and Veall1986; Lafontaine and White Reference Lafontaine and White1986; Phillips and Park Reference Phillips and Park1988).
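The mechanics of steps 3 and 4 can be sketched as follows. The estimation steps 1 and 2 (fitting the unrestricted SDM and the restricted SEM and recomputing the Wald statistic on each residual-resampled sample) are abstracted away by a placeholder draw, so only the extraction of the critical value from the sorted bootstrap statistics is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_critical_value(wald_stats, alpha=0.05):
    """Steps 3-4: sort the bootstrap Wald statistics and return the value with
    rank (1 - alpha) * B + 1, the estimated critical value chi^2_boot."""
    w_sorted = np.sort(np.asarray(wald_stats))
    B = len(w_sorted)
    rank = round((1 - alpha) * B) + 1    # e.g., rank 96 for B = 100, alpha = 0.05
    return float(w_sorted[rank - 1])     # convert 1-based rank to 0-based index

# Placeholder for steps 1-2: in practice each statistic would come from
# re-estimating the unrestricted SDM on a bootstrap sample generated from the
# restricted SEM fit; here we simply draw 100 illustrative values.
boot_w = rng.chisquare(df=1, size=100)
chi2_boot = bootstrap_critical_value(boot_w)
print(round(chi2_boot, 3))
```

The observed statistic w from step 1 would then be compared against `chi2_boot` instead of the asymptotic value of $3.841$.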
While Goh and King (Reference Goh and King1996) demonstrate that both asymptotic modifications—the corrected critical values and the improved test statistics—can even worsen the Wald test's power and size properties, they conclude that the bootstrap approach constitutes a useful improvement for applied research (see also Lafontaine and White Reference Lafontaine and White1986; Godfrey and Veall Reference Godfrey and Veall1998; King and Goh Reference King, Goh, Ullah, Wan and Chaturvedi2002). Of course, neither the Edgeworth corrections nor simulation techniques completely resolve the noninvariance problem inherent to the Wald test. Doing so requires the application of alternative tests such as the asymptotically equivalent LR test, which is invariant to such reparameterizations (e.g., Mur and Angulo Reference Mur and Angulo2006). Yet, by providing corrections for the Wald test's empirical size, these modifications reduce the possibility of intentionally manipulating the result by amending the functional expression of the null hypothesis (King and Goh Reference King, Goh, Ullah, Wan and Chaturvedi2002, 260).
4 Monte Carlo Analysis
4.1 Experimental Setup
In order to investigate the finite sample performance of the Wald test, I conduct Monte Carlo experiments in which I vary the sample size, the strength of the interdependence, and the severity of the omitted variables bias through the degree of correlation between the included and the omitted regressor. Using the spatial process depicted in Equation (2), I generate $1,000$ samples of the outcome vector $\boldsymbol {y}$ for each of the parameter configurations. In the simulations, $\beta = 2$ and $\sigma _{v}^{2}=1$ are held constant and $\boldsymbol {x}$ is drawn from a standard normal distribution. The parameter space of $\gamma $ ranges from $0$ to $1$ in steps of $0.2$ while $\rho $ takes on values between $0$ and $0.8$ in steps of $0.2$.Footnote 10 This setup includes a nonspatial DGP without omitted variables bias ( $\rho =0$ and $\gamma =0$), nonspatial DGPs with omitted variables bias ( $\rho =0$ and $\gamma>0$), SEM DGPs ( $\rho>0$ and $\gamma =0$), and SDM DGPs ( $\rho>0$ and $\gamma>0$). $\boldsymbol {W}$ is a row-stochastic contiguity matrix based on the queen criterion of adjacency. In contrast to the rook connectivity scheme which links spatial units ordered on a lattice to their direct horizontal and vertical neighbors, the queen criterion additionally connects the units to their diagonal neighbors (e.g., Cliff and Keith Ord Reference Cliff and Keith Ord1981).Footnote 11 The sample sizes specified here contain $49$, $100$, $225$, and $400$ observations distributed on regular grids ( $7 \times 7$, $10 \times 10$, $ 15 \times 15$, and $20 \times 20$) to realistically reflect small to medium sized samples frequently encountered in political science.Footnote 12
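The row-stochastic queen contiguity matrix used in these experiments can be constructed as follows; this is a minimal sketch (the function name is my own), shown here for the smallest $7 \times 7$ grid.

```python
import numpy as np

def queen_weights(k):
    """Row-stochastic queen contiguity matrix for a k x k regular lattice:
    units are neighbors if they touch horizontally, vertically, or diagonally."""
    n = k * k
    W = np.zeros((n, n))
    for i in range(k):
        for j in range(k):
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    ni, nj = i + di, j + dj
                    if (di, dj) != (0, 0) and 0 <= ni < k and 0 <= nj < k:
                        W[i * k + j, ni * k + nj] = 1.0
    return W / W.sum(axis=1, keepdims=True)  # row-standardize

W = queen_weights(7)     # smallest grid in the simulations (n = 49)
print(W.shape)           # (49, 49); each row sums to one
```

Under the queen criterion, a corner unit on the lattice has three neighbors and an interior unit has eight, whereas the rook criterion would assign only two and four, respectively.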
Since the consequences of model misspecification for the estimation of unbiased effect estimates have been studied elsewhere (e.g., LeSage and Pace Reference LeSage and Pace2009; Pace and LeSage Reference Pace, LeSage, Páez, Le Gallo, Buliung and Dall'erba2010; Lacombe and LeSage Reference Lacombe and LeSage2015; Rüttenauer Reference Rüttenauer2019), this Monte Carlo analysis focuses on the ability of the Wald test to identify the true spatial model and differentiate between substantive and residual dependence across a range of alternative DGPs.Footnote 13 To this end, I investigate the performance of the Wald test using the four alternative null hypotheses of common factors summarized in Table 1.Footnote 14
4.2 Performance of the Original Wald Test
Table 2 reports the rejection rates of the four expressions of the null hypotheses of common factors at an $\alpha $-level of $0.05$ across the simulations when the true DGP is that of the SEM model ( $\gamma =0$). Since there are no omitted spillovers in this scenario, the common factor restriction holds and the four variants of the Wald test are expected to reject the true null hypothesis in about $5\%$ of the simulation trials with a $95\%$ confidence interval of $[3.65\%; 6.35\%]$.
Table 2 Share of false positives (type I error rates) using asymptotic critical values.
Based on the $\chi ^2$ distribution with $df=1$ and $\alpha = 0.05$, the asymptotic critical value used for all variants of the Wald test and across the different levels of spatial autocorrelation is $\chi ^2_{asym} = 3.841$. The theoretically expected rejection rate across the 1,000 simulation iterations is $5\%$ with a $95\%$ (binomial proportion) confidence interval of $[3.65\%; 6.35\%]$.
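Both quantities in this note can be reproduced with the Python standard library alone, using the fact that a $\chi ^2$ variate with one degree of freedom is a squared standard normal and a normal approximation for the binomial proportion interval.

```python
from math import sqrt
from statistics import NormalDist

z = NormalDist().inv_cdf(0.975)          # two-sided 95% standard normal quantile

# A chi-squared variate with df = 1 is a squared standard normal, so the 5%
# asymptotic critical value of the Wald test equals z squared
chi2_crit = z ** 2
print(round(chi2_crit, 3))               # 3.841

# 95% normal-approximation confidence interval for the expected 5% rejection
# rate across the 1,000 Monte Carlo trials
p, trials = 0.05, 1000
half = z * sqrt(p * (1 - p) / trials)
print(round(100 * (p - half), 2), round(100 * (p + half), 2))   # 3.65 6.35
```

Observed rejection rates outside this interval indicate a statistically distinguishable departure from the nominal size of the test.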
Although Section 3 analytically shows that the alternative Wald tests are asymptotically equivalent, their type I error rates differ notably in finite samples. In particular $H_{0}(II)$, but also $H_{0}(IV)$, deviates considerably from the expected error rate. Across all sample sizes, $H_{0}(II)$ is too conservative when $\rho $ is small. Since $\hat {\rho }$ appears in the restriction's denominator (see Table 1), the restriction has no derivative at zero, which violates the assumed continuity of derivatives. However, the Wald test based on $H_{0}(II)$ remains valid as its asymptotic distribution is obtained under the null hypothesis which precludes the problematic value (Gregory and Veall Reference Gregory and Veall1985).Footnote 15
At the same time, incorrectly rejecting a true null hypothesis might be less problematic in this case since the SDM model derives unbiased impact estimates even if only the residuals are spatially clustered and no substantive spillovers exist (e.g., Elhorst Reference Elhorst2010). The only drawback is that the appropriate SEM specification would be more efficient which might affect inferences regarding the statistical significance of a regressor's impact. Consequently, it is crucial for any test of the common factor hypothesis to have satisfactory power properties in order to reduce concerns about biased effect estimates.
Against this background, Figure 1 compares the performance of the alternative variants of the Wald test for different levels of correlation between the spatially dependent omitted variable and the included regressor by reporting their power. As the correlation increases, the tests should be more likely to reject the null hypothesis. In order to account for the effects of the sample size and the strength of the interdependence on the performance of the tests, Figure 1 is comprised of 16 panels. In each panel, the horizontal axis depicts the different values of $\gamma $ and the vertical axis shows the observed share of rejections across the simulation trials.
Figure 1 Share of null hypothesis rejections at a nominal significance level of $5\%$.
A brief inspection of Figure 1 already confirms that alternative parameterizations of the null hypothesis—although algebraically equivalent—yield strikingly different results in finite samples. Even with a decently sized sample, there are pronounced differences in the rejection rates of the four Wald tests. While $H_{0}(I)$, which is considered to be the common way to express the restriction (Gregory and Veall Reference Gregory and Veall1986, 204), and $H_{0}(III)$ perform comparatively well in these simulations, the specifications based on $H_{0}(II)$ and $H_{0}(IV)$ have inferior power properties. The remarkably low rejection rates of these specifications of the Wald test increase the likelihood that researchers incorrectly infer the absence of meaningful spillover effects.
Moreover, the behavior of $H_{0}(IV)$ differs greatly from the expectation as its rejection rate initially decreases in almost all parameter settings as $\gamma $ increases. This phenomenon—known as nonmonotonicity in the power function—makes the rejection of the null hypothesis even less likely as the difference between the true DGP and the restriction increases (King and Goh Reference King, Goh, Ullah, Wan and Chaturvedi2002, 256–58).Footnote 16 In practice, these tests would suggest that the data was generated by a DGP with spatially correlated errors even if there are sizable spillover effects. Researchers would incorrectly conclude that a SEM model or even a nonspatial OLS model appropriately represents the unobservable DGP. Given that these model specifications produce biased impact estimates if a SDM process generated the data, the low rejection rates are highly problematic for substantive inferences.
Although the different variants of the Wald test use the same data and identical parameter estimates, this simulation study shows that, depending on the functional representation of the null hypothesis, they can come to contradictory conclusions regarding the validity of the common factor restriction.Footnote 17 In fact, Breusch and Schmidt (Reference Breusch and Schmidt1988) analytically show that it is possible to obtain any desired Wald statistic by appropriately specifying the restriction, which opens up the possibility to intentionally manipulate the test result (see also King and Goh Reference King, Goh, Ullah, Wan and Chaturvedi2002). Therefore, any search strategy utilizing the Wald test, like the basic general-to-specific approach or the multistep procedure suggested by Elhorst (Reference Elhorst2014a), is subject to this malfunctioning. Since there is no theoretically justified functional representation of the common factor hypothesis and given the strikingly large share of inconsistent inferences across a range of parameter settings, the evidence presented here strongly cautions against the use of the standard Wald test based on an asymptotic reference distribution.
4.3 Performance of the Modified Wald Test Based on Bootstrap Critical Values
While the simulations performed here illustrate that the standard Wald test based on asymptotic critical values is unreliable for the identification of the unobservable spatial process, this section investigates whether the application of simulated reference distributions improves the test's performance. To this end, I use the bootstrap procedure outlined in Section 3.3 and compare the observed Wald statistics based on the different formulations of the common factor restriction to their estimated critical values.
The results reported in Table 3 show that the estimated critical values from the bootstrap approach, $\chi ^{2}_{boot}$, displayed in parentheses not only differ from their asymptotic counterpart $\chi ^{2}_{asym} = 3.841$ on which the original Wald test is based. They also reveal sizable discrepancies between the alternative parameterizations of the null hypothesis. While the estimated critical values for $H_{0}(I)$ and $H_{0}(III)$ are always higher than $\chi ^{2}_{asym}$, the simulated null distributions of $H_{0}(II)$ suggest much smaller critical values for this expression of the null hypothesis in most scenarios.
Table 3 Share of false positives (type I error rates) using bootstrap critical values.
For the different levels of spatial autocorrelation, the median bootstrap critical values $\chi ^{2}_{boot}$ for each variant of the Wald test at the nominal significance level of $5\%$ are displayed in parentheses. Again, the theoretically expected rejection rate across the simulation trials is $5\%\; [3.65\%; 6.35\%]$.
Since the functional expression of the nonlinear common factor restriction determines the Wald statistic's rate of convergence to the asymptotic $\chi ^{2}$ distribution, estimating critical values for each alternative parameterization improves the empirical size of the Wald test in finite samples. Compared to the original tests based on the asymptotically derived critical value, the observed rejection rates of each of the four variants of the Wald test are closer to the nominal significance level of $5\%$ across all sample sizes. Whereas the observed rejection rate of $H_{0}(I)$ ranges from $6.9\%$ to $11\%$ across the different values of $\rho $ for a sample size of $n=49$ when relying on the asymptotic $\chi ^{2}$ distribution (see Table 2), its corresponding range narrows to 5.1%–5.8% when using bootstrap critical values. Similarly, the bootstrap critical values even improve the size of $H_{0}(IV)$ which performed poorly under the asymptotic reference distribution. For $n=49$, basing inferences on the simulated null distribution narrows the range of rejection rates from 2.6%–11.5% to 4.1%–6.8% across the different levels of spatial autocorrelation.
In conclusion, the Monte Carlo evidence presented here demonstrates that using the simulated null distribution as a reference distribution and basing inferences on estimated rather than asymptotically derived critical values ameliorates the problems posed by the Wald test's lack of invariance to alternative parameterizations of the common factor hypothesis.Footnote 18 Since the bootstrap critical values account for differences in the convergence rates of the Wald statistics, this modification constitutes a superior alternative that facilitates the empirical assessment of the common factor restriction in spatial regression models. Alternatively, the LR test constitutes another option that is invariant to such reparameterizations (Godfrey and Veall Reference Godfrey and Veall1998).Footnote 19 Hence, irrespective of the empirical model search strategy employed, researchers should utilize the modified Wald test based on the simulated null distribution or the LR test in order to empirically evaluate the appropriateness of the spatial model specification.
5 Empirical Example: Spatial Contagion Effects in Economic Voting
An empirical example helps to demonstrate the consequences of the problem for applied research aiming to evaluate the empirical evidence for a theorized mechanism while ruling out alternative mechanisms. To this end, I reanalyze a study conducted by Williams and Whitten (Reference Williams and Whitten2015) that investigates spatial contagion effects, understood as the process by which "[…] a policy success or failure of one political party in the eyes of voters similarly affects those parties that are ideologically proximate" (Williams and Whitten Reference Williams and Whitten2015, 312). The utilization of different sample sizes and the availability of a plausible alternative mechanism make this study an ideal case to investigate the consequences of the Wald test's lack of invariance to reformulations of the common factor hypothesis.
Williams and Whitten (Reference Williams and Whitten2015) argue that the electorate not only rewards or punishes the parties forming the current government for the country's economic performance at the ballot box as predicted by the economic voting hypothesis. Since voters group parties based on their ideological stances, the effect of economic prosperity also spills over to ideologically proximate parties irrespective of whether or not these parties also belong to the government. These indirect effects conjectured by the authors link the economic wellbeing of a country to the electoral performance of opposition parties. Therefore, the study contributes to the literature on electoral competition by combining insights from the hitherto separated literatures on economic voting and spatial party competition.
To assess the empirical support for the proposed mechanism, Williams and Whitten (Reference Williams and Whitten2015, 315–16) analyze data on electoral contests in 23 parliamentary democracies from 1951 to 2005, where the parties constitute the unit of analysis. The change in a party's vote share between two consecutive elections is the dependent variable and the country's economic performance, measured by the real GDP per capita growth, is the main regressor of interest.Footnote 20 In their study, the authors emphasize the importance of spatial regression models which facilitate the estimation of the theoretically expected contagion effects in the form of spatial spillovers. They choose the SAR model specification in order to quantify (global) spillover effects (Williams and Whitten Reference Williams and Whitten2015, 313–14). In line with the economic voting literature suggesting that the voters' ability to clearly attribute the responsibility for the economic (mis)fortune is a necessary precondition for economic voting to occur, they estimate separate SAR models for elections with high ( $n=398$) and low levels of clarity ( $n=1,030$). While economic voting itself should be less pronounced in elections characterized by a low clarity of responsibility because it is harder for voters to hold a party accountable for the country's economic performance, the authors expect to find larger spatial contagion effects in this context. The argument proposed in the study is that in low clarity settings, the electorate is more experienced in switching their support from one party to an ideologically similar party. Therefore, the voters' sophistication in terms of reallocating their support creates stronger interdependencies between parties in low clarity elections as compared to high clarity settings where voters can easily identify the party who is responsible for the national state of the economy (Williams and Whitten Reference Williams and Whitten2015, 312–13).
Given these theoretical expectations, the SAR model constitutes an appropriate choice as it links the electoral fortune of a party to the performance of the other parties and allows researchers to distinguish between direct and indirect effects of economic prosperity on the parties' vote shares. Yet, while unfocused diagnostics, like Moran's I, indicate the existence of spatial interdependencies, it is possible that an alternative spatial process caused the clustering detected in the data.Footnote 21 Unmodeled election-specific particularities, for example, that are unrelated to a country's economic performance—like the general appeal of a candidate or political scandals—might affect the election outcome of ideologically proximate parties as well. Since these factors are not part of the regression's systematic component, they potentially cause spatial clustering in the residuals. Instead of substantively meaningful contagion effects, this plausible alternative process implies no indirect effect of economic performance but a mere diffusion of shocks which would be adequately captured by the SEM model. Consequently, there is a risk that the SAR models specified by the authors lead to incorrect inferences regarding the existence of contagion effects.
To demonstrate the substantive differences between the two alternative spatial processes, Figure 2 displays the estimated direct and indirect effects of the main regressor of interest—economic performance—on the vote share of opposition parties derived from the SAR, SEM, and SDM model.Footnote 22 As the theory suggests, spatial contagion effects should mitigate the negative effect of a strong economy for opposition parties. This is because the beneficial effect of positive economic conditions for governing parties spills over to ideologically neighboring opposition parties. In contrast, if merely the errors are spatially correlated, no contagion takes place and only a direct negative effect of a country's economic performance exists for opposition parties.
Figure 2 Average direct and indirect impact estimates of economic performance in low- and high-clarity elections.
Despite a significant spatial parameter estimate, Figure 2 illustrates that the alternative spatial models suggest no indirect spillover effect of economic growth in low clarity elections.Footnote 23 In high clarity elections, only the SAR model identifies significant spillover effects. While the SEM model assumes no spillovers, the average indirect impact of economic growth on the change in vote share for opposition parties as estimated by the SAR model is $0.020$ with a simulated $95\%$ confidence interval of $[0.004; 0.045]$.Footnote 24 In contrast, the estimate derived from the SDM specification is $0.025\; [-0.208; 0.244]$, suggesting no significant spillovers. Besides the SDM model's remarkable efficiency loss, this example illustrates that the identification of the theorized contagion effects is contingent on the specification of the underlying spatial process. Although the SAR and SEM models produce similar and statistically indistinguishable total impact estimates of economic performance in high clarity elections, the results have very different theoretical implications.Footnote 25 Notably, the overall impact of economic growth on an opposition party's vote share in the SEM model solely consists of the direct impact of $x_{i}$ on $y_{i}$. In contrast, the SAR model also identifies significant indirect impacts. Therefore, while the SAR model supports the theory of spatial contagion effects, there are no substantive spillover effects in the SEM model, which highlights the importance of adequately distinguishing between these alternative processes for substantive inferences.
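The direct/indirect decomposition invoked here follows the standard LeSage–Pace approach, in which the matrix of partial derivatives in a SAR model is $S = (\boldsymbol {I}-\rho \boldsymbol {W})^{-1}\beta $. A minimal sketch, using illustrative weights and coefficients rather than the study's estimates, computes the average direct and indirect impacts:

```python
import numpy as np

def sar_impacts(W, rho, beta):
    """Average direct and indirect impacts of a regressor in a SAR model,
    based on the matrix of partial derivatives S = (I - rho*W)^{-1} * beta."""
    n = W.shape[0]
    S = np.linalg.inv(np.eye(n) - rho * W) * beta
    direct = np.trace(S) / n             # average own-unit (diagonal) effect
    total = S.sum() / n                  # average cumulated effect per unit
    return direct, total - direct        # indirect = total minus direct

# Illustrative weights and coefficients, not the estimates from the reanalysis
rng = np.random.default_rng(1)
W = rng.random((5, 5))
np.fill_diagonal(W, 0.0)
W /= W.sum(axis=1, keepdims=True)        # row-standardize

direct, indirect = sar_impacts(W, rho=0.3, beta=-0.1)
print(round(direct, 3), round(indirect, 3))
```

For a row-standardized $\boldsymbol {W}$, the average total impact equals $\beta /(1-\rho )$; in the SEM model, by contrast, the indirect impact is zero by construction, which is precisely the distinction the common factor test adjudicates.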
In order to address the problem of model misspecification and to empirically distinguish the two plausible spatial processes, I implement the Wald test of common factors by using the SDM model estimates and the four specifications of the common factor restriction outlined in Table 1. If the data supports the theory of indirect contagion effects, the tests should reject the null hypothesis. Yet, Table 4 illustrates that in both high and low clarity contexts, the alternative Wald tests not only differ in their test statistics. Based on the asymptotic critical values, they also come to substantively different conclusions regarding the existence of the spillover effects. While $H_{0}(IV)$ supports the theory proposed by Williams and Whitten (Reference Williams and Whitten2015), the other three alternative versions of the Wald test fail to reject the common factor hypothesis. Instead of substantively meaningful spillovers, these tests only indicate residual dependence which implies that no spatial contagion takes place among the parties. Given the rather large number of observations in the low clarity scenario, these differences become even more alarming. Alternatively, when inferences regarding the underlying spatial process are based on the simulated null distribution of each parameterization of the common factor restriction, all four variants of the Wald test fail to reject the null hypothesis at conventional significance levels.
Table 4 Wald tests of common factors for the analysis of spatial contagion effects.
w is the observed Wald statistic. $p_{asym}$ and $p_{boot}$ denote p values based on asymptotic critical values $\chi ^{2}_{asym}$ and on bootstrap critical values $\chi ^{2}_{boot}$ respectively. While each restriction has an individual simulated critical value, they share a single asymptotic critical value which depends on the $\alpha $-level and the number of degrees of freedom.
Taken together, this empirical case study confirms the Monte Carlo evidence by demonstrating that relying on bootstrap critical values in order to identify statistically significant deviations from the Wald statistic's null distribution improves its finite sample performance and alleviates the conflict between alternative parameterizations of the nonlinear common factor hypothesis. While the tests based on the asymptotic $\chi ^{2}$ distribution come to contradictory conclusions even with a sample size of more than $1,000$ observations, the bootstrap procedure is able to correct for this undesirable circumstance. Regarding the theorized mechanism, this analysis finds insufficient evidence to convincingly dispel doubts that, instead of the theorized spatial contagion effects, correlation in the residuals caused the spatial clustering found in the data.
Distinguishing substantively meaningful indirect spillover effects from a mere diffusion of random shocks is essential as there is a serious risk of making incorrect inferences when estimating a misspecified model. Yet, the task of appropriately modeling the process underlying observable patterns of interrelatedness between the units poses notable difficulties for political scientists. Although many empirical specification search procedures rely on the Wald test to assess the nonlinear common factor restriction, the test's lack of invariance to algebraically equivalent formulations of the null hypothesis poses a serious problem for the accuracy of inferences.
This study investigates the consequences of the Wald test's sensitivity to alternative and algebraically equivalent expressions of the common factor hypothesis for its ability to guide the empirical model specification search. By presenting analytical evidence and using Monte Carlo simulations as well as an empirical example, it shows that the necessity to approximate the sampling variability of a nonlinear function by a Taylor series expansion causes the Wald test's sensitivity to algebraically equivalent reparameterizations of the null hypothesis. While asymptotically valid, this approximation produces considerable differences in finite samples, depending on the restriction's functional representation. In many instances, alternative null hypotheses even suggest contradictory conclusions regarding the underlying spatial process since they converge to the Wald statistic's asymptotic $\chi ^{2}$ distribution at different rates. Given that there is no theoretical justification for any particular expression and since their performance is contingent on the relevant region of the parameter space, the results caution against relying on the Wald test's asymptotic results in any specification search strategy. Instead, practitioners should either base inferences on a simulated null distribution by estimating bootstrap critical values or turn to the LR test which is invariant to such reparameterizations in order to avoid spurious inferences.
Subsequent studies might continue this line of research by developing more reliable strategies that help practitioners to differentiate between substantive and residual dependence. As Mur and Angulo (Reference Mur and Angulo2009) show, the evidence in favor of any search strategy proposed in the literature is mixed, which explains the debate about the most appropriate strategy and prevents the development of general guidelines for the empirical identification of the correct model specification (Rüttenauer Reference Rüttenauer2019, 16). In this regard, Lacombe and LeSage (Reference Lacombe and LeSage2015), for example, demonstrate that Bayesian methods constitute a promising alternative to the frequentist null hypothesis significance testing approach. Additionally, multimodel inference might help overcome the current fixation on model selection and instead allow researchers to focus on the identification of substantively meaningful spillover effects in the data (see also Juhl Reference Juhl2020b). Especially regarding the considerable difficulties researchers face when attempting to empirically distinguish between different spatial processes (e.g., Gibbons and Overman Reference Gibbons and Overman2012), following this line of investigation will enhance model building and contribute to our understanding of different interaction effects among the units of analysis.
Spatial autocorrelation poses notable challenges for the correct specification and interpretation of statistical models as model misspecification can bias the substantive inferences. Notwithstanding these difficulties, interdependencies are paramount in social science theories which obliges researchers to carefully consider the process generating these dependencies when building empirical models in order to make valid inferences with respect to the theories. Consequently, especially in the absence of design-based identification strategies as proposed by Gibbons and Overman (Reference Gibbons and Overman2012), methodological research facilitating the appropriate specification of spatial models constitutes an important contribution for a thorough assessment of theoretical expectations.
This research was funded by the German Research Foundation (DFG)—Project-ID 139943784—SFB 884. I also gratefully acknowledge support by the state of Baden-Württemberg through the High Performance Computing Cluster bwHPC (INST 35/1134-1 FUGG) and the University of Mannheim's Graduate School of Economic and Social Sciences.
This project has been presented at the EPSA conference 2019 in Belfast, Northern Ireland. I would like to thank Lion Behrens, Thomas Bräuninger, Thomas Plümper, Akisato Suzuki, Garrett Vande Kamp, and Laron K. Williams, the participants of the 2019 CDSS Political Science Colloquium at the University of Mannheim, and four anonymous reviewers as well as the journal's editor Jeff Gill for helpful comments.
Replication code for this article has been published in Code Ocean, a computational reproducibility platform that enables users to run the code, and can be viewed interactively at https://doi.org/10.24433/CO.1459046.v1. A preservation copy of the same code and data can also be accessed via Dataverse at Juhl (Reference Juhl2020a).
For supplementary material accompanying this paper, please visit https://doi.org/10.1017/pan.2020.23.
Reader note: The Code Ocean capsule above contains the code to replicate the results of this article. Users can run the code and view the outputs, but in order to do so they will need to register on the Code Ocean site (or login if they have an existing Code Ocean account).
Edited by Jeff Gill
1 There is an ongoing debate in the spatial econometrics literature whether the specific-to-general or the general-to-specific approach should be used in order to identify the true data-generating model (e.g., Florax et al. Reference Florax, Folmer and Rey2003; Florax et al. Reference Florax, Folmer and Rey2006; Hendry Reference Hendry2006; Elhorst Reference Elhorst2014a). However, both approaches have their disadvantages and there is no conclusive evidence for the superiority of any of these search strategies (Mur and Angulo Reference Mur and Angulo2009; Rüttenauer Reference Rüttenauer2019). Consequently, many search procedures rely on a combination of both approaches (e.g., Mur and Angulo Reference Mur and Angulo2006; Elhorst Reference Elhorst2010; Elhorst Reference Elhorst2014a).
2 By virtue of the Gauss–Markov assumptions, nonspatial regression models typically estimated by ordinary least squares (OLS) do not incorporate any spatial effects. In a regression model with a single regressor $\boldsymbol {x}$, the direct effect of a change in $x_{i}$ for unit i on the unit's outcome $y_{i}$ is $\partial E(y_{i}|x_{i})/\partial x_{i} = \hat {\beta }_{OLS} = (\boldsymbol {{x'}x})^{-1}\boldsymbol {{x'}y}$ while this change is $\partial E(y_{j}|x_{i})/\partial x_{i} = 0$ for all units j where $j \neq i$. By the same token, the spatial error model (SEM) specification does not feature spillover effects since residual dependence does not affect $E(y_{i}|x_{i})$.
3 Different theoretical mechanisms can produce substantively meaningful spillover effects. Shipan and Volden (2008), for example, distinguish between four different mechanisms of policy diffusion: learning, economic competition, imitation, and coercion. Acknowledging this, I leave aside a thorough discussion of alternative mechanisms and restrict the focus to the empirical modeling of cross-sectional dependencies.
4 For the sake of simplicity, I set the coefficient associated with $\boldsymbol {z}$ to 1 and omit it from the equation.
5 Also assume that $\rho $ is contained in the open interval ( $\omega _{min}^{-1}$; $\omega _{max}^{-1}$), where $\omega _{min}$ and $\omega _{max}$ are the smallest and largest eigenvalues of $\boldsymbol {W}$. This stability constraint ensures that the matrix $(\boldsymbol {I}_{n}-\rho \boldsymbol {W})$ is positive definite and its inverse exists (e.g., LeSage and Pace 2009; Elhorst 2014b).
6 While the general nesting spatial (GNS) model incorporates all possible types of cross-sectional interaction effects, it tends to be overparameterized. Hence, it provides no additional information and is rarely used in applied studies (e.g., Elhorst 2014b; Rüttenauer 2019).
7 Since this study is primarily concerned with the Wald test's ability to assess the common factor hypothesis, the reader may be referred to Halleck Vega and Elhorst (2015), Elhorst (2014b, 2010), or LeSage and Pace (2009), who provide excellent treatments of alternative spatial regression models.
8 The number of these common factor restrictions equals the number of regressors included in the model. To ease the exposition, I assume a single regressor throughout this study.
9 Supplementary Material A contains further details on the delta method.
10 Supplementary Material B contains information on the correlation between the regressor $\boldsymbol {x}$ and the omitted variable $\boldsymbol {z}$ for the different values of $\gamma $.
11 Since $\boldsymbol {W}$ is row-stochastic, any value of $\rho $ in the interior of $\omega _{min}^{-1}$ and $1$ ensures matrix invertibility (LeSage and Pace 2009; Elhorst 2014b).
12 To perform the simulations, I rely on resources from the High Performance Computing Cluster bwHPC and use the R package spdep (Bivand and Piras 2015) to estimate all spatial regression models. Replication materials are available on the Political Analysis Dataverse (Juhl 2020a).
13 For an investigation of the substantive effects of spatial misspecification bias in nonspatial OLS and SEM models, see Supplementary Material C.1.
14 Supplementary Material C contains additional robustness tests including a scenario with negative spatial autocorrelation (C.2), an alternative specification of the connectivity scheme and an investigation of possible edge effects (C.3).
15 This situation also occurs at $\hat {\beta }=0$ for $H_{0}(III)$ and at $\hat {\theta }=0$ for $H_{0}(IV)$.
16 The Wald test based on $H_{0}(IV)$ is also strongly affected by $\beta $ in the DGP. As Supplementary Material C.4 shows, the test's performance is even worse for smaller values of $\beta $.
17 Supplementary Material C.5 provides a more detailed discussion on these inconsistencies and identifies regions of the parameter space where the alternative Wald tests diverge most frequently.
18 The application of bootstrap critical values can also improve the Wald test's power as Supplementary Material C.6 shows.
19 Supplementary Material C.7 verifies the good performance of the LR test in the Monte Carlo simulations conducted here.
20 Williams and Whitten (2015) present a more detailed discussion on the dataset as well as a comprehensive derivation of their theoretical expectations regarding spatial contagion effects.
21 Table 1 in the study by Williams and Whitten (2015, 316) reports the Moran's I and Geary's C tests of spatial interdependencies, which both indicate spatial autocorrelation. Note that, while this table also reports results of a Wald test, this is not the Wald test of common factors but rather a test of the null hypothesis of no spatial dependence (see Section 3.2).
22 While Table 2 in the original work only contains the prespatial marginal effects, the estimates presented here explicitly disentangle direct and indirect (or contagion) effects.
23 This highlights the necessity to base substantive inferences on impact rather than coefficient estimates. Elhorst (2010) and LeSage and Pace (2009) provide a more detailed discussion on this important issue.
24 In order to appropriately account for sampling uncertainty, I use the point estimates and the variance-covariance matrices obtained from the different spatial models to set up multivariate normal distributions from which I sample $1,000$ sets of coefficients.
25 The SAR model's ATI estimate in high clarity elections is $-0.237 [-0.410; -0.056]$ while the ATI derived from the SEM model is $-0.254 [-0.448; -0.075]$.
Angulo, A. M., and Mur, J. 2011. "The Likelihood Ratio Test of Common Factors under Non-Ideal Conditions." Investigaciones Regionales 21:37–52.
Anselin, L. 2003. "Spatial Externalities, Spatial Multipliers, and Spatial Econometrics." International Regional Science Review 26(2):153–166.
Betz, T., Cook, S. J., and Hollenbach, F. M. 2019. "Spatial Interdependence and Instrumental Variable Models." Political Science Research and Methods. doi:10.1017/psrm.2018.61
Bivand, R., and Piras, G. 2015. "Comparing Implementations of Estimation Methods for Spatial Econometrics." Journal of Statistical Software 63(18):1–36.
Breusch, T. S., and Schmidt, P. 1988. "Alternative Forms of the Wald Test: How Long is a Piece of String?" Communications in Statistics - Theory and Methods 17(8):2789–2795.
Burridge, P. 1981. "Testing for a Common Factor in a Spatial Autoregression Model." Environment and Planning A: Economy and Space 13(7):795–800.
Cliff, A. D., and Keith Ord, J. 1981. Spatial Processes: Models & Applications. London: Pion.
Dagenais, M. G., and Dufour, J.-M. 1991. "Invariance, Nonlinear Models, and Asymptotic Tests." Econometrica 59(6):1601–1615.
Darmofal, D. 2015. Spatial Analysis for the Social Sciences. Analytical Methods for Social Research. New York: Cambridge University Press.
de Paula Ferrari, S. L., and Cribari-Neto, F. 1993. "On the Corrections to the Wald Test of Non-Linear Restrictions." Economics Letters 42(4):321–326.
Elhorst, J. P. 2010. "Applied Spatial Econometrics: Raising the Bar." Spatial Economic Analysis 5(1):9–28.
Elhorst, J. P. 2014a. "Matlab Software for Spatial Panels." International Regional Science Review 37(3):389–405.
Elhorst, J. P. 2014b. Spatial Econometrics: From Cross-Sectional Data to Spatial Panels. Heidelberg: Springer.
Florax, R. J. G. M., Folmer, H., and Rey, S. J. 2003. "Specification Searches in Spatial Econometrics: The Relevance of Hendry's Methodology." Regional Science and Urban Economics 33(5):557–579.
Florax, R. J. G. M., Folmer, H., and Rey, S. J. 2006. "A Comment on Specification Searches in Spatial Econometrics: The Relevance of Hendry's Methodology: A Reply." Regional Science and Urban Economics 36(2):300–308.
Gibbons, S., and Overman, H. G. 2012. "Mostly Pointless Spatial Econometrics?" Journal of Regional Science 52(2):172–191.
Godfrey, L. G., and Veall, M. R. 1998. "Bootstrap-Based Critical Values for Tests of Common Factor Restrictions." Economics Letters 59(1):1–5.
Goh, K.-L., and King, M. L. 1996. "Modified Wald Tests for Non-Linear Restrictions: A Cautionary Tale." Economics Letters 53(2):133–138.
Greene, W. H. 2012. Econometric Analysis. 7th edn. Boston, MA: Pearson Education.
Gregory, A. W., and Veall, M. R. 1985. "Formulating Wald Tests of Nonlinear Restrictions." Econometrica 53(6):1465–1468.
Gregory, A. W., and Veall, M. R. 1986. "Wald Tests of Common Factor Restrictions." Economics Letters 22(2-3):203–208.
Halleck Vega, S., and Elhorst, J. P. 2015. "The SLX Model." Journal of Regional Science 55(3):339–363.
Hendry, D. F. 2006. "A Comment on 'Specification Searches in Spatial Econometrics: The Relevance of Hendry's Methodology'." Regional Science and Urban Economics 36(2):309–312.
Juhl, S. 2020a. "Replication Data for: The Wald Test of Common Factors in Spatial Model Specification Search Strategies." https://doi.org/10.7910/DVN/CY7YWE, Harvard Dataverse, V1.
Juhl, S. 2020b. "The Sensitivity of Spatial Regression Models to Network Misspecification." Political Analysis 28(1):1–19.
King, M. L., and Goh, K.-L. 2002. "Improvements to the Wald Test." In Handbook of Applied Econometrics and Statistical Inference, edited by Ullah, A., Wan, A. T. K., and Chaturvedi, A., 251–275. New York: Dekker.
Lacombe, D. J., and LeSage, J. P. 2015. "Using Bayesian Posterior Model Probabilities to Identify Omitted Variables in Spatial Regression Models." Papers in Regional Science 94(2):365–383.
Lafontaine, F., and White, K. J. 1986. "Obtaining any Wald Statistic You Want." Economics Letters 21(1):35–40.
LeSage, J. P., and Pace, R. K. 2009. Introduction to Spatial Econometrics. Boca Raton, FL: CRC Press.
Mur, J., and Angulo, A. 2006. "The Spatial Durbin Model and the Common Factor Tests." Spatial Economic Analysis 1(2):207–226.
Mur, J., and Angulo, A. 2009. "Model Selection Strategies in a Spatial Setting: Some Additional Results." Regional Science and Urban Economics 39(2):200–213.
Newey, W. K., and McFadden, D. L. 1994. "Large Sample Estimation and Hypothesis Testing." In Handbook of Econometrics, Vol. IV, edited by Engle, R. F., and McFadden, D. L., 2111–2245. Amsterdam: North-Holland.
Pace, R. K., and LeSage, J. P. 2010. "Omitted Variables Biases of OLS and Spatial Lag Models." In Progress in Spatial Analysis: Theory and Computation, and Thematic Applications, edited by Páez, A., Le Gallo, J., Buliung, R. N., and Dall'erba, S., 17–28. Berlin: Springer.
Phillips, P. C. B., and Park, J. Y. 1988. "On the Formulation of Wald Tests of Nonlinear Restrictions." Econometrica 56(5):1065–1083.
Rüttenauer, T. 2019. "Spatial Regression Models: A Systematic Comparison of Different Model Specifications Using Monte Carlo Experiments." Sociological Methods & Research. doi:10.1177/0049124119882467
Shipan, C. R., and Volden, C. 2008. "The Mechanisms of Policy Diffusion." American Journal of Political Science 52(4):840–857.
Williams, L. K., and Whitten, G. D. 2015. "Don't Stand So Close to Me: Spatial Contagion Effects and Party Competition." American Journal of Political Science 59(2):309–325.
SOME NEW IDENTITIES CONCERNING THE HORADAM SEQUENCE AND ITS COMPANION SEQUENCE
Keskin, Refik;Siar, Zafer 1
https://doi.org/10.4134/CKMS.c170261
Let a, b, P, and Q be real numbers with $PQ{\neq}0$ and $(a,b){\neq}(0,0)$. The Horadam sequence $\{W_n\}$ is defined by $W_0=a$, $W_1=b$ and $W_n=PW_{n-1}+QW_{n-2}$ for $n{\geq}2$. Let the sequence $\{X_n\}$ be defined by $X_n=W_{n+1}+QW_{n-1}$. In this study, we obtain some new identities between the Horadam sequence $\{W_n\}$ and the sequence $\{X_n\}$. By the help of these identities, we show that Diophantine equations such as $$x^2-Pxy-y^2={\pm}(b^2-Pab-a^2)(P^2+4),\\x^2-Pxy+y^2=-(b^2-Pab+a^2)(P^2-4),\\x^2-(P^2+4)y^2={\pm}4(b^2-Pab-a^2),$$ and $$x^2-(P^2-4)y^2=4(b^2-Pab+a^2)$$ have infinitely many integer solutions x and y, where a, b, and P are integers. Lastly, we make an application of the sequences $\{W_n\}$ and $\{X_n\}$ to trigonometric functions and get some new angle addition formulas such as $${\sin}\;r{\theta}\;{\sin}(m+n+r){\theta}={\sin}(m+r){\theta}\;{\sin}(n+r){\theta}-{\sin}\;m{\theta}\;{\sin}\;n{\theta},\\{\cos}\;r{\theta}\;{\cos}(m+n+r){\theta}={\cos}(m+r){\theta}\;{\cos}(n+r){\theta}-{\sin}\;m{\theta}\;{\sin}\;n{\theta},$$ and $${\cos}\;r{\theta}\;{\sin}(m+n){\theta}={\cos}(n+r){\theta}\;{\sin}\;m{\theta}+{\cos}(m-r){\theta}\;{\sin}\;n{\theta}$$.
STRUCTURE OF 3-PRIME NEAR-RINGS SATISFYING SOME IDENTITIES
Boua, Abdelkarim 17
In this paper, we investigate commutativity of 3-prime near-rings ${\mathcal{N}}$ in which (1, ${\alpha}$)-derivations satisfy certain algebraic identities. Some well-known results characterizing commutativity of 3-prime near-rings have been generalized. Furthermore, we give some examples showing that the restriction imposed on the hypothesis is not superfluous.
REPRESENTATIONS BY QUATERNARY QUADRATIC FORMS WITH COEFFICIENTS 1, 2, 5 OR 10
Alaca, Ayse;Altiary, Mada 27
We determine explicit formulas for the number of representations of a positive integer n by quaternary quadratic forms with coefficients 1, 2, 5 or 10. We use a modular forms approach.
SYMMETRIC PROPERTY OF RINGS WITH RESPECT TO THE JACOBSON RADICAL
Calci, Tugce Pekacar;Halicioglu, Sait;Harmanci, Abdullah 43
Let R be a ring with identity and J(R) denote the Jacobson radical of R, i.e., the intersection of all maximal left ideals of R. A ring R is called J-symmetric if for any $a,b,c{\in}R$, abc = 0 implies $bac{\in}J(R)$. We prove that some results on symmetric rings can be extended to J-symmetric rings in this general setting. We give many characterizations of such rings. We show that the class of J-symmetric rings lies strictly between the class of symmetric rings and the class of directly finite rings.
SOLUTIONS AND STABILITY OF TRIGONOMETRIC FUNCTIONAL EQUATIONS ON AN AMENABLE GROUP WITH AN INVOLUTIVE AUTOMORPHISM
Ajebbar, Omar;Elqorachi, Elhoucien 55
Given ${\sigma}:G{\rightarrow}G$ an involutive automorphism of a semigroup G, we study the solutions and stability of the following functional equations $$f(x{\sigma}(y))=f(x)g(y)+g(x)f(y),\;x,y{\in}G,\\f(x{\sigma}(y))=f(x)f(y)-g(x)g(y),\;x,y{\in}G$$ and $$f(x{\sigma}(y))=f(x)g(y)-g(x)f(y),\;x,y{\in}G$$, from the theory of trigonometric functional equations. (1) We determine the solutions when G is a semigroup generated by its squares. (2) We obtain the stability results for these equations, when G is an amenable group.
GENERALIZED DERIVATIONS WITH CENTRALIZING CONDITIONS IN PRIME RINGS
Das, Priyadwip;Dhara, Basudeb;Kar, Sukhendu 83
Let R be a noncommutative prime ring of characteristic different from 2, U the Utumi quotient ring of R, C the extended centroid of R and f($x_1,{\ldots},x_n$) a noncentral multilinear polynomial over C in n noncommuting variables. Denote by f(R) the set of all the evaluations of f($x_1,{\ldots},x_n$) on R. If d is a nonzero derivation of R and G a nonzero generalized derivation of R such that $$d(G(u)u){\in}Z(R)$$ for all $u{\in}f(R)$, then $f(x_1,{\ldots},x_n)^2$ is central-valued on R and there exists $b{\in}U$ such that G(x) = bx for all $x{\in}R$ with $d(b){\in}C$. As an application of this result, we investigate the commutator $[F(u)u,G(v)v]{\in}Z(R)$ for all $u,v{\in}f(R)$, where F and G are two nonzero generalized derivations of R.
TORSION MODULES AND SPECTRAL SPACES
Roshan-Shekalgourab, Hajar 95
In this paper we study certain modules whose prime spectrums are Noetherian or/and spectral spaces. In particular, we investigate the relationship between topological properties of prime spectra of torsion modules and algebraic properties of them.
A VANISHING THEOREM FOR REDUCIBLE SPACE CURVES AND THE CONSTRUCTION OF SMOOTH SPACE CURVES IN THE RANGE C
Ballico, Edoardo 105
Let $Y{\subset}{\mathbb{P}}^3$ be a degree d reduced curve with only planar singularities. We prove that $h^i({\mathcal{I}}_Y(t))=0$, i = 1, 2, for all $t{\geq}d-2$. We use this result and linkage to construct some triples (d, g, s), $d>s^2$, with very large g for which there is a smooth and connected curve of degree d and genus g, $h^0({\mathcal{I}}_C(s))=1$ and describe the Hartshorne-Rao module of C.
ALGEBRAIC CHARACTERIZATION OF GRAPHICAL DEGREE STABILITY
Anwar, Imran;Khalid, Asma 113
In this paper, we introduce the elimination ideal $I_D(G)$ associated to a simple finite graph G. We obtain the upper bound of Castelnuovo-Mumford regularity of elimination ideal for various classes of graphs.
NORMALITY ON JACOBSON AND NIL RADICALS
Kim, Dong Hwa;Yun, Sang Jo 127
This article concerns the normal property of elements on Jacobson and nil radicals which are generalizations of commutativity. A ring is said to be right njr if it satisfies the normal property on the Jacobson radical. Similarly a ring is said to be right nunr (resp., right nlnr) if it satisfies the normal property on the upper (resp., lower) nilradical. We investigate the relations between right duo property and the normality on Jacobson (nil) radicals. Related examples are investigated in the procedure of studying the structures of right njr, nunr, and nlnr rings.
NON-REAL GROUPS WITH EXACTLY TWO CONJUGACY CLASSES OF THE SAME SIZE
Robati, Sajjad Mahmood 137
In this paper, we show that $A_4$ is the only finite group with exactly two conjugacy classes of the same size having some non-real linear characters.
ON SOME PROPERTIES OF J-CLASS OPERATORS
Asadipour, Meysam;Yousefi, Bahmann 145
The notion of hypercyclicity was localized by J-sets and in this paper, we will investigate for an equivalent condition through the use of open sets. Also, we will give a J-class criterion, that gives conditions under which an operator belongs to the J-class of operators.
EXISTENCE OF INFINITELY MANY SOLUTIONS FOR A CLASS OF NONLOCAL PROBLEMS WITH DIRICHLET BOUNDARY CONDITION
Chaharlang, Moloud Makvand;Razani, Abdolrahman 155
In this article we are concerned with some non-local problems of Kirchhoff type with Dirichlet boundary condition in Orlicz-Sobolev spaces. A result of the existence of infinitely many solutions is established using variational methods and Ricceri's critical points principle modified by Bonanno.
A STUDY OF NEW CLASS OF INTEGRALS ASSOCIATED WITH GENERALIZED STRUVE FUNCTION AND POLYNOMIALS
Haq, Sirazul;Khan, Abdul Hakim;Nisar, Kottakkaran Sooppy 169
The main aim of this paper is to establish a new class of integrals involving the generalized Galué-type Struve function with different types of polynomials such as Jacobi, Legendre, and Hermite. Also, we derive integral formulas involving Legendre, Wright generalized Bessel and generalized hypergeometric functions. The results obtained here are general in nature, and many known and new integral formulas involving various types of polynomials can be deduced from them.
CHARACTERIZATIONS OF STABILITY OF ABSTRACT DYNAMIC EQUATIONS ON TIME SCALES
Hamza, Alaa E.;Oraby, Karima M. 185
In this paper, we investigate many types of stability, like (uniform stability, exponential stability and h-stability) of the first order dynamic equations of the form $$\{u^{\Delta}(t)=Au(t)+f(t),\;\;t{\in}{\mathbb{T}},\;t>t_0\\u(t_0)=x{\in}D(A),$$ and $$\{u^{\Delta}(t)=Au(t)+f(t,u),\;\;t{\in}{\mathbb{T}},\;t>t_0\\u(t_0)=x{\in}D(A),$$ in terms of the stability of the homogeneous equation $$\{u^{\Delta}(t)=Au(t),\;\;t{\in}{\mathbb{T}},\;t>t_0\\u(t_0)=x{\in}D(A),$$ where f is rd-continuous in $t{\in}{\mathbb{T}}$ and with values in a Banach space X, with f(t, 0) = 0, and A is the generator of a $C_0$-semigroup $\{T(t):t{\in}{\mathbb{T}}\}{\subset}L(X)$, the space of all bounded linear operators from X into itself. Here D(A) is the domain of A and ${\mathbb{T}}{\subseteq}{\mathbb{R}}^{{\geq}0}$ is a time scale which is an additive semigroup with property that $a-b{\in}{\mathbb{T}}$ for any $a,b{\in}{\mathbb{T}}$ such that a > b. Finally, we give illustrative examples.
REDHEFFER TYPE INEQUALITIES FOR THE FOX-WRIGHT FUNCTIONS
Mehrez, Khaled 203
In this note, new sharpened Redheffer type inequalities related to the Fox-Wright functions are established. As consequence, we show new Redheffer type inequalities for hypergeometric functions and for the four-parametric Mittag-Leffler functions with best possible exponents.
REGULARITIES OF MULTIFRACTAL HEWITT-STROMBERG MEASURES
Attia, Najmeddine;Selmi, Bilel 213
We construct new metric outer measures (multifractal analogues of the Hewitt-Stromberg measure) $H^{q,t}_{\mu}$ and $P^{q,t}_{\mu}$ lying between the multifractal Hausdorff measure ${\mathcal{H}}^{q,t}_{\mu}$ and the multifractal packing measure ${\mathcal{P}}^{q,t}_{\mu}$. We set up a necessary and sufficient condition under which the multifractal Hausdorff and packing measures are equivalent to the new ones. Also, we focus our study on some regularities for these given measures. In particular, we try to formulate a new version of Olsen's density theorem when ${\mu}$ satisfies the doubling condition. As an application, we extend the density theorem given in [3].
ODES RELATED WITH SOME NONLOCAL SCHRÖDINGER EQUATIONS AND ITS APPLICATIONS
Huh, Hyungjin 231
We obtain ordinary differential equations related with some nonlocal Schrödinger equations. As applications, we prove finite time blow-up or extinction of solutions.
ON A CLASS OF BIVARIATE MEANS INCLUDING A LOT OF OLD AND NEW MEANS
Raissouli, Mustapha;Rezgui, Anis 239
In this paper we introduce a new formulation of symmetric homogeneous bivariate means that depends on the variation of a given continuous strictly increasing function on (0, ${\infty}$). It turns out that this class of means includes a lot of known bivariate means among them the arithmetic mean, the harmonic mean, the geometric mean, the logarithmic mean as well as the first and second Seiffert means. Using this new formulation we introduce a lot of new bivariate means and derive some mean-inequalities.
SOME CONVERGENCE RESULTS FOR GENERALIZED NONEXPANSIVE MAPPINGS IN CAT(0) SPACES
Garodia, Chanchal;Uddin, Izhar 253
The aim of this paper is to study the convergence behaviour of the Thakur iteration scheme in CAT(0) spaces for generalized nonexpansive mappings. In the process, several relevant results of the existing literature are generalized and improved.
DUAL SURFACES DEFINED BY z = f(u) + g(v) IN SIMPLY ISOTROPIC 3-SPACE $\mathbb{I}_3^1$
Cakmak, Ali;Karacan, Murat Kemal;Kiziltug, Sezai 267
In this study, we define the dual surfaces by z = f(u) + g(v) and also classify these surfaces in $\mathbb{I}_3^1$ satisfying some algebraic equations in terms of the coordinate functions and the Laplace operators according to fundamental forms of the surface.
A NOTE ON DERIVATIONS OF A SULLIVAN MODEL
Kwashira, Rugare 279
Complex Grassmann manifolds $G_{n,k}$ are a generalization of complex projective spaces and have many important features some of which are captured by the Plücker embedding $f:G_{n,k}{\rightarrow}{\mathbb{C}}P^{N-1}$ where $N=\binom{n}{k}$. The problem of existence of cross sections of fibrations can be studied using the Gottlieb group. In a more generalized context one can use the relative evaluation subgroup of a map to describe the cohomology of smooth fiber bundles with fiber the (complex) Grassmann manifold $G_{n,k}$. Our interest lies in making use of techniques of rational homotopy theory to address problems and questions involving applications of Gottlieb groups in general. In this paper, we construct the Sullivan minimal model of the (complex) Grassmann manifold $G_{n,k}$ for $2{\leq}k<n$, and we compute the rational evaluation subgroup of the embedding $f:G_{n,k}{\rightarrow}{\mathbb{C}}P^{N-1}$. We show that, for the Sullivan model ${\phi}:A{\rightarrow}B$, where A and B are the Sullivan minimal models of ${\mathbb{C}}P^{N-1}$ and $G_{n,k}$ respectively, the evaluation subgroup $G_n(A,B;{\phi})$ of ${\phi}$ is generated by a single element and the relative evaluation subgroup $G^{rel}_n(A,B;{\phi})$ is zero. The triviality of the relative evaluation subgroup has its application in studying fibrations with fibre the (complex) Grassmann manifold.
EIGENVALUE MONOTONICITY OF (p, q)-LAPLACIAN ALONG THE RICCI-BOURGUIGNON FLOW
Azami, Shahroud 287
In this paper we study monotonicity of the first eigenvalue for a class of (p, q)-Laplace operators acting on the space of functions on a closed Riemannian manifold. We find the first variation formula for the first eigenvalue of a class of (p, q)-Laplacians on a closed Riemannian manifold evolving by the Ricci-Bourguignon flow and show that the first eigenvalue on a closed Riemannian manifold along the Ricci-Bourguignon flow is increasing provided some conditions hold. At the end of the paper, we find some applications in 2-dimensional and 3-dimensional manifolds.
ON 3-DIMENSIONAL LORENTZIAN CONCIRCULAR STRUCTURE MANIFOLDS
Chaubey, Sudhakar Kumar;Shaikh, Absos Ali 303
The aim of the present paper is to study the Eisenhart problems of finding the properties of second order parallel tensors (symmetric and skew-symmetric) on a 3-dimensional LCS-manifold. We also investigate the properties of Ricci solitons, Ricci semisymmetric, locally ${\phi}$-symmetric, ${\eta}$-parallel Ricci tensor and a non-null concircular vector field on $(LCS)_3$-manifolds.
YAMABE SOLITONS ON KENMOTSU MANIFOLDS
Hui, Shyamal Kumar;Mandal, Yadab Chandra 321
The present paper deals with a study of infinitesimal CL-transformations on Kenmotsu manifolds whose metric is a Yamabe soliton, and obtains sufficient conditions for such solitons to be expanding, steady and shrinking. Among others, we find a necessary and sufficient condition for a Yamabe soliton on a Kenmotsu manifold with respect to the CL-connection to be a Yamabe soliton on the Kenmotsu manifold with respect to the Levi-Civita connection. We find the necessary and sufficient condition for the Yamabe soliton structure to be invariant under the Schouten-Van Kampen connection. Finally, we construct an example of a steady Yamabe soliton on a 3-dimensional Kenmotsu manifold with respect to the Schouten-Van Kampen connection.
ON THE CURVATURE THEORY OF A LINE TRAJECTORY IN SPATIAL KINEMATICS
Abdel-Baky, Rashad A. 333
The paper studies the curvature theory of a line-trajectory of constant Disteli-axis, according to the invariants of the axodes of a moving body in spatial motion. A necessary and sufficient condition for a line-trajectory to be of constant Disteli-axis is derived. From this, new proofs of the Disteli formulae and concise explicit expressions of the inflection line congruence are directly obtained. The obtained explicit equations degenerate into a quadratic form, which can easily give a clear insight into the geometric properties of a line-trajectory of constant Disteli-axis via the theory of line congruences. The degenerated cases of the Burmester lines are discussed according to dual points having specific trajectories.
ON EXTREMALLY DISCONNECTED SPACES VIA m-STRUCTURES
Al-Omari, Ahmad;Al-Saadi, Hanan;Noiri, Takashi 351
In this paper, we introduce a modification of extremally disconnected spaces which is said to be m-extremally disconnected, and we obtain many characterizations of m-extremally disconnected spaces. The concepts of ${\ast}$-extremally disconnected spaces, ${\ast}$-hyperconnected spaces, and generalized hyperconnectedness serve as examples for this paper.
Statistical Shape Modelling
University of Basel
Simple shape modelling using the multivariate normal distribution
In this article, we will show how we can build a very simple model of hand shapes by modelling the variation in the length and the span of the hand using a bivariate normal distribution. This article serves two purposes: first it illustrates the concepts we introduced in the previous video using a concrete example. Second, it also introduces many key ideas behind statistical shape models. We conclude this article with a note on degenerate normal distributions, which often occur in shape modelling.
A simple shape model
In this first, very simplistic example of a shape model, we assume that the shape of a hand can be characterised by only two measurements: the length and the span of the hand (see Figure 1).
Figure 1: measurements for the length and the span of a hand shape.
Our task is to model how these two measurements vary in the family of hand shapes. In this course we always assume that shape variations can be modelled using a normal distribution. Hence, we assume that the random variables $l$ (length) and $s$ (span) are distributed according to the bivariate normal distribution
$$\left( \begin{array}{c} l \\ s \end{array} \right) \sim N\left(\left( \begin{array}{c} \mu_l \\ \mu_s \end{array} \right) , \left(\begin{array}{cc} \sigma_{ll}^2 & \sigma_{ls} \\ \sigma_{ls} & \sigma_{ss}^2 \end{array} \right) \right).$$
In this particular example, this assumption implies that
it is sensible to think of a mean hand (with average length and average span), around which all plausible hand shapes 'cluster';
it is equally likely for hands to be smaller or larger than this mean;
we are unlikely to ever observe shapes that are much larger or smaller than the mean.
Thinking about it for a while, we see that these are reasonable assumptions for the length and span of the hand (and indeed for many other anatomical measurements). Our intuition is confirmed by looking at some data. We have compiled a set of measurements of the hand length and span of 72 students. A scatterplot of these measurements is shown in Figure 2. We see that most measurements concentrate around a size of 19 cm for the length and 20 cm for the span.
Figure 2: scatterplot showing measurements of the hand span and length of 72 students.
Estimating the parameters from data
Once we have chosen the form of the model, we need to define the parameters. While we are in principle free to specify the model parameters based on prior knowledge about the anatomy, a more principled approach is to estimate the parameters from the given data. We use again the measurements shown in Figure 2. Let
$$M = \{\tilde{x}_1, \ldots, \tilde{x}_n\} = \left \{ \left(\begin{array}{c}\tilde{l}_1\\ \tilde{s}_1 \end{array}\right), \ldots, \left(\begin{array}{c}\tilde{l}_n \\ \tilde{s}_n \end{array}\right) \right \}$$
denote the set of measurements.
We can estimate the mean and covariance matrix from $M$ using the following standard formulas for the sample mean
$$\mu = \frac{1}{n}\sum_{i=1}^n \tilde{x}_i$$
and the sample covariance
$$\Sigma = \frac{1}{n-1} \sum_{i=1}^n (\tilde{x}_i - \mu)(\tilde{x}_i - \mu)^T.$$
Using the data shown in Figure 2 to estimate these parameters, we obtain the statistical shape model defined by the following bivariate normal distribution:
$$x \sim N\left(\left( \begin{array}{c} 19.3 \\ 20.2 \end{array} \right) , \left(\begin{array}{cc}2.1 & 1.4 \\ 1.4 & 3.8 \end{array} \right) \right ).$$
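To make the estimation step concrete, here is a minimal Python sketch of the two formulas above. The actual 72 measurements from Figure 2 are not reproduced here, so the sketch draws a synthetic stand-in sample from the fitted model:

```python
import numpy as np

# Hypothetical stand-in for the n = 72 (length, span) measurements in cm;
# the real dataset from Figure 2 is not reproduced here.
rng = np.random.default_rng(0)
measurements = rng.multivariate_normal(
    mean=[19.3, 20.2],
    cov=[[2.1, 1.4], [1.4, 3.8]],
    size=72,
)

# Sample mean: mu = (1/n) * sum of the x_i
mu = measurements.mean(axis=0)

# Sample covariance with the 1/(n-1) factor, matching the formula above.
# np.cov expects variables in columns when rowvar=False.
Sigma = np.cov(measurements, rowvar=False)

print(mu)     # roughly (19.3, 20.2) for this synthetic sample
print(Sigma)  # roughly [[2.1, 1.4], [1.4, 3.8]]
```

With the real measurements in place of the synthetic sample, `mu` and `Sigma` would reproduce the parameters of the model above.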
Using the model to reason about shapes
We can now use this model for exploring the shape family and reasoning about individual shapes. First, we can directly read off from the distribution that the marginal distribution for the length alone is $l \sim N(19.3, 2.1)$ and for the span we have $s \sim N(20.2, 3.8)$. Not surprisingly, the variation of the span is larger than the variation in length (this is due to the fact that people spread their fingers differently when taking the measurement). From the joint distribution we also see that the span and the length are correlated, with a correlation of $\rho = 1.4/\sqrt{2.1 \cdot 3.8} \approx 0.50$. Assume now that we are only given the measurement of the span. The correlation in the distribution allows us to predict the likely values for the length from this measurement, by computing the conditional distribution $p(l \mid s)$. Assume that we observe a span of 24. The conditional distribution is $l \mid s = 24 \sim N(20.7, 1.58)$. Thus the most likely length is 20.7. Looking at the variance, we also see that we still have a relatively large uncertainty in this prediction.
Finally, we can use the density function to compute how likely a given observation is by evaluating $p(x)$. This allows us, for example, to conclude that observing a hand of length 20 and span 18 is less likely ($p \approx 0.017$) than observing a hand with length 19 and span 21 ($p \approx 0.053$). Being able to quantify the likelihood of every shape also gives us the possibility to identify which shapes are unlikely to belong to the modelled shape family.
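The conditioning and density evaluations described above can be sketched in Python using the fitted parameters; the conditional mean and variance follow the standard closed-form expressions for a bivariate normal:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Fitted model from above: x = (length, span) ~ N(mu, Sigma)
mu = np.array([19.3, 20.2])
Sigma = np.array([[2.1, 1.4],
                  [1.4, 3.8]])

# Conditional distribution of the length l given an observed span s:
#   l | s ~ N( mu_l + (sigma_ls / sigma_ss^2) * (s - mu_s),
#              sigma_ll^2 - sigma_ls^2 / sigma_ss^2 )
def condition_length_on_span(s):
    cond_mean = mu[0] + Sigma[0, 1] / Sigma[1, 1] * (s - mu[1])
    cond_var = Sigma[0, 0] - Sigma[0, 1] ** 2 / Sigma[1, 1]
    return cond_mean, cond_var

mean_l, var_l = condition_length_on_span(24.0)
print(mean_l, var_l)  # about 20.7 and 1.58

# Comparing how likely two hand shapes are via the density function
model = multivariate_normal(mean=mu, cov=Sigma)
print(model.pdf([20, 18]))  # about 0.017
print(model.pdf([19, 21]))  # about 0.053
```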
Degenerate normal distributions
When we define more complex shape models, we usually have many more random variables than we have measurements from which to estimate the mean and covariance matrix. Using the above formula to estimate the covariance matrix results, in this case, in a covariance matrix that is only positive semi-definite. While it is still possible to define a valid multivariate normal distribution using a positive semi-definite covariance matrix, its density cannot be defined. Such a distribution is referred to as a degenerate multivariate normal distribution. We will see later in the course that we can still define a valid density if we restrict ourselves to the relevant subspace on which the distribution is supported.
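A small numerical illustration of this degeneracy: with fewer samples than dimensions, the estimated covariance matrix is rank-deficient, so some of its eigenvalues are zero (the sample size and dimensionality below are arbitrary):

```python
import numpy as np

# With fewer examples than dimensions (here n = 3 samples in d = 5
# dimensions), the sample covariance has rank at most n - 1 and is
# therefore only positive semi-definite: some eigenvalues are zero.
rng = np.random.default_rng(1)
data = rng.normal(size=(3, 5))       # n = 3 shapes, d = 5 variables
Sigma = np.cov(data, rowvar=False)

eigenvalues = np.linalg.eigvalsh(Sigma)  # sorted ascending
print(np.linalg.matrix_rank(Sigma))      # at most 2
print(eigenvalues)                       # smallest eigenvalues are ~0
```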
© University of Basel
Statistical Shape Modelling: Computing the Human Anatomy
Effectiveness of Self-Compassion Related Therapies: a Systematic Review and Meta-analysis
Alexander C. Wilson1,2,
Kate Mackintosh1,
Kevin Power3 &
Stella W. Y. Chan ORCID: orcid.org/0000-0003-4088-45281
Mindfulness volume 10, pages 979–995 (2019)
This systematic review and meta-analysis investigated whether self-compassion-related therapies, including compassion-focussed therapy, mindfulness-based cognitive therapy and acceptance and commitment therapy, are effective in promoting self-compassion and reducing psychopathology in clinical and subclinical populations. A total of 22 randomised controlled trials met inclusion criteria, with data from up to 1172 individuals included in each quantitative analysis. Effect sizes were the standardised difference in change scores between intervention and control groups. Results indicated that self-compassion-related therapies produced greater improvements in all three outcomes examined: self-compassion (g = 0.52, 95% CIs [0.32, 0.71]), anxiety (g = 0.46, 95% CIs [0.25, 0.66]) and depressive symptoms (g = 0.40, 95% CIs [0.23, 0.57]). However, when analysis was restricted to studies that compared self-compassion-related therapies to active control conditions, change scores were not significantly different between the intervention and control groups for any of the outcomes. Patient status (clinical vs. subclinical) and type of therapy (explicitly compassion-based vs. other compassion-related therapies, e.g. mindfulness) were not moderators of outcome. There was some evidence that self-compassion-related therapies brought about greater improvements in the negative than the positive subscales of the Self-Compassion Scale, although a statistical comparison was not possible. The methodological quality of studies was generally good, although risk of performance bias due to a lack of blinding of participants and therapists was a concern. A narrative synthesis found that changes in self-compassion and psychopathology were correlated in several studies, but this relationship was observed in both intervention and control groups. 
Overall, this review presents evidence that third-wave therapies bring about improvements in self-compassion and psychopathology, although not over and beyond other interventions.
Self-compassion is the tendency to soothe oneself with kindness and non-judgemental understanding in times of difficulty and suffering (Neff 2003b; Gilbert 2009). Greater levels of self-compassion have been linked to reduced mental health symptoms, with meta-analyses reporting large correlations between higher levels of self-compassion and lower levels of depression, anxiety and stress in adults (r = − 0.54; MacBeth and Gumley 2012) and adolescents (r = − 0.55; Marsh et al. 2018), as well as greater overall psychological well-being (r = 0.47; Zessin et al. 2015). Motivated by the link between self-compassion and mental health, a range of compassion-based therapies have been developed (for a review, see Leaviss and Uttley 2015), and a meta-analysis has provided preliminary evidence that such therapies produce moderate positive changes in self-compassion and other mental health outcomes (Kirby et al. 2017). However, it is not possible to say from this meta-analysis whether self-compassion-related therapies are effective in treating individuals with clinical or subclinical levels of mental health problems because many of the samples included were drawn from the general non-clinical population. This therefore calls for an updated meta-analysis examining the effectiveness of self-compassion-related therapies in clinical and subclinical populations.
Compassion-focussed therapy (CFT) is the intervention that most explicitly aims to modify self-compassion. It was developed for use with people with chronic mental health problems who experience high self-criticism and shame and who do not respond well to conventional therapies (Gilbert and Proctor 2006). CFT is grounded in a theoretical assumption that we have three affective systems (threat, drive and soothing) and that enhancing the soothing system helps us manage negative thoughts and emotions through promoting social bonding and positive self-repair behaviours (Gilbert 2009). Typical techniques used in CFT include self-compassionate meditation, imagery, letter writing and dialogic role-play (Gilbert 2009). Similar techniques are used in parallel therapies, such as mindful self-compassion therapy (MSC; Neff and Germer 2013). A meta-analysis (Kirby et al. 2017) has indicated that CFT and related therapies, such as MSC, improve levels of self-compassion (d = 0.70), as well as reduce anxiety (d = 0.49), depression (d = 0.64) and psychological distress (d = 0.47), in various groups both with and without mental health conditions.
While Kirby et al. (2017) exclusively reviewed CFT, a focus on self-compassion is not restricted to one modality of therapy. It is relevant across 'third-wave' therapies, such as mindfulness-based cognitive therapy (MBCT), dialectical behavioural therapy (DBT) and acceptance and commitment therapy (ACT). As such, the second edition of the MBCT manual (Segal et al. 2013) explicitly makes the promotion of self-compassion an aim of therapy and improvement in self-compassion as a mechanism of change in mindfulness therapies (for a review, see Gu et al. 2015). This is hardly surprising given that self-compassion and mindfulness are overlapping constructs. As such, mindfulness figures in Neff's (2003a) three-part conceptualisation of self-compassion, alongside self-kindness and common humanity. Self-compassion is directly relevant to DBT, given that the DBT manual for borderline personality disorder includes several exercises designed to encourage self-compassion (Linehan 1993). Finally, self-compassion has also been linked theoretically to the core processes of ACT, in particular acceptance, cognitive diffusion, present moment awareness and self as context, which are all aimed at reducing self-criticism (Neff and Tirch 2013). Based on the similarity between self-compassion and the underlying constructs in MBCT, DBT and ACT, it is reasonable to view these different interventions as part of a family of self-compassion-related therapies that could be evaluated as a group.
Just as the clinical significance of self-compassion is not limited to one therapeutic modality, it is also not limited to one psychological diagnosis. A tendency to be self-critical, which is viewed as the opposite of self-compassion, is seen as a universal feature of psychopathology (Clark et al. 1994; Gilbert and Proctor 2006). Also, in addition to depression, anxiety and stress (MacBeth and Gumley 2012), low self-compassion has been linked to symptomology in people with persecutory delusions (Collett et al. 2016), auditory hallucinations (Dudley et al. 2018), eating disorders (Ferreira et al. 2013), and Cluster C personality disorders (Schanche et al. 2011). Psychotherapies that target self-compassion are therefore likely to be relevant across disorders. This is consistent with a transdiagnostic approach to therapy, which recognises that psychological disorders are often comorbid, share causal factors and have blurred diagnostic boundaries (Newby et al. 2015).
A final issue for consideration is that self-compassion is not necessarily a single, unitary construct. The most commonly used psychometric measure of self-compassion, the Self Compassion Scale (Neff 2003b), comprises six separate subscales including three positive and three negative: the positive subscales include self-kindness, common humanity and mindfulness, while the negative subscales include self-judgment, isolation and over-identification. These subscales have differing relationships with other psychological variables. Muris and Petrocchi (2017) found that the negative items were more strongly related to psychopathology than the positive items, and Neff (2016) found general trends for improvements in the negative subscales to predict reduced psychopathology and for the positive subscales to predict increased well-being in a randomised controlled trial (RCT) of MSC therapy. Given these differential relationships between self-compassion and mental health outcomes, the effect of therapy on self-compassion should be investigated as a multifaceted phenomenon.
This meta-analysis aimed to evaluate the effectiveness of self-compassion-related therapies, compared to a control condition, in clinical and subclinical populations. Our review extends previous reviews (Kirby et al. 2017; Leaviss and Uttley 2015) in three important ways. First, it is more inclusive of the type of therapies; thus, we use the general term 'self-compassion-related therapies' rather than CFT to refer to the therapies included in our review, as we take any intervention with the stated goal of directly or indirectly improving an individual's level of self-compassion as relevant. Second, we focussed purely on groups with classifiable mental health symptoms presenting at either a subclinical or clinical level. Third, we assessed whether particular aspects of self-compassion are more modifiable in therapy than others. The previous reviews (Kirby et al. 2017; Leaviss and Uttley 2015) indicated that we should expect therapeutic outcome to show considerable variety across studies, and so we hypothesised that improvements in self-compassion and psychopathology would be moderated by the clinical status of participants and the type of control group and intervention used in the studies.
The review was conducted following the guidance by the Centre for Reviews and Dissemination (CRD 2009). This was originally designed as a systematic review and a protocol was submitted to the PROSPERO International prospective register of systematic reviews (CRD42016033532; Mackintosh 2016). Due to the large number of studies identified during the literature search following submission of the protocol, we decided to limit our analysis to RCTs, as these offer the highest standard of evidence. The number of studies also allowed us to offer a quantitative, rather than purely qualitative, review, thereby providing more information for researchers and clinicians.
Identification and Selection of Studies
A comprehensive literature search was conducted in July 2017 using five databases: PsycINFO, Medline, Embase, CINAHL and Cochrane Library. The following keywords were used: 'compassion focused therapy*' or 'compassionate mind training' or 'mindful self-compassion' or ('mindfulness based' or 'MBCT' or 'MBSR' or 'acceptance and commitment therapy*' or 'ACT' or 'dialectical behaviour* therapy*' or 'DBT' or 'intervention' or 'treatment' and 'self-compassion' or 'self-kindness'). After removal of duplicates, studies were screened based on title. Next, abstracts and full-text articles were independently screened by two researchers according to the inclusion criteria (see below). Any ambiguities were resolved in discussion. Reference lists of the final set of studies included in the review were screened for further relevant studies, as were the reference lists of three previous reviews (Kirby et al. 2017; Leaviss and Uttley 2015; MacBeth and Gumley 2012). Additional searches were conducted on the publications of two key authors in the field of self-compassion (Neff and Gilbert) and publication lists on relevant websites (www.self-compassion.org and www.compassionatemind.co.uk).
For inclusion, studies had to be RCTs evaluating an intervention with a self-compassion component against either an active intervention or a waitlist/treatment as usual control. We required the intervention to include at least one face-to-face session with a trained therapist. The study population had to consist of adults of 18 years and over who had a clinical or subclinical mental health problem, as assessed by formal clinical diagnosis or by a validated self-report measure. Self-compassion is relevant to a range of mental health problems, so this review was not restricted to any specific diagnosis. Studies needed to include a standardised measure of self-compassion. Where possible, we also extracted depression and anxiety scores. We focussed on symptoms of depression and anxiety as key outcome variables since these have been identified as linked to self-compassion in previous meta-analyses (MacBeth and Gumley 2012; Marsh et al. 2018). They are also common outcomes in RCTs, so it was likely that we would identify a sufficient number of studies to calculate summary estimates of the effect of therapy on these two variables. Finally, all included studies needed to be published in a peer-reviewed journal in English.
Characteristics of the identified studies were independently extracted by two researchers; see Table 1 below and Tables 4 and 5 in the Appendix.
Table 1 Characteristics of studies included in the review
For the meta-analysis, we extracted outcome data for self-compassion from each paper and for depression and/or anxiety where these were reported. We extracted means and SDs pre- and post-treatment and sample sizes in the intervention and control groups. Where an intention-to-treat sample was used, we extracted the full sample size at randomisation, and where per-protocol results were given, we took the sample size of study completers, so that the weighting of studies in the meta-analysis would be proportional to the amount of data contributed. Generally, raw means were extracted, although in two cases (Kelly and Carter 2015; Kelly et al. 2017) only estimated means from multilevel modelling were reported.
We planned to accept any standardised measure of self-compassion, though in practice this meant either the Self-Compassion Scale (SCS; Neff 2003b) or the Self-Compassion Short-Form (SCS-SF; Raes et al. 2011) was required, as these are the only validated measures of the construct. The SCS is a 26-item self-report questionnaire, including six subscales, self-kindness, self-judgement, common humanity, isolation, mindfulness and over-identification. The first two subscales include 5 items and the others include 4 items; the total score is computed as the average of the six subscales. The SCS-SF includes 12 items in total (2 from each scale); SCS-SF and SCS full scores were reported to be almost perfectly correlated (r = 0.97; Raes et al. 2011). As part of our review, we were interested in addressing whether different facets of self-compassion were more modifiable in therapy than others. Some studies reported breakdowns on the subscales, so we extracted all these scores; where results were not fully reported, we contacted the authors.
We accepted any psychometrically validated measure of depression and anxiety. If studies reported more than one measure of depression or anxiety, we selected the primary outcome or pooled the results if there was no a priori reason to favour one measure. In practice, this situation occurred only twice during data extraction. Kingston et al. (2015) reported anxiety and depression using both the Hospital Anxiety and Depression Scale (HADS) and Profile of Mood States (POMS). As the HADS was the clinical screening tool, we used this in our analysis. Hou et al. (2013) reported separate results for the State Anxiety Inventory (SAI) and Trait Anxiety Inventory (TAI), and to avoid an arbitrary choice of one over the other, we averaged the means.
We assessed the quality of the studies in the review using two systems. First, we used the Cochrane Collaboration's tool for assessing risk of bias in RCTs (Higgins et al. 2011). This is the standard framework used for assessing whether there is low, high or uncertain risk of bias within studies. We checked for bias arising from: the allocation of individuals into groups (selection bias), the blinding of participants and personnel to condition during the intervention (performance bias), blinding during assessment (detection bias), missing data (attrition bias) and selective reporting of results (reporting bias). The above was supplemented by a checklist adapted from Downs and Black (1998) for evaluating randomised and non-randomised studies of healthcare interventions, and informed by the changes made by Cahill et al. (2010) for assessing practice-based research on psychological therapies. The final checklist consisted of 27 items covering four areas: reporting (11 items), external validity (4 items), internal reliability of measurement and treatment (5 items) and internal reliability of confounding variables/selection bias (7 items). Quality was assessed independently by two researchers, and inter-rater reliability was assessed using Cohen's Kappa statistic.
Meta-analysis was carried out in the open-source software environment R (version 3.4.0) using the compute.es (Del Re 2013) and metafor packages (Viechtbauer 2010). Using the mes() function in compute.es, we calculated the standardised mean difference effect size for each comparison of a compassion-related intervention with a control condition and the associated sampling variance. For the effect size, the difference in change scores between the intervention group (group 1) and the control group (group 2) was divided by the pooled pre-study standard deviation, as shown in the formula below:
$$ \frac{\left(M_{\text{Group 1, post-study}}-M_{\text{Group 1, pre-study}}\right)-\left(M_{\text{Group 2, post-study}}-M_{\text{Group 2, pre-study}}\right)}{\sqrt{\dfrac{\text{SD}_{\text{Group 1, pre-study}}^{2}\left(n_{\text{Group 1}}-1\right)+\text{SD}_{\text{Group 2, pre-study}}^{2}\left(n_{\text{Group 2}}-1\right)}{N-2}}} $$
This followed the meta-analysis of compassion-based interventions by Kirby et al. (2017), and the metric was adjusted as suggested by Hedges and Olkin (1985) to correct for biased estimation in small samples (giving Hedges' g).
Some studies compared an intervention group to two different control groups. In these cases, we calculated effect sizes for each comparison of an intervention to a control. To correct for these correlated comparisons (Higgins and Green 2011), we used a multilevel meta-analytic model (Konstantopoulos 2011; Weisz et al. 2013). Using the function rma.mv() in the metafor package, we ran separate multilevel models with restricted maximum likelihood estimation for self-compassion, depression and anxiety, including ~group|study as a random term in each model. The summary effects produced by the models were interpreted according to Cohen's (1988) guidelines: Hedges' g of 0.20 as a small effect, 0.50 as medium and 0.80 as large. We tested whether the effect sizes for self-compassion, depression and anxiety differed in size using a Wald-type test.
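As a rough illustration of the effect-size computation (a sketch, not the compute.es implementation itself, and with made-up numbers rather than data from any included study), the formula with Hedges' small-sample correction can be written as:

```python
import math

def hedges_g(m1_pre, m1_post, sd1_pre, n1,
             m2_pre, m2_post, sd2_pre, n2):
    """Standardised difference in change scores between an intervention
    group (1) and a control group (2), divided by the pooled pre-study
    SD, with Hedges' small-sample correction applied."""
    N = n1 + n2
    diff = (m1_post - m1_pre) - (m2_post - m2_pre)
    pooled_sd = math.sqrt(
        (sd1_pre ** 2 * (n1 - 1) + sd2_pre ** 2 * (n2 - 1)) / (N - 2)
    )
    d = diff / pooled_sd
    correction = 1 - 3 / (4 * (N - 2) - 1)  # Hedges' correction factor
    return d * correction

# Illustrative numbers only (not taken from any study in the review)
g = hedges_g(m1_pre=15.0, m1_post=19.0, sd1_pre=4.0, n1=30,
             m2_pre=15.5, m2_post=16.5, sd2_pre=4.0, n2=30)
print(round(g, 2))  # 0.74
```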
The presence of heterogeneity was assessed using the Q-statistic, which tests whether the sum of weighted squared deviations about the summary effect size is greater than expected by sampling error. This has a χ2 distribution under the null hypothesis. Heterogeneity was quantified using the I2 statistic, calculated as (Q − df) / Q and expressed as a percentage; 0% indicates no observed heterogeneity, while 25, 50 and 75% indicate low, moderate and high heterogeneity, respectively (Higgins et al. 2003).
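The Q and I² computations can be sketched as follows, with illustrative effect sizes and sampling variances rather than the review's actual data:

```python
def q_statistic(effects, variances):
    """Cochran's Q: sum of weighted squared deviations about the
    fixed-effect summary, using inverse-variance weights."""
    weights = [1 / v for v in variances]
    summary = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    return sum(w * (e - summary) ** 2 for w, e in zip(weights, effects))

def i_squared(q, k):
    """I^2 = (Q - df) / Q expressed as a percentage, floored at 0."""
    df = k - 1
    return max(0.0, (q - df) / q) * 100

# Illustrative effect sizes and sampling variances (not review data)
effects = [0.2, 0.5, 0.8, 0.4]
variances = [0.04, 0.05, 0.04, 0.06]
q = q_statistic(effects, variances)
print(round(q, 2), round(i_squared(q, len(effects)), 1))
```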
As detailed in the aims above, we examined if three characteristics of the studies contributed to heterogeneity: the study population (clinical or subclinical), the modality of therapy (explicitly compassion-based, i.e. CFT, or another type of intervention) and the kind of control group (active or waitlist/treatment as usual). To assess the importance of these study-level variables, we carried out meta-regressions by adding the three moderators to the multilevel models for self-compassion, depression and anxiety. Variables were dummy-coded such that positive coefficients for population, therapy type and control type meant a greater effect for clinical populations, CFT and studies with an active control, respectively. Where a moderator was significant, we split the data set by that moderator to get effect size estimates within the subgroups.
Publication bias and sensitivity of the meta-analysis to influential cases were tested. Individual effects were identified as potentially influential if they had leverage, defined by a hat value above 2/n, a conservative cut-off (Hoaglin and Kempthorne 1986), or were discrepant, with a standardised residual of ± 3. The models were compared with and without any effects that screened positive for leverage or discrepancy. Publication bias is usually investigated by funnel plots. However, traditional funnel plots, plotted with the effect sizes against their standard error, do not provide a reliable assessment of publication bias when effects are nested and when there is significant heterogeneity, and so the residuals of the moderated models, rather than the raw effects, were plotted here instead (Nakagawa and Santos 2012). This has the effect of checking for publication bias when heterogeneity has been accounted for. We ran Egger's test for the asymmetry of the funnel plot showing the residuals (Egger et al. 1997). In the case of possible publication bias, we ran the trim-and-fill procedure on the residuals of the moderated model to approximate the number of hypothetical unpublished studies 'missing' from the data set, and as advised by the originators of the procedure, it was used as a sensitivity analysis rather than an adjustment (Duval and Tweedie 2000).
See Fig. 1 for details on the selection of papers for the meta-analysis. A total of 22 studies met our inclusion criteria. Just three of these studies were included in the only other meta-analysis of compassion-based interventions (Kirby et al. 2017). One study (Huijbers et al. 2015), despite not reporting self-compassion outcomes, was included because a second study by the same authors (Huijbers et al. 2017) indicated that the SCS was administered in the RCT. We were able to obtain a breakdown of the SCS results from the authors to be included in this meta-analysis.
Flowchart showing number of records at each stage of the literature screening
Study Characteristics
See Table 1 for a description of all 22 studies. Table 4 in the Appendix presents further details on the studies, including their main outcomes and information on the therapists, treatment adherence and attrition. Of the 22 RCTs included in the review, 13 evaluated mindfulness-based therapies, 1 a day-long ACT workshop and 8 compassion-based interventions (CFT or related compassionate mind/loving-kindness approaches). The literature search did not identify any studies examining the effect of DBT on self-compassion.
Of the 13 mindfulness-based interventions, the majority (n = 9) were closely matched in format, following the treatment protocols of either Kabat-Zinn (1990) or Segal et al. (2002), with manualised weekly group sessions typically lasting between 2 and 2.5 h over 8 weeks. The other four mindfulness interventions were more heterogeneous: two were also an 8-week course but with shorter sessions, one was a longer course and one was a self-help intervention with an initial face-to-face orientation session.
The compassion-based interventions (n = 8) were more variable in format. Two had relatively minimal therapist contact time: each comprised of one face-to-face orientation session followed by 3 or 4 weeks of guided self-help. Six compassion-focussed interventions followed a more intensive course format, but the weekly sessions were shorter (1 to 1.5 h in duration) and the length of the course was more variable (between 7 and 12 weeks). Also, some had a group format (n = 3), whereas others involved one-to-one sessions (n = 3).
In 11 of the 22 studies, there was a comparison group engaging in an active control condition. In three of these (Armstrong and Rimes 2016; Hou et al. 2014; Kuyken et al. 2010), the control condition was not closely matched to the intervention in duration of social contact. In 15 studies, the control condition was waitlist or treatment as usual (TAU). In four of these studies (Huijbers et al. 2017; Kelly et al. 2017; Key et al. 2017; Kingston et al. 2015), the control groups provided a high level of comparison with the intervention group, since the participants were under the care of an outpatient clinic with consistent psychotherapy and/or pharmacotherapy. In the other 11 studies, the waitlist/TAU group contained participants with little treatment or no consistent level of treatment. Where treatment adherence was reported, it appeared high, although few studies gave a comprehensive report of adherence. Therapist competence was reported in most studies, and where it was, experienced, qualified mental health professionals delivered the interventions.
Studies included in this review report summary data for a total of 1262 participants at baseline, with individual sample sizes between 16 and 173 (median = 40). Half the studies took place in the USA (n = 8) or the UK (n = 4), with the remaining studies spread across Canada (n = 2), the Netherlands (n = 2) and one each in Japan, China, Ireland, Portugal, Norway and Israel. In total, 73.9% of the individuals were female, and mean age was 40.0 years (SD = 10.7). Data from 10 studies was based on an intention-to-treat (ITT) sample and 12 on per-protocol (PP) participants. Of the 12 papers with PP results, 4 repeated the analyses with ITT samples and reported finding equivalent results. Across all studies, post-intervention data was available on 78.8% of participants randomised to a condition; the median number of participants providing post-intervention data was 71.8% (range = 56.3–92.7%) in PP samples and 85.3% (range = 75.6–100%) in ITT samples, indicating higher attrition in PP samples. The clinical characteristics of the included studies were as follows: peri-clinical anxiety/depression (n = 4), recurrent depression in full/partial remission (n = 3), treatment-resistant depression (n = 1), social anxiety disorder (n = 2), trauma symptoms/PTSD (n = 2), eating disorder (n = 3), obsessive-compulsive disorder (n = 1), high stress (n = 2) and high self-criticism/low self-compassion (n = 4). The studies of high stress and high self-criticism/low self-compassion all selected their participants using thresholds on screening measures that might be taken as indicating a risk for developing psychopathology. See Table 1 for the characteristics of the samples.
See Table 2 for the assessment of risk of bias. There was variable risk of bias across studies, with the main problem being performance bias. There was little evidence that the participants in the experimental and control conditions were likely to have similar expectations for treatment gains, as the control groups often failed to provide a comparable level of treatment to the experimental groups. For instance, a patient is unlikely to have the same expectations of improvement if they are offered a minimal self-help course compared to weekly group therapy. See Tables 6 and 7 in the Appendix for further quality ratings. Inter-rater reliability of the two assessors was high (Kappa = 0.83).
Table 2 Assessment of risk of bias across the studies included in the review
Effects of Self-Compassion-Related Interventions on Self-Compassion, Depression and Anxiety
Figure 2 shows forest plots for all three outcomes, with each individual effect size representing a comparison between a self-compassion-related intervention and a control condition.
Forest plots showing effect sizes for the three main outcomes: self-compassion, anxiety and depression. Where authors and year are followed by (1) or (2), this indicates the comparison between the intervention group and either control group 1 or control group 2. See Table 1 for details regarding the conditions
There were 26 comparisons that measured the self-compassion outcome, covering a total of 1172 individuals. The overall effect was medium-sized, indicating greater improvement in self-compassion in the self-compassion intervention than in the control, g = 0.52, 95% CIs [0.32, 0.71], p < 0.001. As can be seen in the forest plot, 19 of the 26 comparisons were at least small-sized, and 15 were medium-sized. Across the studies, heterogeneity was moderate, Q(25) = 63.63, p < 0.001, I2 = 60.7%.
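The pooled g, Q and I2 statistics reported in this section come out of standard random-effects pooling (I2 as in Higgins et al. 2003). As a minimal sketch of that machinery — using one common between-study variance estimator (DerSimonian–Laird) and made-up effect sizes, not the study data — the calculation looks like:

```python
def random_effects_summary(effects, variances):
    """Random-effects pooling via the DerSimonian-Laird estimator (illustrative sketch).

    effects   : per-comparison Hedges' g values
    variances : their sampling variances (SE squared)
    Returns (pooled g, Q statistic, I^2 as a percentage).
    """
    w = [1.0 / v for v in variances]                        # fixed-effect weights
    fixed = sum(wi * g for wi, g in zip(w, effects)) / sum(w)
    q = sum(wi * (g - fixed) ** 2 for wi, g in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi * wi for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                           # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]            # random-effects weights
    pooled = sum(wi * g for wi, g in zip(w_re, effects)) / sum(w_re)
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0   # Higgins & Thompson I^2
    return pooled, q, i2
```

The review's own analyses were run in R (the compute.es package is cited in the references); the helper above is only meant to show how the pooled g, Q and I2 relate to the per-comparison weights.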
There were 17 comparisons that measured anxiety, covering a total of 665 individuals. The overall effect was borderline medium for anxiety, g = 0.46, 95% CIs [0.25, 0.66], p < 0.001. Heterogeneity was small, Q(16) = 28.67, p = 0.041, I2 = 44.2%. There were 22 comparisons that measured depressive symptoms, covering a total of 1063 individuals. A small to medium effect was found for depressive symptoms, g = 0.40, 95% CIs [0.23, 0.57], p < 0.001. There was evidence of moderate heterogeneity, Q(21) = 51.09, p < 0.001, I2 = 58.9%.
We tested whether the magnitude of the summary effect for self-compassion differed significantly from those for anxiety and depression using Wald-type tests. Both tests were non-significant: z = 0.42, p = 0.676; z = 0.80, p = 0.380. However, at the level of individual studies, there was evidence that interventions often varied in their impact on self-compassion and the psychopathology measures. The average absolute difference between a study's effect size for self-compassion and for depression was 0.45 (SD = 0.46), and between self-compassion and anxiety, it was 0.41 (SD = 0.41).
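A Wald-type comparison of two summary effects reduces to a z test on their difference. A minimal sketch, assuming the two pooled estimates are independent normal variables (the g and SE values in the example are placeholders, not the review's):

```python
import math

def wald_difference(g1, se1, g2, se2):
    """Wald-type z test for the difference between two summary effects.

    Treats the two pooled estimates as independent normal variables;
    se1/se2 are the standard errors of the pooled Hedges' g values.
    Returns (z, two-sided p).
    """
    z = (g1 - g2) / math.sqrt(se1 ** 2 + se2 ** 2)
    p = math.erfc(abs(z) / math.sqrt(2.0))  # two-sided p from the standard normal
    return z, p
```

With placeholder values, `wald_difference(0.5, 0.1, 0.4, 0.1)` gives z of about 0.71 and a two-sided p of about 0.48, i.e. no detectable difference — the same qualitative conclusion the review draws for its outcome comparisons.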
For the three meta-analytic models, all standardised residuals were between − 2.68 and 2.10, and no hat values were flagged, suggesting that there were no influential outliers.
Types of Control, Intervention and Population as Possible Moderators of Outcome
Meta-regressions indicated that the type of control (active vs. waitlist/TAU) was the only study-level moderator of outcome. Study population (clinical or subclinical) and type of therapy (explicitly compassion-based, i.e. CFT, or another type of intervention) showed no effects on outcome. In Table 3, the studies are categorised according to the three moderators.
Table 3 Studies classified by hypothesised moderators
For self-compassion, the omnibus test of the moderators was significant, QM(3) = 12.92, p = 0.005. Residual heterogeneity was also significant, QE(22) = 42.05, p = 0.006, indicating that 47.7% of variability remained unexplained by the model. In the meta-regression for self-compassion, type of control was significant (β = 0.54, SE = 0.16, p < 0.001), but type of intervention (β = 0.20, p = 0.350) and population (β = − 0.05, p = 0.787) were not. For anxiety, the omnibus test of the moderators was also significant, QM(3) = 18.62, p < 0.001, and a non-significant test for residual heterogeneity indicated that all variability was explained, QE(13) = 10.05, p = 0.690. For anxiety, type of control was also a significant predictor (β = 0.53, SE = 0.16, p < 0.001). Neither type of intervention (β = − 0.31, p = 0.169) nor population (β = 0.27, p = 0.082) was significant. Finally, the full model for depression was not significant, QM(3) = 3.84, p = 0.279, and type of control narrowly missed significance, though it retained a sizeable coefficient, as in the other models, β = 0.38, SE = 0.20, p = 0.066. Type of intervention (β = − 0.10, p = 0.610) and population (β = 0.16, p = 0.378) were non-significant as moderators. Residual heterogeneity was significant, QE(18) = 40.34, p = 0.002, with 55.4% of variability remaining unexplained.
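Each of these moderator models is a weighted meta-regression. With a single binary moderator (e.g. active vs. passive control) and the between-study variance treated as known, the coefficient reduces to a weighted least-squares slope. A minimal illustrative sketch with made-up data, omitting the τ² estimation and the QM/QE tests that metafor-style software provides:

```python
def meta_regression_binary(effects, variances, moderator, tau2=0.0):
    """Weighted least-squares meta-regression with one binary moderator.

    moderator : 0/1 indicator per study (e.g. 1 = active control)
    tau2      : between-study variance, assumed known here for simplicity
    Returns (intercept b0, moderator coefficient b1).
    """
    w = [1.0 / (v + tau2) for v in variances]
    sw = sum(w)
    swx = sum(wi * x for wi, x in zip(w, moderator))
    swy = sum(wi * y for wi, y in zip(w, effects))
    swxx = sum(wi * x * x for wi, x in zip(w, moderator))
    swxy = sum(wi * x * y for wi, x, y in zip(w, moderator, effects))
    # closed-form weighted least squares for y = b0 + b1 * x
    b1 = (sw * swxy - swx * swy) / (sw * swxx - swx ** 2)
    b0 = (swy - b1 * swx) / sw
    return b0, b1
```

In this parameterisation, b0 is the pooled effect for the reference subgroup and b1 the difference associated with the moderator — the same interpretation as the β coefficients reported above.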
Given the results of the meta-regressions that indicated substantial between-studies differences based on the type of control used, we ran subgroup analysis to extract summary estimates at the subgroup level. For studies with a passive control condition, summary effects were moderate: self-compassion, g = 0.72 [0.53, 0.90], p < 0.001; depression, g = 0.56 [0.38, 0.73], p < 0.001; anxiety, g = 0.69 [0.44, 0.93], p < 0.001. For studies with an active control condition, effect sizes were not significant, though self-compassion only marginally missed significance, g = 0.27 [− 0.04, 0.58], p = 0.092. Estimates for the other outcomes were as follows: anxiety, g = 0.15 [− 0.05, 0.35], p = 0.138; depression, g = 0.17 [− 0.17, 0.52], p = 0.324.
It is worth noting that there was substantial variability in the nature of the passive TAU control groups: they varied from having no treatment at all to having ongoing outpatient care. We therefore conducted exploratory analysis beyond our planned moderator analysis. In our last subgroup analysis, we grouped studies with a high-level TAU control together with those with an active control group. In practice, this meant re-categorising four studies where all or most control participants received psychological treatment and/or psychotropic medication in their usual care (Huijbers et al. 2017; Kelly et al. 2017; Key et al. 2017; Kingston et al. 2015) as having active rather than passive controls. Under this subgrouping, estimates for the three outcomes were as follows: self-compassion, g = 0.35 [0.09, 0.62], p = 0.010; depression, g = 0.16 [− 0.15, 0.47], p = 0.302; and anxiety, g = 0.23 [0.00, 0.45], p = 0.049.
Effects of Self-Compassion-Related Interventions on Subscales of the SCS
Sixteen studies used the full form of the SCS; the rest used the SCS-SF, which does not allow reliable calculation of subscale scores (Raes et al. 2011). We had access to the breakdown of subscores in 8 studies, either through published data or through unpublished data obtained directly from authors, with a total sample of 326 people. We ran a random effects meta-analysis on each scale. Effects were similar across scales: typically medium in size, though there was a tendency for negative subscales to be associated with slightly higher effects. Note that all these studies employed passive TAU control groups. Bearing in mind that our moderator analysis above found that studies with active controls were associated with lower effects, it is possible that the following results overestimate the true effect of treatment; nonetheless, these results give a sense of the relative effect on different subscales. Summary effects were as follows: self-kindness, g = 0.58 [0.37, 0.80]; self-judgement, g = 0.54 [0.31, 0.77]; common humanity, g = 0.46 [0.24, 0.68]; isolation, g = 0.63 [0.41, 0.85]; mindfulness, g = 0.41 [0.19, 0.63]; and over-identification, g = 0.72 [0.48, 0.96]; all ps < 0.001. There was no evidence of heterogeneity in any analysis, all p values ≥ 0.301. See Fig. 3 for forest plots of each SCS subscale.
Forest plots showing effect sizes for the six subscales of the SCS
Publication bias was assessed by inspecting funnel plots and performing Egger's regression to test for asymmetry in the plots. Contour-enhanced funnel plots of the observed effects and funnel plots of residuals are shown in Fig. 4 (Appendix). As explained in the "Method," Egger's test was run on the residuals of the models including our study-level moderators. Results for Egger's test were as follows: self-compassion, p = 0.136; anxiety, p = 0.737; depression, p = 0.851. Given that p = 0.1 is taken as the threshold for significance in Egger's test, we took the borderline result for self-compassion as reason for investigating the sensitivity of the self-compassion effects to publication bias by running the trim-and-fill procedure on the residuals for the moderated model for self-compassion. This indicated that 3 studies were 'missing' from the left of the plot and that adding these would adjust the summary effect slightly, β = 0.08.
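Egger's test works by regressing each study's standardized effect (g/SE) on its precision (1/SE); an intercept far from zero indicates funnel-plot asymmetry (Egger et al. 1997). A minimal sketch of the intercept calculation, with illustrative data rather than the review's, and omitting the t test on the intercept:

```python
def egger_intercept(effects, ses):
    """Intercept of Egger's regression: standardized effect on precision.

    An intercept far from zero suggests funnel-plot asymmetry.
    The significance test on the intercept is omitted in this sketch.
    """
    y = [g / s for g, s in zip(effects, ses)]   # standardized effects
    x = [1.0 / s for s in ses]                  # precisions
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return ybar - slope * xbar                  # the regression intercept
```

For a perfectly symmetric funnel — a constant true effect with no small-study bias — the intercept is zero; departures from zero grow as smaller (less precise) studies report systematically larger effects.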
Funnel plots of observed effects and residuals for all measures. (A) Funnel plots of the observed effects (Hedge's g) for individual studies against study precision, represented here by SE of Hedge's g. Pseudo-confidence regions are shown by the light (0.05 < p < 0.01) and dark grey bands (0.01 < p < 0.001). Possible publication bias is indicated if more studies fall in these regions than in the white area inside the bands, particularly at the bottom of the plots compared to the top (i.e. as a function of study precision). However, differences may also relate to heterogeneity across studies. In the plots above, effect size and precision are confounded by the type of control, indicating that heterogeneity is a factor. The dotted blue line marks the overall summary effect. (B) Funnel plots showing residuals, with heterogeneity relating to the moderators removed, against the same scale for study precision
This review evaluated the effectiveness of interventions aiming to increase self-compassion among individuals with a mental disorder or a subclinical psychological difficulty. Our results are somewhat equivocal. On the one hand, we found that self-compassion-related therapies, compared to a control condition, successfully increase self-compassion and reduce levels of depression and anxiety with medium effect sizes. These results indicate that self-compassion is a psychological characteristic that can be modified in therapy, and this is of clinical interest given the relationship between self-compassion and psychopathology (MacBeth and Gumley 2012; Marsh et al. 2018). However, this meta-analysis also found that self-compassion-related therapies did not produce better outcomes than active control conditions. This indicates that such therapies are unlikely to have any specific effect over and above the general benefits of any active treatment. We should therefore be cautious about claiming that it is possible to 'target' self-compassion in therapy. Instead, it would seem that self-compassion is one of the many psychological characteristics that are modifiable during the course of a range of therapies.
The studies included in this review included participants with a range of clinical and subclinical presentations, and there did not seem to be any evidence suggesting that self-compassion-related interventions were more suited to some presentations compared with others. Our meta-regressions did not find that clinical or subclinical level of presentation moderated the effect size. It must be borne in mind, though, that our analysis likely had insufficient power to detect a small effect. In addition, our review covered a variety of interventions which are all hypothesised as acting, at least partially, by increasing self-compassion. There was no evidence in the meta-regression that the type of intervention moderated outcome, which is consistent with the idea that a range of therapies can modify an individual's level of self-compassion. Nonetheless, the proviso above, regarding power, applies here too.
One question that could not be tackled quantitatively in the review is the question of mediation: do increases in self-compassion mediate improvements in psychopathology? This is an important question, given that increased self-compassion is assumed to be the mechanism of change in self-compassion-related therapies (e.g. Gilbert 2009). While the meta-analysis cannot answer the question, five of the studies included in the review did include some basic analysis of mediation, and we give a narrative synthesis of these findings below. Hoffart et al. (2015) and Kuyken et al. (2010) both found that increased self-compassion predicted improved psychopathology (PTSD symptoms in the former case, depression in the latter) across their samples, with no differences between the treatment and control groups. In two further studies, change in self-compassion also showed large-sized correlations with the key outcome measures: post-intervention social anxiety-related psychopathology (Koszycki et al. 2016) and change in neuroticism (Armstrong and Rimes 2016). These correlations did not vary between the treatment and control groups. Of the five studies assessing self-compassion as a mediator, only Eisendrath et al. (2016) did not find an effect of change in self-compassion on their primary outcome. The overall consensus across these studies is that increases in self-compassion are related to improvements in psychopathology. However, this relationship is not specific to self-compassion-related therapies; in fact, whenever this association was found in intervention groups, it was also found in control groups. We would therefore need to be sceptical of any suggestion that self-compassion-related therapies are uniquely able to improve psychopathological symptoms by promoting self-compassion.
This calls into question the proposed mechanism of change in self-compassion-related therapies: namely, that self-compassion is the primary target of therapy, with other psychological characteristics changing as a consequence of improvements in self-compassion. Although a sophisticated analysis of mediation would be needed to assess this, the emerging picture is that self-compassion-related therapies do not have a special role to play in promoting self-compassion, either as an end in itself or as a means of influencing other psychological characteristics.
All studies used the SCS (Neff 2003b) or SCS-SF (Raes et al. 2011) to assess self-compassion. The full SCS covers six factors associated with self-compassion. When evaluating the effect of self-compassion-related interventions compared to a control condition on pre-post scores in these subscales, there was significantly greater improvement in all. Interestingly, the negative subscales (self-judgement, isolation and over-identification) showed a trend for greater improvement than the positive subscales (self-kindness, common humanity and mindfulness); in particular, over-identification (g = 0.72) and isolation (g = 0.63) had the greatest effect sizes. This echoes the results of two other papers included in this review (Kelly and Carter 2015; Kelly et al. 2017) that reported larger effects for the negative compared to the positive items. Collectively, these findings speak to the debate around the psychometric properties of the SCS. On the one hand, the fact that all six subscales showed significant improvements supports the validity of the scale, since a self-compassion intervention would be expected to improve scores across the subscales of a self-compassion measure. However, there seemed to be variability in how modifiable different subscales were, meaning that studies only analysing differences in SCS total score may lose clinically relevant information. Williams et al. (2014) suggested researchers avoid using total scores because the scale did not fit a one-factor structure, and our analysis indicates that a total score may not fully reflect the differential psychotherapeutic benefits of the six facets.
Methodologically, the studies included in the review were of reasonable quality. While the earliest review of self-compassion-related therapies concluded that treatment effectiveness was difficult to evaluate given methodological weaknesses in the field (Leaviss and Uttley 2015), we can be more confident in the quality of the RCTs reviewed here, although there was a particular risk of performance bias across the studies. It is unlikely that participants would expect as much improvement or find the condition as credible if they were in the control compared to the treatment group in many of the studies. This comes down to an absence of an active control condition in much research, and even where there was an active control condition, it was not always clear if it was well matched to the intervention condition in terms of social contact. It is important that conditions are matched on social contact in order to control for the significant impact of common factors in psychotherapy (Wampold 2015). The most rigorous test would involve comparing self-compassion-related therapies to gold-standard treatments, like CBT. This would involve evaluating the relative impact on primary mental health outcomes, as well as characterising the role of self-compassion in the therapeutic process. While this research is needed, it is worth noting that many studies included in this review did offer a reasonably high level of comparison, with treatment as usual sometimes including a high level of ongoing psychotherapy and/or pharmacotherapy. A further limitation was that some studies suffered from quite substantial attrition, often without making any rigorous analysis of any differences between dropouts and completers. Intention-to-treat analyses were not carried out consistently, although favouring per-protocol analyses is understandable given the sometimes low sample sizes and the fledgling status of compassion-related therapies.
Longer follow-up periods would also be advisable, given that boosting self-compassion is likely to be a useful approach for buffering against relapsing mental disorders; this can only be assessed if medium-term follow-up is conducted.
In conclusion, this meta-analysis found that self-compassion-related interventions had moderate effects on self-compassion, depression and anxiety outcomes across 22 RCTs. However, when limiting analysis to comparisons between self-compassion-related interventions and active control condition, there were no significant differences in outcome. This suggests that self-compassion-related interventions lacked a specific effect when compared to other active treatments. There was no evidence that effects differed between clinical and subclinical populations, nor between therapies with an explicit or implicit aim to boost self-compassion. In the analysis of the subscales of the SCS, there was some variability in how modifiable subscales were, with negative subscales appearing to be more amenable to therapeutic change, supporting the view that collapsing the subscales into one total may risk losing clinically relevant information. Synthesis of research findings indicated that changes in self-compassion were related to changes in psychopathology, although there was no evidence that this relationship was specific to self-compassion-related interventions. Overall, this review provides good evidence that levels of self-compassion can be modified in third-wave self-compassion-related therapies, but does not indicate that these therapies are any better in promoting self-compassion than other active psychological treatments.
Arimitsu, K. (2016). The effects of a program to enhance self-compassion in Japanese individuals: A randomized controlled pilot study. The Journal of Positive Psychology, 11(6), 559–571. https://doi.org/10.1080/17439760.2016.1152593.
Armstrong, L., & Rimes, K. A. (2016). Mindfulness-based cognitive therapy for neuroticism (stress vulnerability): a pilot randomized study. Behavior Therapy, 47, 287–298. https://doi.org/10.1016/j.beth.2015.12.005.
Beaumont, E., Durkin, M., McAndrew, S., & Martin, C. R. (2016). Using compassion focused therapy as an adjunct to trauma-focused CBT for fire service personnel suffering with trauma-related symptoms. The Cognitive Behaviour Therapist, 9, e34. https://doi.org/10.1017/S1754470X16000209.
Cahill, J., Barkham, M., & Stiles, W. B. (2010). Systematic review of practice-based research on psychological therapies in routine clinic settings. British Journal of Clinical Psychology, 49, 421–453. https://doi.org/10.1348/014466509X470789.
Centre for Reviews and Dissemination. (2009). Systematic reviews: CRD's guidance for undertaking reviews in healthcare. York: York Publishing Services. URL: https://www.york.ac.uk/crd/guidance.
Clark, L. A., Watson, D., & Mineka, S. (1994). Temperament, personality, and the mood and anxiety disorders. Journal of Abnormal Psychology, 103, 103–116. https://doi.org/10.1037/0021-843X.103.1.103.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale: Erlbaum Associates.
Collett, N., Pugh, K., Waite, F., & Freeman, D. (2016). Negative cognitions about the self in patients with persecutory delusions: an empirical study of self-compassion, self-stigma, schematic beliefs, self-esteem, fear of madness, and suicidal ideation. Psychiatry Research, 239, 79–84. https://doi.org/10.1016/j.psychres.2016.02.043.
Cornish, M. A., & Wade, N. G. (2015). Working through past wrongdoing: examination of a self-forgiveness counselling intervention. Journal of Counselling Psychology, 62, 521–528. https://doi.org/10.1037/cou0000080.
de Bruin, E. I., van der Zwan, J. E., & Bögels, S. M. (2016). A RCT comparing daily mindfulness meditations, biofeedback exercises, and daily physical exercise on attention control, executive functioning, mindful awareness, self-compassion, and worrying in stressed young adults. Mindfulness, 7, 1182–1192. https://doi.org/10.1007/s12671-016-0561-5.
Del Re, A. C. (2013). Compute.es: compute effect sizes. R package version 0.2-2. URL: http://cran.r-project.org/web/packages/compute.es.
Downs, S. H., & Black, N. (1998). The feasibility of creating a checklist for the assessment of the methodological quality both of randomised and non-randomised studies of health care interventions. Journal of Epidemiology and Community Health, 52, 377–384. https://doi.org/10.1136/jech.52.6.377.
Duarte, C., Pinto-Gouveia, J., & Stubbs, R. J. (2017). Compassionate attention and regulation of eating behaviour: a pilot study of a brief low-intensity intervention for binge eating. Clinical Psychology & Psychotherapy, 24, 1437–1447. https://doi.org/10.1002/cpp.2094.
Dudley, J., Eames, C., Mulligan, J., & Fisher, N. (2018). Mindfulness of voices, self-compassion, and secure attachment in relation to the experience of hearing voices. British Journal of Clinical Psychology, 57, 1–17. https://doi.org/10.1111/bjc.12153.
Duval, S., & Tweedie, R. (2000). Trim and fill: a simple funnel-plot-based method of testing and adjusting for publication bias in meta-analysis. Biometrics, 56, 455–463. https://doi.org/10.1111/j.0006-341X.2000.00455.x.
Egger, M., Smith, G. D., Schneider, M., & Minder, C. (1997). Bias in meta-analysis detected by a simple, graphical test. BMJ, 315, 629–634. https://doi.org/10.1136/bmj.315.7109.629.
Eisendrath, S. J., Gillung, E., Delucchi, K. L., Segal, Z. V., Nelson, J. C., McInnes, L. A., … & Feldman, M. D. (2016). A randomized controlled trial of mindfulness-based cognitive therapy for treatment-resistant depression. Psychotherapy and Psychosomatics, 85, 99–110. https://doi.org/10.1159/000442260.
Falsafi, N. (2016). A randomized controlled trial of mindfulness versus yoga: effects on depression and/or anxiety in college students. Journal of the American Psychiatric Nurses Association, 22, 483–497. https://doi.org/10.1177/1078390316663307.
Ferreira, C., Pinto-Gouveia, J., & Duarte, C. (2013). Self-compassion in the face of shame and body image dissatisfaction: implications for eating disorders. Eating Behaviors, 14, 207–210. https://doi.org/10.1016/j.eatbeh.2013.01.005.
Gilbert, P. (2009). Introducing compassion-focused therapy. Advances in Psychiatric Treatment, 15, 199–208. https://doi.org/10.1192/apt.bp.107.005264.
Gilbert, P., & Proctor, S. (2006). Compassionate mind training for people with high shame and self-criticism: overview and pilot study of a group therapy approach. Clinical Psychology & Psychotherapy, 13, 353–379. https://doi.org/10.1002/cpp.507.
Gu, J., Strauss, C., Bond, R., & Cavanagh, K. (2015). How do mindfulness-based cognitive therapy and mindfulness-based stress reduction improve mental health and wellbeing? A systematic review and meta-analysis of mediation studies. Clinical Psychology Review, 37, 1–12. https://doi.org/10.1016/j.cpr.2015.01.006.
Hedges, L. V., & Olkin, I. (1985). Statistical methods for meta-analysis. San Diego: Academic.
Higgins, J. P. T. & Green, S. (2011). Cochrane handbook for systematic reviews of interventions version 5.1.0. The Cochrane Collaboration. URL: http://training.cochrane.org/handbook.
Higgins, J. P. T., Thompson, S. G., Deeks, J. J., & Altman, D. G. (2003). Measuring inconsistency in meta-analyses. BMJ, 327, 557–560. https://doi.org/10.1136/bmj.327.7414.557.
Higgins, J. P. T., Altman, D. G., Gøtzsche, P. C., Jüni, P., Moher, D., Oxman, A. D., et al. (2011). The Cochrane Collaboration's tool for assessing risk of bias in randomised trials. BMJ, 343, d5928. https://doi.org/10.1136/bmj.d5928.
Hoaglin, D. C., & Kempthorne, P. J. (1986). Influential observations, high leverage points, and outliers in linear regression: comment. Statistical Science, 1, 408–412. https://doi.org/10.1214/ss/1177013627.
Hoffart, A., Øktedalen, T., & Langkaas, T. F. (2015). Self-compassion influences PTSD symptoms in the process of change in trauma-focused cognitive-behavioral therapies: a study of within-person processes. Frontiers in Psychology, 6, 1273. https://doi.org/10.3389/fpsyg.2015.01273.
Hou, R. J., Wong, S. Y., Yip, B. H., Hung, A. T., Lo, H. H., Chan, P. H., … & Ma, S. H. (2014). The effects of mindfulness-based stress reduction program on the mental health of family caregivers: a randomized controlled trial. Psychotherapy and Psychosomatics, 83, 45–53. https://doi.org/10.1159/000353278.
Huijbers, M. J., Spinhoven, P., Spijker, J., Ruhé, H. G., van Schaik, D. J. F., van Oppen, P., et al. (2015). Adding mindfulness-based cognitive therapy to maintenance antidepressant medication for prevention of relapse/recurrence in major depressive disorder: randomised controlled trial. Journal of Affective Disorders, 187, 54–61. https://doi.org/10.1016/j.jad.2015.08.023.
Huijbers, M. J., Crane, R. S., Kuyken, W., Heijke, L., van den Hout, I., Donders, A. R. T., & Speckens, A. E. M. (2017). Teacher competence in mindfulness-based cognitive therapy for depression and its relation to treatment outcome. Mindfulness, 8, 960–972. https://doi.org/10.1007/s12671-016-0672-z.
Jazaieri, H., Goldin, P. R., Werner, K., Ziv, M., & Gross, J. J. (2012). A randomized trial of MBSR versus aerobic exercise for social anxiety disorder. Journal of Clinical Psychology, 68, 715–731. https://doi.org/10.1002/jclp.21863.
Kabat-Zinn, J. (1990). Full catastrophe living: using the wisdom of your body and mind to face stress, pain, and illness. New York: Delta.
Kelly, A. C., & Leybman, M. J. (2012). Calming your eating disorder voice with compassion. Unpublished manual.
Kelly, A. C., & Carter, J. C. (2015). Self-compassion training for binge eating disorder: a pilot randomized controlled trial. Psychology and Psychotherapy: Theory, Research and Practice, 88, 285–303. https://doi.org/10.1111/papt.12044.
Kelly, A. C., Wisniewski, L., Martin-Wagar, C., & Hoffman, E. (2017). Group-based compassion-focused therapy as an adjunct to outpatient treatment for eating disorders: a pilot randomized controlled trial. Clinical Psychology & Psychotherapy, 24, 475–487. https://doi.org/10.1002/cpp.2018.
Key, B. L., Rowa, K., Bieling, P., McCabe, R., & Pawluk, E. J. (2017). Mindfulness-based cognitive therapy as an augmentation treatment for obsessive–compulsive disorder. Clinical Psychology & Psychotherapy, 24, 1109–1120. https://doi.org/10.1002/cpp.2076.
Kingston, T., Collier, S., Hevey, D., McCormick, M. M., Besani, C., Cooney, J., & O'Dwyer, A. M. (2015). Mindfulness-based cognitive therapy for psycho-oncology patients: an exploratory study. Irish Journal of Psychological Medicine, 32, 265–274. https://doi.org/10.1017/ipm.2014.81.
Kirby, J. N., Tellegen, C. L., & Steindl, S. R. (2017). A meta-analysis of compassion-based interventions: current state of knowledge and future directions. Behavior Therapy, 48, 778–792. https://doi.org/10.1016/j.beth.2017.06.003.
Konstantopoulos, S. (2011). Fixed effects and variance components estimation in three-level meta-analysis. Research Synthesis Methods, 2, 61–76. https://doi.org/10.1002/jrsm.35.
Koszycki, D., Thake, J., Mavounza, C., Daoust, J. P., Taljaard, M., & Bradwejn, J. (2016). Preliminary investigation of a mindfulness-based intervention for social anxiety disorder that integrates compassion meditation and mindful exposure. The Journal of Alternative and Complementary Medicine, 22, 363–374. https://doi.org/10.1089/acm.2015.0108.
Kuyken, W., Watkins, E., Holden, E., White, K., Taylor, R. S., Byford, S., … & Dalgleish, T. (2010). How does mindfulness-based cognitive therapy work? Behaviour Research and Therapy, 48, 1105–1112. https://doi.org/10.1016/j.brat.2010.08.003.
Langkaas, T. F., Hoffart, A., Øktedalen, T., Ulvenes, P. G., Hembree, E. A., & Smucker, M. (2017). Exposure and non-fear emotions: a randomized controlled study of exposure-based and rescripting-based imagery in PTSD treatment. Behaviour Research and Therapy, 97, 33–42. https://doi.org/10.1016/j.brat.2017.06.007.
Leaviss, J., & Uttley, L. (2015). Psychotherapeutic benefits of compassion-focused therapy: an early systematic review. Psychological Medicine, 45(5), 927–945. https://doi.org/10.1017/S0033291714002141.
Linehan, M. M. (1993). Skills training manual for treating borderline personality disorder. New York: Guilford.
MacBeth, A., & Gumley, A. (2012). Exploring compassion: a meta-analysis of the association between self-compassion and psychopathology. Clinical Psychology Review, 32, 545–552. https://doi.org/10.1016/j.cpr.2012.06.003.
Mackintosh, K. (2016). The effectiveness of compassion-focused and mindfulness-based psychological interventions in improving self-compassion in clinical populations. PROSPERO: International prospective register of systematic reviews. URL: http://www.crd.york.ac.uk/PROSPERO/display_record.asp?ID=CRD42016033532.
Mann, J., Kuyken, W., O'Mahen, H., Ukoumunne, O. C., Evans, A., & Ford, T. (2016). Manual development and pilot randomised controlled trial of mindfulness-based cognitive therapy versus usual care for parents with a history of depression. Mindfulness, 7, 1024–1033. https://doi.org/10.1007/s12671-016-0543-7.
Marsh, I. C., Chan, S. W. Y., & MacBeth, A. (2018). Self-compassion and psychological distress in adolescents—a meta-analysis. Mindfulness, 9, 1011–1027. https://doi.org/10.1007/s12671-017-0850-7.
Muris, P., & Petrocchi, N. (2017). Protection or vulnerability? A meta-analysis of the relations between the positive and negative components of self-compassion and psychopathology. Clinical Psychology & Psychotherapy, 24, 373–383. https://doi.org/10.1002/cpp.2005.
Nakagawa, S., & Santos, E. S. (2012). Methodological issues and advances in biological meta-analysis. Evolutionary Ecology, 26, 1253–1274. https://doi.org/10.1007/s10682-012-9555-5.
Neff, K. D. (2003a). Self-compassion: an alternative conceptualization of a healthy attitude toward oneself. Self and Identity, 2, 85–101. https://doi.org/10.1080/15298860309032.
Neff, K. D. (2003b). The development and validation of a scale to measure self-compassion. Self and Identity, 2, 223–250 https://doi.org/10.1080/15298860309027.
Neff, K. D. (2016). The Self-Compassion Scale is a valid and theoretically coherent measure of self-compassion. Mindfulness, 7, 264–274. https://doi.org/10.1007/s12671-015-0479-3.
Neff, K. D., & Germer, C. K. (2013). A pilot study and randomized controlled trial of the Mindful Self-Compassion program. Journal of Clinical Psychology, 69, 28–44. https://doi.org/10.1002/jclp.21923.
Neff, K. D., & Tirch, D. (2013). Self-compassion and ACT. In T. B. Kashdan & J. Ciarrochi (Eds.), Mindfulness, acceptance, and positive psychology: the seven foundations of well-being (pp. 79–107). Oakland: Context Press/New Harbinger Publications.
Newby, J. M., McKinnon, A., Kuyken, W., Gilbody, S., & Dalgleish, T. (2015). Systematic review and meta-analysis of transdiagnostic psychological treatments for anxiety and depressive disorders in adulthood. Clinical Psychology Review, 40, 91–110. https://doi.org/10.1016/j.cpr.2015.06.002.
Raes, F., Pommier, E., Neff, K. D., & Van Gucht, D. (2011). Construction and factorial validation of a short form of the Self-Compassion Scale. Clinical Psychology & Psychotherapy, 18(3), 250–255. https://doi.org/10.1002/cpp.702.
Schanche, E., Stiles, T., McCullough, L., Svartberg, M., & Nielsen, G. (2011). The relationship between activating affects, inhibitory affects, and self-compassion in psychotherapy patients with Cluster C personality disorders. Psychotherapy, 48, 293–303. https://doi.org/10.1037/a0022012.
Segal, Z. V., Williams, J. M. G., & Teasdale, J. D. (2002). Mindfulness-based cognitive therapy for depression. New York: Guilford.
Segal, Z. V., Williams, J. M. G., & Teasdale, J. D. (2013). Mindfulness-based cognitive therapy for depression (2nd ed.). New York: Guilford.
Shahar, B., Szepsenwol, O., Zilcha-Mano, S., Haim, N., Zamir, O., Levi-Yeshuvi, S., & Levit-Binnun, N. (2015). A wait-list randomized controlled trial of loving-kindness meditation programme for self-criticism. Clinical Psychology & Psychotherapy, 22, 346–356. https://doi.org/10.1002/cpp.1893.
Van Dam, N. T., Hobkirk, A. L., Sheppard, S. C., Aviles-Andrews, R., & Earleywine, M. (2014). How does mindfulness reduce anxiety, depression, and stress? An exploratory examination of change processes in wait-list controlled mindfulness meditation training. Mindfulness, 5, 574–588. https://doi.org/10.1007/s12671-013-0229-3.
Viechtbauer, W. (2010). Conducting meta-analyses in R with the metafor package. Journal of Statistical Software, 36(3), 1–48 URL: http://www.jstatsoft.org/v36/i03/.
Wampold, B. E. (2015). How important are the common factors in psychotherapy? An update. World Psychiatry, 14, 270–277. https://doi.org/10.1002/wps.20238.
Weisz, J. R., Kuppens, S., Eckshtain, D., Ugueto, A. M., Hawley, K. M., & Jensen-Doss, A. (2013). Performance of evidence-based youth psychotherapies compared with usual clinical care: a multilevel meta-analysis. JAMA Psychiatry, 70, 750–761. https://doi.org/10.1001/jamapsychiatry.2013.1176.
Williams, M. J., Dalgleish, T., Karl, A., & Kuyken, W. (2014). Examining the factor structures of the five facet mindfulness questionnaire and the self-compassion scale. Psychological Assessment, 26, 407–418. https://doi.org/10.1037/a0035566.
Yadavaia, J. E., Hayes, S. C., & Vilardaga, R. (2014). Using acceptance and commitment therapy to increase self-compassion: a randomized controlled trial. Journal of Contextual Behavioral Science, 3, 248–257. https://doi.org/10.1016/j.jcbs.2014.09.002.
Zessin, U., Dickhauser, O., & Garbade, S. (2015). The relationship between self-compassion and well-being: a meta-analysis. Applied Psychology: Health and Well-Being, 7, 340–362. https://doi.org/10.1111/aphw.12051.
The authors would like to thank Mr. Anders Jespersen and Ms. Antonia Klases for their assistance in study screening and quality rating and Ms. Rowena Stewart for her advice on the use of databases.
This research did not receive any specific grant from funding agencies in the public, commercial or not-for-profit sectors.
Section of Clinical Psychology, School of Health in Social Science, University of Edinburgh, Doorway 6 Medical Quad, Teviot Place, Edinburgh, EH8 9AG, UK
Alexander C. Wilson, Kate Mackintosh & Stella W. Y. Chan
Department of Experimental Psychology, University of Oxford, Oxford, UK
Alexander C. Wilson
Department of Psychology, University of Stirling, Stirling, UK
Kate Mackintosh
Stella W. Y. Chan
Alexander C. Wilson and Kate Mackintosh are joint first authors.
Correspondence to Stella W. Y. Chan.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Wilson, A.C., Mackintosh, K., Power, K. et al. Effectiveness of Self-Compassion Related Therapies: a Systematic Review and Meta-analysis. Mindfulness 10, 979–995 (2019). https://doi.org/10.1007/s12671-018-1037-6
Issue Date: 15 June 2019
LifeSnaps, a 4-month multi-modal dataset capturing unobtrusive snapshots of our lives in the wild
Data Descriptor
Sofia Yfantidou, Christina Karagianni, Stefanos Efstathiou, Athena Vakali, Joao Palotti, Dimitrios Panteleimon Giakatos, Thomas Marchioro, Andrei Kazlouski, Elena Ferrari & Šarūnas Girdzijauskas
Scientific Data volume 9, Article number: 663 (2022)
Ubiquitous self-tracking technologies have penetrated various aspects of our lives, from physical and mental health monitoring to fitness and entertainment. Yet, limited data exist on the association between in the wild large-scale physical activity patterns, sleep, stress, and overall health, and behavioral and psychological patterns due to challenges in collecting and releasing such datasets, including waning user engagement or privacy considerations. In this paper, we present the LifeSnaps dataset, a multi-modal, longitudinal, and geographically-distributed dataset containing a plethora of anthropological data, collected unobtrusively for the total course of more than 4 months by n = 71 participants. LifeSnaps contains more than 35 different data types from second to daily granularity, totaling more than 71 M rows of data. The participants contributed their data through validated surveys, ecological momentary assessments, and a Fitbit Sense smartwatch and consented to make these data available to empower future research. We envision that releasing this large-scale dataset of multi-modal real-world data will open novel research opportunities and potential applications in multiple disciplines.
Measurement(s): Step Unit of Distance • Nutrition, Calories • Physical Activity Measurement • Oxygen Saturation Measurement • maximal oxygen uptake measurement • Electrocardiography • Respiratory Rate • skin temperature sensor • Unit of Length • Very Light Exercise • heart rate variability measurement • resting heart rate • electrodermal activity measurement • State-Trait Anxiety Inventory • Positive and Negative Affect Schedule • Stages and Processes of Change • Behavioural Regulations in Exercise • 50-item International Personality Item Pool version of the Big Five Markers
Technology Type(s): Fitbit • Survey • Survey
Sample Characteristic - Organism: Homo sapiens
Sample Characteristic - Environment: anthropogenic habitat
Sample Characteristic - Location: Greece • Italy • Sweden • Cyprus
Background & Summary
The past decade has witnessed the arrival of ubiquitous wearable technologies with the potential to shed light on many aspects of human life. Studies with toddlers1, older adults2, students3, athletes4, or even office workers5 can be enriched with wearable devices and their increasing number of measured parameters. While the use of wearable technologies fosters research in various areas, such as privacy and security6, activity recognition7, human-computer interaction8, device miniaturization9 and energy consumption10, one of the most significant prospects is to transform healthcare by providing objective measurements of physical behaviors at a low-cost and unprecedented scale11. Many branches of medicine, such as sports and physical activity12, sleep13 and mental health14, to name a few, are turning to wearables for scientific evidence. However, the laborious work of clinically validating the biometric data generated by these new devices is of critical importance and often disregarded15.
In recent years, with the intent to further clinically validate the data generated by smart and wearable devices, considerable effort has been directed toward building several datasets covering a variety of health problems and sociological issues3,16,17,18,19,20,21. For instance, the largest cohort ever studied comes from Althoff et al.16 in 2017. They collected 68 M days of physical activity data from 717k mobile phone users distributed across 111 countries. The study aimed to correlate inactivity and obesity within and across countries. Unfortunately, the data from this study are of limited use for other research questions, as only aggregated numbers were made public. Likewise, exclusively focusing on asthma, Chan et al.17 developed a mobile application to collect data from 6k users with daily questionnaires on the participants' asthma symptoms. With a focus on cardiovascular health, the MyHeartCounts Cardiovascular Health Study18 is another extensive study containing 50k participants recruited in the US within six months. Although many participants enrolled in the study, the mean engagement time with the app per user was only 4.1 days. As a result of selecting only one health problem and collecting data exclusively for that specific condition, the collected data are often of limited use to other researchers. Closer to our scale and philosophy, Schmidt et al.21 published the WESAD dataset, studying stress among 15 recent graduates during a 2-hour lab experiment. Thus, WESAD is inherently different, as it does not capture a realistic picture of our lives in the wild, given that recruitment was restricted and the emotional states were provoked under lab conditions. Similar to our study, Vaizman et al.20 collected data from 60 users, exploiting different types of everyday devices such as smartphones and smartwatches. Nevertheless, the mean participation time was only 7.6 days.
In the same line, the StudentLife3 app collected daily data from the devices of 48 students throughout the 10-week term at Dartmouth College, seeking the effect of workload on stress levels and students' general well-being and mental health. Table 1 facilitates the comparison between the contributions and limitations of the above-mentioned previous works. Our study makes complementary contributions, incorporating new sensors and surveys and thereby adding new data types that go beyond those of the above-mentioned works.
Table 1 Comparison between related datasets and the LifeSnaps data.
Our goal is to provide researchers and practitioners with a rich, open-to-use dataset that has been thoroughly anonymized and ubiquitously captures human behavior in the wild, namely humans' authentic behavior in their regular environments. Designed to be of general utility, this paper introduces the LifeSnaps dataset, a multi-modal, longitudinal, space and time distributed dataset containing a plethora of anthropological data such as physical activity, sleep, temperature, heart rate, blood oxygenation, stress, and mood of 71 participants collected from mid-2021 to early-2022. Figure 1 summarizes the study in terms of modalities, locations, and timelines.
The study timeline, the data collected, and the country of residency of our participants in Europe.
In summary, the LifeSnaps dataset's main contributions are:
Privacy and anonymity: we follow the state-of-the-art anonymization guidelines22 and EU's General Data Protection Regulation (GDPR)23 to collect, process, and distribute participants' data, and ensure privacy and anonymity for those involved in the study. Specifically, the LifeSnaps study abides by GDPR's core principles, namely lawfulness, fairness and transparency, purpose limitation, data minimization, accuracy, storage limitation, and integrity and confidentiality23.
Data modalities: we use three complementary, heterogeneous data modalities: (1) survey data, containing evaluations such as personality and anxiety scores; (2) ecological momentary assessments to extract participants' daily goals, mood, and context; and (3) sensing data from a flagship smartwatch;
Emerging data types: due to the use of a recently released device (the Fitbit Sense was released at the end of September 2020), various rarely studied data types, such as temperature, oxygen saturation, heart rate variability, automatically assessed stress, and sleep phases, are now open and available, empowering the next wave of sensing research;
Rich data granularity: we aim to provide future researchers with the best data granularity possible; to that end, we make data available in as raw a form as technologically possible. Different data types exhibit different levels of detail, from second to daily granularity, always with respect to privacy constraints;
Community code sharing and reproducibility: apart from the raw data format, we also provide a set of preprocessing scripts (https://github.com/Datalab-AUTH/LifeSnaps-EDA), aiming to facilitate future research and familiarize researchers and practitioners who can largely exploit and use LifeSnaps data;
In the wild data: participants were explicitly told to move on with their lives, as usual, engaging in their natural and unscripted behavior, while no lab conditions or restrictions were imposed on them. This ensures the in the wild nature of the dataset and subsequently its ecological validity;
Trade-offs between scale, duration, and participation: we designed the study to strike a fair balance between study length (8 weeks) and number of participants (71), setting and conducting weekly reminders and follow-ups to maintain engagement. For this reason, we envision the LifeSnaps dataset can facilitate research in various scientific domains.
Our goal in sharing LifeSnaps is to empower future research in different disciplines from diverse perspectives. We believe that LifeSnaps, as a multi-modal, privacy-preserving, distributed in time and space, easily reproducible, and rich granularity time-series dataset will help the scientific community to answer questions related to human life, physical and mental health, and overall support scientific curiosity concerning sensing technologies.
This section discusses participants' recruitment and demographics and moves on to describe the study's timeline, data collection process, and data availability. Finally, it sheds light on user engagement, data quality, and completeness issues.
Participants' recruitment was geographically distributed in four countries, namely Greece, Cyprus, Italy, and Sweden, while the study was approved by the Institutional Review Board (IRB) of the Aristotle University of Thessaloniki (Protocol Number 43661/2021). All study participants were recruited through convenience sampling or volunteering calls to university mailing lists. In the end, the locations of the participants recruited coincide with the locations of the partner universities of the RAIS consortium, which organized the LifeSnaps study. Namely, there were 24 participants in Sweden, 10 in Italy, 25 in Greece, and 12 in Cyprus, as depicted in Fig. 1. Participants were not awarded any monetary or other incentives for their participation. As mentioned earlier, all participants provided written informed consent to data processing and to de-identified data sharing. The content of the informed consent is also delineated on the study's web page (https://rais-experiment.csd.auth.gr/the-experiment/). Specifically, released data is de-identified and linked only by a subject identifier. We initially recruited 77 participants, of which 4 dropped out of the study, 1 faced technical difficulties during data export, and 1 withdrew consent, leading to a total number of 71 study participants. This translates to a 7% drop rate, lower than similar in the wild studies3,18. We further discuss user engagement with the study in a later section.
To participate, subjects were required to (a) have exclusive access to a Bluetooth-enabled mobile phone with internet access without any operating system restrictions for the full study duration, (b) have exclusive access to a personal e-mail for the full study duration, (c) be at least 18 years old at the time of recruitment, and (d) be willing to wear and sync a wearable sensor so that data can be collected and transmitted to the researchers.
Among the 71 participants who completed the study, 42 are male, and 29 are female, with an approximate 60/40 ratio. All but two provided their age, half under 30 and half over 30 (ranges are defined so that k-anonymity is guaranteed). Two participants reported incorrect weight values, making it impossible to compute their BMI. Among the rest, 55.1% had a BMI of 19 or lower, 23.2% were in the range 20–24, 14.5% were in the range 25–29, and the remaining 7.2% had 30 or above. The highest level of education for most participants (68%) is the Master's degree, while 14.9% had completed Bachelor's and the other 16.4% had a Ph.D. or a higher degree. Histograms summarizing these distributions are shown in Fig. 2.
Histograms representing the distributions of demographic attributes among the 71 participants. Each column displays the number of occurrences for the corresponding category. Note that some participants did not reveal their age, BMI, or level of education.
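The range-based reporting of demographics above is chosen so that k-anonymity holds. A minimal sketch of such a guard — a hypothetical helper, not the study's code, using an illustrative k and a single under/over-30 split:

```python
def release_age_bins(ages, threshold=30, k=5):
    """Report ages only as two coarse bins, refusing release if either
    bin would hold fewer than k participants (a simple k-anonymity check)."""
    under = sum(1 for a in ages if a < threshold)
    over = len(ages) - under
    if min(under, over) < k:
        raise ValueError("bins too small; coarsen the ranges further")
    return {"under_30": under, "30_plus": over}
```

The same idea generalizes to BMI or education bins: merge adjacent categories until every released group contains at least k participants.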
With regard to the participants' usage of wearable devices, out of 66 responses, 51% stated that they currently own or have owned a wearable device in the past. Out of those who have not used wearables, 40% identified high cost as a preventive factor, while only 3% mentioned technical difficulties or trust in their own body, and none reported general mistrust towards wearables (multiple choice possible). Out of those who do use wearables, 44% purchased them to gain increased control over their physical activity, 44% due to general interest in technology, 38% for encouragement reasons, 35% for health monitoring, 18% received them as a gift, 15% after a doctor's recommendation, and 9% through an insurance rewards' program (multiple choice possible). The most common indicators tracked include heart rate (91%), step count (88%), distance (73%), calories (62%), and exercise duration (55%). Finally, concerning sharing of sensing data, 32% of participants have shared data with researchers or friends, or family, while a smaller percentage has shared data with exercise platforms (15%), such as Strava, or their doctor (6%).
All study participants wore a Fitbit Sense smartwatch (https://www.fitbit.com/global/us/products/smartwatches/sense) for the duration of the study, while no other incentives were given to the participants. The watch was returned at the end of the study period.
The LifeSnaps Study is a two-round study; the first round ran from May 24th, 2021 to July 26th, 2021 (n = 38), while the second round ran from November 15th, 2021 to January 17th, 2022 (n = 34), totaling more than four months and 71 M rows of user data. One participant joined both study rounds. In this section, we discuss the entry and exit procedure, the data collection and monitoring, and related privacy considerations (see Fig. 3).
The study procedure from both the participants (left) and researchers' (right) viewpoint.
Participant entry and exit
Upon recruitment, each participant received a starter pack, including information about the study (e.g., study purpose, types of data collected, data management), as well as a step-by-step instructions manual (https://rais-experiment.csd.auth.gr/). Note that we conducted the study amidst the Covid-19 pandemic in compliance with the national and regional restrictions, minimizing in-person contact and encouraging remote recruitment. Initially, each participant provided written informed consent to data processing and sharing of the de-identified data. Specifically, they were informed that after the end of the study, the data would be anonymized so that their identity could no longer be inferred and shared with the scientific community. Then, they were instructed to set up their Fitbit devices, authorize our study compliance app to access their Fitbit data, and install the SEMA3 EMA mobile application24. Participants were encouraged to carry on with their normal routine and wear their Fitbit as much as possible. At the end of the study, participants were instructed to export and share their Fitbit data with the researchers and reset and return their Fitbit Sense device.
During the entry stage, a series of health, psychological, and demographic baseline surveys were administered using a university-hosted instance of LimeSurvey25. During the exit stage, we re-administered a subset of the entry surveys as post-measures. Specifically, we distributed a demographics survey, the Physical Activity Readiness Questionnaire (PAR-Q)26, and the 50-item International Personality Item Pool version of the Big Five Markers (IPIP)27 at entry, and the Stages and Processes of Change Questionnaire28,29, a subset of the transtheoretical model (TTM) constructs30,31, and the Behavioural Regulations in Exercise Questionnaire (BREQ-2)32,33 at entry and exit, as shown in Table 2. Note that the demographics and PAR-Q surveys are excluded from the published dataset for anonymity purposes. Figure 4 delineates the entry and exit stages' flow.
Table 2 The three data modalities in the LifeSnaps Dataset: Surveys, SEMA3, and Fitbit data, accompanied by data types, a short description, and related statistics.
Time flow of the 9-week study.
The data collection phase lasted for up to 9 weeks for each study round, incorporating three distinct data modalities, each with its own set of advantages as discussed below (see Fig. 4 for the time flow of the 9-week study):
Fitbit data
This modality includes 71 M rows of sensing data collected in the wild from the participants' Fitbit Sense devices. The term "in the wild" was first coined in the field of anthropology34,35,36, referring to cognition in the wild. Nowadays, its definition is broader, referring to "research that seeks to understand new technology interventions in everyday living"37. Sensing data collected in the wild are likely to provide a more accurate and representative picture of an individual's behavior than a snapshot of data collected in a lab38, thus facilitating the development of ecologically valid systems that work well in the real world39.
At the time of the study, the Fitbit Sense was Fitbit's flagship smartwatch, embedded with a 3-axis accelerometer, gyroscope, altimeter, built-in GPS receiver, multi-path optical heart rate tracker, on-wrist skin temperature, ambient light, and multipurpose electrical sensors, as per the device manual and sensor specifications40. While the majority of offline raw sensor data (e.g., accelerometer or gyroscope measurements) are not available for research purposes, we have collected a great number of heterogeneous aggregated data of varying granularity, as shown in Table 2. Such data include, but are not limited to, steps, sleep, physical activity, and heart rate, but also data types from newly-integrated sensors, such as electrodermal activity (EDA) responses, electrocardiogram (ECG) readings, oxygen saturation (SpO2), VO2 max, nightly temperature, and stress indicators. All timestamp values follow the form of the Fitbit API as mentioned in the documentation (https://dev.fitbit.com/build/reference/device-api/accelerometer/), while all duration values in Fitbit data types, such as exercise, mindfulness sessions, and sleep types, are measured in milliseconds.
SEMA3 data
This modality includes more than 15 K participants' Ecological Momentary Assessment (EMA) self-reports with regard to their step goals, mood, and location. EMA involves "repeated sampling of subjects' current behaviors and experiences in real-time, in subjects' natural environments"41. EMA benefits include more accurate data production than traditional surveys due to reduced recall bias, maximized ecological validity, and the potential for repeated assessment41. Additionally, complementary to sensing data that do not comprehensively evaluate an individual's daily experiences, EMA surveys can capture more detailed time courses of a person's subjective experiences and how they relate to objectively measured data42.
We distribute two distinct EMAs through the SEMA3 app: the step goal EMA (henceforth the Step Goal Survey) and the context and mood EMA (henceforth the Context and Mood Survey), as seen in Table 2. Once a day (morning), we schedule the Step Goal Survey, asking the participants how many steps they would like to achieve that day, with choices ranging from fewer than 2000 to more than 25000. Thrice a day (morning, afternoon, evening), we schedule the Context and Mood Survey, asking the participants about their current location and feelings. The location choices include home, work/school, restaurant/bar/entertainment venues, transit, and other (free text field); an option "Home Office" was added in the second round upon request. The mood choices include happy, rested/relaxed, tense/anxious, tired, alert, and sad; an option "neutral" was added at the beginning of the first round upon request.
Surveys data
This modality includes approximately 1 K participants' responses to various health, psychological, and demographic surveys administered via Mailchimp, an e-mail marketing service (https://mailchimp.com/). Surveys have the benefit of accurately capturing complex constructs, such as personality traits, which would otherwise be unattainable, through the usage of validated instruments. A validated instrument describes a survey that has been "tested for reliability, i.e., the ability of the instrument to produce consistent results, and validity, i.e., the ability of the instrument to produce correct results". Validated instruments have been extensively tested in sample populations, are correctly calibrated to their target, and can therefore be assumed to be accurate43.
Apart from the entry and exit surveys discussed above, namely demographics, PAR-Q, IPIP, TTM, and BREQ-2, participants received weekly e-mails asking them to complete two surveys: the Positive and Negative Affect Schedule Questionnaire (PANAS)44, and the State-Trait Anxiety Inventory (S-STAI)45. For more details on the administered surveys, refer to Table 2.
Data acquisition monitoring
During the study, we adopted various monitoring processes that automatically produce statistics on compliance. If a participant's Fitbit did not sync for more than 48 h, we would send the participant an e-mail with a reminder and a troubleshooting guide, while also informing the research team. Also, if a participant's response rate in the SEMA3 app fell below 50%, the researchers would be notified to ping the participant and try to get them back on track. Finally, survey completion rates and e-mail opens and clicks were monitored for the entry and exit surveys to ensure maximum compliance. Reminder e-mails were re-sent every Tuesday to those participants who had not opened their start-of-the-week e-mail on Monday, prompting them to complete their weekly surveys.
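The 48-hour sync check can be sketched as follows — a hypothetical helper, not the study's actual monitoring code, assuming a mapping from participant id to last-sync timestamp:

```python
from datetime import datetime, timedelta

def flag_stale_syncs(last_sync, now, threshold=timedelta(hours=48)):
    """Return ids of participants whose last Fitbit sync is older than
    the threshold (48 h by default), i.e. who should receive a reminder."""
    return [pid for pid, ts in last_sync.items() if now - ts > threshold]

# Example: one participant 50 h stale, one synced a day ago.
now = datetime(2021, 6, 3, 12, 0)
stale = flag_stale_syncs(
    {"a41f": datetime(2021, 6, 1, 10, 0),   # 50 h ago -> flagged
     "b2c3": datetime(2021, 6, 2, 12, 0)},  # 24 h ago -> fine
    now)
```

A scheduled job running such a check daily would then trigger the reminder e-mail and notify the research team.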
Data preprocessing
Certain preprocessing steps were required to ensure data coherence as indicated below:
Modality-agnostic user ID
For linking the three distinct data modalities, namely Fitbit, SEMA3, and surveys data, we assign a 2-byte random id to each participant, which is common across all modalities to replace modality-specific user identifiers. This modality-agnostic user identifier enables data joins between different modalities.
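The re-keying step can be illustrated with a minimal sketch — a hypothetical helper (not the study's code), assuming each participant's modality-specific identifiers are replaced through one shared mapping:

```python
import secrets

def assign_anonymous_ids(participants):
    """Map each participant key to a unique 2-byte random hex id,
    common across all data modalities."""
    mapping, used = {}, set()
    for p in participants:
        new_id = secrets.token_hex(2)   # 2 random bytes, e.g. 'a41f'
        while new_id in used:           # re-draw on the rare collision
            new_id = secrets.token_hex(2)
        used.add(new_id)
        mapping[p] = new_id
    return mapping
```

Applying the same mapping when exporting Fitbit, SEMA3, and survey tables is what makes joins across modalities possible without exposing the original identifiers.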
Time zone conversion
Given the geographically-distributed nature of our study, we also need to establish a common reference time zone to facilitate analysis and between-user comparisons. Due to privacy considerations, we cannot disclose the users' time zones or their UTC offset. Hence, we convert all data of sub-daily granularity from UTC to local time. Specifically, concerning Fitbit data, only a subset of types are stored in UTC (see Table 3), as also verified by the Fitbit community (https://community.fitbit.com/t5/Fitbit-com-Dashboard/Working-with-JSON-data-export-files/td-p/3098623). These types are converted to local time based on the participant's time zone, as declared in their Fitbit profile. Similarly, we convert SEMA3 and survey data modalities from the export timezone to local time using the same methodology. We also consider Daylight Saving Time (DST) during the conversion, since all recruitment countries follow this practice at the time of the study.
Table 3 The export time zone of different Fitbit data types: some types are exported in UTC, while others in local time. Time zone does not apply to daily granularity data types.
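The UTC-to-local conversion described above can be sketched with Python's standard zoneinfo module, which handles DST transitions automatically — a hedged illustration assuming the participant's IANA time zone name (Europe/Athens here is only an example) is taken from their Fitbit profile:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def utc_to_local(ts_utc: datetime, tz_name: str) -> datetime:
    """Convert a UTC timestamp to naive local time, DST-aware."""
    if ts_utc.tzinfo is None:
        ts_utc = ts_utc.replace(tzinfo=timezone.utc)
    return ts_utc.astimezone(ZoneInfo(tz_name)).replace(tzinfo=None)

# Athens is UTC+3 in summer (DST) and UTC+2 in winter:
summer = utc_to_local(datetime(2021, 6, 1, 12, 0), "Europe/Athens")   # 15:00
winter = utc_to_local(datetime(2021, 12, 1, 12, 0), "Europe/Athens")  # 14:00
```

Dropping the tzinfo at the end yields comparable naive local timestamps without disclosing each user's UTC offset in the released data.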
Due to the participants' multilingualism, the exported Fitbit data, specifically exercise types, appear in different languages. To this end, and while respecting the users' privacy, we translate all exercise types to their English equivalents based on the unique exercise identifier provided.
Tabular data conversion
To facilitate the reusability of the LifeSnaps dataset, additionally to the raw data, we convert Fitbit and SEMA3 data to a tabular format to store in CSV files (see later for code availability). During the conversion process, we read each data type independently and perform a series of processing steps, including type conversion for numerical or timestamped data (data are stored as strings in MongoDB), duplicate elimination for records with the same user id and timestamp, and, optionally, aggregation (average, summation, or maximum) for data types of finer than the requested granularity.
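A minimal sketch of such a conversion step, assuming pandas and a hypothetical record layout (string-typed id, timestamp, and value fields, as stored in MongoDB); the study's actual preprocessing scripts live in the linked repository:

```python
import pandas as pd

def to_daily_table(records, agg="mean"):
    """Convert raw per-reading records to a daily tabular format:
    type conversion, (id, timestamp) deduplication, then aggregation."""
    df = pd.DataFrame(records)
    df["value"] = pd.to_numeric(df["value"])             # strings -> numbers
    df["timestamp"] = pd.to_datetime(df["timestamp"])    # strings -> datetimes
    df = df.drop_duplicates(subset=["id", "timestamp"])  # dedup on (id, ts)
    df["date"] = df["timestamp"].dt.date
    return df.groupby(["id", "date"], as_index=False)["value"].agg(agg)
```

The resulting frame can be written out with `to_csv`, one file per data type, mirroring the CSV release described above.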
Surveys scoring
Similarly, to facilitate the integration of the survey data, we provide, in addition to the raw responses, a scored version of each survey (scale) in a tabular format (see later for code availability). Each survey has, by definition, a different scoring function, as described below:
IPIP
For the IPIP scale, we follow the official scoring instructions (https://ipip.ori.org/new_ipip-50-item-scale.htm). Each of the 50 items is assigned to a factor on which that item is scored (i.e., of the five factors: (1) Extraversion, (2) Agreeableness, (3) Conscientiousness, (4) Emotional Stability, or (5) Intellect/Imagination), and its direction of scoring (+ or −). Negatively signed items are inverted. Once scores are assigned to all of the items in the 50-item scale, we sum up the values to obtain a total scale score per factor. For between-user comparisons, we also add a categorical variable per factor that describes if the user scores below average, average, or above average with regard to this factor conditioned on the user's gender.
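The per-factor summation with reverse-scored items can be sketched as follows. The three-item key is a toy stand-in for the official 50-item IPIP key, which maps each item to a factor and a scoring direction:

```python
# Illustrative key: item -> (factor, sign). Responses are on a 1-5 Likert scale.
KEY = {
    "q1": ("Extraversion", +1),
    "q2": ("Extraversion", -1),   # reverse-scored item
    "q3": ("Agreeableness", +1),
}

def score_ipip(responses):
    """Sum item scores per factor, inverting reverse-scored items."""
    totals = {}
    for item, answer in responses.items():
        factor, sign = KEY[item]
        # On a 1-5 scale, inversion maps 1<->5 and 2<->4.
        value = answer if sign > 0 else 6 - answer
        totals[factor] = totals.get(factor, 0) + value
    return totals

print(score_ipip({"q1": 4, "q2": 2, "q3": 5}))
# {'Extraversion': 8, 'Agreeableness': 5}
```

With the full key, the same loop yields one total per factor for each of the five factors.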
TTM

For the TTM scale, each user is assigned a stage of change (i.e., one of the five stages: (1) Maintenance, (2) Action, (3) Preparation, (4) Contemplation, or (5) Precontemplation) based on their response to the respective scale. Regarding the Processes of Change for Physical Activity, each item is assigned to a factor on which that item is scored (i.e., one of the 10 factors: (1) Consciousness Raising, (2) Dramatic Relief, (3) Environmental Reevaluation, (4) Self Reevaluation, (5) Social Liberation, (6) Counterconditioning, (7) Helping Relationships, (8) Reinforcement Management, (9) Self Liberation, or (10) Stimulus Control). Once scores are assigned to all of the items, we calculate each user's mean for every factor, according to the scoring instructions (https://hbcrworkgroup.weebly.com/transtheoretical-model-applied-to-physical-activity.html).
BREQ-2
For the BREQ-2 scale, each item is again assigned to a factor on which that item is scored (i.e., of the five factors: (1) Amotivation, (2) External regulation, (3) Introjected regulation, (4) Identified regulation, (5) Intrinsic regulation). Once scores are assigned to all of the items, we calculate each user's mean for every factor, according to the scoring instructions (http://exercise-motivation.bangor.ac.uk/breq/brqscore.php). We also create a categorical variable describing the self-determination level for each user, namely the maximum scoring factor.
PANAS

For the PANAS scale, each item contributes to one of two affect scores (i.e., (1) Positive Affect Score or (2) Negative Affect Score). Once items are assigned to a factor, we sum up the item scores per factor as per the scoring instructions (https://ogg.osu.edu/media/documents/MB%20Stream/PANAS.pdf). Scores can range from 10 to 50, with higher scores representing higher levels of positive or negative affect, respectively.
S-STAI
For the S-STAI scale, we initially reverse the scores of the positively worded items and then total the scoring weights, resulting in the STAI score (the higher the score, the more stressed the participant feels), as per the scoring instructions (https://oml.eular.org/sysModules/obxOML/docs/id_150/State-Trait-Anxiety-Inventory.pdf). To assign some interpretation to the numerical value, we also create a categorical variable, assigning each user to a STAI stress level (i.e., one of three levels: (1) below average, (2) average, or (3) above average STAI score). Note that, due to human error, the S-STAI scale was administered with a 5-point Likert scale instead of a 4-point one. During processing, we convert each item to a 4-point scale in accordance with the original instrument.
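One plausible way to map the accidentally administered 5-point responses onto the original 4-point range is a linear rescaling; the exact rule used is not stated here, so treat this as an illustrative assumption:

```python
def likert_5_to_4(x: int) -> float:
    """Linearly rescale a 1-5 Likert response onto the 1-4 range.
    One plausible mapping; the exact conversion rule is an assumption."""
    return 1 + (x - 1) * 3 / 4

print([likert_5_to_4(x) for x in range(1, 6)])
# [1.0, 1.75, 2.5, 3.25, 4.0]
```

The linear map preserves the endpoints (1 stays 1, 5 becomes 4) and keeps the responses equidistant.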
Privacy considerations
Before the start of the study, we made a commitment to the participants to protect their privacy and sensitive information. Therefore, we thoroughly anonymize the published dataset. In the process, we adhere to the following principles: (i) minimizing the probability of successful re-identification of users by real-world adversaries, (ii) maximizing the amount of retained data that is of use to researchers and practitioners, (iii) abiding by the principles and recommendations of the GDPR with regard to the handling of personal information, and (iv) following established anonymization practices and principles.
We strive to maintain k-anonymity22 of the dataset, ensuring that every user is indistinguishable from at least k−1 other individuals. Prior to anonymization, data are stored on secure university servers and proprietary cloud services. Under the GDPR, consent is valid for two years unless withdrawn; when participants withdraw their consent, their original data are removed. Since anonymized data fall outside the scope of the GDPR, they can be stored indefinitely.
We initially remove the identities of the participants, such as full names, usernames, and email addresses, and substitute them with a 12-byte random id per user (https://www.mongodb.com/docs/manual/reference/method/ObjectId/). Furthermore, we discard completely (or almost completely) the parameters that are highly identifying but of limited value to the recipients of the dataset, including ethnicity, country, language, and time zone. We also exclude or aggregate some of the physical characteristics of users, namely age, height, weight, and health conditions.
Handling quasi-identifiers
By quasi-identifiers we mean non-time-series parameters that can be employed for re-identification. For the anthropometric parameters that cannot be disclosed unmodified, we release aggregated versions instead. We choose the ranges not only to maintain the anonymity of the dataset but also to reflect real-world divisions. Thus, we split participants into two age categories: below 30, i.e., young adults, and the rest. In a similar manner, we withhold the height and weight of the participants and release BMI instead. To protect users who may have outlier values for this parameter, we form three distinct BMI groups: ≤19, ≥25, and ≥30 for underweight, overweight, and obese individuals, respectively. The participants' gender (in the context of this study, gender signifies a binary choice offered by Fitbit) is the only quasi-identifier that is released unaltered. Based on the quasi-identifiers present in the release version of the data (gender, age, and extreme values of BMI), we show that our dataset achieves 2- to 12-anonymity, depending on the "strength" of the adversary we consider.
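The k-anonymity of a release can be verified by taking the size of the smallest equivalence class over the quasi-identifier tuple. A minimal sketch, with a hypothetical toy population rather than actual participants:

```python
from collections import Counter

def anonymity_k(quasi_rows):
    """k-anonymity of a release: the size of the smallest equivalence class
    over the quasi-identifier tuple (here: gender, age group, BMI group)."""
    return min(Counter(quasi_rows).values())

# Hypothetical quasi-identifier values; None = no extreme BMI group released.
rows = [
    ("M", "<30", None), ("M", "<30", None), ("M", "<30", ">=25"),
    ("F", ">=30", None), ("F", ">=30", None), ("M", "<30", ">=25"),
]
print(anonymity_k(rows))  # 2
```

Every individual in this toy release shares its quasi-identifier tuple with at least one other person, hence 2-anonymity.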
To ensure that the anonymity requirements are met, we need to verify that a realistic adversary cannot re-identify the participants. For instance, if the attacker knows that, say, John Smith, who participated in the experiment, took exactly 12,345 steps on a given day, they could easily de-anonymize him. No anonymization (except, perhaps, perturbing the time-series data with noise, thereby hindering their usability) can preserve privacy in that case.
Instead, we consider a realistic yet strong threat model. Suppose the adversary obtained the list of all the participants in the dataset and found their birth dates from public sources, as depicted in Fig. 5. Moreover, the attacker stalked all the users from the list online or in public places and learned their appearances and some of their distinct traits, e.g., very tall, plump, etc. For the privacy of LifeSnaps to be compromised, it is enough for the adversary to de-anonymize just a single user. Therefore, we protect the dataset as a whole by ensuring the anonymity of every individual. If the attacker utilizes only the definite quasi-identifiers (gender and age), our dataset achieves unconditional 12-anonymity. Since we do not disclose the height and weight of the participants, the attacker cannot directly utilize observations of the users' physical states (column "Note" in Fig. 5) for de-anonymization. Suppose the attacker wants to make use of the appearances and associate them with the released BMI. Since BMI is a function of height and weight, they need to estimate both attributes in order to calculate it (Eq. 1). Suppose the adversary is able to guess the parameters with some error intervals, say, ±5 kg for weight and ±5 cm for height. The final error interval for BMI can be calculated according to error propagation (Eq. 2).
$$BMI=\frac{weight}{height^{2}}$$
$${\varepsilon }_{BMI}=\frac{\Delta weight}{weight}+\frac{2\,\Delta height}{height}$$
In the threat model we consider, the adversary obtained a list of all the participants in the dataset, found their age, and learned their appearances. They aim to link the individuals (or a single person) back to their aggregated data. Since we do not release height and weight, the physical appearances of the users are significantly less beneficial for de-anonymization. LifeSnaps is 12-anonymous under the normal threat model and at least 2-anonymous under the strongest one.
For instance, for a person of 170 cm and 70 kg, while \(\lfloor BMI\rfloor =24\), the error interval I contains 8 integer values, I = {21, 22, 23, 24, 25, 26, 27, 28}. In fact, the list of possible values for BMI stretches nearly from underweight to obese. It is evident that, due to the error propagation, the adversary is not able to glean significant insights into the users. Even if the attacker is able to perfectly estimate the height and weight of the participants, i.e., with no error interval, they can diminish the anonymity factor k at most down to 2.
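Eqs. 1 and 2 applied to the worked example above (170 cm, 70 kg, with ±5 kg and ±5 cm estimation errors) can be reproduced directly:

```python
import math

def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2  # Eq. 1

def bmi_rel_error(weight_kg, d_weight, height_m, d_height):
    # Eq. 2: relative error of BMI propagated from weight and height estimates.
    return d_weight / weight_kg + 2 * d_height / height_m

b = bmi(70, 1.70)
eps = bmi_rel_error(70, 5, 1.70, 0.05)
lo, hi = b * (1 - eps), b * (1 + eps)
print(math.floor(b), round(lo, 1), round(hi, 1))  # 24 21.1 27.4
```

The ~13% relative error spreads the adversary's BMI estimate across roughly the whole 21-28 band, which is what defeats appearance-based matching.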
Data Records
The LifeSnaps data record is distributed in two formats to facilitate access to broader audiences with knowledge of different technologies, and researchers and practitioners across diverse scientific domains:
Through compressed binary encoded JSON (BSON) exports of raw data (stored in the mongo_rais_anonymized folder) to facilitate storage in a MongoDB instance (https://www.mongodb.com/);
Through CSV exports of daily and hourly granularity (stored in the csv_rais_anonymized folder) to facilitate immediate usage in Python or any other programming language. Note that the CSV files are subsets of the MongoDB database presenting aggregated versions of the raw data.
The data are stored in Zenodo (http://zenodo.org/), a general-purpose open repository, and can be accessed online at https://doi.org/10.5281/zenodo.682668246. We have compressed the dataset to facilitate usage; the uncompressed version exceeds 9 GB. The data can be loaded in a single line of code, and loading instructions are provided in the repository's documentation. In the remainder of this section, we analyze the structure and organization of the raw data distributed via the MongoDB JSON exports.
The MongoDB database includes three collections, fitbit, sema, and surveys, containing the Fitbit, SEMA3, and survey data, respectively. Each document in any collection follows the format shown in Fig. 6 (top left), consisting of four fields: _id, id (also found as user_id in the sema and surveys collections), type, and data. The _id field is the MongoDB-defined primary key and can be ignored. The id field refers to a user-specific ID used to uniquely identify each user across all collections. The type field refers to the specific data type within the collection, e.g., steps (see Table 2 for the distinct types per collection). The data field contains the actual information of the document, e.g., the step count for a specific timestamp for the steps type, in the form of an embedded object. The contents of the data object are type-dependent, meaning that the fields within the data object differ between data types. In other words, a steps record has a different data structure (e.g., step count, timestamp, etc.) compared to a sleep record (e.g., sleep duration, sleep start time, sleep end time, etc.). As mentioned previously, all timestamps are stored in local time format, and user IDs are common across different collections.
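In practice, the collections are queried through a MongoDB client (e.g., pymongo's `db.fitbit.find({"type": "steps"})`). The sketch below mimics the described document schema with plain dicts so it runs stand-alone; the field values are illustrative:

```python
# Toy documents following the {id, type, data} schema described above.
docs = [
    {"id": "u1", "type": "steps", "data": {"dateTime": "2021-06-01T10:00:00", "value": "120"}},
    {"id": "u1", "type": "sleep", "data": {"duration": 25200}},
    {"id": "u2", "type": "steps", "data": {"dateTime": "2021-06-01T10:00:00", "value": "80"}},
]

def by_type(collection, doc_type):
    """Select all documents of a given type, returning (user id, data) pairs."""
    return [(d["id"], d["data"]) for d in collection if d["type"] == doc_type]

print(by_type(docs, "steps"))
```

Because the `data` object is type-dependent, downstream code branches on `type` before interpreting the embedded fields.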
A generic document format for all three collections in MongoDB (top left), along with example document formats for each collection, fitbit, sema, and surveys.
Fitbit collection
The fitbit collection contains 32 distinct data types, such as steps or sleep (see Table 2 for the full list). For the contents of individual Fitbit data types, researchers should refer to the official company documentation (https://dev.fitbit.com/build/reference/web-api/) for definitions. To assist future usage, we also provide a UML diagram of the available Fitbit data types and their subtypes in our dataset in Fig. 7. Figure 6 (bottom right) presents the structure of an example document of the fitbit collection.
A UML diagram for the Fitbit modality, including all available data types and subtypes.
Surveys collection
The surveys collection contains six distinct data types, namely IPIP, BREQ-2, demographics, PANAS, S-STAI, and TTM. Each type contains the user responses to the respective survey questions (SQ), as well as certain timestamp fields (see the top right document in Fig. 6 for an example BREQ-2 response). Specifically, each SQ is encoded in the dataset for ease-of-use purposes and CSV compatibility. For example, the first SQ of the BREQ-2 scale, i.e., I exercise because other people say I should, is encoded as engage[SQ001]. For decoding purposes, we provide tables mapping these custom codes to scale items in the GitHub repository (https://github.com/Datalab-AUTH/LifeSnaps-EDA/blob/main/SURVEYS-Code-to-text.xlsx) and Zenodo. User responses to SQs are shared as is, without any encoding. Each survey document also contains three timestamp fields, namely "submitdate" (i.e., the timestamp of form submission), "startdate" (i.e., the timestamp of form initiation), and "datestamp" (i.e., the timestamp of the last edit in the form after submission; coincides with the "submitdate" if no edits were made). To facilitate usage, in the folder scored_surveys, we provide scored versions of the raw survey data in CSV format, as discussed in the Data Preprocessing section.
SEMA collection
Finally, the sema collection contains two distinct data types, namely the Context and Mood Survey and the Step Goal Survey. Each EMA document contains a set of fields related to the study and the survey design (e.g., "STUDY_NAME", "SURVEY_NAME"), as well as the participant responses to the EMA questions and timestamp fields. Figure 6 (bottom left) presents an example of the "Context and Mood Survey", where the participant is at home (i.e., "PLACE": "HOME") and feeling tense (i.e., "MOOD": "TENSE/ANXIOUS"). Apart from the pre-defined context choices, participants are also allowed to enter their location as free text, which appears in the "OTHER" field. Concerning the timestamp fields, each document contains the following: "CREATED_TS" (i.e., the timestamp when the survey was created within the SEMA3 app), "SCHEDULED_TS" (i.e., the timestamp when the survey was scheduled, e.g., when the participant received the notification), "STARTED_TS" (i.e., the timestamp when the participant started the survey), "COMPLETED_TS" (i.e., the timestamp when the participant completed the survey), "EXPIRED_TS" (i.e., the timestamp when the survey expired), and "UPLOADED_TS" (i.e., the timestamp when the survey was uploaded to the SEMA3 server). Note that if a participant completes a scheduled EMA, the EXPIRED_TS field is null; otherwise, the COMPLETED_TS field is null. Only completed EMAs have valid values in the STEPS field for the Step Goal Survey or the PLACE and MOOD fields for the Context and Mood Survey.
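The mutually exclusive COMPLETED_TS/EXPIRED_TS convention described above makes it straightforward to classify EMAs and derive a compliance rate. A minimal sketch, with hypothetical timestamp values:

```python
def ema_status(doc):
    """Per the schema: a completed EMA has EXPIRED_TS null,
    while an expired one has COMPLETED_TS null."""
    if doc.get("COMPLETED_TS") is not None:
        return "completed"
    if doc.get("EXPIRED_TS") is not None:
        return "expired"
    return "unknown"

emas = [
    {"COMPLETED_TS": "2021-06-01T10:02:00", "EXPIRED_TS": None, "MOOD": "HAPPY"},
    {"COMPLETED_TS": None, "EXPIRED_TS": "2021-06-01T10:30:00"},
]
rate = sum(ema_status(e) == "completed" for e in emas) / len(emas)
print(rate)  # 0.5
```

Only the "completed" documents carry valid PLACE/MOOD (or STEPS) responses, so this filter is a natural first step before any analysis.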
We built the LifeSnaps dataset with the goal of serving multi-purpose scientific research. In this section, we discuss indicative use cases for the data (see Fig. 8), and we hope that researchers and practitioners will devise further uses for various aspects of the data.
Indicative use cases for the LifeSnaps dataset.
Among others, the LifeSnaps dataset includes data emerging from diverse vital signals, such as heart rate, temperature, and oxygen saturation. Such signals can be of use to medical researchers and practitioners for general health monitoring, but also to signal processing experts due to their fine granularity. When it comes to coarse granularity data, the dataset includes a plethora of physical activities, sleep sessions, and other data emerging from behaviors related to physical well-being. Such data can be exploited by researchers and practitioners within the sleep research and sports sciences domain to study how human behavior affects overall physical well-being. On top of that, the LifeSnaps dataset is a rich source of mood and affect data, both measured and self-reported, that have the potential to empower research in the domains of stress monitoring and prediction, and overall mental well-being. Additionally, the diverse modalities of the dataset allow for exploring the correlation between objectively measured and self-reported mental and physical health data, while the psychological and behavioral scales distributed can facilitate research in the behavioral sciences domain. On a different note, incentive schemes, such as badges, and their effect on user behavior, user compliance, and engagement with certain system features could be of great interest to human-computer interaction experts. Finally, handling such sensitive data can fuel discussions and efforts towards privacy preservation research and data valorization.
Technical Validation
Fitbit data validity has been extensively studied and verified in prior work47,48,49. Therefore, this section goes beyond data validity to discuss data quality issues emerging from the study procedure, as presented in the previous section. Specifically, we provide details on user engagement and compliance with the study, and we delineate data completeness and other limitations that we encountered during the process.
Throughout both study rounds, the users received reminder e-mails, as discussed earlier, for weekly survey completion. The open and click rates after e-mail communication vary, as shown in Fig. 9. Overall, compliance during the first round is higher than during the second, while in both study rounds there is a sharp decrease between weeks 2–4 in open, click, and response rates. The open and click rates follow approximately the same trend, exhibiting high percentages (~80% and ~65%, respectively) at the beginning and end of the study batches and medium percentages between weeks 5–7 (~70% and ~50%, respectively). The response rate shows a steep drop from week 1 to weeks 3–4 in both rounds, plummeting to almost 30% at its lowest level (from more than 90% at the beginning) and settling around 40–50% for the second month of each round. Note that the response rate is sometimes higher than the open or click rates due to resent reminder e-mails (discussed earlier in the data collection monitoring section). An interesting finding is that reminder e-mails seem to be more effective in the first weeks of the study, where the response rate is higher than the open and click rates, but have diminished effectiveness later on, where the response rate is similar to the click rate excluding resents.
Mail communication open and click rates (excluding resent e-mails) and response rates (overall) for first (left) and second (right) study round.
SEMA3
We also study and quantify user engagement through the lens of the SEMA3 EMA responses. The average SEMA3 user compliance across both rounds is 43%, as calculated by the SEMA3 online platform. Moreover, Fig. 10 shows the different compliance levels in terms of response rate across participants. Contrary to related studies of a similar scale17, where there is a significant decline in the number of users responding to the daily and weekly surveys and the response rate distribution closely resembles an exponential decay curve, in our experiment nearly 1 in 3 participants replied to more than 75% of the EMA surveys (bottom subfigure). In the top subfigure, we also notice that numerous participants exhibited extremely high levels of engagement (>90%). Additionally, Fig. 11 depicts when the participants preferred to answer the SEMA3 EMA surveys throughout the week. The most common time slot was on weekdays between 10 am and 12 pm.
SEMA3 EMA response rate bar plot (top) and histogram (bottom) for both rounds of the experiment.
When did our participants answer the SEMA3 EMAs?
Regarding the completeness of the SEMA3 responses, participants engaged in various activities throughout their daily routine during the study. The Context and Mood EMA responses (Figs. 12 and 13) give us a peek into the participants' lifestyles, given that data are collected in the wild. During both rounds, the most frequent response to where our participants were located was HOME, which is unsurprising during the Covid-19 pandemic, with the second most frequent response being either WORK or SCHOOL. During the first study round, the answer OUTDOORS appears more often than in the second one, which might be partly explained by the fact that the first round occurred during summer in Europe, when the weather is friendlier for outdoor activities. Regarding emotions, interestingly, TIREDNESS is a response that usually appears at the beginning of the week, and SADNESS appears on Mondays or in the middle of the week. On the same note, the participants seem to select HAPPY more often on Fridays or during the weekends. HAPPY was also selected an outstanding number of times during the Christmas vacations.
Context (left) and mood (right) EMA responses per day for the first study round. Every date shown in the figure is a Monday.
Context (left) and mood (right) EMA responses per day for the second study round. Every date shown in the figure is a Monday.
Having discussed user engagement with regard to surveys and EMAs, from here onward we explore compliance with the Fitbit device. The heatmaps in Fig. 14 display the Fitbit data availability for both rounds, incorporating the distance, steps, and estimated oxygen variation Fitbit types, while the ones in Fig. 15 depict all available Fitbit types throughout the study dates of both study rounds. Although there is a declining trend throughout the study dates in both study batches, the participants' engagement is still outstanding, producing valuable data, as the mean user engagement reaches 41.37 days (42.71 days in the first round and 40.03 in the second round), corresponding to 73% of the total study days. The standard deviation is 16.67 days and the median is 49.5 days in the first round, while in the second round the standard deviation and median are 17.39 and 40.13 days, respectively. An interesting finding is a decline in Fitbit usage in the first days of the year (second round), and a return to normal use shortly after. We also notice that the number of users who wear their Fitbit device during the night is smaller than during the daytime, which is in accordance with prior work revealing that almost 1 out of 2 users do not wear their watch during sleeping or bathing50.
Fitbit data availability throughout the first (left) and second (right) study round.
All Fitbit data types available throughout the first (left) and second (right) study rounds.
Summing up, in Fig. 16, all collection types, namely the synced Fitbit data, the Step Goal and Context and Mood SEMA3 questionnaires, and the survey data, are compared, visualizing the complete picture of user engagement throughout both rounds. In both study batches, we did not lose a significant number of participants from the beginning to the end, contributing to the completeness and quality of the LifeSnaps dataset. Note that while Fitbit and SEMA3 engagement waned with time, survey participation peaked at the end of the study. We presume this is due to the repeated e-mails unresponsive participants received for extracting and sharing their Fitbit data, accompanied by survey completion reminders. It is worth mentioning that the participants could respond to the surveys throughout the week after receiving the notification e-mail, which is why the survey data appear more spread out and lower across the study dates in both study rounds.
User engagement (Fitbit, SEMA3 and Surveys collections) throughout the first (left) and second (right) study round.
Digging a little deeper into the data, we can visualize (Fig. 17) the average steps (left) and the number of exercise sessions (right) for all participants combined for both study batches. We can clearly see a significant increase in the number of steps around 5 pm during the weekdays, presumably because most people were leaving their jobs at that time. As expected, the steps start to appear later in the morning during the weekends compared to the weekdays, and Saturday seems to be the day of the week with the largest mean number of steps. Regarding the time and day of the week our participants preferred to exercise, on Mondays and Saturdays the sessions appear more spread throughout the daytime. Similarly to the steps pattern, there is a rise in the number of exercise sessions starting from 5 pm during the weekdays and an outstanding elevation on Saturdays.
Pattern analysis: When and how much have the participants walked (left) and exercised (right).
Note here that the majority of published datasets do not provide information on user engagement. On the contrary, we discuss user engagement patterns for all three modalities (i.e., Fitbit, SEMA, and surveys) separately, reaffirming the quality and validity of the LifeSnaps dataset. Additionally, according to Yfantidou et al.51, 70% of health behavior change interventions (user studies) utilizing self-tracking technology have a sample size of fewer than 70 participants, while 59% have a duration of fewer than 8 weeks. In accordance with these findings, the median sample size of the relevant studies discussed in the introduction is a mere 16 users, while a significant number of these datasets are constrained to lab conditions. Such comparisons highlight the research gap we aim to close and the LifeSnaps dataset's importance for advancing ubiquitous computing research.
Limitations

The LifeSnaps study experienced limitations similar to previous personal informatics data collection studies, both in terms of participant engagement and technical issues.
As discussed earlier, while participant dropout rates are low (7%), there exists a decreasing trend in compliance over time across data modalities. Despite the data collection monitoring system in place and the subsequent nudging of non-compliant participants, we still encountered a ~18% decrease in SEMA3 compliance (~24% in the first round and ~11% in the second round), a ~18% decrease in Fitbit compliance (~24% in the first round and ~12% in the second round), and approximately a 5% decrease in survey communication open rates between entry and exit weeks. Note here that a small percentage of Fitbit non-compliance is due to allergic reactions to the watch's wristband, which participants reported to the research team. At the same time, waning compliance with the smartwatch is expected due to well-documented novelty effects lasting from a few hours to a few weeks52. Nevertheless, such compliance rates are still higher than those of similar studies of the same duration53. To better understand how non-compliant participants can be engaged, future studies should trial differing incentive mechanisms and gauge how such participants respond.
Independent of compliance issues, we also encountered a set of technical limitations that were out of our control:
Device incompatibility
One participant (~1%) reported full incompatibility of their mobile phone device with the SEMA3 app. To decrease entry barriers, we did not restrict accepted operating systems for the users' mobile phones. Despite SEMA3 supporting both Android and iOS, there are known limitations with regard to certain mobile phone brands (https://docs.google.com/document/d/1e0tQvFSnegpJg2P5mfMU7N7R9D3gTZatzxGmVy-EkQE/edit). Additionally, multiple participants reported issues with notifications and SEMA3 operation. All received troubleshooting support in accordance with the official manual above. There were no compatibility issues reported for the Fitbit device.
Corrupted data

While processing the data, we encountered corrupted data records due to either user or system errors. For instance, there exists a user weighing 10 kg, an erroneous entry that also affects the calculation of correlated data types such as calories. Concerning system errors, we noticed corrupted dates (e.g., 1 January 1980 00:00:00) for a single data type in the surveys collection, which emerged from a misconfiguration of the proprietary LimeSurvey server. We also encountered outlier data values (e.g., a stress score of 0) outside the accepted ranges. We leave such entries intact since they are easily distinguishable and can be handled on a case-by-case basis. Finally, we excluded swimming data because they portrayed an unrealistic picture of the users' activity, i.e., false positives of swimming sessions in an uncontrolled setting, possibly due to issues with the activity recognition algorithm.
Missing data
Missing Fitbit data emerge either from no-wear time or from missing export files. No-wear time is related to participants' compliance and has been discussed previously (see Figs. 14 and 15). Missing export files most likely emerge from errors in the archive export process on the Fitbit website and are outside of our control. For instance, while we have resting heart rate data for 71 users, we only have heart rate data (the most common data type) for 69 users (~3% missing). Similarly, as seen in Table 2, we encountered unexpectedly missing export files for the steps (~3%), profile (~1%), exercise (~3%), and distance (~4%) data types. Missing SEMA3 and survey data emerge solely from non-compliance.
External influences
We encouraged the participants to continue their normal lives during the study without scripted tasks. Such an approach embraces our study's "in the wild" nature but is more sensitive to external influences. For instance, we assume a single time zone per participant (taken from their Fitbit profile), as described in our methodology. However, if a participant chooses to travel to a location with a different time zone during the study, this assumption no longer holds. Based on our observations from the SEMA3 provided time zones, this only applies to ~3% of participants and for a limited time period. Additionally, the study took place during the Covid-19 pandemic and its related restrictions, which might have affected the participants' behavior. However, we see this external influence as an opportunity for further research rather than a limitation.
Lack of documentation
While Fitbit provides thorough documentation for its APIs, there is no documentation regarding the archive export file format, with the exception of the integrated README files. As a result, it was impossible to determine the export time zone per Fitbit data type without analysis. To overcome this challenge, we compared, record by record, the exported data with the dashboard data on the Fitbit mobile app, which are always presented in local time. Table 3 is the result of this comparison.
While we provide the LifeSnaps dataset with full open access to "enable third parties to access, mine, exploit, reproduce and disseminate (free of charge for any user) this research data" (https://ec.europa.eu/research/participants/docs/h2020-funding-guide/cross-cutting-issues/open-access-data-management/open-access_en.htm), we encourage everyone who uses the data to abide by the following code of conduct:
You confirm that you will not attempt to re-identify the study participants under any circumstances;
You agree to abide by the guidelines for ethical research as described in the ACM Code of Ethics and Professional Conduct54;
You commit to maintaining the confidentiality and security of the LifeSnaps data;
You understand that the LifeSnaps data may not be used for advertising purposes or to re-contact study participants;
You agree to report any misuse, intentional or not, to the corresponding authors by mail within a reasonable time;
You promise to include a proper reference on all publications or other communications resulting from using the LifeSnaps data.
All code for dataset anonymization is available at https://github.com/Datalab-AUTH/LifeSnaps-Anonymization. All code for reading, processing, and exploring the data is made openly available at https://github.com/Datalab-AUTH/LifeSnaps-EDA. Information about setup, code dependencies, and package requirements is available in the same GitHub repository.
Nakagawa, M. et al. Daytime nap controls toddlers' nighttime sleep. Scientific Reports 6, 1–6 (2016).
Kekade, S. et al. The usefulness and actual use of wearable devices among the elderly population. Computer methods and programs in biomedicine 153, 137–159 (2018).
Wang, R. et al. Studentlife: Assessing mental health, academic performance and behavioral trends of college students using smartphones. 3–14, https://doi.org/10.1145/2632048.2632054 (Association for Computing Machinery, Inc, 2014).
Düking, P., Hotho, A., Holmberg, H.-C., Fuss, F. K. & Sperlich, B. Comparison of non-invasive individual monitoring of the training and health of athletes with commercially available wearable technologies. Frontiers in physiology 7, 71 (2016).
Schaule, F., Johanssen, J. O., Bruegge, B. & Loftness, V. Employing consumer wearables to detect office workers' cognitive load for interruption management. Proceedings of the ACM on interactive, mobile, wearable and ubiquitous technologies 2, 1–20 (2018).
Arias, O., Wurm, J., Hoang, K. & Jin, Y. Privacy and security in internet of things and wearable devices. IEEE Transactions on Multi-Scale Computing Systems 1, 99–109 (2015).
Lara, O. D. & Labrador, M. A. A survey on human activity recognition using wearable sensors. IEEE communications surveys & tutorials 15, 1192–1209 (2012).
Mencarini, E., Rapp, A., Tirabeni, L. & Zancanaro, M. Designing wearable systems for sports: a review of trends and opportunities in human–computer interaction. IEEE Transactions on Human-Machine Systems 49, 314–325 (2019).
Tricoli, A., Nasiri, N. & De, S. Wearable and miniaturized sensor technologies for personalized and preventive medicine. Advanced Functional Materials 27, 1605271 (2017).
Qaim, W. B. et al. Towards energy efficiency in the internet of wearable things: A systematic review. IEEE Access 8, 175412–175435 (2020).
Perez-Pozuelo, I. et al. The future of sleep health: a data-driven revolution in sleep science and medicine. NPJ digital medicine 3, 1–15 (2020).
Butte, N. F., Ekelund, U. & Westerterp, K. R. Assessing physical activity using wearable monitors: measures of physical activity. Med Sci Sports Exerc 44, S5–12 (2012).
De Zambotti, M., Cellini, N., Goldstone, A., Colrain, I. M. & Baker, F. C. Wearable sleep technology in clinical and research settings. Medicine and science in sports and exercise 51, 1538 (2019).
Hickey, B. A. et al. Smart devices and wearable technologies to detect and monitor mental health conditions and stress: A systematic review. Sensors 21, 3461 (2021).
Izmailova, E. S., Wagner, J. A. & Perakslis, E. D. Wearable devices in clinical trials: hype and hypothesis. Clinical Pharmacology & Therapeutics 104, 42–52 (2018).
Althoff, T. et al. Large-scale physical activity data reveal worldwide activity inequality. Nature 547, 336–339, https://doi.org/10.1038/nature23018 (2017).
Chan, Y. F. Y. et al. Data descriptor: The asthma mobile health study, smartphone data collected using researchkit. Scientific Data 5, https://doi.org/10.1038/sdata.2018.96 (2018).
Hershman, S. G. et al. Physical activity, sleep and cardiovascular health data for 50,000 individuals from the myheart counts study. Scientific Data 6, https://doi.org/10.1038/s41597-019-0016-7 (2019).
Thambawita, V. et al. Pmdata: A sports logging dataset. 231–236, https://doi.org/10.1145/3339825.3394926 (Association for Computing Machinery, Inc, 2020).
Vaizman, Y., Ellis, K. & Lanckriet, G. Recognizing detailed human context in the wild from smartphones and smartwatches. IEEE pervasive computing 16, 62–74 (2017).
Schmidt, P., Reiss, A., Duerichen, R. & Laerhoven, K. V. Introducing wesad, a multimodal dataset for wearable stress and affect detection. 400–408, https://doi.org/10.1145/3242969.3242985 (Association for Computing Machinery, Inc, 2018).
Sweeney, L. k-anonymity: A model for protecting privacy. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 10, 557–570 (2002).
European Commission. 2018 Reform of EU Data Protection Rules.
Koval, P. et al. Sema3: Smartphone ecological momentary assessment, version 3. Computer software]. Retrieved from http://www.sema3.com (2019).
LimeSurvey Project Team/Carsten Schmitz. LimeSurvey: An Open Source survey tool. LimeSurvey Project, Hamburg, Germany (2012).
Warburton, D. E. et al. Evidence-based risk assessment and recommendations for physical activity clearance: an introduction (2011).
Goldberg, L. R. The development of markers for the big-five factor structure. Psychological assessment 4, 26 (1992).
Nigg, C., Norman, G., Rossi, J. & Benisovich, S. Processes of exercise behavior change: Redeveloping the scale. Annals of behavioral medicine 21, S79 (1999).
Nigg, C. Physical activity assessment issues in population based interventions: A stage approach. Physical activity assessments for health-related research 227–239 (2002).
Marcus, B. H. & Simkin, L. R. The transtheoretical model: applications to exercise behavior. Medicine & Science in Sports & Exercise (1994).
Prochaska, J. O. & Velicer, W. F. The transtheoretical model of health behavior change. American journal of health promotion 12, 38–48 (1997).
Markland, D. & Tobin, V. A modification to the behavioural regulation in exercise questionnaire to include an assessment of amotivation. Journal of Sport and Exercise Psychology 26, 191–196 (2004).
Ryan, R. M. & Deci, E. L. Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. American psychologist 55, 68 (2000).
Hutchins, E. Cognition in the Wild (MIT press, 1995).
Suchman, L. A. Plans and situated actions: The problem of human-machine communication (Cambridge university press, 1987).
Lave, J. Cognition in practice: Mind, mathematics and culture in everyday life (Cambridge University Press, 1988).
Rogers, Y. & Marshall, P. Research in the wild. Synthesis Lectures on Human-Centered Informatics 10, i–97 (2017).
Lee, J., Kim, D., Ryoo, H.-Y. & Shin, B.-S. Sustainable wearables: Wearable technology for enhancing the quality of human life. Sustainability 8, 466 (2016).
Vaizman, Y., Ellis, K., Lanckriet, G. & Weibel, N. Extrasensory app: Data collection in-the-wild with rich user interface to self-report behavior. In Proceedings of the 2018 CHI conference on human factors in computing systems, 1–12 (2018).
Fitbit LLC. Fitbit sense user manual. Fitbit LLC (2022).
Shiffman, S., Stone, A. A. & Hufford, M. R. Ecological momentary assessment. Annu. Rev. Clin. Psychol. 4, 1–32 (2008).
Kim, J., Marcusson-Clavertz, D., Yoshiuchi, K. & Smyth, J. M. Potential benefits of integrating ecological momentary assessment data into mhealth care systems. BioPsychoSocial medicine 13, 1–6 (2019).
Jones, T. L., Baxter, M. & Khanduja, V. A quick guide to survey research. The Annals of The Royal College of Surgeons of England 95, 5–7 (2013).
Watson, D., Clark, L. A. & Tellegen, A. Development and validation of brief measures of positive and negative affect: the panas scales. Journal of personality and social psychology 54, 1063 (1988).
Spielberger, C. D., Sydeman, S. J., Owen, A. E. & Marsh, B. J. Measuring anxiety and anger with the State-Trait Anxiety Inventory (STAI) and the State-Trait Anger Expression Inventory (STAXI). (Lawrence Erlbaum Associates Publishers, 1999).
Yfantidou, S. et al. LifeSnaps: a 4-month multi-modal dataset capturing unobtrusive snapshots of our lives in the wild, Zenodo, https://doi.org/10.5281/zenodo.6826682 (2022).
Redenius, N., Kim, Y. & Byun, W. Concurrent validity of the fitbit for assessing sedentary behavior and moderate-to-vigorous physical activity. BMC medical research methodology 19, 1–9 (2019).
Feehan, L. M. et al. Accuracy of fitbit devices: systematic review and narrative syntheses of quantitative data. JMIR mHealth and uHealth 6, e10527 (2018).
Nelson, B. W. & Allen, N. B. Accuracy of consumer wearable heart rate measurement during an ecologically valid 24-hour period: intraindividual validation study. JMIR mHealth and uHealth 7, e10828 (2019).
Jeong, H., Kim, H., Kim, R., Lee, U. & Jeong, Y. Smartwatch wearing behavior analysis: a longitudinal study. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 1, 1–31 (2017).
Yfantidou, S., Sermpezis, P. & Vakali, A. Self-tracking technology for mhealth: A systematic review and the past self framework. arXiv preprint arXiv:2104.11483 (2021).
Cecchinato, M. E., Cox, A. L. & Bird, J. Always on (line)? user experience of smartwatches and their role within multi-device ecologies. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, 3557–3568 (2017).
Mundnich, K. et al. Tiles-2018, a longitudinal physiologic and behavioral data set of hospital workers. Scientific Data 7, 1–26 (2020).
Gotterbarn, D. et al. ACM code of ethics and professional conduct (2018).
Costa, P. T. & McCRAE, R. R. A five-factor theory of personality. The Five-Factor Model of Personality: Theoretical Perspectives 2, 51–87 (1999).
This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 813162. The content of this paper reflects only the authors' view and the Agency and the Commission are not responsible for any use that may be made of the information it contains. First and foremost, the authors would like to thank the participants of the LifeSnaps study who agreed to share their data for scientific advancement. The authors would like to further thank the web developers, T. Valk and S. Karamanidis, for their contribution to the project, all past and present RAIS fellows for their help with participants' recruitment, G. Pallis and M. Christodoulaki for their support with the ethics committee application, and B. Carminati for her feedback on data anonymization and privacy considerations.
Aristotle University of Thessaloniki, School of Informatics, Thessaloniki, 54124, Greece
Sofia Yfantidou, Christina Karagianni, Stefanos Efstathiou, Athena Vakali & Dimitrios Panteleimon Giakatos
Earkick, Zurich, 8008, Switzerland
Joao Palotti
Foundation for Research and Technology Hellas, Heraklion, 70013, Greece
Thomas Marchioro & Andrei Kazlouski
University of Insubria, Varese, 21100, Italy
Elena Ferrari
KTH Royal Institute of Technology, Stockholm, 11428, Sweden
Šarūnas Girdzijauskas
Sofia Yfantidou
Christina Karagianni
Stefanos Efstathiou
Athena Vakali
Dimitrios Panteleimon Giakatos
Thomas Marchioro
Andrei Kazlouski
S.Y., S.E., A.V. and S.G. conceived the study, S.Y. and S.E. designed the study, S.Y. conducted the study, S.Y., C.K., S.E. and J.P. analyzed the results, D.P.G. handled the database management, A.K., T.M., S.Y. and C.K. anonymized the data. All authors reviewed the manuscript.
Correspondence to Sofia Yfantidou or Athena Vakali.
Yfantidou, S., Karagianni, C., Efstathiou, S. et al. LifeSnaps, a 4-month multi-modal dataset capturing unobtrusive snapshots of our lives in the wild. Sci Data 9, 663 (2022). https://doi.org/10.1038/s41597-022-01764-x
Scientific Data (Sci Data) ISSN 2052-4463 (online)
Mathematics > Classical Analysis and ODEs
arXiv:1110.2839 (math)
[Submitted on 13 Oct 2011]
Title:Uniform Asymptotic Expansions for the Discrete Chebyshev Polynomials
Authors:J.H. Pan, R. Wong
Abstract: The discrete Chebyshev polynomials $t_n(x,N)$ are orthogonal with respect to a distribution function, which is a step function with jumps one unit at the points $x=0,1,..., N-1$, N being a fixed positive integer. By using a double integral representation, we derive two asymptotic expansions for $t_{n}(aN,N+1)$ in the double scaling limit, namely, $N\rightarrow\infty$ and $n/N\rightarrow b$, where $b\in(0,1)$ and $a\in(-\infty,\infty)$. One expansion involves the confluent hypergeometric function and holds uniformly for $a\in[0,1/2]$, and the other involves the Gamma function and holds uniformly for $a\in(-\infty, 0)$. Both intervals of validity of these two expansions can be extended slightly to include a neighborhood of the origin. Asymptotic expansions for $a\geq1/2$ can be obtained via a symmetry relation of $t_{n}(aN,N+1)$ with respect to $a=1/2$. Asymptotic formulas for small and large zeros of $t_{n}(x,N+1)$ are also given.
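The discrete orthogonality described in the abstract is easy to check numerically. Below is a sketch (plain Python with numpy, not code from the paper): it builds the orthogonal polynomials on the support points $x=0,1,\ldots,N-1$ by Gram–Schmidt on monomials, which yields the discrete Chebyshev polynomials up to normalization, and verifies that the Gram matrix is diagonal.

```python
import numpy as np

N, nmax = 10, 4
x = np.arange(N, dtype=float)                 # support points 0, 1, ..., N-1
V = np.vander(x, nmax + 1, increasing=True)   # monomials 1, x, ..., x^nmax

# Gram-Schmidt with the discrete inner product <f, g> = sum_{x=0}^{N-1} f(x) g(x)
Q = []
for col in V.T:
    v = col.copy()
    for q in Q:
        v -= (q @ v) / (q @ q) * q
    Q.append(v)
Q = np.array(Q)

G = Q @ Q.T                                   # Gram matrix; off-diagonal should vanish
assert np.allclose(G - np.diag(np.diag(G)), 0, atol=1e-6)
```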
Subjects: Classical Analysis and ODEs (math.CA); Complex Variables (math.CV)
Cite as: arXiv:1110.2839 [math.CA]
(or arXiv:1110.2839v1 [math.CA] for this version)
From: Jianhui Pan [view email]
[v1] Thu, 13 Oct 2011 04:31:59 UTC (3,323 KB)
Discussion about Hairer's existence theory for stochastic differential equations
The discussion started below this submission creation request, but it is better deserves a separate chat thread.
mathematical-physics
asked Aug 15, 2014 in Chat by SchrodingersCatVoter (-10 points) [ revision history ]
http://arxiv.org/abs/1303.5113 and also http://arxiv.org/abs/1401.3014; there are also great later articles by Hairer citing these. These are, to my mind, the biggest step in mathematical quantum field theory since Wilson.
answered Aug 15, 2014 by Ron Maimon (7,720 points) [ no revision ]
The first one is a book-sized paper. Could you please point to a particular result relevant for quantum YM? (The word ''Yang'' is in none of the two papers.)
commented Aug 15, 2014 by Arnold Neumaier (15,737 points) [ no revision ]
The first (2013) work says on p.4 in Remark 1.1 that they obtain a nonperturbative Euclidean equivalent of superrenormalizable QFT. Thus in 4D this is relevant at best to scalar QED, which is (apart from its relatives) the only superrenormalizable theory in 4D.
It won't contribute to the solution of the YM problem.
''The details of the method requires superrenormalizability for the time being, but this is just a technical limitation'' The same was claimed for the methods of constructive field theory, which could construct $\Phi^4_3$ long, long ago but then got stuck. The technical limitations are enormous! I believe true progress only when someone actually overcomes that barrier.
Nevertheless, the treatise is good, and valuable for the analysis of true SDE (rather than QFTs).
@RonMaimon, for the record Hairer just got Fields prize yesterday for this regularity stuff.
commented Aug 15, 2014 by Jia Yiyang (2,640 points) [ no revision ]
Oh wow! He got recognized so quickly? Great! congrats to him.
commented Aug 15, 2014 by Ron Maimon (7,720 points) [ no revision ]
But this is Euclidean $\Phi^4_3$, which is unrelated to its Minkowski version unless $O(4)$ invariance and reflection positivity are shown.
O(4) invariance is really automatic, as it is present in O(4) invariant regulators (his regulator is a smooth function convolved with the noise, you can make it a rotationally invariant bump, he always does in his specific examples anyway), and the convergence was established to be independent of regulator (this is one of the great advances of his formalism, the form of the regulator clearly doesn't matter in taking the limit). The thing that he constructs is an SPDE solution, with a good (distributional) continuum limit, whose long-time solution, if you take a constant time slice, has statistics that eventually converges to a random pick from $\phi^4_3$ path integral. These are already known to be reflection positive from other work, and it should be easy to show it directly, you asked about it, let an answer come. I believe (not sure, haven't thought at all) that with appropriate reflection positive statistics on the initial conditions, it is extremely easy to show reflection positivity for all time slices and for all regulators, not just asymptotically at the large t limit.
There is no way reflection positivity is going to stop the program (unlike, say, the superrenormalizability business which requires a real idea advancing this stuff). The program constructs the stochastic quantized field theory in what can only be called 'the right way'.
Any difficulty in turning the SPDE construction into a field theory construction can only lie in establishing that the long-time limit is converging (but it is true that the long time limit will be converging to the appropriate thing). You could, for example, define the SPDE for $\phi^3$ field theory in 4d, and it should be ok as an SPDE, the theory is superrenormalizable, but it would not converge at large times to anything, the solutions would run away to $\phi\rightarrow -\infty$.
It might also have a finite time limit to the SPDE evolution, there are no results that guarantee global solutions, as sometimes they won't exist. The results he proves show local existence of SPDE solutions, and you need a further estimate for global solutions. This one of the places where he focuses his current research, connecting the SPDE to the stationary distribution at long times. Again, he is interested in SPDEs, but the formalism is general enough to apply to any (bosonic/real-Euclidean-action) QFT.
I don't claim that Hairer solved all the problems of constructing QFTs at one stroke, he just solved the main one--- defining a rigorous renormalization procedure that is completely non-perturbative, easy to work with and prove things about, is general enough that it should work precisely in those cases when traditional renormalization does, and corresponds exactly to what physicists know about renormalization (it's essentially a framework for producing convergence from a rigorous version of the OPE--- the OPE defines the possible renormalization terms in products of regularized distributions, and then the analysis parts just show the statistical convergence when the regulator is relaxed).
Because he is doing stochastic quantization, his renormalization for the stochastic time evolution is carried out completely separately from constructing a vacuum state, and it works even when there is no vacuum state. The two problems are decoupled, and this allows you to think about short-distance renormalization without considering long-distance properties, like confinement. This separation property is central, it's what allowed the Glimm/Jaffe program to succeed too, but there the separation was intrinsically due to the superrenormalizability, it wasn't something that could be generalized. Here it can be generalized to any nonlinear SPDE, even stochastically quantized Yang-Mills theory.
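For reference, the stochastically quantized equation discussed throughout this thread can be sketched as follows (a schematic form, not taken from Hairer's papers; the renormalization constant $C$ is formal/divergent):

```latex
% Schematic dynamic \Phi^4_3 equation (stochastic quantization):
% \xi is space-time white noise, C a (divergent) renormalization constant.
\[
  \partial_t \phi \;=\; \Delta \phi \;-\; \phi^3 \;+\; C\,\phi \;+\; \xi .
\]
% Its long-time stationary law is, formally, the Euclidean measure
\[
  d\mu(\phi) \;\propto\; \exp\!\Big(-\!\int \tfrac{1}{2}\,|\nabla\phi|^2
      + \tfrac{1}{4}\,\phi^4 \, dx\Big)\, \mathcal{D}\phi .
\]
```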
PhysicsOverflow
State the first law of thermodynamics in terms of (a) the energy of the universe; (b) the creation or destruction of energy; (c) the energy change of system and surroundings. Does the first law reveal the direction of spontaneous change? Explain.
State qualitatively the relationship between entropy and freedom of particle motion. Use this idea to explain why you will probably never (a) be suffocated because all the air near you has moved to the other side of the room; (b) see half the water in your cup of tea freeze while the other half boils.
Why is $\Delta S_{\text { vap }}$ of a substance always larger than $\Delta S_{\text { fus }} ?$
How does the entropy of the surroundings change during an exothermic reaction? An endothermic reaction? Other than the examples in text, describe a spontaneous endothermic process.
(a) What is the entropy of a perfect crystal at 0 $\mathrm{K}$ ?
(b) Does entropy increase or decrease as the temperature rises?
(c) Why is $\Delta H_{\mathrm{f}}^{\circ}=0$ but $S^{\circ}>0$ for an element?
(d) Why does Appendix $\mathrm{B}$ list $\Delta H_{\mathrm{f}}^{\circ}$ values but not $\Delta S_{\mathrm{f}}^{\circ}$ values?
Distinguish between the terms spontaneous and nonspontaneous. Can a nonspontaneous process occur? Explain.
Explain the difference between the spontaneity of a reaction (which depends on thermodynamics) and the speed at which the reaction occurs (which depends on kinetics). Can a catalyst make a nonspontaneous reaction spontaneous?
Review the subsection in this chapter entitled Making a Nonspontaneous Process Spontaneous in Section 18.8. The hydrolysis of ATP, shown in Problem 91, is often used to drive nonspontaneous processes, such as muscle contraction and protein synthesis, in living organisms. The nonspontaneous process to be driven must be coupled to the ATP hydrolysis reaction. For example, suppose the nonspontaneous process is A + B → AB (ΔG° positive). The coupling of a nonspontaneous reaction such as this one to the hydrolysis of ATP is often accomplished by the mechanism:
a. Calculate K for the reaction between glutamate and ammonia. (The standard free energy change for the reaction is +14.2 kJ/mol. Assume a temperature of 298 K.)
b. Write a set of reactions such as those given showing how the glutamate and ammonia reaction can couple with the hydrolysis of ATP. What are ΔG°rxn and K for the coupled reaction?
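For part (a), the equilibrium constant follows from ΔG° = −RT ln K. A quick numeric check (a sketch in plain Python; the values +14.2 kJ/mol and 298 K come from the problem statement):

```python
import math

R = 8.314      # J/(mol*K), molar gas constant
T = 298.0      # K
dG = 14.2e3    # J/mol, standard free energy change for glutamate + NH3

# Delta G standard = -R T ln K  =>  K = exp(-Delta G / (R T))
K = math.exp(-dG / (R * T))
print(K)       # roughly 3.2e-3: reactants are strongly favored
```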
Compare and contrast spontaneous and nonspontaneous reactions.
(a) Can endothermic chemical reactions be spontaneous? (b) Can a process be spontaneous at one temperature and nonspontaneous at a different temperature? (c) Water can be decomposed to form hydrogen and oxygen, and the hydrogen and oxygen can be recombined to form water. Does this mean that the processes are thermodynamically reversible? (d) Does the amount of work that a system can do on its surroundings depend on the path of the process?
How to construct the unitary representation of the function $f(x, y, z) = (x \oplus y, y \oplus z)$?
Consider the function $f:\{0, 1\}^3\to\{0, 1\}^2$ with $f(x, y, z) = (x \oplus y, y \oplus z)$. How would you construct its standard unitary representation?
David
Unitary transformations are reversible, the function is not. – kludg Dec 2 '19 at 18:24
@kludg Well, ancillary qubits always come to the rescue. – Sanchayan Dutta Dec 3 '19 at 9:23
There's not exactly a standard way of doing it. The first thing you have to do is make the transformation unitary. The way that's guaranteed to work is to introduce 2 extra qubits; however, that's not necessary in this case. Instead, the circuit is very simple: a CNOT controlled on y targeting z, followed by a CNOT controlled on x targeting y.
This is for the function $g : \{ 0,1 \}^3 \to \{0,1\}^3$ $$ g(x,y,z)=(x,x\oplus y , y \oplus z) $$
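As a sanity check, the two-CNOT construction can be simulated classically on computational basis states (a sketch in plain Python, not tied to any quantum SDK):

```python
# The reversible extension g(x, y, z) = (x, x XOR y, y XOR z) acts on basis
# states as a permutation, realized by two CNOTs applied in this order:
#   CNOT(control=y, target=z), then CNOT(control=x, target=y).
from itertools import product

def g(x, y, z):
    z ^= y          # CNOT controlled on y, targeting z
    y ^= x          # CNOT controlled on x, targeting y
    return (x, y, z)

states = list(product((0, 1), repeat=3))
images = [g(*s) for s in states]

assert sorted(images) == states   # bijection => permutation => unitary on basis states
assert all(g(x, y, z)[1:] == (x ^ y, y ^ z) for x, y, z in states)  # last two bits recover f
```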
AHusain
by "there isn't a standard way to do this" do you mean that there isn't a canonical choice of "unitarization" of the function? With this I agree. But I would say that there is a standard procedure to solve this kind of problem. If you can look at the truth table of the function, the least number of ancillas you need to make it into a reversible gate is exactly equal to the (ceil of) the $\log_2$ of the maximum number of equal outputs assigned to the same input, that is $\lceil \log_2 \max_y |f^{-1}(y)|\rceil$ – glS Dec 3 '19 at 11:24
A good way to start is to have a look at the truth table of the function:
$$\begin{array}{ccccc} x & y & z & \text{out1} & \text{out2} \\\hline 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 1 \\ 0 & 1 & 0 & 1 & 1 \\ 0 & 1 & 1 & 1 & 0 \\ 1 & 0 & 0 & 1 & 0 \\ 1 & 0 & 1 & 1 & 1 \\ 1 & 1 & 0 & 0 & 1 \\ 1 & 1 & 1 & 0 & 0 \\ \end{array}$$
An interesting feature you might notice from this is that each output $(o_1,o_2)$ occurs exactly twice. This tells you that a single ancilla is enough to make this into a unitary mapping.
To build this mapping, you simply need to add a third output to each input $(x,y,z)$, taking care to assign different outcomes whenever two triples $(x,y,z)$ and $(x',y',z')$ are assigned the same value by $f$. Clearly, there are multiple ways to do this (more precisely, there are $2^4$ ways to do it). Once this assignment is done, the unitary transformation you are looking for is the corresponding permutation matrix.
An example would be the following:
$$\begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ \end{pmatrix}.$$
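One can verify this example matrix mechanically: it must be a permutation matrix (hence unitary), and the top two bits of each output index must reproduce f. A minimal numpy sketch (the encoding of (x, y, z) with x as the most significant bit is an assumption matching the truth table above):

```python
import numpy as np

# input basis index -> output basis index, read off the permutation matrix above
out_of = {0: 0, 1: 2, 2: 6, 3: 4, 4: 5, 5: 7, 6: 3, 7: 1}

P = np.zeros((8, 8), dtype=int)
for j, i in out_of.items():
    P[i, j] = 1                   # column j (input) maps to row i (output)

assert np.array_equal(P @ P.T, np.eye(8, dtype=int))  # permutation => unitary

for j in range(8):
    x, y, z = (j >> 2) & 1, (j >> 1) & 1, j & 1
    i = out_of[j]
    o1, o2 = (i >> 2) & 1, (i >> 1) & 1               # top two output bits
    assert (o1, o2) == (x ^ y, y ^ z)                 # third bit is the ancilla
```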
See also this other answer to a similar problem.
Results for 'Derick de Jongh'
Hospitality as a pivotal value in leadership: A transdisciplinary engagement with the case of Chief Albert Luthuli.Yolande Steenkamp & Derick de Jongh - 2021 - HTS Theological Studies 77 (4):1-10.details
This article presents hospitality as a pivotal value in the context of increasing diversity that characterises the complex relations in which leadership emerges. After reviewing the concept of Otherness in philosophy, the notion of hospitality as developed by Richard Kearney in relation to his philosophy of religion is introduced. The case of Nobel Peace Prize Laureate Chief Albert Luthuli is then presented as a biographical leadership study from the African context to illustrate how hospitality as open response to radical Otherness (...) may inspire collaboration and foster positive change. The article then addresses ways in which the notions of hospitality and Otherness present new opportunities to leadership studies for responding to the relational challenges of the globalised world. Amidst an increased scholarly focus on relationality and the need for relational intelligence, globalisation routinely confronts leaders and their followers with radical Otherness. Through dialogue between theology, philosophy of religion and leadership studies and by presenting a case from the African context, the article offers in print what is called for in the global context, namely an open response to the alterity of the Other that enables collaboration amidst increasing diversity. CONTRIBUTION: Proceeding from a transdisciplinary engagement, the article illustrates that leadership studies stood to benefit from dialogue with theology and philosophy of religion, which offers ways of addressing the Otherness that characterise the globalised context of leadership. (shrink)
Foreword to Special Issue on 'Responsible Leadership'.Nicola M. Pless, Thomas Maak & Derick de Jongh - 2011 - Journal of Business Ethics 98 (S1):1-1.details
Intermediate Logics and the de Jongh property.Dick de Jongh, Rineke Verbrugge & Albert Visser - 2011 - Archive for Mathematical Logic 50 (1-2):197-213.details
We prove that all extensions of Heyting Arithmetic with a logic that has the finite frame property possess the de Jongh property.
Intermediate Logics and the de Jongh property.Dick Jongh, Rineke Verbrugge & Albert Visser - 2011 - Archive for Mathematical Logic 50 (1-2):197-213.details
De Jongh and Glivenko theorems for equality theories ★.Alexey Romanov - 2007 - Journal of Applied Non-Classical Logics 17 (3):347-357.details
This paper is concerned with the logical structure of intuitionistic equality theories. We prove that De Jongh theorem holds for the theory of decidable equality, but uniform De Jongh theorem fails even for the theory of weakly decidable equality. We also show that the theory of weakly decidable equality is the weakest equality theory which enjoys Glivenko theorem.
The de Jongh property for Basic Arithmetic.Mohammad Ardeshir & S. Mojtaba Mojtahedi - 2014 - Archive for Mathematical Logic 53 (7-8):881-895.details
We prove that Basic Arithmetic, BA, has the de Jongh property, i.e., for any propositional formula A built up of atoms p1, ..., pn, BPC $\vdash$ A if and only if for all arithmetical sentences B1, ..., Bn, BA $\vdash$ A(B1, ..., Bn). The technique used in our proof can easily be applied to some known extensions of BA.
Binary modal logic and unary modal logic.Dick de Jongh & Fatemeh Shirmohammadzadeh Maleki - forthcoming - Logic Journal of the IGPL.details
Standard unary modal logic and binary modal logic, i.e. modal logic with one binary operator, are shown to be definitional extensions of one another when an additional axiom $U$ is added to the basic axiomatization of the binary side. This is a strengthening of our previous results. It follows that all unary modal logics extending Classical Modal Logic, in other words all unary modal logics with a neighborhood semantics, can equivalently be seen as binary modal logics. This in particular applies (...) to standard modal logics, which can be given simple natural axiomatizations in binary form. We illustrate this in the logic K. We call such logics binary expansions of the unary modal logics. There are many more such binary expansions than the ones given by the axiom $U$. We initiate an investigation of the properties of these expansions and in particular of the maximal binary expansions of a logic. Our results directly imply that all sub- and superintuitionistic logics with a standard modal companion also have binary modal companions. The latter also applies to the weak subintuitionistic logic WF of our previous papers. This logic doesn't seem to have a unary modal companion. (shrink)
Dick de Jongh and Franco Montagna. Provable fixed points. Zeitschrift für mathematische Logik und Grundlagen der Mathematik, vol. 34, pp. 229–250. [REVIEW]Lev D. Beklemishev - 1993 - Journal of Symbolic Logic 58 (2):715-717.details
A semantical proof of De Jongh's theorem.Jaap van Oosten - 1991 - Archive for Mathematical Logic 31 (2):105-114.details
In 1969, De Jongh proved the "maximality" of a fragment of intuitionistic predicate calculus for HA. Leivant strengthened the theorem in 1975, using proof-theoretical tools (normalisation of infinitary sequent calculi). By a refinement of De Jongh's original method (using Beth models instead of Kripke models and sheaves of partial combinatory algebras), a semantical proof is given of a result that is almost as good as Leivant's. Furthermore, it is shown that HA can be extended to Higher Order Heyting Arithmetic + all true $\Pi^0_2$-sentences + transfinite induction over primitive recursive well-orderings. As a corollary of the proof, maximality of intuitionistic predicate calculus is established wrt. an abstract realisability notion defined over a suitable expansion of HA. (shrink)
Stable Formulas in Intuitionistic Logic.Nick Bezhanishvili & Dick de Jongh - 2018 - Notre Dame Journal of Formal Logic 59 (3):307-324.details
In 1995 Visser, van Benthem, de Jongh, and Renardel de Lavalette introduced NNIL-formulas, showing that these are exactly the formulas preserved under taking submodels of Kripke models. In this article we show that NNIL-formulas are up to frame equivalence the formulas preserved under taking subframes of frames, that NNIL-formulas are subframe formulas, and that subframe logics can be axiomatized by NNIL-formulas. We also define a new syntactic class of ONNILLI-formulas. We show that these are the formulas preserved in monotonic (...) images of frames and that ONNILLI-formulas are stable formulas as introduced by Bezhanishvili and Bezhanishvili in 2013. Thus, ONNILLI is a syntactically defined set of formulas axiomatizing all stable logics. This resolves a problem left open in 2013. (shrink)
The future of psychopharmacological enhancements: Expectations and policies.Maartje Schermer, Ineke Bolt, Reinoud de Jongh & Berend Olivier - 2009 - Neuroethics 2 (2):75-87.details
The hopes and fears expressed in the debate on human enhancement are not always based on a realistic assessment of the expected possibilities. Discussions about extreme scenarios may at times obscure the ethical and policy issues that are relevant today. This paper aims to contribute to an adequate and ethically sound societal response to actual current developments. After a brief outline of the ethical debate concerning neuro-enhancement, it describes the current state of the art in psychopharmacological science and current uses (...) of psychopharmacological enhancement, as well as the prospects for the near future. It then identifies ethical issues regarding psychopharmacological enhancements that require attention from policymakers, both on the professional and on the governmental level. These concern enhancement research, the gradual expansion of medical categories, off-label prescription and responsibility of doctors, and accessibility of enhancers on the Internet. It is concluded that further discussion on the advantages and drawbacks of enhancers on a collective social level is still needed. (shrink)
Cognitive Enhancement in Applied Ethics
Botox for the Brain: Enhancement of Cognition, Mood and pro-Social Behavior and Blunting of Unwanted Memories.Reinoud de Jongh, Ineke Bolt, Maartje Schermer & Berend Olivier - 2008 - Neuroscience and Biobehavioral Reviews 32 (4):760–776.details
A sequence of decidable finitely axiomatizable intermediate logics with the disjunction property.D. M. Gabbay & D. H. J. De Jongh - 1974 - Journal of Symbolic Logic 39 (1):67-78.details
Computability in Philosophy of Computing and Information
Mathematical Logic in Philosophy of Mathematics
On the proof of Solovay's theorem.Dick de Jongh, Marc Jumelet & Franco Montagna - 1991 - Studia Logica 50 (1):51-69.details
Solovay's 1976 completeness result for modal provability logic employs the recursion theorem in its proof. It is shown that the uses of the recursion theorem can in this proof be replaced by the diagonalization lemma for arithmetic and that, in effect, the proof neatly fits the framework of another, enriched, system of modal logic so that any arithmetical system for which this logic is sound is strong enough to carry out the proof, in particular $\text{I}\Delta _{0}+\text{EXP}$ . The method is adapted (...) to obtain a similar completeness result for the Rosser logic. (shrink)
Provability logics for relative interpretability.Frank Veltman & Dick De Jongh - 1990 - In Petio Petrov Petkov (ed.), Mathematical Logic. Proceedings of the Heyting '88 Summer School. New York, NY, USA: pp. 31-42.details
In this paper the system IL for relative interpretability is studied.
Provability Logic in Logic and Philosophy of Logic
The decidability of dependency in intuitionistic propositional Logi.Dick de Jongh & L. A. Chagrova - 1995 - Journal of Symbolic Logic 60 (2):498-504.details
A definition is given for formulae $A_1,\ldots,A_n$ in some theory $T$ which is formalized in a propositional calculus $S$ to be (in)dependent with respect to $S$. It is shown that, for intuitionistic propositional logic $\mathbf{IPC}$, dependency (with respect to $\mathbf{IPC}$ itself) is decidable. This is an almost immediate consequence of Pitts' uniform interpolation theorem for $\mathbf{IPC}$. A reasonably simple infinite sequence of $\mathbf{IPC}$-formulae $F_n(p, q)$ is given such that $\mathbf{IPC}$-formulae $A$ and $B$ are dependent if and only if at least one of the $F_n(A, B)$ is provable. (shrink)
Proof Theory in Logic and Philosophy of Logic
Modal completeness of ILW.Dick De Jongh & Frank Veltman - 1999 - In Jelle Gerbrandy, Maarten Marx, Maarten de Rijke & Yde Venema (eds.), Essays Dedicated to Johan van Benthem on the Occasion of His 50th Birthday. Amsterdam University Press.details
This paper contains a completeness proof for the system ILW, a rather bewildering axiom system belonging to the family of interpretability logics. We have treasured this little proof for a considerable time, keeping it just for ourselves. Johan's fiftieth birthday appears to be the right occasion to get it out of our wine cellar.
Modal and Intensional Logic in Logic and Philosophy of Logic
Generic Generalized Rosser Fixed Points.Dick H. J. de Jongh & Franco Montagna - 1987 - Studia Logica 46 (2):193-203.details
To the standard propositional modal system of provability logic constants are added to account for the arithmetical fixed points introduced by Bernardi-Montagna in [5]. With that interpretation in mind, a system LR of modal propositional logic is axiomatized, a modal completeness theorem is established for LR and, after that, a uniform arithmetical completeness theorem with respect to PA is obtained for LR.
Explicit Fixed Points in Interpretability Logic.Dick de Jongh & Albert Visser - 1991 - Studia Logica 50 (1):39-49.details
The problem of Uniqueness and Explicit Definability of Fixed Points for Interpretability Logic is considered. It turns out that Uniqueness is an immediate corollary of a theorem of Smoryński.
Provable Fixed Points.Dick De Jongh & Franco Montagna - 1988 - Zeitschrift fur mathematische Logik und Grundlagen der Mathematik 34 (3):229-250.details
Provable Fixed Points.Dick De Jongh & Franco Montagna - 1988 - Mathematical Logic Quarterly 34 (3):229-250.details
Comparing strengths of beliefs explicitly.S. Ghosh & D. de Jongh - 2013 - Logic Journal of the IGPL 21 (3):488-514.details
Bayesian Reasoning, Misc in Philosophy of Probability
Studies in Discourse Representation Theory and the Theory of Generalized Quantifiers.J. A. G. Groenendijk, Dick de Jongh & M. J. B. Stokhof (eds.) - 1986 - Foris Publications.details
Semantic Automata. Johan van Benthem. INTRODUCTION: An attractive, but never very central idea in modern semantics has been to regard linguistic expressions ...
Discourse in Philosophy of Language
Generalized Quantifiers in Philosophy of Language
Intensional logics.Dick De Jongh & Frank Veltman - unknowndetails
This first chapter contains an introduction to modal logic. In section 1.1 the syntactic side of the matter is discussed, and in section 1.2 the subject is approached from a semantic point of view.
Berchtesgaden (19 november 1940) : voorgeschiedenis, inhoud en resultaat.Albert De Jonghe - 1978 - Res Publica 20 (1):41-54.details
The leopoldistic version of the events before Berchtesgaden - politically the most important period in the Question Royale during the occupation - is, from start to finish, historically ungrounded. The known facts prove that the King was absolutely not passive in political matters. He doesn't reject the proposal for a meeting with Hitler. Already on May 31 he declares to agree in principle to meet the Führer. On June 26 he again expresses this willingness. In October he (...) sends his sister Marie-José, crownprincess of Italy, to Hitler, collecting the requested invitation for a meeting. The meeting Hitler-Leopold III at Berchtesgaden reveals not only a humanitarian, but also an undeniable political character. There is too great a difference between the leopoldistic version and the facts - as far as known at this moment and from limited sources. One is inclined to ask oneself if the editors of the Whitebook and of the report of the Servais-commission had knowledge of all the facts, which normally should have been at their disposal. If the answer is no, then one has to assume «a secret of the King». (shrink)
Essay on I-valuations (X).Dhj de Jongh - 1968 - In P. Braffort & F. van Scheepen (eds.), Automation in Language Translation and Theorem Proving. Brussels, Commission of the European Communities, Directorate-General for Dissemination of Information.details
Starting with Whitehead: Raising Children to Thrive in Treacherous Times.Lynn Sargent de Jonghe - 2022 - Hamilton Books.details
Following A.N. Whitehead's rhythm of education, the author provides a guide for parents and educators on raising children to thrive in times of tempestuous change. Each chapter presents exemplary educational events rich in context, and then draws on seminal research to ground her recommendations in a robust theoretical foundation.
Alfred North Whitehead in 20th Century Philosophy
Rosser orderings and free variables.Dick de Jongh & Franco Montagna - 1991 - Studia Logica 50 (1):71-80.details
It is shown that for arithmetical interpretations that may include free variables it is not the Guaspari-Solovay system R that is arithmetically complete, but their system R⁻. This result is then applied to obtain the nonvalidity of some rules under arithmetical interpretations including free variables, and to show that some principles concerning Rosser orderings with free variables cannot be decided, even if one restricts oneself to "usual" proof predicates.
In memoriam: Anne sjerp Troelstra 1939–2019.Dick de Jongh & Joan Rand Moschovakis - 2020 - Bulletin of Symbolic Logic 26 (3-4):293-295.details
Much Shorter Proofs.Dick de Jongh & Franco Montagna - 1989 - Mathematical Logic Quarterly 35 (3):247-260.details
Locke and Hooker on the Finding of the Law.Eugeen De Jonghe - 1988 - Review of Metaphysics 42 (2):301-325.details
THE PURPOSE OF THE PRESENT EXPOSITION is to put forward an interpretation of Locke's and Hooker's conception of the finding of the law. The topics which will be examined are the knowledge and content of the different types of law and, above all, the standard of the good law. That Locke and Hooker used the same language, to a large extent, in treating the concept of law can be seen immediately in a comparison of Locke's Essays on the Law of (...) Nature, Two Treatises on Government, and Essay on Human Understanding with the first book of Hooker's Laws of Ecclesiastical Polity. This similarity facilitated Locke's recourse to Hooker's texts when he wanted to strengthen some of the arguments of the Second Treatise. But is the use of similar language not deceptive in the present case? Do terms used in 1690 intend the same meaning as they did when used in 1660? After nearly four decades of conflicting interpretations of Locke's political and philosophical texts, one can hardly expect to offer a new answer. This analysis will explore the similarity within the framework of a traditional, natural law interpretation of Locke. We shall find that, in Hooker's Laws and Locke's Two Treatises, we are faced with the same pattern of exposition concerning natural law and its source. Following this path of inquiry represents a decision to put aside, for the time being, the influential but highly problematic suggestion that Locke wished to deceive his readers in the exposition of some of the most central parts of his political theory. (shrink)
Legal Authority and Obligation in Philosophy of Law
Locke and Other Philosophers in 17th/18th Century Philosophy
Locke: The Law of Nature in 17th/18th Century Philosophy
Public Goods and the Commons: Opposites or Complements?Maurits de Jongh - 2021 - Political Theory 49 (5):774-800.details
The commons have emerged as a key notion and underlying experience of many efforts around the world to promote justice and democracy. A central question for political theories of the commons is whether the visions of social order and regimes of political economy they propose are complementary or opposed to public goods that are backed up by governmental coordination and compulsion. This essay argues that the post-Marxist view, which posits an inherent opposition between the commons as a sphere of inappropriable (...) usage and statist public infrastructure, is mistaken, because justice and democracy are not necessarily furthered by the institution of inappropriability. I articulate an alternative pluralist view based on James Tully's work, which discloses the dynamic interplay between public and common modes of provision and enjoyment, and their civil and civic orientations respectively. Finally, the essay points to the Janus-faced character of the commons and stresses the co-constitutive role of public goods and social services for just and orderly social life while remaining attentive to the dialectic of empowerment and tutelage that marks provision by government. (shrink)
Much Shorter Proofs.Dick de Jongh & Franco Montagna - 1989 - Zeitschrift fur mathematische Logik und Grundlagen der Mathematik 35 (3):247-260.details
Preface.Dick de Jongh & Albert Visser - 1993 - Annals of Pure and Applied Logic 61 (1-2):1.details
Epistemic Paradoxes in Epistemology
Extendible formulas in two variables in intuitionistic logic.Nick Bezhanishvili & Dick de Jongh - 2012 - Studia Logica 100 (1-2):61-89.details
We give alternative characterizations of exact, extendible and projective formulas in intuitionistic propositional calculus IPC in terms of n-universal models. From these characterizations we derive a new syntactic description of all extendible formulas of IPC in two variables. For the formulas in two variables we also give an alternative proof of Ghilardi's theorem that every extendible formula is projective.
The Kuznetsov-Gerčiu and Rieger-Nishimura logics.Guram Bezhanishvili, Nick Bezhanishvili & Dick de Jongh - 2008 - Logic and Logical Philosophy 17 (1-2):73-110.details
We give a systematic method of constructing extensions of the Kuznetsov-Gerčiu logic KG without the finite model property (fmp for short), and show that there are continuum many such. We also introduce a new technique of gluing of cyclic intuitionistic descriptive frames and give a new simple proof of Gerčiu's result [9, 8] that all extensions of the Rieger-Nishimura logic RN have the fmp. Moreover, we show that each extension of RN has the poly-size model property, thus improving on [9]. (...) Furthermore, for each function f: ω → ω, we construct an extension Lf of KG such that Lf has the fmp, but does not have the f-size model property. We also give a new simple proof of another result of Gerčiu [9] characterizing the only extension of KG that bounds the fmp for extensions of KG. We conclude the paper by proving that RN.KC = RN + (¬p ∨ ¬¬p) is the only pre-locally tabular extension of KG, introduce the internal depth of an extension L of RN, and show that L is locally tabular if and only if the internal depth of L is finite. (shrink)
Logic and Philosophy of Logic, Miscellaneous in Logic and Philosophy of Logic
Public goods in Michael Oakeshott's 'world of pragmata'.Maurits de Jongh - 2022 - European Journal of Political Theory 21 (3):561-584.details
Michael Oakeshott's account of political economy is claimed to have found its 'apotheosis under Thatcherism'. Against critics who align him with a preference for small government, this article points to Oakeshott's stress on the indispensability of an infrastructure of government-provided public goods, in which individual agency and associative freedom can flourish. I argue that Oakeshott's account of political economy invites a contestatory politics over three types of public goods, which epitomize the unresolvable tension he diagnosed between nomocratic and teleocratic conceptions (...) of the modern state. These three types are the system of civil law, the by-products of the operation of civil law and public goods which result from policies. The article concludes that Oakeshott offers an important corrective to political theories which favour either market mediation or radical democratic governance of the commons as self-sustaining modes of providing and enjoying goods. (shrink)
Public goods in Michael Oakeshott's 'world of pragmata'.Maurits de Jongh - 2019 - European Journal of Political Theory 21 (3):147488511989045.details
Michael Oakeshott's account of political economy is claimed to have found its 'apotheosis under Thatcherism'. Against critics who align him with a preference for small government, this article poin...
Giorgi Japaridze and Dick de Jongh. The logic of provability. Handbook of proof theory, edited by Samuel R. Buss, Studies in logic and the foundations of mathematics, vol. 137, Elsevier, Amsterdam etc. 1998, pp. 475–546. [REVIEW]Toshiyasu Arai - 2000 - Bulletin of Symbolic Logic 6 (4):472-473.details
Foundations of Pragmatics and Lexical Semantics.J. A. G. Groenendijk, Dick de Jongh & M. J. B. Stokhof (eds.) - 1986 - Providence, RI, USA: Foris Publications.details
Lexical Semantics in Philosophy of Language
Semantics-Pragmatics Distinction in Philosophy of Language
Interpretability in PRA.Marta Bílková, Dick de Jongh & Joost J. Joosten - 2010 - Annals of Pure and Applied Logic 161 (2):128-138.details
Interpolation, Definability and Fixed Points in Interpretability Logics.Carlos Areces, Eva Hoogland & Dick de Jongh - 2000 - In Marcus Kracht, Maarten de Rijke, Heinrich Wansing & Michael Zakharyaschev (eds.), Advances in Modal Logic. CSLI Publications. pp. 53-76.details
Efficacy of virtual reality exposure therapy and eye movement desensitization and reprocessing therapy on symptoms of acrophobia and anxiety sensitivity in adolescent girls: A randomized controlled trial.Parisa Azimisefat, Ad de Jongh, Soran Rajabi, Philipp Kanske & Fatemeh Jamshidi - 2022 - Frontiers in Psychology 13.details
Background: Acrophobia is a specific phobia characterized by a severe fear of heights. The purpose of the present study was to investigate the efficacy of two therapies that may ameliorate symptoms of acrophobia and anxiety sensitivity, i.e., virtual reality exposure therapy and eye movement desensitization and reprocessing therapy, with a Waiting List Control Condition. Methods: We applied a three-armed randomized controlled pre-post-test design with 45 female adolescent students. Students who met DSM-5 criteria for acrophobia were randomly assigned to either VRET, EMDR, or a (...) WLCC. The study groups were evaluated one week before the intervention and one week after the last intervention session regarding symptoms of acrophobia and anxiety sensitivity. Results: The data showed that both the application of VRET and EMDR therapy were associated with significantly reduced symptoms of acrophobia and anxiety sensitivity in comparison to the Waiting List. Limitations: The sample consisted only of adolescent women. Due to the recognizable differences between the two interventions, the therapists and the participants were not blind to the conditions. Conclusion: The results suggest that both VRET and EMDR are interventions that can significantly improve symptoms of acrophobia and anxiety sensitivity in female adolescents. Clinical Trial Registration: https://www.irct.ir/trial/57391, identifier: IRCT20210213050343N1. (shrink)
Provable Fixed Points.Much Shorter Proofs.Rosser Orderings in Bimodal Logics.Much Shorter Proofs: A Bimodal Investigation. [REVIEW]Lev D. Beklemishev, Dick de Jongh, Franco Montagna & Alessandra Carbone - 1993 - Journal of Symbolic Logic 58 (2):715.details
Interpretability in.Marta Bílková, Dick de Jongh & Joost J. Joosten - 2010 - Annals of Pure and Applied Logic 161 (2):128-138.details
In this paper, we study IL(), the interpretability logic of . As is neither an essentially reflexive theory nor finitely axiomatizable, the two known arithmetical completeness results do not apply to : IL() is not or . IL() does, of course, contain all the principles known to be part of IL, the interpretability logic of the principles common to all reasonable arithmetical theories. In this paper, we take two arithmetical properties of and see what their consequences in the modal logic (...) IL() are. These properties are reflected in the so-called Beklemishev Principle , and Zambella's Principle , neither of which is a part of IL. Both principles and their interrelation are submitted to a modal study. In particular, we prove a frame condition for . Moreover, we prove that follows from a restricted form of . Finally, we give an overview of the known relationships of IL() to important other interpretability principles. (shrink)
Properties of Intuitionistic Provability and Preservativity Logics.Rosalie Iemhoff, Dick de Jongh & Chunlai Zhou - 2005 - Logic Journal of the IGPL 13 (6):615-636.details
We study the modal properties of intuitionistic modal logics that belong to the provability logic or the preservativity logic of Heyting Arithmetic. We describe the □-fragment of some preservativity logics and we present fixed point theorems for the logics iL and iPL, and show that they imply the Beth property. These results imply that the fixed point theorem and the Beth property hold for both the provability and preservativity logic of Heyting Arithmetic. We present a frame correspondence result for the (...) preservativity principle Wp that is related to an extension of Löb's principle. (shrink)
Logics in Logic and Philosophy of Logic
Interpolation, Definability and Fixed Points in Interpretability Logics.Carlos Areces, Eva Hoogland & Dick de Jongh - 2000 - In Michael Zakharyaschev, Krister Segerberg, Maarten de Rijke & Heinrich Wansing (eds.), Advances in Modal Logic, Volume 2. CSLI Publications. pp. 53-76.details
On unification and admissible rules in Gabbay–de Jongh logics.Jeroen P. Goudsmit & Rosalie Iemhoff - 2014 - Annals of Pure and Applied Logic 165 (2):652-672.details
In this paper we study the admissible rules of intermediate logics. We establish some general results on extensions of models and sets of formulas. These general results are then employed to provide a basis for the admissible rules of the Gabbay–de Jongh logics and to show that these logics have finitary unification type.
A separable axiomatization of the Gabbay–de Jongh logics.Yokomizo Kyohei - 2017 - Logic Journal of the IGPL 25 (3):365-380.details
Kripke incompleteness of predicate extensions of Gabbay-de Jongh's logic of the finite binary trees.Tatsuya Shimura - 2002 - Bulletin of the Section of Logic 31 (2):111-118.details
QUALITY: They use pure, high-quality ingredients and are the ONLY ones we found with a comprehensive formula including the top 5 most proven ingredients: DHA Omega 3, Huperzine A, Phosphatidylserine, Bacopin and N-Acetyl L-Tyrosine. Thrive Natural's Super Brain Renew is fortified with just the right ingredients to help your body fully digest the active ingredients. No other brand came close to their comprehensive formula of 39 proven ingredients. The "essential 5" are the most important elements for improving your memory, concentration, focus, energy, and mental clarity. What also makes them stand out above all the rest is that they include several supporting vitamins and nutrients to help optimize brain and memory function. A critical factor for us is that this company does not use fillers, binders or synthetics in their product. We love the fact that their capsules are vegetarian, which is a nice bonus for health-conscious consumers.
Noopept shows a much greater affinity for certain receptor sites in the brain than racetams, allowing doses as small as 10-30mg to provide increased focus, improved logical thinking function, enhanced short and long-term memory functions, and increased learning ability including improved recall. In addition, users have reported a subtle psychostimulatory effect.
So the chi-squared believes there is a statistically-significant difference, the two-sample test disagrees, and the binomial also disagrees. Since I regarded it as a dubious theory, I can't see a difference, and the binomial seems like the most appropriate test, I conclude that several months of 1mg iodine did not change my eye color. (As a final test, when I posted the results on the Longecity forum where people were claiming the eye color change, I swapped the labels on the photos to see if anyone would claim something along the lines of "when I look at the photos, I can see a difference!". I thought someone might do that, which would be a damning demonstration of their biases & wishful thinking, but no one did.)
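For illustration, here is how an exact two-sided binomial test of this kind can be run by hand; the counts below are made up purely to show the mechanics, since the raw data isn't reproduced here:

```python
from math import comb

# Hypothetical counts: 14 "darker" judgments out of 20 trials.
# These numbers are invented for illustration only.
n, k = 20, 14
p0 = 0.5  # null hypothesis: no real change, 50/50 judgments

# Exact two-sided binomial test: sum the probabilities of all outcomes
# at least as extreme (i.e. at least as improbable) as the observed one.
p_obs = comb(n, k) * p0**k * (1 - p0)**(n - k)
p_value = sum(
    comb(n, i) * p0**i * (1 - p0)**(n - i)
    for i in range(n + 1)
    if comb(n, i) * p0**i * (1 - p0)**(n - i) <= p_obs
)
print(round(p_value, 4))  # → 0.1153, not significant at the usual 0.05
```

With these made-up counts the exact test fails to reject the null, which is the sort of disagreement with a chi-squared approximation that small samples can produce.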
In the nearer future, Lynch points to nicotinic receptor agents – molecules that act on the neurotransmitter receptors affected by nicotine – as ones to watch when looking out for potential new cognitive enhancers. Sarter agrees: a class of agents known as α4β2* nicotinic receptor agonists, he says, seem to act on mechanisms that control attention. Among the currently known candidates, he believes they come closest "to fulfilling the criteria for true cognition enhancers."
I have elsewhere remarked on the apparent lack of benefit to taking multivitamins and the possible harm; so one might well wonder about a specific vitamin like vitamin D. However, a multivitamin is not vitamin D, so it's no surprise that they might do different things. If a multivitamin had no vitamin D in it, or if it had vitamin D in different doses, or if it had substances which interacted with vitamin D (such as calcium), or if it had substances which had negative effects which outweigh the positive (such as vitamin A?), we could well expect differing results. In this case, all of those are true to varying extents. Some multivitamins I've had contained no vitamin D. The last multivitamin I was taking both contains vitamins used in the negative trials and also some calcium; the listed vitamin D dosage was a trivial ~400IU, while I take >10x as much now (5000IU).
Some nootropics are more commonly used than others. These include nutrients like Alpha GPC, huperzine A, L-Theanine, bacopa monnieri, and vinpocetine. Other types of nootropics are still gaining traction. With all that in mind, to claim there is a "best" nootropic for everyone would be the wrong approach since every person is unique and looking for different benefits.
Learning how products have worked for other users can help you feel more confident in your purchase. Similarly, your opinion may help others find a good quality supplement. After you have started using a particular supplement and experienced the benefits of nootropics for memory, concentration, and focus, we encourage you to come back and write your own review to share your experience with others.
We'd want 53 pairs, but Fitzgerald 2012's experimental design called for 32 weeks of supplementation for a single pair of before-after tests - so that'd be 1696 weeks, or ~32.6 years! We can try to adjust it downwards with shorter blocks allowing more frequent testing; but problematically, iodine is stored in the thyroid and can apparently linger elsewhere - many of the cited studies used intramuscular injections of iodized oil (as opposed to iodized salt or kelp supplements) because this ensured an adequate supply for months or years with no further compliance by the subjects. If the effects are that long-lasting, it may be worthless to try shorter blocks than ~32 weeks.
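The experiment-length arithmetic is just pairs times weeks per pair; a quick sketch (numbers taken from the text, nothing iodine-specific about it):

```python
# Each before-after pair requires one full supplementation block,
# so the total duration scales linearly with the number of pairs.
pairs = 53           # desired number of before-after pairs
weeks_per_pair = 32  # Fitzgerald 2012's supplementation period

total_weeks = pairs * weeks_per_pair
total_years = total_weeks / 52  # ~52 weeks per year

print(total_weeks, round(total_years, 1))  # → 1696 32.6
```

This is why shortening the block length is the only realistic lever: the pair count is fixed by the desired statistical power.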
One should note the serious caveats here: it is a small in vitro study of a single category of human cells with an effect size that is not clear on a protein which feeds into who-knows-what pathways. It is not a result in a whole organism on any clinically meaningful endpoint, even if we take it at face-value (many results never replicate). A look at followup work citing Rapuri et al 2007 is not encouraging: Google Scholar lists no human studies of any kind, much less high-quality studies like RCTs; just some rat followups on the calcium effect. This is not to say Rapuri et al 2007 is a bad study, just that it doesn't bear the weight people are putting on it: if you enjoy caffeine, this is close to zero evidence that you should reduce or drop caffeine consumption; if you're taking too much caffeine, you already have plenty of reasons to reduce; if you're drinking lots of coffee, you already have plenty of reasons to switch to tea; etc.
One symptom of Alzheimer's disease is a reduced brain level of the neurotransmitter called acetylcholine. It is thought that an effective treatment for Alzheimer's disease might be to increase brain levels of acetylcholine. Another possible treatment would be to slow the death of neurons that contain acetylcholine. Two drugs, Tacrine and Donepezil, are both inhibitors of the enzyme (acetylcholinesterase) that breaks down acetylcholine. These drugs are approved in the US for treatment of Alzheimer's disease.
Low-dose lithium orotate is extremely cheap, ~$10 a year. There is some research literature on it improving mood and impulse control in regular people, but some of it is epidemiological (which implies considerable unreliability); my current belief is that there is probably some effect size, but at just 5mg, it may be too tiny to matter. I have ~40% belief that there will be a large effect size, but I'm doing a long experiment and I should be able to detect a large effect size with >75% chance. So, the formula is NPV of the difference between taking and not taking, times quality of information, times expectation: \frac{10 - 0}{\ln 1.05} \times 0.75 \times 0.40 = 61.4, which justifies a time investment of less than 9 hours. As it happens, it took less than an hour to make the pills & placebos, and taking them is a matter of seconds per week, so the analysis will be the time-consuming part. This one may actually turn a profit.
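The value-of-information formula above can be checked directly; this sketch simply re-evaluates it, with every number taken from the text:

```python
from math import log

# NPV of the yearly cost difference (continuous 5% discount rate),
# times quality of information, times prior probability of an effect.
yearly_difference = 10 - 0  # ~$10/year saved if lithium is dropped
discount = log(1.05)        # ln(1.05) ≈ 0.0488
quality = 0.75              # chance the experiment detects a large effect
prior = 0.40                # prior belief in a large effect size

value = (yearly_difference / discount) * quality * prior
print(round(value, 1))
```

This comes out to roughly 61.5, matching the ~61.4 figure in the text up to rounding, and so justifying a time investment of under 9 hours at a plausible hourly rate.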
An expert in legal and ethical issues surrounding health care technology, Associate Professor Eric Swirsky suggested that both groups have valid arguments, but that neither group is asking the right questions. Prof Swirsky is the clinical associate professor of biomedical and health information sciences in the UIC College of Applied Health Sciences.
"Who doesn't want to maximize their cognitive ability? Who doesn't want to maximize their muscle mass?" asks Murali Doraiswamy, who has led several trials of cognitive enhancers at Duke University Health System and has been an adviser to pharmaceutical and supplement manufacturers as well as the Food and Drug Administration. He attributes the demand to an increasingly knowledge-based society that values mental quickness and agility above all else.
There are a number of treatments for the last. I already use melatonin. I sort of have light therapy from a full-spectrum fluorescent desk lamp. But I get very little sunlight; the surprising thing would be if I didn't have a vitamin D deficiency. And vitamin D deficiencies have been linked with all sorts of interesting things like near-sightedness, with time outdoors inversely correlating with myopia and not reading or near-work time. (It has been claimed that caffeine interferes with vitamin D absorption and so people like me especially need to take vitamin D, on top of the deficits caused by our vampiric habits, but I don't think this is true34.) Unfortunately, there's not very good evidence that vitamin D supplementation helps with mood/SAD/depression: there's ~6 small RCTs with some findings of benefits, with their respective meta-analysis turning in a positive but currently non-statistically-significant result. Better confirmed is reducing all-cause mortality in elderly people (see, in order of increasing comprehensiveness: Evidence Syntheses 2013, Chung et al 2009, Autier & Gandini 2007, Bolland et al 2014).
If you're suffering from blurred or distorted vision or you've noticed a sudden and unexplained decline in the clarity of your vision, do not try to self-medicate. It is one thing to promote better eyesight from an existing and long-held baseline, but if you are noticing problems with your eyes, then you should see an optician and a doctor to rule out underlying medical conditions.
A total of 14 studies surveyed reasons for using prescription stimulants nonmedically, all but one study confined to student respondents. The most common reasons were related to cognitive enhancement. Different studies worded the multiple-choice alternatives differently, but all of the following appeared among the top reasons for using the drugs: "concentration" or "attention" (Boyd et al., 2006; DeSantis et al., 2008, 2009; Rabiner et al., 2009; Teter et al., 2003, 2006; Teter, McCabe, Cranford, Boyd, & Guthrie, 2005; White et al., 2006); "help memorize," "study," "study habits," or "academic assignments" (Arria et al., 2008; Barrett et al., 2005; Boyd et al., 2006; DeSantis et al., 2008, 2009; DuPont et al., 2008; Low & Gendaszek, 2002; Rabiner et al., 2009; Teter et al., 2005, 2006; White et al., 2006); "grades" or "intellectual performance" (Low & Gendaszek, 2002; White et al., 2006); "before tests" or "finals week" (Hall et al., 2005); "alertness" (Boyd et al., 2006; Hall et al., 2005; Teter et al., 2003, 2005, 2006); or "performance" (Novak et al., 2007). However, every survey found other motives mentioned as well. The pills were also taken to "stay awake," "get high," "be able to drink and party longer without feeling drunk," "lose weight," "experiment," and for "recreational purposes."
The Smart Pills Technology are primarily utilized for dairy products, soft drinks, and water catering in diverse shapes and sizes to various consumers. The rising preference for easy-to-carry liquid foods is expected to boost the demand for these packaging cartons, thereby, fueling the market growth. The changing lifestyle of people coupled with the convenience of utilizing carton packaging is projected to propel the market. In addition, Smart Pills Technology have an edge over the glass and plastic packaging, in terms of environmental-friendliness and recyclability of the material, which mitigates the wastage and reduces the product cost. Thus, the aforementioned factors are expected to drive the Smart Pills Technology market growth over the projected period.
Similar to the way in which some athletes used anabolic steroids (muscle-building hormones) to artificially enhance their physique, some students turned to smart drugs, particularly Ritalin and Adderall, to heighten their intellectual abilities. A 2005 study reported that, at some universities in the United States, as many as 7 percent of respondents had used smart drugs at least once in their lifetime and 2.1 percent had used smart drugs in the past month. Modafinil was used increasingly by persons who sought to recover quickly from jet lag and who were under heavy work demands. Military personnel were given the same drug when sent on missions with extended flight times.
Yet some researchers point out these drugs may not be enhancing cognition directly, but simply improving the user's state of mind – making work more pleasurable and enhancing focus. "I'm just not seeing the evidence that indicates these are clear cognition enhancers," says Martin Sarter, a professor at the University of Michigan, who thinks they may be achieving their effects by relieving tiredness and boredom. "What most of these are actually doing is enabling the person who's taking them to focus," says Steven Rose, emeritus professor of life sciences at the Open University. "It's peripheral to the learning process itself."
the larger size of the community enables economies of scale and increases the peak sophistication possible. In a small nootropics community, there is likely to be no one knowledgeable about statistics/experimentation/biochemistry/neuroscience/whatever-you-need-for-a-particular-discussion, and the available funds increase: consider /r/Nootropics's testing program, which is doable only because it's a large lucrative community to sell to so the sellers are willing to donate funds for independent lab tests/Certificates of Analysis (COAs) to be done. If there were 1000 readers rather than 23,295, how could this ever happen short of one of those 1000 readers being very altruistic?
So, I have started a randomized experiment; should take 2 months, given the size of the correlation. If that turns out to be successful too, I'll have to look into methods of blinding - for example, some sort of electronic doohickey which turns on randomly half the time and which records whether it's on somewhere one can't see. (Then for the experiment, one hooks up the LED, turns the doohickey on, and applies directly to forehead, checking the next morning to see whether it was really on or off).
These days, young, ambitious professionals prefer prescription stimulants—including methylphenidate (usually sold as Ritalin) and Adderall—that are designed to treat people with attention deficit hyperactivity disorder (ADHD) and are more common and more acceptable than cocaine or nicotine (although there is a black market for these pills). ADHD makes people more likely to lose their focus on tasks and to feel restless and impulsive. Diagnoses of the disorder have been rising dramatically over the past few decades—and not just in kids: In 2012, about 16 million Adderall prescriptions were written for adults between the ages of 20 and 39, according to a report in the New York Times. Both methylphenidate and Adderall can improve sustained attention and concentration, says Barbara Sahakian, professor of clinical neuropsychology at the University of Cambridge and author of the 2013 book Bad Moves: How Decision Making Goes Wrong, and the Ethics of Smart Drugs. But the drugs do have side effects, including insomnia, lack of appetite, mood swings, and—in extreme cases—hallucinations, especially when taken in amounts that exceed standard doses.
Burke says he definitely got the glow. "The first time I took it, I was working on a business plan. I had to juggle multiple contingencies in my head, and for some reason a tree with branches jumped into my head. I was able to place each contingency on a branch, retract and go back to the trunk, and in this visual way I was able to juggle more information."
In addition, the cognitive enhancing effects of stimulant drugs often depend on baseline performance. So whilst stimulants enhance performance in people with low baseline cognitive abilities, they often impair performance in those who are already at optimum. Indeed, in a study by Randall et al., modafinil only enhanced cognitive performance in subjects with a lower (although still above-average) IQ.
The demands of university studies, career, and family responsibilities leave people feeling stretched to the limit. Extreme stress actually interferes with optimal memory, focus, and performance. The discovery of nootropics and vitamins that make you smarter has provided a solution to help college students perform better in their classes and professionals become more productive and efficient at work.
Low level laser therapy (LLLT) is a curious treatment based on the application of a few minutes of weak light in specific near-infrared wavelengths (the name is a bit of a misnomer as LEDs seem to be employed more these days, due to the laser aspect being unnecessary and LEDs much cheaper). Unlike most kinds of light therapy, it doesn't seem to have anything to do with circadian rhythms or zeitgebers. Proponents claim efficacy in treating physical injuries, back pain, and numerous other ailments, recently extending it to case studies of mental issues like brain fog. (It's applied to injured parts; for the brain, it's typically applied to points on the skull like F3 or F4.) And LLLT is, naturally, completely safe without any side effects or risk of injury.
Iluminal is an example of an over-the-counter serotonergic drug used by people looking for performance enhancement, memory improvements, and mood-brightening. Also noteworthy, a wide class of prescription anti-depression drugs are based on serotonin reuptake inhibitors that slow the absorption of serotonin by the presynaptic cell, increasing the effect of the neurotransmitter on the receptor neuron – essentially facilitating the free flow of serotonin throughout the brain.
I have a needle phobia, so injections are right out; but from the images I have found, it looks like testosterone enanthate gels using DMSO resemble other gels like Vaseline. This suggests an easy experimental procedure: spoon an appropriate dose of testosterone gel into one opaque jar, spoon some Vaseline gel into another, and pick one randomly to apply while not looking. If one gel evaporates but the other doesn't, or they have some other difference in behavior, the procedure can be expanded to something like "and then half an hour later, take a shower to remove all visible traces of the gel". Testosterone itself has a fairly short half-life of 2-4 hours, but the gel or effects might linger. (Injections apparently operate on a time-scale of weeks; I'm not clear on whether this is because the oil takes that long to be absorbed by surrounding materials or something else.) Experimental design will depend on the specifics of the obtained substance. As a controlled substance (Schedule III in the US), supplies will be hard to obtain; I may have to resort to the Silk Road.
Interesting. On days ranked 2 (below-average mood/productivity), nicotine seems to have boosted scores; on days ranked 3, nicotine hurts scores; there aren't enough 4's to tell, but even 5 days seem to see a boost from nicotine, which is not predicted by the theory. But I don't think much of a conclusion can be drawn: not enough data to make out any simple relationship. Some modeling suggests no relationship in this data either (although also no difference in standard deviations, leading me to wonder if I screwed up the data recording - not all of the DNB scores seem to match the input data in the previous analysis). So although the 2 days in the graph are striking, the theory may not be right.
Imagine a pill you can take to speed up your thought processes, boost your memory, and make you more productive. If it sounds like the ultimate life hack, you're not alone. There are pills that promise that out there, but whether they work is complicated. Here are the most popular cognitive enhancers available, and what science actually says about them.
The data from 2-back and 3-back tasks are more complex. Three studies examined performance in these more challenging tasks and found no effect of d-AMP on average performance (Mattay et al., 2000, 2003; Mintzer & Griffiths, 2007). However, in at least two of the studies, the overall null result reflected a mixture of reliably enhancing and impairing effects. Mattay et al. (2000) examined the performance of subjects with better and worse working memory capacity separately and found that subjects whose performance on placebo was low performed better on d-AMP, whereas subjects whose performance on placebo was high were unaffected by d-AMP on the 2-back and impaired on the 3-back tasks. Mattay et al. (2003) replicated this general pattern of data with subjects divided according to genotype. The specific gene of interest codes for the production of Catechol-O-methyltransferase (COMT), an enzyme that breaks down dopamine and norepinephrine. A common polymorphism determines the activity of the enzyme, with a substitution of methionine for valine at Codon 158 resulting in a less active form of COMT. The met allele is thus associated with less breakdown of dopamine and hence higher levels of synaptic dopamine than the val allele. Mattay et al. (2003) found that subjects who were homozygous for the val allele were able to perform the n-back faster with d-AMP; those homozygous for met were not helped by the drug and became significantly less accurate in the 3-back condition with d-AMP. In the case of the third study finding no overall effect, analyses of individual differences were not reported (Mintzer & Griffiths, 2007).
Since LLLT was so cheap, seemed safe, was interesting, just trying it would involve minimal effort, and it would be a favor to lostfalco, I decided to try it. I purchased off eBay a $13 "48 LED illuminator light IR Infrared Night Vision+Power Supply For CCTV. Auto Power-On Sensor, only turn-on when the surrounding is dark. IR LED wavelength: 850nm. Powered by DC 12V 500mA adaptor." It arrived in 4 days, on 7 September 2013. It fits handily in my palm. My cellphone camera verified it worked and emitted infrared - important because there's no visible light at all (except in complete darkness I can make out a faint red light), no noise, no apparent heat (it took about 30 minutes before the lens or body warmed up noticeably when I left it on a table). This was good since I worried that there would be heat or noise which made blinding impossible; all I had to do was figure out how to randomly turn the power on and I could run blinded self-experiments with it.
The title question, whether prescription stimulants are smart pills, does not find a unanimous answer in the literature. The preponderance of evidence is consistent with enhanced consolidation of long-term declarative memory. For executive function, the overall pattern of evidence is much less clear. Over a third of the findings show no effect on the cognitive processes of healthy nonelderly adults. Of the rest, most show enhancement, although impairment has been reported (e.g., Rogers et al., 1999), and certain subsets of participants may experience impairment (e.g., higher performing participants and/or those homozygous for the met allele of the COMT gene performed worse on drug than placebo; Mattay et al., 2000, 2003). Whereas the overall trend is toward enhancement of executive function, the literature contains many exceptions to this trend. Furthermore, publication bias may lead to underreporting of these exceptions.
The truth is that, almost 20 years ago when my brain was failing and I was fat and tired, I did not know to follow this advice. I bought $1000 worth of smart drugs from Europe, took them all at once out of desperation, and got enough cognitive function to save my career and tackle my metabolic problems. With the information we have now, you don't need to do that. Please learn from my mistakes!
Published on Aug 10, 2021 | DOI: 10.21428/36973002.c76458f1
Framework based on parameterized images on ResNet to identify intrusions in smartwatches or other related devices
The continuous appearance and improvement of mobile devices in the form of smartwatches, smartphones and other similar devices has led to a growing and unfair interest in putting their users under the magnifying glass and control of applications.
by Juan Antonio Lloret Egea, Celia Medina Lloret, Adrián Hernández González, Diana Díaz Raboso, Carlos Campos, Kimberly Riveros Guzmán, Luis Miguel Cortés Carballo, and The Bible of AI ™
Copyright 2020-2021 (and successive years)© - All rights reserved- La Biblia de la IA - The Bible of AI ™ ISSN 2695-641. License: Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)
This English version is a literal translation of the publication in Spanish: 'Framework' basado en imágenes parametrizadas sobre ResNet para identificar intrusiones en 'smartwatches' u otros dispositivos afines. (Un eje singular de la publicación "Estado del arte de la ciencia de datos en el idioma español y su aplicación en el campo de la Inteligencia Artificial") | DOI: 10.21428/39829d0b.981b7276 .
A conceptual and algebraic framework, which did not exist until then in its morphology, was developed. What is more, it is a pioneer in its implementation in the area of Artificial Intelligence (AI), and it was started up in the laboratory, in its structural aspects, as a fully operational model. At the qualitative level, its greatest contribution to AI is applying the conversion, or transduction, of parameters obtained by ternary logic [1] (multi-valued systems) and associating them with an image. This image is analysed by means of a residual artificial network, ResNet34 [2], [3], to warn us of an intrusion. The field of application of this framework ranges from smartwatches, tablets and PCs to home automation based on the KNX standard [4].
This framework proposes a form of reverse engineering for AI, in such a way that, drawing on common and revisable mathematical principles, a 2D graphical image is applied to detect intrusion. Through reverse engineering, the device's security and privacy will be scrutinised using artificial intelligence in order to mitigate these harms for the end users.
Furthermore, the framework is postulated to meet the highest expectations of being a useful tool for society and human beings, aligned with the «Opinion of the European Committee of the Regions – White Paper on Artificial Intelligence — A European approach to excellence and trust», besides contributing to avoiding the malpractice of generating black boxes [5] in the use of artificial intelligence [6], which makes it incomprehensible.
Confidence in, and clarification of, technology is a short-, medium- and long-term goal, so that society is not harmed by spurious technologies that prejudice good social development. «These technologies have extended the opportunities of freedom expression and social, civic and political mobilization. At the same time they arouse serious concerns» [7].
I. Framework
1.- Need to know
There is a widespread availability of «intelligent» devices at everyone's disposal, from digital cameras and tablets to smartwatches. The problem occurs when a large percentage of the population lacks knowledge about managing these electronic devices [8], thus ignoring the risks that might be entailed. Although it is true that an electronic device is intelligent because it interacts appropriately and autonomously, we cannot forget this principle: if something brings benefits and solutions in specific situations, it must know in advance what is occurring. It all comes down to a simple word: data.
One of the most innovative electronic devices is the smartwatch. Smartwatches have sensors which identify models or human behaviour patterns based on machine-learning techniques, Bayes' theorem, data processing and the K-Nearest-Neighbours method [9]. These procedures raise a large volume of information with which we intend to meet the goal by specifying the expected results. These sensors are very useful for monitoring human activity: walking, cycling, running and walking up and down stairs [10].
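For illustration, the K-Nearest-Neighbours scheme mentioned above can be sketched in a few lines. The (mean acceleration, variance) features and labels below are invented for the example, not taken from any smartwatch dataset:

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest labelled
    points (squared Euclidean distance), the scheme mentioned for
    activity recognition from smartwatch sensor features."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda item: dist(item[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# illustrative (mean acceleration, variance) features; labels invented
train = [((0.10, 0.02), 'walking'), ((0.90, 0.30), 'running'),
         ((0.15, 0.03), 'walking'), ((0.85, 0.25), 'running'),
         ((0.40, 0.10), 'stairs'),  ((0.45, 0.12), 'stairs')]
print(knn_predict(train, (0.12, 0.02)))  # 'walking'
```

A real system would compute such features over sliding windows of raw accelerometer samples before applying the same voting rule.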
The use of smartwatches may represent a serious threat to the safety of children and adolescents [11]. The most important security failures of low-cost smartwatches occur in applications and in connections with the servers which store data. The most popular brands of smartwatches used by minors are Carl Kids Watch, hellOO! Children's Smart Watch, SMA-WATCH-M2 and GATOR Watch. The most frequently recurring problems are failures in the implementation of certificates for secure HTTPS connections [12] and unencrypted information related to electronic records [13]. Nevertheless, in devices from popular manufacturers like Nokia [14], Samsung and Huawei [15], these kinds of problems are not frequent because they use encrypted connections.
In that regard, it is important to note the conditions affecting the security of each smartwatch brand; this information is useful when buying a smartwatch. The main criterion for choosing the brands studied was sales volume: Samsung, Apple, Fitbit and Garmin.
Image credit: Security in Samsung and Apple devices by Adrián Hernández, 2021, Mangosta (https://mangosta.org/seguridad-en-los-dispositivos-samsung-y-apple/)
Image credit: Security in Fitbit and Garmin devices by Adrián Hernández, 2021, Mangosta (https://mangosta.org/seguridad-en-dispositivos-fitbit-y-garmin/)
Regarding data collection, Samsung is the only brand which does not compile online information from children under 13 years of age [16]. This is not the case for Apple [17], Fitbit [18] and Garmin [19], which make no distinction between ages. Additionally, all the brands chosen for our study share information with third parties to analyse metrics and compare results. According to these references, Samsung is the only company in which users' data are ceded, rented or sold; on the other hand, Apple is the only company that does not process data for advertising or commercial research purposes. The conclusion we reach is that all brands compile non-anonymous data, but with differences.
2.- Algebraic description and framework calculation[20]
The systematics are based on the duality, or pairing, of the parameters which are determinant of an intrusion and its associated image. Which parameters are to be defined will be the result of experience in the area of cybersecurity [21]; we will propose and expose them later.
2.1.- Algebraic description
Given a specific number of intrusion parameters named «n», that is, a numerical succession of terms $\in \mathbb N$ with $a_n = b^2 \ / \ b \geq 3$, characteristic of each instance of the framework to determine; given $\vec P_x$, composed of a group of terms or axial elements with vectors indexed up to «n» → $\vec P_{x(1...n)}$, whose module $|\vec P_{x(1...n)}|$ only takes ternary values from the closed set $[0, 1, U]$; given a maximum number of vectors of $\vec P_x$ called «m» $\in \mathbb Z$ → $[\vec P_1, \vec P_2, \vec P_3, ..., \vec P_m]_0^{f: n \to m}$, whose value is determined by $m = 3^n$; and given a continuous numerical succession of pairs of ordered pairs $(x_{(1...n)}, y_i)$ which applies bijectively to an element of the group $(\vec{Img}; y)$, such that $\forall$ term $x \to y = x \leq m \ \land \ i = (n-1, n)$, we define this framework as:

An intrusion Vector System → $\vec{SV}_{intrusión} = [\vec P_{x(1...n)}, \vec{Img}_{y_i}]_0^{3^{(n)}}$, which complies: the associated, or transduced, image is a function of a parameter array (p) → {p1, p2, p3, p4 ... pn}, vectorizing each nth set of these as $\vec P_x$, so that $\vec{Img}_{y_i} = f(\vec P_{x(1...n)})_0^{3^{(n)}}$; and each of the vectors of the set $\vec P_x$ has a bijective relation [22] with its reflected image $\vec{Img}_y$.
Thus, $(\vec{Img}_y)$ is the consequence of the linear transformation of the vector system $\vec P_x$, which is defined by the intrusion parameter array (p). This array is dynamically built $\forall x \leq m$ by this expression:

$\vec{Img}_{y_i} \Leftrightarrow [\vec P_{x2} - \vec P_{x1},\ \vec P_{x3} - \vec P_{x2},\ \vec P_{x4} - \vec P_{x3},\ ...,\ \vec P_{xn} - \vec P_{x(n-1)},\ \vec P_{x1} - \vec P_{xn}] \quad \forall n \in \mathbb N$
The closed polygonal image, which will associate a specific array of intrusive parameters for evaluation by Artificial Intelligence (AI), will be defined by this vectorial expression:

$\left( \sum_{j=1}^{j=n} (\vec P_{xn} - \vec P_{x(n-1)}) \right) - (\vec P_{x1} - \vec P_{xn}) = 0$
A) FRAMEWORK'S DOMAINS
1) Domain {D1}: $\forall \vec P_{x(1...n)} \in \vec{\mathbb N} \ \land \ x_n \in [0, 1, U]$.
2) Domain {D2} of $\vec{Img}_{y_i} = f(\vec P_{x(1...n)})_0^{3^{(n)}} \in \vec{\mathbb R}^2 \to f: \mathbb N \to \mathbb R^2$.
3) Domain {D3} of the group $\vec P_x \in \vec{\mathbb N}^n$:
$(\vec P_x) \Leftrightarrow (\vec P_1, \vec P_2, \vec P_3, ..., \vec P_m) \ / \ D_1[0, 1, U]: \forall \vec P_{x(1...n)} \in \vec{\mathbb N} \ \land \ \forall \vec P_x \in \vec{\mathbb N}^n$
4) Domain {D4}: $(\vec{Img}_y) \in \vec{\mathbb R}^n$.
B) POLAR COORDINATES OF THE FRAMEWORK
The previous linear transformation of $(\vec{Img}_y)$ yields a closed, polygonal, vectorial line that, in polar coordinates (easy to implement using radial charts), implies for $\vec P_{x(1...n)}$ a vector generator system $\vec r = |mod|_\alpha$ with modular value in (0, 1, U) and angle α. The value of the angle will lie between 0° and 360°, depending on the number of parameters used in the intrusion scan and associated to its axes. In the case of n = 9 parameters and for a specific instance «x», $[\vec P_{x1}, \vec P_{x2}, \vec P_{x3}, ..., \vec P_{x9}]$, α will be assigned a value (or divergence between the system's vectors) of 40° → $= \frac{360°}{9}$.
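The construction above can be sketched directly: each of the n axes is separated by 360°/n, each parameter contributes a radius given by its ternary value, and joining consecutive tips yields the closed polygonal line. A minimal sketch, assuming the undetermined value U is plotted at an intermediate radius of 0.5 (the paper does not fix this rendering choice):

```python
import math

def polygon_vertices(params, u_value=0.5):
    """Map a ternary parameter vector (0, 1, 'U') onto the vertices of a
    closed polygon in the plane: axis k sits at angle k * (360/n) degrees,
    and the radius along it is the parameter's modular value."""
    n = len(params)
    alpha = 2 * math.pi / n                      # 40 degrees when n = 9
    radii = [u_value if p == 'U' else float(p) for p in params]
    verts = [(r * math.cos(k * alpha), r * math.sin(k * alpha))
             for k, r in enumerate(radii)]
    return verts + verts[:1]                     # repeat the first vertex to close the line

# one instance with n = 9 ternary parameters
verts = polygon_vertices([1, 0, 'U', 1, 1, 0, 'U', 0, 1])
print(len(verts))   # 10 points: 9 vertices plus the closing repeat
```

Because the last vertex repeats the first, the sum of the edge vectors is zero, matching the closed-polygon expression above; rendering `verts` (for example with a plotting library) produces the radial image the network classifies.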
2.1.1- Nature of the dimensions of the intrusion Vector System $\vec{SV}_{intrusión} = [\vec P_{x(1...n)}, \vec{Img}_{y_i}]_0^{3^{(n)}}$
As an initial representation proposal, radial graphics are used to generate the images, taking the axes as vector radii of a circumference in the plane, separated by an angle α. The part of most interest to us, the graphic, is located entirely within the linear application $f: \mathbb N \to \mathbb R^2$.
However, with regard to the coordinates of each vector and its algebraic nature, the real dimension would be:
1) Group of vectors $\vec P_{x(1...n)}$: {Dim1} $\mathbb N = 1$.
2) Group of vectors $\vec{Img}_{y_i}$: {Dim2} $\mathbb R = 2$.
3) Group of vectors $\vec P_x$: {Dim3} $\mathbb N = n$.
4) Group of vectors $\vec{Img}_y$: {Dim4} $\mathbb R = n$.
2.1.2.- Algebraic properties of $\vec P_{x(1...n)}, \vec{Img}_{y_i}$ that conform $\vec{SV}_{intrusión} = [\vec P_{x(1...n)}, \vec{Img}_{y_i}]_0^{3^{(n)}}$
The properties to be analysed for this vector system range from the neutral element [23], the symmetric element [24], and the associative [25], commutative [26] and distributive [27] properties, to the analysis of $\vec P_{x(1...n)}, \vec{Img}_{y_i}$ as an abelian group [28], ring [29], field [30] and vector space [31]. Their development, testing and demonstration are proposed for the next version of this publication, and are omitted in this review in favour of a general conceptualization of the framework and its development in the laboratory.
2.1.3- Inferences of the algebraic definition
The theory of linear algebra over fields [32] can be applied to this mathematical model. As with radial graphics, which enclose areas in irregular polygons defined by their vertices, these areas can be operated on algebraically by applying the Gauss area formula [33] and its attached properties, together with the properties of the appropriate ternary logic and Boolean combinational algebra [34], among others. For this reason, the number of theorems and corollaries [35] available to this framework, and to its application in AI and intrusion detection, remains a broad and open subject of study across the different applications described here.
Image credit: Example formula of Gauss area calculation, Wikipedia, Isalar derivative work: Nat2 (talk) - Polygon_area_formula.jpg, Mangosta (https://mangosta.org/formula-del-area-de-gauss/)
2.2.-Calculation and systematic use
The maximum number «m» of possible vectors with a sampling of n = 9 parameters and ternary logic (0, 1, U) is $m = 3^9 \to \Delta\vec r_{(n=9)} = 19{,}683$ intrusion vectors. This number increases as the number of parameters «n» rises to 16, 25, 36, etc., following the logic of square matrices and the numeric succession defined previously. For n ≤ 9 the use of AI is not sufficiently justified, because any programming language could handle the task trivially with an algorithm or a specific library. For n > 9, however, AI is justified by the computing power and the speed in massive data management of the graphic systems associated with artificial intelligence [36], in line with the current offerings of service, server and graphics-card providers such as Google or IBM, among others. This capacity of AI becomes decisive for values of «n» above 25, since that would represent $m = 3^{25} \to \Delta\vec r_{(n=25)} = 847{,}288{,}609{,}443$ vectors, that is, almost one trillion vectors, which would give good precision in intrusion estimates. To obtain these parameters, we capture the rules in IDS logs [37] and use known techniques for monitoring systems (apps, etc.); the framework is flexible and could admit other collection methodologies without further problems.
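The vector counts quoted above follow directly from $m = 3^n$:

```python
def intrusion_vectors(n):
    """Number of distinct intrusion vectors for n ternary parameters,
    each taking one of the three values (0, 1, U)."""
    return 3 ** n

print(intrusion_vectors(9))    # 19683
print(intrusion_vectors(25))   # 847288609443, almost one trillion
```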
The intrusion parameters serve as the basis for designing a graphical model of positives, negatives and training samples, kept separate and organised in content units, folders or directories. For each new set of parameters and images, once the image has been transferred to its assigned content unit, the (previously trained) artificial intelligence tells us whether or not there is an intrusion. For the final decision on whether it really is an intrusion, this framework proposes adding the possibility that a human being evaluates the answer emitted by the artificial intelligence. On the one hand, the AI warns of and scans all intrusions via S⃗V_intrusión = [P⃗x(1...n), I⃗mg_yi], with indices running from 0 to 3^n, and a technologist decides in the end. On the other hand, a relational database of declared positives is managed and attached to the system that integrates or encapsulates the framework. Together with the performance of the AI on the cases that are not yet positives, this allows an improved knowledge base to be generated that grows with the number of user requests.
3.- Singular definitions and an example
3.1.-Definition in Object-oriented Programming (OOP)[38]
Against this background, we can abstract the idea into a class called "Intrusion", and the inheritance of this class into another, defined by 9 parameters, which in turn generates the class called "Intrusiónbase9". "Intrusiónbase9" can be implemented in any programming language. For each instance of this class we obtain an object of S⃗V_intrusión = [P⃗x(1...n), I⃗mg_yi], with indices running from 0 to 3^n, for its subsequent classification as intrusion [True/False].
3.2 Definition of 9 parameters
These 9 parameters are an instance of the general framework described above, and therefore inherit all of its features. The values (0, 1, U) mean: switched off or not in use, switched on or in use, and indeterminate, respectively.
The 9 parameters (n = 9) associated to P⃗x(1...n), their definition and the recommended capture procedure, are:

n=1 → Px1: Unknown or unauthorized web destination (URL). Recommended procedure: Snort rule [39] (IDS) or similar.
n=2 → Px2: Unencrypted communication (no HTTPS). Recommended procedure: Snort rule or similar.
n=3 → Px3: Local data transfer to other storage. Recommended procedure: Snort rule or similar.
n=4 → Px4: Use of or connection to a BSSID [40] not allowed or unknown. Recommended procedure: Kismet rule [41] (WIDS) or similar.
n=5 → Px5: Use of or connection to a Bluetooth port [42] not allowed or unauthorized. Recommended procedure: Kismet rule or similar.
n=6 → Px6: Use of or connection to GPS [43] not allowed or unauthorized. Recommended procedure: Kismet rule or similar.
n=7 → Px7: Use of or connection to a camera not allowed or unknown. Recommended procedure: monitoring system or similar.
n=8 → Px8: Use of an audio device (microphone/speaker) not allowed or unknown. Recommended procedure: monitoring system or similar.
n=9 → Px9: Monitoring of physical parameters (speed, temperature, battery) with unusual results. Recommended procedure: monitoring system or similar.
3.3.- An application example
In general, I⃗mg_yi = f(P⃗x(1...n)), with indices running from 0 to 3^n. Let us see an example with 9 parameters of an image at a given instant T1, for x = 1:

I⃗mg1_i = f(P⃗1(1...9)) = f(P⃗1_1, P⃗1_2, P⃗1_3, P⃗1_4, P⃗1_5, P⃗1_6, P⃗1_7, P⃗1_8, P⃗1_9)

The graphic image to be scrutinised by the artificial intelligence will be created from this linear transformation:

I⃗mg1_i ⇔ [P⃗1_2 − P⃗1_1, P⃗1_3 − P⃗1_2, P⃗1_4 − P⃗1_3, ..., P⃗1_9 − P⃗1_8, P⃗1_1 − P⃗1_9]

The partial vectors (I⃗mg1_i) are built dynamically, in clockwise direction, from the vectors P⃗1(1...9), with a lag of 40° between them:

I⃗mg1_1 = P⃗1_2 − P⃗1_1; I⃗mg1_2 = P⃗1_3 − P⃗1_2; I⃗mg1_3 = P⃗1_4 − P⃗1_3; I⃗mg1_4 = P⃗1_5 − P⃗1_4; I⃗mg1_5 = P⃗1_6 − P⃗1_5; I⃗mg1_6 = P⃗1_7 − P⃗1_6; I⃗mg1_7 = P⃗1_8 − P⃗1_7; I⃗mg1_8 = P⃗1_9 − P⃗1_8; I⃗mg1_9 = P⃗1_1 − P⃗1_9

By assigning previously scanned values to the parameter array (p1) = (0, 1, U, 0, 1, 1, 0, U, 0), these intrinsically define P⃗1(1...9), and the associated image I⃗mg1_i is built as in the figure below. The graphic level «3» has been assigned to the indeterminations «U», and «1» or «2» correspond to the binary values (0, 1), respectively.
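The construction above can be sketched in Python: each ternary parameter is assigned its graphic level (0→1, 1→2, U→3) and placed at a 40° step clockwise, giving the vertices from which the partial vectors I⃗mg1_i are drawn. This is a minimal sketch of the geometry only; the plotting itself is left out:

```python
import math

LEVEL = {"0": 1, "1": 2, "U": 3}  # graphic level per ternary value

def polar_vertices(params):
    """Place each of the n parameters on a polar graph, 360/n degrees apart (40 deg for n=9)."""
    n = len(params)
    step = 360.0 / n
    verts = []
    for i, p in enumerate(params):
        r = LEVEL[p]
        theta = math.radians(-i * step)   # negative angle: clockwise direction
        verts.append((r * math.cos(theta), r * math.sin(theta)))
    return verts

p1 = ["0", "1", "U", "0", "1", "1", "0", "U", "0"]   # the (p1) array of the example
verts = polar_vertices(p1)
# The partial vectors Img1_i are the differences between consecutive vertices,
# the last one closing the polygon (P1_1 - P1_9):
img1 = [(verts[(i + 1) % 9][0] - verts[i][0],
         verts[(i + 1) % 9][1] - verts[i][1]) for i in range(9)]
```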
Image credit: Example of the capture of parameters by Juan Antonio Lloret Egea, 2021, Mangosta (https://mangosta.org/i_parametros-3/)
II.-Laboratory of the framework
4.-Implementation of the 9 parameters
4.1.-Unknown or unauthorized url destination
log tcp $HOME_NET any -> !DireccionIPConfianza any (sid: 1000001; rev: 001;)
log tcp any any -> !DireccionIPConfianza any (sid: 1000001; rev: 001;)
The rule can be defined in two ways. In the first we use the variable "HOME_NET", previously defined, to specify what our network is. In the second we do not specify the network we are using, defining the rule with "any" instead; although this is less efficient in terms of resources, it ensures that the rule works. The rule reads: "Create a log when the TCP protocol is used to send packets from any port of our network to an address other than the trusted address".
4.2.-Unencrypted communication (HTTPS)
log tcp $HOME_NET any -> any any (content:"http"; sid: 1000002; rev: 001; )
log tcp any any -> any any (content:"http"; sid: 1000002; rev: 001; )
As before, the rule can be defined in two ways: with the previously defined variable "HOME_NET" specifying our network, or with "any", which is less efficient but guaranteed to work. The rule reads: "Create a log when the TCP protocol is used to send packets from any port to any address and port with HTTP content".
4.3.-Local data transfer to other storage
log tcp $HOME_NET !puerto_confianza -> any any (sid: 1000003; rev: 001;)
log tcp any !puerto_confianza -> any any (sid: 1000003; rev: 001;)
This rule reads: "Create a log when the TCP protocol is used to send packets from any port other than the trusted port to any address and port."
4.4.-Use of or connection to a BSSID not allowed or unknown
For Windows devices, Kismet can be used, or the check can be integrated in Python using the subprocess module (Windows 7 and above), which lets us query the state of the device and read the answer the operating system gives after entering a command in the console. To know the BSSID to which the device is connected we use "netsh wlan show interfaces", which returns, among other data, the BSSID the computer is connected to. If this BSSID is not a known one, we store "1" as the value; this will be used to build the graphic the artificial intelligence uses to determine whether or not we are victims of an intrusion. If the BSSID is known, we store "0". If it is not possible to determine it, this parameter takes the undefined value.
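The Windows check can be sketched as follows. The helper names and the whitelist of allowed BSSIDs are illustrative (not part of the framework), and the parsing of the netsh output is simplified; the ternary mapping matches the text: "0" known, "1" unknown, "U" indeterminate.

```python
import re
import subprocess

KNOWN_BSSIDS = {"aa:bb:cc:dd:ee:ff"}  # illustrative whitelist of allowed BSSIDs

def classify_bssid_output(netsh_output: str) -> str:
    """Map the output of 'netsh wlan show interfaces' to the ternary parameter value."""
    match = re.search(r"BSSID\s*:\s*([0-9a-fA-F:]{17})", netsh_output)
    if match is None:
        return "U"                      # could not determine the connection state
    bssid = match.group(1).lower()
    return "0" if bssid in KNOWN_BSSIDS else "1"

def capture_bssid_parameter() -> str:
    try:
        out = subprocess.run(["netsh", "wlan", "show", "interfaces"],
                             capture_output=True, text=True).stdout
    except OSError:
        return "U"                      # netsh unavailable (non-Windows host)
    return classify_bssid_output(out)
```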
For Android, WifiInfo can be used, a class defined by Google to manage the device's connections [44] which inherits from the class Object [45]. The method getBSSID() is used to obtain the BSSID as a 6-byte MAC address: XX:XX:XX:XX:XX:XX. Again, if the BSSID is known the value "0" is stored; if not, "1"; and if it is not possible to determine the state of the device's connection, the value is set to "indeterminate".
4.5.-Use of or connection to a Bluetooth port not allowed or unauthorized
For Windows devices, Kismet can be used, or the check can be integrated in Python using the PyBluez library, which lets us obtain the state of the Bluetooth connection.
For Android, the BluetoothProfile interface (a collection of methods and constants), defined by Google to manage Bluetooth connections on Android devices, can be used. In addition, we need another interface, ServiceListener, which lets us easily listen for the (constant) states we are looking for:
STATE_CONNECTING: device in connection process.
STATE_DISCONNECTING: device in disconnection process.
STATE_CONNECTED: connected device.
STATE_DISCONNECTED: disconnected device.
These states may look similar, but the difference is that the first two give us the state of our own Bluetooth adapter, that is, whether Bluetooth is activated or not, while the last two tell us whether a connection to another device is active. A device can have Bluetooth activated and still not be connected to any other device [46].
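The reduction of these states to the framework's ternary parameter can be sketched in Python. The integer values mirror Android's BluetoothProfile constants; the mapping itself is our illustrative convention, not something the text prescribes:

```python
# Illustrative mirror of Android's BluetoothProfile state constants
STATE_DISCONNECTED, STATE_CONNECTING, STATE_CONNECTED, STATE_DISCONNECTING = 0, 1, 2, 3

def bluetooth_parameter(state) -> str:
    """Reduce a Bluetooth connection state to the ternary parameter (0, 1, U)."""
    if state == STATE_CONNECTED:
        return "1"                      # connected to another device: in use
    if state in (STATE_DISCONNECTED, STATE_CONNECTING, STATE_DISCONNECTING):
        return "0"                      # adapter state known but no active connection
    return "U"                          # state could not be determined: indeterminate
```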
4.6.-Use of or connection to GPS not allowed or unauthorized
For Windows devices, Kismet can be used, or the check can be integrated in Python using the gpsd library, which lets us obtain the GPS position when the script is executed. If no connection exists we get no position and store "0" as the value; if we do get a position, we store "1".
For Android, GnssStatus.Callback can be used, a class defined by Google to manage the global navigation satellite system on Android devices. Another possibility is the GpsStatus tool, but it cannot monitor "GLONASS", "GALILEO" or "BEIDOU". In addition, we use the defined method onStarted(), which warns us when GNSS is activated, and its analogue onStopped() when the process is stopped [47].
4.7.-Use of or connection to a camera not allowed or unknown
For Windows devices, the cv2 library in Python is used. It gives us control over video devices, in this case to test whether the camera is active or not when the file is executed. If the camera is online in the test environment, this message appears:
Image credit: Console output online camera status
by Adrián Hernández, 2021, Mangosta (https://mangosta.org/i_parametros-2/)
Additionally, we store "1" as the value in the variable defined to build the graphic with the rest of the parameters. If, instead, the camera is offline, this message appears in the test environment:
Image credit: Console output offline camera status
by Adrián Hernández, 2021, Mangosta (https://mangosta.org/ii_parametros-2/)
In this case, "0" is stored as the value for building the graphic.
For Android devices, android.hardware.camera2 [48] is used together with the Android callback CameraDevice.StateCallback [49], which includes the method onOpened(CameraDevice camera) and reports back whether the camera is open or not.
4.8.-Use of audio device (microphone / speaker) not allowed or unknown
For Windows devices, Python and the PyAudio library are used. This library lets us capture an audio signal using the microphone and later visualise its temporal representation [50]. The aim is to capture the audio waveform over the time window we consider; if that line is not flat, the microphone may be active.
Image credit: Microphone status by uses audio waves
by Adrián Hernández, 2021, Mangosta (https://mangosta.org/iii_parametros-2/)
For Android, AudioManager can be used, a class defined by Google to manage the microphone on Android devices [51]. It is also necessary to use the Context class [52] and the String AUDIO_SERVICE. Furthermore, from API 11 of Android (corresponding to Honeycomb, Android 3.0.x) we can test whether the microphone is online using the defined constants MODE_IN_COMMUNICATION / MODE_IN_CALL.
The steps to create this structure are:
1.- Using getSystemService(java.lang.String)
2.- Including Context class and the String AUDIO_SERVICE: getSystemService(Context.AUDIO_SERVICE)
3.- Including the created sentence in the step 2 in AudioManager class: (AudioManager)context.getSystemService(Context.AUDIO_SERVICE).
4.- Currently, the getMode method associated with AudioManager offers us 5 results:
MODE_NORMAL: There are no calls or actions set.
MODE_RINGTONE: There is a request to the microphone.
MODE_CALL_SCREENING: There is a call connected but the audio is not in use.
MODE_IN_CALL: There is a phone call.
MODE_IN_COMMUNICATION: There is an application which is making audio/video or VoIP communications.
In short, if the method returns "MODE_IN_CALL" or "MODE_IN_COMMUNICATION", the microphone is active.
4.9.-Monitoring of physical parameters (speed, temperature, battery) with unusual results
For Windows devices, Python and the psutil library are used. This library lets us monitor and retrieve information about the system, such as CPU, RAM, disk use, network or battery. Furthermore, it is cross-platform, which will allow us to integrate it on any operating system in the future [50].
A log can be generated with psutil by setting a usage percentage and a sustained period above which the behaviour is considered an abnormal use of the physical parameters: speed, temperature and battery. Moreover, if we want to monitor what is happening with the CPU and RAM in real time, we can use the Matplotlib library, which generates graphics from data contained in lists or arrays.
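The logging condition just described (a usage percentage sustained over a period) can be sketched as a pure function. The samples would come from repeated psutil.cpu_percent() calls, shown only in a comment here, and the threshold values are illustrative:

```python
def unusual_usage(samples, threshold=90.0, sustained=3) -> str:
    """Return '1' if `sustained` consecutive samples exceed `threshold` percent, else '0'.

    In practice the samples would be gathered with e.g.
        samples = [psutil.cpu_percent(interval=1) for _ in range(10)]
    """
    run = 0
    for s in samples:
        run = run + 1 if s > threshold else 0
        if run >= sustained:
            return "1"                  # anomalous sustained load: parameter set to 1
    return "0"

print(unusual_usage([50, 95, 96, 97, 40]))  # -> 1
```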
Image credit: Monitoring of physical parameters by Adrián Hernández, 2021, Mangosta (https://mangosta.org/iv_parametros-2/)
4.10.-Graphs obtained from the parameters analysis
The polar graphs we obtain to monitor each parameter allow the artificial intelligence to classify images and predict whether or not there is an intrusion. They follow this reasoning:
Image credit: Polar graphics by Adrián Hernández, 2021, Mangosta (https://mangosta.org/vi_parametros-2/)
5.-Diagramming and systematic work in the implementation of the framework
For the design, compilation and execution of the framework in the laboratory, we considered different possibilities and programming languages. First, the use of AI itself for training and image classification, which we call systematic work S1 and for which Python is used. Second, the design of a database for the storage and use of data, both for known, defined devices (whether or not they are positive for intrusion) and for devices still to be defined; for this systematic work, S2, SQL is used. Finally, the management of the shell itself and its general assembly; for this systematic work, S3, Java is used. (SQL is specific to systematic work S2. For S1 and S3 one programming language can be chosen between Python and Java; each presents advantages and disadvantages with respect to the use of AI and the target operating systems of smart devices.)
We also present the flow diagrams of the logic to be used, general and singular, adding as complementary elements the step-by-step use of the AI, the design of the database and, finally, the essential coding in the different programming languages used: SQL, Python and Java.
5.1.-General diagram of the framework
In this section we address the "skeleton" of the document: the general diagram, divided into three parts, namely the part related to the user, the database, and the AI. The main objective is to obtain an overview of the procedures in order to understand the whole methodology.
Image credit: Diagram of general flow of the framework by Diana Díaz, 2021, Mangosta (https://mangosta.org/diagramacion-general-framework/)
The general diagram has been designed with an open-source diagramming application called Drawio [53]. This diagram contains two further diagrams: the Java connection with the database, and the use of fast.ai + Google Colaboratory, which also appears at the bottom.
First, the application is installed. On signing in to the app, the user is asked whether he wants his device to be analysed. If the answer is yes, the process continues; if not, the app closes. If the process continues, the user enters the required data: type of device, brand and model. At the same time, Java connects to the database to validate the information and checks whether the device is registered, providing enough information to perform the analysis. If it is registered, the history is used to verify the possible actions the device could be performing without the user's consent. If, on the contrary, the device does not exist in the database, the user's authorisation is required to complete the process; if the user does not agree to this analysis, the application closes. Artificial intelligence is then used to test whether there is a risk of intrusion: we capture the 9 predefined parameters and feed them to a ResNet34 model pretrained with fast.ai, which indicates whether there is an intrusion. With the AI's result, a professional analyses the data obtained and decides whether the results are consistent. If so, the data are stored in the database and the results are communicated to the user; if not, the data are discarded.
5.2.-Java diagramming
The Java diagram is defined as systematic work S3. The main objective of this diagramming is to define, on the one hand, how Java accesses the database using the JDBC driver and, on the other, how it performs a check with a SQL query.
Image credit: Java diagramming by Kimberly Riveros, 2021, Mangosta (https://mangosta.org/diagramacion-de-java/)
The JDBC (Java Database Connectivity) driver facilitates the connection between a database management system and any software written in Java [54]. Apart from the JDBC driver, we use the database management system MySQL to manage the database files, and the phpMyAdmin tool, which serves to administer MySQL and is available in XAMPP. We also work in the Java environment with Eclipse, an integrated development environment used here to connect the database to Java [55]. It is worth adding that the Java diagram was designed with Drawio, as were the rest of the diagrams (S1, S2 and S3).
In the next section we explain how the connection between the database and Java is made, both locally and in the "cloud", using Eclipse. Once all the applications are installed, the JDBC driver is included in our Eclipse project. As a general recommendation, it is advisable to create the database beforehand, to avoid errors.
Image credit: Connection to database - Java by Kimberly Riveros, 2021, Mangosta (https://mangosta.org/conexion-a-la-base-de-datos-java/)
Next step: define the "connection" object in Eclipse to establish the connection, and enter the path of the server with the database we want to access. We name the database "security", both locally and in the "cloud". It is also necessary to specify the user and password to access the database. Lastly, we look up the host and port in phpMyAdmin to include them in the path, thereby making the connection from the "cloud".
Access to code for professors and students: https://mangosta.org/conexion-a-la-base-de-datos-java001/
The local and "cloud" connections are independent of each other, that is, each is made in a different class; the only difference between the classes in Eclipse is the path.
Image credit: Connection to the database - Java by Kimberly Riveros, 2021, Mangosta (https://mangosta.org/conexion-a-la-base-de-datos-java002/)
Once the connection to the database has been tested, we can make as many queries as we like using SQL. For instance, we execute a query using the "Statement" and "ResultSet" objects; as a result, the table and its records are displayed in a window, having been called from Eclipse.
This section will be more detailed in Java codification.
5.3.-Python diagram
The Python diagram is split in two: on the one hand, the connection of the database to Python; on the other, the use of artificial intelligence in Python.
5.3.1.-Database connection diagram with Python
This diagramming details the database connection using Python, defined as systematic work S1, on the Google Colaboratory platform.
Image credit: Python diagramming by Diana Díaz, 2021, Mangosta (https://mangosta.org/diagramacion-conexion-base-de-datos-con-python/)
The tools for this procedure are Google Colaboratory, a Google application for creating virtual environments based on Jupyter notebooks [56]; a Jupyter notebook is a file with the IPYNB extension that combines cells of code and Markdown text [57]. The MySQL Connector/Python connector is written in Python and allows us to connect to an external database, but a means of interaction is needed; for this reason we use the PyMySQL package [58].
The next step is connecting the database to Python. First we access Google Colab and create a new notebook, which automatically becomes a Jupyter notebook. Once it is open, we install the MySQL Connector/Python connector with "pip", a package management system written in Python [59], and import it. Similarly, the PyMySQL package is installed. At this point, if the process gives no error, we have everything needed to connect to the database from our Google Colab notebook. To connect, we use a variable in which, by means of the PyMySQL package, we store the connection information: host name, user, password and the name of the database we want to connect to. We then obtain a cursor from that variable, since the cursor is what lets us search the database: we can execute on it any query we want and then print the result obtained. Once the queries have been executed, we close the connection to the database.
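The connect → cursor → execute → close flow just described can be sketched as follows. We use Python's built-in sqlite3 here as a stand-in for PyMySQL, whose pymysql.connect(host=..., user=..., password=..., database=...) call needs a running MySQL server but exposes the same cursor interface; the table and values are illustrative:

```python
import sqlite3

# Stand-in for: conn = pymysql.connect(host="...", user="...", password="...", database="security")
conn = sqlite3.connect(":memory:")
cur = conn.cursor()                       # the cursor is what executes queries

cur.execute("CREATE TABLE devices (brand TEXT, model TEXT)")
cur.execute("INSERT INTO devices VALUES (?, ?)", ("acme", "phone-9"))
cur.execute("SELECT brand, model FROM devices")
rows = cur.fetchall()
print(rows)                               # -> [('acme', 'phone-9')]

conn.close()                              # close the connection once queries are done
```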
This section will be more detailed in Python codification.
5.3.2.-Diagramming of the use of Artificial Intelligence in Python
This diagram, defined as systematic work S1, explains how intrusions are detected. The objective is to study intrusions using images, focused on the IDS; in particular, we have chosen images of tigers and cats. The next section details the steps required to detect intrusions using ResNet34, Jupyter notebooks and Google Colaboratory.
Image credit: Diagramming of the use of Artificial Intelligence in Python by Adrián Hernández, 2021, Mangosta (https://mangosta.org/diagramacion-uso-de-la-inteligencia-artificial-en-python/)
To carry out this task we use Drawio (for the design of the diagrams), Google Drive and Google Colaboratory, which includes the Python programming language. Apart from these tools, we work with fast.ai, a deep learning library that allows pretrained models such as ResNet34 to be loaded for the subsequent classification of images [60]. It is therefore not necessary to download any software, since all processing is carried out in the "cloud".
Thus, we go to Google Drive and create three folders: "train", "validation" and "test". Within each, we create two subfolders, "cat" and "tiger", in which the images are grouped.
Image credit: Storage of cat images by Kimberly Riveros, 2021, Mangosta (https://mangosta.org/gatos/)
Image credit: Storage of the tigers images by Kimberly Riveros, 2021, Mangosta (https://mangosta.org/tigres/)
Subsequently, we enter Google Colaboratory and indicate in the first cell the path where our dataset is located; in this way we train the model with these images. We then create the pretrained model with the ResNet34 architecture. By pretrained model we mean a model that allows other models to achieve advanced results without requiring huge amounts of computation, time and patience [61]; it is also a way of reusing an appropriate training, so that if we want to use the model in the future it does not have to be trained again. ResNet34 (a residual neural network) is a model pretrained on ImageNet, a visual object recognition system [62]. Both help us improve performance and optimise the results with regard to intrusion detection.
The next step consists of running a training cycle with a defined number of epochs (see below, in the process of detecting intrusions with artificial intelligence), so that the artificial intelligence can study the whole set of images. To interpret the results we create a confusion matrix, which helps us evaluate the performance of the image classification model by comparing the real values with the predicted ones. In this way we can see how the classification model is working; in fact, in the following graphic representation we check that the artificial intelligence misclassifies only one image, labelling it a tiger when it is a cat.
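A confusion matrix of this kind is easy to reproduce by hand. This small sketch counts real versus predicted labels; the example lists are illustrative, chosen to match the single misclassified cat of the figure:

```python
from collections import Counter

def confusion_matrix(actual, predicted, labels):
    """Rows are real labels, columns are predicted labels."""
    counts = Counter(zip(actual, predicted))
    return [[counts[(a, p)] for p in labels] for a in labels]

actual    = ["cat", "cat", "cat", "tiger", "tiger", "tiger"]
predicted = ["cat", "cat", "tiger", "tiger", "tiger", "tiger"]  # one cat mistaken for a tiger
m = confusion_matrix(actual, predicted, ["cat", "tiger"])
print(m)  # -> [[2, 1], [0, 3]]
```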
Image credit: Answer of Artificial Intelligence by Kimberly Riveros, 2021, Mangosta (https://mangosta.org/i_respuesta-inteligencia-artificial/)
Image credit: Answer of Artificial Intelligence (confusion matrix) by Kimberly Riveros, 2021, Mangosta (https://mangosta.org/ii_respuesta-inteligencia-artificial/)
6.-Step by step in the process of detecting intrusions with Artificial Intelligence: ResNet34, Jupyter notebooks, and Google Colaboratory
Google Colab is a working environment that allows us to execute Jupyter notebooks in a web browser. Among its many advantages are that no prior configuration is needed and that it offers free access to GPUs (Tesla T4 or Tesla K80). It therefore offers the possibility of training and using artificial intelligence without subscriptions and without using our own hardware resources.
A Jupyter notebook is a document that executes cells of "live" code, plain text, equations and so on, structured as an ordered list of inputs and outputs. The name Jupyter comes from Julia + Python + R, the three programming languages originally supported by Jupyter notebooks. Its main components are a set of kernels (interpreters) and the dashboard. By changing the notebook's kernel we can execute other languages from Google Colab, for instance Java.
Once we know the components that will give shape to our artificial intelligence, we explain the procedure.
The first step is preparing the Google Colab environment, since we cannot access it without activating it. To do this, we click on Google Colaboratory from Google Drive.
Image credit: Process of detecting intrusions with Artificial Intelligence: ResNet34, Jupyter Notebooks and Google Colaboratory by Adrián Hernández, 2021, Mangosta (https://mangosta.org/proceso-de-detectar-intrusiones-con-inteligencia-artificial/)
If the option does not appear, we go to this URL: https://colab.research.google.com/
Once we access Google Colab, the default notebook provided by Google to get started with Python in the "cloud" is available. However, we can also create a blank notebook for our tests by clicking on the option "new notebook".
Image credit: Process of detecting intrusions with Artificial Intelligence: ResNet34, Jupyter Notebooks and Google Colaboratory by Adrián Hernández, 2021, Mangosta (https://mangosta.org/proceso-de-detectar-intrusiones-con-inteligencia-artificial001/)
The new created notebook has the following structure:
The cell that appears allows us to write Python code and execute it with the "play" button located in the cell itself.
Google Colaboratory is not configured to use GPUs by default, so we have to prepare it. To do so, we click on the options "execution environment" and "change the type of execution environment".
Then we click on the drop-down menu showing "none" and replace it with "GPU".
At this point Google will assign a GPU to the Jupyter notebook, which can be checked using the following instruction in a code cell:
As we see in the image, a Tesla T4 has been assigned, although it could have been a Tesla K80, these being the two GPUs Google offers free of charge. The Google Colab environment is now set up for working with artificial intelligence, but we still need to create the folder structure used to train our model. This procedure will be carried out from Google Drive to reduce its complexity, although it could also be done on a local host by configuring the Colab environment accordingly.
For the first tests we use images of cats and tigers. The aim is that, given an image the AI has never seen before, it will be able to differentiate between cats and tigers. The initial structure has three folders, which can be named freely, but for convenience it is recommended to define them as:
A folder named Test, for the images the artificial intelligence has never seen before, used to check that the AI can classify graphical representations correctly. A folder named Train, for training our model; here there is a distinction between tigers and cats, so the artificial intelligence knows whether it is looking at a cat or a tiger. And a folder named Validation, which serves to validate the model and also distinguishes the two classes; it lets us know how the artificial intelligence will classify the images. Inside the folders Train and Validation we have this structure:
Inside each folder there are images of cats and tigers, and the images in each folder should be different. The more images we use, the greater the precision with which the artificial intelligence will classify other graphical representations.
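The folder structure described above can also be created programmatically. A sketch with pathlib follows; the base directory is illustrative (on Colab it would be a path under the mounted Drive), and the folder names match the text:

```python
from pathlib import Path
import tempfile

def create_dataset_tree(base):
    """Create train/validation/test folders, each with cat and tiger subfolders."""
    base = Path(base)
    for split in ("train", "validation", "test"):
        for cls in ("cat", "tiger"):
            (base / split / cls).mkdir(parents=True, exist_ok=True)
    return base

# Illustrative base directory; on Colab this would be under the mounted Google Drive.
root = create_dataset_tree(tempfile.mkdtemp())
print(sorted(p.relative_to(root).as_posix() for p in root.rglob("cat")))
# -> ['test/cat', 'train/cat', 'validation/cat']
```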
Once the folder structure is created and the Google Colab environment is ready, we start working with the artificial intelligence. To do this, we execute the code that connects our Google Colab environment with Google Drive, where the folders the AI will use from the created notebook are located.
Access to code for professors and students: https://mangosta.org/proceso-de-detectar-intrusiones-con-inteligencia-artificialc001/
Once both environments are associated, we define the paths, for instance root_dir and base_dir.
Similarly, we define the path, that is, the location of the main directory we use.
To train the artificial intelligence model we need to import fast.ai. ResNet34 refers to a residual network introduced by Microsoft in 2015. To use fast.ai, we import the library:
These instructions import the fast.ai library and fast.ai vision, as well as "error_rate" from fast.ai's metrics. "Error_rate" is used to determine the degree of error in our trained model; in other words, to decide, according to our criteria, whether we have to improve the model or whether, on the contrary, the error obtained is acceptable and the model is fit for use.
Once the libraries are imported, we should check that the notebook is set up correctly. To do this, we print a random image on the screen using the defined path and these instructions: open_image (with the path of the image, using the variable path) and img.show().
Image credit: Process of detecting intrusions with Artificial Intelligence: ResNet34, Jupyter notebooks and Google Colaboratory by Adrián Hernández, 2021, Mangosta (https://mangosta.org/proceso-de-detectar-intrusiones-con-inteligencia-artificial013/)
Access to code for professors and students: https://mangosta.org/proceso-de-detectar-intrusiones-con-inteligencia-artificialc005/
When we see that the image is displayed correctly, we create the data (the set of images) that our model is going to use. To start, we define the batch size, which indicates the number of images passed to memory at the same time. As a preventive measure, it is advisable not to use a very high batch size; otherwise we may cause a memory error in the graphics card and, consequently, stop the process. To define the batch size we write this code:
In this case, the number 6 indicates the number of images passed to memory at the same time. Once we have assigned the batch size, we define the data that we are going to use, in this particular case images. To do so, we use this code cell:
In this cell, we define the variable data, following the folder structure that we previously created (Train, Validation and Test) and indicating that these folders should be assigned to their predefined variables: train, valid and test.
When the dataset is defined, we initialize our model. To do this, we create the learner, loading ResNet34, with this code:
We use cnn_learner and pass it the dataset (data), the model (models.resnet34) and, in addition, the error_rate metric imported from fastai.metrics. This line of code creates and loads the model, so the next step is to train it. To do this, we use:
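Gathering the steps so far, the pipeline can be sketched with the fastai v1 API. The folder names follow the text (Train, Validation, Test); the use of fit_one_cycle, the image size, the transforms and the saved model name are assumptions on top of what the text states:

```python
def train_cats_vs_tigers(path, bs=6, epochs=6):
    """Hedged sketch of the fastai v1 training pipeline described above."""
    from fastai.vision import (ImageDataBunch, cnn_learner, models,
                               get_transforms, imagenet_stats)
    from fastai.metrics import error_rate

    # Build the DataBunch from the Train/Validation/Test folder structure
    data = ImageDataBunch.from_folder(
        path, train='Train', valid='Validation', test='Test',
        ds_tfms=get_transforms(), size=224, bs=bs).normalize(imagenet_stats)

    # Create the learner with ResNet34 and the error_rate metric
    learn = cnn_learner(data, models.resnet34, metrics=error_rate)
    learn.fit_one_cycle(epochs)          # one epoch = one full pass over Train
    learn.save('resnet34-cats-tigers')   # hypothetical model name
    return learn
```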
The value in parentheses indicates the number of epochs used for training. But what is an epoch? An epoch is one full pass of all the data through the artificial intelligence; in particular, all the images located in the training folder go through the network once. When you train with 6 epochs, all the images in the folder defined as Train pass through the AI 6 times.
Image credit: Process of detecting intrusions with Artificial Intelligence: ResNet34, Jupyter Notebooks and Google Colaboratory by Adrián Hernández, 2021, Mangosta (https://mangosta.org/proceso-de-detectar-intrusiones-con-inteligencia-artificial018/). Access to code for professors and students: https://mangosta.org/proceso-de-detectar-intrusiones-con-inteligencia-artificial-resnet34-cuadernos-jupyter-y-google-colaboratoryc009bis/
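With a batch size of 6 and 6 epochs we can estimate how many batches the trainer processes, a quick sanity check. The training-set size used here (600 images) is purely an illustrative assumption:

```python
import math

n_train_images = 600   # hypothetical size of the Train folder
bs = 6                 # batch size defined above
epochs = 6             # each epoch is one full pass over the Train folder

batches_per_epoch = math.ceil(n_train_images / bs)
total_batches = batches_per_epoch * epochs
print(batches_per_epoch, total_batches)   # → 100 600
```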
When the training is finished, we look at the final error_rate of the trained model. In the case of the image above, the error_rate is 0.003165 or, as a percentage, approximately 0.31%. It is important to understand what error_rate is for: it reports the precision of the model and lets us decide whether we should train with more epochs or, on the contrary, start using the model. If we are not sure what an error of 0.31% means, we can print a confusion matrix:
Image credit: Process of detecting intrusions with Artificial Intelligence: ResNet34, Jupyter notebooks and Google Colaboratory by Adrián Hernández, 2021, Mangosta (https://mangosta.org/proceso-de-detectar-intrusiones-con-inteligencia-artificial-019/)
The confusion matrix compares, on the x-axis, the predictions of the artificial intelligence (cat or tiger) against, on the y-axis, what the image actually is. Here the AI is right on 331 cat images and 299 tiger images. However, in 2 cases it predicts that the images are cats when they are really tigers. Therefore, the error is 2/632 ≈ 0.316%.
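The 0.316% figure can be checked directly from the matrix counts:

```python
# Confusion-matrix counts reported above
correct_cats   = 331   # cats predicted as cats
correct_tigers = 299   # tigers predicted as tigers
errors         = 2     # tigers predicted as cats

total = correct_cats + correct_tigers + errors   # 632 images in total
error_rate = errors / total
print(f"{error_rate:.6f} ({error_rate:.4%})")    # → 0.003165 (0.3165%)
```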
When we consider that the error is acceptable, we save the model to use it later on.
The path and name of the model are indicated between brackets. Once the model is saved, we test it with images that the AI has never seen. To do so, we use a new code cell containing the following:
Image credit: Process of detecting intrusions with Artificial Intelligence by Adrián Hernández, 2021, Mangosta (https://mangosta.org/proceso-de-detectar-intrusiones-con-inteligencia-artificial020/). Access to code for professors and students: https://mangosta.org/proceso-de-detectar-intrusiones-con-inteligencia-artificialc011/
At this point, we load the stored model into learn and fetch an image from the folder defined as Test, in this case image number 520. We can also print the image in Google Colab to verify that what the AI is about to predict really matches it; to print the image we use img.show(). To tell the artificial intelligence to analyse the loaded image, we use this code:
Here we call the model through learn and use predict(img) to apply the trained model. The result is given as tensor(0) for cats and tensor(1) for tigers. It also gives us, in brackets, the probability that the image is a tiger (99.9%).
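The probabilities that predict reports come from a softmax over the model's raw scores for the two classes. The arithmetic, with hypothetical raw scores, looks like this:

```python
import math

def softmax(scores):
    """Convert raw class scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw outputs: index 0 = cat (tensor(0)), index 1 = tiger (tensor(1))
probs = softmax([-3.5, 3.5])
print(f"tiger probability: {probs[1]:.4f}")   # close to 1, i.e. ≈ 99.9%
```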
7.-Creation and structure of database
A database has been created to collect the data that smartwatches and other related devices offer, in order to test whether all these data are authorized and known by the user or, conversely, are not authorized. For this reason, we take two factors into account: the data needed to access the users' information and the data that constitute a danger because of unauthorized access. In this sense, we focus on the data collected by smartwatches, with an additional table (THealth).
To design the diagram, we use Drawio, as in the previous designs, together with database-related software such as the SQL language, MySQL and PhpMyAdmin.
The two fundamental keys of the database are the importance of Boolean data and of trivalent logic, about which we shall have more to say later. But what is the function of the Boolean fields? Why do we use them in the database? The reason is to find valid information in less time, as well as to make combined searches with several terms [63].
To clarify the ideas behind the database, we explain the design and structure of the 5 tables that compose it. The first table is TDevice. It is composed of 3 fields: ID (an identity number, present in all tables as a common thread), the name of the brand (varchar) and the model (varchar). The second table, TDeviceType, has 6 fields: ID and five Boolean fields (smartwatch, portable PC, desktop computer, tablet and smartphone), which are true or false depending on the type of device. The third table, TUnauthorizedAccess, has 4 fields: ID and 3 Boolean ones (camera, microphone and GPS); the aim of this table is to know whether the device has access to any of them. The fourth table, THealth, has 8 fields: ID and 7 Boolean ones (heart rate, sleep, automatic exercise register, stress, oxygen level, menstrual cycle, and steps); this table records which information the device collects. The last table, TDetection, has only two fields: ID and 1 Boolean (positive). This table is very important because, after testing the rest of the data from the other tables, it stores whether the device is positive or not, that is, whether or not it has an intrusion.
Image credit: Entity-relationship diagram by Diana Díaz, 2021, Mangosta (https://mangosta.org/diagrama-entidad-relacion/)
As we can see in the previous image, the diagram is composed of three important elements:
Entity. Represented by rectangles, which show the names of the tables, for instance TUnauthorizedAccess.
Attributes, with an oval shape. They define the features of an entity; for example, nID, 1Camera and so on.
Relations, with a rhomboid shape. They show the connections between entities. In this case, a connection between two entities would be: TDevice has TUnauthorizedAccess.
Trivalent logic
To design the database, we focused on the speed of queries, because it is important for the good development of the project. For this reason, Boolean fields are used.
Bivalent logic distinguishes two values, true and false, represented by 1 and 0 respectively [64]. Here we apply ternary or trivalent logic, that is, a logical system with three values: true, false, or undefined. The third value is interpreted as the possibility that something is neither true nor false, but undefined. In this context, the indeterminate indicator, showing no value, is represented as null.
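Three-valued logic is easy to see in any SQL engine. A minimal demonstration using Python's built-in sqlite3 module (used here purely to illustrate, since the project itself runs on MySQL):

```python
import sqlite3

conn = sqlite3.connect(':memory:')   # throwaway in-memory database
cur = conn.cursor()

# NULL = 1 is neither true nor false: the comparison itself evaluates to NULL
cur.execute("SELECT 1 = 1, 0 = 1, NULL = 1")
print(cur.fetchone())   # → (1, 0, None): true, false, undefined

conn.close()
```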
Physical model
Once the design is visualized and the structure of the tables is analysed, the next step is to study the database from the physical model.
Image credit: Physical model by Diana Díaz, 2021, Mangosta (https://mangosta.org/modelo-fisico-2/)
Use of SQL and its database management system
SQL (Structured Query Language) became the standard of the American National Standards Institute in 1986 and of the International Organization for Standardization (ISO) in 1987 [65]. It has gained such prominence because it is a non-procedural language, that is, you specify what you want but not how or where to get it. It is also relationally complete, because it allows making queries [66].
Now that we know how SQL works, we can choose the database management system. To develop the database, we use MySQL, an open-source relational database management system. Open source is a software model based on open collaboration, focused on practical benefits such as access to the source code [67]. These features make MySQL accessible and practical; in addition, it is a reliable and standardized option given its extensive use. On the other hand, SQLite [68] is a database engine that includes the SQL language. Its main advantage is that it does not need a server to work, which makes it very useful for applications. Another advantage is the space it occupies (<500 kB), which makes it feasible to manage the database on the device rather than on our server.
But why is MySQL preferable to SQLite? The database has been designed mainly with Boolean fields, and SQLite does not offer this option. The only possibility would be converting Booleans to integers in its storage classes, slowing down the database. Additionally, SQLite is a system oriented to low volumes of data, so if we enter a large amount of data its engine is not as efficient as we need. Finally, we consider it most advantageous to continue using MySQL and a server.
Representation of data
For this, we talk about CKAN and DKAN, which are tools for data management on the web. Their advantage is that CKAN and DKAN are open-source, free and open-access platforms. The main difference is that DKAN is a version of CKAN developed in Drupal [69].
Image credit: Snapshot of DKAN, 2021, authorship of the National Health Council: https://getdkan.org/
But what is Drupal? Drupal is a flexible content management system (CMS) that offers many services, such as the publication of opinion polls, forums, articles and images [70]. It is also a dynamic system that allows storing data in a database and editing them in a web environment.
Image credit: Snapshot of Drupal, 2021, Authorship of Drupal: https://www.drupal.org/
As described above, CKAN and DKAN are similar, but DKAN is, a priori, a superior version, so we consider the DKAN platform more interesting. It should be highlighted that CKAN has a demanding hardware consumption and an inefficient management of security for users and resources. We consider the use of one of these platforms important in order to offer the user a clearer view of the data, as well as full transparency. For this reason, these platforms are used by governments [71], such as Australia's or Canada's, in addition to non-profit institutions.
8.-Basic codification of the framework in SQL, Python and Java
8.1.-SQL coding
To create the database, we use SQL code. Naturally, we need to use UTF-8, an encoding format for Unicode and ISO 10646 characters [72]. We use UTF-8 so that the database can register words containing special characters, such as the letter "Ñ" in the word "sueño". On the other hand, the table TDevice (the last one in the code) holds the foreign keys, since it is the junction point of the rest of the tables.
Image credit: SQL coding by Diana Díaz, 2021, Mangosta (https://mangosta.org/iii_codificacion-sql/). Access to code for professors and students: https://mangosta.org/codificacion-sql001/
Image credit: SQL coding by Diana Díaz, 2021, Mangosta (https://mangosta.org/iv_codificacion-sql/). Access to code for professors and students: https://mangosta.org/codificacion-sql002/
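The shape of the SQL described above can be sketched as follows. The column names and sizes are illustrative assumptions (the exact statements appear in the linked code); what the sketch shows is the UTF-8 encoding, the nullable BOOLEAN fields that support trivalent logic, and TDevice acting as the junction point via foreign keys:

```python
# Illustrative DDL in the spirit of the text, held as a Python string
ddl = """
CREATE DATABASE security CHARACTER SET utf8mb4;

CREATE TABLE TDetection (
    nID INT PRIMARY KEY,
    bPositive BOOLEAN            -- TRUE, FALSE or NULL (undefined)
);

CREATE TABLE TDevice (           -- junction point of the other tables
    nID INT PRIMARY KEY,
    sBrand VARCHAR(50),
    sModel VARCHAR(50),
    nDetectionID INT,
    FOREIGN KEY (nDetectionID) REFERENCES TDetection(nID)
);
"""
```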
8.2.-Java coding
In this section, we present the Java code [73] with which we have accessed the database named "security", as well as the result of making a query in SQL. First, we import the java.sql library to use the database from Eclipse:
Image credit: Java coding by Kimberly Riveros, 2021, Mangosta (https://mangosta.org/viii_codificacion-java/). Access to code for professors and students: https://mangosta.org/codificacion-java/
It is also necessary to ensure that the JDBC driver is properly loaded. Next, we make the connection to the database by following these instructions:
Image credit: Java coding by Kimberly Riveros, 2021, Mangosta (https://mangosta.org/ix_codificacion-java/). Access to code for professors and students: https://mangosta.org/codificacion-java001/
The DriverManager class takes a String containing the path (URL) of the database. For this, we use the JDBC connector for MySQL which we installed in the Java diagramming section. Additionally, we tested it against the database server both in localhost and in the "cloud", changing the path of the database.
Image credit: Java coding by Kimberly Riveros, 2021, Mangosta (https://mangosta.org/x_codificacion-java/). Access to code for professors and students: https://mangosta.org/codificacion-java-002/
By default, the MySQL server listens on port 3306, and it is necessary to connect to this port if we want to query the database. Next, we create the Statement object, following these steps:
Image credit: Java coding by Kimberly Riveros, 2021, Mangosta (https://mangosta.org/xi_codificacion-java/). Access to code for professors and students: https://mangosta.org/codificacion-java-003/
In this way, the ResultSet object allows us to get the results of the query. In our case, it shows all the records of the requested table, that is, TDevice:
Image credit: Java coding by Kimberly Riveros, 2021, Mangosta (https://mangosta.org/xii_codificacion-java/). Access to code for professors and students: https://mangosta.org/codificacion-java-004/
Now we use a while loop and the variable myResulset to retrieve all the results of the query.
Image credit: Java coding by Kimberly Riveros, 2021, Mangosta (https://mangosta.org/xiii_codificacion-java/). Access to code for professors and students: https://mangosta.org/codificacion-java-005/
If it is impossible to establish a connection to the database, when executing in Eclipse the message "Does not work" will appear on screen. After all these steps are completed, we get the table with the records we have queried:
Image credit: Java coding by Kimberly Riveros, 2021, Mangosta (https://mangosta.org/codificacion-java-2/)
8.3.-Python coding
In this section, we present the Python code that we have used to access the database named "security", as well as the result of making a query in SQL. First, we install the connector mysql-connector-python, a driver to communicate with MySQL servers [74].
Image credit: Python coding by Kimberly Riveros and Diana Díaz, 2021, Mangosta (https://mangosta.org/i_codificacion-python/). Access to code for professors and students: https://mangosta.org/ii_codificacion-python/
Then we do the same with PyMySQL, another package used to interact with MySQL databases. Once installed, we import it:
Image credit: Python coding by Kimberly Riveros and Diana Díaz, 2021, Mangosta (https://mangosta.org/iii_codificacion-python/). Access to code for professors and students: https://mangosta.org/iv_codificacion-python/
Image credit: Python coding by Kimberly Riveros and Diana Díaz, 2021, Mangosta (https://mangosta.org/x_codificacion-python-2/). Access to code for professors and students: https://mangosta.org/iv_codificacion-python/
Once all the packages are installed, we connect to the database. To do so, we create a variable called myConnection and, using PyMySQL, store the path of the database, the user, the password and the name of the database, that is, the information required to establish the connection.
Image credit: Python coding by Kimberly Riveros and Diana Díaz, 2021, Mangosta (https://mangosta.org/xi_codificacion-python/). Access to code for professors and students: https://mangosta.org/v_codificacion-python/
The next step is to turn this connection into a cursor. Then we execute the cursor with the query we want to make.
Image credit: Python coding by Kimberly Riveros and Diana Díaz, 2021, Mangosta (https://mangosta.org/xii_codificacion-python/). Access to code for professors and students: https://mangosta.org/vi_codificacion-python/
Image credit: Python coding by Kimberly Riveros and Diana Díaz, 2021, Mangosta (https://mangosta.org/xiii_codificacion-python/). Access to code for professors and students: https://mangosta.org/vii_codigo-python/
To conclude, we print the result of the query on screen. When we finish, we close the connection.
Image credit: Python coding by Kimberly Riveros and Diana Díaz, 2021, Mangosta (https://mangosta.org/xiv_codificacion-python/). Access to code for professors and students: https://mangosta.org/viii_codificacion_python/
Image credit: Python coding by Kimberly Riveros and Diana Díaz, 2021, Mangosta (https://mangosta.org/x_codificacion-python/)
Image credit: Python coding by Kimberly Riveros and Diana Díaz, 2021, Mangosta (https://mangosta.org/xv_codificacion-python/). Access to code for professors and students: https://mangosta.org/ix_codificacion-python/
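The whole sequence (connect, cursor, execute, print, close) can be gathered into one hedged sketch. The host, credentials and the queried table are placeholder assumptions; the exact values appear in the linked code:

```python
def query_tdevice(host='localhost', user='root', password='secret'):
    """Sketch of the PyMySQL steps described above; credentials are placeholders."""
    import pymysql   # pip install pymysql

    my_connection = pymysql.connect(host=host, user=user,
                                    password=password, database='security')
    try:
        with my_connection.cursor() as cursor:
            cursor.execute("SELECT * FROM TDevice")
            for row in cursor.fetchall():   # print every record of the query
                print(row)
    finally:
        my_connection.close()               # always release the connection
```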
9.-Interfaces
We use them for the front end [75] of the application (UI/UX) [76], the interactive layer through which the user interacts with the application and through which all input is produced. The usability and design of the interface in the different modes of operation M1, M2 and M3 (defined further on, in 10.2) should allow parental and management control of a sufficient number of devices. The design of the interface is a key element for a solvent and practical use of the framework.
9.1.-Kivy
Kivy [77] is, technically, just another Python library, but not just any library: it is a framework for creating user interfaces that runs on many platforms, such as Android, iOS, Windows, Linux and macOS. It can also be viewed as an alternative to React Native [78], Flutter [79], Solar2D and others [80].
Fundamentally, Kivy is for enthusiasts of Python and machine learning. With the Kivy library, the best choice is to use Conda [81]. Installing Kivy with Conda is easy: «conda install kivy -c conda-forge». Later, we can compile an application for iOS or Android from source code [82] and execute it in a simulator to inspect its appearance and behaviour.
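To show the flavour of the framework, a minimal Kivy application sketch; the names are generic illustrations, not part of the project:

```python
def build_demo_app():
    """Smallest possible Kivy interface: a window with a single label."""
    from kivy.app import App
    from kivy.uix.label import Label

    class DemoApp(App):
        def build(self):
            # The widget returned here becomes the root of the interface
            return Label(text='Hello from Kivy')

    return DemoApp()

# build_demo_app().run()  # opens a window; requires Kivy to be installed
```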
9.2.-Kotlin
Kotlin [83] is a statically typed [84] programming language that runs on the Java Virtual Machine [85] (JVM) and can also be compiled [86] to JavaScript source code. Although its syntax is not compatible with Java, Kotlin is designed to interoperate with Java code, and it is widely used in the creation of Android applications.
9.3.-Social and legal elements in the design of interfaces
Here are specific directions for coding and developing the graphical and communicative environments presented to the users of this framework. They are assumptions that developers should always keep in mind from the beginning, to avoid harmful technical modifications in the final implemented environments later on. The framework should never forget its main objective: providing the maximum protection and encouraging social plurality.
Conceptual criteria on gender equality [87] and accessibility [88], as well as an emphasis on avoiding all types of bias [89], should be clearly stated.
The language, particularly that oriented to minors, should be carefully studied for appropriate use.
Regulatory compliance with data protection [90] and with other rules deriving from the application must be an ever-present imperative.
As far as possible, sufficient educational information about cybersecurity and artificial intelligence must be provided.
III.-Stages and modes applicable to the framework and minimum guidelines of requirements
The characterization of this framework in stages and modes is proposed to define the systems infrastructure and ensure an appropriate level of procedure. Besides this, speed of execution on the different smart devices is an important feature, as is the general description of all associated instruments and tools.
10.1-Stages
10.1.1-Stage 1 (S1): laboratory mode and AI learning
It is in this stage that basic and advanced configurations and settings can be made. The different settings of frameworks and models help us optimize and improve the result, and subsequently this result and its associated experience are applied to stage S2, the execution mode of the AI.
To train the artificial intelligence we can start from scratch or transfer knowledge from a previously trained model. We can propose specific models for images, since ResNet offers very acceptable general results in the proportion of successes of a classification: accuracy [91], or success rate in prediction. Furthermore, we can use different frameworks, the best known of which are PyTorch, TensorFlow, fastai and Keras. fastai is our framework.
10.1.2-Stage 2 (S2): execution mode of AI
It is the result of stage S1. This stage is defined by a model trained and saved for execution. It should not admit any advanced settings.
10.2-Modes
The modes are the operating configurations in which this framework can be deployed on the smart devices to which it applies.
10.2.1-Mode 1 (M1): standalone mode
It is subject to stage S2. It is defined so that it can be downloaded and run like any other app on the market [92]. Mainly, it runs on smartphones, tablets and PCs. Full compatibility of mode M1 is recommended for the operating systems Android 11 or higher; iOS 14.5.1 or higher; Windows 10 or higher; or Mac OS X 10.15 or higher. The minimum hardware requirements are those recommended by their manufacturers.
10.2.2-Mode 2 (M2): connected or slaved mode from another device
It is conceived for the connection of two devices with compatible technologies, for example Bluetooth, WiFi or others, by using an app or complementary satellite software for this purpose. Most current smartwatches, referenced in section 11.1, fall under this mode. The minimum hardware requirements for deploying the framework in mode M2 are: 4 GB of internal storage, 512 MB of RAM and Bluetooth 4.0.
10.2.3- Mode 3 (M3): server mode or training and learning laboratory
It is mainly conceived for stage S1. Generally, mode M3 is used for the design and management of the AI, and it is also the mode that embeds and manages mode M1 and mode M2 of all devices related to it. It is based on a client-server infrastructure [93]. According to its execution, two classes are established:
A) Mode 3A: computation in the "cloud"
Analogous to the system that we have proposed in the laboratory, it can also be implemented with an ad hoc [94] infrastructure belonging to a server provider of the framework, assuming the inherent costs of its use and deployability. AWS from Amazon, Google Cloud [95], Azure [96] and others are possible candidates for this purpose. To complete the environment, a XAMPP server [97] or similar has to be implemented on a hosting service [98], adapting services for Java Servlets [99] to achieve complete operability. The hardware requirements need to be at least those we used in our free laboratory setting, to guarantee a satisfactory execution of the AI, together with the requirements indicated by the publishers of the associated software.
B) Mode 3B: computation in local
To work in this mode, we can do a local installation [100] with fastai and Anaconda or Miniconda under Linux or Windows. Furthermore, for the management of the database itself, it is necessary to install MySQL or similar software, and the Java JDK [101] for developing and executing the Java code.
For this mode 3B the hardware has to be more specific and demanding, although it should not exceed 2,500 to 3,000 euros at current market prices for an optimum setup of a production environment. This mode has to be capable of executing software similar to Anaconda Enterprise 4 or 5 [102] (requirements: CPU: 2 x 64-bit 2.8 GHz 8.00 GT/s CPUs; RAM: 32 GB, or 16 GB of 1600 MHz DDR3 RAM; 300 GB of disk storage; and Internet access). For the whole set that mode 3B supposes, 64 GB of RAM is advised, together with a hard disk of characteristics similar to or higher than a Samsung 970 EVO Plus (1 TB).
The GPU [103] is the component that will carry out most of the slow and large calculations in the process of training a convolutional network [104]. Mainly, these calculations are matrix multiplications, convolutions, and activation functions [105]. Hardware provided by NVIDIA, for instance the 2080 Ti [106] with 13.45 TFLOPs and 11 GB of GDDR6 memory, is a solid base. For the CPU, Intel is a solid option; within the i7 family, the i7-10700K can be a safe choice.
IV.-Range of application
The environments in which the framework can be implemented will each generate their own specifications. Mainly, they are characterized by hardware, by the layer of the OSI model [107] in which they operate, by their operating system [108] and by connectivity to a defined network and to the Internet, apart from the connection between devices. Particular attention must be paid to the speed of execution of the framework, so the applicable procedures should be only those necessary, as light as possible and as fast as can be achieved, all in the best balance of consumption and performance, and taking into account the capacities and morphology of where it will be used. For some of these environments we make descriptions and proposals of software and hardware, noting that they are not essential or exclusive in the final implementation.
11.1-Smartwatches
A smartwatch can represent a lifestyle, even a concept of freedom: it is a plastic watch housing that carries and camouflages digital technology. Moreover, it is worn by the subject as ordinary everyday wear, a wearable (wearable technology [109]). "The information that moves with you" was the motto of Android Wear [110] in 2014. And, coupled to the hand, or to the wrist, it can pose a serious privacy problem.
A smartwatch has nuances that make a difference, for example its design, its size, its user interface [111], whether or not it is associated to a smartphone by means of Bluetooth, whether the smartwatch has a SIM card [112], whether it can be managed as another device in the network (WiFi) [113], whether it is really a smartwatch or a fitness band [114]; or all at once, or only some of them.
A conglomeration of hardware, features and market prices is associated with smartwatches, and it will determine elements such as the screen (acoustic-wave [115], resistive or capacitive touch [116], and panel: LCD [117], IPS [118], AMOLED [119] or OLED [120]), the incorporated sensors, RAM, internal storage, microprocessor (own fabrication, Qualcomm, ARM, MediaTek…), battery life, whether it incorporates audio and a photographic camera, and communication and proximity technologies such as NFC [121], WiFi or RFID [122], and so on.
We can find them from minimum prices of around or below €20 on the Chinese market at AliExpress (Tipmant Smartwatch) or on Amazon, up to prices that can reach €2,000 (TAG HEUER CONNECTED GOLF EDITION) [123], or even more.
Looking ahead for this kind of device, the main difference could be verifying whether they replace the smartphone: whether a smartwatch can make and receive calls (using LTE bands [124], for instance, in connection with the democratization of 5G [125]) and join the Internet of Things (IoT) [126].
Normally they run a reduced version of an operating system such as Android, iOS or Linux; Wear OS [127], watchOS [128] and Tizen OS [129] are the most common. This digital technology is the platform where the framework can be executed in a reduced OS environment.
11.1.1-Hardware of some of the smartwatches
Dual-Core 1 GHz Exynos3250
4GB internal memory, 512MB RAM
1.63" Super AMOLED with resolution 320x320
Qualcomm Snapdragon 400 at 1 GHz
Bluetooth 4.0, Wi-Fi, 3G
AMOLED of 2" and 360x480 pixel
Samsung Gear S2.
OS: Tizen
Bluetooth, BT4.1, Wi-Fi, NFC
sAMOLED
360 X 360 (302 ppi)
OS: Wear OS by Google™
Single-Core 1 GHz TI OMAP 3 / Qualcomm® Snapdragon™ Wear 3100
4GB internal memory, 512MB RAM / 1GB RAM + 8GB internal memory
Bluetooth 4.0 LE / Bluetooth 4.2, Wi-Fi b/g/n, NFC, GPS / GLONASS / Beidou / Galileo
LCD of 1.56" with resolution of 320x290, 205 ppi, LCD with backlighting / 1.2" circular AMOLED (390x390)
320 mAh / 355mAh
LG G Watch
OS: Android 4.3 onwards
Qualcomm Snapdragon 400 at 1.2 GHz
1.65" IPS LCD
P-OLED of 1.3" and 320x320 pixel
Sony SmartWatch 3
Quad-core ARM Cortex A7 at 1.2 GHz
Bluetooth 4.0, NFC
1.6" and 320x320 pixel
Asus ZenWatch
AMOLED touchscreen capacitive of 1.63" and 320x320 pixel
Apple S1
WiFi and GPS of iPhone
Retina with Force Touch
Up to 18 hours of autonomy
OS: iOS 14 or later
S6 64-bit dual-core
32 GB internal memory, 1GB
LTE and UMTS, Wi-Fi, Bluetooth 5.0
Retina OLED LTPO
Credit, table 2: updated and improved version of «Hardware characteristics [130]». Smartwatch. (2021, 15 February). Wikipedia, The Free Encyclopaedia. Date consulted: 13:44, July 18, 2021, from https://es.wikipedia.org/w/index.php?title=Reloj_inteligente&oldid=133248752
For any smartwatch, the framework should assume that the recommended working stage is S2 (execution of AI) and the mode M2 (connected). Dynamically, following the evolution of smartwatch technology, mode M1 (standalone) can also become a valid option in the medium term.
11.2.-Other related devices
11.2.1-Smartphones, tablets and PC's
Currently, these devices have evolved soundly at the high end of the sector. They are dependable and in general sufficient, with high technological performance and a low associated cost. Tablets, though, have not found a definitive place on the market, according to some associated metrics (Image 66): smartphones and PCs are their rivals, and in terms of technology and price tablets are probably losing out. Are tablets nowadays more a "useful fad" than a real necessity for the user? [131] However, some manufacturers such as Microsoft tend to unify their operating systems, as with Windows 10 and 11 [132], so that integration is assured across these three levels or families of devices. Everything is scalable and transversal; an all-in-one.
For any smartphone, tablet or PC, the framework should assume that the recommended working stage is S2 (execution of AI) and the mode M1 (standalone).
Image credit: «Tendencies of Internet 2021. Statistics and facts by countries». VPNMentor. URL: https://es.vpnmentor.com/blog/tendencias-de-internet-estadisticas-y-datos-en-los-estados-unidos-y-el-mundo/, Mangosta (https://mangosta.org/tendencias-2/)
11.2.2.-Automation
Talking about home automation means talking about its types of connections, for a total and real application in conjunction with 5G [125] and the IoT [126], and a structure most probably based on an «intelligent bus» [133].
All sensors and actuators are connected with a bus cable; the whole system is defined as a "bus system". Image credit: «KNX Basic knowledge of the KNX standard». V9-14. KNX.org. Accessed 22/07/2021. URL: https://www.knx.org/wAssets/docs/downloads/Marketing/Flyers/KNX-Basics/KNX-Basics_es.pdf, Mangosta (https://mangosta.org/conocimientos-basicos-2/)
Currently, the whole space of wireless communication technologies coexists: Bluetooth, WiFi, RFID, NFC and others. The connection morphologies for home automation, however, still await a decision from the industry and the associated standards, which compete among themselves in defence of different interests.
One line of work, for example Zigbee, pursues «secure communications with a low data transmission rate and maximization of the useful life of its batteries» [134]; or Z-Wave [135]: «a meshed network that uses low-energy radio waves to communicate from one device to another, allowing wireless control of home appliances and other devices, such as lighting control, security systems, thermostats, windows, locks, swimming pools and garage doors»; or Sigfox [136]: «a global network operator and developer of the 0G network, founded in 2009, which deploys wireless networks to connect low-consumption devices such as electricity meters, alarm centres or smartwatches, which need to be continuously active and sending small amounts of data».
In these technologically and extremely tumultuous times, in terms of strategies and disruptions, Bluetooth and WiFi seem to be the most secure and stable bets (as a standard and safe value of the communications market for the next 5 years), both technologically and commercially. Most current devices implement one (Bluetooth), the other (WiFi), or both, making this technology very popular among users and manufacturers.
The framework for any domotic element (other than a smartwatch, smartphone, tablet or PC) should assume that the recommended working state for domotic elements is S2 (execution of AI), available in both modes M1 (autonomous) and M2 (associated).
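As a rough sketch (not part of the original specification; all identifiers are ours), the recommended state/mode assignments described in the preceding sections can be collected in a small lookup table:

```python
# Hypothetical summary of the recommended working states and modes
# described in the text: S2 = execution of AI, M1 = autonomous,
# M2 = intertwined/associated. Device-class names are illustrative.

RECOMMENDATIONS = {
    "smartwatch":      {"state": "S2", "modes": ["M2"]},  # M1 viable mid-term
    "smartphone":      {"state": "S2", "modes": ["M1"]},
    "tablet":          {"state": "S2", "modes": ["M1"]},
    "pc":              {"state": "S2", "modes": ["M1"]},
    "domotic_element": {"state": "S2", "modes": ["M1", "M2"]},
}

def recommended(device_class: str) -> dict:
    """Look up the recommended state/mode entry for a device class."""
    return RECOMMENDATIONS[device_class]
```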
11.2.2.1-Domotic elements with Android & iOS
Smart-home applications are an evident reality. They are undeniable, and operating systems such as Android and iOS make a serious contribution. Any domestic or wearable device that we have described is capable of running these operating systems, and can and must be protected by the implementation of our framework which, as stated previously, can be installed on these technologies.
To cite some references: Amazon Alexa, Casa, SmartThings, Google Home, LIFX, Houseinhand, KNX, TaHoma by Somfy, Home Connect App, Smart Life, Philips Hue and so on.
11.2.2.2-Cross-platform JRE (Java Runtime Environment) in domotic devices
The Java Virtual Machine [85] is a cross-platform environment designed to be portable, independent of the systems and environment that surround it.
«The Java Virtual Machine can be implemented in software, hardware, a development tool or a web browser; it reads and executes precompiled bytecode that is independent of the platform. The JVM provides definitions for an instruction set, a register set, a class-file format, a stack, a garbage-collected heap and a memory area. Any implementation of the JVM approved by SUN must be capable of executing any class that complies with the specification.» [85]
This self-sufficiency is an objective of the domotic development behind the implementation of our Smart Domotic Intrusions Detector.
11.2.2.3-Description of the domotic standard KNX as a regulator framework for domotic devices
The KNX protocol standard [4] is based on the concept of an intelligent bus [133], best understood as a «central neuron» that communicates all the elements of a dwelling. This kind of device connection is undoubtedly conducive to a greater application of AI techniques. The ISO standardization (ISO/IEC 14543) [137] governs this typology. Both the capacity to apply AI techniques and the standardization constitute an ideal platform for our Smart Domotic Intrusions Detector, and for the implementation and application of the framework defined in the present research work.
These elements should be protected by the Smart Domotic Intrusions Detector: light switches • light control keyboards • movement detectors • presence detectors • window and door contacts • entrance doorbells • water, gas, electricity and heat meters • surge sensors • external and internal ambient temperature sensors • temperature sensors in hot water and heating circuits • modules to preadjust the set temperature in rooms • internal and external light sensors • wind sensors to control blinds and awnings • state or failure indicators in white goods • leakage sensors • level sensors • radio-frequency receivers in door latches • infrared receivers for remote controls • fingerprint or electronic card readers for access control [138]… plus actuators and modules.
All of them have to be protected, and the framework implemented in an intelligent IDS needs to be applied here.
KNXnet/IP in the OSI reference model. Image credit: «KNX Basic Knowledge about the KNX standard». V9-14. KNX.org. Accessed 22/07/2021. URL: https://www.knx.org/wAssets/docs/downloads/Marketing/Flyers/KNX-Basics/KNX-Basics_es.pdf, Mangosta (https://mangosta.org/estandar-knx/)
All tests carried out in the laboratory provide enough knowledge to infer that the framework can be embedded in a microcontroller of the PIC18F family [139], such as the PIC18F4550 [140] or a similar one with the KNX systematics [141], supported by a specific design of electronic circuitry that implements the needed combinational logic.
Installation on a protoboard of The Penguin System. Julio F. De la Cruz G. Accessed 22/07/2021. URL: http://3.bp.blogspot.com/-JtZH7qb-j-w/UhLyWQdHLjI/AAAAAAAAAjQ/cjoKPWOIxsw/s1600/image7923.png, Mangosta (https://mangosta.org/montaje-en-protoboard/)
The design of this domotic device is conceived as an appliance [142] that has at least the hardware requirements of a smartwatch, embedding the working state and mode inherent to it, previously defined, using encapsulated software or firmware [143].
Its working philosophy should be based on the plug-and-play technique [144], so that it can be one more element of protection, hosted among the devices of the electrical panel of any house, which normally hosts an ICP [145], that is, differential [146] and circuit-breaker switches [147].
Therefore, the domotic IDS device is conceived as an additional element of security and protection at home or in a work environment, implemented through the KNX standard [4].
V.-Conclusion
It has been demonstrated that the use of AI, in a large part of its dimensions, can be carried out with academic rigor, defined in the field of mathematics and usable in the laboratory. Moreover, the use of AI can be thought, reasoned and applied.
The framework is motivated by the continuous appearance and improvement of mobile devices in the shape of smartwatches, smartphones and other similar devices, which has simultaneously favoured a growing and unfair interest in controlling the applications at the expense of their users. Users fall prey to an abusive interference in their lives that measures and manages their Internet practices telematically, and also steals their personal data, for instance when they do exercise that requires the measurement of vital signs such as heart rate, blood pressure and oxygen level. The industry that artfully produces and manages these devices increasingly channels financial resources, as well as sciences such as artificial intelligence (AI), data management and flow strategies, towards a final result that points to the manipulation, sale and kidnapping of the lives of the people who use them. Perhaps all of this has a protector and ultimate owner: Artificial Intelligence (AI), full of obscurity and hidden internal mechanisms or "black boxes", facing naïve societies like ours, driven without knowing, probably, the final cause and effect for humanity.
12.1.-Corollary I
We can present this framework as a space for generating frameworks, analogous to a vector space that generates vector subspaces [148], with $\vec{SV}_{intrusión} = [\vec{P}x_{(1...n)}, \vec{Img}_{y_i}]_0^{3^{(n)}}$ as its homologous generator system [149], in favour of the transparency and explainability of technologies such as AI. In it, anything analogue, ambiguous, hard to define, wide in spectrum and thought, and overwhelming in dimensions can be scanned down to its most fundamental parameters, as this framework postulates, demonstrates and defends. These are the initial parameters that a child could draw even for an object unknown to him, like a squiggle, as a symbol [150] or primary idea of childhood. As he learns and improves "his parameters" of personal definition and values, that definition becomes more careful, concrete and adapted to his own reality. The same happens in this framework, at its foundations, so that artificial intelligence can be explainable, transparent and under human control. Reverse engineering can always return it to the world of Homo sapiens [151], our real world with its imperfections and realities.
12.2.-Corollary II
By inference from Corollary I, another important worry about artificial intelligence, algorithmic bias [152] (gender, cultural, racial, individual disabilities, thoughts and ideologies), can be mitigated, trying to achieve a society more equitable in its values, together with its human technological achievements.
12.3.-Corollary III
By inference from Corollaries I and II, Artificial Intelligence (AI) can be humanly educated to attain a higher good for human beings.
Why and how?
It can be educated because we can be selective in the parameters that we want to bias (like the breed of a thoroughbred animal). With them, it will be possible to train the AI to think and decide along that determined line of reasoning, in an almost genetic form, because the parameters amount to a quasi-genetic laboratory synthesis of the predefined AI's thinking, taking into account what we want to include or not during the training period.
All human behaviour, abstract thinking or entelechy can be scanned (scribbled) by this framework, according to the following algebraic expression:
\[\left(\,\Sigma_{j=1}^{j=n}\,\left(\vec{P}x_n-\vec{P}x_{(n-1)}\right)\right)-\left(\vec{P}x_1-\vec{P}x_n\right)=0\]
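One possible reading of this identity (our interpretation, not the authors' derivation) is a closed cycle of parameter vectors: summing the consecutive differences telescopes to the last vector minus the first, so adding the closing difference returns the zero vector. A minimal sketch with arbitrary integer vectors:

```python
# Illustrative check of the telescoping/closed-cycle reading of the identity,
# using plain tuples as 3-component vectors. The data is arbitrary test input.

def vsub(a, b):
    """Component-wise difference of two vectors."""
    return tuple(x - y for x, y in zip(a, b))

def vadd(a, b):
    """Component-wise sum of two vectors."""
    return tuple(x + y for x, y in zip(a, b))

P = [(1, 4, 2), (3, 0, 5), (7, 2, 1), (2, 6, 6), (0, 3, 9)]  # n = 5 vectors

telescoped = (0, 0, 0)
for j in range(1, len(P)):
    telescoped = vadd(telescoped, vsub(P[j], P[j - 1]))

# The sum of consecutive differences equals P_n - P_1 ...
assert telescoped == vsub(P[-1], P[0])

# ... so adding the closing difference P_1 - P_n yields the zero vector.
closed_cycle = vadd(telescoped, vsub(P[0], P[-1]))
```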
Contrary to the main objective of this framework, the latter, carried to its most negative and cruel extremes for humanity, can generate dangerous AI: something which nationalist social movements in Germany [153] and the theory of the "superman", the Übermensch [154], also pursued. Every science is always two-faced: it has a proper and a bad use for humanity.
12.4.-Corollary IV
Theorem of n-dimensional existentialism
Given a framework such as the one described here, the whole universe of n-dimensional possibilities can be reflected in graphical planes that represent it at a specific instant T1 for a defined and observable environment. (In an image that says more than a thousand words…)
The graphics of this framework are completely located in the linear map $f: \mathbb{N} \to \mathbb{R}^2$… But:
«[…] Taking the coordinates of each vector and its algebraic nature, the real dimension would be:
1) The group of vectors $\vec{P}x_{(1...n)}$ {Dim1} $\mathbb{N}=1$.
2) The group of vectors $\vec{Img}_{y_i}$ {Dim2} $\mathbb{R}=2$.
3) The group of vectors $\vec{P}_x$ {Dim3} $\mathbb{N}=n$.
4) The group of vectors $\vec{Img}_y$ {Dim4} $\mathbb{R}=n$. […]».
From this it is inferred that, $\vec{Img}_y$ {Dim4} $\mathbb{R}$ being n-dimensional, it can also be a theoretical universe of possibilities, similar to ours and, following the exposed model, unlimited. There, beyond $\vec{R}^3$ and our existence, it makes us shiver with doubts and scepticism. However, it is resolved in graphical planes (like our own existence in souvenir photos) by means of $\vec{Img}_{y_i}$ {Dim2} $\mathbb{R}=2$, which makes it visual and existential: earthly before our eyes. Where appropriate, compatible with «Heisenberg's Uncertainty Principle» [155].
(QED: quod erat demonstrandum.)
If we have learned anything from the history of science, it is that while the metaphysics of the Greeks [156] and other thinkers cultivated ideas and qualitative abstractions [157], scientists and mathematicians such as Galileo Galilei [158], Isaac Newton [159], Laplace [160], Leibniz [160] and other «giants» took it to the field of quantification [161], of the infinitesimally discrete [162] and measurable; to the observable mathematical existentialism [163] of Blaise Pascal.
We have also learned, with surprise, that a gifted, curious and awakened mind can suppose and intuit physical arguments and universal mathematics without needing all the information of the dataset of which an entity is composed. So did Newton's Philosophiæ Naturalis Principia Mathematica [164]: Newton gave birth to differential calculus and glimpsed the mathematical universe in the infinite, following the path of his professor, Isaac Barrow [165].
Professors and students are the hope of all human science: the way to learn from errors and the way to follow in the future. Moreover, they will train and educate our imperishable descendants, the robotic minds that we define as «Artificial Intelligence (AI)», ensuring a humanly technological tomorrow and not, on the contrary, a technologically human one, because that would be a serious problem.
VI.-Auxiliary systems and services implemented
NextCloud and Gitlab for internal and external use of this publication
VII.-Declaration of conflicts of interest
The authors state that they have no conflict of interest as of the date of this release. This publication is not subsidized by any project that could provide financing sources, nor does it have the support or sponsorship of any brand or similar. Each author represents himself or herself and acts independently. Moreover, we declare the future intention of launching a software product, defined at this URL: mangosta.org. At the same time, this domain and its web hosting serve as a tool for this publication to improve on the limited techniques of this platform.
VIII.-Statement of the research team that carried out this project: professors and students
The scientific reality about cybersecurity and AI described here, palpable and auditable in a document, was found thanks to research and education. Because science, we think, should not be a business for anything other than society and the benefit of humanity; it is a sign of the greatest praise and respect to all the scientists who made it possible to reach this point of knowledge: those minds and shoulders of «genius giants» onto which we climb so that we can all walk much faster.
Juan Antonio Lloret Egea, on behalf of all investigation team.
Keywords: #cienciaabierta, #openscience, #investigación, #research, #IA, #AI, #inteligenciaartificial #artificialintelligence, #IDS, #ciberseguridad, #cybersecurity #español #educación #education #enseñanza #skills
License: Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)
IX.-Acknowledgments
IES La Arboleda, Alcorcón, Madrid (http://www.laarboleda.es/), for contributing professors from the IT department, students of the first year of Multiplatform Application Development and students of the second year of Networked Microcomputer Systems; and for the support given by the management team of the institute. The resources they provided are available at: https://arboledalan.net/ciberseguridad/
Regional Library of Murcia, for supporting the diffusion of the activity: https://bibliotecaregional.carm.es/agenda/presentacion-biblia-de-la-ia-ai-bible-publicacion-sobre-inteligencia-artificial/
Posthumously to Sir Isaac Newton, for teaching us how to look and educating us in the way of doing it: «I do not know what I may appear to the world; but to myself I seem to have been only like a boy playing on the sea-shore, and diverting myself in now and then finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me» [166].
A Reply to this Pub
Data as the main focus of "State of the art of data science in Spanish language and its application in the field of Artificial Intelligence"
by Celia Medina Lloret, Adrián Hernández González, Diana Díaz Raboso, Jose Alejandro Orozco Leal, Ángel Manuel Pérez López, and Marta Riquelme
Published on Jun 26, 2021
www.openscience.online
According to the results, there is evidence of cultural bias against data science in the Spanish language. The outcome of the consultation, carried out on 12 April 2021, confirms that only 10 out of 23,771 datasets "speak" Spanish.
Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC-BY-NC-ND 4.0)
The Bible of AI ™ OpenScience
Ideal Transformer | Theory | Equations | Example Problems
Ahmad Faizan Transformer
In all transformers, single phase or three phase, the primary winding gets energy from the mains and the secondary winding is connected to the load(s). If the secondary circuit is turned off by a switch, there is no load on the secondary winding.
Ideal Transformer Theory
Nevertheless, the primary winding is still connected to electricity and forms a closed circuit. In this case, the primary winding behaves as a coil with a core; a current flows through it that (1) warms the winding and (2) warms the core as a result of eddy currents and hysteresis.
Eddy current is a local movement of electrons in a ferromagnetic material when placed in a magnetic field.
In a solid piece of iron or other ferromagnetic metal, there can be many local circuits of eddy current in various directions, depending on the variation of the magnetic field. This unwanted current not only warms up the core; it consumes energy and also affects the magnetic properties of the core metal.
To reduce eddy currents and their effects, the cores of transformers and motors are made of laminated metal.
Each lamination has a layer of paint or nonconductive wax on it so that it is electrically isolated from its neighbors. In this way, eddy currents are limited to those inside a single lamination.
You May Also Read: Transformer Working Principle | How Transformer Works
Figure 1 shows what a laminated metal looks like. This greatly reduces the amount of energy loss by eddy current and the resulting heat.
Figure 1 Laminated metal for transformers and motors.
Nevertheless, eddy currents cannot be entirely stopped and the associated energy loss cannot be reduced to zero.
There are other types of loss in transformers, such as hysteresis and mechanical loss (energy changed to noise). But these are relatively smaller than eddy current loss in a well-designed transformer.
The loss due to the current in the winding(s) is called copper loss, and the losses due to eddy currents, hysteresis, and so on are referred to as core loss because they occur in the transformer core; they are independent of how much current flows (but they increase as frequency goes up).
Copper loss: Amount of power loss in a transformer that corresponds to the resistance of the wire winding and it depends on the load current (the percentage of loading).
Core loss: Amount of power loss in a transformer that corresponds to the quality of design and core material and is independent of the load current.
Ideal Transformer Example
To develop the electrical relationships between the primary and secondary sides of a transformer, all losses are assumed to be zero. This simplifies the relationships and facilitates developing the pertinent equations. Such an assumed transformer, in which all the losses are neglected, is called an ideal transformer.
Ideal transformer: When in a transformer all the losses are assumed to be zero and, as a result, input power equals output power.
In an ideal transformer, the output power and the input power are equal. That is, all power received by the primary winding is delivered to the secondary winding.
In dealing with any device, including transformers, one needs to bear in mind that it is always the load that determines how much power is required.
In an ideal transformer, all the losses are assumed to be zero. As a result, the power in the primary side is equal to the secondary side power.
Voltage Relationship in Ideal Transformer
Referring to Figure 2, the primary winding is connected to a supply voltage V1 and the secondary voltage V2 is applied to a load.
In general, a load can be resistive, inductive, capacitive, or a combination of these. A current I2 will flow in the secondary winding. On the basis of this current, the primary winding carries a current I1.
Figure 2 Number of turns N, voltage V and current I in the primary and secondary circuits of a transformer.
The relationship between the primary and secondary voltages is based on the ratio of the number of turns in the primary and secondary windings.
The turns ratio in a transformer is the ratio of turns in the primary winding to that of the secondary winding and is denoted by a.
Turns ratio: Ratio of the number of turns in the primary winding to the number of turns in the secondary winding of a transformer.
The following equation is the fundamental relationship for an ideal transformer:
\[\begin{matrix} \frac{{{V}_{1}}}{{{V}_{2}}}=\frac{{{N}_{1}}}{{{N}_{2}}}=a & {} & \left( 1 \right) \\\end{matrix}\]
Where V1 and V2 are the primary and secondary voltages, respectively; N1 and N2 are the number of turns in the primary and secondary windings, and a is the turns ratio.
For most transformers, the turns ratio is fixed and cannot change.
When a > 1, the secondary voltage is smaller than the primary voltage, thus a step-down transformer.
When a < 1, the secondary voltage is larger than the primary voltage and the transformer is a step-up transformer.
Equation 1 shows that a transformer with a given turns ratio, for instance, 10, can divide its input voltage by 10 at the secondary winding. This transformer can theoretically also multiply its input voltage by 10.
For instance, if the side with the lower number of turns is connected to 120 V, the secondary winding has 1200 V at its terminals. This implies an important and serious issue: in working with a transformer, special care must be taken for its correct connection to the source and loads. The wrong connection can easily lead to damage and injuries.
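Equation 1 can be sketched numerically; the function name and figures below are ours, chosen to mirror the 10:1 discussion above:

```python
def secondary_voltage(v1: float, n1: int, n2: int) -> float:
    """V2 = V1 * N2 / N1, from the ideal-transformer relation V1/V2 = N1/N2 = a."""
    return v1 * n2 / n1

# Step-down connection, a = 10: 120 V in gives 12 V out.
v_step_down = secondary_voltage(120, 1000, 100)

# The same windings driven the other way (a = 1/10): 120 V in gives 1200 V out.
v_step_up = secondary_voltage(120, 100, 1000)
```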
In a step-up transformer, the secondary voltage is higher than the primary voltage. In a step-down transformer, it is the reverse.
Note that although theoretically one can connect a transformer for stepping up or stepping down a given voltage, and there are transformers designed to work both ways, in practice design considerations such as wire thickness and transformer power rating limit how a given transformer can be used.
A transformer cannot necessarily be connected to any arbitrary voltage or in an arbitrary fashion. In working with a transformer, special care must be taken for correct connection of its primary and secondary to the outside circuits.
Ideal Transformer Example Problem 1
The primary winding of a transformer has 1000 turns. If this transformer is to be used for changing 120 V input to 24 V output, how many turns are needed for the secondary winding?
It follows from Equation 1 that the number of secondary winding turns is
\[{{N}_{2}}=\frac{{{V}_{2}}}{{{V}_{1}}}\times {{N}_{1}}=\frac{24}{120}\times 1000=200\]
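The same computation as a one-line check (variable and function names are ours):

```python
def secondary_turns(v1: float, v2: float, n1: int) -> float:
    """N2 = (V2 / V1) * N1, rearranged from the ideal relation V1/V2 = N1/N2."""
    return v2 / v1 * n1

# 120 V to 24 V with 1000 primary turns requires 200 secondary turns.
n2 = secondary_turns(120, 24, 1000)
```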
In using transformers, often the turns ratio is mentioned in the form of a:1 (a to 1) or 1:a (1 to a). For example, in a 2:1 transformer the primary voltage is twice the secondary voltage.
In conjunction with Equation 1, one can understand that in a transformer the ratio
\[\frac{V}{N}=\frac{{{V}_{1}}}{{{N}_{1}}}=\frac{{{V}_{2}}}{{{N}_{2}}}\]
is a constant. This constant is called volts per turn and determines how many volts there are per each turn of either the primary or the secondary winding. The use of this constant is in the design stage of a transformer.
Volts per turn: Number indicating the value of volts for each turn of winding in a transformer. This value can be obtained from either the primary side values or the secondary values by dividing the voltage by the number of turns.
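The constancy of V/N can be verified with the figures from Example Problem 1 (a sketch; names are ours):

```python
def volts_per_turn(v: float, n: int) -> float:
    """V/N, the same constant on both sides of an ideal transformer."""
    return v / n

primary_vpt = volts_per_turn(120, 1000)   # primary side of Example Problem 1
secondary_vpt = volts_per_turn(24, 200)   # secondary side, same transformer
```

Either side gives the same value, which is why a designer can work with volts per turn before fixing the exact turn counts.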
Current Relationship in Ideal Transformer
The relationship between the primary and secondary currents for an ideal transformer is based on the power relationship; that is, the power in the primary side is equal to the power consumed in the secondary side.
Always, in AC electricity the consumed power refers to the apparent power, the product of voltage and current. Writing
\[S={{V}_{1}}{{I}_{1}}={{V}_{2}}{{I}_{2}}\]

which leads to

\[\begin{matrix} \frac{{{V}_{1}}}{{{V}_{2}}}=\frac{{{I}_{2}}}{{{I}_{1}}}=\frac{{{N}_{1}}}{{{N}_{2}}}=a & {} & \left( 2 \right) \\\end{matrix}\]
This equation also implies that in a transformer the side with the higher voltage has a smaller current and vice versa.
The secondary current I2 can always be found from the secondary voltage and the load impedance connected to the secondary. Having the current in the secondary, then the current I1 in the primary can be found. This is for an ideal transformer, and I1 represents the current in the primary due to the load only.
If there is no load connected to the secondary side, this current is zero, too. In other words, in an ideal transformer, the current I1 as obtained from Equation 2 does not include the no-load current and the core loss currents (due to eddy currents in the core, mentioned earlier) that exist in a real transformer.
Because the currents in the secondary and primary windings are not the same, the wire sizes for the two windings are not the same.
The current in the winding with the lower number of turns is always higher than the current in the other winding. Thus, the lower voltage (higher current) side always has a thicker wire than the higher voltage (lower current) side. This can be a good way to judge the connections if in doubt.
For a step-down transformer, the secondary current is higher and the winding wire is thicker than that of the primary winding.
Ideal Transformer Example Problem 2
A step-down transformer is used to change 220 to 110 V. If a resistive load consuming 500 W is connected to the 110 V side, what is the current due to this load in the secondary and primary windings?
Because this is a 2:1 transformer, current in the primary winding is half of the current in the secondary winding. From the power of the load and the voltage in the secondary side, the current I2 can be found
\[{{I}_{2}}=\frac{P}{{{V}_{2}}}=\frac{500}{110}=4.54A\]
From which the current in the primary side follows
${{I}_{1}}=0.5\times 4.54=2.27A$
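The same numbers as a short script, combining S = V·I with Equation 2 (function names are ours):

```python
def load_current(power_w: float, v2: float) -> float:
    """I2 = P / V2 for a resistive load on the secondary side."""
    return power_w / v2

def primary_current(i2: float, a: float) -> float:
    """I1 = I2 / a, from the ideal relation I2/I1 = N1/N2 = a."""
    return i2 / a

i2 = load_current(500, 110)   # secondary current, about 4.54 A
i1 = primary_current(i2, 2)   # primary current of the 2:1 transformer, about 2.27 A
```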
November 2017, 37(11): 5693-5706. doi: 10.3934/dcds.2017246
2-manifolds and inverse limits of set-valued functions on intervals
Sina Greenwood 1,, and Rolf Suabedissen 2,
University of Auckland, Private Bag 92019, Auckland, New Zealand
University of Oxford, Andrew Wiles Building, Radcliffe Observatory Quarter, Woodstock Road, Oxford, OX2 6GG, United Kingdom
* Corresponding author: Sina Greenwood
Received April 2017 Published July 2017
Suppose for each $n\in\mathbb{N}$, $f_n \colon [0,1] \to 2^{[0,1]}$ is a function whose graph $\Gamma(f_n) = \left\lbrace (x,y) \in [0,1]^2 \colon y \in f_n(x)\right\rbrace$ is closed in $[0,1]^2$ (here $2^{[0,1]}$ is the space of non-empty closed subsets of $[0,1]$). We show that the generalized inverse limit $\varprojlim (f_n) = \left\lbrace (x_n) \in [0,1]^\mathbb{N} \colon \forall n \in \mathbb{N},\ x_n \in f_n(x_{n+1})\right\rbrace$ of such a sequence of functions cannot be an arbitrary continuum, answering a long-standing open problem in the study of generalized inverse limits. In particular we show that if such an inverse limit is a 2-manifold then it is a torus and hence it is impossible to obtain a sphere.
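As a purely illustrative finite sketch (our construction, not from the paper): if each coordinate ranges over a finite grid and each bonding function is given by its finite graph, the tuples of a length-$k$ approximation to the inverse limit can be enumerated directly:

```python
# Finite-approximation sketch of a generalized inverse limit. Each graph is a
# set of pairs (x, y) standing for "y in f_n(x)"; a length-k tuple
# (x_1, ..., x_k) is admissible when x_n in f_n(x_{n+1}) for n = 1..k-1.
# The names and the discretization are illustrative, not from the paper.

def partial_inverse_limit(graphs, points, k):
    """Enumerate admissible k-tuples over a finite point set."""
    seqs = [(x,) for x in points]
    for n in range(k - 1):
        graph = graphs[n]
        # extend each sequence by a new last coordinate x_{n+1}
        seqs = [s + (x,) for s in seqs for x in points if (x, s[-1]) in graph]
    return seqs

# With the identity relation as every bonding graph, only constant
# sequences survive.
identity = {(0, 0), (1, 1)}
constant_seqs = partial_inverse_limit([identity, identity], [0, 1], 3)
```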
Keywords: Generalized inverse limit, upper semicontinuous function, continuum, 2-manifold, closed relation.
Mathematics Subject Classification: Primary: 54C08, 54E45; Secondary: 54F15, 54F65.
Citation: Sina Greenwood, Rolf Suabedissen. 2-manifolds and inverse limits of set-valued functions on intervals. Discrete & Continuous Dynamical Systems - A, 2017, 37 (11) : 5693-5706. doi: 10.3934/dcds.2017246
Empirical risk minimization: probabilistic complexity and stepsize strategy
Chin Pang Ho (ORCID: orcid.org/0000-0002-2143-978X) & Panos Parpas
Computational Optimization and Applications volume 73, pages 387–410 (2019)
Empirical risk minimization is a special class of problem in standard convex optimization. When a first order method is used, the Lipschitz constant of the empirical risk plays a crucial role in the convergence analysis and stepsize strategies for these problems. We derive probabilistic bounds for such Lipschitz constants using random matrix theory. We show that, on average, the Lipschitz constant is bounded by the ratio of the dimension of the problem to the amount of training data. We use our results to develop a new stepsize strategy for first order methods. The proposed algorithm, the Probabilistic Upper-bound Guided stepsize strategy, outperforms regular stepsize strategies and comes with a strong theoretical guarantee on its performance.
Empirical risk minimization (ERM) is one of the most powerful tools in applied statistics, and is regarded as the canonical approach to regression analysis. In the context of machine learning and big data analytics, various important problems such as support vector machines, (regularized) linear regression, and logistic regression can be cast as ERM problems, see e.g. [17]. In an ERM problem, a training set with m instances, \(\{(\mathbf {a}_i,b_i)\}_{i=1}^m\), is given, where \(\mathbf {a}_i \in \mathbb {R}^n\) is an input and \(b_i \in \mathbb {R}\) is the corresponding output, for \(i=1,2,\dots , m\). The ERM problem is then defined as the following convex optimization problem,
$$\begin{aligned} \min _{\mathbf {x}\in \mathbb {R}^n} \left\{ F(\mathbf {x}) \triangleq \frac{1}{m} \sum _{i=1}^m \phi _i(\mathbf {a}_i^T \mathbf {x}) + g(\mathbf {x})\right\} , \end{aligned}$$
where each loss function \(\phi _i\) is convex with a Lipschitz continuous gradient, and the regularizer \(g:\mathbb {R}^n \rightarrow \mathbb {R}\) is a continuous convex function which is possibly nonsmooth. Two common loss functions are
Quadratic loss function: \(\phi _i(x) = \dfrac{1}{2}(x-b_i)^2 .\)
Logistic loss function: \(\phi _i(x) = \log (1+\exp (-xb_i)) .\)
One important example of g is the scaled 1-norm \(\omega \Vert \mathbf {x}\Vert _1\) with a scaling factor \(\omega \in \mathbb {R}^+\). This particular case is known as \(\ell _1\) regularization, and it has various applications in statistics [3], machine learning [18], signal processing [6], etc. The regularizer g acts as an extra penalty function to regularize the solution of (1). \(\ell _1\) regularization encourages sparse solutions, i.e. it favors solutions \(\mathbf x\) with few non-zero elements. This phenomenon can be explained by the fact that the \(\ell _1\) norm is the tightest convex relaxation of the \(\ell _0\) norm, i.e. the number of non-zero elements of \(\mathbf x\) [5].
In general, if the regularizer g is nonsmooth, subgradient methods are used to solve (1). However, subgradient methods are not advisable if g is simple enough, as one can achieve higher efficiency by generalizing existing algorithms for unconstrained differentiable convex programs. Much research has been undertaken to efficiently solve ERM problems with simple g's. Instead of assuming the objective function is smooth and continuously differentiable, these methods aim to solve problems of the following form
$$\begin{aligned} \min _{\mathbf {x} \in \mathbb {R}^n} \{F(\mathbf {x}) \triangleq f(\mathbf {x})+g(\mathbf {x})\}, \end{aligned}$$
where \(f:\mathbb {R}^n \rightarrow \mathbb {R}\) is a convex function with an L-Lipschitz continuous gradient, and \(g:\mathbb {R}^n \rightarrow \mathbb {R}\) is a continuous convex function which is nonsmooth but simple. By simple we mean that a proximal projection step can be performed either in closed form or at least at low computational cost. Norms, and the \(\ell _1\) norm in particular, satisfy this property. A function f is said to have an L-Lipschitz continuous gradient if
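As an illustration of what "simple" means here, for \(g(\mathbf {x}) = \omega \Vert \mathbf {x}\Vert _1\) the proximal step reduces to elementwise soft-thresholding. A minimal Python sketch (ours, not part of the paper):

```python
import numpy as np

def prox_l1(x, step, omega):
    """Proximal operator of g(x) = omega * ||x||_1 with stepsize `step`:
    argmin_z  omega*||z||_1 + (1/(2*step)) * ||z - x||^2,
    solved elementwise by soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - step * omega, 0.0)

# Entries with magnitude below step*omega become exactly zero, which is
# the mechanism behind the sparsity of l1-regularized solutions.
print(prox_l1(np.array([3.0, -0.5, 0.2, -2.0]), step=1.0, omega=1.0))
```

The closed form is what makes the proximal step computationally inexpensive compared with a generic subproblem solve.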
$$\begin{aligned} \Vert \nabla f(\mathbf {x}) - \nabla f(\mathbf {y}) \Vert \le L \Vert \mathbf {x} - \mathbf {y} \Vert , \quad \forall \mathbf {x},\mathbf {y}\in \mathbb {R}^n. \end{aligned}$$
For the purpose of this paper, we denote the matrix \(\mathbf {A} \in \mathbb {R}^{m \times n}\) to be a dataset such that the \(i^{\text {th}}\) row of \(\mathbf {A}\) is \(\mathbf {a}_i^T\), and so in the case of ERM problems,
$$\begin{aligned} f(\mathbf {x})= \frac{1}{m} \sum _{i=1}^m \phi _i(\mathbf {a}_i^T \mathbf {x}) = \frac{1}{m} \sum _{i=1}^m \phi _i(\mathbf {e}_i^T \mathbf {A} \mathbf {x}), \end{aligned}$$
where \(\mathbf {e}_i \in \mathbb {R}^m\) has 1 in its \(i^{\text {th}}\) component and 0's elsewhere. f is called the empirical risk in ERM. We assume that each \(\phi _i\) has a \(\gamma _i\)-Lipschitz continuous gradient and
$$\begin{aligned} \gamma \triangleq \max \{\gamma _1,\gamma _2,\dots , \gamma _m \}. \end{aligned}$$
Many algorithms [1, 4, 9, 13, 20] have been developed to solve (1) and (2). One famous example is the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA) [1], which is a generalization of the optimal method proposed by Nesterov [10] for unconstrained differentiable convex programs. FISTA, with backtracking stepsize strategy, is known to converge according to the following rate,
$$\begin{aligned} F(\mathbf {x}_k)-F(\mathbf {x}_{\star }) \le \frac{2\eta L\Vert \mathbf {x}_0 - \mathbf {x}_{\star } \Vert ^2}{(k+1)^2}, \end{aligned}$$
where \(\mathbf {x}_{\star }\) is a solution of (2), and \(\eta \) is the parameter which is used in the backtracking stepsize strategy. The convergence result in (5) contains three key components: the distance between the initial guess and the solution \(\Vert \mathbf {x}_0 - \mathbf {x}_{\star } \Vert \), the number of iterations k, and the Lipschitz constant L. While it is clear that the first two components are important to explain the convergence behavior, the Lipschitz constant, L, is relatively mysterious.
The appearance of L in (5) is due to algorithm design. In each iteration, one would have to choose a constant \(\tilde{L}\) to compute the stepsize that is proportional to \(1/\tilde{L}\), and \(\tilde{L}\) has to be large enough to satisfy the properties of the Lipschitz constant locally [1, 13]. Since the global Lipschitz constant condition (3) is a more restrictive condition, the Lipschitz constant L always satisfies the requirement of \(\tilde{L}\), and so L is used in convergence analysis. We emphasize that the above requirement of \(\tilde{L}\) is not unique for FISTA. For most first order methods that solve (2), L also appears in their convergence rates for the same reason.
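To make the role of \(\tilde{L}\) concrete, the following sketch implements FISTA with a backtracking stepsize for the \(\ell _1\)-regularized quadratic loss. The structure follows [1], but the function names and the specific backtracking test are our own simplification:

```python
import numpy as np

def fista_lasso(A, b, omega, L0=1.0, eta=2.0, iters=200):
    """FISTA with backtracking for min_x (1/(2m))*||Ax - b||^2 + omega*||x||_1.
    L0 is the initial guess of the Lipschitz constant; backtracking multiplies
    the working estimate Lt by eta until the local upper-bound condition holds
    (this is the requirement on L-tilde discussed in the text)."""
    m, n = A.shape
    f = lambda x: 0.5 / m * np.linalg.norm(A @ x - b) ** 2
    grad = lambda x: A.T @ (A @ x - b) / m
    prox = lambda x, s: np.sign(x) * np.maximum(np.abs(x) - s * omega, 0.0)
    x = np.zeros(n)
    y = x.copy()
    t, Lt = 1.0, L0
    for _ in range(iters):
        g_y = grad(y)
        while True:  # backtracking: enlarge Lt until the quadratic model is valid
            p = prox(y - g_y / Lt, 1.0 / Lt)
            d = p - y
            if f(p) <= f(y) + g_y @ d + 0.5 * Lt * (d @ d):
                break
            Lt *= eta
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = p + ((t - 1.0) / t_next) * (p - x)
        x, t = p, t_next
    return x
```

Each failed backtracking test costs an extra evaluation of f, which is exactly the trade-off described in the text: a small \(\tilde{L}\) gives large steps but may require more function evaluations.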
Despite L being an important quantity in both convergence analysis and stepsize strategy, it is usually unknown, and its magnitude can be arbitrary for a general nonlinear function; one could artificially construct a low-dimensional function with a large Lipschitz constant, and a high-dimensional function with a small Lipschitz constant.
Therefore, L is often treated as a constant [10, 11] that is independent of the dimensions of the problem, and so the convergence result shown in (5) is considered to be "dimension-free" because both \(\Vert \mathbf {x}_0 - \mathbf {x}_{\star } \Vert \) and k are independent of the dimension of the problem. Dimension-free convergence shows that for certain types of optimization algorithms, the number of iterations required to achieve a certain accuracy is independent of the dimension of the model. For large scale optimization models that appear in machine learning and big data applications, algorithms with dimension-free convergence are extremely attractive [1, 2, 16].
On the other hand, since L is considered to be an arbitrary constant, stepsize strategies for first order methods were developed independently of the knowledge of L. As we will show later, adaptive strategies that try to use a small \(\tilde{L}\) (large stepsize) require extra function evaluations. If one tries to eliminate the extra function evaluations, then \(\tilde{L}\) has to be sufficiently large, and thus the stepsize will be small. This trade-off is due to the fact that L is unknown.
In this paper, we take the first steps to show that knowledge of L can be obtained in the case of ERM because of its statistical properties. For the ERM problem, it is known that the Lipschitz constant is closely related to \(\Vert \mathbf {A}\Vert \) [1, 14], and so understanding the properties of \(\Vert \mathbf {A}\Vert \) is the goal of this paper. If \(\mathbf {A}\) were arbitrary, then \(\Vert \mathbf {A}\Vert \) would also be arbitrary and analyzing \(\Vert \mathbf {A}\Vert \) would be impossible. However, for ERM problems that appear in practice, \(\mathbf {A}\) is structured. Since \(\mathbf {A}\) is typically constructed from a dataset, it is natural to assume that the rows of \(\mathbf {A}\) are independent samples of some random variables. This particular structure allows us to consider \(\mathbf {A}\) as a non-arbitrary but random matrix. We are therefore justified in applying techniques from random matrix theory to derive statistical bounds for the Lipschitz constant.
The contribution of this paper is twofold:
We obtain the average/probabilistic complexity bounds which provide better understanding of how the dimension, size of training set, and correlation affect the computational complexity. In particular, we show that in the case of ERM, the complexity is not "dimension-free".
The derived statistical bounds can be computed/estimated at almost no cost, which is an attractive property for algorithm design. We develop a novel stepsize strategy called the Probabilistic Upper-bound Guided stepsize strategy (PUG). We show that PUG may save unnecessary function evaluations by choosing \(\tilde{L}\) adaptively. Promising numerical results are provided at the end of this paper.
Much research on bounding extreme singular values using random matrix theory has been undertaken in recent years, e.g. see [8, 15, 19]. However, we would like to emphasize that developing random matrix theory is not our objective. Instead, we consider this topic a new and important application of random matrix theory. To the best of our knowledge, no similar work has been done on understanding how the statistics of the training set affect the Lipschitz constant, computational complexity, and stepsize.
This paper studies the Lipschitz constant L of the empirical risk f given in (4). In order to satisfy condition (3), one could select an arbitrarily large L; however, this would create a looser complexity bound (see e.g. (5)). Moreover, L also plays a big role in the stepsize strategies of first order algorithms. In many cases, such as FISTA, algorithms use a stepsize that is proportional to 1 / L. Therefore, a smaller L is always preferable: it does not only imply lower computational complexity, but also allows a larger stepsize. While the smallest L that satisfies (3) is generally very difficult to compute, in this section we estimate upper and lower bounds on L using the dataset \(\mathbf {A}\).
Notice that the Lipschitz constant condition (3) is equivalent to the following condition.
$$\begin{aligned} f(\mathbf {y}) \le f(\mathbf {x}) + \nabla f(\mathbf {x})^T(\mathbf {y}-\mathbf {x}) + \frac{L}{2} \Vert \mathbf {y}-\mathbf {x} \Vert ^2, \quad \forall \mathbf {x},\mathbf {y}\in \mathbb {R}^n. \end{aligned}$$
Therefore, an L that satisfies (6) also satisfies (3), and vice versa.
Proposition 1
Suppose f is of the form (4), then L satisfies the Lipschitz constant condition (6) with
$$\begin{aligned} L\le & {} \Bigg \Vert \text {Diag}\left( \sqrt{\frac{\gamma _1}{m}},\ldots ,\sqrt{\frac{\gamma _m}{m}}\right) \mathbf {A} \Bigg \Vert ^2\\\le & {} \Bigg \Vert \text {Diag}\left( \sqrt{\frac{\gamma _1}{m}},\ldots ,\sqrt{\frac{\gamma _m}{m}}\right) \Bigg \Vert ^2 \Vert \mathbf {A}\Vert ^2 \le \frac{\gamma }{m} \Vert \mathbf {A}\Vert ^2 . \end{aligned}$$
See Proposition 2.1 in [14]. \(\square \)
Proposition 1 provides an upper bound for L, where \(\gamma \) is the maximum Lipschitz constant of the loss functions, and it is usually known or easy to compute. For example, it is known that \(\gamma = 1\) for quadratic loss functions, and \(\gamma = \max _i b_i^2/4\) for logistic loss functions.
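The value \(\gamma = \max _i b_i^2/4\) for the logistic loss comes from bounding its second derivative, \(\phi _i''(x) = b_i^2\,\sigma (x b_i)\sigma (-x b_i) \le b_i^2/4\), where \(\sigma \) is the sigmoid function. A quick numerical sanity check (ours):

```python
import numpy as np

def logistic_curvature(x, b):
    """Second derivative of phi(x) = log(1 + exp(-x*b))."""
    s = 1.0 / (1.0 + np.exp(x * b))  # sigma(-x*b)
    return b ** 2 * s * (1.0 - s)

b = 3.0
# The curvature is maximized at x = 0, where it equals b^2/4.
print(logistic_curvature(0.0, b))  # -> 2.25 = b^2/4
```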
This upper bound on L is tight for the class of ERM problems, as can be seen from the example of least squares, where we have
$$\begin{aligned} L = \frac{\gamma }{m}\Vert \mathbf {A}\Vert ^2 = \frac{1}{m}\Vert \mathbf {A}\Vert ^2. \end{aligned}$$
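This identity is easy to verify numerically: for the quadratic loss, \(f(\mathbf {x}) = \frac{1}{2m}\Vert \mathbf {A}\mathbf {x}-\mathbf {b}\Vert ^2\) has the constant Hessian \(\mathbf {A}^T\mathbf {A}/m\), whose largest eigenvalue is exactly \(\Vert \mathbf {A}\Vert ^2/m\). A small check (ours):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 30, 8
A = rng.standard_normal((m, n))

# Smallest valid Lipschitz constant of the least-squares empirical risk:
L_upper = np.linalg.norm(A, 2) ** 2 / m            # (gamma/m)*||A||^2 with gamma = 1
L_hessian = np.linalg.eigvalsh(A.T @ A / m).max()  # lambda_max of the Hessian
print(L_upper, L_hessian)  # the two values coincide
```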
In order to derive the lower bound of L, we need the following assumption.
Assumption 2

There exists a constant \(\tau > 0\) such that
$$\begin{aligned} \phi _i(x) + \phi _i'(x)(y-x) + \frac{\tau }{2} \vert y-x \vert ^2 \le \phi _i(y) , \quad \forall x,y\in \mathbb {R} , \end{aligned}$$
for \(i=1,2,\dots , m\).
The above assumption requires each loss function \(\phi _i\) to be strongly convex, which is not restrictive in practical settings. In particular, the quadratic loss function satisfies Assumption 2, and the logistic loss function satisfies Assumption 2 on any bounded box \([-b,b]\) with \(b\in \mathbb {R}^{+}\). With the above assumption, we derive the lower bound of L using \(\mathbf {A}\).
Proposition 3

Suppose f is of the form (4) with \(\phi _i\) satisfying Assumption 2 for \(i=1,2,\dots , m\), then L satisfies the Lipschitz constant condition (6) with
$$\begin{aligned} \frac{\tau \lambda _{\min }( \mathbf {A}^T \mathbf {A})}{m} \le L . \end{aligned}$$
By Assumption 2, for \(i=1,2,\dots , m\),
$$\begin{aligned} \phi _i(\mathbf {e}_i^T \mathbf {A} \mathbf {y}) \ge \phi _i(\mathbf {e}_i^T \mathbf {A} \mathbf {x}) + \phi _i'(\mathbf {e}_i^T \mathbf {A} \mathbf {x}) (\mathbf {e}_i^T \mathbf {A} \mathbf {y}-\mathbf {e}_i^T \mathbf {A} \mathbf {x}) + \frac{\tau }{2} \vert \mathbf {e}_i^T \mathbf {A} \mathbf {y}-\mathbf {e}_i^T \mathbf {A} \mathbf {x} \vert ^2. \end{aligned}$$
$$\begin{aligned} f(\mathbf {y})\ge & {} \frac{1}{m}\sum _{i=1}^m \left( \phi _i(\mathbf {e}_i^T \mathbf {A} \mathbf {x}) + \phi _i'(\mathbf {e}_i^T \mathbf {A} \mathbf {x}) (\mathbf {e}_i^T \mathbf {A} \mathbf {y}-\mathbf {e}_i^T \mathbf {A} \mathbf {x}) + \frac{\tau }{2} \vert \mathbf {e}_i^T \mathbf {A} \mathbf {y}-\mathbf {e}_i^T \mathbf {A} \mathbf {x} \vert ^2 \right) ,\\= & {} f(\mathbf {x}) + \frac{1}{m}\sum _{i=1}^m \left( \mathbf {e}_i^T \mathbf {A} \phi _i'(\mathbf {e}_i^T \mathbf {A} \mathbf {x}) ( \mathbf {y}- \mathbf {x}) + \frac{\tau }{2} \vert \mathbf {e}_i^T \mathbf {A} \mathbf {y}-\mathbf {e}_i^T \mathbf {A} \mathbf {x} \vert ^2 \right) ,\\= & {} f(\mathbf {x}) + \langle \nabla f(\mathbf {x}) , \mathbf {y}- \mathbf {x} \rangle + \frac{\tau }{2m} \Vert \mathbf {A} \mathbf {y}- \mathbf {A} \mathbf {x} \Vert ^2 ,\\\ge & {} f(\mathbf {x}) + \langle \nabla f(\mathbf {x}) , \mathbf {y}- \mathbf {x} \rangle + \frac{\tau \lambda _{\min }( \mathbf {A}^T \mathbf {A})}{2m} \Vert \mathbf {y}- \mathbf {x} \Vert ^2 . \end{aligned}$$
From Propositions 1 and 3, we bound L using the largest and smallest eigenvalues of \(\mathbf {A}^T\mathbf {A}\). Even though \(\mathbf {A}\) can be completely different for different datasets, the statistical properties of \( \mathbf {A} \) can be obtained via random matrix theory.
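For least squares (\(\tau = \gamma = 1\)), the two propositions sandwich every realized gradient-difference ratio \(\Vert \nabla f(\mathbf {x})-\nabla f(\mathbf {y})\Vert / \Vert \mathbf {x}-\mathbf {y}\Vert \) between \(\lambda _{\min }(\mathbf {A}^T\mathbf {A})/m\) and \(\lambda _{\max }(\mathbf {A}^T\mathbf {A})/m\). A numerical illustration (ours):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 40, 6
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
grad = lambda x: A.T @ (A @ x - b) / m  # gradient of (1/(2m))*||Ax - b||^2

eigs = np.linalg.eigvalsh(A.T @ A)
lo, hi = eigs.min() / m, eigs.max() / m  # Proposition 3 and Proposition 1 bounds
ratios = []
for _ in range(100):
    x, y = rng.standard_normal(n), rng.standard_normal(n)
    ratios.append(np.linalg.norm(grad(x) - grad(y)) / np.linalg.norm(x - y))
print(lo, min(ratios), max(ratios), hi)
```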
Complexity analysis using random matrix theory
In this section, we will study the statistical properties of \( \Vert \mathbf {A}\Vert ^2= \Vert \mathbf {A}^T\mathbf {A}\Vert = \lambda _{\max }(\mathbf {A}^T\mathbf {A})\) as well as \(\lambda _{\min }(\mathbf {A}^T\mathbf {A})\). Recall that \(\mathbf {A}\) is an \(m \times n\) matrix containing m observations, and each observation contains n measurements which are independent samples from n random variables, i.e. we assume the rows of the matrix \(\mathbf {A}\) are samples from a vector of n random variables \({\varvec{\xi }}^T=(\xi _1,\xi _2,\ldots ,\xi _n)\) with covariance matrix \({\varvec{\Sigma }} = \mathbb {E}\left[ {\varvec{\xi }} {\varvec{\xi }}^T \right] \). To simplify the analysis, we assume, without loss of generality, that the observations are normalized, so that all the random variables have mean zero and unit variance. Therefore, \(\mathbb {E}[\xi _i] = 0\) for \(i = 1,2,\ldots , n\), and the diagonal elements of \({\varvec{\Sigma }}\) are all 1's. This assumption simplifies the arguments and the analysis of this section but it is not necessary. The results from this section could be generalized without the above assumption, but doing so gives no further insight for the purposes of this section. In particular, this assumption will be dropped for the proposed stepsize strategy PUG, and so PUG is valid for all datasets used in practice.
Statistical bounds
We will derive both the upper and lower bounds for the average \(\Vert \mathbf {A} \Vert ^2\), and show that the average \(\Vert \mathbf {A} \Vert ^2\) increases nearly linearly in both m and n. The main tools for the proofs below can be found in [19].
Lower bounds
The following Lemma follows from Jensen's inequality and plays a fundamental role on what is to follow.
Lemma 4
For a sequence \(\{ \mathbf {Q}_k : k = 1,2,\ldots ,m \}\) of random matrices,
$$\begin{aligned} \lambda _{\max } \left( \sum _k \mathbb {E}[\mathbf {Q}_k] \right) \le \mathbb {E} \left[ \lambda _{\max } \left( \sum _k \mathbf {Q}_k \right) \right] . \end{aligned}$$
For details, see [19].
With Lemma 4, we can derive a lower bound on the expected \(\Vert \mathbf {A}^T\mathbf {A}\Vert \).
We will start by proving the lower bound in the general setting, where the random variables are correlated with general covariance matrix \({\varvec{\Sigma }}\); then, we will add assumptions on \({\varvec{\Sigma }}\) to derive lower bounds in different cases.
Theorem 5
Let \(\mathbf {A}\) be an \(m \times n\) random matrix in which its rows are independent samples of some random variables \({\varvec{\xi }}^T=(\xi _1,\xi _2,\ldots ,\xi _n)\) with \(\mathbb {E}[\xi _i] = 0\) for \(i = 1,2,\ldots , n\), and covariance matrix \({\varvec{\Sigma }}\). Denote \(\mu _{\max } = \lambda _{\max }( {\varvec{\Sigma }} )\) then
$$\begin{aligned} m \mu _{\max } =m \lambda _{\max } \left( {\varvec{\Sigma }} \right) \le \mathbb {E} \left[ \Vert \mathbf {A} \Vert ^2 \right] . \end{aligned}$$
In particular, if \(\xi _1,\xi _2,\ldots ,\xi _n\) are some random variables with zero mean and unit variance, then
$$\begin{aligned} \max \{m \mu _{\max } , n\} \le \mathbb {E} \left[ \Vert \mathbf {A} \Vert ^2 \right] . \end{aligned}$$
We first prove (7). Denote \(\mathbf {a}_i^T\) as the \(i^{\text {th}}\) row of \(\mathbf {A}\). We can rewrite \(\mathbf {A}^T\mathbf {A}\) as
$$\begin{aligned} \mathbf {A}^T \mathbf {A} = \sum _{k=1}^m \mathbf {a}_k\mathbf {a}_k^T, \end{aligned}$$
where the \(\mathbf {a}_k\mathbf {a}_k^T\)'s are independent random matrices with \(\mathbb {E} \left[ \mathbf {a}_k\mathbf {a}_k^T\right] = {\varvec{\Sigma }}\). Therefore,
$$\begin{aligned} \mathbb {E} \left[ \lambda _{\max } \left( \mathbf {A}^T \mathbf {A} \right) \right]= & {} \mathbb {E} \left[ \lambda _{\max } \left( \sum _{k=1}^m \mathbf {a}_k\mathbf {a}_k^T \right) \right] \\\ge & {} \lambda _{\max } \left( \sum _{k=1}^m \mathbb {E} \left[ \mathbf {a}_k\mathbf {a}_k^T \right] \right) = m \lambda _{\max } \left( {\varvec{\Sigma }} \right) . \end{aligned}$$
In order to prove (8), we use the fact that
$$\begin{aligned} \mathbb {E}\left[ \Vert \mathbf {A} \Vert ^2\right] = \mathbb {E}\left[ \Vert \mathbf {A}^T \Vert ^2\right] = \mathbb {E}\left[ \Vert \mathbf {A}\mathbf {A}^T \Vert \right] \ge \Vert \mathbb {E}\left[ \mathbf {A}\mathbf {A}^T \right] \Vert , \end{aligned}$$
where the last inequality is obtained by applying Jensen's inequality. Therefore, we can write \(\mathbf {A}\mathbf {A}^T\) as
$$\begin{aligned} \mathbf {A}\mathbf {A}^T = \sum _{i=1}^m \sum _{j=1}^m \mathbf {a}_i^T\mathbf {a}_j \mathbf {Y}_{i,j}, \end{aligned}$$
where \(\mathbf {Y}_{i,j} \in \mathbb {R}^{m \times m}\) is a matrix such that \((\mathbf {Y}_{i,j})_{p,q} = 1\) if \(i=p\) and \(j=q\), and otherwise \((\mathbf {Y}_{i,j})_{p,q} = 0\). By the assumption that the entries of \(\mathbf {A}\) are random variables with zero mean and unit variance, we obtain
$$\begin{aligned} \mathbb {E}\left[ \mathbf {a}_i^T\mathbf {a}_i \right] = \mathbb {E}\left[ a_{i,1}^2 + a_{i,2}^2 + \cdots + a_{i,n}^2 \right] = \mathbb {E}\left[ a_{i,1}^2\right] + \mathbb {E}\left[ a_{i,2}^2 \right] + \cdots +\mathbb {E}\left[ a_{i,n}^2 \right] =n, \end{aligned}$$
for \( i=1,2,\ldots , m,\) and for \(i\ne j\),
$$\begin{aligned} \mathbb {E}\left[ \mathbf {a}_i^T\mathbf {a}_j \right] = \mathbb {E}\left[ a_{i,1}\right] \mathbb {E}\left[ a_{j,1}\right] + \mathbb {E}\left[ a_{i,2}\right] \mathbb {E}\left[ a_{j,2} \right] + \cdots + \mathbb {E} \left[ a_{i,n}\right] \mathbb {E}\left[ a_{j,n} \right] = 0. \end{aligned}$$
$$\begin{aligned} \mathbb {E}[\Vert \mathbf {A} \Vert ^2]\ge \Vert \mathbb {E}[ \mathbf {A}\mathbf {A}^T ]\Vert = \Bigg \Vert \mathbb {E}\left[ \sum _{i=1}^m \sum _{j=1}^m \mathbf {a}_i^T\mathbf {a}_j \mathbf {Y}_{i,j} \right] \Bigg \Vert = \Bigg \Vert \sum _{i=1}^m n \mathbf {Y}_{i,i} \Bigg \Vert = \Vert n \mathbf {I}_{m} \Vert = n. \end{aligned}$$
Theorem 5 provides a lower bound of the expected \(\Vert \mathbf {A}^T\mathbf {A}\Vert \). The inequality in (7) is a general result and makes minimal assumptions on the covariance \({\varvec{\Sigma }}\). Note that the lower bound is independent of n. The reason is that this general setting covers cases where \({\varvec{\Sigma }}\) is not full rank: some \(\xi _i\)'s could be fixed 0's instead of having unit variance. In fact, when all \(\xi _i\)'s are 0's for \(i=1,2,\ldots ,n\), which implies \({\varvec{\Sigma }} = \mathbf {0}_{n \times n}\), the bound (7) is tight because \(\mathbf {A} = \mathbf {0}_{m\times n}\). For the setting that we consider in this paper, Eq. (8) is a tighter bound than (7) and depends on both m and n. In the case where all variables are independent, we could simplify the results above into the following.
Corollary 6
Let \(\mathbf {A}\) be an \(m \times n\) random matrix in which its rows are independent samples of some random variables \({\varvec{\xi }}^T=(\xi _1,\xi _2,\ldots ,\xi _n)\) with \(\mathbb {E}[\xi _i] = 0\), \(\mathbb {E}[\xi _i^2] = 1\), and \(\xi _i\)'s are independent for \(i = 1,2,\ldots , n\), then
$$\begin{aligned} \max \{m , n\} \le \mathbb {E} \left[ \Vert \mathbf {A} \Vert ^2 \right] . \end{aligned}$$
Since all random variables are independent, \({\varvec{\Sigma }} = \mathbf {I}_n\) and so \(\mu _{\max } = \lambda _{\max }({\varvec{\Sigma }}) = 1\).
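Corollary 6 is easy to probe by Monte Carlo simulation; for independent standard Gaussian entries the estimate of \(\mathbb {E}[\Vert \mathbf {A}\Vert ^2]\) comfortably exceeds \(\max \{m,n\}\). The sketch below is our own illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, trials = 20, 5, 300
vals = [np.linalg.norm(rng.standard_normal((m, n)), 2) ** 2 for _ in range(trials)]
est = float(np.mean(vals))  # Monte Carlo estimate of E[||A||^2]
print(est, max(m, n))       # est is well above the lower bound max{m, n}
```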
Upper bounds
In order to compute an upper bound on the expected \(\Vert \mathbf {A}^T\mathbf {A}\Vert \), we first compute its tail bounds. The idea of the proof is to rewrite \(\mathbf {A}^T\mathbf {A}\) as a sum of independent random matrices, and then use existing results in random matrix theory to derive the tail bounds of \(\Vert \mathbf {A}^T\mathbf {A}\Vert \). We then compute the upper bound of the expected value. Notice that our approach for computing the tail bounds is, in principle, the same as in [19]. However, we present a tail bound that is easier to integrate into the upper bound of the expected \(\Vert \mathbf {A}^T\mathbf {A}\Vert \). That is, the derived bound can be directly used to bound \(\Vert \mathbf {A}^T\mathbf {A}\Vert \) without any numerical constant.
In order to compute the tail bounds, the following two Lemmas will be used.
Lemma 7

[19] Suppose that \(\mathbf {Q}\) is a random positive semi-definite matrix that satisfies \(\lambda _{\max }(\mathbf {Q}) \le 1 \). Then
$$\begin{aligned} \mathbb {E} \left[ e^{\theta \mathbf {Q}} \right] \preccurlyeq \mathbf {I} + (e^{\theta } - 1) (\mathbb {E}\left[ \mathbf {Q}\right] ), \quad \text {for} \ \theta \in \mathbb {R}, \end{aligned}$$
where \(\mathbf {I}\) is the identity matrix in the correct dimension.
Lemma 8

[19] Consider a sequence \(\{\mathbf {Q}_k:k=1,2,\ldots ,m\}\) of independent, random, self-adjoint matrices with dimension n. For all \(t \in \mathbb {R}\),
$$\begin{aligned}&\mathbb {P} \left\{ \lambda _{\max } \left( \sum _{k=1}^m \mathbf {Q}_k \right) \ge t\right\} \nonumber \\&\quad \le n \inf _{\theta >0} \exp \left( -\theta t + m \ \log \lambda _{\max } \left( \frac{1}{m} \sum _{k=1}^m \mathbb {E} e^{\theta \mathbf {Q}_k} \right) \right) . \end{aligned}$$
Combining the two results from random matrix theory, we can derive the following theorem for the tail bound of \(\Vert \mathbf {A}^T\mathbf {A}\Vert \).
Theorem 9

Let \(\mathbf {A}\) be an \(m \times n\) random matrix in which its rows are independent samples of some random variables \({\varvec{\xi }}^T =(\xi _1,\xi _2,\ldots ,\xi _n)\) with \(\mathbb {E}[\xi _i] = 0\) for \(i = 1,2,\ldots , n\), and covariance matrix \({\varvec{\Sigma }} = \mathbb {E}\left[ {\varvec{\xi }} {\varvec{\xi }}^T \right] \). Denote \(\mu _{\max } = \lambda _{\max }( {\varvec{\Sigma }})\) and suppose
$$\begin{aligned} \lambda _{\max } \left[ {\varvec{\xi }} {\varvec{\xi }}^T \right] \le R \quad \text {almost surely}. \end{aligned}$$
Then, for any \(\theta ,t \in \mathbb {R}^+\),
$$\begin{aligned} \mathbb {P} \left\{ \lambda _{\max } \left( \mathbf {A}^T\mathbf {A} \right) \ge t\right\} \le n \exp \left[ -\theta t + m \ \log \left( 1 + (e^{\theta R} - 1) \mu _{\max } /R \right) \right] . \end{aligned}$$
In particular,
$$\begin{aligned} \mathbb {P} \left\{ \lambda _{\max } \left( \mathbf {A}^T\mathbf {A} \right) \ge t\right\} \le n \left[ \frac{\mu _{\max }(mR-t)}{t(R-\mu _{\max })}\right] ^{\frac{t}{R}} \left[ 1+ \frac{t-\mu _{\max } m}{mR - t} \right] ^m. \end{aligned}$$
Denote \(\mathbf {a}_i^T\) as the \(i^{\text {th}}\) row of \(\mathbf {A}\). We can rewrite \(\mathbf {A}^T\mathbf {A}\) as
$$\begin{aligned} \mathbf {A}^T \mathbf {A} = \sum _{k=1}^m \mathbf {a}_k\mathbf {a}_k^T. \end{aligned}$$
Notice that the \(\mathbf {a}_k\mathbf {a}_k^T\)'s are independent, random, positive-semidefinite matrices, and \(\mathbb {E}\left[ \mathbf {a}_k\mathbf {a}_k^T\right] = {\varvec{\Sigma }}\), for \(k=1,2,\ldots ,m\). Using Lemma 8, for any \(\theta >0\), we have
$$\begin{aligned} \mathbb {P} \left\{ \lambda _{\max } \left( \mathbf {A}^T\mathbf {A} \right) \ge t\right\}= & {} \mathbb {P} \left\{ \lambda _{\max } \left( \sum _{k=1}^m \mathbf {a}_k\mathbf {a}_k^T \right) \ge t\right\} ,\\\le & {} n \exp \left[ -\theta t + m \ \log \lambda _{\max } \left( \frac{1}{m} \sum _{k=1}^m \mathbb {E} e^{\theta \mathbf {a}_k\mathbf {a}_k^T } \right) \right] . \end{aligned}$$
Notice that \(\lambda _{\max }(\mathbf {a}_k\mathbf {a}_k^T) \le R\); by rescaling in Lemma 7, we have
$$\begin{aligned} \mathbb {E}\left[ e^{\tilde{\theta }(1/R)(\mathbf {a}_k\mathbf {a}_k^T) } \right] \preccurlyeq \mathbf {I}_n + (e^{\tilde{\theta }} - 1) \left( \mathbb {E} \left[ (1/R)\left( \mathbf {a}_k\mathbf {a}_k^T\right) \right] \right) , \quad \text {for any} \ \tilde{\theta } \in \mathbb {R}, \end{aligned}$$
and thus
$$\begin{aligned}&\mathbb {E}\left[ e^{\theta (\mathbf {a}_k\mathbf {a}_k^T) } \right] \preccurlyeq \mathbf {I}_n + \frac{(e^{\theta R} - 1)}{R} \mathbb {E} \left[ \mathbf {a}_k\mathbf {a}_k^T \right] = \mathbf {I}_n + \frac{(e^{\theta R} - 1)}{R} {\varvec{\Sigma }} , \quad \text {for any} \ \theta \in \mathbb {R}.\\&\mathbb {P} \left\{ \lambda _{\max } \left( \mathbf {A}^T\mathbf {A} \right) \ge t\right\} \le n \exp \left[ -\theta t + m \ \log \lambda _{\max } \left( \frac{1}{m} \sum _{k=1}^m \mathbf {I}_n + \frac{(e^{\theta R} - 1)}{R} {\varvec{\Sigma }} \right) \right] ,\\&\qquad \qquad \qquad \qquad \qquad \qquad = n \exp \left[ -\theta t + m \ \log \left( 1 + (e^{\theta R} - 1) \lambda _{\max } ({\varvec{\Sigma }} )/R \right) \right] ,\\&\qquad \qquad \qquad \qquad \qquad \qquad = n \exp \left[ -\theta t + m \ \log \left( 1 + (e^{\theta R} - 1) \mu _{\max } /R \right) \right] . \end{aligned}$$
Using standard calculus, the upper bound is minimized when
$$\begin{aligned} \theta ^{\star } = \frac{1}{R} \log \left[ \frac{t(R-\mu _{\max })}{\mu _{\max }(mR-t)}\right] . \end{aligned}$$
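To see this, differentiate the exponent in (12) and set the derivative to zero:
$$\begin{aligned} \frac{\text {d}}{\text {d}\theta } \left[ -\theta t + m \log \left( 1 + (e^{\theta R} - 1) \mu _{\max } /R \right) \right] = -t + \frac{m \mu _{\max } R \, e^{\theta R}}{R + (e^{\theta R} - 1)\mu _{\max }} = 0 . \end{aligned}$$
Solving for \(e^{\theta R}\) gives \(e^{\theta ^{\star } R} = t(R-\mu _{\max })/\left( \mu _{\max }(mR-t)\right) \), which is the expression above.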
$$\begin{aligned} \mathbb {P} \left\{ \lambda _{\max } \left( \mathbf {A}^T\mathbf {A} \right) \ge t\right\}\le & {} n \exp \left[ -\theta ^{\star } t + m \ \log \left( 1 + (e^{\theta ^{\star } R} - 1) \mu _{\max } /R \right) \right] ,\\= & {} n \left[ \frac{\mu _{\max }(mR-t)}{t(R-\mu _{\max })}\right] ^{\frac{t}{R}} \left[ 1+ \frac{t-\mu _{\max } m}{mR - t} \right] ^m. \end{aligned}$$
For matrices whose entries are samples from unbounded random variables, assumption (11) in Theorem 9 might seem restrictive. In practice, however, it is mild: the datasets used in problem (1) are usually normalized and bounded, so it is reasonable to assume that an observation is discarded if its magnitude exceeds some constant.
The tail bound (13) is the tightest bound over all possible \(\theta \)'s in (12), but it is difficult to interpret the relationships between the variables. The following corollary takes a less optimal \(\theta \) in (12), but yields a bound that is easier to understand.
Corollary 10
In the same setting as Theorem 9, we have
$$\begin{aligned} \mathbb {P} \left\{ \lambda _{\max } \left( \mathbf {A}^T\mathbf {A} \right) \ge t\right\} \le n \exp \left[ \frac{2m\mu _{\max } - t}{R} \right] . \end{aligned}$$
In particular, for \(\epsilon \in \mathbb {R}^+\), we have
$$\begin{aligned} \mathbb {P} \left\{ \lambda _{\max } \left( \mathbf {A}^T\mathbf {A} \right) \le 2m\mu _{\max } - R \log \left( \frac{\epsilon }{n} \right) \right\} \ge 1-\epsilon . \end{aligned}$$
Using Eq. (12) and the fact that \(\log (y) \le y-1\) for all \(y \in \mathbb {R}^+\), we have
$$\begin{aligned} \mathbb {P} \left\{ \lambda _{\max } \left( \mathbf {A}^T\mathbf {A} \right) \ge t\right\} \le n \exp \left[ -\theta t + \frac{m\mu _{\max }}{R} (e^{\theta R} - 1) \right] . \end{aligned}$$
The above upper bound is minimized when \(\theta = (1/R)\log \left[ t/(m\mu _{\max })\right] \), and so
$$\begin{aligned} \mathbb {P} \left\{ \lambda _{\max } \left( \mathbf {A}^T\mathbf {A} \right) \ge t \right\}\le & {} n \exp \left[ - \frac{t}{R} \log \left[ \frac{t}{m\mu _{\max }} \right] + \frac{m\mu _{\max }}{R} \left( \frac{t}{m\mu _{\max }} - 1 \right) \right] ,\\= & {} n \exp \left[ \frac{t}{R}\left( \log \left[ \frac{m\mu _{\max }}{t} \right] + 1 - \frac{m\mu _{\max }}{t} \right) \right] ,\\= & {} n \exp \left[ \frac{t}{R}\left( \log \left[ \frac{m\mu _{\max } e}{t} \right] - \frac{m\mu _{\max }}{t} \right) \right] ,\\\le & {} n \exp \left[ \frac{t}{R}\left( \frac{m\mu _{\max } e}{t} - 1 - \frac{m\mu _{\max }}{t} \right) \right] ,\\\le & {} n \exp \left[ \frac{1}{R}\left( m\mu _{\max } (e-1) - t \right) \right] ,\\\le & {} n \exp \left[ \frac{1}{R}\left( 2m\mu _{\max } - t \right) \right] . \end{aligned}$$
Setting \(\epsilon = n \exp \left[ \dfrac{1}{R}\left( 2m\mu _{\max } - t \right) \right] \), we obtain \(t = 2m\mu _{\max } - R \log \left( \dfrac{\epsilon }{n} \right) \). \(\square \)
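As a sanity check (our own illustration, not part of the proof), the high-probability bound (15) can be verified by simulation; here we assume rows with i.i.d. entries uniform on \([-\sqrt{3},\sqrt{3}]\), so that \({\varvec{\Sigma }} = \mathbf {I}_n\), \(\mu _{\max }=1\), and \(R = c^2 n = 3n\):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, trials, eps = 200, 20, 500, 0.1

# Rows with i.i.d. uniform entries on [-sqrt(3), sqrt(3)]: zero mean, unit
# variance, so Sigma = I_n, mu_max = 1, and |xi_i| <= c = sqrt(3), R = 3n.
c = np.sqrt(3.0)
mu_max, R = 1.0, c**2 * n

# High-probability threshold from Corollary 10: holds with prob. >= 1 - eps.
t_eps = 2 * m * mu_max - R * np.log(eps / n)

lam_max = np.empty(trials)
for k in range(trials):
    A = rng.uniform(-c, c, size=(m, n))
    lam_max[k] = np.linalg.eigvalsh(A.T @ A).max()

coverage = np.mean(lam_max <= t_eps)
print(f"threshold t_eps = {t_eps:.1f}, empirical coverage = {coverage:.3f}")
```

The bound is conservative, so the empirical coverage typically far exceeds \(1-\epsilon \).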
The bound in (15) follows directly from (14) and shows that, with high probability \(1-\epsilon \) (for small \(\epsilon \)), \(\lambda _{\max } \left( \mathbf {A}^T\mathbf {A} \right) \) is less than \(2m\mu _{\max } + R\log (n)- R \log \left( \epsilon \right) \). Applying the results in Corollary 10 provides an upper bound on the expected \(\Vert \mathbf {A}^T\mathbf {A}\Vert \).
Corollary 11
$$\begin{aligned} \mathbb {E} \left[ \lambda _{\max } (\mathbf {A}^T\mathbf {A}) \right] \le 2m\mu _{\max } + R \log \left( n \right) + R. \end{aligned}$$
Using Eq. (14) and the fact that
$$\begin{aligned} 1\le n\exp \left[ \dfrac{2m\mu _{\max } -t}{R} \right] \quad \text {when} \quad t \le 2m\mu _{\max } - R \log \left[ \frac{1}{n} \right] , \end{aligned}$$
$$\begin{aligned} \mathbb {E} \left[ \lambda _{\max } (\mathbf {A}^T\mathbf {A}) \right]= & {} \int _0^{\infty } \mathbb {P}\{ \lambda _{\max } (\mathbf {A}^T\mathbf {A}) > t \} \ \text {d}t ,\\\le & {} \int _0^{2m\mu _{\max } - R \log \left[ \frac{1}{n} \right] } 1 \ \text {d}t\\&\,+ \int _{2m\mu _{\max } - R \log \left[ \frac{1}{n} \right] }^{\infty } n \exp \left[ \frac{2m\mu _{\max } - t}{R} \right] \ \text {d}t ,\\= & {} 2m\mu _{\max } - R \log \left[ \frac{1}{n} \right] + R . \end{aligned}$$
Therefore, for a matrix \(\mathbf {A}\) which is constructed by a set of normalized data, we obtain the bound
$$\begin{aligned} \max \{m \mu _{\max } , n\} \le \mathbb {E} \left[ \Vert \mathbf {A} \Vert ^2 \right] \le 2m\mu _{\max } + R \log \left( n \right) + R. \end{aligned}$$
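The sandwich bound (18) can likewise be illustrated numerically (our own sketch, with hypothetical data; Rademacher rows give \({\varvec{\Sigma }} = \mathbf {I}_n\), \(\mu _{\max } = 1\), \(c = 1\), and \(R = n\)):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, trials = 100, 30, 300

# Rademacher rows (i.i.d. +/-1 entries): Sigma = I_n, mu_max = 1, c = 1, R = n.
mu_max, R = 1.0, float(n)

sq_norms = [np.linalg.norm(rng.choice([-1.0, 1.0], size=(m, n)), 2) ** 2
            for _ in range(trials)]
avg = np.mean(sq_norms)                        # Monte Carlo estimate of E||A||^2

lower = max(m * mu_max, n)                     # lower bound in (18)
upper = 2 * m * mu_max + R * np.log(n) + R     # upper bound in (18)
print(f"{lower:.1f} <= {avg:.1f} <= {upper:.1f}")
```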
The result in (18) might look confusing because for small m and large n, the lower bound is of the order of n while the upper bound is of the order of \(\log (n)\). The reason is that we have to take into account the factor of dimensionality in the constant R. To illustrate this, we prove the following corollary.
Corollary 12
Let \(\mathbf {A}\) be an \(m \times n\) random matrix in which its rows are independent samples of some random variables \({\varvec{\xi }}^T =(\xi _1,\xi _2,\ldots ,\xi _n)\) with \(\mathbb {E}[\xi _i] = 0\) for \(i = 1,2,\ldots , n\), and covariance matrix \({\varvec{\Sigma }} = \mathbb {E}\left[ {\varvec{\xi }} {\varvec{\xi }}^T \right] \). Denote \(\mu _{\max } = \lambda _{\max }( {\varvec{\Sigma }})\) and suppose \(\vert \xi _i \vert \le c\) almost surely for \(i=1,2,\ldots ,n\). Then
$$\begin{aligned} \lambda _{\max } \left[ {\varvec{\xi }}{\varvec{\xi }}^T \right] \le c^2n \quad \text {almost surely}. \end{aligned}$$
$$\begin{aligned} \max \{m \mu _{\max } , n\} \le \mathbb {E} \left[ \Vert \mathbf {A} \Vert ^2 \right] \le 2m\mu _{\max } + c^2 n\log \left( n \right) + c^2n \end{aligned}$$
Since \({\varvec{\xi }} {\varvec{\xi }}^T\) is a symmetric rank 1 matrix, we have
$$\begin{aligned} \lambda _{\max }({\varvec{\xi }} {\varvec{\xi }}^T) = \Vert {\varvec{\xi }} {\varvec{\xi }}^T\Vert \le n \Vert {\varvec{\xi }}{\varvec{\xi }}^T\Vert _{\max } = n \max _{1\le i,j\le n}\{\vert \xi _i \xi _j \vert \} \le c^2n \quad \text {almost surely}. \end{aligned}$$
Therefore, R increases linearly in n for bounded \({\varvec{\xi }}\). Recall that the lower bound of the expected \(\Vert \mathbf {A}\Vert ^2\) is linear in both m and n, and the upper bound in (20), is almost linear in both m and n. Therefore, our results on the bounds for the expected Lipschitz constant are nearly-optimal up to some constant.
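The rank-one identity used in the proof is easy to confirm numerically (a small sketch with hypothetical data):

```python
import numpy as np

rng = np.random.default_rng(2)
n, c = 50, 2.0

# For a bounded sample |xi_i| <= c, the outer product xi xi^T has rank one, so
# its only nonzero eigenvalue is ||xi||^2, which is at most c^2 * n.
xi = rng.uniform(-c, c, size=n)
lam_max = np.linalg.eigvalsh(np.outer(xi, xi)).max()

print(lam_max, xi @ xi, c**2 * n)
```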
On the other hand, in order to obtain the lower bound of L, we also need a tail bound on \(\lambda _{\min } (\mathbf {A}^T\mathbf {A})\), which is provided in the following theorem.
Theorem 13
Let \(\mathbf {A}\) be an \(m \times n\) random matrix whose rows are independent samples of a random vector \({\varvec{\xi }}^T =(\xi _1,\xi _2,\ldots ,\xi _n)\) with covariance matrix \({\varvec{\Sigma }} = \mathbb {E}\left[ {\varvec{\xi }} {\varvec{\xi }}^T \right] \). Denote \(\mu _{\min } = \lambda _{\min }( {\varvec{\Sigma }})\) and suppose \(\vert \xi _i \vert \le c\) almost surely for \(i=1,2,\dots , n\).
Then, if \(\mu _{\min } \ne 0\), for any \(\epsilon \in \left( n \exp \left[ - \frac{m\mu _{\min }}{2nc^2} \right] \ ,n \right) \)
$$\begin{aligned} \mathbb {P} \left\{ \lambda _{\min } \left( \mathbf {A}^T\mathbf {A} \right) \le m\mu _{\min } - \sqrt{2c^2 n m\mu _{\min }\log \left( \frac{n}{\epsilon } \right) } \right\} \le \epsilon . \end{aligned}$$
Suppose \(\vert \xi _i \vert \le c\) almost surely for \(i=1,2,\dots , n\). Then using Corollary 12 we have
$$\begin{aligned} \lambda _{\max } \left[ {\varvec{\xi }}{\varvec{\xi }}^T \right] \le c^2n = R \quad \text {almost surely}. \end{aligned}$$
Using Theorem 1.1 from [19], for any \(\theta \in (0,1)\) we have
$$\begin{aligned} \mathbb {P} \left\{ \lambda _{\min } \left( \mathbf {A}^T\mathbf {A} \right) \le \theta m\mu _{\min } \right\}\le & {} n \left[ \frac{\exp [\theta -1]}{\theta ^{\theta }} \right] ^{m\mu _{\min }/R} ,\\= & {} n \exp \left[ \left( -(1-\theta )-\theta \log (\theta ) \right) \left( \frac{m\mu _{\min }}{R} \right) \right] . \end{aligned}$$
Notice that for \(\theta \in (0,1]\), the function \(h(\theta ) \triangleq 2\theta \log (\theta ) - \theta ^2 + 1\) satisfies \(h(1)=0\) and \(h'(\theta ) = 2\log (\theta ) + 2 - 2\theta \le 0\) (since \(h''(\theta ) = 2/\theta - 2 > 0\) on (0, 1) and \(h'(1)=0\)), so h is nonincreasing and \(h(\theta ) \ge h(1) = 0\). Dividing by \(\theta > 0\) gives
$$\begin{aligned} 2 \log ( \theta ) \ge \frac{\theta ^2 -1}{\theta }. \end{aligned}$$
Therefore,
$$\begin{aligned} \mathbb {P} \left\{ \lambda _{\min } \left( \mathbf {A}^T\mathbf {A} \right) \le \theta m \mu _{\min } \right\}\le & {} n \exp \left[ \left( -(1-\theta )-\theta \log (\theta ) \right) \left( \frac{m\mu _{\min }}{R} \right) \right] ,\\\le & {} n \exp \left[ \left( -(1-\theta )-\theta \frac{\theta ^2 -1}{2\theta } \right) \left( \frac{m\mu _{\min }}{R} \right) \right] ,\\= & {} n \exp \left[ -\frac{1}{2} \left( \theta - 1\right) ^2 \left( \frac{m\mu _{\min }}{R} \right) \right] . \end{aligned}$$
For \(\mu _{\min } \ne 0\), we let \(\epsilon = n \exp \left[ - \left( \theta - 1\right) ^2 \frac{m\mu _{\min }}{2R} \right] \) and so
$$\begin{aligned} \theta = 1 - \sqrt{\frac{2R}{m \mu _{\min }} \log \left( \frac{n}{\epsilon } \right) } . \end{aligned}$$
In particular, suppose \(\epsilon \in \left( n \exp \left[ - \frac{m\mu _{\min }}{2R} \right] \ ,n \right) \), then
$$\begin{aligned} \begin{aligned} 0< 1 - \sqrt{\frac{2R}{m \mu _{\min }} \log \left( \frac{n}{\epsilon } \right) } < 1 \end{aligned} \end{aligned}$$
$$\begin{aligned} \mathbb {P} \left\{ \lambda _{\min } \left( \mathbf {A}^T\mathbf {A} \right) \le m\mu _{\min } - \sqrt{2Rm\mu _{\min }\log \left( \frac{n}{\epsilon } \right) } \right\} \le \epsilon , \end{aligned}$$
for \(\epsilon \in \left( n \exp \left[ - \frac{m\mu _{\min }}{2R} \right] ,\,n \right) \). \(\square \)
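A simulation sketch of the tail bound in Theorem 13 (our own illustration, with hypothetical sizes; Rademacher rows give \(\mu _{\min } = 1\), \(c = 1\), \(R = n\), and the chosen \(\epsilon \) lies in the admissible interval since \(n \exp [-m\mu _{\min }/(2nc^2)]\) is astronomically small here):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, trials, eps = 2000, 10, 200, 0.1

# Rademacher rows: Sigma = I_n, mu_min = 1, c = 1, so R = c^2 * n = n.
# Admissible range for eps: (n * exp(-m/(2n)), n) = (10 * e^-100, 10).
mu_min, R = 1.0, float(n)

t_eps = m * mu_min - np.sqrt(2 * R * m * mu_min * np.log(n / eps))

lam_min = np.empty(trials)
for k in range(trials):
    A = rng.choice([-1.0, 1.0], size=(m, n))
    lam_min[k] = np.linalg.eigvalsh(A.T @ A).min()

viol = np.mean(lam_min <= t_eps)
print(f"threshold = {t_eps:.1f}, violation frequency = {viol:.3f} (<= {eps})")
```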
For the tail bound in Theorem 13 to be meaningful, m has to be sufficiently large compared to n. In such cases, the smallest eigenvalue \(\lambda _{\min } \left( \mathbf {A}^T\mathbf {A} \right) \) is at least \(\mathcal {O}(m-\sqrt{nm\log {n}})\) with high probability.
Complexity analysis
In this section, we use the probabilistic bounds of L to study the complexity of solving ERM. We focus on FISTA for illustrative purposes and a clear presentation of the idea, but the approach developed in this section can be applied to other algorithms as well.
By the assumption that \(\mathbf {A}\) is a random matrix, the solution \(\mathbf {x}_{\star }\) is also a random vector. Notice that the study of the randomness of \(\mathbf {x}_{\star }\) is not covered in this paper. In particular, if the statistical properties of \(\mathbf {x}_{\star }\) could be obtained directly, existing optimization algorithms might not be needed to solve the ERM problem. Therefore, in this paper, we remove this consideration by denoting a constant M such that \(\Vert \mathbf {x}_0 - \mathbf {x}_{\star } \Vert ^2 \le M\). In that case, we have the FISTA convergence rate
$$\begin{aligned} F(\mathbf {x}_k)-F(\mathbf {x}_{\star }) \le \frac{2\eta L M}{(k+1)^2}, \end{aligned}$$
where \(\mathbf {x}_{\star }\) is the solution of (1), and \(\eta > 1\) is the parameter which is used in the backtracking stepsize strategy.
Using Proposition 1 and Corollary 12, we know
$$\begin{aligned} \max \left\{ \gamma \mu _{\max } , \frac{\gamma n}{m}\right\} \le \frac{\gamma }{m} \mathbb {E} \left[ \Vert \mathbf {A} \Vert ^2 \right] \le 2\gamma \mu _{\max } + \frac{\gamma }{m}(c^2n\log \left( n \right) + c^2n). \end{aligned}$$
Thus, on average,
$$\begin{aligned} F(\mathbf {x}_k)-F(\mathbf {x}_{\star }) \le \frac{2\eta M }{(k+1)^2} \left( 2\gamma \mu _{\max } + \frac{\gamma }{m}(c^2n\log \left( n \right) + c^2n) \right) . \end{aligned}$$
In (22)–(23), the lower bound of \((\gamma /m) \mathbb {E} \left[ \Vert \mathbf {A} \Vert ^2 \right] \) is linear in n / m, and the upper bound is nearly linear in n / m. This suggests that the average complexity of ERM is governed by the ratio of the dimension to the amount of data. In particular, problems with overdetermined systems (\(m \gg n\)) can be solved more efficiently than problems with underdetermined systems (\(m < n\)).
Another critical factor of the complexity is \(\mu _{\max } = \lambda _{\max }({\varvec{\Sigma }})\), where \({\varvec{\Sigma }}\) is the covariance matrix of the rows of \(\mathbf {A}\). In the ideal situation of regression analysis, all inputs should be statistically uncorrelated. In such cases, since we assume the diagonal elements of \({\varvec{\Sigma }}\) are 1's, \(\mu _{\max }=1\). It is, however, almost impossible to ensure this in practical applications. In practice, since \({\varvec{\Sigma }} \in \mathbb {R}^{n \times n}\), \(\mu _{\max } = \lambda _{\max }({\varvec{\Sigma }}) = \Vert {\varvec{\Sigma }} \Vert \) is likely to increase as n increases.
Similarly, we can compute the probabilistic lower bound of L in the case that m is sufficiently larger than n. Using Theorem 13, we can show that L is bounded below by
$$\begin{aligned} \mathcal {O}\left( \mu _{\min } - \sqrt{(n\log n)/m} \right) . \end{aligned}$$
We emphasize that the lower bound of L is not equivalent to a lower bound on the complexity. However, since the stepsize of first-order methods is proportional to 1 / L, this result indicates an upper bound on the stepsize that could potentially guarantee convergence.
PUG: Probabilistic Upper-bound Guided stepsize strategy
The tail bounds in Sect. 3, as a by-product of the upper bound in Sect. 3.1.2, can also be used in algorithms. As mentioned in the introduction, L is an important quantity in the stepsize strategy since the stepsize is usually inversely proportional to L. However, in large-scale optimization, evaluating \(\Vert \mathbf {A}\Vert ^2\) is computationally expensive. One could use backtracking techniques to avoid evaluating the Lipschitz constant: in each iteration, we find a large enough constant \(\tilde{L}\) that satisfies the properties of the Lipschitz constant locally. In the case of FISTA [1], for the \(k^{\text {th}}\) iteration with incumbent \(\mathbf {x}_k\), one has to find an \(\tilde{L}\) such that
$$\begin{aligned} F\left( p_{\tilde{L}}(\mathbf {x}_k)\right) \le Q_{\tilde{L}}\left( p_{\tilde{L}}(\mathbf {x}_k), \mathbf {x}_k \right) , \end{aligned}$$
where,
$$\begin{aligned} Q_{\tilde{L}}(\mathbf {x},\mathbf {y}) \triangleq f(\mathbf {y}) + \langle \mathbf {x} -\mathbf {y},\nabla f(\mathbf {y}) \rangle + \frac{\tilde{L}}{2} \Vert \mathbf {x} -\mathbf {y} \Vert ^2 + g(\mathbf {x}), \end{aligned}$$
and \(p_{\tilde{L}}(\mathbf {y}) \triangleq \arg \min _{\mathbf {x}} \{Q_{\tilde{L}}(\mathbf {x},\mathbf {y}): \mathbf {x} \in \mathbb {R}^n \}\). Equation (24) is identical to the Lipschitz constant condition (6) with specifically \(\mathbf {y} = p_{\tilde{L}}(\mathbf {x}_k)\) and \(\mathbf {x} = \mathbf {x}_k\). Therefore, (24) is a less restrictive condition than the Lipschitz constant condition (6). This indicates that \(\tilde{L}\) could be much smaller than L, and so it yields a larger stepsize. On the other hand, for \(\tilde{L} \ge L\), the local Lipschitz constant condition is guaranteed to be satisfied. In both cases, when computing L is intractable, we cannot distinguish between them merely by having an \(\tilde{L}\) that satisfies (24).
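A minimal runnable sketch of checking condition (24) with a backtracking search (our own illustration; the least-squares-plus-\(\ell _1\) objective and the data are hypothetical):

```python
import numpy as np

def soft_threshold(v, tau):
    """Prox of tau * ||.||_1, used to evaluate p_L(x) in closed form."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def backtracking_L(A, b, x, omega, L0=1.0, eta=1.5):
    """Find L_tilde satisfying the local condition (24) for
    f(x) = 0.5/m * ||Ax - b||^2 and g(x) = omega * ||x||_1 (illustrative)."""
    m = A.shape[0]
    f = lambda z: 0.5 / m * np.sum((A @ z - b) ** 2)
    g = lambda z: omega * np.sum(np.abs(z))
    grad = A.T @ (A @ x - b) / m

    L = L0
    while True:
        p = soft_threshold(x - grad / L, omega / L)   # p_L(x)
        Q = f(x) + grad @ (p - x) + 0.5 * L * np.sum((p - x) ** 2) + g(p)
        if f(p) + g(p) <= Q + 1e-12:                  # condition (24)
            return L, p
        L *= eta                                      # L_tilde = eta^q * L0

rng = np.random.default_rng(4)
A, b = rng.standard_normal((40, 10)), rng.standard_normal(40)
L, p = backtracking_L(A, b, np.zeros(10), omega=0.1)
print("accepted L_tilde =", L)
```

The loop terminates as soon as \(\tilde{L}\) exceeds the local curvature, which can be far below the global Lipschitz constant.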
As we can see, finding a good \(\tilde{L}\) involves a series of function evaluations. In the next section, we will review the commonly used stepsize strategies.
Current stepsize strategies
To the best of our knowledge, current strategies fall into four categories:
(i) A fixed stepsize from an estimate of \(\Vert \mathbf {A} \Vert ^2\).
(ii) Backtracking-type methods with initial guess \(\tilde{L}_0\), which monotonically increase \(\tilde{L} = \eta ^p \tilde{L}_0\) until the Lipschitz condition is satisfied locally (\(\eta > 1\), \(p=0,1,\dots \)). See [1] for details.
(iii) Adaptive-type methods with initial guess \(\tilde{L}_0\). Suppose \(\tilde{L}_k\) is used for the \(k^{\text {th}}\) iteration; then find the smallest p such that \(\tilde{L}_{k+1} = 2^p \tilde{L}_k\) satisfies the Lipschitz condition locally (\(p= -1,0,1,\dots \)). See Nesterov's universal gradient methods [12] for details.
(iv) Adaptive stepsize strategies designed for a specific algorithm. See [7] for an example.
Theorem 14
Suppose \(\tilde{L}\) is used as an initial guess for the \(k^{\text {th}}\) iteration, and we select the smallest \(q \in \mathbb {N}\) such that \(\tilde{L}_k = \eta ^q \tilde{L}\) satisfies the local condition, for \(\eta \ge 1\). To guarantee convergence, it requires
$$\begin{aligned} q \ge \max \Bigg \{ \frac{1}{\log \eta } \left( \log L - \log \tilde{L} \right) , 0 \Bigg \}, \end{aligned}$$
which is also the number of function evaluations required. We have
$$\begin{aligned}&L \le \tilde{L}_k \le \eta L,&\quad \text {if} \quad \tilde{L} \le L,\\&L \le \tilde{L}_k = \tilde{L} ,&\quad \text {if} \quad L \le \tilde{L}. \end{aligned}$$
To guarantee convergence, we need q such that \(\tilde{L}_k = \eta ^q \tilde{L} \ge L\). If \(\tilde{L} \le L\), the smallest such q satisfies \(\eta ^q \tilde{L} \le \eta L\); otherwise \(q-1\) would already be large enough, i.e. \(\tilde{L}_k = \eta ^{q-1} \tilde{L} \ge L\), contradicting the minimality of q. \(\square \)
Theorem 14 covers the settings of choices (i)–(iii), also referred to as the fixed stepsize strategy, the backtracking method, and Nesterov's adaptive method, respectively. For fixed stepsize strategies, \(\tilde{L} \ge L\) is selected for all iterations, which yields \(q=0\), and thus checking the local condition is not required [1]. For the backtracking method, \(\tilde{L} = \tilde{L}_{k-1}\) and \(\eta > 1\) is a parameter of the strategy. Since \(\tilde{L}_k\) is monotonically increasing in k, q is monotonically decreasing. Therefore, q at the \(k^{\text {th}}\) iteration equals the total number of (extra) function evaluations over the remaining iterations.
On the other hand, for Nesterov's adaptive method, \(\tilde{L} = \tilde{L}_{k-1}/2\) and \(\eta = 2\). Here \(\tilde{L}_k\) is not monotonically increasing in k, and in each iteration q is the number of function evaluations in the worst case. Notice that once the worst case occurs (with q function evaluations) in the \(k^{\text {th}}\) iteration, subsequent q's will be smaller since \(\tilde{L}_k\) is then sufficiently large. In Nesterov's universal gradient methods [12], Nesterov proved that for k iterations, the number of function evaluations is bounded by \(\mathcal {O}(2k)\).
Theorem 14 illustrates the trade-off between three aspects: the aggressiveness of the initial guess \(\tilde{L}\), the recovering rate \(\eta \), and the convergence rate. Methods with a small (aggressive) initial guess \(\tilde{L}\) may achieve larger stepsizes, but they yield a larger q, the worst-case number of function evaluations. One could reduce q by setting a larger \(\eta \), so that \(\tilde{L}\) scales quickly towards L, but this worsens the convergence constant (\(\eta L\)). If one wants to preserve a good convergence rate (small \(\eta \)) with a small number of function evaluations (small q), then \(\tilde{L}\) cannot be too small, and one gives up the opportunity of large stepsizes. The fixed stepsize strategy is the extreme case: it minimizes q by giving up larger stepsizes entirely.
The proposed stepsize strategy PUG reduces \(\tilde{L}\) as in (iii), but guides \(\tilde{L}\) to increase reasonably and quickly when it fails to satisfy the local condition. In particular, by replacing L with its probabilistic upper bound, an aggressive \(\tilde{L}\) and a fast recovering rate are allowed without slowing the convergence. This feature escapes the trade-off that constrains choices (i)–(iii). PUG is also more flexible than (iv): it can be applied to all algorithms that require L, as well as to mini-batch and block-coordinate-type algorithms, which require submatrices of \(\mathbf {A}\).
In this section, we use the tail bounds to develop PUG. Using Eq. (15), we first define the upper bound at different confidence levels,
$$\begin{aligned} L \le \mathcal {U}(\epsilon ) \triangleq 2\gamma \mu _{\max } -\frac{\gamma R}{m} \log \left( \frac{\epsilon }{n} \right) , \end{aligned}$$
with probability at least \(1-\epsilon \). We point out that the probabilistic upper bound (15) does not rely on the assumption that the dataset is normalized with mean zero and unit variance, so it is applicable to all types of datasets. The basic idea of PUG is to use the result in the following theorem.
Theorem 15
Suppose \(\tilde{L}\) is used as an initial guess for the \(k^{\text {th}}\) iteration, and we denote
$$\begin{aligned} \eta _{\text {PUG},N} = \left( \frac{\mathcal {U}(\epsilon )}{\tilde{L}} \right) ^{1/N}, \end{aligned}$$
where \(\mathcal {U}(\epsilon )\) is defined as in (25). If we select the smallest \(q \in \mathbb {N}\) such that \(\tilde{L}_k = \eta _{\text {PUG},N}^q \tilde{L}\) satisfies the local condition, then with probability at least \(1-\epsilon \), at most \(q=N\) is required to guarantee convergence. In particular, we have
$$\begin{aligned}&L \le \tilde{L}_k \le \mathcal {U}(\epsilon ),&\quad \text {if} \quad \tilde{L} \le L,\\&L \le \tilde{L}_k = \tilde{L} ,&\quad \text {if} \quad L \le \tilde{L}, \end{aligned}$$
with probability of at least \(1-\epsilon \).
To guarantee convergence, it requires q such that \(\tilde{L}_k = \eta _{\text {PUG},N}^q \tilde{L} \ge L\). When \(q=N\), \(\tilde{L}_k = \mathcal {U}(\epsilon ) \ge L\) with probability of at least \(1-\epsilon \). \(\square \)
Theorem 15 shows the potential advantage of PUG. With any initial guess \(\tilde{L}\), PUG is able to scale \(\tilde{L}\) quickly towards L without interfering with the probabilistic convergence rate. This unique feature allows an aggressive initial guess \(\tilde{L}\), as in Nesterov's adaptive strategy, without a low recovering rate or a slow convergence rate. Algorithm 1 provides the details of PUG with \(N=2\). In Algorithm 1, the Lipschitz constant estimate from the last iteration, \(\tilde{L}_k\), is first divided by 2 to form the initial guess \(\tilde{L}\), as in Nesterov's adaptive method. We point out that,
$$\begin{aligned} \mathcal {U}(\epsilon ) \rightarrow \infty \quad \text {as} \quad \epsilon \rightarrow 0. \end{aligned}$$
Therefore, the convergence of FISTA is guaranteed with PUG, even in the extreme case where \(L \le \mathcal {U}(\epsilon )\) holds only for \(\epsilon \approx 0\).
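A sketch of the PUG update with \(N=2\) (our own illustration of the idea behind Algorithm 1; `check_local` abstracts the local condition (24), and all numbers are hypothetical):

```python
import numpy as np

def pug_eta(U_eps, L_tilde, N=2):
    """Recovering rate of PUG: after N scalings, L_tilde reaches U(eps)."""
    return (U_eps / L_tilde) ** (1.0 / N)

def pug_step(check_local, L_prev, U_eps, N=2):
    """One PUG update (sketch). check_local(L) returns True when L satisfies
    the local condition (24); L_prev is the estimate from the last iteration."""
    L = L_prev / 2.0                      # aggressive initial guess, as in (iii)
    eta = pug_eta(U_eps, L, N)
    for _ in range(N + 1):                # q = 0, 1, ..., N
        if check_local(L):
            return L
        L *= eta
    return L                              # L = U(eps), accepted w.p. >= 1 - eps

# Hypothetical illustration: accept once L exceeds the "true" constant 5.0.
true_L = 5.0
L_acc = pug_step(lambda L: L >= true_L, L_prev=2.0, U_eps=100.0)
print("accepted:", L_acc)   # initial guess 1.0 is rejected, then 10.0 accepted
```

The key point is that the recovering rate is derived from \(\mathcal {U}(\epsilon )\) rather than fixed in advance, so an aggressive guess is recovered in at most N scalings.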
In the case where computing \(\mu _{\max }\) is impractical, it could be bounded by
$$\begin{aligned} \mu _{\max } = \lambda _{\max }({\varvec{\Sigma }}) = \Vert {\varvec{\Sigma }}^{\frac{1}{2}} \Vert ^2 \le \text {trace}({\varvec{\Sigma }} ) = \sum _{i=1}^n \text {Var}(\xi _i) . \end{aligned}$$
With the assumption that the \(\xi _i\)'s have zero mean and unit variance, \(\mu _{\max } \le n\). For an \(\mathbf {A}\) that does not satisfy these assumptions due to a different normalization of the data, (26) could be used to bound \(\mu _{\max }\). For the R in (25), one could use \(c^2 n\) as in Corollary 12; a tighter estimate is
$$\begin{aligned} R = \sum _{i=1}^n \max _{k} a_{i,k}^2, \end{aligned}$$
since \( \lambda _{\max } \left[ {\varvec{\xi }} {\varvec{\xi }}^T\right] = \Vert {\varvec{\xi }} {\varvec{\xi }}^T \Vert = {\varvec{\xi }}^T {\varvec{\xi }} = \sum _{i=1}^n \xi _i^2. \)
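As a sketch (our own illustration, assuming the rows of \(\mathbf {A}\) hold the samples and \(\gamma = 1\)), the quantities in (25)–(27) can be estimated directly from the data:

```python
import numpy as np

def pug_upper_bound(A, gamma=1.0, eps=0.1):
    """Probabilistic upper bound U(eps) of (25), with mu_max bounded via the
    trace as in (26) and R estimated as in (27). Illustrative sketch."""
    m, n = A.shape
    Sigma = A.T @ A / m                     # sample second-moment matrix
    mu_max_bound = np.trace(Sigma)          # mu_max <= trace(Sigma), cf. (26)
    R = np.sum(np.max(A ** 2, axis=0))      # R = sum_i max_k a_{k,i}^2, cf. (27)
    return 2 * gamma * mu_max_bound - gamma * R / m * np.log(eps / n)

rng = np.random.default_rng(5)
A = rng.uniform(-1, 1, size=(500, 20))
U = pug_upper_bound(A)
L = np.linalg.norm(A, 2) ** 2 / A.shape[0]  # (gamma/m) * ||A||^2 with gamma = 1
print(f"L = {L:.2f}, U(0.1) = {U:.2f}")
```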
Convergence bounds: regular strategies versus PUG
Different stepsize strategies would lead to different convergence rates even for the same algorithm. Since PUG is based on the probabilistic upper bound \(\mathcal {U}(\epsilon )\) in (25), it leads to a probabilistic convergence of FISTA. In particular,
$$\begin{aligned} F(\mathbf {x}_k)-F(\mathbf {x}_{\star }) \le \frac{2 M}{(k+1)^2} \left( 2\gamma \mu _{\max } -\frac{\gamma R}{m} \log \left( \frac{\epsilon }{n} \right) \right) , \end{aligned}$$
with probability at least \(1-\epsilon \). Equation (28) holds with probability at least \(1-\epsilon \) because of Eq. (25), which holds for all iterations of FISTA. In particular, once the instance (the matrix \(\mathbf {A}\)) is fixed, we know that \(L \le \mathcal {U}(\epsilon )\) with probability at least \(1-\epsilon \). If the probabilistic upper bound holds, then \(\mathcal {U}(\epsilon )\) is the worst Lipschitz constant estimate computed by PUG over all iterations, and so (27) holds. Therefore, the above result is obtained using the same argument as in the proof of convergence in [1].
When using regular stepsize strategies, FISTA attains convergence rates of the form (21) with different \(\eta \)'s (\(\eta > 1\)). For the backtracking strategy, \(\eta \) is a user-specified parameter. It is clear from (21) that convergence is better when \(\eta \) is close to 1; however, it then takes more iterations and more function evaluations to find a satisfactory stepsize, and these costs are not captured in (21). In the case of Nesterov's adaptive strategy [12], \(\eta = 2\). Using the same analysis as in Sect. 3.2, L should be replaced with the upper bound in (22) for the average case, or with \(\mathcal {U}(\epsilon )\) in (25) for the probabilistic case. In the probabilistic case, those convergence rates are of the same order as with PUG, as shown in (28).
Therefore, PUG is competitive compared to other stepsize strategies in the probabilistic case. The strength of PUG comes from the fact that it is adaptive with strong theoretical guarantee that with high probability, \(\tilde{L}_k\) will quickly be accepted at each iteration.
Mini-batch algorithms and block-coordinate algorithms
For mini-batch algorithms, each iteration is performed using only a subset of the whole training set. Therefore, in each iteration, we consider the matrix that contains the corresponding subset. This matrix is a submatrix of \(\mathbf {A}\) with the same structure, and therefore it is also a random matrix of smaller size \(\bar{m} \times n\), where \(\bar{m} < m\). Using the existing results, we can conclude that the associated \(\mathcal {U}(\epsilon )\) in each iteration is larger than that of full-batch algorithms. As a result, the guaranteed stepsize for mini-batch algorithms tends to be smaller than for full-batch algorithms.
On the other hand, block-coordinate algorithms do not update all dimensions at once in each iteration; rather, a subset of dimensions is selected to perform the update. In this setting, we only consider the variables (columns of \(\mathbf {A}\)) that are associated with the selected coordinates, i.e. a submatrix formed by columns of \(\mathbf {A}\). This submatrix is also a random matrix of smaller size \(m \times \bar{n}\), where \(\bar{n} < n\). Using the existing results, the guaranteed stepsize for block-coordinate algorithms tends to be larger.
Thus, with minor modifications PUG can be applied to mini-batch and block-coordinate algorithms.
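A small numerical illustration of the effect of row and column subsampling on \(\mathcal {U}(\epsilon )\) (our own sketch with hypothetical data, estimating \(\mu _{\max }\) and R as in (26)–(27)):

```python
import numpy as np

def U_eps(A, gamma=1.0, eps=0.1):
    """U(eps) of (25), with mu_max and R estimated from the data (sketch)."""
    m, n = A.shape
    mu_max = np.trace(A.T @ A) / m            # bound via (26), sample version
    R = np.sum(np.max(A ** 2, axis=0))        # estimate via (27)
    return 2 * gamma * mu_max - gamma * R / m * np.log(eps / n)

rng = np.random.default_rng(7)
A = rng.uniform(-1, 1, size=(400, 40))

U_full = U_eps(A)
U_mini = U_eps(A[:25, :])    # mini-batch: fewer rows -> larger U, smaller step
U_block = U_eps(A[:, :10])   # block-coordinate: fewer columns -> smaller U

print(U_full, U_mini, U_block)
```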
Numerical experiments
In the first part of this section, we apply the bounds from Sect. 3 to illustrate the relationship between the different parameters and L. We then apply PUG to two regression examples. The datasets used for the two regression examples can be found at https://www.csie.ntu.edu.tw/~cjlin/libsvm-tools/datasets.
Numerical simulations for average L
We consider three cases, and in each case we simulate \(\mathbf {A}\) for different dimensions m and n. Each configuration is simulated with 1000 instances, and we study the sample-average behavior of L.
In the first case, we consider the most complicated situation and create a random vector whose entries are neither identically distributed nor independent. We use a mixture of three types of random variables (exponential, uniform, and multivariate normal) to construct the matrix \(\mathbf {A} \in \mathbb {R}^{m\times n}\). The rows of \(\mathbf {A}\) are independent samples of \({\varvec{\xi }}^T = (\xi _1,\xi _2,\ldots ,\xi _n)\). We divide \(\mathbf {A}\) into three parts with \(n_1\), \(n_2\), and \(n_3\) columns, where \(n_1 = n_2 = n_3 = n/3\) up to rounding. We assign the elements of \({\varvec{\xi }}\) as
$$\begin{aligned} \xi _j \sim {\left\{ \begin{array}{ll} Exp (1) - 1 &{} \text {if } j \le n_1,\\ \mathcal {U}(-\sqrt{3},\sqrt{3}) &{} \text {if } n_1 < j \le n_1 + n_2, \end{array}\right. } \end{aligned}$$
and \((\xi _{n_1+n_2+1},\xi _{n_1+n_2+2},\ldots ,\xi _n)\sim \mathcal {N}(\mathbf {0}_{n_3 \times 1},{\hat{\varvec{\Sigma }}})\), where \({\hat{\varvec{\Sigma }}}\) is an \(n_3 \times n_3\) matrix with 1's on the diagonal and 0.5 elsewhere. The variables \(\xi _1,\xi _2,\ldots ,\xi _{n_1+n_2}\) are independent.
The scaling and shifting of the uniform and exponential distributions normalize the random variables \(\xi _j\) so that \(\mathbb {E}[\xi _j]=0\) and \(\mathbb {E}[\xi _j^2]=1\). Some entries of \(\mathbf {A}\) are normally or exponentially distributed, and we approximate the upper bound of the entries with \(c=3\); with very high probability, this approximation is valid.
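The Case I construction can be sketched as follows (a minimal simulation of one instance, assuming the normalizations described above):

```python
import numpy as np

def simulate_A(m, n, rng):
    """Generate one Case I instance: thirds of the columns are centered
    exponential, scaled uniform, and correlated normal entries."""
    n1, n2 = n // 3, n // 3
    n3 = n - n1 - n2
    part1 = rng.exponential(1.0, size=(m, n1)) - 1.0          # Exp(1) - 1
    part2 = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(m, n2))
    Sig = np.full((n3, n3), 0.5) + 0.5 * np.eye(n3)           # 1 diag, 0.5 off
    part3 = rng.multivariate_normal(np.zeros(n3), Sig, size=m)
    return np.hstack([part1, part2, part3])

rng = np.random.default_rng(6)
A = simulate_A(120, 30, rng)
L_sample = np.linalg.norm(A, 2) ** 2 / A.shape[0]   # (gamma/m)||A||^2, gamma=1
print(A.shape, f"sample L = {L_sample:.2f}")
```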
In Fig. 1, we plot the sample-average Lipschitz constant over 1000 instances. As expected, the expected Lipschitz constant is "trapped" between its lower and upper bounds. We can see that the expected L increases as m and n increase with the ratio n / m fixed. This phenomenon is due to the fact that \(\mu _{\max } = \lambda _{\max }({\varvec{\Sigma }})\) increases as n increases.
To further illustrate this, we consider the second case. The setting is the same as in the first case except that we replace \({\hat{\varvec{\Sigma }}}\) with \(\mathbf {I}_n\), so all the variables are uncorrelated. In this case, \(\mu _{\max }=1\) regardless of the size of \(\mathbf {A}\). The ratio \(n/m = 2\) in this example. From Fig. 2, the sample-average L does not increase rapidly as the size of \(\mathbf {A}\) increases. These results match the bound (22).
In the last case, we investigate the effect of the ratio n / m. The setting is the same as in the first case, but we keep \(n=1024\) and test different m's. From Fig. 3, the sample-average L decreases as m increases. This result suggests that a large dataset is favorable in terms of complexity, especially for large-scale (large n) ERM problems.
Regularized logistic regression
Fig. 1: Case I, \(m=n\). Fig. 2: Case II, \(2m=n\). Fig. 3: Case III, \(n=1024\)
We implement FISTA with three different stepsize strategies: (i) the regular backtracking stepsize strategy, (ii) Nesterov's adaptive stepsize strategy, and (iii) the proposed adaptive stepsize strategy PUG. We compare the three strategies on an \(\ell _1\)-regularized logistic regression problem, in which we solve the convex optimization problem
$$\begin{aligned} \min _{\mathbf {x}\in \mathbb {R}^n} \frac{1}{m} \sum _{i=1}^m \log ( 1+ \exp (-b_i\mathbf {x}^T\mathbf {a}_i) ) + \omega \Vert \mathbf {x} \Vert _1. \end{aligned}$$
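The smooth part of this objective and its gradient can be sketched in a few lines of NumPy (an illustrative helper, not the authors' implementation; the random \(\mathbf{A}\) and \(\mathbf{b}\) below are placeholders for the real dataset):

```python
import numpy as np

def logistic_loss_and_grad(x, A, b):
    """Smooth part f(x) = (1/m) * sum_i log(1 + exp(-b_i * a_i^T x)) and its gradient."""
    m = A.shape[0]
    z = -b * (A @ x)                       # element-wise -b_i * <a_i, x>
    loss = np.logaddexp(0.0, z).mean()     # numerically stable log(1 + exp(z))
    sigma = 1.0 / (1.0 + np.exp(-z))       # sigmoid(z_i)
    grad = -(A.T @ (b * sigma)) / m
    return loss, grad

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 50))         # stand-in for a real data matrix
b = rng.choice([-1.0, 1.0], size=200)
loss, grad = logistic_loss_and_grad(np.zeros(50), A, b)
print(round(loss, 4))  # 0.6931, i.e. log(2) at x = 0
```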
We use the dataset gisette for \(\mathbf {A}\) and \(\mathbf {b}\). Gisette is a handwritten digits dataset from the NIPS 2003 feature selection challenge. The matrix \(\mathbf {A}\) is a \(6000 \times 5000\) dense matrix, and so we have \(n=5000\) and \(m=6000\). The parameter \(\omega \) is chosen to be the same as in [9, 21]. We chose \(\tilde{L}_0 = 1\) for all three stepsize strategies. For the backtracking stepsize strategy, we chose \(\eta = 1.5\).
We compare our proposed probabilistic bound and the deterministic upper bound \(\bar{L}\) obtained from \(\Vert \mathbf {A} \Vert ^2 \le \text {trace}(\mathbf {A}^T \mathbf {A})\). We estimate \(\mu _{\max } = 1289.415\) and \(R = 4955\) using Eqs. (26) and (27), respectively. We thus obtain our probabilistic bound \(\mathcal {U}(0.1) = 646.941\), which is less than the deterministic upper bound \(\bar{L} = 1163.345\).
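The deterministic bound used here, \(\Vert \mathbf{A} \Vert^2 \le \text{trace}(\mathbf{A}^T \mathbf{A})\), is simply the fact that the largest squared singular value is at most the sum of all squared singular values; it is easy to verify numerically on a random matrix (illustrative only, not the Gisette data):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((60, 50))             # any matrix will do for the inequality

spectral_sq = np.linalg.norm(A, ord=2) ** 2   # ||A||^2: largest squared singular value
trace_bound = np.trace(A.T @ A)               # sum of ALL squared singular values

print(spectral_sq <= trace_bound)  # True
```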
Table 1 shows the performance of the three stepsize strategies. T is the scaled computational time, nIter is the scaled number of iterations, nFunEva is the scaled number of function evaluations, and Avg. \(\tilde{L}\) is the average of the \(\tilde{L}\) used. This result favors the two adaptive stepsize strategies, as the number of iterations needed and the computational time are significantly smaller than for the regular backtracking algorithm. This is due to the fact that \(\tilde{L}\) can be much smaller than the Lipschitz constant L in this example, and so the two adaptive strategies provide a more efficient update for FISTA. As shown in Table 1, even though Nesterov's strategy yields a smaller \(\tilde{L}\) on average, it leads to a higher number of function evaluations and so takes more time than PUG.
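The regular backtracking rule compared above can be sketched generically (a minimal sketch for a generic smooth f and proximal operator, not the authors' code): grow the estimate \(\tilde{L}\) by the factor \(\eta\) until FISTA's quadratic upper-bound condition holds at the proximal point.

```python
import numpy as np

def backtracking_step(f, grad_f, prox_g, y, L_tilde, eta=1.5):
    """One backtracking search: grow L_tilde by eta until the FISTA condition
    f(p) <= f(y) + <grad f(y), p - y> + (L/2)||p - y||^2 holds at the
    proximal point p = prox_g(y - grad f(y)/L, 1/L)."""
    g, fy = grad_f(y), f(y)
    while True:
        p = prox_g(y - g / L_tilde, 1.0 / L_tilde)
        diff = p - y
        if f(p) <= fy + g @ diff + 0.5 * L_tilde * (diff @ diff):
            return p, L_tilde
        L_tilde *= eta

# Toy composite problem: f(x) = 1.5*||x||^2 (gradient Lipschitz constant 3)
# plus g(x) = 0.1*||x||_1, whose prox is soft-thresholding.
f = lambda x: 1.5 * (x @ x)
grad_f = lambda x: 3.0 * x
prox_g = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - 0.1 * t, 0.0)

p, L_found = backtracking_step(f, grad_f, prox_g, np.array([2.0]), L_tilde=1.0)
print(L_found)  # 3.375: the first trial value 1.0 * 1.5**k that covers the curvature
```

By the descent lemma the search always terminates once \(\tilde{L}\) reaches the true Lipschitz constant, so the returned value never exceeds \(\eta L\).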
Regularized linear regression
Table 1 Gisette
We also compare the three strategies on an \(\ell _1\)-regularized linear regression problem, also known as LASSO, in which we solve the convex optimization problem
$$\begin{aligned} \min _{\mathbf {x}\in \mathbb {R}^n} \frac{1}{2m} \sum _{i=1}^m ( \mathbf {x}^T\mathbf {a}_i - b_i )^2 + \omega \Vert \mathbf {x} \Vert _1. \end{aligned}$$
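The nonsmooth \(\ell_1\) term is handled in FISTA through its proximal operator, soft-thresholding. A single proximal-gradient step for this LASSO objective, iterated on a small synthetic instance (a generic sketch, not the paper's code), looks like:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1: shrink every entry toward zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_step(x, A, b, omega, L):
    """One proximal-gradient step on (1/2m)||Ax - b||^2 + omega*||x||_1."""
    m = A.shape[0]
    grad = A.T @ (A @ x - b) / m           # gradient of the smooth part
    return soft_threshold(x - grad / L, omega / L)

rng = np.random.default_rng(3)
A = rng.standard_normal((100, 20))
x_true = np.zeros(20)
x_true[:3] = 1.0                           # a 3-sparse ground truth
b = A @ x_true
L = np.linalg.norm(A, ord=2) ** 2 / 100    # Lipschitz constant of the gradient

x = np.zeros(20)
for _ in range(500):
    x = ista_step(x, A, b, omega=1e-3, L=L)
print(np.linalg.norm(x - x_true) < 0.1)  # True: the sparse signal is recovered
```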
We use the dataset YearPredictionMSDt (testing dataset) for \(\mathbf {A}\) and \(\mathbf {b}\). In YearPredictionMSDt, \(\mathbf {A}\) is a \(51630 \times 90\) dense matrix, and so we have \(n=90\) and \(m=51630\). The parameter \(\omega \) is chosen to be \(10^{-6}\). We chose \(\tilde{L}_0 = 1\) for all three stepsize strategies. For the backtracking stepsize strategy, we chose \(\eta = 1.5\).
We compare our proposed probabilistic bound and the deterministic upper bound \(\bar{L}\) obtained from \(\Vert \mathbf {A} \Vert ^2 \le \text {trace}(\mathbf {A}^T \mathbf {A})\). We estimate \(\mu _{\max } = 9.603 \times 10^6\) and \(R = 4.644 \times 10^9\) using Eqs. (26) and (27), respectively. We thus obtain our probabilistic bound \(\mathcal {U}(0.1) = 1.982 \times 10^7\), which is less than the deterministic upper bound \(\bar{L} = 2.495 \times 10^7\).
Table 2 shows the performance of the three stepsize strategies; its structure is the same as Table 1. Unlike Gisette, the adaptive strategies failed to provide a small \(\tilde{L}\) compared to L. Also, since n is very small, function evaluations are very cheap compared to Gisette. Therefore, neither adaptive strategy outperforms the backtracking strategy in this example. However, one can see that both adaptive strategies yielded a reduction in the number of function evaluations. Therefore, one could expect them to outperform the backtracking strategy on larger or more difficult instances.
Table 2 YearPredictionMSDt
Conclusions and perspectives
The analytical results in this paper show the relationship between the Lipschitz constant and the training set of an ERM problem. These results provide insightful information about the complexity of ERM problems, as well as opening up opportunities for new stepsize strategies for optimization problems.
One interesting extension could be to apply the same approach to different machine learning models, such as neural networks, deep learning, etc.
Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009). https://doi.org/10.1137/080716542
Beck, A., Tetruashvili, L.: On the convergence of block coordinate descent type methods. SIAM J. Optim. 23(4), 2037–2060 (2013). https://doi.org/10.1137/120887679
Belloni, A., Chernozhukov, V., Wang, L.: Pivotal estimation via square-root Lasso in nonparametric regression. Ann. Stat. 42(2), 757–788 (2014). https://doi.org/10.1214/14-AOS1204
Burke, J.V., Ferris, M.C.: A Gauss–Newton method for convex composite optimization. Math. Program. 71(2, Ser. A), 179–194 (1995). https://doi.org/10.1007/BF01585997
Candès, E.J., Romberg, J.K., Tao, T.: Stable signal recovery from incomplete and inaccurate measurements. Commun. Pure Appl. Math. 59(8), 1207–1223 (2006). https://doi.org/10.1002/cpa.20124
Donoho, D.L.: Compressed sensing. IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006). https://doi.org/10.1109/TIT.2006.871582
Gonzaga, C.C., Karas, E.W.: Fine tuning Nesterov's steepest descent algorithm for differentiable convex programming. Math. Program. 138(1–2, Ser. A), 141–166 (2013). https://doi.org/10.1007/s10107-012-0541-z
Koltchinskii, V., Mendelson, S.: Bounding the smallest singular value of a random matrix without concentration. Int. Math. Res. Not. 23, 12991–13008 (2015)
Lee, J.D., Sun, Y., Saunders, M.A.: Proximal Newton-type methods for minimizing composite functions. SIAM J. Optim. 24(3), 1420–1443 (2014). https://doi.org/10.1137/130921428
Nesterov, Y.: Introductory Lectures on Convex Optimization: A Basic Course. Applied Optimization, vol. 87. Kluwer Academic Publishers, Boston (2004). https://doi.org/10.1007/978-1-4419-8853-9
Nesterov, Y.: Gradient methods for minimizing composite functions. Math. Program. 140(1, Ser. B), 125–161 (2013). https://doi.org/10.1007/s10107-012-0629-5
Nesterov, Y.: Universal gradient methods for convex optimization problems. Math. Program. 152(1–2, Ser. A), 381–404 (2015). https://doi.org/10.1007/s10107-014-0790-0
Qin, Z., Scheinberg, K., Goldfarb, D.: Efficient block-coordinate descent algorithms for the group Lasso. Math. Program. Comput. 5(2), 143–169 (2013). https://doi.org/10.1007/s12532-013-0051-x
Qu, Z., Richtarik, P.: Coordinate descent with arbitrary sampling II: expected separable overapproximation. arXiv:1412.8063 (2014)
Rudelson, M., Vershynin, R.: Non-asymptotic theory of random matrices: extreme singular values. In: Proceedings of the International Congress of Mathematicians, vol. III, pp. 1576–1602. Hindustan Book Agency, New Delhi (2010)
Saha, A., Tewari, A.: On the nonasymptotic convergence of cyclic coordinate descent methods. SIAM J. Optim. 23(1), 576–601 (2013). https://doi.org/10.1137/110840054
Shalev-Shwartz, S., Ben-David, S.: Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, Cambridge (2014)
Sun, T., Zhang, C.H.: Sparse matrix inversion with scaled Lasso. J. Mach. Learn. Res. 14, 3385–3418 (2013)
Tropp, J.A.: User-friendly tail bounds for sums of random matrices. Found. Comput. Math. 12(4), 389–434 (2012). https://doi.org/10.1007/s10208-011-9099-z
Yamakawa, E., Fukushima, M., Ibaraki, T.: An efficient trust region algorithm for minimizing nondifferentiable composite functions. SIAM J. Sci. Stat. Comput. 10(3), 562–580 (1989). https://doi.org/10.1137/0910036
Yuan, G.X., Ho, C.H., Lin, C.J.: An improved GLMNET for L1-regularized logistic regression. J. Mach. Learn. Res. 13(1), 1999–2030 (2012)
Imperial College Business School, Imperial College London, Ayrton Road, London, SW7 2AZ, UK
Chin Pang Ho
Department of Computing, Imperial College London, 180 Queen's Gate, London, SW7 2AZ, UK
Panos Parpas
Correspondence to Chin Pang Ho.
The work of the authors was supported, in part, by EPSRC Grants EP/M028240/1 and EP/K040723, by FP7 Marie Curie Career Integration Grant (PCIG11-GA-2012-321698 SOC-MP-ES), and by the Imperial College Junior Research Fellowship.
Ho, C.P., Parpas, P. Empirical risk minimization: probabilistic complexity and stepsize strategy. Comput Optim Appl 73, 387–410 (2019). https://doi.org/10.1007/s10589-019-00080-2
Issue Date: 01 June 2019
Number (mult/div)
Multiplication and repeated addition (1,2,3,5,10)
Interpret products (1,2,3,5,10)
Arrays as products (1,2,3,5,10)
Multiplication and division using groups (1,2,3,5,10)
Multiplication and division using arrays (1,2,3,5,10)
Find unknowns in multiplication and division problems (1,2,3)
Find unknowns in multiplication and division problems (5,10)
Find unknowns in multiplication and division problems (1,2,3,5,10)
Multiplication and division tables (2,3,5,10)
Multiplication and division (turn arounds and fact families) (0,1,2,3,5,10)
Number sentences and word problems (1,2,3,5,10)
Extending multiplication and division calculations using patterns (1,2,3,5,10)
Problem solving with multiplication and division (1,2,3,5,10)
How do we know if the calculator is correct? (Investigation)
Multiplication as repeated addition (10x10)
Interpreting products (10x10)
Arrays as products (10x10)
Multiplication and division using groups (10x10)
Multiplication and division using arrays (10x10)
Find unknowns in multiplication and division problems (0,1,2,4,8)
Find unknowns in multiplication and division problems (3,6,7,9)
Find unknowns in multiplication and division problems (mixed)
Multiplication and division tables (10x10)
Multiplication and division (turn arounds and fact families) (10x10)
Find quotients (10x10)
Number sentences and word problems (10x10)
Multiplication and division by 10
Extending multiplication and division calculations using patterns (10x10)
Problem solving with multiplication and division (10x10)
Properties of multiplication (10x10)
Multiplication and division by 10 and 100
Distributive property for multiplication (10x10)
Use the distributive property (10x10)
Use rounding to estimate solutions
Multiply a two digit number by a small single digit number using an area model
Multiply a two digit number by a small single digit number
Multiply a single digit number by a two digit number using an area model
Multiply a single digit number by a two digit number
Multiply a single digit number by a three digit number using an area model
Multiply a single digit number by a three digit number
Multiply a single digit number by a four digit number using an area model
Multiply a single digit number by a four digit number
Multiply 2 two digit numbers using an area model
Multiply 2 two digit numbers
Multiply a two digit number by a 3 digit number
Multiply 3 numbers together
Divide a 2 digit number by a 1 digit number using area or array model
Divide a 2 digit number by a 1 digit number
Divide a 3 digit number by a 1 digit number using short division algorithm
Divide a 3 digit number by a 1 digit number resulting in a remainder
Divide a 3 digit number by a 1 digit number using short division algorithm, with remainders
Multiply various single and double digit numbers
Extend multiplicative strategies to larger numbers
Divide various numbers by single digits
Solve division problems presented within contexts
Solve multiplication and division problems involving objects or words
Multiply various single, 2 and 3 digit numbers
Divide various 4 digit numbers by 2 digit numbers
When we multiply a single-digit number by a two-digit number, there's a great way of using the area of rectangles to help us.
Remember, the area of a rectangle can be found using multiplication:
$\text{Area of a rectangle }=\text{length }\times\text{width }$
Take a look at the video to see how you can break your multiplication problem up into smaller steps, using rectangles.
Remember!
When multiplying a two digit number by a one digit number, we can use the area of a rectangle to help us find the answer.
We can split up the length of a rectangle into smaller measurements so they are easier to multiply.
Multiples of $10$, or other multiplication facts that you know, are a great way to split up a large rectangle into smaller rectangles.
Let's use the area model to find $65\times7$.
(The rectangle's length is split into $60$ and $5$.)
Find the area of the first rectangle.
Find the area of the second rectangle.
What is the total area of the two rectangles?
So what is $65\times7$?
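If you like, you can check the split above with a short computer program (just an illustration): break $65$ into $60+5$, multiply each part by $7$, then add the two areas.

```python
# Area model for 65 x 7: split the length 65 into 60 + 5.
first_area = 60 * 7     # area of the first rectangle
second_area = 5 * 7     # area of the second rectangle
total = first_area + second_area
print(first_area, second_area, total)  # 420 35 455
```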
Fill in the areas of each rectangle.
What is the total area of both rectangles?
Bianca, Derek and Sandy completed the following calculations using area models.
One of them did not get the right answer. Choose the person, and working, that is in error:
Derek completed $41\times7$ using:
(His area model showed areas of $280$ and $7$.)
He found the total area to be $A=287$
Sandy had to work out $54\times4$ and split it up as follows:
(Her rectangle, with width $4$, was split into areas of $200$ and $16$.)
She found the total area to be $A=216$
Bianca wanted to work out $58\times3$ and had:
Where did Bianca make a mistake?
Calculating $3\times50$ to be $180$.
Calculating $3\times8$ to be $24$.
The supermassive black hole coincident with the luminous transient ASASSN-15lh (1710.01045)
T. Krühler, M. Fraser, G. Leloudas, S. Schulze, N. C. Stone, S. van Velzen, R. Amorin, J. Hjorth, P. G. Jonker, D. A. Kann, S. Kim, H. Kuncarayakti, A. Mehner, A. Nicuesa Guelbenzu
Nov. 18, 2017 astro-ph.CO, astro-ph.GA
The progenitors of astronomical transients are linked to a specific stellar population and galactic environment, and observing their host galaxies hence constrains the physical nature of the transient itself. Here, we use imaging from the Hubble Space Telescope, and spatially-resolved, medium resolution spectroscopy from the Very Large Telescope obtained with X-Shooter and MUSE to study the host of the very luminous transient ASASSN-15lh. The dominant stellar population at the transient site is old (around 1 to 2 Gyr), without signs of recent star-formation. We also detect emission from ionized gas, originating from three different, time-invariable, narrow components of collisionally-excited metal and Balmer lines. The ratios of emission lines in the Baldwin-Phillips-Terlevich diagnostic diagram indicate that the ionization source is a weak Active Galactic Nucleus with a black hole mass of $M_\bullet = 5_{-3}^{+8}\cdot10^{8} M_\odot$, derived through the $M_\bullet$-$\sigma$ relation. The narrow line components show spatial and velocity offsets on scales of 1 kpc and 500 km/s, respectively; these offsets are best explained by gas kinematics in the narrow-line region. The location of the central component, which we argue is also the position of the supermassive black hole, aligns with that of the transient within an uncertainty of 170 pc. Using this positional coincidence as well as other similarities with the hosts of Tidal Disruption Events, we strengthen the argument that the transient emission observed as ASASSN-15lh is related to the disruption of a star around a supermassive black hole, most probably spinning with a Kerr parameter $a_\bullet\gtrsim0.5$.
LOFAR MSSS: Discovery of a 2.56 Mpc giant radio galaxy associated with a disturbed galaxy group (1702.01571)
A. O. Clarke, G. Heald, T. Jarrett, J. D. Bray, M. J. Hardcastle, T. M. Cantwell, A. M. M. Scaife, M. Brienza, A. Bonafede, R. P. Breton, J. W. Broderick, D. Carbone, J. H. Croston, J. S. Farnes, J. J. Harwood, V. Heesen, A. Horneffer, A. J. van der Horst, M. Iacobelli, W. Jurusik, G. Kokotanekov, J. P. McKean, L. K. Morabito, D. D. Mulcahy, B.S. Nikiel-Wroczynski, E. Orru, R. Paladino, M. Pandey-Pommier, M. Pietka, R. Pizzo, L. Pratley, C. J. Riseley, H. J. A. Rottgering, A. Rowlinson, J. Sabater, K. Sendlinger, A. Shulevski, S. S. Sridhar, A. J. Stewart, C. Tasse, S. van Velzen, R. J. van Weeren, M. W. Wise
Feb. 6, 2017 astro-ph.CO, astro-ph.GA
We report on the discovery in the LOFAR Multifrequency Snapshot Sky Survey (MSSS) of a giant radio galaxy (GRG) with a projected size of $2.56 \pm 0.07$ Mpc projected on the sky. It is associated with the galaxy triplet UGC 9555, within which one is identified as a broad-line galaxy in the Sloan Digital Sky Survey (SDSS) at a redshift of $0.05453 \pm 1 \times 10^{-5} $, and with a velocity dispersion of $215.86 \pm 6.34$ km/s. From archival radio observations we see that this galaxy hosts a compact flat-spectrum radio source, and we conclude that it is the active galactic nucleus (AGN) responsible for generating the radio lobes. The radio luminosity distribution of the jets, and the broad-line classification of the host AGN, indicate this GRG is orientated well out of the plane of the sky, making its physical size one of the largest known for any GRG. Analysis of the infrared data suggests that the host is a lenticular type galaxy with a large stellar mass ($\log~\mathrm{M}/\mathrm{M}_\odot = 11.56 \pm 0.12$), and a moderate star formation rate ($1.2 \pm 0.3~\mathrm{M}_\odot/\mathrm{year}$). Spatially smoothing the SDSS images shows the system around UGC 9555 to be significantly disturbed, with a prominent extension to the south-east. Overall, the evidence suggests this host galaxy has undergone one or more recent moderate merger events and is also experiencing tidal interactions with surrounding galaxies, which have caused the star formation and provided the supply of gas to trigger and fuel the Mpc-scale radio lobes.
The Superluminous Transient ASASSN-15lh as a Tidal Disruption Event from a Kerr Black Hole (1609.02927)
G. Leloudas, M. Fraser, N. C. Stone, S. van Velzen, P. G. Jonker, I. Arcavi, C. Fremling, J. R. Maund, S. J. Smartt, T. Kruhler, J. C. A. Miller-Jones, P. M. Vreeswijk, A. Gal-Yam, P. A. Mazzali, A. De Cia, D. A. Howell, C. Inserra, F. Patat, A. de Ugarte Postigo, O. Yaron, C. Ashall, I. Bar, H. Campbell, T.-W. Chen, M. Childress, N. Elias-Rosa, J. Harmanen, G. Hosseinzadeh, J. Johansson, T. Kangas, E. Kankare, S. Kim, H. Kuncarayakti, J. Lyman, M. R. Magee, K. Maguire, D. Malesani, S. Mattila, C. V. McCully, M. Nicholl, S. Prentice, C. Romero-Canizales, S. Schulze, K. W. Smith, J. Sollerman, M. Sullivan, B. E. Tucker, S. Valenti, J. C. Wheeler, D. R. Young
Dec. 11, 2016 astro-ph.GA, astro-ph.SR, astro-ph.HE
When a star passes within the tidal radius of a supermassive black hole, it will be torn apart. For a star with the mass of the Sun ($M_\odot$) and a non-spinning black hole with a mass $<10^8 M_\odot$, the tidal radius lies outside the black hole event horizon and the disruption results in a luminous flare. Here we report observations over a period of 10 months of a transient, hitherto interpreted as a superluminous supernova. Our data show that the transient rebrightened substantially in the ultraviolet and that the spectrum went through three different spectroscopic phases without ever becoming nebular. Our observations are more consistent with a tidal disruption event than a superluminous supernova because of the temperature evolution, the presence of highly ionised CNO gas in the line of sight and our improved localisation of the transient in the nucleus of a passive galaxy, where the presence of massive stars is highly unlikely. While the supermassive black hole has a mass $> 10^8 M_\odot$, a star with the same mass as the Sun could be disrupted outside the event horizon if the black hole were spinning rapidly. The rapid spin and high black hole mass can explain the high luminosity of this event.
The Lockman Hole project: LOFAR observations and spectral index properties of low-frequency radio sources (1609.00537)
E. K. Mahony, R. Morganti, I. Prandoni, I. M. van Bemmel, T. W. Shimwell, M. Brienza, P. N. Best, M. Brüggen, G. Calistro Rivera, F. de Gasperin, M. J. Hardcastle, J. J. Harwood, G. Heald, M. J. Jarvis, S. Mandal, G. K. Miley, E. Retana-Montenegro, H. J. A. Röttgering, J. Sabater, C. Tasse, S. van Velzen, R. J. van Weeren, W. L. Williams, G. J. White
Sept. 2, 2016 astro-ph.CO, astro-ph.GA
The Lockman Hole is a well-studied extragalactic field with extensive multi-band ancillary data covering a wide range in frequency, essential for characterising the physical and evolutionary properties of the various source populations detected in deep radio fields (mainly star-forming galaxies and AGNs). In this paper we present new 150-MHz observations carried out with the LOw Frequency ARray (LOFAR), allowing us to explore a new spectral window for the faint radio source population. This 150-MHz image covers an area of 34.7 square degrees with a resolution of 18.6$\times$14.7 arcsec and reaches an rms of 160 $\mu$Jy beam$^{-1}$ at the centre of the field. As expected for a low-frequency selected sample, the vast majority of sources exhibit steep spectra, with a median spectral index of $\alpha_{150}^{1400}=-0.78\pm0.015$. The median spectral index becomes slightly flatter (increasing from $\alpha_{150}^{1400}=-0.84$ to $\alpha_{150}^{1400}=-0.75$) with decreasing flux density down to $S_{150} \sim$10 mJy before flattening out and remaining constant below this flux level. For a bright subset of the 150-MHz selected sample we can trace the spectral properties down to lower frequencies using 60-MHz LOFAR observations, finding tentative evidence for sources to become flatter in spectrum between 60 and 150 MHz. Using the deep, multi-frequency data available in the Lockman Hole, we identify a sample of 100 Ultra-steep spectrum (USS) sources and 13 peaked spectrum sources. We estimate that up to 21 percent of these could have $z>4$ and are candidate high-$z$ radio galaxies, but further follow-up observations are required to confirm the physical nature of these objects.
Prototype muon detectors for the AMIGA component of the Pierre Auger Observatory (1605.01625)
The Pierre Auger Collaboration: A. Aab, P. Abreu, M. Aglietta, E.J. Ahn, I. Al Samarai, I.F.M. Albuquerque, I. Allekotte, P. Allison, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, R. Alves Batista, M. Ambrosio, A. Aminaei, G.A. Anastasi, L. Anchordoqui, B. Andrada, S. Andringa, C. Aramo, F. Arqueros, N. Arsene, H. Asorey, P. Assis, J. Aublin, G. Avila, N. Awal, A.M. Badescu, C. Baus, J.J. Beatty, K.H. Becker, J.A. Bellido, C. Berat, M.E. Bertaina, X. Bertou, P.L. Biermann, P. Billoir, S.G. Blaess, A. Blanco, M. Blanco, J. Blazek, C. Bleve, H. Blümer, M. Boháčová, D. Boncioli, C. Bonifazi, N. Borodai, A.M. Botti, J. Brack, I. Brancus, T. Bretz, A. Bridgeman, P. Brogueira, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K.S. Caballero-Mora, B. Caccianiga, L. Caccianiga, M. Candusso, L. Caramete, R. Caruso, A. Castellina, G. Cataldi, L. Cazon, R. Cester, A.G. Chavez, A. Chiavassa, J.A. Chinellato, J. Chudoba, M. Cilmo, R.W. Clay, G. Cocciolo, R. Colalillo, A. Coleman, L. Collica, M.R. Coluccia, R. Conceição, F. Contreras, M.J. Cooper, A. Cordier, S. Coutu, C.E. Covault, J. Cronin, R. Dallier, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, S.J. de Jong, G. De Mauro, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, L. del Peral, O. Deligny, N. Dhital, C. Di Giulio, A. Di Matteo, J.C. Diaz, M.L. Díaz Castro, F. Diogo, C. Dobrigkeit, W. Docters, J.C. D'Olivo, A. Dorofeev, Q. Dorosti Hasankiadeh, R.C. dos Anjos, M.T. Dova, J. Ebr, R. Engel, M. Erdmann, M. Erfani, C.O. Escobar, J. Espadanal, A. Etchegoyen, H. Falcke, K. Fang, G. Farrar, A.C. Fauth, N. Fazzini, A.P. Ferguson, B. Fick, J.M. Figueira, A. Filevich, A. Filipčič, O. Fratu, M.M. Freire, T. Fujii, A. Fuster, F. Gallo, B. García, D. García-Gámez, D. Garcia-Pinto, F. Gate, H. Gemmeke, A. Gherghel-Lascu, P.L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, D. Głas, C. Glaser, H. Glass, G. Golup, M. Gómez Berisso, P.F. Gómez Vitale, N. González, B. Gookin, J. Gordon, A. Gorgi, P. 
Gorham, P. Gouffon, N. Griffith, A.F. Grillo, T.D. Grubb, F. Guarino, G.P. Guedes, M.R. Hampel, P. Hansen, D. Harari, T.A. Harrison, S. Hartmann, J.L. Harton, A. Haungs, T. Hebbeker, D. Heck, P. Heimann, A.E. Hervé, G.C. Hill, C. Hojvat, N. Hollon, E. Holt, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, D. Huber, T. Huege, A. Insolia, P.G. Isar, I. Jandt, S. Jansen, C. Jarne, J.A. Johnsen, M. Josebachuili, A. Kääpä, O. Kambeitz, K.H. Kampert, P. Kasper, I. Katkov, B. Keilhauer, E. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, D. Kuempel, G. Kukec Mezek, N. Kunka, A.W. Kuotb Awad, D. LaHurd, L. Latronico, R. Lauer, M. Lauscher, P. Lautridou, S. Le Coz, D. Lebrun, P. Lebrun, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, L. Lopes, R. López, A. López Casado, K. Louedec, A. Lucero, M. Malacari, M. Mallamaci, J. Maller, D. Mandat, P. Mantsch, A.G. Mariazzi, V. Marin, I.C. Mariş, G. Marsella, D. Martello, H. Martinez, O. Martínez Bravo, D. Martraire, J.J. Masías Meza, H.J. Mathes, S. Mathys, J. Matthews, J.A.J. Matthews, G. Matthiae, D. Maurizio, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, R. Meissner, V.B.B. Mello, D. Melo, A. Menshikov, S. Messina, M.I. Micheletti, L. Middendorf, I.A. Minaya, L. Miramonti, B. Mitrica, L. Molina-Bueno, S. Mollerach, F. Montanet, C. Morello, M. Mostafá, C.A. Moura, M.A. Muller, G. Müller, S. Müller, S. Navas, P. Necesal, L. Nellen, A. Nelles, J. Neuser, P.H. Nguyen, M. Niculescu-Oglinzanu, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, L. Nožka, L.A. Núñez, L. Ochilo, F. Oikonomou, A. Olinto, N. Pacheco, D. Pakk Selmi-Dei, M. Palatka, J. Pallotta, P. Papenbreer, G. Parente, A. Parra, T. Paul, M. Pech, J. Pękala, R. Pelayo, I.M. Pepe, L. Perrone, E. Petermann, C. Peters, S. Petrera, Y. Petrov, J. Phuntsok, R. Piegaia, T. Pierog, P. Pieroni, M. Pimenta, V. Pirronello, M. Platino, M. Plum, A. Porcelli, C. Porowski, R.R. Prado, P. 
Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, J. Rautenberg, O. Ravel, D. Ravignani, D. Reinert, B. Revenu, J. Ridky, M. Risse, P. Ristori, V. Rizi, W. Rodrigues de Carvalho, J. Rodriguez Rojo, M.D. Rodríguez-Frías, D. Rogozin, J. Rosado, M. Roth, E. Roulet, A.C. Rovero, S.J. Saffi, A. Saftoiu, H. Salazar, A. Saleh, F. Salesa Greus, G. Salina, J.D. Sanabria Gomez, F. Sánchez, P. Sanchez-Lucas, E.M. Santos, E. Santos, F. Sarazin, B. Sarkar, R. Sarmento, C. Sarmiento-Cano, R. Sato, C. Scarso, M. Schauer, V. Scherini, H. Schieler, D. Schmidt, O. Scholten, H. Schoorlemmer, P. Schovánek, F.G. Schröder, A. Schulz, J. Schulz, J. Schumacher, S.J. Sciutto, A. Segreto, M. Settimo, A. Shadkam, R.C. Shellard, G. Sigl, O. Sima, A. Śmiałkowski, R. Šmída, G.R. Snow, P. Sommers, S. Sonntag, J. Sorokin, R. Squartini, Y.N. Srivastava, D. Stanca, S. Stanič, J. Stapleton, J. Stasielak, M. Stephan, A. Stutz, F. Suarez, M. Suarez Durán, T. Suomijärvi, A.D. Supanitsky, M.S. Sutherland, J. Swain, Z. Szadkowski, O.A. Taborda, A. Tapia, A. Tepe, V.M. Theodoro, O. Tibolla, C. Timmermans, C.J. Todero Peixoto, G. Toma, L. Tomankova, B. Tomé, A. Tonachini, G. Torralba Elipe, D. Torres Machado, P. Travnicek, M. Trini, R. Ulrich, M. Unger, M. Urban, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, P. van Bodegom, A.M. van den Berg, S. van Velzen, A. van Vliet, E. Varela, B. Vargas Cárdenas, G. Varner, R. Vasquez, J.R. Vázquez, R.A. Vázquez, D. Veberič, V. Verzi, J. Vicha, M. Videla, L. Villaseñor, B. Vlcek, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, K. Weidenhaupt, A. Weindl, F. Werner, A. Widom, L. Wiencke, H. Wilczyński, T. Winchen, D. Wittkowski, B. Wundheiler, S. Wykes, L. Yang, T. Yapici, A. Yushkov, E. Zas, D. Zavrtanik, M. Zavrtanik, A. Zepeda, B. Zimmermann, M. Ziolkowski, F. Zuccarello
May 12, 2016 hep-ex, physics.ins-det
AMIGA (Auger Muons and Infill for the Ground Array) is an upgrade of the Pierre Auger Observatory to extend its range of detection and to directly measure the muon content of the particle showers. It consists of an infill of surface water-Cherenkov detectors accompanied by buried scintillator detectors used for muon counting. The main objectives of the AMIGA engineering array, referred to as the Unitary Cell, are to identify and resolve all engineering issues as well as to understand the muon-number counting uncertainties related to the design of the detector. The mechanical design, fabrication and deployment processes of the muon counters of the Unitary Cell are described in this document. These muon counter modules comprise sealed PVC casings containing plastic scintillation bars, wavelength-shifter optical fibers, 64-pixel photomultiplier tubes, and acquisition electronics. The modules are buried approximately 2.25 m below ground level in order to minimize contamination from electromagnetic shower particles. The mechanical setup, which allows access to the electronics for maintenance, is also described in addition to tests of the modules' response and integrity. The completed Unitary Cell has measured a number of air showers, and a first analysis of a sample event is included here.
The Pierre Auger Observatory Upgrade - Preliminary Design Report (1604.03637)
The Pierre Auger Collaboration: A. Aab, P. Abreu, M. Aglietta, E. J. Ahn, I. Al Samarai, I. F. M. Albuquerque, I. Allekotte, P. Allison, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, R. Alves Batista, M. Ambrosio, A. Aminaei, L. Anchordoqui, S. Andringa, C. Aramo, F. Arqueros, N. Arsene, H. Asorey, P. Assis, J. Aublin, M. Ave, M. Avenier, G. Avila, N. Awal, A. M. Badescu, K. B. Barber, J. Bäuml, C. Baus, J. J. Beatty, K. H. Becker, J. A. Bellido, C. Berat, M. E. Bertaina, X. Bertou, P. L. Biermann, P. Billoir, S. G. Blaess, A. Blanco, M. Blanco, J. Blazek, C. Bleve, H. Blümer, M. Boháčová, D. Boncioli, C. Bonifazi, N. Borodai, J. Brack, I. Brancus, A. Bridgeman, P. Brogueira, W. C. Brown, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K. S. Caballero-Mora, B. Caccianiga, L. Caccianiga, M. Candusso, L. Caramete, R. Caruso, A. Castellina, G. Cataldi, L. Cazon, R. Cester, A. G. Chavez, A. Chiavassa, J. A. Chinellato, J. Chudoba, M. Cilmo, R. W. Clay, G. Cocciolo, R. Colalillo, A. Coleman, L. Collica, M. R. Coluccia, R. Conceição, F. Contreras, M. J. Cooper, A. Cordier, S. Coutu, C. E. Covault, J. Cronin, R. Dallier, B. Daniel, S. Dasso, K. Daumiller, B. R. Dawson, R. M. de Almeida, S. J. de Jong, G. De Mauro, J. R. T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, L. del Peral, O. Deligny, H. Dembinski, N. Dhital, C. Di Giulio, A. Di Matteo, J. C. Diaz, M. L. Díaz Castro, F. Diogo, C. Dobrigkeit, W. Docters, J. C. D'Olivo, A. Dorofeev, Q. Dorosti Hasankiadeh, M. T. Dova, J. Ebr, R. Engel, M. Erdmann, M. Erfani, C. O. Escobar, J. Espadanal, A. Etchegoyen, H. Falcke, K. Fang, G. Farrar, A. C. Fauth, N. Fazzini, A. P. Ferguson, M. Fernandes, B. Fick, J. M. Figueira, A. Filevich, A. Filipčič, B. D. Fox, O. Fratu, M. M. Freire, B. Fuchs, T. Fujii, B. García, D. Garcia-Pinto, F. Gate, H. Gemmeke, A. Gherghel-Lascu, P. L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, D. Głas, C. Glaser, H. Glass, G. Golup, M. Gómez Berisso, P. F. Gómez Vitale, N. González, B. 
Gookin, J. Gordon, A. Gorgi, P. Gorham, P. Gouffon, N. Griffith, A. F. Grillo, T. D. Grubb, F. Guarino, G. P. Guedes, M. R. Hampel, P. Hansen, D. Harari, T. A. Harrison, S. Hartmann, J. L. Harton, A. Haungs, T. Hebbeker, D. Heck, P. Heimann, N. Hemery, A. E. Herve, G. C. Hill, C. Hojvat, N. Hollon, E. Holt, P. Homola, J. R. Hörandel, P. Horvath, M. Hrabovský, D. Huber, T. Huege, A. Insolia, P. G. Isar, I. Jandt, S. Jansen, C. Jarne, J. A. Johnsen, M. Josebachuili, A. Kääpä, O. Kambeitz, K. H. Kampert, P. Kasper, I. Katkov, B. Kégl, B. Keilhauer, A. Keivani, E. Kemp, R. M. Kieckhafer, H. O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, O. Krömer, D. Kuempel, G. Kukec Mezek, N. Kunka, D. LaHurd, L. Latronico, R. Lauer, M. Lauscher, P. Lautridou, S. Le Coz, D. Lebrun, P. Lebrun, M. A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, L. Lopes, R. López, A. López Casado, K. Louedec, L. Lu, A. Lucero, M. Malacari, S. Maldera, M. Mallamaci, J. Maller, D. Mandat, P. Mantsch, A. G. Mariazzi, V. Marin, I. C. Mariş, G. Marsella, D. Martello, L. Martin, H. Martinez, O. Martínez Bravo, D. Martraire, J. J. Masías Meza, H. J. Mathes, S. Mathys, J. Matthews, J. A. J. Matthews, G. Matthiae, D. Maurizio, E. Mayotte, P. O. Mazur, C. Medina, G. Medina-Tanco, R. Meissner, V. B. B. Mello, D. Melo, A. Menshikov, S. Messina, R. Meyhandan, M. I. Micheletti, L. Middendorf, I. A. Minaya, L. Miramonti, B. Mitrica, L. Molina-Bueno, S. Mollerach, F. Montanet, C. Morello, M. Mostafá, C. A. Moura, M. A. Muller, G. Müller, S. Müller, R. Mussa, G. Navarra, S. Navas, P. Necesal, L. Nellen, A. Nelles, J. Neuser, P. H. Nguyen, M. Niculescu-Oglinzanu, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, L. Nožka, L. Ochilo, F. Oikonomou, A. Olinto, N. Pacheco, D. Pakk Selmi-Dei, M. Palatka, J. Pallotta, P. Papenbreer, G. Parente, A. Parra, T. Paul, M. Pech, J. Pȩkala, R. Pelayo, I. M. Pepe, L. Perrone, E. Petermann, C. Peters, S. Petrera, Y. Petrov, J. 
Phuntsok, R. Piegaia, T. Pierog, P. Pieroni, M. Pimenta, V. Pirronello, M. Platino, M. Plum, A. Porcelli, C. Porowski, R. R. Prado, P. Privitera, M. Prouza, V. Purrello, E. J. Quel, S. Querchfeld, S. Quinn, J. Rautenberg, O. Ravel, D. Ravignani, D. Reinert, B. Revenu, J. Ridky, S. Riggi, M. Risse, P. Ristori, V. Rizi, W. Rodrigues de Carvalho, G. Rodriguez Fernandez, J. Rodriguez Rojo, M. D. Rodríguez-Frías, D. Rogozin, J. Rosado, M. Roth, E. Roulet, A. C. Rovero, S. J. Saffi, A. Saftoiu, F. Salamida, H. Salazar, A. Saleh, F. Salesa Greus, G. Salina, F. Sánchez, P. Sanchez-Lucas, E. Santos, E. M. Santos, F. Sarazin, B. Sarkar, R. Sarmento, R. Sato, C. Scarso, M. Schauer, V. Scherini, H. Schieler, D. Schmidt, O. Scholten, H. Schoorlemmer, P. Schovánek, F. G. Schröder, A. Schulz, J. Schulz, J. Schumacher, S. J. Sciutto, A. Segreto, M. Settimo, A. Shadkam, R. C. Shellard, I. Sidelnik, G. Sigl, O. Sima, A. Śmiałkowski, R. Šmída, G. R. Snow, P. Sommers, J. Sorokin, R. Squartini, Y. N. Srivastava, D. Stanca, S. Stanič, J. Stapleton, J. Stasielak, M. Stephan, A. Stutz, F. Suarez, T. Suomijärvi, A. D. Supanitsky, M. S. Sutherland, J. Swain, Z. Szadkowski, O. A. Taborda, A. Tapia, A. Tepe, V. M. Theodoro, C. Timmermans, C. J. Todero Peixoto, G. Toma, L. Tomankova, B. Tomé, A. Tonachini, G. Torralba Elipe, D. Torres Machado, P. Travnicek, M. Trini, R. Ulrich, M. Unger, M. Urban, J. F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, P. van Bodegom, A. M. van den Berg, S. van Velzen, A. van Vliet, E. Varela, B. Vargas Cárdenas, G. Varner, R. Vasquez, J. R. Vázquez, R. A. Vázquez, D. Veberič, V. Verzi, J. Vicha, M. Videla, L. Villaseñor, B. Vlcek, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A. A. Watson, M. Weber, K. Weidenhaupt, A. Weindl, F. Werner, A. Widom, L. Wiencke, H. Wilczyński, T. Winchen, D. Wittkowski, B. Wundheiler, S. Wykes, L. Yang, T. Yapici, A. Yushkov, E. Zas, D. Zavrtanik, M. Zavrtanik, A. Zepeda, Y. Zhu, B. Zimmermann, M. Ziolkowski, Z. Zong, F. 
Zuccarello
April 13, 2016 astro-ph.IM, astro-ph.HE
The Pierre Auger Observatory has begun a major upgrade of its already impressive capabilities, with an emphasis on improved mass-composition determination using the surface detectors of the Observatory. Known as AugerPrime, the upgrade will include new 4 m$^2$ plastic scintillator detectors on top of all 1660 water-Cherenkov detectors, updated and more flexible surface-detector electronics, a large array of buried muon detectors, and an extended duty cycle for operations of the fluorescence detectors. This Preliminary Design Report was produced by the Collaboration in April 2015 as an internal document and as information for funding agencies. It outlines the scientific and technical case for AugerPrime. We now release it to the public via the arXiv server. We invite you to review the large number of fundamental results already achieved by the Observatory and our plans for the future.
Nanosecond-level time synchronization of autonomous radio detector stations for extensive air showers (1512.02216)
The Pierre Auger Collaboration: A. Aab, P. Abreu, M. Aglietta, E.J. Ahn, I. Al Samarai, I.F.M. Albuquerque, I. Allekotte, P. Allison, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, R. Alves Batista, M. Ambrosio, A. Aminaei, G.A. Anastasi, L. Anchordoqui, S. Andringa, C. Aramo, F. Arqueros, N. Arsene, H. Asorey, P. Assis, J. Aublin, G. Avila, N. Awal, A.M. Badescu, C. Baus, J.J. Beatty, K.H. Becker, J.A. Bellido, C. Berat, M.E. Bertaina, X. Bertou, P.L. Biermann, P. Billoir, S.G. Blaess, A. Blanco, M. Blanco, J. Blazek, C. Bleve, H. Blümer, M. Boháčová, D. Boncioli, C. Bonifazi, N. Borodai, J. Brack, I. Brancus, T. Bretz, A. Bridgeman, P. Brogueira, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K.S. Caballero-Mora, B. Caccianiga, L. Caccianiga, M. Candusso, L. Caramete, R. Caruso, A. Castellina, G. Cataldi, L. Cazon, R. Cester, A.G. Chavez, A. Chiavassa, J.A. Chinellato, J. Chudoba, M. Cilmo, R.W. Clay, G. Cocciolo, R. Colalillo, A. Coleman, L. Collica, M.R. Coluccia, R. Conceição, F. Contreras, M.J. Cooper, A. Cordier, S. Coutu, C.E. Covault, J. Cronin, R. Dallier, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, S.J. de Jong, G. De Mauro, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, L. del Peral, O. Deligny, N. Dhital, C. Di Giulio, A. Di Matteo, J.C. Diaz, M.L. Díaz Castro, F. Diogo, C. Dobrigkeit, W. Docters, J.C. D'Olivo, A. Dorofeev, Q. Dorosti Hasankiadeh, R.C. dos Anjos, M.T. Dova, J. Ebr, R. Engel, M. Erdmann, M. Erfani, C.O. Escobar, J. Eser, J. Espadanal, A. Etchegoyen, H. Falcke, K. Fang, G. Farrar, A.C. Fauth, N. Fazzini, A.P. Ferguson, B. Fick, J.M. Figueira, A. Filevich, A. Filipčič, O. Fratu, M.M. Freire, T. Fujii, B. García, D. García-Gámez, D. Garcia-Pinto, F. Gate, H. Gemmeke, A. Gherghel-Lascu, P.L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, D. Głas, C. Glaser, H. Glass, G. Golup, M. Gómez Berisso, P.F. Gómez Vitale, N. González, B. Gookin, J. Gordon, A. Gorgi, P. Gorham, P. Gouffon, N. Griffith, A.F. 
Grillo, T.D. Grubb, F. Guarino, G.P. Guedes, M.R. Hampel, P. Hansen, D. Harari, T.A. Harrison, S. Hartmann, J.L. Harton, A. Haungs, T. Hebbeker, D. Heck, P. Heimann, A.E. Hervé, G.C. Hill, C. Hojvat, N. Hollon, E. Holt, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, D. Huber, T. Huege, A. Insolia, P.G. Isar, I. Jandt, S. Jansen, C. Jarne, J.A. Johnsen, M. Josebachuili, A. Kääpä, O. Kambeitz, K.H. Kampert, P. Kasper, I. Katkov, B. Keilhauer, E. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, D. Kuempel, G. Kukec Mezek, N. Kunka, A.W. Kuotb Awad, D. LaHurd, A. Lang, L. Latronico, R. Lauer, M. Lauscher, P. Lautridou, S. Le Coz, D. Lebrun, P. Lebrun, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, L. Lopes, R. López, A. López Casado, K. Louedec, A. Lucero, M. Malacari, M. Mallamaci, J. Maller, D. Mandat, P. Mantsch, A.G. Mariazzi, V. Marin, I.C. Mariş, G. Marsella, D. Martello, H. Martinez, O. Martínez Bravo, D. Martraire, J.J. Masías Meza, H.J. Mathes, S. Mathys, J. Matthews, J.A.J. Matthews, G. Matthiae, D. Maurizio, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, R. Meissner, V.B.B. Mello, D. Melo, A. Menshikov, S. Messina, M.I. Micheletti, L. Middendorf, I.A. Minaya, L. Miramonti, B. Mitrica, L. Molina-Bueno, S. Mollerach, F. Montanet, C. Morello, M. Mostafá, C.A. Moura, G. Müller, M.A. Muller, S. Müller, S. Navas, P. Necesal, L. Nellen, A. Nelles, J. Neuser, P.H. Nguyen, M. Niculescu-Oglinzanu, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, L. Nožka, L.A. Núñez, L. Ochilo, F. Oikonomou, A. Olinto, N. Pacheco, D. Pakk Selmi-Dei, M. Palatka, J. Pallotta, P. Papenbreer, G. Parente, A. Parra, T. Paul, M. Pech, J. Pękala, R. Pelayo, I.M. Pepe, L. Perrone, E. Petermann, C. Peters, S. Petrera, Y. Petrov, J. Phuntsok, R. Piegaia, T. Pierog, P. Pieroni, M. Pimenta, V. Pirronello, M. Platino, M. Plum, A. Porcelli, C. Porowski, R.R. Prado, P. Privitera, M. Prouza, E.J. Quel, S. 
Querchfeld, S. Quinn, J. Rautenberg, O. Ravel, D. Ravignani, D. Reinert, B. Revenu, J. Ridky, M. Risse, P. Ristori, V. Rizi, W. Rodrigues de Carvalho, J. Rodriguez Rojo, M.D. Rodríguez-Frías, D. Rogozin, J. Rosado, M. Roth, E. Roulet, A.C. Rovero, S.J. Saffi, A. Saftoiu, H. Salazar, A. Saleh, F. Salesa Greus, G. Salina, J.D. Sanabria Gomez, F. Sánchez, P. Sanchez-Lucas, E.M. Santos, E. Santos, F. Sarazin, B. Sarkar, R. Sarmento, C. Sarmiento-Cano, R. Sato, C. Scarso, M. Schauer, V. Scherini, H. Schieler, D. Schmidt, O. Scholten, H. Schoorlemmer, P. Schovánek, F.G. Schröder, A. Schulz, J. Schulz, J. Schumacher, S.J. Sciutto, A. Segreto, M. Settimo, A. Shadkam, R.C. Shellard, G. Sigl, O. Sima, A. Śmiałkowski, R. Šmída, G.R. Snow, P. Sommers, S. Sonntag, J. Sorokin, R. Squartini, Y.N. Srivastava, D. Stanca, S. Stanič, J. Stapleton, J. Stasielak, M. Stephan, A. Stutz, F. Suarez, M. Suarez Durán, T. Suomijärvi, A.D. Supanitsky, M.S. Sutherland, J. Swain, Z. Szadkowski, O.A. Taborda, A. Tapia, A. Tepe, V.M. Theodoro, C. Timmermans, C.J. Todero Peixoto, G. Toma, L. Tomankova, B. Tomé, A. Tonachini, G. Torralba Elipe, D. Torres Machado, P. Travnicek, M. Trini, R. Ulrich, M. Unger, M. Urban, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, P. van Bodegom, A.M. van den Berg, S. van Velzen, A. van Vliet, E. Varela, B. Vargas Cárdenas, G. Varner, R. Vasquez, J.R. Vázquez, R.A. Vázquez, D. Veberič, V. Verzi, J. Vicha, M. Videla, L. Villaseñor, B. Vlcek, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, K. Weidenhaupt, A. Weindl, F. Werner, A. Widom, L. Wiencke, H. Wilczyński, T. Winchen, D. Wittkowski, B. Wundheiler, S. Wykes, L. Yang, T. Yapici, A. Yushkov, E. Zas, D. Zavrtanik, M. Zavrtanik, A. Zepeda, B. Zimmermann, M. Ziolkowski, F. Zuccarello
Feb. 15, 2016 hep-ex, physics.ins-det
To exploit the full potential of radio measurements of cosmic-ray air showers at MHz frequencies, a detector timing synchronization within 1 ns is needed. Large distributed radio detector arrays such as the Auger Engineering Radio Array (AERA) rely on timing via the Global Positioning System (GPS) for the synchronization of individual detector station clocks. Unfortunately, GPS timing is expected to have an accuracy no better than about 5 ns. In practice, in particular in AERA, the GPS clocks exhibit drifts on the order of tens of ns. We developed a technique to correct for the GPS drifts, and we use an independent method to cross-check that this correction indeed reaches nanosecond-scale timing accuracy. First, we operate a "beacon transmitter" which emits defined sine waves that are detected by the AERA antennas and recorded within the physics data. The relative phasing of these sine waves can be used to correct for GPS clock drifts. In addition, we observe radio pulses emitted by commercial airplanes, whose positions we determine in real time from Automatic Dependent Surveillance Broadcasts intercepted with a software-defined radio. From the known source location and the measured arrival times of the pulses we determine relative timing offsets between radio detector stations. We demonstrate with a combined analysis that the two methods give a consistent timing calibration with an accuracy of 2 ns or better. Consequently, the beacon method alone can be used in the future to continuously determine and correct for GPS clock drifts in each individual event measured by AERA.
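The beacon correction described in this abstract amounts to converting the measured phase of a known sine tone into a relative clock offset. A minimal sketch of that conversion, for a single illustrative tone (the helper name and the ~58.9 MHz frequency are our placeholders, not values taken from the paper):

```python
import math

def clock_drift_from_beacon(phase_ref_rad, phase_meas_rad, beacon_freq_hz):
    """Convert the measured phase of a continuous beacon tone, relative to a
    reference station, into a relative clock offset in seconds.

    A phase is only defined modulo 2*pi, so the offset is only unambiguous
    within one beacon period; resolving larger offsets needs independent
    information (e.g. the airplane pulses mentioned in the abstract).
    """
    dphi = (phase_meas_rad - phase_ref_rad) % (2 * math.pi)
    if dphi > math.pi:  # map to (-pi, pi] so negative drifts are signed
        dphi -= 2 * math.pi
    return dphi / (2 * math.pi * beacon_freq_hz)

# A 90-degree phase shift of a ~58.9 MHz tone corresponds to a quarter
# period, i.e. roughly 4.2 ns of relative clock drift.
drift_s = clock_drift_from_beacon(0.0, math.pi / 2, 58.887e6)
```

In practice several beacon tones at different frequencies would be combined to extend the unambiguous range beyond a single period.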
LOFAR MSSS: Detection of a low-frequency radio transient in 400 hrs of monitoring of the North Celestial Pole (1512.00014)
A. J. Stewart, R. P. Fender, J. W. Broderick, T. E. Hassall, T. Muñoz-Darias, A. Rowlinson, J. D. Swinbank, T. D. Staley, G. J. Molenaar, B. Scheers, T. L. Grobler, M. Pietka, G. Heald, J. P. McKean, M. E. Bell, A. Bonafede, R. P. Breton, D. Carbone, Y. Cendes, A. O. Clarke, S. Corbel, F. de Gasperin, J. Eislöffel, H. Falcke, C. Ferrari, J.-M. Grießmeier, M. J. Hardcastle, V. Heesen, J. W. T. Hessels, A. Horneffer, M. Iacobelli, P. Jonker, A. Karastergiou, G. Kokotanekov, V. I. Kondratiev, M. Kuniyoshi, C. J. Law, J. van Leeuwen, S. Markoff, J. C. A. Miller-Jones, D. Mulcahy, E. Orru, M. Pandey-Pommier, L. Pratley, E. Rol, H. J. A. Röttgering, A. M. M. Scaife, A. Shulevski, C. A. Sobey, B. W. Stappers, C. Tasse, A. J. van der Horst, S. van Velzen, R. J. van Weeren, R. A. M. J. Wijers, R. Wijnands, M. Wise, P. Zarka, A. Alexov, J. Anderson, A. Asgekar, I. M. Avruch, M. J. Bentum, G. Bernardi, P. Best, F. Breitling, M. Brüggen, H. R. Butcher, B. Ciardi, J. E. Conway, A. Corstanje, E. de Geus, A. Deller, S. Duscha, W. Frieswijk, M. A. Garrett, A. W. Gunst, M. P. van Haarlem, M. Hoeft, J. Hörandel, E. Juette, G. Kuper, M. Loose, P. Maat, R. McFadden, D. McKay-Bukowski, J. Moldon, H. Munk, M. J. Norden, H. Paas, A. G. Polatidis, D. Schwarz, J. Sluman, O. Smirnov, M. Steinmetz, S. Thoudam, M. C. Toribio, R. Vermeulen, C. Vocks, S. J. Wijnholds, O. Wucknitz, S. Yatawatta
Nov. 30, 2015 astro-ph.IM, astro-ph.HE
We present the results of a four-month campaign searching for low-frequency radio transients near the North Celestial Pole with the Low-Frequency Array (LOFAR), as part of the Multifrequency Snapshot Sky Survey (MSSS). The data were recorded between 2011 December and 2012 April and comprised 2149 11-minute snapshots, each covering 175 deg^2. We have found one convincing candidate astrophysical transient, with a duration of a few minutes and a flux density at 60 MHz of 15-25 Jy. The transient does not repeat and has no obvious optical or high-energy counterpart, as a result of which its nature is unclear. The detection of this event implies a transient rate at 60 MHz of 3.9 (+14.7, -3.7) x 10^-4 day^-1 deg^-2, and a transient surface density of 1.5 x 10^-5 deg^-2, at a 7.9-Jy limiting flux density and ~10-minute time-scale. The campaign data were also searched for transients at a range of other time-scales, from 0.5 to 297 min, which allowed us to place a range of limits on transient rates at 60 MHz as a function of observation duration.
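The quoted transient rate can be checked against the survey numbers given in this abstract. The published value of 3.9 x 10^-4 day^-1 deg^-2 comes from a proper Poisson confidence interval, so this naive point estimate (one event divided by the total exposure) only agrees to within a few tens of percent:

```python
# Back-of-envelope check of the transient rate, using only numbers quoted
# in the abstract: 2149 snapshots of 11 minutes, each covering 175 deg^2,
# with one detected transient.
n_transients = 1
n_snapshots = 2149
snapshot_minutes = 11
field_deg2 = 175

# Total exposure in units of day * deg^2.
exposure_day_deg2 = n_snapshots * (snapshot_minutes / (60 * 24)) * field_deg2

# Naive point estimate of the rate, in transients per day per deg^2.
rate = n_transients / exposure_day_deg2  # ~3.5e-4, vs the published 3.9e-4
```

The asymmetric uncertainties quoted in the abstract (+14.7, -3.7 in the same units) reflect the Poisson statistics of a single detection.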
Pierre Auger Observatory and Telescope Array: Joint Contributions to the 34th International Cosmic Ray Conference (ICRC 2015) (1511.02103)
Telescope Array Collaboration: R.U. Abbasi, M. Abe, T. Abu-Zayyad, M. Allen, R. Azuma, E. Barcikowski, J.W. Belz, D.R. Bergman, S.A. Blake, R. Cady, M.J. Chae, B.G. Cheon, J. Chiba, M. Chikawa, W.R. Cho, T. Fujii, M. Fukushima, T. Goto, W. Hanlon, Y. Hayashi, N. Hayashida, K. Hibino, K. Honda, D. Ikeda, N. Inoue, T. Ishii, R. Ishimori, H. Ito, D. Ivanov, C.C.H. Jui, K. Kadota, F. Kakimoto, O. Kalashev, K. Kasahara, H. Kawai, S. Kawakami, S. Kawana, K. Kawata, E. Kido, H.B. Kim, J.H. Kim, J.H. Kim, S. Kitamura, Y. Kitamura, V. Kuzmin, Y.J. Kwon, J. Lan, S.I. Lim, J.P. Lundquist, K. Machida, K. Martens, T. Matsuda, T. Matsuyama, J.N. Matthews, M. Minamino, Y. Mukai, I. Myers, K. Nagasawa, S. Nagataki, T. Nakamura, T. Nonaka, A. Nozato, S. Ogio, J. Ogura, M. Ohnishi, H. Ohoka, K. Oki, T. Okuda, M. Ono, A. Oshima, S. Ozawa, I.H. Park, M.S. Pshirkov, D.C. Rodriguez, G. Rubtsov, D. Ryu, H. Sagawa, N. Sakurai, L.M. Scott, P.D. Shah, F. Shibata, T. Shibata, H. Shimodaira, B.K. Shin, H.S. Shin, J.D. Smith, P. Sokolsky, R.W. Springer, B.T. Stokes, S.R. Stratton, T.A. Stroman, T. Suzawa, M. Takamura, M. Takeda, R. Takeishi, A. Taketa, M. Takita, Y. Tameda, H. Tanaka, K. Tanaka, M. Tanaka, S.B. Thomas, G.B. Thomson, P. Tinyakov, I. Tkachev, H. Tokuno, T. Tomida, S. Troitsky, Y. Tsunesada, K. Tsutsumi, Y. Uchihori, S. Udo, F. Urban, G. Vasiloff, T. Wong, R. Yamane, H. Yamaoka, K. Yamazaki, J. Yang, K. Yashiro, Y. Yoneda, S. Yoshida, H. Yoshii, R. Zollinger, Z. Zundel, Pierre Auger Collaboration: A. Aab, P. Abreu, M. Aglietta, E.J. Ahn, I. Al Samarai, I.F.M. Albuquerque, I. Allekotte, P. Allison, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, R. Alves Batista, M. Ambrosio, A. Aminaei, G.A. Anastasi, L. Anchordoqui, S. Andringa, C. Aramo, F. Arqueros, N. Arsene, H. Asorey, P. Assis, J. Aublin, G. Avila, N. Awal, A.M. Badescu, C. Baus, J.J. Beatty, K.H. Becker, J.A. Bellido, C. Berat, M.E. Bertaina, X. Bertou, P.L. Biermann, P. Billoir, S.G. Blaess, A. Blanco, M. Blanco, J. 
Blazek, C. Bleve, H. Blümer, M. Boháčová, D. Boncioli, C. Bonifazi, N. Borodai, J. Brack, I. Brancus, T. Bretz, A. Bridgeman, P. Brogueira, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K.S. Caballero-Mora, B. Caccianiga, L. Caccianiga, M. Candusso, L. Caramete, R. Caruso, A. Castellina, G. Cataldi, L. Cazon, R. Cester, A.G. Chavez, A. Chiavassa, J.A. Chinellato, J. Chudoba, M. Cilmo, R.W. Clay, G. Cocciolo, R. Colalillo, A. Coleman, L. Collica, M.R. Coluccia, R. Conceição, F. Contreras, M.J. Cooper, A. Cordier, S. Coutu, C.E. Covault, R. Dallier, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, S.J. de Jong, G. De Mauro, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, L. del Peral, O. Deligny, N. Dhital, C. Di Giulio, A. Di Matteo, J.C. Diaz, M.L. Díaz Castro, F. Diogo, C. Dobrigkeit, W. Docters, J.C. D'Olivo, A. Dorofeev, Q. Dorosti Hasankiadeh, R.C. dos Anjos, M.T. Dova, J. Ebr, R. Engel, M. Erdmann, M. Erfani, C.O. Escobar, J. Espadanal, A. Etchegoyen, H. Falcke, K. Fang, G. Farrar, A.C. Fauth, N. Fazzini, A.P. Ferguson, B. Fick, J.M. Figueira, A. Filevich, A. Filipčič, O. Fratu, M.M. Freire, T. Fujii, B. García, D. García-Gámez, D. Garcia-Pinto, F. Gate, H. Gemmeke, A. Gherghel-Lascu, P.L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, D. Głas, C. Glaser, H. Glass, G. Golup, M. Gómez Berisso, P.F. Gómez Vitale, N. González, B. Gookin, J. Gordon, A. Gorgi, P. Gorham, P. Gouffon, N. Griffith, A.F. Grillo, T.D. Grubb, F. Guarino, G.P. Guedes, M.R. Hampel, P. Hansen, D. Harari, T.A. Harrison, S. Hartmann, J.L. Harton, A. Haungs, T. Hebbeker, D. Heck, P. Heimann, A.E. Hervé, G.C. Hill, C. Hojvat, N. Hollon, E. Holt, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, D. Huber, T. Huege, A. Insolia, P.G. Isar, I. Jandt, S. Jansen, C. Jarne, J.A. Johnsen, M. Josebachuili, A. Kääpä, O. Kambeitz, K.H. Kampert, P. Kasper, I. Katkov, B. Keilhauer, E. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, D. 
Kuempel, G. Kukec Mezek, N. Kunka, A.W. Kuotb Awad, D. LaHurd, L. Latronico, R. Lauer, M. Lauscher, P. Lautridou, S. Le Coz, D. Lebrun, P. Lebrun, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, L. Lopes, R. López, A. López Casado, K. Louedec, A. Lucero, M. Malacari, M. Mallamaci, J. Maller, D. Mandat, P. Mantsch, A.G. Mariazzi, V. Marin, I.C. Mariş, G. Marsella, D. Martello, H. Martinez, O. Martínez Bravo, D. Martraire, J.J. Masías Meza, H.J. Mathes, S. Mathys, J. Matthews, J.A.J. Matthews, G. Matthiae, D. Maurizio, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, R. Meissner, V.B.B. Mello, D. Melo, A. Menshikov, S. Messina, M.I. Micheletti, L. Middendorf, I.A. Minaya, L. Miramonti, B. Mitrica, L. Molina-Bueno, S. Mollerach, F. Montanet, C. Morello, M. Mostafá, C.A. Moura, G. Müller, M.A. Muller, S. Müller, S. Navas, P. Necesal, L. Nellen, A. Nelles, J. Neuser, P.H. Nguyen, M. Niculescu-Oglinzanu, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, L. Nožka, L.A. Núñez, L. Ochilo, F. Oikonomou, A. Olinto, N. Pacheco, D. Pakk Selmi-Dei, M. Palatka, J. Pallotta, P. Papenbreer, G. Parente, A. Parra, T. Paul, M. Pech, J. Pȩkala, R. Pelayo, I.M. Pepe, L. Perrone, E. Petermann, C. Peters, S. Petrera, Y. Petrov, J. Phuntsok, R. Piegaia, T. Pierog, P. Pieroni, M. Pimenta, V. Pirronello, M. Platino, M. Plum, A. Porcelli, C. Porowski, R.R. Prado, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, J. Rautenberg, O. Ravel, D. Ravignani, D. Reinert, B. Revenu, J. Ridky, M. Risse, P. Ristori, V. Rizi, W. Rodrigues de Carvalho, J. Rodriguez Rojo, M.D. Rodríguez-Frías, D. Rogozin, J. Rosado, M. Roth, E. Roulet, A.C. Rovero, S.J. Saffi, A. Saftoiu, F. Salamida, H. Salazar, A. Saleh, F. Salesa Greus, G. Salina, J.D. Sanabria Gomez, F. Sánchez, P. Sanchez-Lucas, E.M. Santos, E. Santos, F. Sarazin, B. Sarkar, R. Sarmento, C. Sarmiento-Cano, R. Sato, C. Scarso, M. Schauer, V. Scherini, H. Schieler, D. Schmidt, O. Scholten, H. 
Schoorlemmer, P. Schovánek, F.G. Schröder, A. Schulz, J. Schulz, J. Schumacher, S.J. Sciutto, A. Segreto, M. Settimo, A. Shadkam, R.C. Shellard, G. Sigl, O. Sima, A. Śmiałkowski, R. Šmída, G.R. Snow, P. Sommers, S. Sonntag, J. Sorokin, R. Squartini, Y.N. Srivastava, D. Stanca, S. Stanič, J. Stapleton, J. Stasielak, M. Stephan, A. Stutz, F. Suarez, M. Suarez Durán, T. Suomijärvi, A.D. Supanitsky, M.S. Sutherland, J. Swain, Z. Szadkowski, O.A. Taborda, A. Tapia, A. Tepe, V.M. Theodoro, O. Tibolla, C. Timmermans, C.J. Todero Peixoto, G. Toma, L. Tomankova, B. Tomé, A. Tonachini, G. Torralba Elipe, D. Torres Machado, P. Travnicek, M. Trini, R. Ulrich, M. Unger, M. Urban, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, P. van Bodegom, A.M. van den Berg, S. van Velzen, A. van Vliet, E. Varela, B. Vargas Cárdenas, G. Varner, R. Vasquez, J.R. Vázquez, R.A. Vázquez, D. Veberič, V. Verzi, J. Vicha, M. Videla, L. Villaseñor, B. Vlcek, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, K. Weidenhaupt, A. Weindl, C. Welling, F. Werner, A. Widom, L. Wiencke, H. Wilczyński, T. Winchen, D. Wittkowski, B. Wundheiler, S. Wykes, L. Yang, T. Yapici, A. Yushkov, E. Zas, D. Zavrtanik, M. Zavrtanik, A. Zepeda, B. Zimmermann, M. Ziolkowski, F. Zuccarello
Nov. 6, 2015 astro-ph.HE
Joint contributions of the Pierre Auger Collaboration and the Telescope Array Collaboration to the 34th International Cosmic Ray Conference, 30 July - 6 August 2015, The Hague, The Netherlands.
The IceCube Neutrino Observatory, the Pierre Auger Observatory and the Telescope Array: Joint Contribution to the 34th International Cosmic Ray Conference (ICRC 2015) (1511.02109)
IceCube Collaboration: M.G. Aartsen, K. Abraham, M. Ackermann, J. Adams, J.A. Aguilar, M. Ahlers, M. Ahrens, D. Altmann, T. Anderson, I. Ansseau, M. Archinger, C. Arguelles, T.C. Arlen, J. Auffenberg, X. Bai, S.W. Barwick, V. Baum, R. Bay, J.J. Beatty, J. Becker Tjus, K.-H. Becker, E. Beiser, S. BenZvi, P. Berghaus, D. Berley, E. Bernardini, A. Bernhard, D.Z. Besson, G. Binder, D. Bindig, M. Bissok, E. Blaufuss, J. Blumenthal, D.J. Boersma, C. Bohm, M. Börner, F. Bos, D. Bose, S. Böser, O. Botner, J. Braun, L. Brayeur, H.-P. Bretz, N. Buzinsky, J. Casey, M. Casier, E. Cheung, D. Chirkin, A. Christov, K. Clark, L. Classen, S. Coenders, D.F. Cowen, A.H. Cruz Silva, J. Daughhetee, J.C. Davis, M. Day, J.P.A.M. de André, C. De Clercq, E. del Pino Rosendo, H. Dembinski, S. De Ridder, P. Desiati, K.D. de Vries, G. de Wasseige, M. de With, T. DeYoung, J.C. Díaz-Vélez, V. di Lorenzo, J.P. Dumm, M. Dunkman, R. Eagan, B. Eberhardt, T. Ehrhardt, B. Eichmann, S. Euler, P.A. Evenson, O. Fadiran, S. Fahey, A.R. Fazely, A. Fedynitch, J. Feintzeig, J. Felde, K. Filimonov, C. Finley, T. Fischer-Wasels, S. Flis, C.-C. Fösig, T. Fuchs, T.K. Gaisser, R. Gaior, J. Gallagher, L. Gerhardt, K. Ghorbani, D. Gier, L. Gladstone, M. Glagla, T. Glüsenkamp, A. Goldschmidt, G. Golup, J.G. Gonzalez, D. Góra, D. Grant, J.C. Groh, A. Groß, C. Ha, C. Haack, A. Haj Ismail, A. Hallgren, F. Halzen, B. Hansmann, K. Hanson, D. Hebecker, D. Heereman, K. Helbing, R. Hellauer, D. Hellwig, S. Hickford, J. Hignight, G.C. Hill, K.D. Hoffman, R. Hoffmann, K. Holzapfel, A. Homeier, K. Hoshina, F. Huang, M. Huber, W. Huelsnitz, P.O. Hulth, K. Hultqvist, S. In, A. Ishihara, E. Jacobi, G.S. Japaridze, K. Jero, M. Jurkovic, B. Kaminsky, A. Kappes, T. Karg, A. Karle, M. Kauer, A. Keivani, J.L. Kelley, J. Kemp, A. Kheirandish, J. Kiryluk, J. Kläs, S.R. Klein, G. Kohnen, R. Koirala, H. Kolanoski, R. Konietz, A. Koob, L. Köpke, C. Kopper, S. Kopper, D.J. Koskinen, M. Kowalski, K. Krings, G. Kroll, M. Kroll, J. Kunnen, N. 
Kurahashi, T. Kuwabara, M. Labare, J.L. Lanfranchi, M.J. Larson, M. Lesiak-Bzdak, M. Leuermann, J. Leuner, L. Lu, J. Lünemann, J. Madsen, G. Maggi, K.B.M. Mahn, R. Maruyama, K. Mase, H.S. Matis, R. Maunu, F. McNally, K. Meagher, M. Medici, A. Meli, T. Menne, G. Merino, T. Meures, S. Miarecki, E. Middell, E. Middlemas, L. Mohrmann, T. Montaruli, R. Morse, R. Nahnhauer, U. Naumann, G. Neer, H. Niederhausen, S.C. Nowicki, D.R. Nygren, A. Obertacke, A. Olivas, A. Omairat, A. O'Murchadha, T. Palczewski, H. Pandya, L. Paul, J.A. Pepper, C. Pérez de los Heros, C. Pfendner, D. Pieloth, E. Pinat, J. Posselt, P.B. Price, G.T. Przybylski, J. Pütz, M. Quinnan, C. Raab, L. Rädel, M. Rameez, K. Rawlins, R. Reimann, M. Relich, E. Resconi, W. Rhode, M. Richman, S. Richter, B. Riedel, S. Robertson, M. Rongen, C. Rott, T. Ruhe, D. Ryckbosch, S.M. Saba, L. Sabbatini, H.-G. Sander, A. Sandrock, J. Sandroos, S. Sarkar, K. Schatto, F. Scheriau, M. Schimp, T. Schmidt, M. Schmitz, S. Schoenen, S. Schöneberg, A. Schönwald, L. Schulte, D. Seckel, S. Seunarine, R. Shanidze, M.W.E. Smith, D. Soldin, M. Song, G.M. Spiczak, C. Spiering, M. Stahlberg, M. Stamatikos, T. Stanev, N.A. Stanisha, A. Stasik, T. Stezelberger, R.G. Stokstad, A. Stößl, R. Ström, N.L. Strotjohann, G. W. Sullivan, M. Sutherland, H. Taavola, I. Taboada, S. Ter-Antonyan, A. Terliuk, G. Tešić, S. Tilav, P.A. Toale, M.N. Tobin, S. Toscano, D. Tosi, M. Tselengidou, A. Turcati, E. Unger, M. Usner, S. Vallecorsa, J. Vandenbroucke, N. van Eijndhoven, S. Vanheule, J. van Santen, J. Veenkamp, M. Vehring, M. Voge, M. Vraeghe, C. Walck, A. Wallace, M. Wallraff, N. Wandkowsky, Ch. Weaver, C. Wendt, S. Westerhoff, B.J. Whelan, N. Whitehorn, C. Wichary, K. Wiebe, C.H. Wiebusch, L. Wille, D.R. Williams, H. Wissing, M. Wolf, T.R. Wood, K. Woschnagg, D.L. Xu, X.W. Xu, Y. Xu, J.P. Yanez, G. Yodh, S. Yoshida, M. Zoll, Pierre Auger Collaboration: A. Aab, P. Abreu, M. Aglietta, E.J. Ahn, I. Al Samarai, I.F.M. Albuquerque, I. Allekotte, P. 
Allison, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, R. Alves Batista, M. Ambrosio, A. Aminaei, G.A. Anastasi, L. Anchordoqui, S. Andringa, C. Aramo, F. Arqueros, N. Arsene, H. Asorey, P. Assis, J. Aublin, G. Avila, N. Awal, A.M. Badescu, C. Baus, J.J. Beatty, K.H. Becker, J.A. Bellido, C. Berat, M.E. Bertaina, X. Bertou, P.L. Biermann, P. Billoir, S.G. Blaess, A. Blanco, M. Blanco, J. Blazek, C. Bleve, H. Blümer, M. Boháčová, D. Boncioli, C. Bonifazi, N. Borodai, J. Brack, I. Brancus, T. Bretz, A. Bridgeman, P. Brogueira, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K.S. Caballero-Mora, B. Caccianiga, L. Caccianiga, M. Candusso, L. Caramete, R. Caruso, A. Castellina, G. Cataldi, L. Cazon, R. Cester, A.G. Chavez, A. Chiavassa, J.A. Chinellato, J. Chudoba, M. Cilmo, R.W. Clay, G. Cocciolo, R. Colalillo, A. Coleman, L. Collica, M.R. Coluccia, R. Conceição, F. Contreras, M.J. Cooper, A. Cordier, S. Coutu, C.E. Covault, R. Dallier, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, S.J. de Jong, G. De Mauro, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, L. del Peral, O. Deligny, N. Dhital, C. Di Giulio, A. Di Matteo, J.C. Diaz, M.L. Díaz Castro, F. Diogo, C. Dobrigkeit, W. Docters, J.C. D'Olivo, A. Dorofeev, Q. Dorosti Hasankiadeh, R.C. dos Anjos, M.T. Dova, J. Ebr, R. Engel, M. Erdmann, M. Erfani, C.O. Escobar, J. Espadanal, A. Etchegoyen, H. Falcke, K. Fang, G. Farrar, A.C. Fauth, N. Fazzini, A.P. Ferguson, B. Fick, J.M. Figueira, A. Filevich, A. Filipčič, O. Fratu, M.M. Freire, T. Fujii, B. García, D. García-Gámez, D. Garcia-Pinto, F. Gate, H. Gemmeke, A. Gherghel-Lascu, P.L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, D. Głas, C. Glaser, H. Glass, G. Golup, M. Gómez Berisso, P.F. Gómez Vitale, N. González, B. Gookin, J. Gordon, A. Gorgi, P. Gorham, P. Gouffon, N. Griffith, A.F. Grillo, T.D. Grubb, F. Guarino, G.P. Guedes, M.R. Hampel, P. Hansen, D. Harari, T.A. Harrison, S. Hartmann, J.L. Harton, A. Haungs, T. Hebbeker, D. 
Heck, P. Heimann, A.E. Hervé, G.C. Hill, C. Hojvat, N. Hollon, E. Holt, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, D. Huber, T. Huege, A. Insolia, P.G. Isar, I. Jandt, S. Jansen, C. Jarne, J.A. Johnsen, M. Josebachuili, A. Kääpä, O. Kambeitz, K.H. Kampert, P. Kasper, I. Katkov, B. Keilhauer, E. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, D. Kuempel, G. Kukec Mezek, N. Kunka, A.W. Kuotb Awad, D. LaHurd, L. Latronico, R. Lauer, M. Lauscher, P. Lautridou, S. Le Coz, D. Lebrun, P. Lebrun, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, L. Lopes, R. López, A. López Casado, K. Louedec, A. Lucero, M. Malacari, M. Mallamaci, J. Maller, D. Mandat, P. Mantsch, A.G. Mariazzi, V. Marin, I.C. Mariş, G. Marsella, D. Martello, H. Martinez, O. Martínez Bravo, D. Martraire, J.J. Masías Meza, H.J. Mathes, S. Mathys, J. Matthews, J.A.J. Matthews, G. Matthiae, D. Maurizio, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, R. Meissner, V.B.B. Mello, D. Melo, A. Menshikov, S. Messina, M.I. Micheletti, L. Middendorf, I.A. Minaya, L. Miramonti, B. Mitrica, L. Molina-Bueno, S. Mollerach, F. Montanet, C. Morello, M. Mostafá, C.A. Moura, G. Müller, M.A. Muller, S. Müller, S. Navas, P. Necesal, L. Nellen, A. Nelles, J. Neuser, P.H. Nguyen, M. Niculescu-Oglinzanu, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, L. Nožka, L.A. Núñez, L. Ochilo, F. Oikonomou, A. Olinto, N. Pacheco, D. Pakk Selmi-Dei, M. Palatka, J. Pallotta, P. Papenbreer, G. Parente, A. Parra, T. Paul, M. Pech, J. Pȩkala, R. Pelayo, I.M. Pepe, L. Perrone, E. Petermann, C. Peters, S. Petrera, Y. Petrov, J. Phuntsok, R. Piegaia, T. Pierog, P. Pieroni, M. Pimenta, V. Pirronello, M. Platino, M. Plum, A. Porcelli, C. Porowski, R.R. Prado, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, J. Rautenberg, O. Ravel, D. Ravignani, D. Reinert, B. Revenu, J. Ridky, M. Risse, P. Ristori, V. Rizi, W. Rodrigues de Carvalho, J. 
Rodriguez Rojo, M.D. Rodríguez-Frías, D. Rogozin, J. Rosado, M. Roth, E. Roulet, A.C. Rovero, S.J. Saffi, A. Saftoiu, F. Salamida, H. Salazar, A. Saleh, F. Salesa Greus, G. Salina, J.D. Sanabria Gomez, F. Sánchez, P. Sanchez-Lucas, E.M. Santos, E. Santos, F. Sarazin, B. Sarkar, R. Sarmento, C. Sarmiento-Cano, R. Sato, C. Scarso, M. Schauer, V. Scherini, H. Schieler, D. Schmidt, O. Scholten, H. Schoorlemmer, P. Schovánek, F.G. Schröder, A. Schulz, J. Schulz, J. Schumacher, S.J. Sciutto, A. Segreto, M. Settimo, A. Shadkam, R.C. Shellard, G. Sigl, O. Sima, A. Śmiałkowski, R. Šmída, G.R. Snow, P. Sommers, S. Sonntag, J. Sorokin, R. Squartini, Y.N. Srivastava, D. Stanca, S. Stanič, J. Stapleton, J. Stasielak, M. Stephan, A. Stutz, F. Suarez, M. Suarez Durán, T. Suomijärvi, A.D. Supanitsky, M.S. Sutherland, J. Swain, Z. Szadkowski, O.A. Taborda, A. Tapia, A. Tepe, V.M. Theodoro, O. Tibolla, C. Timmermans, C.J. Todero Peixoto, G. Toma, L. Tomankova, B. Tomé, A. Tonachini, G. Torralba Elipe, D. Torres Machado, P. Travnicek, M. Trini, R. Ulrich, M. Unger, M. Urban, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, P. van Bodegom, A.M. van den Berg, S. van Velzen, A. van Vliet, E. Varela, B. Vargas Cárdenas, G. Varner, R. Vasquez, J.R. Vázquez, R.A. Vázquez, D. Veberič, V. Verzi, J. Vicha, M. Videla, L. Villaseñor, B. Vlcek, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, K. Weidenhaupt, A. Weindl, C. Welling, F. Werner, A. Widom, L. Wiencke, H. Wilczyński, T. Winchen, D. Wittkowski, B. Wundheiler, S. Wykes, L. Yang, T. Yapici, A. Yushkov, E. Zas, D. Zavrtanik, M. Zavrtanik, A. Zepeda, B. Zimmermann, M. Ziolkowski, F. Zuccarello, Telescope Array Collaboration: R.U. Abbasi, M. Abe, T. Abu-Zayyad, M. Allen, R. Azuma, E. Barcikowski, J.W. Belz, D.R. Bergman, S.A. Blake, R. Cady, M.J. Chae, B.G. Cheon, J. Chiba, M. Chikawa, W.R. Cho, T. Fujii, M. Fukushima, T. Goto, W. Hanlon, Y. Hayashi, N. Hayashida, K. Hibino, K. Honda, D. Ikeda, N. Inoue, T. 
Ishii, R. Ishimori, H. Ito, D. Ivanov, C.C.H. Jui, K. Kadota, F. Kakimoto, O. Kalashev, K. Kasahara, H. Kawai, S. Kawakami, S. Kawana, K. Kawata, E. Kido, H.B. Kim, J.H. Kim, J.H. Kim, S. Kitamura, Y. Kitamura, V. Kuzmin, Y.J. Kwon, J. Lan, S.I. Lim, J.P. Lundquist, K. Machida, K. Martens, T. Matsuda, T. Matsuyama, J.N. Matthews, M. Minamino, Y. Mukai, I. Myers, K. Nagasawa, S. Nagataki, T. Nakamura, T. Nonaka, A. Nozato, S. Ogio, J. Ogura, M. Ohnishi, H. Ohoka, K. Oki, T. Okuda, M. Ono, A. Oshima, S. Ozawa, I.H. Park, M.S. Pshirkov, D.C. Rodriguez, G. Rubtsov, D. Ryu, H. Sagawa, N. Sakurai, L.M. Scott, P.D. Shah, F. Shibata, T. Shibata, H. Shimodaira, B.K. Shin, H.S. Shin, J.D. Smith, P. Sokolsky, R.W. Springer, B.T. Stokes, S.R. Stratton, T.A. Stroman, T. Suzawa, M. Takamura, M. Takeda, R. Takeishi, A. Taketa, M. Takita, Y. Tameda, H. Tanaka, K. Tanaka, M. Tanaka, S.B. Thomas, G.B. Thomson, P. Tinyakov, I. Tkachev, H. Tokuno, T. Tomida, S. Troitsky, Y. Tsunesada, K. Tsutsumi, Y. Uchihori, S. Udo, F. Urban, G. Vasiloff, T. Wong, R. Yamane, H. Yamaoka, K. Yamazaki, J. Yang, K. Yashiro, Y. Yoneda, S. Yoshida, H. Yoshii, R. Zollinger, Z. Zundel
Nov. 6, 2015 hep-ex, astro-ph.IM, astro-ph.HE
We have conducted three searches for correlations between ultra-high energy cosmic rays detected by the Telescope Array and the Pierre Auger Observatory, and high-energy neutrino candidate events from IceCube. Two cross-correlation analyses with UHECRs are performed: one with 39 cascades from the IceCube 'high-energy starting events' sample and the other with 16 high-energy 'track events'. The angular separation between the arrival directions of neutrinos and UHECRs is scanned over. The same events are also used in a separate search using a maximum likelihood approach, after the neutrino arrival directions are stacked. To estimate the significance we assume UHECR magnetic deflections to be inversely proportional to their energy, with values $3^\circ$, $6^\circ$ and $9^\circ$ at 100 EeV to allow for the uncertainties on the magnetic field strength and UHECR charge. A similar analysis is performed on stacked UHECR arrival directions and the IceCube sample of through-going muon track events, which were optimized for neutrino point-source searches.
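The energy-dependent deflection assumption quoted above lends itself to a short numerical sketch. The following Python fragment is purely illustrative (the function names and the great-circle formula are assumptions for illustration, not the collaborations' analysis code); it shows how a search radius scaling as 1/E, normalized to 3 degrees at 100 EeV, and an angular separation between two sky directions would be computed:

```python
import math

def deflection_deg(energy_eev, d100=3.0):
    # Deflection assumed inversely proportional to UHECR energy,
    # normalized to d100 degrees at 100 EeV (the abstract scans 3, 6, 9).
    return d100 * 100.0 / energy_eev

def angular_separation_deg(ra1, dec1, ra2, dec2):
    # Great-circle separation between two sky directions, all angles in degrees.
    r1, d1, r2, d2 = map(math.radians, (ra1, dec1, ra2, dec2))
    cos_sep = (math.sin(d1) * math.sin(d2)
               + math.cos(d1) * math.cos(d2) * math.cos(r1 - r2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_sep))))

# Under the 3-degree normalization, a 50 EeV cosmic ray may be deflected
# by up to 6 degrees, so a neutrino arriving within that separation
# would count as a potential correlation.
print(deflection_deg(50.0))  # 6.0
```

A pair would then be flagged when `angular_separation_deg(...)` falls below `deflection_deg(...)` for the UHECR's energy.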
Wide-field LOFAR imaging of the field around the double-double radio galaxy B1834+620: A fresh view on a restarted AGN and doubeltjes (1510.00577)
E. Orrú, S. van Velzen, R. F. Pizzo, S. Yatawatta, R. Paladino, M. Iacobelli, M. Murgia, H. Falcke, R. Morganti, A. G. de Bruyn, C. Ferrari (75 additional authors not shown)
Oct. 2, 2015 astro-ph.CO, astro-ph.GA
The existence of double-double radio galaxies (DDRGs) is evidence for recurrent jet activity in AGN, as expected from standard accretion models. A detailed study of these rare sources provides new perspectives for investigating the AGN duty cycle, AGN-galaxy feedback, and accretion mechanisms. Large catalogues of radio sources provide statistical information about the evolution of the radio-loud AGN population out to high redshifts. Using wide-field imaging with the LOFAR telescope, we study both a well-known DDRG and a large number of radio sources in the field of view. We present a high resolution image of the DDRG B1834+620 obtained at 144 MHz using LOFAR commissioning data. Our image covers about 100 square degrees and contains over 1000 sources. The four components of the DDRG B1834+620 have been resolved for the first time at 144 MHz. The inner lobes were found to point towards the direction of the outer lobes, unlike in standard FR II sources. Polarized emission was detected in the northern outer lobe. The high spatial resolution allows the identification of a large number of small double-lobed radio sources; roughly 10% of all sources in the field are doubles with a separation smaller than 1 arcmin. The spectral fit of the four components is consistent with a scenario in which the outer lobes are still active or the jets recently switched off, while the emission of the inner lobes is the result of a mixture of new and old jet activity. From the presence of the newly extended features in the inner lobes of the DDRG, we can infer that the mechanism responsible for their formation is the bow shock that is driven by the newly launched jet. We find that the density of the small doubles exceeds the density of FR II sources with similar properties at 1.4 GHz, but this difference becomes smaller for low flux densities.
The Pierre Auger Observatory: Contributions to the 34th International Cosmic Ray Conference (ICRC 2015) (1509.03732)
The Pierre Auger Collaboration: A. Aab, P. Abreu, M. Aglietta, E.J. Ahn, I. Al Samarai, I.F.M. Albuquerque, I. Allekotte, P. Allison, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, R. Alves Batista, M. Ambrosio, A. Aminaei, G.A. Anastasi, L. Anchordoqui, S. Andringa, C. Aramo, F. Arqueros, N. Arsene, H. Asorey, P. Assis, J. Aublin, G. Avila, N. Awal, A.M. Badescu, C. Baus, J.J. Beatty, K.H. Becker, J.A. Bellido, C. Berat, M.E. Bertaina, X. Bertou, P.L. Biermann, P. Billoir, S.G. Blaess, A. Blanco, M. Blanco, J. Blazek, C. Bleve, H. Blümer, M. Boháčová, D. Boncioli, C. Bonifazi, N. Borodai, J. Brack, I. Brancus, T. Bretz, A. Bridgeman, P. Brogueira, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K.S. Caballero-Mora, B. Caccianiga, L. Caccianiga, M. Candusso, L. Caramete, R. Caruso, A. Castellina, G. Cataldi, L. Cazon, R. Cester, A.G. Chavez, A. Chiavassa, J.A. Chinellato, J. Chudoba, M. Cilmo, R.W. Clay, G. Cocciolo, R. Colalillo, A. Coleman, L. Collica, M.R. Coluccia, R. Conceição, F. Contreras, M.J. Cooper, A. Cordier, S. Coutu, C.E. Covault, J. Cronin, R. Dallier, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, S.J. de Jong, G. De Mauro, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, L. del Peral, O. Deligny, N. Dhital, C. Di Giulio, A. Di Matteo, J.C. Diaz, M.L. Díaz Castro, F. Diogo, C. Dobrigkeit, W. Docters, J.C. D'Olivo, A. Dorofeev, Q. Dorosti Hasankiadeh, R.C. dos Anjos, M.T. Dova, J. Ebr, R. Engel, M. Erdmann, M. Erfani, C.O. Escobar, J. Espadanal, A. Etchegoyen, H. Falcke, K. Fang, G. Farrar, A.C. Fauth, N. Fazzini, A.P. Ferguson, B. Fick, J.M. Figueira, A. Filevich, A. Filipčič, O. Fratu, M.M. Freire, T. Fujii, B. García, D. García-Gámez, D. Garcia-Pinto, F. Gate, H. Gemmeke, A. Gherghel-Lascu, P.L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, D. Głas, C. Glaser, H. Glass, G. Golup, M. Gómez Berisso, P.F. Gómez Vitale, N. González, B. Gookin, J. Gordon, A. Gorgi, P. Gorham, P. Gouffon, N. Griffith, A.F. Grillo, T.D. 
Grubb, F. Guarino, G.P. Guedes, M.R. Hampel, P. Hansen, D. Harari, T.A. Harrison, S. Hartmann, J.L. Harton, A. Haungs, T. Hebbeker, D. Heck, P. Heimann, A.E. Hérve, G.C. Hill, C. Hojvat, N. Hollon, E. Holt, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, D. Huber, T. Huege, A. Insolia, P.G. Isar, I. Jandt, S. Jansen, C. Jarne, J.A. Johnsen, M. Josebachuili, A. Kääpä, O. Kambeitz, K.H. Kampert, P. Kasper, I. Katkov, B. Keilhauer, E. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, D. Kuempel, G. Kukec Mezek, N. Kunka, A.W. Kuotb Awad, D. LaHurd, L. Latronico, R. Lauer, M. Lauscher, P. Lautridou, S. Le Coz, D. Lebrun, P. Lebrun, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, L. Lopes, R. López, A. López Casado, K. Louedec, A. Lucero, M. Malacari, M. Mallamaci, J. Maller, D. Mandat, P. Mantsch, A.G. Mariazzi, V. Marin, I.C. Mariş, G. Marsella, D. Martello, H. Martinez, O. Martínez Bravo, D. Martraire, J.J. Masías Meza, H.J. Mathes, S. Mathys, J. Matthews, J.A.J. Matthews, G. Matthiae, D. Maurizio, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, R. Meissner, V.B.B. Mello, D. Melo, A. Menshikov, S. Messina, M.I. Micheletti, L. Middendorf, I.A. Minaya, L. Miramonti, B. Mitrica, L. Molina-Bueno, S. Mollerach, F. Montanet, C. Morello, M. Mostafá, C.A. Moura, G. Müller, M.A. Muller, S. Müller, S. Navas, P. Necesal, L. Nellen, A. Nelles, J. Neuser, P.H. Nguyen, M. Niculescu-Oglinzanu, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, L. Nožka, L.A. Núñez, L. Ochilo, F. Oikonomou, A. Olinto, N. Pacheco, D. Pakk Selmi-Dei, M. Palatka, J. Pallotta, P. Papenbreer, G. Parente, A. Parra, T. Paul, M. Pech, J. Pękala, R. Pelayo, I.M. Pepe, L. Perrone, E. Petermann, C. Peters, S. Petrera, Y. Petrov, J. Phuntsok, R. Piegaia, T. Pierog, P. Pieroni, M. Pimenta, V. Pirronello, M. Platino, M. Plum, A. Porcelli, C. Porowski, R.R. Prado, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, J. 
Rautenberg, O. Ravel, D. Ravignani, D. Reinert, B. Revenu, J. Ridky, M. Risse, P. Ristori, V. Rizi, W. Rodrigues de Carvalho, J. Rodriguez Rojo, M.D. Rodríguez-Frías, D. Rogozin, J. Rosado, M. Roth, E. Roulet, A.C. Rovero, S.J. Saffi, A. Saftoiu, F. Salamida, H. Salazar, A. Saleh, F. Salesa Greus, G. Salina, J.D. Sanabria Gomez, F. Sánchez, P. Sanchez-Lucas, E.M. Santos, E. Santos, F. Sarazin, B. Sarkar, R. Sarmento, C. Sarmiento-Cano, R. Sato, C. Scarso, M. Schauer, V. Scherini, H. Schieler, D. Schmidt, O. Scholten, H. Schoorlemmer, P. Schovánek, F.G. Schröder, A. Schulz, J. Schulz, J. Schumacher, S.J. Sciutto, A. Segreto, M. Settimo, A. Shadkam, R.C. Shellard, G. Sigl, O. Sima, A. Śmiałkowski, R. Šmída, G.R. Snow, P. Sommers, S. Sonntag, J. Sorokin, R. Squartini, Y.N. Srivastava, D. Stanca, S. Stanič, J. Stapleton, J. Stasielak, M. Stephan, A. Stutz, F. Suarez, M. Suarez Durán, T. Suomijärvi, A.D. Supanitsky, M.S. Sutherland, J. Swain, Z. Szadkowski, O.A. Taborda, A. Tapia, A. Tepe, V.M. Theodoro, O. Tibolla, C. Timmermans, C.J. Todero Peixoto, G. Toma, L. Tomankova, B. Tomé, A. Tonachini, G. Torralba Elipe, D. Torres Machado, P. Travnicek, M. Trini, R. Ulrich, M. Unger, M. Urban, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, P. van Bodegom, A.M. van den Berg, S. van Velzen, A. van Vliet, E. Varela, B. Vargas Cárdenas, G. Varner, R. Vasquez, J.R. Vázquez, R.A. Vázquez, D. Veberič, V. Verzi, J. Vicha, M. Videla, L. Villaseñor, B. Vlcek, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, K. Weidenhaupt, A. Weindl, C. Welling, F. Werner, A. Widom, L. Wiencke, H. Wilczyński, T. Winchen, D. Wittkowski, B. Wundheiler, S. Wykes, L. Yang, T. Yapici, A. Yushkov, E. Zas, D. Zavrtanik, M. Zavrtanik, A. Zepeda, B. Zimmermann, M. Ziolkowski, F. Zuccarello
Sept. 12, 2015 astro-ph.IM, astro-ph.HE
Contributions of the Pierre Auger Collaboration to the 34th International Cosmic Ray Conference, 30 July - 6 August 2015, The Hague, The Netherlands
The LOFAR Multifrequency Snapshot Sky Survey (MSSS) I. Survey description and first results (1509.01257)
G.H. Heald, R.F. Pizzo, E. Orrú, R.P. Breton, D. Carbone, C. Ferrari, M.J. Hardcastle, W. Jurusik, G. Macario, D. Mulcahy, D. Rafferty, A. Asgekar, M. Brentjens, R.A. Fallows, W. Frieswijk, M.C. Toribio, B. Adebahr, M. Arts, M.R. Bell, A. Bonafede, J. Bray, J. Broderick, T. Cantwell, P. Carroll, Y. Cendes, A.O. Clarke, J. Croston, S. Daiboo, F. de Gasperin, J. Gregson, J. Harwood, T. Hassall, V. Heesen, A. Horneffer, A.J. van der Horst, M. Iacobelli, V. Jelić, D. Jones, D. Kant, G. Kokotanekov, P. Martin, J.P. McKean, L.K. Morabito, B. Nikiel-Wroczyński, A. Offringa, V.N. Pandey, M. Pandey-Pommier, M. Pietka, L. Pratley, C. Riseley, A. Rowlinson, J. Sabater, A.M.M. Scaife, L.H.A. Scheers, K. Sendlinger, A. Shulevski, M. Sipior, C. Sobey, A.J. Stewart, A. Stroe, J. Swinbank, C. Tasse, J. Trüstedt, E. Varenius, S. van Velzen, N. Vilchez, R.J. van Weeren, S. Wijnholds, W.L. Williams, A.G. de Bruyn, R. Nijboer, M. Wise, A. Alexov, J. Anderson, I.M. Avruch, R. Beck, M.E. Bell, I. van Bemmel, M.J. Bentum, G. Bernardi, P. Best, F. Breitling, W.N. Brouw, M. Brüggen, H.R. Butcher, B. Ciardi, J.E. Conway, E. de Geus, A. de Jong, M. de Vos, A. Deller, R.J. Dettmar, S. Duscha, J. Eislöffel, D. Engels, H. Falcke, R. Fender, M.A. Garrett, J. Grießmeier, A.W. Gunst, J.P. Hamaker, J.W.T. Hessels, M. Hoeft, J. Hörandel, H.A. Holties, H. Intema, N.J. Jackson, E. Jütte, A. Karastergiou, W.F.A. Klijn, V.I. Kondratiev, L.V.E. Koopmans, M. Kuniyoshi, G. Kuper, C. Law, J. van Leeuwen, M. Loose, P. Maat, S. Markoff, R. McFadden, D. McKay-Bukowski, M. Mevius, J.C.A. Miller-Jones, R. Morganti, H. Munk, A. Nelles, J.E. Noordam, M.J. Norden, H. Paas, A.G. Polatidis, W. Reich, A. Renting, H. Röttgering, A. Schoenmakers, D. Schwarz, J. Sluman, O. Smirnov, B.W. Stappers, M. Steinmetz, M. Tagger, Y. Tang, S. ter Veen, S. Thoudam, R. Vermeulen, C. Vocks, C. Vogt, R.A.M.J. Wijers, O. Wucknitz, S. Yatawatta, P. Zarka
Sept. 3, 2015 astro-ph.IM
We present the Multifrequency Snapshot Sky Survey (MSSS), the first northern-sky LOFAR imaging survey. In this introductory paper, we first describe in detail the motivation and design of the survey. Compared to previous radio surveys, MSSS is exceptional due to its intrinsic multifrequency nature, providing information about the spectral properties of the detected sources over more than two octaves (from 30 to 160 MHz). The broadband frequency coverage, together with the fast survey speed generated by LOFAR's multibeaming capabilities, makes MSSS the first survey of the kind anticipated to be carried out with the forthcoming Square Kilometre Array (SKA). Two of the sixteen frequency bands included in the survey were chosen to exactly overlap the frequency coverage of large-area Very Large Array (VLA) and Giant Metrewave Radio Telescope (GMRT) surveys at 74 MHz and 151 MHz, respectively. The survey performance is illustrated within the "MSSS Verification Field" (MVF), a region of 100 square degrees centered at J2000 (RA,Dec)=(15h,69deg). The MSSS results from the MVF are compared with previous radio survey catalogs. We assess the flux and astrometric uncertainties in the catalog, as well as the completeness and reliability, given our source-finding strategy. We determine the 90% completeness levels within the MVF to be 100 mJy at 135 MHz with 108" resolution, and 550 mJy at 50 MHz with 166" resolution. Images and catalogs for the full survey, expected to contain 150,000-200,000 sources, will be released to a public web server. We outline the plans for the ongoing production of the final survey products, and the ultimate public release of images and source catalogs.
Total eclipse of the heart: The AM CVn Gaia14aae / ASSASN-14cn (1507.04663)
H. C. Campbell, T. R. Marsh, M. Fraser, S.T. Hodgkin, E. de Miguel, B. T. Gänsicke, D. Steeghs, A. Hourihane, E. Breedt, S. P. Littlefair, S. E. Koposov, L. Wyrzykowski, G. Altavilla, N. Blagorodnova, G. Clementini, G. Damljanovic, A. Delgado, M. Dennefeld, A. J. Drake, J. Fernández-Hernández, G. Gilmore, R. Gualandi, A. Hamanowicz, B. Handzlik, L. K. Hardy, D. L. Harrison, K. Ilkiewicz, P. G. Jonker, C. S. Kochanek, Z. Kolaczkowski, Z. Kostrzewa-Rutkowska, R. Kotak, G. van Leeuwen, G. Leto, P. Ochner, M. Pawlak, L. Palaversa, G. Rixon, K. Rybicki, B. J. Shappee, S. J. Smartt, M. A. P. Torres, L. Tomasella, M. Turatto, K. Ulaczyk, S. van Velzen, O. Vince, N. A. Walton, P. Wielgórski, T. Wevers, P. Whitelock, A. Yoldas, F. De Angeli, P. Burgess, G. Busso, R. Busuttil, T. Butterley, K. C. Chambers, C. Copperwheat, A. B. Danilet, V. S. Dhillon, D. W. Evans, L. Eyer, D. Froebrich, A. Gomboc, G. Holland, T. W.-S. Holoien, J. F. Jarvis, N. Kaiser, D. A. Kann, D. Koester, U. Kolb, S. Komossa, E. A. Magnier, A. Mahabal, J. Polshaw, J. L. Prieto, T. Prusti, M. Riello, A. Scholz, G. Simonian, K. Z. Stanek, L. Szabados, C. Waters, R. W. Wilson
July 16, 2015 astro-ph.SR
We report the discovery and characterisation of a deeply eclipsing AM CVn-system, Gaia14aae (= ASSASN-14cn). Gaia14aae was identified independently by the All-Sky Automated Survey for Supernovae (ASAS-SN; Shappee et al. 2014) and by the Gaia Science Alerts project, during two separate outbursts. A third outburst is seen in archival Pan-STARRS-1 (PS1; Schlafly et al. 2012; Tonry et al. 2012; Magnier et al. 2013) and ASAS-SN data. Spectroscopy reveals a hot, hydrogen-deficient spectrum with clear double-peaked emission lines, consistent with an accreting double degenerate classification. We use follow-up photometry to constrain the orbital parameters of the system. We find an orbital period of 49.71 min, which places Gaia14aae at the long period extremum of the outbursting AM CVn period distribution. Gaia14aae is dominated by the light from its accreting white dwarf. Assuming an orbital inclination of 90 degrees for the binary system, the contact phases of the white dwarf lead to lower limits of 0.78 $M_\odot$ and 0.015 $M_\odot$ on the masses of the accretor and donor respectively, and a lower limit on the mass ratio of 0.019. Gaia14aae is only the third eclipsing AM CVn star known, and the first in which the white dwarf is totally eclipsed. Using a helium white dwarf model, we estimate the accretor's effective temperature to be $12900 \pm 200$ K. The three outburst events occurred within 4 months of each other, while no other outburst activity is seen in the previous 8 years of Catalina Real-time Transient Survey (CRTS; Drake et al. 2009), Pan-STARRS-1 and ASAS-SN data. This suggests that these events might be rebrightenings of the first outburst rather than individual events.
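The three quoted limits are mutually consistent; a one-line check (assuming, as is conventional for these systems, that the mass ratio is defined as donor mass over accretor mass) reproduces the 0.019 figure:

```python
# Lower limits from the abstract, in solar masses.
m_accretor = 0.78
m_donor = 0.015

# Mass ratio q = M_donor / M_accretor; the abstract quotes a lower limit of 0.019.
q = m_donor / m_accretor
print(round(q, 3))  # 0.019
```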
The Magnetic Field and Polarization Properties of Radio Galaxies in Different Accretion States (1504.06679)
Shane P. O'Sullivan, B. M. Gaensler, M. A. Lara-López, S. van Velzen, J. K. Banfield, J. S. Farnes
April 25, 2015 astro-ph.CO, astro-ph.GA
We use the integrated polarized radio emission at 1.4 GHz ($\Pi_{\rm 1.4\,GHz}$) from a large sample of AGN (796 sources at redshifts $z<0.7$) to study the large-scale magnetic field properties of radio galaxies in relation to the host galaxy accretion state. We find a fundamental difference in $\Pi_{\rm 1.4\,GHz}$ between radiative-mode AGN (i.e. high-excitation radio galaxies, HERGs, and radio-loud QSOs) and jet-mode AGN (i.e. low-excitation radio galaxies, LERGs). While LERGs can achieve a wide range of $\Pi_{\rm 1.4\,GHz}$ (up to $\sim$$30\%$), the HERGs and radio-loud QSOs are limited to $\Pi_{\rm 1.4\,GHz} \lesssim 15\%$. A difference in $\Pi_{\rm 1.4\,GHz}$ is also seen when the sample is divided at 0.5% of the total Eddington-scaled accretion rate, where the weakly accreting sources can attain higher values of $\Pi_{\rm 1.4\,GHz}$. We do not find any clear evidence that this is driven by intrinsic magnetic field differences of the different radio morphological classes. Instead, we attribute the differences in $\Pi_{\rm 1.4\,GHz}$ to the local environments of the radio sources, in terms of both the ambient gas density and the magnetoionic properties of this gas. Thus, not only are different large-scale gaseous environments potentially responsible for the different accretion states of HERGs and LERGs, we argue that the large-scale magnetised environments may also be important for the formation of powerful AGN jets. Upcoming high angular resolution and broadband radio polarization surveys will provide the high precision Faraday rotation measure and depolarization data required to robustly test this claim.
SKA as a powerful hunter of jetted Tidal Disruption Events (1501.04640)
I. Donnarumma, E. M. Rossi, R. Fender, S. Komossa, Z. Paragi, S. Van Velzen, I. Prandoni
Jan. 19, 2015 astro-ph.HE
Observational consequences of the tidal disruption of stars by supermassive black holes (SMBHs) can enable us to discover quiescent SMBHs and constrain their mass function. Moreover, observing jetted TDEs (from previously non-active galaxies) provides us with a new means of studying the early phases of jet formation and evolution in an otherwise "pristine" environment. Although several tens of TDEs have been discovered since 1999, only two jetted TDEs have been recently discovered in hard X-rays, and only one, Swift J1644+57, has a precise localization which further supports the TDE interpretation. These events alone are not sufficient to address those science issues, which require a substantial increase of the current sample. Despite the way they were discovered, the highest discovery potential for jetted TDEs is not held by current and upcoming X-ray instruments, which will yield only a few to a few tens of events per year. In fact, the best strategy is to use the Square Kilometer Array to detect TDEs and trigger multi-wavelength follow-ups, yielding hundreds of candidates per year, up to $z \sim 2$. Radio and X-ray synergy, however, can in principle constrain important quantities such as the absolute rate of jetted TDEs, their jet power, bulk Lorentz factor, the black hole mass function, and perhaps discover massive black holes (MBHs) with masses $<10^{5} M_{\odot}$.
LOFAR low-band antenna observations of the 3C295 and Boötes fields: source counts and ultra-steep spectrum sources (1409.5437)
R. J. van Weeren, W. L. Williams, C. Tasse, H. J. A. Rottgering, D. A. Rafferty, S. van der Tol, G. Heald, G. J. White, A. Shulevski, P. Best, H. T. Intema, S. Bhatnagar, W. Reich, M. Steinmetz, S. van Velzen, T. A. Ensslin, I. Prandoni, F. de Gasperin, M. Jamrozy, G. Brunetti, M. J. Jarvis, J. P. McKean, M. W. Wise, C. Ferrari, J. Harwood, J. B. R. Oonk, M. Hoeft, M. Kunert-Bajraszewska, C. Horellou, O. Wucknitz, A. Bonafede, N. R. Mohan, A. M. M. Scaife, H.-R. Klockner, I. M. van Bemmel, A. Merloni, K. T. Chyzy, D. Engels, H. Falcke, M. Pandey-Pommier, A. Alexov, J. Anderson, I. M. Avruch, R. Beck, M. E. Bell, M. J. Bentum, G. Bernardi, F. Breitling, J. Broderick, W. N. Brouw, M. Bruggen, H. R. Butcher, B. Ciardi, E. de Geus, M. de Vos, A. Deller, S. Duscha, J. Eisloffel, R. A. Fallows, W. Frieswijk, M. A. Garrett, J. Griessmeier, A. W. Gunst, J. P. Hamaker, T. E. Hassall, J. Horandel, A. van der Horst, M. Iacobelli, N. J. Jackson, E. Juette, V. I. Kondratiev, M. Kuniyoshi, P. Maat, G. Mann, D. McKay-Bukowski, M. Mevius, R. Morganti, H. Munk, A. R. Offringa, E. Orru, H. Paas, V. N. Pandey, G. Pietka, R. Pizzo, A. G. Polatidis, A. Renting, A. Rowlinson, D. Schwarz, M. Serylak, J. Sluman, O. Smirnov, B. W. Stappers, A. Stewart, J. Swinbank, M. Tagger, Y. Tang, S. Thoudam, C. Toribio, R. Vermeulen, C. Vocks, P. Zarka
Sept. 18, 2014 astro-ph.CO, astro-ph.GA
We present LOFAR Low Band observations of the Boötes and 3C295 fields. Our images made at 34, 46, and 62 MHz reach noise levels of 12, 8, and 5 mJy beam$^{-1}$, making them the deepest images ever obtained in this frequency range. In total, we detect between 300 and 400 sources in each of these images, covering an area of 17 to 52 deg$^{2}$. From the observations we derive Euclidean-normalized differential source counts. The 62 MHz source counts agree with previous GMRT 153 MHz and VLA 74 MHz differential source counts, scaling with a spectral index of $-0.7$. We find that a spectral index scaling of $-0.5$ is required to match up the LOFAR 34 MHz source counts. This result is also in agreement with source counts from the 38 MHz 8C survey, indicating that the average spectral index of radio sources flattens towards lower frequencies. We also find evidence for spectral flattening using the individual flux measurements of sources between 34 and 1400 MHz and by calculating the spectral index averaged over the source population. To select ultra-steep spectrum ($\alpha < -1.1$) radio sources that could be associated with massive high-redshift radio galaxies, we compute spectral indices between 62 MHz, 153 MHz and 1.4 GHz for sources in the Boötes field. We cross-correlate these radio sources with optical and infrared catalogues and fit the spectral energy distribution to obtain photometric redshifts. We find that most of these ultra-steep spectrum sources are located in the $ 0.7 \lesssim z \lesssim 2.5$ range.
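The spectral-index bookkeeping behind these source-count scalings fits in a few lines. The helper names below are illustrative, using the convention $S \propto \nu^{\alpha}$ that makes $\alpha = -0.7$ a spectrum falling toward higher frequency:

```python
import math

def spectral_index(s1, nu1, s2, nu2):
    # Two-point spectral index alpha with the convention S ∝ nu^alpha,
    # so alpha = -0.7 means the flux density falls toward higher frequency.
    return math.log(s1 / s2) / math.log(nu1 / nu2)

def scale_flux(s_ref, nu_ref, nu, alpha):
    # Extrapolate a flux density from nu_ref to nu assuming a single power law.
    return s_ref * (nu / nu_ref) ** alpha

# A 1 Jy source at 153 MHz with alpha = -0.7 is brighter at 62 MHz:
s62 = scale_flux(1.0, 153.0, 62.0, -0.7)
print(round(s62, 2))  # 1.88
```

The reported flattening means that an index fitted between 34 and 62 MHz would come out shallower (closer to $-0.5$) than one fitted at GHz frequencies for the same population.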
Reconstruction of inclined air showers detected with the Pierre Auger Observatory (1407.3214)
The Pierre Auger Collaboration: A. Aab, P. Abreu, M. Aglietta, M. Ahlers, E.J. Ahn, I. Al Samarai, I.F.M. Albuquerque, I. Allekotte, J. Allen, P. Allison, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, R. Alves Batista, M. Ambrosio, A. Aminaei, L. Anchordoqui, S. Andringa, C. Aramo, F. Arqueros, H. Asorey, P. Assis, J. Aublin, M. Ave, M. Avenier, G. Avila, A.M. Badescu, K.B. Barber, J. Bäuml, C. Baus, J.J. Beatty, K.H. Becker, J.A. Bellido, C. Berat, X. Bertou, P.L. Biermann, P. Billoir, F. Blanco, M. Blanco, C. Bleve, H. Blümer, M. Boháčová, D. Boncioli, C. Bonifazi, R. Bonino, N. Borodai, J. Brack, I. Brancus, P. Brogueira, W.C. Brown, P. Buchholz, A. Bueno, M. Buscemi, K.S. Caballero-Mora, B. Caccianiga, L. Caccianiga, M. Candusso, L. Caramete, R. Caruso, A. Castellina, G. Cataldi, L. Cazon, R. Cester, A.G. Chavez, S.H. Cheng, A. Chiavassa, J.A. Chinellato, J. Chudoba, M. Cilmo, R.W. Clay, G. Cocciolo, R. Colalillo, L. Collica, M.R. Coluccia, R. Conceição, F. Contreras, M.J. Cooper, S. Coutu, C.E. Covault, A. Criss, J. Cronin, A. Curutiu, R. Dallier, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, M. De Domenico, S.J. de Jong, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, L. del Peral, O. Deligny, H. Dembinski, N. Dhital, C. Di Giulio, A. Di Matteo, J.C. Diaz, M.L. Díaz Castro, P.N. Diep, F. Diogo, C. Dobrigkeit, W. Docters, J.C. D'Olivo, P.N. Dong, A. Dorofeev, Q. Dorosti Hasankiadeh, M.T. Dova, J. Ebr, R. Engel, M. Erdmann, M. Erfani, C.O. Escobar, J. Espadanal, A. Etchegoyen, P. Facal San Luis, H. Falcke, K. Fang, G. Farrar, A.C. Fauth, N. Fazzini, A.P. Ferguson, M. Fernandes, B. Fick, J.M. Figueira, A. Filevich, A. Filipčič, B.D. Fox, O. Fratu, U. Fröhlich, B. Fuchs, T. Fuji, R. Gaior, B. García, S.T. Garcia Roca, D. Garcia-Gamez, D. Garcia-Pinto, G. Garilli, A. Gascon Bravo, F. Gate, H. Gemmeke, P.L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, C. Glaser, H. Glass, F. Gomez Albarracin, M. Gómez Berisso, P.F. 
Gómez Vitale, P. Gonçalves, J.G. Gonzalez, B. Gookin, A. Gorgi, P. Gorham, P. Gouffon, S. Grebe, N. Griffith, A.F. Grillo, T.D. Grubb, Y. Guardincerri, F. Guarino, G.P. Guedes, P. Hansen, D. Harari, T.A. Harrison, J.L. Harton, A. Haungs, T. Hebbeker, D. Heck, P. Heimann, A.E. Herve, G.C. Hill, C. Hojvat, N. Hollon, E. Holt, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, D. Huber, T. Huege, A. Insolia, P.G. Isar, K. Islo, I. Jandt, S. Jansen, C. Jarne, M. Josebachuili, A. Kääpä, O. Kambeitz, K.H. Kampert, P. Kasper, I. Katkov, B. Kégl, B. Keilhauer, A. Keivani, E. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, O. Krömer, D. Kruppke-Hansen, D. Kuempel, N. Kunka, G. La Rosa, D. LaHurd, L. Latronico, R. Lauer, M. Lauscher, P. Lautridou, S. Le Coz, M.S.A.B. Leão, D. Lebrun, P. Lebrun, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, R. López, A. Lopez Agëra, K. Louedec, J. Lozano Bahilo, L. Lu, A. Lucero, M. Ludwig, H. Lyberis, M.C. Maccarone, M. Malacari, S. Maldera, J. Maller, D. Mandat, P. Mantsch, A.G. Mariazzi, V. Marin, I.C. Mariş, G. Marsella, D. Martello, L. Martin, H. Martinez, O. Martínez Bravo, D. Martraire, J.J. Masías Meza, H.J. Mathes, S. Mathys, A.J. Matthews, J. Matthews, G. Matthiae, D. Maurel, D. Maurizio, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, M. Melissas, D. Melo, E. Menichetti, A. Menshikov, S. Messina, R. Meyhandan, S. Mićanović, M.I. Micheletti, L. Middendorf, I.A. Minaya, L. Miramonti, B. Mitrica, L. Molina-Bueno, S. Mollerach, M. Monasor, D. Monnier Ragaigne, F. Montanet, C. Morello, J.C. Moreno, M. Mostafá, C.A. Moura, M.A. Muller, G. Müller, M. Münchmeyer, R. Mussa, G. Navarra, S. Navas, P. Necesal, L. Nellen, A. Nelles, J. Neuser, D. Newton, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, L. Nožka, L. Ochilo, A. Olinto, M. Oliveira, V.M. Olmos-Gilbaja, M. Ortiz, N. Pacheco, D. Pakk Selmi-Dei, M. Palatka, J. Pallotta, N. Palmieri, P. 
Papenbreer, G. Parente, A. Parra, S. Pastor, T. Paul, M. Pech, J. Pȩkala, R. Pelayo, I.M. Pepe, L. Perrone, R. Pesce, E. Petermann, C. Peters, S. Petrera, A. Petrolini, Y. Petrov, R. Piegaia, T. Pierog, P. Pieroni, M. Pimenta, V. Pirronello, M. Platino, M. Plum, A. Porcelli, C. Porowski, P. Privitera, M. Prouza, V. Purrello, E.J. Quel, S. Querchfeld, S. Quinn, J. Rautenberg, O. Ravel, D. Ravignani, B. Revenu, J. Ridky, S. Riggi, M. Risse, P. Ristori, V. Rizi, J. Roberts, W. Rodrigues de Carvalho, I. Rodriguez Cabo, G. Rodriguez Fernandez, J. Rodriguez Rojo, M.D. Rodríguez-Frías, G. Ros, J. Rosado, T. Rossler, M. Roth, E. Roulet, A.C. Rovero, C. Rühle, S.J. Saffi, A. Saftoiu, F. Salamida, H. Salazar, F. Salesa Greus, G. Salina, F. Sánchez, P. Sanchez-Lucas, C.E. Santo, E. Santos, E.M. Santos, F. Sarazin, B. Sarkar, R. Sarmento, R. Sato, N. Scharf, V. Scherini, H. Schieler, P. Schiffer, A. Schmidt, O. Scholten, H. Schoorlemmer, P. Schovánek, A. Schulz, J. Schulz, S.J. Sciutto, A. Segreto, M. Settimo, A. Shadkam, R.C. Shellard, I. Sidelnik, G. Sigl, O. Sima, A. Śmiał kowski, R. Šmída, G.R. Snow, P. Sommers, J. Sorokin, R. Squartini, Y.N. Srivastava, S. Stanič, J. Stapleton, J. Stasielak, M. Stephan, A. Stutz, F. Suarez, T. Suomijärvi, A.D. Supanitsky, M.S. Sutherland, J. Swain, Z. Szadkowski, M. Szuba, O.A. Taborda, A. Tapia, M. Tartare, N.T. Thao, V.M. Theodoro, J. Tiffenberg, C. Timmermans, C.J. Todero Peixoto, G. Toma, L. Tomankova, B. Tomé, A. Tonachini, G. Torralba Elipe, D. Torres Machado, P. Travnicek, E. Trovato, M. Tueros, R. Ulrich, M. Unger, M. Urban, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, A.M. van den Berg, S. van Velzen, A. van Vliet, E. Varela, B. Vargas Cárdenas, G. Varner, J.R. Vázquez, R.A. Vázquez, D. Veberič, V. Verzi, J. Vicha, M. Videla, L. Villaseñor, B. Vlcek, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, K. Weidenhaupt, A. Weindl, F. Werner, B.J. Whelan, A. Widom, L. Wiencke, B. Wilczyńska, H. 
Wilczyński, M. Will, C. Williams, T. Winchen, D. Wittkowski, B. Wundheiler, S. Wykes, T. Yamamoto, T. Yapici, P. Younk, G. Yuan, A. Yushkov, B. Zamorano, E. Zas, D. Zavrtanik, M. Zavrtanik, I. Zaw, A. Zepeda, J. Zhou, Y. Zhu, M. Zimbres Silva, M. Ziolkowski
July 11, 2014 hep-ex, astro-ph.HE
We describe the method devised to reconstruct inclined cosmic-ray air showers with zenith angles greater than $60^\circ$ detected with the surface array of the Pierre Auger Observatory. The measured signals at the ground level are fitted to muon density distributions predicted with atmospheric cascade models to obtain the relative shower size as an overall normalization parameter. The method is evaluated using simulated showers to test its performance. The energy of the cosmic rays is calibrated using a sub-sample of events reconstructed with both the fluorescence and surface array techniques. The reconstruction method described here provides the basis of complementary analyses including an independent measurement of the energy spectrum of ultra-high energy cosmic rays using very inclined events collected by the Pierre Auger Observatory.
A Targeted Search for Point Sources of EeV Neutrons (1406.4038)
The Pierre Auger Collaboration: A. Aab, P. Abreu, M. Aglietta, M. Ahlers, E.J. Ahn, I. Al Samarai, I.F.M. Albuquerque, I. Allekotte, J. Allen, P. Allison, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, R. Alves Batista, M. Ambrosio, A. Aminaei, L. Anchordoqui, S. Andringa, C. Aramo, F. Arqueros, H. Asorey, P. Assis, J. Aublin, M. Ave, M. Avenier, G. Avila, A.M. Badescu, K.B. Barber, J. Bäuml, C. Baus, J.J. Beatty, K.H. Becker, J.A. Bellido, C. Berat, X. Bertou, P.L. Biermann, P. Billoir, F. Blanco, M. Blanco, C. Bleve, H. Blümer, M. Boháčová, D. Boncioli, C. Bonifazi, R. Bonino, N. Borodai, J. Brack, I. Brancus, P. Brogueira, W.C. Brown, P. Buchholz, A. Bueno, M. Buscemi, K.S. Caballero-Mora, B. Caccianiga, L. Caccianiga, M. Candusso, L. Caramete, R. Caruso, A. Castellina, G. Cataldi, L. Cazon, R. Cester, A.G. Chavez, S.H. Cheng, A. Chiavassa, J.A. Chinellato, J. Chudoba, M. Cilmo, R.W. Clay, G. Cocciolo, R. Colalillo, L. Collica, M.R. Coluccia, R. Conceição, F. Contreras, M.J. Cooper, S. Coutu, C.E. Covault, A. Criss, J. Cronin, A. Curutiu, R. Dallier, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, M. De Domenico, S.J. de Jong, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, L. del Peral, O. Deligny, H. Dembinski, N. Dhital, C. Di Giulio, A. Di Matteo, J.C. Diaz, M.L. Díaz Castro, P.N. Diep, F. Diogo, C. Dobrigkeit, W. Docters, J.C. D'Olivo, P.N. Dong, A. Dorofeev, M.T. Dova, J. Ebr, R. Engel, M. Erdmann, M. Erfani, C.O. Escobar, J. Espadanal, A. Etchegoyen, P. Facal San Luis, H. Falcke, K. Fang, G. Farrar, A.C. Fauth, N. Fazzini, A.P. Ferguson, M. Fernandes, B. Fick, J.M. Figueira, A. Filevich, A. Filipčič, B.D. Fox, O. Fratu, U. Fröhlich, B. Fuchs, T. Fuji, R. Gaior, B. García, S.T. Garcia Roca, D. Garcia-Gamez, D. Garcia-Pinto, G. Garilli, A. Gascon Bravo, F. Gate, H. Gemmeke, P.L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, C. Glaser, H. Glass, F. Gomez Albarracin, M. Gómez Berisso, P.F. Gómez Vitale, P. Gonçalves, J.G. 
Gonzalez, B. Gookin, A. Gorgi, P. Gorham, P. Gouffon, S. Grebe, N. Griffith, A.F. Grillo, T.D. Grubb, Y. Guardincerri, F. Guarino, G.P. Guedes, P. Hansen, D. Harari, T.A. Harrison, J.L. Harton, Q.D. Hasankiadeh, A. Haungs, T. Hebbeker, D. Heck, P. Heimann, A.E. Herve, G.C. Hill, C. Hojvat, N. Hollon, E. Holt, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, D. Huber, T. Huege, A. Insolia, P.G. Isar, K. Islo, I. Jandt, S. Jansen, C. Jarne, M. Josebachuili, A. Kääpä, O. Kambeitz, K.H. Kampert, P. Kasper, I. Katkov, B. Kégl, B. Keilhauer, A. Keivani, E. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, O. Krömer, D. Kruppke-Hansen, D. Kuempel, N. Kunka, G. La Rosa, D. LaHurd, L. Latronico, R. Lauer, M. Lauscher, P. Lautridou, S. Le Coz, M.S.A.B. Leão, D. Lebrun, P. Lebrun, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, R. López, A. Lopez Agüera, K. Louedec, J. Lozano Bahilo, L. Lu, A. Lucero, M. Ludwig, H. Lyberis, M.C. Maccarone, M. Malacari, S. Maldera, J. Maller, D. Mandat, P. Mantsch, A.G. Mariazzi, V. Marin, I.C. Mariş, G. Marsella, D. Martello, L. Martin, H. Martinez, O. Martínez Bravo, D. Martraire, J.J. Masías Meza, H.J. Mathes, S. Mathys, A.J. Matthews, J. Matthews, G. Matthiae, D. Maurel, D. Maurizio, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, M. Melissas, D. Melo, E. Menichetti, A. Menshikov, S. Messina, R. Meyhandan, S. Mićanović, M.I. Micheletti, L. Middendorf, I.A. Minaya, L. Miramonti, B. Mitrica, L. Molina-Bueno, S. Mollerach, M. Monasor, D. Monnier Ragaigne, F. Montanet, C. Morello, J.C. Moreno, M. Mostafá, C.A. Moura, M.A. Muller, G. Müller, M. Münchmeyer, R. Mussa, G. Navarra, S. Navas, P. Necesal, L. Nellen, A. Nelles, J. Neuser, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, L. Nožka, L. Ochilo, A. Olinto, M. Oliveira, M. Ortiz, N. Pacheco, D. Pakk Selmi-Dei, M. Palatka, J. Pallotta, N. Palmieri, P. Papenbreer, G. Parente, A. Parra, S. Pastor, T. 
Paul, M. Pech, J. Pȩkala, R. Pelayo, I.M. Pepe, L. Perrone, R. Pesce, E. Petermann, C. Peters, S. Petrera, A. Petrolini, Y. Petrov, R. Piegaia, T. Pierog, P. Pieroni, M. Pimenta, V. Pirronello, M. Platino, M. Plum, A. Porcelli, C. Porowski, P. Privitera, M. Prouza, V. Purrello, E.J. Quel, S. Querchfeld, S. Quinn, J. Rautenberg, O. Ravel, D. Ravignani, B. Revenu, J. Ridky, S. Riggi, M. Risse, P. Ristori, V. Rizi, J. Roberts, W. Rodrigues de Carvalho, I. Rodriguez Cabo, G. Rodriguez Fernandez, J. Rodriguez Rojo, M.D. Rodríguez-Frías, G. Ros, J. Rosado, T. Rossler, M. Roth, E. Roulet, A.C. Rovero, C. Rühle, S.J. Saffi, A. Saftoiu, F. Salamida, H. Salazar, F. Salesa Greus, G. Salina, F. Sánchez, P. Sanchez-Lucas, C.E. Santo, E. Santos, E.M. Santos, F. Sarazin, B. Sarkar, R. Sarmento, R. Sato, N. Scharf, V. Scherini, H. Schieler, P. Schiffer, A. Schmidt, O. Scholten, H. Schoorlemmer, P. Schovánek, A. Schulz, J. Schulz, S.J. Sciutto, A. Segreto, M. Settimo, A. Shadkam, R.C. Shellard, I. Sidelnik, G. Sigl, O. Sima, A. Śmiałkowski, R. Šmída, G.R. Snow, P. Sommers, J. Sorokin, R. Squartini, Y.N. Srivastava, S. Stanič, J. Stapleton, J. Stasielak, M. Stephan, A. Stutz, F. Suarez, T. Suomijärvi, A.D. Supanitsky, M.S. Sutherland, J. Swain, Z. Szadkowski, M. Szuba, O.A. Taborda, A. Tapia, M. Tartare, N.T. Thao, V.M. Theodoro, J. Tiffenberg, C. Timmermans, C.J. Todero Peixoto, G. Toma, L. Tomankova, B. Tomé, A. Tonachini, G. Torralba Elipe, D. Torres Machado, P. Travnicek, E. Trovato, M. Tueros, R. Ulrich, M. Unger, M. Urban, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, A.M. van den Berg, S. van Velzen, A. van Vliet, E. Varela, B. Vargas Cárdenas, G. Varner, J.R. Vázquez, R.A. Vázquez, D. Veberič, V. Verzi, J. Vicha, M. Videla, L. Villaseñor, B. Vlcek, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, K. Weidenhaupt, A. Weindl, F. Werner, B.J. Whelan, A. Widom, L. Wiencke, B. Wilczyńska, H. Wilczyński, M. Will, C. Williams, T. Winchen, D. Wittkowski, B. 
Wundheiler, S. Wykes, T. Yamamoto, T. Yapici, P. Younk, G. Yuan, A. Yushkov, B. Zamorano, E. Zas, D. Zavrtanik, M. Zavrtanik, I. Zaw, A. Zepeda, J. Zhou, Y. Zhu, M. Zimbres Silva, M. Ziolkowski
June 16, 2014 astro-ph.HE
A flux of neutrons from an astrophysical source in the Galaxy can be detected in the Pierre Auger Observatory as an excess of cosmic-ray air showers arriving from the direction of the source. To avoid the statistical penalty for making many trials, classes of objects are tested in combinations as nine "target sets", in addition to the search for a neutron flux from the Galactic Center or from the Galactic Plane. Within a target set, each candidate source is weighted in proportion to its electromagnetic flux, its exposure to the Auger Observatory, and its flux attenuation factor due to neutron decay. These searches do not find evidence for a neutron flux from any class of candidate sources. Tabulated results give the combined p-value for each class, with and without the weights, and also the flux upper limit for the most significant candidate source within each class. These limits on fluxes of neutrons significantly constrain models of EeV proton emission from non-transient discrete sources in the Galaxy.
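The weighting scheme described above (weight proportional to electromagnetic flux, exposure, and neutron-decay attenuation) can be sketched as follows. This is an illustrative toy, not the Auger analysis code; the function names, the example source values, and the use of a ~9.2 kpc/EeV neutron decay length are assumptions for demonstration only.

```python
# Illustrative sketch: per-source weights within one "target set",
# proportional to electromagnetic flux x exposure x neutron survival
# probability, then normalized. Not the actual Auger pipeline.
import math

def neutron_survival(distance_kpc, energy_eev):
    """Fraction of neutrons surviving beta decay over a Galactic distance.
    Assumes a mean decay length of ~9.2 kpc per EeV of neutron energy."""
    decay_length_kpc = 9.2 * energy_eev
    return math.exp(-distance_kpc / decay_length_kpc)

def target_weights(sources, energy_eev=1.0):
    """sources: list of dicts with 'flux', 'exposure', 'distance_kpc'.
    Returns normalized weights for combining candidates in a target set."""
    raw = [s["flux"] * s["exposure"] *
           neutron_survival(s["distance_kpc"], energy_eev)
           for s in sources]
    total = sum(raw)
    return [r / total for r in raw]

# Two hypothetical candidate sources (made-up numbers).
sources = [
    {"flux": 3.0, "exposure": 1.0, "distance_kpc": 2.0},
    {"flux": 1.0, "exposure": 0.8, "distance_kpc": 8.0},
]
w = target_weights(sources)
```

The normalization ensures the weights sum to one within each target set, so brighter, closer, better-exposed candidates dominate the combined p-value.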
A search for point sources of EeV photons (1406.2912)
Pierre Auger Collaboration: A. Aab, P. Abreu, M. Aglietta, M. Ahlers, E.J. Ahn, I. Al Samarai, I.F.M. Albuquerque, I. Allekotte, J. Allen, P. Allison, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, R. Alves Batista, M. Ambrosio, A. Aminaei, L. Anchordoqui, S. Andringa, C. Aramo, F. Arqueros, H. Asorey, P. Assis, J. Aublin, M. Ave, M. Avenier, G. Avila, A.M. Badescu, K.B. Barber, J. Bäuml, C. Baus, J.J. Beatty, K.H. Becker, J.A. Bellido, C. Berat, X. Bertou, P.L. Biermann, P. Billoir, F. Blanco, M. Blanco, C. Bleve, H. Blümer, M. Boháčová, D. Boncioli, C. Bonifazi, R. Bonino, N. Borodai, J. Brack, I. Brancus, P. Brogueira, W.C. Brown, P. Buchholz, A. Bueno, M. Buscemi, K.S. Caballero-Mora, B. Caccianiga, L. Caccianiga, M. Candusso, L. Caramete, R. Caruso, A. Castellina, G. Cataldi, L. Cazon, R. Cester, A.G. Chavez, S.H. Cheng, A. Chiavassa, J.A. Chinellato, J. Chudoba, M. Cilmo, R.W. Clay, G. Cocciolo, R. Colalillo, L. Collica, M.R. Coluccia, R. Conceição, F. Contreras, M.J. Cooper, S. Coutu, C.E. Covault, A. Criss, J. Cronin, A. Curutiu, R. Dallier, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, M. De Domenico, S.J. de Jong, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, L. del Peral, O. Deligny, H. Dembinski, N. Dhital, C. Di Giulio, A. Di Matteo, J.C. Diaz, M.L. Díaz Castro, P.N. Diep, F. Diogo, C. Dobrigkeit, W. Docters, J.C. D'Olivo, P.N. Dong, A. Dorofeev, Q. Dorosti Hasankiadeh, M.T. Dova, J. Ebr, R. Engel, M. Erdmann, M. Erfani, C.O. Escobar, J. Espadanal, A. Etchegoyen, P. Facal San Luis, H. Falcke, K. Fang, G. Farrar, A.C. Fauth, N. Fazzini, A.P. Ferguson, M. Fernandes, B. Fick, J.M. Figueira, A. Filevich, A. Filipčič, B.D. Fox, O. Fratu, U. Fröhlich, B. Fuchs, T. Fuji, R. Gaior, B. García, S.T. Garcia Roca, D. Garcia-Gamez, D. Garcia-Pinto, G. Garilli, A. Gascon Bravo, F. Gate, H. Gemmeke, P.L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, C. Glaser, H. Glass, F. Gomez Albarracin, M. Gómez Berisso, P.F. 
Gómez Vitale, P. Gonçalves, J.G. Gonzalez, B. Gookin, A. Gorgi, P. Gorham, P. Gouffon, S. Grebe, N. Griffith, A.F. Grillo, T.D. Grubb, Y. Guardincerri, F. Guarino, G.P. Guedes, P. Hansen, D. Harari, T.A. Harrison, S. Hartmann, J.L. Harton, A. Haungs, T. Hebbeker, D. Heck, P. Heimann, A.E. Herve, G.C. Hill, C. Hojvat, N. Hollon, E. Holt, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, D. Huber, T. Huege, A. Insolia, P.G. Isar, K. Islo, I. Jandt, S. Jansen, C. Jarne, M. Josebachuili, A. Kääpä, O. Kambeitz, K.H. Kampert, P. Kasper, I. Katkov, B. Kégl, B. Keilhauer, A. Keivani, E. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, O. Krömer, D. Kruppke-Hansen, D. Kuempel, N. Kunka, G. La Rosa, D. LaHurd, L. Latronico, R. Lauer, M. Lauscher, P. Lautridou, S. Le Coz, M.S.A.B. Leão, D. Lebrun, P. Lebrun, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, R. López, A. Lopez Agüera, K. Louedec, J. Lozano Bahilo, L. Lu, A. Lucero, M. Ludwig, H. Lyberis, M.C. Maccarone, M. Malacari, S. Maldera, J. Maller, D. Mandat, P. Mantsch, A.G. Mariazzi, V. Marin, I.C. Mariş, G. Marsella, D. Martello, L. Martin, H. Martinez, O. Martínez Bravo, D. Martraire, J.J. Masías Meza, H.J. Mathes, S. Mathys, A.J. Matthews, J. Matthews, G. Matthiae, D. Maurel, D. Maurizio, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, M. Melissas, D. Melo, E. Menichetti, A. Menshikov, S. Messina, R. Meyhandan, S. Mićanović, M.I. Micheletti, L. Middendorf, I.A. Minaya, L. Miramonti, B. Mitrica, L. Molina-Bueno, S. Mollerach, M. Monasor, D. Monnier Ragaigne, F. Montanet, C. Morello, J.C. Moreno, M. Mostafá, C.A. Moura, M.A. Muller, G. Müller, M. Münchmeyer, R. Mussa, G. Navarra, S. Navas, P. Necesal, L. Nellen, A. Nelles, J. Neuser, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, L. Nožka, L. Ochilo, A. Olinto, M. Oliveira, M. Ortiz, N. Pacheco, D. Pakk Selmi-Dei, M. Palatka, J. Pallotta, N. Palmieri, P. Papenbreer, G. 
Parente, A. Parra, S. Pastor, T. Paul, M. Pech, J. Pękala, R. Pelayo, I.M. Pepe, L. Perrone, R. Pesce, E. Petermann, C. Peters, S. Petrera, A. Petrolini, Y. Petrov, R. Piegaia, T. Pierog, P. Pieroni, M. Pimenta, V. Pirronello, M. Platino, M. Plum, A. Porcelli, C. Porowski, P. Privitera, M. Prouza, V. Purrello, E.J. Quel, S. Querchfeld, S. Quinn, J. Rautenberg, O. Ravel, D. Ravignani, B. Revenu, J. Ridky, S. Riggi, M. Risse, P. Ristori, V. Rizi, J. Roberts, W. Rodrigues de Carvalho, I. Rodriguez Cabo, G. Rodriguez Fernandez, J. Rodriguez Rojo, M.D. Rodríguez-Frías, G. Ros, J. Rosado, T. Rossler, M. Roth, E. Roulet, A.C. Rovero, C. Rühle, S.J. Saffi, A. Saftoiu, F. Salamida, H. Salazar, F. Salesa Greus, G. Salina, F. Sánchez, P. Sanchez-Lucas, C.E. Santo, E. Santos, E.M. Santos, F. Sarazin, B. Sarkar, R. Sarmento, R. Sato, N. Scharf, V. Scherini, H. Schieler, P. Schiffer, O. Scholten, H. Schoorlemmer, P. Schovánek, A. Schulz, J. Schulz, S.J. Sciutto, A. Segreto, M. Settimo, A. Shadkam, R.C. Shellard, I. Sidelnik, G. Sigl, O. Sima, A. Śmiałkowski, R. Šmída, G.R. Snow, P. Sommers, J. Sorokin, R. Squartini, Y.N. Srivastava, S. Stanič, J. Stapleton, J. Stasielak, M. Stephan, A. Stutz, F. Suarez, T. Suomijärvi, A.D. Supanitsky, M.S. Sutherland, J. Swain, Z. Szadkowski, M. Szuba, O.A. Taborda, A. Tapia, M. Tartare, N.T. Thao, V.M. Theodoro, J. Tiffenberg, C. Timmermans, C.J. Todero Peixoto, G. Toma, L. Tomankova, B. Tomé, A. Tonachini, G. Torralba Elipe, D. Torres Machado, P. Travnicek, E. Trovato, M. Tueros, R. Ulrich, M. Unger, M. Urban, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, A.M. van den Berg, S. van Velzen, A. van Vliet, E. Varela, B. Vargas Cárdenas, G. Varner, J.R. Vázquez, R.A. Vázquez, D. Veberič, V. Verzi, J. Vicha, M. Videla, L. Villaseñor, B. Vlcek, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, K. Weidenhaupt, A. Weindl, F. Werner, B.J. Whelan, A. Widom, L. Wiencke, B. Wilczyńska, H. Wilczyński, M. Will, C. Williams, T. 
Winchen, D. Wittkowski, B. Wundheiler, S. Wykes, T. Yamamoto, T. Yapici, P. Younk, G. Yuan, A. Yushkov, B. Zamorano, E. Zas, D. Zavrtanik, M. Zavrtanik, I. Zaw, A. Zepeda, J. Zhou, Y. Zhu, M. Zimbres Silva, M. Ziolkowski
Measurements of air showers made using the hybrid technique developed with the fluorescence and surface detectors of the Pierre Auger Observatory allow a sensitive search for point sources of EeV photons anywhere in the exposed sky. A multivariate analysis reduces the background of hadronic cosmic rays. The search is sensitive to a declination band from -85° to +20°, in an energy range from 10$^{17.3}$ eV to 10$^{18.5}$ eV. No photon point source has been detected. An upper limit on the photon flux has been derived for every direction. The mean value of the energy flux limit that results from this, assuming a photon spectral index of -2, is 0.06 eV cm$^{-2}$ s$^{-1}$, and no celestial direction exceeds 0.25 eV cm$^{-2}$ s$^{-1}$. These upper limits constrain scenarios in which EeV cosmic ray protons are emitted by non-transient sources in the Galaxy.
The Very Large Array Low-frequency Sky Survey Redux (VLSSr) (1404.0694)
W. M. Lane, W. D. Cotton, S. van Velzen, T. E. Clarke, N. E. Kassim, J. F. Helmboldt, T. J. W. Lazio, A. S. Cohen
April 2, 2014 astro-ph.CO, astro-ph.IM
We present the results of a recent re-reduction of the data from the Very Large Array (VLA) Low-frequency Sky Survey (VLSS). We used the VLSS catalog as a sky model to correct the ionospheric distortions in the data and create a new set of sky maps and corresponding catalog at 73.8 MHz. The VLSS Redux (VLSSr) has a resolution of 75 arcsec, and an average map RMS noise level of $\sigma\sim0.1$ Jy beam$^{-1}$. The clean bias is $0.66\times\sigma$, and the theoretical largest angular size is 36 arcmin. Six previously un-imaged fields are included in the VLSSr, which has an unbroken sky coverage over 9.3 sr above an irregular southern boundary. The final catalog includes 92,964 sources. The VLSSr improves upon the original VLSS in a number of areas, including imaging of large sources, image sensitivity, and clean bias; however, the most critical improvement is the replacement of an inaccurate primary beam correction, which caused source flux errors in the VLSS that vary as a function of distance to the nearest pointing center.
Highlights from the Pierre Auger Observatory (1310.4620)
Antoine Letessier-Selvon, A. Aab, P. Abreu, M. Aglietta, M. Ahlers, E.J. Ahn, I.F.M. Albuquerque, I. Allekotte, J. Allen, P. Allison, A. Almela, J. Alvarez Castillo, J. Alvarez-Muniz, R. Alves Batista, M. Ambrosio, A. Aminaei, L. Anchordoqui, S. Andringa, T. Antivcic, C. Aramo, F. Arqueros, H. Asorey, P. Assis, J. Aublin, M. Ave, M. Avenier, G. Avila, A.M. Badescu, K.B. Barber, R. Bardenet, J. Baeuml, C. Baus, J.J. Beatty, K.H. Becker, A. Belletoile, J.A. Bellido, S. BenZvi, C. Berat, X. Bertou, P.L. Biermann, P. Billoir, F. Blanco, M. Blanco, C. Bleve, H. Blumer, M. Bohacova, D. Boncioli, C. Bonifazi, R. Bonino, N. Borodai, J. Brack, I. Brancus, P. Brogueira, W.C. Brown, P. Buchholz, A. Bueno, R.E. Burton, M. Buscemi, K.S. Caballero-Mora, B. Caccianiga, L. Caccianiga, M. Candusso, L. Caramete, R. Caruso, A. Castellina, G. Cataldi, L. Cazon, R. Cester, S.H. Cheng, A. Chiavassa, J.A. Chinellato, J. Chudoba, M. Cilmo, R.W. Clay, G. Cocciolo, R. Colalillo, L. Collica, M.R. Coluccia, R. Conceicao, F. Contreras, H. Cook, M.J. Cooper, S. Coutu, C.E. Covault, A. Criss, J. Cronin, A. Curutiu, R. Dallier, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, M. De Domenico, S.J. de Jong, G. De La Vega, W.J.M. de Mello Junior, J.R.T. de Mello Neto, I. De Mitri, V. de Souza, K.D. de Vries, L. del Peral, O. Deligny, H. Dembinski, N. Dhital, C. Di Giulio, J.C. Diaz, M.L. Diaz Castro, P.N. Diep, F. Diogo, C. Dobrigkeit, W. Docters, J.C. D'Olivo, P.N. Dong, A. Dorofeev, J.C. dos Anjos, M.T. Dova, J. Ebr, R. Engel, M. Erdmann, C.O. Escobar, J. Espadanal, A. Etchegoyen, P. Facal San Luis, H. Falcke, K. Fang, G. Farrar, A.C. Fauth, N. Fazzini, A.P. Ferguson, B. Fick, J.M. Figueira, A. Filevich, A. Filipcic, N. Foerster, B.D. Fox, C.E. Fracchiolla, E.D. Fraenkel, O. Fratu, U. Frohlich, B. Fuchs, R. Gaior, R.F. Gamarra, S. Gambetta, B. Garcia, S.T. Garcia Roca, D. Garcia-Gamez, D. Garcia-Pinto, G. Garilli, A. Gascon Bravo, H. Gemmeke, P.L. Ghia, M. Giller, J. Gitto, C. 
Glaser, H. Glass, F. Gomez Albarracin, M. Gomez Berisso, P.F. Gomez Vitale, P. Goncalves, J.G. Gonzalez, B. Gookin, A. Gorgi, P. Gorham, P. Gouffon, S. Grebe, N. Griffith, A.F. Grillo, T.D. Grubb, Y. Guardincerri, F. Guarino, G.P. Guedes, P. Hansen, D. Harari, T.A. Harrison, J.L. Harton, A. Haungs, T. Hebbeker, D. Heck, A.E. Herve, G.C. Hill, C. Hojvat, N. Hollon, P. Homola, J.R. Hoerandel, P. Horvath, M. Hrabovsky, D. Huber, T. Huege, A. Insolia, P.G. Isar, S. Jansen, C. Jarne, M. Josebachuili, K. Kadija, O. Kambeitz, K.H. Kampert, P. Karhan, P. Kasper, I. Katkov, B. Kegl, B. Keilhauer, A. Keivani, E. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, J. Knapp d, R. Krause, N. Krohm, O. Kroemer, D. Kruppke-Hansen, D. Kuempel, N. Kunka, G. La Rosa, D. LaHurd, L. Latronico, R. Lauer, M. Lauscher, P. Lautridou, S. Le Coz, M.S.A.B. Leao, D. Lebrun, P. Lebrun, M.A. Leigui de Oliveira, I. Lhenry-Yvon, K. Link, R. Lopez, A. Lopez Aguera, K. Louedec, J. Lozano Bahilo, L. Lu, A. Lucero, M. Ludwig, H. Lyberis, M.C. Maccarone, C. Macolino, M. Malacari, S. Maldera, J. Maller, D. Mandat, P. Mantsch, A.G. Mariazzi, V. Marin, I.C. Maris, H.R. Marquez Falcon, G. Marsella, D. Martello, L. Martin, H. Martinez, O. Martinez Bravo, D. Martraire, J.J. Masias Meza, H.J. Mathes, J. Matthews, J.A.J. Matthews, G. Matthiae, D. Maurel, D. Maurizio, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, M. Melissas, D. Melo, E. Menichetti, A. Menshikov, S. Messina, R. Meyhandan, S. Micanovic, M.I. Micheletti, L. Middendorf, I.A. Minaya, L. Miramonti, B. Mitrica, L. Molina-Bueno, S. Mollerach, M. Monasor, D. Monnier Ragaigne, F. Montanet, B. Morales, C. Morello, J.C. Moreno, M. Mostafa, C.A. Moura, M.A. Muller, G. Muller, M. Munchmeyer, R. Mussa, G. Navarra, J.L. Navarro, S. Navas, P. Necesal, L. Nellen, A. Nelles, J. Neuser, P.T. Nhung, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, L. Novzka, J. Oehlschlager, A. Olinto, M. Oliveira, M. Ortiz, N. Pacheco, D. 
Pakk Selmi-Dei, M. Palatka, J. Pallotta, N. Palmieri, G. Parente, A. Parra, S. Pastor, T. Paul, M. Pech, J. Pekala, R. Pelayo, I.M. Pepe, L. Perrone, R. Pesce, E. Petermann, S. Petrera, A. Petrolini, Y. Petrov, R. Piegaia, T. Pierog, P. Pieroni, M. Pimenta, V. Pirronello, M. Platino, M. Plum, M. Pontz, A. Porcelli, T. Preda, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, J. Rautenberg, O. Ravel, D. Ravignani, B. Revenu, J. Ridky, S. Riggi, M. Risse, P. Ristori, H. Rivera, V. Rizi, J. Roberts, W. Rodrigues de Carvalho, I. Rodriguez Cabo, G. Rodriguez Fernandez, J. Rodriguez Martino, J. Rodriguez Rojo, M.D. Rodriguez-Frias, G. Ros, J. Rosado, T. Rossler, M. Roth, B. Rouille-d'Orfeuil, E. Roulet, A.C. Rovero, C. Ruhle, S.J. Saffi, A. Saftoiu, F. Salamida, H. Salazar, F. Salesa Greus, G. Salina, F. Sanchez, P. Sanchez-Lucas, C.E. Santo, E. Santos, E.M. Santos, F. Sarazin, B. Sarkar, R. Sato, N. Scharf, V. Scherini, H. Schieler, P. Schiffer, A. Schmidt, O. Scholten, H. Schoorlemmer, P. Schovanek, F.G. Schroeder, A. Schulz, J. Schulz, S.J. Sciutto, M. Scuderi, A. Segreto, M. Settimo, A. Shadkam, R.C. Shellard, I. Sidelnik, G. Sigl, O. Sima, A. Smialkowski, R. Smida, G.R. Snow, P. Sommers, J. Sorokin, H. Spinka, R. Squartini, Y.N. Srivastava, S. Stanic, J. Stapleton, J. Stasielak, M. Stephan, M. Straub, A. Stutz, F. Suarez, T. Suomijarvi, A.D. Supanitsky, T. Susa, M.S. Sutherland, J. Swain, Z. Szadkowski, M. Szuba, A. Tapia, M. Tartare, O. Tacscuau, R. Tcaciuc, N.T. Thao, J. Tiffenberg, C. Timmermans, W. Tkaczyk, C.J. Todero Peixoto, G. Toma, L. Tomankova, B. Tome, A. Tonachini, G. Torralba Elipe, D. Torres Machado, P. Travnicek, D.B. Tridapalli, E. Trovato, M. Tueros, R. Ulrich, M. Unger, J.F. Valdes Galicia, I. Valino, L. Valore, G. van Aar, A.M. van den Berg, S. van Velzen, A. van Vliet, E. Varela, B. Vargas Cardenas, G. Varner, J.R. Vazquez, R.A. Vazquez, D. Veberic, V. Verzi, J. Vicha, M. Videla, L. Villasenor, H. Wahlberg, P. Wahrlich, O. Wainberg, D. 
Walz, A.A. Watson, M. Weber, K. Weidenhaupt, A. Weindl, F. Werner, S. Westerhoff, B.J. Whelan, A. Widom, G. Wieczorek, L. Wiencke, B. Wilczynska, H. Wilczynski, M. Will, C. Williams, T. Winchen, B. Wundheiler, S. Wykes, T. Yamamoto, T. Yapici, P. Younk, G. Yuan, A. Yushkov, B. Zamorano, E. Zas, D. Zavrtanik, M. Zavrtanik, I. Zaw, A. Zepeda, J. Zhou, Y. Zhu, M. Zimbres Silva, M. Ziolkowski (the Pierre Auger Collaboration)
Oct. 19, 2013 astro-ph.HE
The Pierre Auger Observatory is the world's largest cosmic ray observatory. Our current exposure reaches nearly 40,000 km$^2$ sr and provides us with a data set of unprecedented quality. The performance and stability of the detectors and their enhancements are described. Data analyses have led to a number of major breakthroughs. Among these we discuss the energy spectrum and the searches for large-scale anisotropies. We present analyses of our X$_{max}$ data and show how they can be interpreted in terms of mass composition. We also describe some new analyses that extract mass-sensitive parameters from the 100% duty cycle SD data. A coherent interpretation of all these recent results opens new directions. The consequences regarding the cosmic ray composition and the properties of UHECR sources are briefly discussed.
Pierre Auger Observatory and Telescope Array: Joint Contributions to the 33rd International Cosmic Ray Conference (ICRC 2013) (1310.0647)
The Telescope Array, Pierre Auger Collaborations: T. Abu-Zayyad, M. Allen, R. Anderson, R. Azuma, E. Barcikowski, J. W Belz, D. R. Bergman, S. A. Blake, R. Cady, M. J. Chae, B. G. Cheon, J. Chiba, M. Chikawa, W. R. Cho, T. Fujii, M. Fukushima, K. Goto, W. Hanlon, Y. Hayashi, N. Hayashida, K. Hibino, K. Honda, D. Ikeda, N. Inoue, T. Ishii, R. Ishimori, H. Ito, D. Ivanov, C. C. H. Jui, K. Kadota, F. Kakimoto, O. Kalashev, K. Kasahara, H. Kawai, S. Kawakami, S. Kawana, K. Kawata, E. Kido, H. B. Kim, J. H. Kim, J. H. Kim, S. Kitamura, Y. Kitamura, V. Kuzmin, Y. J. Kwon, J. Lan, J. P. Lundquist, K. Machida, K. Martens, T. Matsuda, T. Matsuyama, J. N. Matthews, M. Minamino, K. Mukai, I. Myers, K. Nagasawa, S. Nagataki, T. Nakamura, H. Nanpei, T. Nonaka, A. Nozato, S. Ogio, S. Oh, M. Ohnishi, H. Ohoka, K. Oki, T. Okuda, M. Ono, A. Oshima, S. Ozawa, I. H. Park, M. S. Pshirkov, D. C. Rodriguez, G. Rubtsov, D. Ryu, H. Sagawa, N. Sakurai, A. L. Sampson, L. M. Scott, P. D. Shah, F. Shibata, T. Shibata, H. Shimodaira, B. K. Shin, T. Shirahama, J. D. Smith, P. Sokolsky, R. W. Springer, B. T. Stokes, S. R. Stratton, T. A. Stroman, M. Takamura, M. Takeda, A. Taketa, M. Takita, Y. Tameda, H. Tanaka, K. Tanaka, M. Tanaka, S. B. Thomas, G. B. Thomson, P. Tinyakov, I. Tkachev, H. Tokuno, T. Tomida, S. Troitsky, Y. Tsunesada, K. Tsutsumi, Y. Uchihori, S. Udo, F. Urban, G. Vasiloff, Y. Wada, T. Wong, H. Yamaoka, K. Yamazaki, J. Yang, K. Yashiro, Y. Yoneda, S. Yoshida, H. Yoshii, R. Zollinger, Z. Zundel, A. Aab, P. Abreu, M. Aglietta, M. Ahlers, E.J. Ahn, I.F.M. Albuquerque, I. Allekotte, J. Allen, P. Allison, A. Almela, J. Alvarez Castillo, J. Alvarez-Muniz, R. Alves Batista, M. Ambrosio, A. Aminaei, L. Anchordoqui, S. Andringa, T. Antivcic, C. Aramo, F. Arqueros, H. Asorey, P. Assis, J. Aublin, M. Ave, M. Avenier, G. Avila, A.M. Badescu, K.B. Barber, R. Bardenet, J. Baeuml, C. Baus, J.J. Beatty, K.H. Becker, A. Belletoile, J.A. Bellido, S. BenZvi, C. Berat, X. Bertou, P.L. Biermann, P. 
Billoir, F. Blanco, M. Blanco, C. Bleve, H. Blumer, M. Bohacova, D. Boncioli, C. Bonifazi, R. Bonino, N. Borodai, J. Brack, I. Brancus, P. Brogueira, W.C. Brown, P. Buchholz, A. Bueno, R.E. Burton, M. Buscemi, K.S. Caballero-Mora, B. Caccianiga, L. Caccianiga, M. Candusso, L. Caramete, R. Caruso, A. Castellina, G. Cataldi, L. Cazon, R. Cester, S.H. Cheng, A. Chiavassa, J.A. Chinellato, J. Chudoba, M. Cilmo, R.W. Clay, G. Cocciolo, R. Colalillo, L. Collica, M.R. Coluccia, R. Conceicao, F. Contreras, H. Cook, M.J. Cooper, S. Coutu, C.E. Covault, A. Criss, J. Cronin, A. Curutiu, R. Dallier, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, M. De Domenico, S.J. de Jong, G. De La Vega, W.J.M. de Mello Junior, J.R.T. de Mello Neto, I. De Mitri, V. de Souza, K.D. de Vries, L. del Peral, O. Deligny, H. Dembinski, N. Dhital, C. Di Giulio, J.C. Diaz, M.L. Diaz Castro, P.N. Diep, F. Diogo, C. Dobrigkeit, W. Docters, J.C. D'Olivo, P.N. Dong, A. Dorofeev, J.C. dos Anjos, M.T. Dova, J. Ebr, R. Engel, M. Erdmann, C.O. Escobar, J. Espadanal, A. Etchegoyen, P. Facal San Luis, H. Falcke, K. Fang, G. Farrar, A.C. Fauth, N. Fazzini, A.P. Ferguson, B. Fick, J.M. Figueira, A. Filevich, A. Filipcic, N. Foerster, B.D. Fox, C.E. Fracchiolla, E.D. Fraenkel, O. Fratu, U. Frohlich, B. Fuchs, R. Gaior, R.F. Gamarra, S. Gambetta, B. Garcia, S.T. Garcia Roca, D. Garcia-Gamez, D. Garcia-Pinto, G. Garilli, A. Gascon Bravo, H. Gemmeke, P.L. Ghia, M. Giller, J. Gitto, C. Glaser, H. Glass, F. Gomez Albarracin, M. Gomez Berisso, P.F. Gomez Vitale, P. Goncalves, J.G. Gonzalez, B. Gookin, A. Gorgi, P. Gorham, P. Gouffon, S. Grebe, N. Griffith, A.F. Grillo, T.D. Grubb, Y. Guardincerri, F. Guarino, G.P. Guedes, P. Hansen, D. Harari, T.A. Harrison, J.L. Harton, A. Haungs, T. Hebbeker, D. Heck, A.E. Herve, G.C. Hill, C. Hojvat, N. Hollon, P. Homola, J.R. Hoerandel, P. Horvath, M. Hrabovsky, D. Huber, T. Huege, A. Insolia, P.G. Isar, S. Jansen, C. Jarne, M. Josebachuili, K. Kadija, O. 
Kambeitz, K.H. Kampert, P. Karhan, P. Kasper, I. Katkov, B. Kegl, B. Keilhauer, A. Keivani, E. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, J. Knapp, R. Krause, N. Krohm, O. Kroemer, D. Kruppke-Hansen, D. Kuempel, N. Kunka, G. La Rosa, D. LaHurd, L. Latronico, R. Lauer, M. Lauscher, P. Lautridou, S. Le Coz, M.S.A.B. Leao, D. Lebrun, P. Lebrun, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, R. Lopez, A. Lopez Aguera, K. Louedec, J. Lozano Bahilo, L. Lu, A. Lucero, M. Ludwig, H. Lyberis, M.C. Maccarone, C. Macolino, M. Malacari, S. Maldera, J. Maller, D. Mandat, P. Mantsch, A.G. Mariazzi, V. Marin, I.C. Maris, H.R. Marquez Falcon, G. Marsella, D. Martello, L. Martin, H. Martinez, O. Martinez Bravo, D. Martraire, J.J. Masias Meza, H.J. Mathes, J. Matthews, J.A.J. Matthews, G. Matthiae, D. Maurel, D. Maurizio, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, M. Melissas, D. Melo, E. Menichetti, A. Menshikov, S. Messina, R. Meyhandan, S. Micanovic, M.I. Micheletti, L. Middendorf, I.A. Minaya, L. Miramonti, B. Mitrica, L. Molina-Bueno, S. Mollerach, M. Monasor, D. Monnier Ragaigne, F. Montanet, B. Morales, C. Morello, J.C. Moreno, M. Mostafa, C.A. Moura, M.A. Muller, G. Muller, M. Munchmeyer, R. Mussa, G. Navarra, J.L. Navarro, S. Navas, P. Necesal, L. Nellen, A. Nelles, J. Neuser, P.T. Nhung, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, L. Novzka, J. Oehlschlager, A. Olinto, M. Oliveira, M. Ortiz, N. Pacheco, D. Pakk Selmi-Dei, M. Palatka, J. Pallotta, N. Palmieri, G. Parente, A. Parra, S. Pastor, T. Paul, M. Pech, J. Pekala, R. Pelayo, I.M. Pepe, L. Perrone, R. Pesce, E. Petermann, S. Petrera, A. Petrolini, Y. Petrov, R. Piegaia, T. Pierog, P. Pieroni, M. Pimenta, V. Pirronello, M. Platino, M. Plum, M. Pontz, A. Porcelli, T. Preda, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, J. Rautenberg, O. Ravel, D. Ravignani, B. Revenu, J. Ridky, S. Riggi, M. Risse, P. Ristori, H. Rivera, V. Rizi, J. 
Roberts, W. Rodrigues de Carvalho, I. Rodriguez Cabo, G. Rodriguez Fernandez, J. Rodriguez Martino, J. Rodriguez Rojo, M.D. Rodriguez-Frias, G. Ros, J. Rosado, T. Rossler, M. Roth, B. Rouille-d'Orfeuil, E. Roulet, A.C. Rovero, C. Ruhle, S.J. Saffi, A. Saftoiu, F. Salamida, H. Salazar, F. Salesa Greus, G. Salina, F. Sanchez, P. Sanchez-Lucas, C.E. Santo, E. Santos, E.M. Santos, F. Sarazin, B. Sarkar, R. Sato, N. Scharf, V. Scherini, H. Schieler, P. Schiffer, A. Schmidt, O. Scholten, H. Schoorlemmer, P. Schovanek, F.G. Schroeder, A. Schulz, J. Schulz, S.J. Sciutto, M. Scuderi, A. Segreto, M. Settimo, A. Shadkam, R.C. Shellard, I. Sidelnik, G. Sigl, O. Sima, A. Smialkowski, R. Smida, G.R. Snow, P. Sommers, J. Sorokin, H. Spinka, R. Squartini, Y.N. Srivastava, S. Stanic, J. Stapleton, J. Stasielak, M. Stephan, M. Straub, A. Stutz, F. Suarez, T. Suomijarvi, A.D. Supanitsky, T. Susa, M.S. Sutherland, J. Swain, Z. Szadkowski, M. Szuba, A. Tapia, M. Tartare, O. Tacscuau, R. Tcaciuc, N.T. Thao, J. Tiffenberg, C. Timmermans, W. Tkaczyk, C.J. Todero Peixoto, G. Toma, L. Tomankova, B. Tome, A. Tonachini, G. Torralba Elipe, D. Torres Machado, P. Travnicek, D.B. Tridapalli, E. Trovato, M. Tueros, R. Ulrich, M. Unger, J.F. Valdes Galicia, I. Valino, L. Valore, G. van Aar, A.M. van den Berg, S. van Velzen, A. van Vliet, E. Varela, B. Vargas Cardenas, G. Varner, J.R. Vazquez, R.A. Vazquez, D. Veberic, V. Verzi, J. Vicha, M. Videla, L. Villasenor, H. Wahlberg, P. Wahrlich, O. Wainberg, D. Walz, A.A. Watson, M. Weber, K. Weidenhaupt, A. Weindl, F. Werner, S. Westerhoff, B.J. Whelan, A. Widom, G. Wieczorek, L. Wiencke, B. Wilczynska, H. Wilczynski, M. Will, C. Williams, T. Winchen, B. Wundheiler, S. Wykes, T. Yamamoto, T. Yapici, P. Younk, G. Yuan, A. Yushkov, B. Zamorano, E. Zas, D. Zavrtanik, M. Zavrtanik, I. Zaw, A. Zepeda, J. Zhou, Y. Zhu, M. Zimbres Silva, M. Ziolkowski (The Pierre Auger Collaboration)
Oct. 2, 2013 astro-ph.IM, astro-ph.HE
Joint contributions of the Pierre Auger and Telescope Array Collaborations to the 33rd International Cosmic Ray Conference, Rio de Janeiro, Brazil, July 2013: cross-calibration of the fluorescence telescopes, large scale anisotropies and mass composition.
A double radio relic in the merging galaxy cluster ZwCl 0008.8+5215 (1102.2235)
R. J. van Weeren, M. Hoeft, H. J. A. Rottgering, M. Bruggen, H. T. Intema, S. van Velzen
Feb. 10, 2011 astro-ph.CO
Some merging galaxy clusters host diffuse elongated radio sources, also called radio relics. It is proposed that these radio relics trace shock waves in the intracluster medium created during a cluster merger event. Within the shock waves particles are accelerated to relativistic energies, and in the presence of a magnetic field synchrotron radiation will be emitted. Here we present GMRT and WSRT observations of a new double relic in the galaxy cluster ZwCl 0008.8+5215. Optical V, R, and I band images of the cluster were taken with the INT. An optical spectrum, to determine the redshift of the cluster, was taken with the WHT. Our observations show the presence of a double radio relic in the galaxy cluster ZwCl 0008.8+5215, for which we find a spectroscopic redshift of z = 0.1032 \pm 0.0018 from an optical spectrum of one of the cD galaxies. The spectral index of the two relics steepens inwards to the cluster center. For part of the relics, we measure a polarization fraction in the range ~ 5-25%. A ROSAT X-ray image displays an elongated ICM and the large-scale distribution of galaxies reveals two cluster cores, all pointing towards a binary cluster merger event. The radio relics are located symmetrically with respect to the X-ray center of the cluster, along the proposed merger axis. The relics have a linear extent of 1.4 Mpc and 290 kpc. This factor of five difference in linear size is unlike that of previously known double relic systems, for which the sizes do not differ by more than a factor of two. We conclude that the double relics in ZwCl 0008.8+5215 are best explained by two outward moving shock waves in which particles are (re)accelerated through the diffusive shock acceleration (DSA) mechanism. [abridged]
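The DSA interpretation invoked above can be made quantitative. The following relation is standard shock-acceleration theory rather than something stated in this abstract: for a shock of Mach number $M$, diffusive shock acceleration predicts an injection radio spectral index

$$ \alpha_{\rm inj} \;=\; \frac{M^2 + 3}{2\,(M^2 - 1)}, $$

so flatter spectra at a relic's outer edge imply stronger shocks, while the inward steepening of the spectral index reflects synchrotron ageing of the electrons left behind the outward-moving shock front.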
September 2019, 39(9): 5207-5222. doi: 10.3934/dcds.2019212
On attainability of Moser-Trudinger inequality with logarithmic weights in higher dimensions
Prosenjit Roy
Department of Mathematics and Statistics, Indian Institute of Technology, Kanpur, 208016, India
Received August 2018 Revised March 2019 Published May 2019
Fund Project: Part of this research is supported by the Inspire programme under contract number IFA14/MA-43.
Moser-Trudinger inequality was generalised by Calanchi-Ruf to the following version: if $ \beta \in [0,1) $ and $ w_0(x) = |\log |x||^{\beta(n-1)} $ or $ \left( \log \frac{e}{|x|}\right)^{\beta(n-1)} $, then
$ \sup\limits_{u \in W_{0,rad}^{1,n}(w_0,B),\ \int_B |\nabla u|^n w_0 \leq 1} \int_B \exp\left(\alpha |u|^{\frac{n}{(n-1)(1-\beta)}}\right) dx < \infty $
if and only if $ \alpha \leq \alpha_\beta = n\left[\omega_{n-1}^{\frac{1}{n-1}}(1-\beta) \right]^{\frac{1}{1-\beta}} $, where $ \omega_{n-1} $ denotes the surface measure of the unit sphere in $ \mathbb{R}^n $. The primary goal of this work is to address the issue of existence of an extremal function for the above inequality. A non-existence (of extremal function) type result is also discussed for the usual Moser-Trudinger functional.
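As a quick numerical sanity check (a sketch, not part of the paper), the threshold constant $\alpha_\beta$ can be evaluated directly from the formula in the abstract; for $\beta = 0$ it reduces to the classical Moser constant $n\,\omega_{n-1}^{1/(n-1)}$, which equals $4\pi$ in dimension two.

```python
import math

def alpha_beta(n, beta=0.0):
    """alpha_beta = n * [omega_{n-1}^{1/(n-1)} * (1 - beta)]^{1/(1-beta)},
    with omega_{n-1} = 2*pi^(n/2) / Gamma(n/2) the surface measure of
    the unit sphere in R^n."""
    omega = 2 * math.pi ** (n / 2) / math.gamma(n / 2)
    return n * (omega ** (1 / (n - 1)) * (1 - beta)) ** (1 / (1 - beta))

# beta = 0 recovers the classical Moser threshold; in dimension 2 it is 4*pi.
assert abs(alpha_beta(2) - 4 * math.pi) < 1e-12
```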
Keywords: Moser-Trudinger embedding, extremal function.
Mathematics Subject Classification: Primary: 35B38, 35J20, 47N20, 26D10; Secondary: 46E35.
Citation: Prosenjit Roy. On attainability of Moser-Trudinger inequality with logarithmic weights in higher dimensions. Discrete & Continuous Dynamical Systems, 2019, 39 (9) : 5207-5222. doi: 10.3934/dcds.2019212
Adimurthi and K. Sandeep, A singular Moser-Trudinger embedding and its applications, NoDEA Nonlinear Differential Equations Appl., 13 (2007), 585-603. doi: 10.1007/s00030-006-4025-9. Google Scholar
Adimurthi and C. Tintarev, On compactness in the Trudinger-Moser inequality, Ann. Sc. Norm. Super. Pisa Cl. Sci., 13 (2014), 399-416. Google Scholar
M. Calanchi, Some weighted inequalities of Trudinger–Moser type, Progress in Nonlinear Differential Equations and Appl., 85 (2014), 163-174. Google Scholar
M. Calanchi and B. Ruf, Trudinger-Moser type inequalities with logarithmic weights in dimension $N$, Nonlinear Anal., 121 (2015), 403-411. doi: 10.1016/j.na.2015.02.001. Google Scholar
M. Calanchi and B. Ruf, On Trudinger-Moser type inequalities with logarithmic weights, J. Differential Equations, 258 (2015), 1967-1989. doi: 10.1016/j.jde.2014.11.019. Google Scholar
L. Carleson and S.-Y. A. Chang, On the existence of an extremal function for an inequality by J. Moser, Bull. Sci. Math., 110 (1986), 113-127. Google Scholar
G. Csató and P. Roy, Extremal functions for the singular Moser-Trudinger inequality in 2 dimensions, Calc. Var. Partial Differential Equations, 54 (2015), 2341-2366. doi: 10.1007/s00526-015-0867-5. Google Scholar
G. Csató and P. Roy, The singular Moser-Trudinger inequality on simply connected domain, Comm. Partial Differential Equations, 41 (2016), 838-847. doi: 10.1080/03605302.2015.1123276. Google Scholar
G. Csató, N. H. Nguyen and P. Roy, Extremals for the Singular Moser-Trudinger Inequality via n-Harmonic Transplantation, preprint, arXiv: 1801.03932v3. Google Scholar
D. G. de Figueiredo, J. M. do O and B. Ruf, Elliptic equations and systems with critical Trudinger-Moser nonlinearities., Discrete Contin. Dyn. Syst., 30 (2011), 455-476. doi: 10.3934/dcds.2011.30.455. Google Scholar
M. Flucher, Extremal functions for the Trudinger-Moser inequality in 2 dimensions, Comment. Math. Helvetici, 67 (1992), 471-497. doi: 10.1007/BF02566514. Google Scholar
N. Lam, Equivalence of sharp Trudinger-Moser-Adams inequalities, Commun. Pure Appl. Anal, 16 (2017), 973-997. doi: 10.3934/cpaa.2017047. Google Scholar
N. Lam and G. Lu, A new approach to sharp Moser-Trudinger and Adams type inequalities: A rearrangement-free argument., J. Differential Equations, 255 (2013), 298-325. doi: 10.1016/j.jde.2013.04.005. Google Scholar
X. Li and Y. Yang, Extremal functions for singular Trudinger-Moser inequalities in the entire Euclidean space, preprint, arXiv: 1612.08247. Google Scholar
K.-C. Lin, Extremal functions for Moser's inequality, Trans. Amer. Math. Soc., 348 (1996), 2663-2671. doi: 10.1090/S0002-9947-96-01541-3. Google Scholar
P.-L. Lions, The concentration-compactness principle in the Calculus of variations. The limit case. Ⅰ, Rev. Mat. Iberoamericana, 1 (1985), 145-201. doi: 10.4171/RMI/6. Google Scholar
G. Lu and H. Tang, Best constants for Moser-Trudinger inequalities on high dimensional hyperbolic spaces, Adv. Nonlinear Stud., 13 (2013), 1035-1052. doi: 10.1515/ans-2013-0415. Google Scholar
G. Lu and Y. Yang, Sharp constant and extremal function for the improved Moser-Trudinger inequality involving $L^p$ norm in two dimension, Discrete Contin. Dyn. Syst., 25 (2009), 963-979. doi: 10.3934/dcds.2009.25.963. Google Scholar
S. Lula and G. Mancini, Extremal functions for singular Moser-Trudinger embeddings, Nonlinear Anal., 156 (2017), 215-248. doi: 10.1016/j.na.2017.02.029. Google Scholar
A. Malchiodi and L. Martinazzi, Critical points of the Moser-Trudinger functional on a disk, J. Eur. Math. Soc. (JEMS), 16 (2014), 893-908. doi: 10.4171/JEMS/450. Google Scholar
G. Mancini and L. Battaglia, Remarks on the Moser-Trudinger inequality, Adv. Nonlinear Anal, 2 (2013), 389-425. doi: 10.1515/anona-2013-0014. Google Scholar
G. Mancini and K. Sandeep, Moser-Trudinger inequality on conformal discs, Commun. Contemp. Math, 12 (2010), 1055-1068. doi: 10.1142/S0219199710004111. Google Scholar
G. Mancini, K. Sandeep and C. Tintarev, Trudinger-Moser inequality in the hyperbolic space $H^N$, Adv. Nonlinear Anal, 2 (2013), 309-324. doi: 10.1515/anona-2013-0001. Google Scholar
J. Moser, A sharp form of an inequality by N. Trudinger, Indiana Univ. Math. J, 20 (1971), 1077-1092. doi: 10.1512/iumj.1971.20.20101. Google Scholar
Q.-A. Ngo and V. H. Nguyen, An improved Moser-Trudinger inequality involving the first non-zero Neumann eigenvalue with mean value zero in $\mathbb{R}^2$, preprint, arXiv: 1702.08883. Google Scholar
V. H. Nguyen, A sharp Adams inequality in dimension four and its extremal functions, preprint, arXiv: 1701.08249. Google Scholar
V. H. Nguyen and F. Takahashi, On a weighted Trudinger-Moser type inequality on the whole space and its (non-)existence of maximizers, Differential Integral Equations, 31 (2018), 785-806. Google Scholar
P. Roy, Extremal function for Moser-Trudinger type inequality with logarithmic weight, Nonlinear Anal., 135 (2016), 194-204. doi: 10.1016/j.na.2016.01.024. Google Scholar
B. Ruf, A sharp Trudinger-Moser type inequality for unbounded domains in $R^2$, J. Funct. Anal, 219 (2005), 340-367. doi: 10.1016/j.jfa.2004.06.013. Google Scholar
M. Struwe, Critical points of embeddings of $H_0^{1, n}$ into Orlicz spaces, Ann. Inst. H. Poincaré Anal. Non Linéaire, 5 (1988), 425-464. doi: 10.1016/S0294-1449(16)30338-9. Google Scholar
N. S. Trudinger, On embeddings into Orlicz spaces and some applications, J. Math. Mech, 17 (1967), 473-484. doi: 10.1512/iumj.1968.17.17028. Google Scholar
Y. Yang, Extremal functions for a sharp Moser-Trudinger inequality, Internat. J. Math., 17 (2006), 331-338. doi: 10.1142/S0129167X06003503. Google Scholar
Y. Yang, Extremal functions for Moser-Trudinger inequalities on 2-dimensional compact Riemannian manifolds with boundary, Internat. J. Math., 17 (2006), 313-330. doi: 10.1142/S0129167X06003473. Google Scholar
Y. Yang, Extremal functions for Trudinger-Moser inequalities of Adimurthi-Druet type in dimension two, J. Differential Equations, 258 (2015), 3161-3193. doi: 10.1016/j.jde.2015.01.004. Google Scholar
Y. Yang, A Trudinger-Moser inequality on a compact Riemannian surface involving Gaussian curvature, J. Geom. Anal., 26 (2016), 2893-2913. doi: 10.1007/s12220-015-9653-z. Google Scholar
X. Zhu and Y. Yang, Blow-up analysis concerning singular Trudinger Moser inequalities in dimension two, J. Funct. Anal, 272 (2017), 3347-3374. doi: 10.1016/j.jfa.2016.12.028. Google Scholar
A formula for the Bloch norm of a $C^1$-function on the unit ball of $\mathbb{C}^n$
Miroslav Pavlović
Czechoslovak Mathematical Journal (2008)
Volume: 58, Issue: 4, page 1039-1043
For a $C^1$-function $f$ on the unit ball $\mathbb{B} \subset \mathbb{C}^n$ we define the Bloch norm by $\Vert f\Vert_{\mathfrak{B}} = \sup \Vert \tilde{d}f\Vert$, where $\tilde{d}f$ is the invariant derivative of $f$, and then show that
$$\Vert f\Vert_{\mathfrak{B}} = \sup_{\substack{z,w\in\mathbb{B} \\ z\neq w}} (1-|z|^2)^{1/2}(1-|w|^2)^{1/2}\,\frac{|f(z)-f(w)|}{|w-P_w z-s_w Q_w z|}.$$
Pavlović, Miroslav. "A formula for the Bloch norm of a $C^1$-function on the unit ball of $\mathbb {C}^n$." Czechoslovak Mathematical Journal 58.4 (2008): 1039-1043. <http://eudml.org/doc/37883>.
Keywords: Bloch norm, Möbius transformation
Shuffle Conjectures
The original Shuffle Conjecture (of HHLRU, 2003) is about an explicit combinatorial description of the bigraded Frobenius characteristic $\mathcal{D}_n^m(\mathbf{x};q,t)$ of the $\mathbb{S}_n$-module of (higher) diagonal harmonic polynomials. It is stated in terms of parking functions on $(mn,n)$-Dyck paths, and involves two statistics on these: the area and the dinv. Here, $(k,n)$-Dyck paths are north-east paths in the $(k\times n)$-rectangle, going from $(0,0)$ to $(k,n)$, while remaining above the diagonal $y=n x/k$. We allow the path to return to this diagonal, but this is only possible when $k$ and $n$ are not coprime.
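To make the definition concrete, here is a small enumeration sketch (an illustration, not code from any of the papers discussed): $(k,n)$-Dyck paths are encoded by their column heights, and in the coprime case their number is the rational Catalan number $\frac{1}{k+n}\binom{k+n}{n}$.

```python
from math import comb, gcd

def dyck_paths(k, n):
    """Enumerate (k,n)-Dyck paths as tuples h of column heights:
    h[i] is the height of the path over x in [i, i+1].  Staying
    weakly above the diagonal y = n*x/k amounts to k*h[i] >= n*(i+1),
    with equality corresponding to a return to the diagonal."""
    def extend(h, i):
        if i == k:
            yield tuple(h)
            return
        lo = max(-(-n * (i + 1) // k), h[-1] if h else 0)  # ceil + monotone
        for v in range(lo, n + 1):
            yield from extend(h + [v], i + 1)
    yield from extend([], 0)

def area(h, k, n):
    """Number of complete unit cells lying between the path and the
    diagonal y = n*x/k (the 'area' statistic)."""
    return sum(max(0, hi - (-(-n * (i + 1) // k))) for i, hi in enumerate(h))

# Coprime case: |paths| = binom(k+n, n) / (k+n), the rational Catalan number.
for k, n in [(3, 2), (4, 3), (5, 3)]:
    assert gcd(k, n) == 1
    assert len(list(dyck_paths(k, n))) == comb(k + n, n) // (k + n)

# Classical square case (3,3): five Dyck paths with areas {0, 1, 1, 2, 3}.
assert sorted(area(h, 3, 3) for h in dyck_paths(3, 3)) == [0, 1, 1, 2, 3]
```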
For a given $(k,n)$-Dyck path $\gamma$, a parking function $\pi$ of that shape is simply a bijective labelling of the $n$ vertical steps of $\gamma$ by the numbers $\{1,2,\ldots,n\}$. We denote this by $\lambda(\gamma)=\pi$.
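For a concrete count (a sketch that assumes the usual parking-function convention, left implicit above, that labels increase bottom-to-top within each maximal run of vertical steps), the classical case $m=1$, $n=3$ recovers the $(n+1)^{n-1}=16$ parking functions.

```python
from math import factorial

# The five (3,3)-Dyck paths, written as column-height tuples h, where
# h[i] is the height of the path over x in [i, i+1]:
paths = [(1, 2, 3), (1, 3, 3), (2, 2, 3), (2, 3, 3), (3, 3, 3)]

def labellings(h):
    """Number of labellings of the north steps by {1, ..., n} that
    increase bottom-to-top within each vertical run (an assumed
    convention; the text above only asks for a bijective labelling)."""
    n = h[-1]
    runs = [h[0]] + [h[i] - h[i - 1] for i in range(1, len(h))]
    count = factorial(n)
    for r in runs:
        count //= factorial(r)
    return count

# Classical parking functions of size n are counted by (n+1)^(n-1) = 16 here.
assert sum(labellings(h) for h in paths) == 16
```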
The area $\mathrm{area}(\pi)$ is the number of complete squares that lie between the path and the diagonal. The dinv-statistic is a bit more intricate (see here), and there is a composition $\mathrm{co}(\pi)$ of $n$, associated to $\pi$. We refer to Haglund for descriptions of these, in the case $k=mn$. The most general case is in the same spirit. The final ingredient in the formula is to use Schur functions indexed by compositions. These are simply obtained using a composition analog of the Jacobi-Trudi formula:
$S_\alpha(\mathbf{x}):=\mathrm{det}\ \left(h_{\alpha_i+j-i}(\mathbf{x})\right)_{i,j}.$
One may easily check that this evaluates to either $0$ or $\pm S_\lambda(\mathbf{x})$, for some partition $\lambda$. As it has become customary, we use Macdonald notation for symmetric functions. Thus $h_j(\mathbf{x})$ stands for the complete homogeneous symmetric function.
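The straightening just described can be sketched algorithmically (a hypothetical helper, based on the standard row-sorting argument for the Jacobi-Trudi determinant): with $c_i = \alpha_i - i$, equal $c_i$'s force two equal rows, so $S_\alpha = 0$; otherwise sorting the rows yields $\pm S_\lambda$.

```python
def straighten(alpha):
    """Return (sign, lam) with S_alpha = sign * S_lam, or (0, None)
    when the determinant det(h_{alpha_i + j - i}) vanishes."""
    l = len(alpha)
    c = [a - i for i, a in enumerate(alpha)]   # row invariants, 0-indexed
    if len(set(c)) < l:
        return 0, None                         # two equal rows
    # Each inversion of c corresponds to one row transposition.
    sign = 1
    for i in range(l):
        for j in range(i + 1, l):
            if c[i] < c[j]:
                sign = -sign
    lam = tuple(ci + i for i, ci in enumerate(sorted(c, reverse=True)))
    return sign, lam

assert straighten((2, 2)) == (1, (2, 2))   # already a partition
assert straighten((1, 3)) == (-1, (2, 2))  # S_{(1,3)} = h_1 h_3 - h_2^2 = -S_{(2,2)}
assert straighten((1, 2)) == (0, None)     # vanishing determinant
```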
Shuffle Conjecture (HHLRU, 2003)
For all $m$ and $n$,
$\displaystyle\mathcal{D}_n^m(\mathbf{x};q,t)= \sum_{\pi=\lambda(\gamma)}t^{\mathrm{area}(\pi)}q^{\mathrm{dinv}(\pi)}\,S_{\mathrm{co}(\pi)}(\mathbf{x}),$
with $\gamma$ running through all $(mn,n)$-Dyck paths.
N. Bergeron (the other one), Descouens and Zabrocki (2010) showed that the Frobenius characteristic of the space of diagonal harmonic polynomials has a natural filtration given by the number of returns to the diagonal of Dyck paths. This takes the form of $\nabla$-operator applied to a certain symmetric function. Inspired by this, Haglund, Morse and Zabrocki proposed a refinement of the shuffle conjecture (the HMZ conjecture), involving operators $C_\alpha=C_{a_1}C_{a_2}\cdots C_{a_\ell},$ on symmetric functions, where the $C_j$ are modified Hall-Littlewood vertex operators, and $\alpha=(a_1,a_2,\ldots,a_\ell)$ is a composition of $n$. It takes the form
HMZ Conjecture (2012)
For all compositions $\alpha$ of $n$,
$\displaystyle\nabla^m\ C_\alpha\cdot 1=\sum_{\mathrm{ret}(\pi)=\alpha}t^{\mathrm{area}(\pi)}q^{\mathrm{dinv}(\pi)}\,S_{\mathrm{co}(\pi)}(\mathbf{x}),$
where $\mathrm{ret}(\pi)$ is the composition that indicates returns to the diagonal in the path underlying $\pi$. This path is assumed to be a $(mn,n)$-Dyck path.
In the left hand side, $C_\alpha\cdot 1$ means that the operator is applied to symmetric function $1$. This implies the Shuffle Conjecture, since it may be shown that $\sum_\alpha C_\alpha\cdot 1=e_n$, and it has been shown that $\nabla^m(e_n)=\mathcal{D}_n^m$.
Recently, using the work of Burban and Schiffmann, Gorsky and Negut have come up with an analogous conjecture involving some operators $P_{k,n}$ on symmetric functions which generalize operators first considered in our paper (with A. M. Garsia, M. Haiman, and G. Tesler):
Identities and Positivity Conjectures for some remarkable Operators in the Theory of Symmetric Functions, Methods and Applications of Analysis, 6 no. 3 (1999), 363-420. See Theorem 4.4, where the operator $\Gamma_n$ corresponds to the operator $P_{n-1,n}$ in Gorsky-Negut notation.
Rational Shuffle Conjecture (2013)
Whenever $k$ and $n$ are coprime, then
$\displaystyle P_{k,n}\cdot 1=\sum_{\pi=\lambda(\gamma)}t^{\mathrm{area}(\pi)}q^{\mathrm{dinv}(\pi)}\,S_{\mathrm{co}(\pi)}(\mathbf{x}),$
where $\gamma$ runs over $(k,n)$-Dyck paths.
Although the operators $P_{k,n}$ make sense irrespective of the coprimality condition, they are not the right ones (nor are related operators considered by Gorsky-Negut) for this last conjecture to be generalized. Together with Adriano Garsia and Emily Leven, we have come up with entirely new operators $E_{k,n},$ that do allow this conjecture to be generalized to all cases (thus removing the coprimality condition). We even have more general operators $E_{k,n}^{(\alpha)},$ for $\alpha$ any composition of $\mathrm{GCD}(k,n),$ for which we have a joint generalization of the HMZ conjecture and the Rational Shuffle Conjecture.
Compositional Rational Shuffle Conjecture (see our paper B.-Garsia-Leven-Xin)
For all integers $k$ and $n,$ and $\alpha$ composition of $\mathrm{GCD}(k,n),$
$\displaystyle E_{k,n}^{(\alpha)}\cdot 1=\sum_{\mathrm{ret}(\pi)=c\alpha}t^{\mathrm{area}(\pi)}q^{\mathrm{dinv}(\pi)}\,S_{\mathrm{co}(\pi)}(\mathbf{x}),$
where $c=n/\mathrm{GCD}(k,n),$ and $c\alpha=(ca_1,ca_2,\ldots,ca_\ell)$. The summation is over parking functions whose underlying path returns to the diagonal at the positions specified by $c\alpha$.
The fact that all previous conjectures are implied by this one follows from the operator identity $E_{k,n}=\sum_{\alpha}E_{k,n}^{(\alpha)},$ and the fact that $E_{k,n}=P_{k,n}$ when $k$ and $n$ are coprime (and only then). Furthermore, we have
$E_{k+n,n}^{(\alpha)}=\nabla\, E_{k,n}^{(\alpha)}\,\nabla^{-1},$
and $E_{0,n}^{(\alpha)}\cdot 1=C_\alpha\cdot 1$.
For more on this, see
Open Questions for Operators Related to Rectangular Catalan Combinatorics, to appear in Journal of Combinatorics (arXiv:1603.04476), accepted 2016.
(with E. Leven, A. Garsia, and G. Xin), Compositional $(km,kn)$-Shuffle Conjectures, International Mathematics Research Notices,
Vol. 2016, 4229–4270 doi:10.1093/imrn/rnv272 (arXiv:1404.4616).
(with E. Leven, A. Garsia, and G. Xin), Some remarkable new Plethystic Operators in the Theory of Macdonald Polynomials, Journal of Combinatorics, Volume 7, Number 4, 671–714, 2016. (arXiv:1405.0316)
E. Gorsky and A. Negut, Refined knot invariants and Hilbert schemes, 2013, (arXiv:1304.3328).
T. Hikita, Affine Springer fibers of type A and combinatorics of diagonal coinvariants, 2013, (arXiv:1203.5878)
J. Haglund, J. Morse, and M. Zabrocki, A compositional shuffle conjecture specifying touch points of the Dyck path, Canad. J. Math. 64 (2012), 822-844. (arXiv:1008.0828).
N Bergeron, F Descouens, and M Zabrocki, A filtration of (q, t)-Catalan numbers, Advances in Applied Mathematics 44 (1), (2010) 16-36.
M. Haiman, J. Haglund, N. Loehr, J. B. Remmel and A. Ulyanov, A combinatorial formula for the character of the diagonal coinvariants, Duke Math. J. 126 (2005), no. 2, 195-232. (arXiv:math/0310424)