// Repository: Aspern/anki-overdrive
// File: core/src/test/java/de/msg/iot/anki/track/TrackTestSuite.java
package de.msg.iot.anki.track;
import org.junit.runner.RunWith;
import org.junit.runners.Suite;
@RunWith(Suite.class)
@Suite.SuiteClasses({
TrackTest.class,
PieceTest.class
})
public class TrackTestSuite {
}
|
from bs4 import BeautifulSoup

def has_permission(content):
    """Return False when the page reports a permission-denied message."""
    parsed = BeautifulSoup(content, 'html.parser')
    page_text = parsed.get_text()
    # The idiomatic membership test replaces the direct __contains__ call.
    return "Permission denied. You do not have access to this area" not in page_text
|
The Power of Cosmic Flexion in Testing Modified Matter and Gravity

Flexion is the weak lensing effect responsible for the weakly skewed and arc-like appearance of lensed galaxies. The flexion signal-to-noise ratio can be an order of magnitude larger than that of shear. For the first time, we show how this makes flexion an invaluable tool for discriminating among alternative cosmological models. We analyse a scalar field model of unified dark matter and dark energy, a brane-world cosmology and two f(R) modified-action theories. We show that these models can be distinguished from ΛCDM at several standard deviations by measuring the power spectrum of cosmic flexion.

Introduction.-In the last decades, cosmologists have proposed several alternatives to the concordance Λ cold dark matter (ΛCDM) paradigm. These models attempt to fit current cosmological datasets at least as well as ΛCDM does, datasets such as the temperature anisotropy pattern of the cosmic microwave background radiation, the dynamics of the large-scale structure of the Universe and the present-day cosmic accelerated expansion. In these theories, however, crucial issues such as the missing mass in galaxies and galaxy clusters and the Universe's current accelerated expansion are not explained by the usual dark matter and the cosmological constant.
On the contrary, these models mainly rely either on a modification of the law of gravity or on the introduction of additional scalar or vector fields into the Universe's content. The family of alternative models with additional fields, also named modified matter models, includes dynamical dark energy or quintessence, but also models which attempt to identify both the dark matter and dark energy effects with the properties of a single "dark fluid". Conversely, the class of modified gravity theories includes a variety of approaches, which can however be well represented by brane-world cosmologies and modified-action theories. Brane worlds describe a four-dimensional "brane," which is our own Universe, embedded into a higher-dimensional spacetime, the "bulk." In this scenario, Einstein's general relativity is still valid, but the higher-dimensional behaviour of gravity induces non-negligible signatures on the Universe's evolution and on the growth of cosmic structures on the brane. Finally, modified-action theories directly modify the law of gravity by generalising the Einstein-Hilbert Lagrangian. Among all the possible theories, f(R) gravity, in which the Ricci scalar R is replaced by a generic function f(R), is probably the most investigated approach. In this Letter, we choose three models to explore the space of modified matter and gravity theories. Specifically, we consider a model of unified dark matter and dark energy, a phenomenological extension of the well-known DGP brane-world cosmology, and two f(R) models able to pass the Solar system gravity tests. All of them reproduce the ΛCDM expansion history, thus representing viable alternatives for the description of the background evolution of the Universe. To discriminate between them and ΛCDM, it is therefore crucial to investigate the regime of cosmological perturbations. This analysis has been carried out using several observables, for instance the power spectrum of density fluctuations and cosmic shear.
However, it is not rare that the predicted signal is very similar to what is expected in ΛCDM. Here, we show that the degeneracy between models can be lifted by cosmic flexion, namely the flexion correlation function whose signal originates from the large-scale structure of the Universe. We will present parameter forecasts and additional gravitational lensing statistics elsewhere.

Cosmic Flexion.-The deflecting gravitational field of the extended large-scale structure of the Universe, which in general relativity is simply the Newtonian potential, is responsible for the deflection of light rays emitted by distant sources. This phenomenon is known as weak gravitational lensing. Photon paths from a galaxy located at position θ on the sky are deflected by an angle α(θ) = ∂φ(θ), where ∂ = ∂₁ + i∂₂ is the gradient with respect to the directions perpendicular to the line of sight and φ is the projected deflecting potential. Unfortunately, the deflection angle is not directly observable, because one does not know the true two-dimensional distribution of the sources on the sky. Its gradient, the distortion matrix A_ab = δ_ab − ∂_a∂_b φ, is measurable instead. In particular, the entries of the distortion matrix can be related to the effects of convergence and (complex) shear on the source image. If convergence and shear are effectively constant within a source galaxy image, the galaxy transformation is linear, β_a ≃ A_ab θ_b, where a, b = 1, 2 label the coordinates on the sky. Flexion arises from the fact that the shear and convergence are actually not constant within the image: it represents the local variability of the shear field, which expresses itself as second-order distortions in the coordinate transformation between unlensed and lensed images. Thus, by expanding the observed galaxy position to second order in the deflection angle, it follows that β_a ≃ A_ab θ_b + D_abc θ_b θ_c/2, with D_abc ≡ ∂_c A_ab.
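The expansion just described can be summarised in the standard weak-lensing notation (a sketch of the usual formalism, with φ the projected deflecting potential; the matrix decomposition into convergence κ and shear γ follows the common convention rather than being quoted verbatim from this Letter):

```latex
\begin{aligned}
\alpha(\boldsymbol{\theta}) &= \partial\,\phi(\boldsymbol{\theta}),
  \qquad \partial \equiv \partial_1 + i\,\partial_2,\\
A_{ab} &= \delta_{ab} - \partial_a \partial_b\,\phi
  = \begin{pmatrix} 1-\kappa-\gamma_1 & -\gamma_2 \\ -\gamma_2 & 1-\kappa+\gamma_1 \end{pmatrix},\\
\beta_a &\simeq A_{ab}\,\theta_b + \tfrac{1}{2}\,D_{abc}\,\theta_b \theta_c,
  \qquad D_{abc} \equiv \partial_c A_{ab}.
\end{aligned}
```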
As the distortion matrix can be decomposed into the convergence and the shear, it is usual to define a spin-1 flexion F and a spin-3 flexion G. Since measurements of G are noisier than those of F, we restrict our analysis to F only. To construct the flexion correlation function from the large-scale structure, we start from the definition of the projected deflecting potential as a weighted line-of-sight integral of the Newtonian potential, where dχ = dz/H(z) is the radial comoving distance element, H(z) is the expansion history of the Universe and W(χ) is the weak lensing selection function. W(χ) depends on the redshift distribution of the sources n(χ), normalised such that ∫ dχ n(χ) = 1. In the flat-sky approximation, we expand the flexion in its Fourier modes F(ℓ). Hence, from the definition of the angular power spectrum, which is the Fourier transform of the two-dimensional correlation function, we finally obtain the cosmic flexion power spectrum C_F(ℓ).

Modified Matter/Gravity Models.-We now briefly review the three models we use. We refer to them as: UDM for the model of unified dark matter and dark energy; eDGP for the phenomenologically extended DGP brane world; and St and HS for the two f(R) theories. In the class of UDM models we use, a single scalar field with a Born-Infeld kinetic term mimics both dark matter and dark energy. The energy density of the scalar field reads ρ_UDM = ρ_DM + ρ_Λ, where ρ_DM ∝ a⁻³ and ρ_Λ = const., which yields exactly the ΛCDM Hubble parameter. However, there is also a pressure term p_UDM = −ρ_Λ, which leads to a non-negligible speed of sound for the perturbations of the scalar field itself. This is a common feature of modified matter models, and it typically causes an integrated Sachs-Wolfe effect incompatible with current observations. To solve this problem, we adopt a construction in which the UDM model reproduces both the correct temperature power spectrum of the cosmic microwave background and the clustering properties of the large-scale structure we see today.
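Under the Limber and flat-sky approximations, the spin-1 flexion spectrum follows from the convergence spectrum. The relations below are a standard-notation sketch: the lensing kernel normalisation inside W(χ) and the F = ∂κ identification follow the usual weak-lensing conventions and are not quoted verbatim from this Letter:

```latex
\mathcal{F} = \partial\kappa, \qquad
C_\kappa(\ell) = \int_0^{\chi_\infty} \mathrm{d}\chi\,
  \frac{W^2(\chi)}{\chi^2}\,
  P_\delta\!\left(\frac{\ell}{\chi},\chi\right), \qquad
C_{\mathcal{F}}(\ell) = \ell^2\, C_\kappa(\ell).
```

The ℓ² factor is why flexion probes small angular scales so effectively: each Fourier mode of the convergence is weighted by its wavenumber.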
The sound speed is parameterised by its late-time value c∞ (in units of c = 1), and the growth of cosmic structures strongly depends on it. Indeed, the presence of the sound speed produces an effective Jeans length λ_J for the Newtonian potential, so that its evolution is no longer scale-independent. Specifically, the Fourier modes k are suppressed on scales k > 1/λ_J and oscillate around zero. The larger the value of c∞, the earlier the Newtonian potential starts decreasing at a fixed scale, or the larger the scale at which it does so at a fixed epoch. Since the Newtonian potential is responsible for light deflection, weak lensing, and in particular three-dimensional cosmic shear, is a powerful tool to constrain UDM models. However, UDM models with c∞ ≲ 10⁻³ still produce a signal virtually indistinguishable from that of ΛCDM. In the eDGP model, the cross-over length r_c, which defines the scale at which higher-dimensional gravitational effects become important, is tuned by a free parameter strictly related to the graviton propagator. In particular, the values 0 and 1/2 of this parameter reduce the model to ΛCDM and standard DGP, respectively. Recently, it has been shown that the eDGP model excellently fits geometrical datasets such as the Hubble diagram of type Ia supernovae and gamma-ray bursts, the scale of baryon acoustic oscillations and the CMB distance indicators. Therefore, it is crucial to test this model in the regime of cosmological perturbations. As in other modified gravity theories, the two metric perturbations, the Newtonian potential Φ and the curvature potential Ψ, evolve differently even in the absence of anisotropic stress, whereas in general relativity Φ = −Ψ holds in the matter-dominated era. Thus, when we study gravitational lensing we have to deal with the deflecting potential φ ≡ (Φ − Ψ)/2. Moreover, its Poisson equation, which relates it to the distribution of the matter overdensities, is modified by the presence of an effective time- and scale-dependent gravitational constant.
It is worth giving a final remark on the evolution of matter fluctuations. Unlike the linear growth of perturbations, which can be described analytically, the non-linear regime has to be explored numerically. Two approaches have been followed, and we refer to them as "KW" and "PPF." The former generalises the halofit procedure to the eDGP scenario according to recently performed N-body simulations, whilst the latter interpolates the eDGP non-linear matter power spectrum with that of ΛCDM in order to reproduce general relativity at small scales and thus be able to pass Solar system gravity tests. The functional forms of this last approach have been obtained from perturbation theory and confirmed by N-body simulations. Unfortunately, were PPF the correct non-linear prescription, we would not be able to discriminate between the eDGP and ΛCDM signals even with the present and next generation of weak lensing surveys. Finally, we analyse the St and HS f(R) theories, which are also degenerate with ΛCDM at the background level. Their functional forms allow them to achieve the late-time accelerated expansion of the Universe with no formal cosmological constant. On the other hand, they present three free parameters, c₁, c₂ and n. It has been shown that the growth of linear perturbations strongly depends on the function f(R), which acts by generating a time- and scale-dependent gravitational constant, as well as an effective anisotropic stress. Regarding the non-linear evolution of perturbations, the PPF technique is still valid, as confirmed by N-body simulations. Cosmic shear studies on these models gave interesting results, but the St signal is nonetheless almost completely degenerate with ΛCDM.

Results and Discussion.-Here, we present the cosmic flexion power spectrum expected in the alternative models outlined above, and we compare it with the ΛCDM prediction. For this, we use a fiducial flat Universe where the Hubble constant is H₀ = 100 h km s⁻¹ Mpc⁻¹ with h = 0.7.
The matter density in units of the critical density is Ω_m ≡ Ω_DM + Ω_b = 0.28, with Ω_DM and Ω_b = 2.22 × 10⁻² h⁻² the dark matter and baryon fractions, respectively. The tilt of the primordial matter power spectrum is n_s = 0.96 and the rms density fluctuation on the scale of 8 h⁻¹ Mpc is σ₈ = 0.8. For the UDM model, we probe c∞ = 5 × 10⁻⁴ and c∞ = 10⁻³. The eDGP propagator-related parameter is 0.116 and r_c H₀ = 155.041. The St (HS) parameters read log₁₀ c₁ = 2.38 (4.98), log₁₀ c₂ = −2.6 (3.79) and n = 1.79 (1.67). It is important to note that there currently is no linear-to-non-linear mapping for UDM models. Nevertheless, differences between ΛCDM and UDM models arise at scales smaller than the sound horizon. With a cross-over wavenumber k ≃ 1/λ_J, if the sound speed is small enough to guarantee that λ_J is well within the non-linear regime, we can assume that the non-linear evolution of the UDM power spectrum is similar to that of ΛCDM. We use the specifics of the upcoming ESA Euclid satellite (http://sci.esa.int/science-e/www/area/index.cfm?fareaid=102). Euclid is one of the ESA Cosmic Vision 2010-2015 approved projects and is currently in the timeline of M-class missions. Its survey area will be 20,000 square degrees, with a sky coverage f_sky ≃ 0.48 and a source distribution over redshift n(z) ∝ z² exp(−z/z₀), where z₀ = z_m/1.4 and z_m = 0.9 is the median redshift of the survey. The number density of the sources with redshift and shape estimates is n̄ = 35 arcmin⁻². To compute error bars we generalise the standard cosmic-shear error estimate. This is because, unlike shear, flexion has the dimension of an inverse length (or inverse angle), which means that the flexion effect depends on the source size. Recently, it has been shown that the noise power spectrum N_F(ℓ) for flexion is inversely proportional to the squared angular scale; we therefore set it accordingly, with ⟨F²_int⟩^(1/2) ≃ 0.03 arcsec⁻¹ the galaxy-intrinsic flexion rms.
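As a rough numerical sketch of the survey noise term, using the two numbers quoted in the text (intrinsic flexion rms 0.03 arcsec⁻¹, source density 35 arcmin⁻²). The white-noise baseline σ²_F/n̄ below is an illustrative assumption; the ℓ-dependent scaling mentioned in the text would multiply this baseline and is not reproduced here:

```python
import math

# Survey numbers quoted in the text
sigma_f = 0.03   # galaxy-intrinsic flexion rms [arcsec^-1]
n_bar = 35.0     # source number density [arcmin^-2]

# Unit conversions to radians/steradians
arcsec = math.pi / (180.0 * 3600.0)   # rad per arcsec
arcmin = math.pi / (180.0 * 60.0)     # rad per arcmin
sigma_f_rad = sigma_f / arcsec        # flexion rms in rad^-1
n_bar_sr = n_bar / arcmin**2          # sources per steradian

def flexion_noise(ell):
    """White-noise-style flexion spectrum, N_F = sigma_F^2 / n_bar.
    Any extra ell dependence (the 'squared angular scale' scaling
    mentioned in the text) would multiply this baseline."""
    return sigma_f_rad**2 / n_bar_sr

for ell in (100, 1000, 10000):
    print(ell, flexion_noise(ell))
```

The exercise shows mainly how the units combine: the rad⁻² numerator from the intrinsic flexion rms against the sr⁻¹ source density.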
Fig. 1 shows the cosmic flexion power spectra C_F(ℓ) of ΛCDM (red, solid), eDGP (green) with both the KW (dashed) and PPF (dot-dashed) linear-to-non-linear mappings, and the f(R) models (red) of St (dashed) and HS (dot-dashed). As expected, the UDM signal is suppressed at small angular scales because of the presence of the scalar-field sound speed. The eDGP model is still very close to the ΛCDM prediction, particularly for the PPF non-linear power spectrum, since it is specifically designed to reproduce general relativity on small scales. On the other hand, the St and HS models clearly show the scale dependence of the Newtonian gravitational constant G: indeed, in the so-called "scalar-tensor" regime it reaches the value ∼ 4G/3. Nevertheless, cosmic flexion shows an outstanding improvement in the separation between the signals when compared to the cosmic shear power spectrum. The dark-grey shaded area represents the 1σ error region, whilst light-grey refers to errors six times larger. Flexion measurements are made on the shapes of the source galaxies, exactly as in the cosmic shear analysis. Therefore, the source number density n̄ is the same for the two observables, and with a space-based, wide-field survey such as Euclid we can collect a fairly large statistical sample. However, the intrinsic flexion rms ⟨F²_int⟩^(1/2) is an order of magnitude smaller than the cosmic shear rms, and the power spectrum is thus significantly less noisy. We conclude that cosmic flexion is an excellent tool for testing alternative cosmological models and for discriminating among them. With realistic values for the mean galaxy number density n̄ and the flexion noise N_F(ℓ), which includes its angular scale dependence, expected for the upcoming Euclid mission, we find an admirable separation between the cosmic flexion power spectra C_F(ℓ) of viable models that are almost degenerate with ΛCDM when investigated with other observables, such as cosmic shear.
We will provide a more detailed analysis of these results elsewhere. |
package com.didichuxing.datachannel.agent.integration.test.verify;
import java.util.List;
import com.didichuxing.datachannel.agent.integration.test.basic.BasicUtil;
import com.didichuxing.datachannel.agent.integration.test.format.LogEventFormat;
import com.didichuxing.datachannel.agent.integration.test.utils.Md5Util;
import org.apache.commons.lang.StringUtils;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import com.didichuxing.datachannel.agent.integration.test.datasource.BasicDataSource;
import com.didichuxing.datachannel.agent.integration.test.format.Format;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
/**
 * @description: Data verification workflow
 * @author: huangjw
 * @Date: 19/2/12 17:31
 */
public class DataVerifyTask implements Runnable {
private static final Logger LOGGER = LoggerFactory.getLogger(DataVerifyTask.class
.getName());
private BasicUtil basicUtil = BasicUtil.getInstance();
private DataVerifyConfig config;
// Exit directly if no data has been consumed for 60 seconds
private int maxNoRecord = 60;
public DataVerifyTask(DataVerifyConfig config) {
this.config = config;
}
@Override
public void run() {
BasicDataSource basicDataSource = config.getBasicDataSource();
Format format = config.getFormat();
String topic = config.getTopic();
int num = 0;
while (num < maxNoRecord) {
ConsumerRecords<String, String> records = basicUtil.getNextMessageFromConsumer(topic);
if (records != null && !records.isEmpty()) {
for (ConsumerRecord<String, String> record : records) {
if (record != null && StringUtils.isNotBlank(record.value())) {
String message = record.value() + System.lineSeparator();
LOGGER.info("consumer topic[" + topic + "]:" + message);
Object object = format.unFormat(message);
if (format instanceof LogEventFormat) {
List<String> result = (List<String>) object;
for (String item : result) {
String md5 = Md5Util.getMd5(item);
if (basicDataSource.getMap().containsKey(md5)) {
basicDataSource.getMap().remove(md5);
}
}
} else {
String md5 = Md5Util.getMd5((String) object);
if (basicDataSource.getMap().containsKey(md5)) {
basicDataSource.getMap().remove(md5);
}
}
}
}
num = 0;
} else {
num++;
if (num > maxNoRecord / 2) {
LOGGER.info("topic[" + topic + "] has had no data for " + num + " polls!");
}
try {
Thread.sleep(1000L);
} catch (InterruptedException e) {
LOGGER.error("sleep is interrupted.", e);
// Restore the interrupt flag so callers can observe the interruption.
Thread.currentThread().interrupt();
}
}
}
// Check whether every entry in basicDataSource's map has passed verification
LOGGER.info("check " + basicDataSource.getClass().getName() + " end. topic is " + topic
+ ", result is " + (basicDataSource.getMap().size() == 0));
}
}
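The verification idea above (key each produced record by its MD5, remove the hash of every consumed record, and pass when nothing remains) can be sketched compactly; the helper below is an illustrative re-implementation in Python, not part of the original project:

```python
import hashlib

def verify(produced_messages, consumed_messages):
    """Sketch of DataVerifyTask's check: every produced message must be
    consumed. Produced messages are keyed by MD5; each consumed message
    removes its hash, and verification passes when the map is empty."""
    pending = {hashlib.md5(m.encode()).hexdigest(): m for m in produced_messages}
    for message in consumed_messages:
        pending.pop(hashlib.md5(message.encode()).hexdigest(), None)
    return len(pending) == 0

print(verify(["a", "b"], ["b", "a"]))  # True
print(verify(["a", "b"], ["a"]))       # False
```

Keying by hash rather than by full payload keeps the pending map small regardless of message size, which is the same trade-off the Java task makes.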
|
// File: src/main/java/io/connectedhealth_idaas/eventbuilder/dataobjects/financial/hipaa/N8A.java
package io.connectedhealth_idaas.eventbuilder.dataobjects.financial.hipaa;
import org.apache.commons.lang3.builder.ReflectionToStringBuilder;
public class N8A {
private String N8A_01_WaybillCrossReferenceCode;
private String N8A_02_WaybillNumber;
private String N8A_03_Date;
private String N8A_04_ReferenceIdentification;
private String N8A_05_CityName;
private String N8A_06_StateorProvinceCode;
private String N8A_07_StandardCarrierAlphaCode;
private String N8A_08_FreightStationAccountingCode;
private String N8A_09_EquipmentInitial;
private String N8A_10_EquipmentNumber;
@Override
public String toString() { return ReflectionToStringBuilder.toString(this); }
}
|
//
// WormholeMgr.h
// iStar
//
// Created by <NAME> on 23/05/09.
// Copyright 2009 Folera.Com. All rights reserved.
//
#import <Foundation/Foundation.h>
#import <OpenGLES/ES1/gl.h>
#import "GfxMgr.h"
#import "Constants.h"
#import "Vector.h"
@interface WormholeMgr : NSObject {
@public
Vector *location;
@private
GLfloat vertices[12];
int animDelay;
int animIdx;
}
- (id)initWithX:(GLfloat)x andY:(GLfloat)y;
- (void)release;
- (void)draw;
- (void)clearAnimation;
- (void)animate;
@end |
Millimeter-wave systems that perform beam forming and steering typically include numerous antenna elements, integrated circuits and interconnects. Such systems form the basis of a viable mechanism for providing high-data-rate, short-range wireless connectivity for consumer applications. To achieve the required performance and cost points, a prevalent challenge is to develop an integration platform package that is compatible with volume manufacturing and assembly processes.
Such an integrated package is expected to accommodate a variety of functions as the level of integration increases. These functions include providing low-loss, resonance-free mm-wave signal paths; embedding multi-layer antenna elements and their feed network; integrating local oscillator (LO) and intermediate frequency (IF) distribution and passive circuits; and incorporating control and bias layers, among others.
In a typical scenario where a millimeter-wave antenna is to be integrated with an integrated circuit (IC), both the antenna and the IC reside on the top layer of a substrate to ensure acceptable performance. This approach runs into problems when many antenna elements must each be driven by distinct RF ports located on one or more ICs. First, routing congestion limits the number of elements. Moreover, the package becomes large, since the ICs and antennas have to share the same surface with enough clearance between them. As the size of the package increases, the cost increases, and in some cases the substrate may even become too large to be manufactured. Finally, heat removal from the ICs becomes difficult. |
BRD7, a Novel PBAF-specific SWI/SNF Subunit, Is Required for Target Gene Activation and Repression in Embryonic Stem Cells* The composition of chromatin-remodeling complexes dictates how these enzymes control transcriptional programs and cellular identity. In the present study we investigated the composition of SWI/SNF complexes in embryonic stem cells (ESCs). In contrast to differentiated cells, ESCs have a biased incorporation of certain paralogous SWI/SNF subunits with low levels of BRM, BAF170, and ARID1B. Upon differentiation, the expression of these subunits increases, resulting in a higher diversity of compositionally distinct SWI/SNF enzymes. We also identified BRD7 as a novel component of the Polybromo-associated BRG1-associated factor (PBAF) complex in both ESCs and differentiated cells. Using short hairpin RNA-mediated depletion of BRG1, we showed that SWI/SNF can function as both a repressor and an activator in pluripotent cells, regulating expression of developmental modifiers and signaling components such as Nodal, ADAMTS1, BMI-1, CRABP1, and thyroid releasing hormone. Knockdown studies of PBAF-specific BRD7 and of a signature subunit within the BAF complex, ARID1A, showed that these two subcomplexes affect SWI/SNF target genes differentially, in some cases even antagonistically. This may be due to their different biochemical properties. Finally we examined the role of SWI/SNF in regulating its target genes during differentiation. We found that SWI/SNF affects recruitment of components of the preinitiation complex in a promoter-specific manner to modulate transcription positively or negatively. Taken together, our results provide insight into the function of compositionally diverse SWI/SNF enzymes that underlie their inherent gene-specific mode of action. 
Embryonic stem cells (ESCs) possess a distinctive global chromatin structure that is characterized by hyperdynamic architectural proteins and bivalent domains, ultimately resulting in elevated global transcription compared with differentiated cells. This chromatin structure is dictated by stem cell-specific transcription factors, chromatin architecture, and epigenetic regulation, and is a prerequisite for self-renewal and the capacity to differentiate into the three germ layers. Important determinants of this unique genomic plasticity are ATP-dependent chromatin-remodeling complexes. These multisubunit enzymes catalyze non-covalent eviction, restructuring or repositioning of nucleosomes to modulate the accessibility of transcription factors and other regulatory proteins to chromosomal DNA. Multiple distinct families of chromatin-remodeling complexes exist, some of which have been implicated in developmental processes. For example, genomic disruption of specific chromatin-remodeling components results in early embryonic lethality (10-14). Other remodeling modules are required to maintain the balance between ESC self-renewal and differentiation. Given the diverse nature of SWI/SNF enzymes and the requirement for some, but not all, of their subunits in embryonic stem cells, we investigated which complexes exist in pluripotent cells. Our results indicate that several BAF subunits form a core that is contained in the majority of SWI/SNF enzymes. Surprisingly, in ESCs specific paralogues predominate in SWI/SNF complexes, and incorporation of related proteins is restricted by transcriptional repression of their genes. This indicates a reduced diversity of SWI/SNF complexes in pluripotent cells that is reversed upon differentiation, probably reflecting the need to regulate more intricate transcriptional programs. Our functional analysis of BRG1 in pluripotent cells revealed that SWI/SNF can both repress and activate target genes.
We also identified novel stoichiometric components of SWI/SNF complexes, among them the PBAF-specific bromodomain-containing protein 7 (BRD7). Using an RNAi-based approach for BRD7 and ARID1A, we showed that both BAF and PBAF complexes can play important roles in gene-specific repression and activation. Overall, our results add new insights into how the composition of SWI/SNF complexes imposes transcriptional regulation on individual target genes.

EXPERIMENTAL PROCEDURES

Cell Culture and Differentiation-293T and R1 mouse ESCs were obtained from ATCC. ES cells were cultivated on feeder cells according to ATCC guidelines. R201 cells, R218 cells, and their derivatives were maintained on gelatinized tissue culture dishes in Dulbecco's modified Eagle's medium supplied with 15% fetal bovine serum and leukemia-inhibitory factor. For retinoic acid (RA) differentiation, leukemia-inhibitory factor was omitted, and the medium was supplied with 1 μM all-trans-retinoic acid and changed daily.

Lentiviral Production and Infection-Lentiviral expression vectors are based on pWPT-GFP, which contains an SV40-puro cassette. All transgenes are expressed from an EF1α promoter. Detailed maps are available upon request. Lentiviral shRNAs were constructed as described previously for BRG1 and the control hairpin targeting GLUT4, or purchased from Sigma (ARID1A, BRD7, and scrambled hairpin control). The targeted sequence in BRG1 is AAGCACCAGGAGTACCTCAAC. Lentiviral particles were produced and concentrated as described previously. To establish stably expressing cell lines, 10⁵ ESCs were infected at low multiplicity and selected with puromycin (0.25 mg/liter) for 7 days. Cell lines R201 and R218 were derived by infection of R1 with viruses expressing Nanog-HA and Nanog-MYC, respectively, and single cell-cloned.
Multidimensional Protein Identification Technique (MudPIT) Analysis-Trichloroacetic acid-precipitated IP samples were resuspended in 8 M urea, 100 mM Tris, pH 8.5; reduced; alkylated; diluted to 2 M urea, 1 mM CaCl₂, 100 mM Tris, pH 8.5; and digested with trypsin. The tryptic digests were supplemented with formic acid to 5% and analyzed using the anion and cation exchange MudPIT method as described previously. Briefly, the samples were loaded onto a 250-μm-inner-diameter column with a Kasil frit containing a 2.5-cm reverse phase section packed with 5-μm, 125-Å Aqua C18 resin (Phenomenex, Torrance, CA) and a 2.5-cm anion and cation exchange section proximal to the frit. The anion and cation exchange section was packed with a 1:2 mixture of strong cation exchange resin (Partisphere 5-μm SCX resin from Whatman) and anion exchange resin (PolyWAX LP from PolyLC Inc., Columbia, MD). After desalting, this biphasic column was connected to a 10-cm-long, 100-μm-inner-diameter analytical reverse phase column made of 3-μm, 125-Å Aqua C18 resin (Phenomenex). MS analysis was performed on a linear trap quadrupole mass spectrometer (ThermoFisher Scientific) using a three-step MudPIT method with salt pulses at 0, 40, and 100% buffer C. Each full MS scan was followed by five MS/MS scans. The MS/MS spectra were searched with SEQUEST against a mouse International Protein Index protein database using a 3-atomic-mass-unit mass tolerance. The search results were filtered with a modified version of DTASelect with a 5% false positive cutoff at the spectrum level, requiring peptides to be half- or fully tryptic and a minimum of two peptides per protein identification. The false positive rate for protein identification is 2% or lower. SWI/SNF subunits were absent in the control sample (Fig. 1, no tagged subunit) except BRG1 (three peptides) and BAF53A (two peptides). Mowse protein scores are derived from peptide scores as a non-probabilistic basis for ranking protein hits.
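The filtering criteria described above (half- or fully tryptic peptides, at least two peptides per protein) can be sketched as a simple post-processing step. The record layout and peptide strings below are hypothetical simplifications for illustration, not the actual DTASelect format:

```python
from collections import defaultdict

def filter_identifications(peptide_hits, min_peptides=2):
    """Keep proteins supported by at least `min_peptides` distinct
    peptides that are half- or fully tryptic, mirroring the DTASelect
    criteria described in the text."""
    by_protein = defaultdict(set)
    for protein, sequence, tryptic_status in peptide_hits:
        if tryptic_status in ("half", "full"):
            by_protein[protein].add(sequence)
    return {p for p, seqs in by_protein.items() if len(seqs) >= min_peptides}

# Hypothetical hits: (protein, peptide sequence, tryptic status)
hits = [
    ("BRG1", "KHEQYLNR", "full"),
    ("BRG1", "AVDLSPQER", "half"),
    ("BAF53A", "GWDNMEK", "none"),   # non-tryptic, discarded
    ("BAF53A", "LSTPEVR", "full"),   # only one valid peptide
]
print(filter_identifications(hits))  # {'BRG1'}
```

Requiring two independent peptides per protein is what drives the reported protein-level false positive rate below the 5% spectrum-level cutoff.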
Peptide score is −10 log₁₀(p), where p is the probability that the observed match is a random event; individual peptide scores ≥ 27 indicate identity or extensive homology (p < 0.05). Chromatin IP-ChIP was performed as described previously with buffers described by Upstate with the following modifications: 40 × 10⁶ cross-linked cells were resuspended in 2 ml of SDS lysis buffer and sonicated 4 × 8 s at power 4. Soluble complexes were diluted in 3 volumes of ChIP dilution buffer, and lysate corresponding to 10⁷ cells was incubated with 2 μg of antibody (HA, TFIIB, TFIID, or POLII) or 2 μl of J1 prebound on 20 μl of Dynal protein G beads. After two washes in high salt wash buffer, two washes in LiCl wash buffer, and two washes in high salt wash buffer, complexes were de-cross-linked at 65 °C for 6 h. DNA was precipitated using 10 μg of yeast tRNA and 10 μg of glycogen.

RNA Extraction and Quantitative PCR-Total RNA was extracted using TRIzol (Invitrogen). Reverse transcription was performed with 0.5 μg of total RNA, random hexamers, and SuperScript III polymerase (Invitrogen). Quantitative PCR was performed on a Stratagene Mx3005P system using SYBR Green (Applied Biosystems). The error bars shown represent duplicate measurements from independent biological duplicates. Primer sequences are listed in the supplemental materials.

Composition of SWI/SNF in Pluripotent ES Cells-Initially we investigated which forms of the SWI/SNF multisubunit complex exist in ESCs. We used a strategy in which epitope-tagged subunits are virally integrated into the ESC genome. Affinity purification is then accomplished in a simple one-step procedure, resulting in native multisubunit protein preparations of high yield and purity. We expressed several well-characterized subunits to increase our ability to purify most of the compositionally distinct forms of SWI/SNF that may be present in ESCs. BAF47, BAF57, BAF155, BAF170, and BRG1 are all core subunits that associate with both BAF and PBAF complexes (Fig. 1A).
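The score-to-probability relation can be made concrete numerically. The sign convention below (score = −10·log₁₀(p)) is the standard Mascot one; note that the threshold of 27 quoted in the text is higher than the naive p = 0.05 score because the significance threshold also accounts for the size of the search space:

```python
import math

def peptide_score(p):
    """Mascot-style peptide score: -10 * log10(p), where p is the
    probability that the observed match is a random event."""
    return -10.0 * math.log10(p)

def p_value(score):
    """Inverse mapping: probability corresponding to a given score."""
    return 10.0 ** (-score / 10.0)

# p = 0.05 alone corresponds to a score of about 13; the raw p behind
# a score of 27 is roughly 0.002.
print(round(peptide_score(0.05), 1))  # 13.0
print(p_value(27.0))
```

Because the scale is logarithmic, every 10 score points correspond to a tenfold drop in the random-match probability.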
To purify SWI/SNF from substantial amounts of homogeneously undifferentiated ESCs, we created the cell lines R201 and R218. Both were derived from the R1 ESC line by infection with lentiviruses expressing Nanog-HA and Nanog-MYC, respectively. Both cell lines exhibit morphological features similar to pluripotent ES cells and differentiate upon leukemia-inhibitory factor withdrawal or RA treatment (supplemental Fig. 1). We also compared gene expression in parental R1 and R218 cells in response to RA and found them to be very similar, albeit with a slower Oct4 mRNA decrease in R218 cells. Additionally, the R218 line was capable of differentiating into spontaneously beating cardiomyocytes with efficiency similar to that reported previously (data not shown).

SWI/SNF in Embryonic Stem Cells Is Composed of a Limited Subset of Components-To prepare purified SWI/SNF complexes, R201 cells were transduced with lentiviruses expressing a C-terminal FLAG-tagged cDNA of Baf47, Baf57, Baf155, Baf170, or Brg1. Except for BAF170, ectopic expression of SWI/SNF subunits did not lead to an overall increase in protein levels (Fig. 1B). To examine SWI/SNF composition in an unbiased manner, we purified sufficient material to analyze by silver staining and MudPIT. As shown in Fig. 1C, anti-FLAG eluates from ESCs expressing individual FLAG-tagged BAFs 47, 57, 155, or 170 or BRG1 contained a similar set of proteins, which resembles the subunit pattern observed in the initial SWI/SNF purifications. Using Western blotting, mass spectroscopy on the individual bands, and differences in migration upon expression of the FLAG-tagged subunits (supplemental Fig. 2), we co-localized the known SWI/SNF subunits ARID1A, ARID2, BRG1, BAF170, BAF155, BAF60, BAF57, and BAF53 with the indicated bands.
The Majority of SWI/SNF Complexes Contain the Core Subunits BAFs 47, 57, and 155 and BRG1-We then examined by Western blotting whether any subunits were preferentially assembled into particular complexes by unique associations with other BAF proteins. We observed that the bait for the purification was in general slightly overrepresented (Fig. 1D). However, with the exception of the BAF170 sample, the preparations showed a remarkably similar ratio between the other examined core components. This is consistent with the notion that the majority of purified complexes contain all four subunits: BAFs 47, 57, and 155 and BRG1. BAF170 was below detection in all samples except where it was ectopically expressed, suggesting that it is rare in ESCs and that the lentiviral expression increased its abundance above normal, endogenous levels. As expected, its ectopic expression and incorporation in SWI/SNF complexes resulted in the replacement of one or both molecules of its paralogue BAF155, explaining why BAF155 levels are lower in complexes purified through BAF170. Because of the overexpression of BAF170, we did not analyze this sample any further. Several Paralogous Subunits Are Overrepresented by Specific Forms-A comparison of the samples by MudPIT revealed unique peptides of 14 previously documented SWI/SNF components in the four subunit-specific ESC and HeLa purifications (Table 1). The control sample only contained a total of five peptides from two different subunits (see "Experimental Procedures"). Because certain groups of paralogous subunits (BRG1/BRM, BAF155/BAF170, and ARID1A/ARID1B) share substantial sequence homology (>50% in each group), we only included peptides in our analysis that could be unambiguously attributed to one specific polypeptide. Because BRM and BRG1 are mutually exclusive subunits, the single unique BRM peptide in our BRG1 purification reflects the expected false positive identification rate of 2% or lower.
We expected roughly similar amounts of unambiguous peptides when comparing the paralogous subunits within their groups as observed in our analysis of SWI/SNF from HeLa cells. However, we found that in ESCs specific paralogous subunits were preferentially incorporated. In the case of BRG1/BRM, our results suggest that the underrepresented protein BRM might exist in negligible quantities in ESCs in agreement with its low expression during early development. This is also the case for the postmitotic neuron-specific BAF53B of which we could not find any unambiguous peptides. Also in contrast to HeLa cells, BAF170 and ARID1B had considerably fewer unambiguous peptides in ESCs than their counterparts BAF155 and ARID1A, implying that these subunits are less abundant. ARID1B-containing BAF enzymes were shown to interact with transcriptional activators as opposed to complexes with ARID1A that associate with transcriptional repressors, suggesting that in ESCs SWI/SNF might be compositionally better suited for a repressive role. Differentiation Increases Incorporation of Previously Underrepresented SWI/SNF Components-Because we observed a surprisingly biased usage of paralogous SWI/SNF subunits in pluripotent cells, we examined the changes in SWI/SNF composition during ESC differentiation. We used RA for 2 and 6 days to differentiate pluripotent stem cells into restricted descendants. SWI/SNF complexes were purified from differentiating cells using a tagged version of BAF47 as described previously and compared with a preparation from undifferentiated cells (Fig. 2A). To identify subunit-specific changes, we performed Western blot analyses (Fig. 2B). Each SWI/SNF preparation was standardized to give approximately the same levels of the core components BAF47 and BAF57. Throughout the course of differentiation, we observed considerable induction of some subunits, such as BAF53, BAF170, and BRM. Conversely ARID2 and ARID1A decreased by day 6 of differentiation.
BAF155 exhibited a decrease in the immunoreactive 155-kDa band but an increase in the signal at 120 kDa. Mass spectroscopy of this silver-stainable band at 120 kDa identified 18 individual peptides of BAF155, supporting the notion of a splice variant, specific cleavage, or degradation product of BAF155, which may represent the previously observed BAF110. RA-mediated Differentiation Increases ARID1B at the Expense of ARID1A-Several distinct mechanisms could be responsible for the changes in SWI/SNF composition, such as differences in subunit transcription, protein stability, or variations in subunit incorporation rates. It was shown previously that differentiation leads to changes in cellular protein levels that are very similar to the variations we observed in SWI/SNF composition. We therefore tested whether differences in subunit transcription might explain the changes in SWI/SNF composition by measuring their individual RNA expression levels (Fig. 2C). We observed little change, if any, in expression of Baf155 and Brg1, whereas considerable increases in Brm and Baf170 transcripts were apparent. This corroborated our measurements of protein levels and the MudPIT analysis of these samples (Table 2). We also detected a reduction in Arid1a message, confirming the decrease observed in Western blotting by 6 days of differentiation.

[Table 1 legend: SWI/SNF complexes were purified via the subunits shown in Fig. 1C and analyzed by MudPIT mass spectroscopy. Listed are the numbers of individual peptides that could be unequivocally assigned to a particular subunit. Among paralogous subunits, in particular BRG1/BRM, BAF155/BAF170, and ARID1A/ARID1B, incorporation into ES cell complexes is strongly biased, whereas the paralogues are similarly represented in SWI/SNF complexes from HeLa cells. Differences in peptide recovery between individual samples also reflect variations in the total amount of complex subjected to mass spectroscopy. PB, Polybromo.]
Interestingly Arid1b exhibits an opposing phenotype revealed by the increase in both transcript and unique peptides in mass spectroscopy. Unfortunately we could not confirm this by Western blotting because we lacked antibodies that recognized ARID1B under these conditions. The compositional changes we observed by different approaches (and suggested by MudPIT for ARID1B) are summarized in Fig. 2D. B-cell Leukemia Protein 7 (BCL7) Family Members Are Novel SWI/SNF Subunits-Our MudPIT analyses of complexes from ESCs and HeLa cells consistently identified unique peptides from a protein family and another single protein that were not previously recognized as SWI/SNF subunits (Fig. 3A). These peptides are found in all individual SWI/SNF preparations independent of the subunit used for purification. The family represented by the BCL7 A/B/C proteins is of unknown function except for its involvement in chromosomal translocations in Burkitt lymphoma. We also found peptides from BRD7, which was shown to interact with IRF-2 and acetylated histones. One additional group, represented by the proteins BAF45A/B/D, has been well characterized by Crabtree and co-workers. To confirm the incorporation of these proteins into SWI/SNF complexes in ESCs, we affinity-purified lentivirally expressed 2×FLAG-tagged versions of these proteins and compared the interacting proteins with SWI/SNF from a 2×FLAG-BAF57 purification (Fig. 3B). Both BAF45D and BCL7C preparations showed a remarkably similar interacting protein pattern compared with BAF57 as judged by Coomassie and silver staining. The only obvious differences between individual purifications correspond to shifted bands in the molecular weight range where the differentially tagged proteins are expected. We conclude that both BAF45D and BCL7C co-purify with SWI/SNF in a stoichiometry similar to that of BAF57, establishing these proteins as bona fide subunits of SWI/SNF.
BRD7 Is a Novel PBAF Subunit and Defines Complexes with Distinct Biochemical Properties-Compared with BAF57, BRD7 exhibits a similar, albeit slightly different protein interaction pattern (Fig. 3B). The absence of a 250-kDa band and the comparatively overrepresented 200-kDa band suggested that the BRD7-containing complexes lacked ARID1A/B proteins but had close to stoichiometric amounts of ARID2. This suggests that BRD7 is a potential PBAF-specific subunit. We tested this hypothesis by Western blot analysis (Fig. 3C). Both BRG1 and ARID2 co-precipitated with tagged versions of BAF57, BRD7, and BAF45D, underscoring the incorporation of all three baits into SWI/SNF complexes. In contrast, ARID1A can only be detected in association with BAF57 and BAF45D, confirming the selective incorporation of BRD7 into PBAF complexes. Different SWI/SNF subunits contribute specific functionalities to the enzymatic complex. We tested whether BRD7-containing PBAF complexes possessed the same biochemical properties as the mixture of SWI/SNF complexes containing both BAF and PBAF. To this end, we assayed both 2×FLAG-BRD7- and 2×FLAG-BAF57-purified complexes in an ATP hydrolysis assay. Both preparations contained similar amounts of the catalytic subunit BRG1, the only ATPase present in ESCs (Fig. 3D, inset). As expected, the total mixture of SWI/SNF exhibited a basal ATPase activity that was stimulated by the addition of DNA (Fig. 3D, light and dark gray circles, respectively).

[Table 2 legend: SWI/SNF complexes were purified as in Fig. 2 from pluripotent cells or cells differentiated for the indicated time with RA using heterologous BAF47-FLAG expression. Listed are the numbers of individual peptides that could be unequivocally assigned to a particular subunit. Differences in peptide recovery between individual samples also reflect variations in the total amount of complex subjected to mass spectroscopy. PB, Polybromo.]
Surprisingly BRD7-purified PBAF complexes had a substantially higher basal activity that was comparable to the DNA-stimulated activity of total SWI/SNF. This activity, however, was not further stimulated by the addition of DNA. This indicates that compositionally distinct subcomplexes have different enzymatic properties, which may impart specialized functions to BAF and PBAF and facilitate their ability to differentially regulate individual target genes. SWI/SNF Functions as Either an Activator or a Repressor-To define the transcriptional functions of SWI/SNF in pluripotent ESCs and identify its target genes, we performed a BRG1 depletion experiment coupled to a transcriptome survey. R1 ESCs were infected in duplicate with shRNA targeting BRG1 or a non-expressed control, and RNA was harvested. Microarray analysis revealed changes of ≥2-fold in a total of 104 individual genes of which 88 transcripts were induced by the RNAi and expression of 16 genes was reduced. This suggests that SWI/SNF can act both as an activator and a repressor in pluripotent cells (supplemental Table 1). Interestingly when analyzing the gene ontology annotation of all target genes, we found that in the SWI/SNF-repressed group, angiogenic modulators were overrepresented (p value below 10^−4 as annotated below). ESC Antagonism between BAF and PBAF Subcomplexes-Next we asked whether regulation of these BRG1-responsive genes relied on BAF, PBAF, or both subcomplexes. To this end, we infected R218 cells with shRNA targeting either the BAF signature subunit ARID1A, the PBAF-specific subunit BRD7, or the only ATPase present in ESCs, BRG1. All shRNAs reduced the protein level of their target significantly (Fig. 4A) with a decrease in target RNA to 60% (BRG1), 80% (ARID1A), and 80% (BRD7) (data not shown). Next we measured association of the PBAF complex with the genomic loci containing differently regulated groups of target genes (Fig. 4C).
Our ChIP experiments in ESCs localized BRG1 binding near the transcriptional start site of all four tested target genes, Adamts1, Nodal, Trh, and Crabp1, as indicated by the more than 15-fold increase in DNA precipitated by the BRG1-specific antibody J1 compared with a control antibody (IgG). Comparing HA precipitates from ESCs infected with a control virus or with HA-tagged BAF57 or BRD7, respectively, we observed an increase of greater than 70-fold on all target genes, emphasizing the presence of both subunits. Taken together, these results suggest that BAF and PBAF subcomplexes can antagonize the expression of specific target genes possibly because of their intrinsically different biochemical properties.

[Fig. 4 legend: BRD7 regulates a subset of SWI/SNF target genes. A, shRNA-mediated inhibition of specific SWI/SNF subunits. R218 cells were infected with lentiviral shRNA against the indicated subunit or scrambled control. Proteins were extracted and detected by Western blot. B, quantification of SWI/SNF target gene expression. RNA was isolated from cells treated with shRNA viruses and quantified by reverse transcription-quantitative PCR. Target gene expression was normalized to β-actin, and induction over a control virus was plotted on a log scale. Indicated p values were calculated using Student's t test (two independent experiments). C, ChIP assay to quantify SWI/SNF binding to the promoter region of indicated target genes. For the first panel, lysates from cross-linked R218 cells were precipitated using antibodies directed against BRG1 (J1) or a non-expressed epitope (IgG ctrl). For the second panel, ChIP using an antibody directed to the HA epitope was used to precipitate from lysates of cells stably expressing the indicated HA-tagged subunit. Precipitated DNA is expressed as percentage of input DNA. WB, Western blot; ctrl, control; ko, knock-out; IN, input. The data presented are the mean values ± S.E. from two independent experiments.]
NOVEMBER 21, 2008 VOLUME 283 NUMBER 47

SWI/SNF Target Gene Regulation throughout Differentiation-To examine whether differentiation affects SWI/SNF-mediated regulation of target genes, we analyzed the chromatin status of two target genes throughout RA-induced differentiation (Fig. 5). We chose one SWI/SNF-activated gene, Trh, and one SWI/SNF-repressed gene, Adamts1, because of the extent and consistency of changes in their transcript levels throughout differentiation and BRG1 interference. TRH is a secreted hormone that participates in the cascade resulting in the release of thyroid hormones. Trh expression is reported to be modulated by several signaling pathways, including cAMP-response element-binding protein and signal transducers and activators of transcription, epidermal growth factor, and thyroid hormone. ADAMTS1 is an extracellular protease involved in heart development very recently shown to be repressed by BRG1 and speculated to be regulated by E2Fs, Sma and MAD-related proteins (SMADs), or the T-cell factor pathway. Retinoic acid differentiation of ESCs resulted in considerable changes in the expression profiles of both genes. Trh was strongly repressed, whereas expression of Adamts1 was substantially increased (Fig. 5A). To correlate any alterations in chromatin structure and transcription factor occupancy at these promoters with differentiation-induced changes in transcription, we performed ChIP analyses for acetylated histone H3; components of the preinitiation complex (PIC), TFIIB and TFIID; and POLII (Fig. 5B). On the Trh gene, we observed a general decrease of histone acetylation and a loss of both TFIIB and TFIID binding. Because this was accompanied by a significant reduction of total POLII, we conclude that retinoic acid differentiation led to transcriptional shutdown of the Trh gene in part through loss of stable PIC formation.
The opposite was true for Adamts1 where differentiation led to a considerable increase in binding of PIC components and POLII accompanied by a slight increase in H3 acetylation. In contrast to the events at the proximal promoter, we saw little if any occupancy of these components at a control site 5 kb upstream of the transcriptional start sites of either gene. SWI/SNF Can Regulate Target Genes by Generating Accessibility to the PIC-Next we analyzed how differentiation affects binding of SWI/SNF to the Trh and Adamts1 promoters. To address this, we performed ChIP for the SWI/SNF subunit BAF57 in an ESC line expressing 3×HA-BAF57 (Fig. 5C). Spatially resolved analysis revealed that SWI/SNF has two occupancy peaks in the Trh promoter, at −2 kb and near the transcription start site. Differentiation led to a decrease in SWI/SNF occupancy at both sites, closely resembling the decrease of PIC components TFIIB and TFIID. To establish a causal relationship between SWI/SNF binding and PIC occupancy, we performed similar ChIP experiments in a population of cells where knockdown of BRG1 led to a 3-fold decrease in Trh expression (Fig. 5D). In these cells, decreased occupancy of PIC components TFIIB, TFIID, and POLII was observed at the proximal promoter without any significant changes in H3 acetylation. We conclude that SWI/SNF regulates Trh expression by modulating PIC accessibility to the promoter in ESCs. Together with diminished H3 acetylation, loss of SWI/SNF occupancy is a possible mechanism by which the Trh gene is silenced upon differentiation. By contrast, this mechanism does not apply for Adamts1. Induction of Adamts1 expression by either differentiation or BRG1 interference was correlated with increased occupancy of PIC components and POLII (Fig. 5, B and E), and SWI/SNF remained bound during differentiation (Fig. 5C).
Taken together, these results underscore the ability of SWI/SNF to regulate its target genes in a positive or negative manner using very distinct mechanisms.

DISCUSSION

In this study, we investigated different compositional aspects of the SWI/SNF complex in embryonic stem cells. Our results show that, in contrast to differentiated cells, ESCs have comparably low levels of BAF170 and BRM in agreement with other recent reports. During mouse development, Brm expression is concomitant with the onset of vasculogenesis. The mutually exclusive BRM and BRG1 compete for incorporation into the same remodeling complex, and both ATPases can recruit SWI/SNF to their specific target genes through selective interactions with transcription factors. Interestingly BRG1 negatively regulates a subset of angiogenic modulators in ESCs. Therefore, the onset of Brm expression is likely to impact BRG1 target genes by reducing the overall abundance of BRG1-containing SWI/SNF complexes. This may provide a potential Brm expression-dependent mechanism for the onset of vasculogenesis and heart development. However, neither Brm nor Baf170 expression alone in ESCs is sufficient to derepress a subset of target genes including several angiogenic modulators that are repressed by SWI/SNF in ESCs (supplemental Fig. 3). Selective incorporation of two mutually exclusive signature subunits of the BAF complex, ARID1A and ARID1B, was shown to confer activating versus repressive functions of SWI/SNF on cell cycle regulatory genes. Our BRG1 interference experiments indicate that in ESCs SWI/SNF has a predominantly repressive role, which correlates with higher incorporation of the repressive ARID1A subunit. Upon differentiation, the increase in ARID1B and decrease in ARID1A suggests that compositionally repressive SWI/SNF is converted into a more transcriptionally activating enzyme. Other studies have not reported an increase in ARID1B upon differentiation.
However, Wang and co-workers monitored total ARID1B protein levels, whereas our data were derived from detection of ARID1B incorporated into SWI/SNF complexes by MudPIT mass spectrometry and transcriptional measurements of ARID1B message. Overall we found that the diversity of core and specificity subunits increases upon differentiation, presumably reflecting the need to regulate more intricate transcriptional programs. Among our main observations is the identification of BRD7 as a new PBAF-specific subunit. We found BRD7 present in purifications from pluripotent ESCs, differentiated ESCs, and HeLa cells, arguing for its incorporation in a variety of cell types. By performing MudPIT mass spectrometry on BRD7-containing complexes, we found five unambiguous peptides of the Polybromo protein (data not shown), suggesting that the PBAF signature module consists of BRD7, ARID2, and Polybromo. We were unable to confirm the presence of Polybromo in BRD7-containing complexes because of the lack of antibodies that recognize the Polybromo protein. BRD7 contains a bromodomain that interacts with acetylated histone H3, potentially providing additional tethering of the PBAF complex to transcriptionally active chromatin. We also found that BRD7-containing PBAF complexes have very distinct biochemical properties from those of BAF as reflected by differences in the ability of DNA to stimulate their enzymatic activities. We estimate that PBAF complexes represent only a minor fraction of total SWI/SNF in ESCs, which contain predominately BAF complexes, because ARID2 was considerably less abundant in SWI/SNF complexes (purified by tagged BAF57) than ARID1A. It is possible that the basal activity of total SWI/SNF in our ATPase assays was contributed by the high ATPase activity of PBAF. The stimulation observed after addition of DNA would therefore be entirely due to the biochemical properties of BAF.
Given the multitude of bromodomains in PBAF complexes, it will be interesting to determine whether PBAF exhibits a greater ATPase activity stimulation by specifically modified histones than BAF. It is well documented that PBAF and BAF complexes have non-redundant functions, for example, in transactivation of interferon-inducible genes. In addition, there is considerable overlap of common target genes as might be expected for genes in which common subunits are responsible for SWI/SNF recruitment. Interestingly in ESCs, BAF appears to be required for the majority of BRG1-dependent transcriptional regulation. By contrast, BRD7/PBAF function was dispensable for a sizeable subset of the genes we examined, although this complex could be readily detected at some of these promoters such as ADAMTS1. A possible explanation is that PBAF may be recruited through a subunit other than BRD7. Overall our RNAi results are consistent with the severity of the knock-out phenotype of respective signature subunits. ARID1A knockout mice arrest very early in development and display deficiencies in early germ layer formation. On the other hand, Polybromo knock-outs die relatively late (embryonic day 12.5-15) probably because of heart defects. A majority of SWI/SNF target genes were derepressed by RNAi-mediated knockdown of BRG1, indicating that functional SWI/SNFs are involved in repressing their expression. Repression by SWI/SNF is well documented and shown to be required for the repressive function of the RB (Retinoblastoma) protein complex and transcriptional repression of neuronal genes by the repressor element 1-silencing transcription factor (REST) protein. SWI/SNF has also been implicated in silencing of methylated genes. These studies highlight the ability of SWI/SNF to play critical roles as either a transcriptional coactivator or corepressor. In ESCs, this correlates with the biased incorporation of the repressive ARID1A subunit. 
However, our interference experiments of ARID1A show that ARID1A is not exclusively repressive as it was required for SWI/SNF-mediated activation of a subset of target genes. Different models have been proposed to explain the function of chromatin-remodeling complexes in the sequence of events that govern transcription. For example, some studies show that promoter activation requires SWI/SNF recruitment before association of the histone acetyltransferase Spt-Ada-Gcn5 acetyltransferase (SAGA) and the PIC. In other models, transient recruitment of remodeling complexes to poised promoters allows release of elongating RNA polymerase II. Repression by SWI/SNF (and the Rsc complex in yeast) is mediated by establishing an inhibiting nucleosomal distribution, recruitment of histone deacetylases, or indirectly stimulating expression of inhibitory non-coding RNAs. Taken together, these studies reveal that chromatin remodeling can function at many discrete steps to impact transcriptional regulation. This mechanistic diversity is likely to be dictated by the particular chromatin environment of individual target genes and the specific remodeler. Because histone acetylation can serve as a recruitment platform for chromatin-remodeling complexes, we speculate that H3 acetylation is a contributing factor in selective SWI/SNF recruitment and function on the Trh promoter. Such a model could explain the loss of SWI/SNF occupancy during the differentiation-mediated decrease in H3 acetylation and also why BRG1 knockdown leads to lowered transcription without affecting H3 acetylation. From this perspective, it will be very interesting to analyze the basis of selective BAF and PBAF complex recruitment and how they regulate PIC access to the Trh core promoter in ESCs. From a broader viewpoint, several components of SWI/SNF have been shown to be necessary for long term survival of the inner cell mass.
Moreover decreased cell numbers are observed after BRG1 depletion suggesting that BRG1 is required for ESC growth perhaps through functions in addition to transcriptional regulation. In this regard, SWI/SNF has been linked to genome organization, recombination, cell cycle progression, and the DNA damage response. Future investigations into the potential roles for SWI/SNF in these and other processes should generate a more comprehensive understanding of its contribution to ES cell pluripotency.
// Clone creates a clone of the transport.
//
// The returned transport won't keep references to the original transport values.
func (t *Transport) Clone() *Transport {
transport := Transport{Meta: t.Meta}
if t.Body != nil {
body := *t.Body
transport.Body = &body
}
if t.Files != nil {
transport.Files = t.Files.clone()
}
if t.Data != nil {
transport.Data = t.Data.clone()
}
if t.Relations != nil {
transport.Relations = t.Relations.clone()
}
if t.Links != nil {
transport.Links = t.Links.clone()
}
if t.Transactions != nil {
transport.Transactions = t.Transactions.clone()
}
if t.Calls != nil {
transport.Calls = t.Calls.clone()
}
if t.Errors != nil {
transport.Errors = t.Errors.clone()
}
return &transport
} |
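`Clone` deep-copies the pointer field by dereferencing (`body := *t.Body`) rather than copying the pointer itself. A minimal standalone sketch (types hypothetical, not the actual `Transport` definition) shows why this matters:

```go
package main

import "fmt"

// Doc is a hypothetical stand-in for Transport: one value field, one pointer field.
type Doc struct {
	Meta string
	Body *string
}

// Clone copies the pointed-to value so the clone owns its own Body.
func (d *Doc) Clone() *Doc {
	clone := Doc{Meta: d.Meta}
	if d.Body != nil {
		body := *d.Body // copy the value, not the pointer
		clone.Body = &body
	}
	return &clone
}

func main() {
	orig := "hello"
	d := &Doc{Meta: "m", Body: &orig}
	c := d.Clone()
	*d.Body = "changed" // mutating the original...
	fmt.Println(*c.Body) // ...does not affect the clone: prints "hello"
}
```

Copying the pointer directly (`clone.Body = d.Body`) would make both structs share one string, so a mutation through either would be visible in both.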
// Gets the TV episode stored in an embedded entity and stores the result in
// item. Returns true if the TV episode was valid.
bool GetEpisode(const EpisodeCandidate& candidate, mojom::MediaFeedItem* item) {
if (!item->tv_episode)
item->tv_episode = mojom::TVEpisode::New();
item->tv_episode->episode_number = candidate.episode_number;
item->tv_episode->season_number = candidate.season_number;
item->action_status = candidate.action_status;
auto* name = GetProperty(candidate.entity, schema_org::property::kName);
if (!name || !IsNonEmptyString(*name))
return false;
item->tv_episode->name = name->values->string_values[0];
if (!ConvertProperty<mojom::TVEpisode>(
candidate.entity, item->tv_episode.get(),
schema_org::property::kIdentifier, false,
base::BindOnce(&GetIdentifiers<mojom::TVEpisode>))) {
return false;
}
  // Validate the image markup if present. Note that the converted images are
  // only used for validation here; a failed conversion rejects the whole item.
  auto* image = GetProperty(candidate.entity, schema_org::property::kImage);
  if (image) {
    auto converted_images = GetMediaImage(*image);
    if (!converted_images.has_value())
      return false;
  }
if (!ConvertProperty<mojom::MediaFeedItem>(
candidate.entity, item, schema_org::property::kPotentialAction, true,
base::BindOnce(&GetAction<mojom::MediaFeedItem>,
candidate.action_status))) {
return false;
}
return true;
} |
// src/i18n.ts
import flagCA from '@/assets/flags/flagCA.svg'
import flagEN from '@/assets/flags/flagEN.svg'
import flagES from '@/assets/flags/flagES.svg'
import type en from '@/locales/en.json'
import messages from '@intlify/vite-plugin-vue-i18n/messages'
import { createI18n } from 'vue-i18n'
type MessageSchema = typeof en
type Locales = 'en' | 'es' | 'ca'
const i18n = createI18n({
locale: navigator.language,
legacy: false,
fallbackLocale: {
ca: ['es'],
default: ['en', 'es', 'ca'],
},
messages: messages as Record<Locales, MessageSchema>,
fallbackWarn: false,
missingWarn: false,
})
export interface LocaleInfo {
id: Locales
name: string
icon: string
}
export const localesInfo: LocaleInfo[] = [
{ id: 'ca', name: 'Català', icon: flagCA },
{ id: 'en', name: 'English', icon: flagEN },
{ id: 'es', name: 'Español', icon: flagES },
]
export function getLocaleInfo(locale: Locales): LocaleInfo
export function getLocaleInfo(locale: string): LocaleInfo | undefined
export function getLocaleInfo(locale: string): LocaleInfo | undefined {
return localesInfo.find((l) => l.id === locale)
}
export default i18n
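The overload pair on `getLocaleInfo` lets callers skip the `undefined` check when the argument is already narrowed to the `Locales` union, while arbitrary strings (such as `navigator.language`) still get the optional return type. A self-contained sketch of the same pattern, with the locale data inlined for illustration:

```typescript
type Locale = 'en' | 'es' | 'ca'
interface Info { id: Locale; name: string }

const infos: Info[] = [
  { id: 'ca', name: 'Català' },
  { id: 'en', name: 'English' },
  { id: 'es', name: 'Español' },
]

// Overload 1: a known Locale always resolves, so the return is non-optional.
function lookup(locale: Locale): Info
// Overload 2: an arbitrary string may miss, so the return is optional.
function lookup(locale: string): Info | undefined
function lookup(locale: string): Info | undefined {
  return infos.find((i) => i.id === locale)
}

const sure: Info = lookup('en') // no undefined check needed
console.log(sure.name) // English
console.log(lookup('fr')) // undefined
```

The trade-off is that overload 1 is a compile-time promise only: it holds as long as the data array actually contains every member of the union.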
|
package com.github.amanganiello90.managecore;
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;
import org.xml.sax.SAXException;
/**
*
* class for check SNAPSHOT dependency on pom.xml
*
* @author amanganiello90
*/
public class ReadPoms {
private static final String SNAPSHOT = "SNAPSHOT";
private static final String VERSION = "version";
private static final String PARENT = "parent";
private static final String MODULE = "module";
private static final String MODULES = "modules";
private static final String PROPERTIES = "properties";
private static final String ALL = "all";
private static final String DEPENDENCIES = "dependencies";
/**
* Constructor
*/
private ReadPoms() {
}
/**
	 * read poms and throw an exception if a SNAPSHOT dependency exists
*
* @param directory
* of your project
* @param typology:
* all, parent or dependencies
* @param testPom:
* name of test pom
* @return null if there aren't SNAPSHOTS
* @throws ParserConfigurationException
* if pom.xml is not parsed
* @throws SAXException
* for pom.xml
* @throws IOException
* for pom.xml
*/
public static String checkSnapshots(String directory, String typology, String testPom)
throws ParserConfigurationException, SAXException, IOException {
if (!((ALL).equals(typology) || (PARENT).equals(typology) || (DEPENDENCIES).equals(typology))) {
			throw new IllegalArgumentException(
					"Illegal argument value for typology: set to 'all', 'dependencies' or 'parent'");
}
String pomName = directory + "/pom.xml";
if (testPom != null) {
pomName = directory + testPom;
}
// Reading your main pom
File pomFile = new File(pomName);
DocumentBuilderFactory documentBuilderFactory = DocumentBuilderFactory.newInstance();
DocumentBuilder documentBuilder = documentBuilderFactory.newDocumentBuilder();
org.w3c.dom.Document document = documentBuilder.parse(pomFile);
if ((ALL).equals(typology) || (PARENT).equals(typology)) {
if (ReadPoms.checkSnapshotParent(document)) {
throw new IllegalStateException("YOU HAVE A SNAPSHOT VERSION IN THE PARENT OF YOUR POM! : " + pomName);
}
}
if ((ALL).equals(typology) || (DEPENDENCIES).equals(typology)) {
// check submodules
List<String> modules = ReadPoms.getMavenSubmodules(document);
if (ReadPoms.checkSnapshotProperties(document) || ReadPoms.checkSnapshotVersions(document)) {
throw new IllegalStateException("YOU HAVE SNAPSHOT VERSIONS ON YOUR MAIN POM! : " + pomName);
}
String childMessage = ReadPoms.checkChildSnapshotVersions(directory, modules, documentBuilder);
if (childMessage != null) {
throw new IllegalStateException(childMessage);
}
}
return null;
}
/**
	 * read the parent pom version and return true if it is a SNAPSHOT
*
* @param document
* of your pom.xml
* @return
*
*/
private static boolean checkSnapshotParent(org.w3c.dom.Document document) {
// get the parent version
int exist = document.getElementsByTagName(PARENT).getLength();
if (exist == 1) {
Node node = document.getElementsByTagName(PARENT).item(0);
NodeList children = node.getChildNodes();
for (int i = 0; i < children.getLength(); i++) {
String nodeName = children.item(i).getNodeName();
if ((VERSION).equals(nodeName)) {
if (children.item(i).getTextContent().contains(SNAPSHOT)) {
return true;
}
}
}
}
return false;
}
/**
 * read the declared maven submodules
 *
 * @param document
 * of your pom.xml
 * @return the names of the declared Maven submodules
 *
 */
private static List<String> getMavenSubmodules(org.w3c.dom.Document document) {
List<String> result = new ArrayList<String>();
// collect the submodule names
int exist = document.getElementsByTagName(MODULE).getLength();
for (int i = 0; i < exist; i++) {
Node node = document.getElementsByTagName(MODULE).item(i);
if (MODULES.equals(node.getParentNode().getNodeName())) {
result.add(node.getTextContent());
}
}
return result;
}
/**
 * check snapshot versions
 *
 * @param document
 * of your pom.xml
 * @return true if any non-parent version contains SNAPSHOT
 *
 */
private static boolean checkSnapshotVersions(org.w3c.dom.Document document) {
// get the versions
int exist = document.getElementsByTagName(VERSION).getLength();
for (int i = 0; i < exist; i++) {
Node node = document.getElementsByTagName(VERSION).item(i);
if (node.getTextContent().contains(SNAPSHOT) && !(PARENT).equals(node.getParentNode().getNodeName())) {
return true;
}
}
return false;
}
/**
 * check snapshot versions on properties
 *
 * @param document
 * of your pom.xml
 * @return true if any property value contains SNAPSHOT
 *
 */
private static boolean checkSnapshotProperties(org.w3c.dom.Document document) {
// get the versions
int exist = document.getElementsByTagName(PROPERTIES).getLength();
if (exist == 1) {
Node node = document.getElementsByTagName(PROPERTIES).item(0);
NodeList children = node.getChildNodes();
int numberChildren = children.getLength();
for (int i = 0; i < numberChildren; i++) {
String propertyVersion = children.item(i).getTextContent();
if (ReadPoms.isCommentedTag(document, children.item(i).getNodeName())) {
continue;
}
if (propertyVersion.contains(SNAPSHOT)) {
return true;
}
}
}
return false;
}
/**
* check snapshot versions on child poms
*
* @param directory
* of your project
* @param modules
* of your project
* @param documentBuilder
* to read your pom.xml
* @return null if there aren't SNAPSHOTS
* @throws IOException
* if pom.xml is not read
* @throws SAXException
* if pom.xml is not read
*
*/
private static String checkChildSnapshotVersions(String directory, List<String> modules,
DocumentBuilder documentBuilder) throws SAXException, IOException {
for (int i = 0; i < modules.size(); i++) {
// Reading your child pom
String pomNameChild = directory + "/" + modules.get(i) + "/pom.xml";
File pomChildFile = new File(pomNameChild);
org.w3c.dom.Document documentChild = documentBuilder.parse(pomChildFile);
// check submodules
List<String> modulesChild = ReadPoms.getMavenSubmodules(documentChild);
if (ReadPoms.checkSnapshotProperties(documentChild) || ReadPoms.checkSnapshotVersions(documentChild)) {
return "YOU HAVE SNAPSHOT VERSIONS ON YOUR CHILD POM! : " + pomNameChild;
}
if (!modulesChild.isEmpty()) {
return ReadPoms.checkChildSnapshotVersions(directory + "/" + modules.get(i), modulesChild,
documentBuilder);
}
}
return null;
}
/**
 * check whether a property tag is commented out in pom.xml
 *
 * @param document
 * of your pom.xml
 * @param tagName
 * to look up in your pom.xml
 * @return true if no live element with this tag name exists
 */
private static boolean isCommentedTag(org.w3c.dom.Document document, String tagName) {
    // A commented-out property leaves no live element with this tag name in the DOM.
    return document.getElementsByTagName(tagName).item(0) == null;
}
}
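For quick experimentation outside the JVM, the parent-version check above can be sketched in Python with the standard library's `xml.etree`. The function name `has_snapshot_parent` is mine, and real pom.xml files carry the Maven XML namespace, which this sketch ignores for brevity:

```python
# Python sketch of ReadPoms.checkSnapshotParent: report whether the
# <parent> section of a pom declares a SNAPSHOT version.
import xml.etree.ElementTree as ET

SNAPSHOT = "SNAPSHOT"

def has_snapshot_parent(pom_xml: str) -> bool:
    root = ET.fromstring(pom_xml)
    parent = root.find("parent")
    if parent is None:
        return False
    version = parent.find("version")
    return version is not None and SNAPSHOT in (version.text or "")
```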
|
// rust/protein-translation/src/lib.rs
use std::collections::HashMap;
pub struct CodonsInfo<'a> {
    // Maps a three-letter RNA codon to the name of the protein it encodes.
    pairs: HashMap<&'a str, &'a str>,
}
impl<'a> CodonsInfo<'a> {
pub fn name_for(&self, codon: &str) -> Option<&'a str> {
self.pairs.get(codon).copied()
}
pub fn of_rna(&self, rna: &'a str) -> Option<Vec<&'a str>> {
    let rna_vec: Vec<char> = rna.chars().collect();
    // Translate each 3-char chunk, stopping at the stop codon; an
    // unknown chunk maps to None and invalidates the whole strand.
    let codons: Vec<Option<&str>> = rna_vec
        .chunks(3)
        .map(|chunk| self.name_for(chunk.iter().copied().collect::<String>().as_str()))
        .take_while(|x| match x {
            Some(name) => *name != "stop codon",
            None => true,
        })
        .collect();
    if codons.is_empty() || codons.iter().any(|x| x.is_none()) {
        return None;
    }
    Some(codons.into_iter().map(|x| x.unwrap()).collect())
}
}
pub fn parse<'a>(pairs: Vec<(&'a str, &'a str)>) -> CodonsInfo<'a> {
    // A Vec of (codon, name) tuples collects directly into a HashMap.
    CodonsInfo {
        pairs: pairs.into_iter().collect(),
    }
}
|
#include "../../../test_common.h"
#include <math.h>
#include <stddef.h>
int quo;
void runSuccess() {
remquof(0.1f,4.0f,&quo);
remquof(1.2f,0.0f,&quo);
remquof(2.0f,4.3f,&quo);
remquof(anyfloat(),anyfloat(),&quo);
}
void runFailure() {
remquof(0.1f,4.0f,NULL);
}
void runFailure1() {
remquof(1.2f,0.0f,NULL);
}
void runFailure2() {
remquof(anyfloat(),anyfloat(),NULL);
}
int f;
void testValues() {
    f = 2;
    remquof(0.1f, 4.0f, &quo);
    //@ assert f == 2;
    //@ assert vacuous: \false;
}
|
/**
 * Encodes a collection of objects, attaching them to the current root.
 *
 * @param xmlTag The name of the root XML element for the encoded collection.
 * @param c The collection to encode.
 * @param useType Whether to encode each element together with its type information.
 */
protected void encodeCollection(String xmlTag, Collection<Object> c, boolean useType) {
for (Object o : c) {
encodeProperty(xmlTag, o, useType);
}
if (c.size() == 1) {
((Element) parent.getLastChild()).setAttribute("forceList", "YES");
}
} |
"""
Desafio 035
Problema: Desenvolva um programa que leia o comprimento de três
retas e diga ao usuário se elas podem ou não formar um triângulo.
Resolução do problema:
"""
lado_A = float(input('Informe a medida do lado A: '))
lado_B = float(input('Informe a medida do lado B: '))
lado_C = float(input('Informe a medida do lado C: '))
analise = False
# Somente será validado como Triângulo passando nas três condições seguintes
if abs(lado_B - lado_C) < lado_A < lado_B + lado_C:
if abs(lado_A - lado_C) < lado_B < lado_A + lado_C:
if abs(lado_A - lado_B) < lado_C < lado_A + lado_B:
analise = True
if analise:
print('As retas PODEM FORMAR um triângulo.')
else:
print('As retas NÃO PODEM FORMAR um triângulo.')
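The three nested checks implement the triangle inequality; for reuse it can be collapsed into a single predicate (the function name `is_triangle` is mine, not part of the exercise):

```python
def is_triangle(a: float, b: float, c: float) -> bool:
    """True if segments of lengths a, b, c can form a non-degenerate triangle."""
    # Each side must be strictly shorter than the sum of the other two;
    # this also implies the |difference| < side checks used above.
    return a < b + c and b < a + c and c < a + b
```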
|
Preparation, characterization, and release behavior of ceftiofur-loaded gelatin-based microspheres

Drug-loaded microspheres prepared from biomacromolecules have received considerable interest. In this article, we report a facile method for preparing ceftiofur-loaded gelatin-based microspheres for controlled release. We investigated the effects of factors, including the rotational speed, concentration of surfactant, concentration of gelatin, and ratio of water to oil (W/O), on the morphologies of gelatin microspheres and obtained the optimized conditions; for a typical average diameter of about 15 μm, these were 1000 rpm, a Span 80 concentration of 2.0%, a gelatin concentration of 20%, and a W/O of 1:20. Gelatin microspheres loaded with ceftiofur, ceftiofur-Na, and ceftiofur-HCl were prepared and characterized by scanning electron microscopy and laser light scattering. In vitro release studies were carefully performed for microspheres prepared with different crosslinker contents, loaded with different drugs, and blended with chitosan. The loaded ceftiofur showed an obviously longer release time compared with pure ceftiofur powder. A higher content of crosslinker led to a longer release time, but when the content reached 5%, the microspheres had a significantly cracked surface. The results also indicate that the blending of a small amount of chitosan could greatly prolong the release time. © 2013 Wiley Periodicals, Inc. J. Appl. Polym. Sci. 130: 2369-2376, 2013
import rdflib
def links_generator(graph):
    # Tab Mix Plus stores each tab entry as '...][|-|<url>%5D%5B0...';
    # slice the URL out from "http" up to the URL-encoded "][0" marker.
    for obj in graph.objects():
        if "][|-|" in obj:
            idx = obj.find("http")
            idx2 = obj.rfind("%5D%5B0")
            link = obj[idx:idx2]
            yield link
if __name__ == "__main__":
    g = rdflib.Graph()
    g.parse("tabmix_sessions-2015-11-14.rdf")  # replace with your own
    links = list(links_generator(g))
    print("Found", len(links), "links")
    with open("links.html", "w") as fp:
        fp.write("<html><body>")
        for link in links:
            fp.write("<a href=\"" + link + "\">" + link + "</a><br>\r\n")
        fp.write("</body></html>")
|
//A3_Data_F_Australis_EditorCategories
class CfgFactionClasses
{
/*class B_AU_F
{
displayName = "NATO (Oceanic)";
priority = 3; // Position in list.
side = 1; // Opfor = 0, Blufor = 1, Indep = 2.
icon = ""; //Custom Icon
};*/
class B_NZ_F
{
displayName = "New Zealand";
priority = 3;
side = 1;
icon = "";
};
class B_PACUNION_F
{
displayName = "Pacfic Union Peacekeepers";
priority = 3;
side = 1;
icon = "";
};
class B_Serco_F
{
displayName = "Defence Services";
priority = 3;
side = 2;
icon = "";
};
class O_ID_F
{
displayName = "Indonesia";
priority = 3;
side = 0;
icon = "";
};
class O_TPR_F
{
displayName = "Pirates (Tanoan)";
priority = 3;
side = 2;
icon = "";
};
class I_AFP_F
{
displayName = "Federal Police";
priority = 3;
side = 2;
icon = "";
};
class I_MRC_F
{
displayName = "Mercernaries";
priority = 3;
side = 2;
icon = "";
};
class I_TPR_F
{
displayName = "Pirates (Tanoan)";
priority = 3;
side = 2;
icon = "";
};
class I_Eco_F
{
displayName = "Ecowarriors";
priority = 3;
side = 2;
icon = "";
};
};
class CfgEditorSubcategories
{
class EdSubCat_Personnel_Navy
{
displayName = "Men (Navy)";
};
class EdSubCat_Personnel_NewCaledonia
{
displayName = "$STR_A3_Australis_Peacekeepers_NC";
};
class EdSubCat_Personnel_Fiji
{
displayName = "$STR_A3_Australis_Peacekeepers_FJ";
};
class EdSubCat_Personnel_Timor
{
displayName = "$STR_A3_Australis_Peacekeepers_TL";
};
};
|
It's minutes before the Warriors' summer-league game Saturday, and Jermareo Davidson sports an uncontrolled smile as he bounces around the Cox Pavilion court as if he's on a trampoline.
As Davidson readies to play his first back-to-back games since offseason surgery to repair a stress fracture in his left foot, he knows he's putting on a show.
A show aimed at everyone and sometimes no one in particular, all at the same time.
"I want to convince people that I'm doing well even in a time like this."
Davidson is about to enter his third professional season with a complete understanding that he still has a lot of convincing to do. He holds only a partially guaranteed contract with the Warriors, but those who know his story probably have been persuaded that he'll make his point in a time like this.
While playing at the University of Alabama, the 6-foot-10, 250-pounder endured the type of turmoil that shapes people.
In November 2006, his older brother was shot and paralyzed by an unknown gunman in Atlanta. Four days later, his girlfriend died in a car wreck there. Right before Christmas, Davidson's brother died in the same hospital as his girlfriend.
"I'll never get that out of my mind, but I have to pick times to sit and think about it," Davidson said. "As long as I stay busy and stay around people, I'm good. Sometimes when I'm by myself, it can get bad, so I try to avoid being by myself.
"Every once in awhile, I have to sit and let it all come out so it won't build and come out the wrong way."
Davidson appears to have channeled his emotions into something that's simply positive.
He took being drafted in 2007 by the Warriors and an immediate trade to Charlotte in stride. He took being cut a year later by Charlotte and a chance to play in the NBA Development League in Idaho in stride. And he took a season-ending injury in 2009 in stride, though it was two weeks after he secured his first professional double-double with the Warriors.
He has taken the same approach with his rehab. After he had surgery in March, Davidson has sweated his recovery routine from one to two to three and, ultimately, to four hours a day.
What started as time spent in the whirlpool and in therapy on his left foot has turned into strengthening the unused muscles around the knee on the same leg. That progressed to weightlifting and running. Finally, he's playing basketball, even on back-to-back nights.
"I'm doing everything I can to strengthen it, and I've been faithfully doing my rehab," Davidson said. "It's just a matter of time. I'm doing everything I possibly can, and I'm fighting through it."
The results aren't as obvious. Davidson was really sucking wind in his first summer-league game, going for four points and two rebounds in 19 minutes. In the back-to-back, he went for two offensive rebounds, an assist and no points.
"You can see that his timing is off, and he's pretty much playing on one leg," said assistant coach Keith Smart, who is the head man during the summer. "It's not there yet, so he needs to log some minutes to get back into it."
General manager Larry Riley reasserted last weekend that the Warriors are seeking some bulk, a player who can play both power forward and center. That could be Drew Gooden, Sean May, Joe Smith, Chris Wilcox or, as Davidson sees it, him.
"I want to be that guy, but I'll take the competition if it comes," Davidson said. "I'll be ready for whatever comes my way this year, proving I've established myself.
"I want to let them know that I've been through the downs, but I'm here. You can look where I came from, but that's in the past. I want them to look where I am now and look at where I'm going." |
Summer is the prime time for a Santa Barbara star party. The nights are usually cool and pleasant, and the skies are often clear and steady. The friendly amateur astronomers of the Santa Barbara Astronomical Unit hold numerous public events where you can indulge your need for a deeper look into the Universe. Unless otherwise noted, all of these events are free.
There are three primary SBAU public star parties that are held every month, all year, weather permitting. The SBAU is sponsored by the Santa Barbara Museum of Natural History, so our premier event is the SBMNH Monthly Public Star Party, held on the second Saturday of each month at their Palmer Observatory.
Telescopes run by SBAU members are set up along the pathway leading from the museum parking lot to the observatory.
There are also some telescopes set up in the circular observing pit adjacent to Palmer Observatory.
Inside the observatory is a 20-inch Ritchey-Chrétien telescope on a computerized mounting.
On the third Friday of every month, it’s time for the Public Telescope Night at the Keck Observatory, at the Westmont College campus, next to the baseball field.
The observatory houses a 24-inch Ritchey-Chrétien telescope on a massive computerized mounting. Students there use it to study asteroid light curves and cataclysmic variable stars, but we just use it for fun.
In addition, SBAU members set up smaller scopes in the observatory plaza and upstairs deck.
On the first Tuesday of each month (for 2017), we set up for Telescope Tuesday at the Camino Real Marketplace, in the plaza next to the Food Court and theater. This site has a lot of nighttime lighting, so we change the monthly Tuesday each year to make sure we usually have a good Moon phase to show. But, depending on the movies and restaurant specials, we also reach a lot of people.
For the summer, since we’re not busy with school science nights, we also bring star parties regularly to Cachuma Lake, Refugio State Beach, and Carpinteria State Beach.
At Cachuma, the events are free for campers, but an entry fee is charged per vehicle to the general public. To kick off the star party, we have an astronomy slide show in their Fireside Theater.
Following that presentation, everyone heads out to the line of telescopes in the field at Dakota Plains. Aside from some campfires and Coleman lanterns, the skies are pretty dark.
We have similar events at Carpinteria State Beach, with a slideshow in their Campfire Center and telescopes set up on the sidewalk toward the beach from their entry kiosk.
At the Refugio State Beach star parties, we set up in the southwest end of the day-use parking lot. This is a site where we consistently see excellent observing conditions, despite the proximity of the ocean.
In summary, if you want to learn about, or just look at and enjoy, the Universe we live in, come on out to a star party!
The SBAU is part of the NASA/JPL/Astronomical Society of the Pacific organization known as the Night Sky Network. If you’re not in Santa Barbara, you can enter your location in their website widget to find star parties and astronomy clubs throughout the USA.
Images courtesy of Tom Totton and other AU members. |
The Building Blocks of Battery Technology: Using Modified Tower Block Game Sets to Explain and Aid the Understanding of Rechargeable Li-Ion Batteries

While Li-ion batteries are abundant in everyday life from smart phones to electric vehicles, there is a lack of educational resources that can explain their operation, particularly their rechargeable nature. It is also important that any such resource can be understood by a wide range of age groups and backgrounds. To this end, we describe how modified tower block game sets, such as Jenga, can be used to explain the operation of Li-ion batteries. The sets can also be utilized to explain more advanced topics such as battery degradation and challenges with charging these batteries at high rates. In order to make the resource more inclusive, we also illustrate modifications to prepare tactile tower block sets, so that the activity is also suitable for blind and partially sighted students. Feedback from a range of groups supports the conclusion that the tower block sets are a useful tool to explain Li-ion battery concepts.

Li-ion batteries are everywhere, from smart phones to laptops, and in more recent times in electric vehicles due to targets of reducing greenhouse gas emissions to mitigate the effect on climate change. With a technology so readily accessible and the continuation of research efforts (30 years on from the commercialization of the initial cell 1 and following the award of the 2019 Nobel Prize for Chemistry to Akira Yoshino, M. Stanley Whittingham, and John B. Goodenough) from the development of novel materials to end-of-life recycling efforts, 2 such rechargeable batteries are still, more than ever, a hot topic.
Therefore, it is paramount to be able to explain the basics of operation not only to the relevant undergraduate student body, but also to schools and the general public, especially with the ease of accessibility of these consumer products and developments in related and relevant environmental policy. The key to understanding battery operation relies on understanding the redox processes and the electrochemistry at play. When teaching this area, specifically the electrochemistry, multiple applications can be tied to these fundamental principles; 3 however, this topic is more often than not associated with being a troublesome area to teach. 4 This issue is attributed to taught misconceptions which are formed either from misinterpretation or overgeneralization to inappropriate situations. 5 Multiple surveys to assess the drawbacks and misconceptions for this area have been conducted, in addition to the use of a cognitive conflict approach, for both teachers 6 and students, 5,7 respectively, in efforts to understand how to improve the teaching of this area. To support learning and to be engaging, hands-on demonstrations are often used. The most commonly used activity employs lemon-/potato-electrolyte batteries, 8 which are useful for introducing the concept of electrode potentials, electrical circuits, and a non-rechargeable battery. The basics of this demonstration involve piercing a zinc-containing nail and a copper coin into either a lemon or a potato. The metals are the electrodes in the circuit, and the lemon/potato acts as the electrolyte. The metals are connected to a voltmeter. This demonstration and other approaches are tactile and can be made visually and acoustically stimulating, with a connection to a music birthday card 8 or with a connected LED 9 or a clock. 10 In addition, quite often these demonstrations make use of classroom-based tools or use available household products.
11 Hence, these activities in general are suited for a wide range of students, including those who are either visually impaired or deaf. 8 Although these are good hands-on activities to introduce key aspects of batteries, these demonstrations cannot explain how rechargeable batteries work and, additionally, often lead to a misconception that the lemon or the potato is the powerhouse behind the circuit. Within the literature, efforts to explain and elaborate on the chemistry of the Li-ion battery appear to be limited to a degree level. 12−15 Of the limited resources in a different medium to the degree classroom and laboratory-based practical, a recently published video titled "Lithium Shuffle-Battery Operation" explores the basic setup of these batteries before making use of a human-sized Li-ion battery to show the mobility of the ions involved. 16,17 The lack of demonstrations to explain Li-ion batteries has been our motivation to design a suitable activity using tower block sets, such as Jenga, which can complement the non-rechargeable demonstrations. The archetypal Li-ion battery with electrode materials LiCoO2−graphite is a layered system, and hence, the stacking of blocks in a normal tower block game makes this traditional set ideal for customization into the appearance of this battery setup. This is not the first instance of this tower block game being used in an educational setting. They have been used to explore risk management concepts with senior nursing students whereby the student had to identify risks to patients without the tower toppling over. 18 In addition, it has been used for teaching institutional oppression, whereby the game is played normally but rules are introduced to make the game play increasingly difficult. 19 Both authors of these studies comment on how the use of the tower block game acts to promote engaging learning and reinforce/fortify the students' new understanding.
A more recent case study involving a tower block game for educational purposes has been the "Scientific Scissors: Genetic Jenga" game, whereby laboratory tongs are used to remove or replace the blocks representing genetic code, and this allows resultant discussion of how genes are connected and how displacements can affect the surrounding genes. 20

A rechargeable Li-ion battery consists of two electrodes, such as a layered lithium transition metal oxide electrode (LiCoO2, or Ni, Mn, Al doped analogues) and a graphite electrode. The electrodes are separated with an electrolyte. A simplified schematic is shown in Figure 1. On charging, the lithium ions traverse via the electrolyte, from the oxide electrode to the graphite electrode. An electron will move via the current collectors through an external circuit. On discharging, the reverse process will occur, and hence, the electron will do "work" and power our application. The redox processes in the discharging diagram, i.e., when our battery is powering a device, can be described with the following equations, eqs 1 and 2, such that the (negative) graphite electrode is being oxidized (losing an electron) and the (positive) oxide electrode is being reduced (gaining the electron). Through this redox process, the Li-ion from the graphite will migrate through the electrolyte back into the oxide electrode, while the electron traverses the external circuit powering the device.

Negative Electrode (Oxidation): LixC6 → C6 + xLi+ + xe− (1)

Positive Electrode (Reduction): Li1−xCoO2 + xLi+ + xe− → LiCoO2 (2)

Earlier batteries made use of lithium metal (Li) as the negative electrode, and these, eqs 3 and 4, are given in the AQA Chemistry A-level specification. 21

■ ACTIVITY

We purchased two tower block sets, with each set consisting of 58 blocks, and before use, we set up each of the towers which reached a height of 0.6 m.
Journal of Chemical Education pubs.acs.org/jchemeduc Activity

One set was designated to be the lithium cobalt oxide (LiCoO2) material (oxide electrode; cathode; positive electrode) and the second as the graphite electrode (anode; negative electrode). Note that battery chemists name the electrodes (cathode/anode) based on the processes occurring on discharging; from here on, the positive (cathode) and negative (anode) will be referred to as what they are on discharge and be simplified to oxide and graphite electrodes, respectively. For our initial two sets, we opted to paint the sets as follows: cobalt oxide layer/purple and red spheres, graphite layer/gradient gray, lithium/blue dots, white blanks, copper/orange, aluminum/gray, and orange/gray dual side with additional spares of 16 blocks (7 blanks, 1 orange, 8 lithium). The two tower sets at their full height are shown in Figure 2 without the additional spare blocks. The two tower block game sets in this form allowed us to show intercalation processes that occur on charging through the removal of the lithium-ion blocks between the longitudinal cobalt oxide blocks and insertion of them between the longitudinal layers of the graphite in our second tower set (Figure 3). The reverse motion with the removal of the lithium from the graphite electrode back to the oxide electrode shows the processes that occur on discharging. One aspect of battery chemistry is that it is not possible to remove all the Li from between the CoO2 layers. Typically, only half of the lithium ions present in LiCoO2 can be removed, which we can visually show with the tower block sets. When running the demonstration of charging, participants select specific lithium-ion blocks, but never 3 lithium ions in one row; when students are asked why, they explained that this is to prevent the tower from collapsing.
This is synonymous with this material, whereby overcharging (removing more than x = 0.5 Li from Li1−xCoO2) can result in the breakdown of the material, as CoO2 is an unstable intermediate and there is a resultant oxygen release, which can oxidize the electrolyte and result in the danger of a battery fire. In addition, the battery tower block game sets can show the capacity fade concept (why the performance of the battery reduces over continued use) due to the degradation of the battery, through showing how the blocks become slightly displaced (Figure 4) when removing and inserting the lithium blocks into the electrodes, illustrating distortions in the structure on removal/reinsertion of lithium. With our original graphite electrode set, on the reverse side, the blocks were painted in the same way as those for our oxide electrode. Through turning our graphite set round by 180° and placing it adjacent to our initial oxide electrode, the two oxide electrodes (Figure 5) could be used to show what happens with the varying rates of charges applied (i.e., fast or slow removal of the lithium ions). In running this activity, two students were invited to participate, with one student operating at slow charge with slow removal of the lithium blocks, while the other student would be representing fast charge and would remove the lithium blocks as fast as they could; this invariably leads to the eventuality of structure collapse, as the displacement of the blocks becomes more severe.

■ TACTILE BATTERY TOWER BLOCK SET

The initial painted tower block sets were further developed to enable the activity to be more inclusive, nominally for students who are blind or partially sighted. Therefore, this set needs to have a good contrast in colors, in addition to having distinct shapes and textures.
For this version, one tower set was purchased and divided in two, with each tower having a lithium cobalt oxide electrode face and a graphite electrode on the rear to make multiple use of the sets (Figure 6A). For the oxide electrode, the lithium ions are represented by blue painted wooden buttons and the cobalt oxide layers with purple glitter paper with embossed plastic gems. The graphite electrode was produced with painted gray cardboard albeit without the top layer, to leave a ridged texture akin to the layered structure of this material. The aluminum current collector was decorated with aluminum foil, while the copper current collector was painted orange and had a 1 pence piece affixed to the surface. The total number of pieces represented is as follows: cobalt oxide layers, graphite layers, lithium pieces on one end with a white blank on the opposite end, white blanks, orange copper on one square face and aluminum foil on the opposite end, and longitudinal 50/50 orange copper/aluminum foil. As with the painted tower block set discussed previously, the different activities can be visualized with the tactile equivalent. To allow the lithium-ion insertion path to be clearer, vacancies in the graphite electrode can be removed beforehand, as shown in Figure 6B, before charging (Figure 6C). Degradation (Figure 6D) and charging rates (Figure 6F) can also be demonstrated, as previously mentioned with our initial painted set. Producing a demonstration that is tactile benefits all learners, not just students who are blind or partially sighted.

■ FEEDBACK

The demonstration has been used with multiple age groups since March 2019, from secondary school students having an introduction to rechargeable batteries to university level undergraduates receiving the demonstration to reinforce learning and key concepts for this type of battery, as well as to the general public at a range of museums and other events.
From these earlier events, we received a range of positive comments on sticky notes as feedback. In order to assess the usefulness of this activity in more detail, a survey was handed out before and after the battery tower blocks demonstrations, during a week period in January 2020. The first two questions asked for words associated with "Rechargeable batteries" and "Li-ion batteries". The following two questions (nos. 3 and 4) in the pre-demonstration survey were presented with choices on the Likert scale with "I am confident in explaining which electrode is which in a LiCoO 2 −graphite cell for a Li-ion battery" and "I am confident in explaining how Li-ion batteries operate", with tick box options of "not at all", "a little", "not sure", "some", and "greatly". All of these initial questions were asked again after the use of the battery tower blocks demonstration in a post activity survey with the additional question of "The battery tower blocks are a useful prop in explaining how Li-ion batteries operate" with a final open feedback box on how the activity could be improved in understanding. Responses were gathered from year 3 chemistry undergraduate students (n = 13) who were receiving an introductory lecture to rechargeable batteries. The survey was then open to other chemistry undergraduates, who came along to a separate session (n = 16). Responses were then collected from year 9 to year 11 secondary school children before an energy-based demonstration lecture, where the battery tower blocks were used to explain the operation of Li-(and Na-) ion batteries (n = 49). Finally, the painted and tactile tower block game sets were shown to a small group of adults who specialize in public engagement and work with children with special educational needs (n = 3). The following section will consider these responses, with the generation of the word plots considering the frequency of associated words for the first question and tabulated answers for questions 3−5. 
The year 9−11 school children word chart responses are presented for question 1 in Figure 7, with pre- and post-demonstration answers. These responses are shown in particular, due to this group being the larger cohort and the least experienced in terms of chemistry. The word chart responses for question 2 for this group, along with the university students' answers for both initial questions are provided in the Supporting Information. The initial responses to the word associated with "Rechargeable batteries" appear to be words connected to electricity and the topic's physical quantities, in addition to the name of applications of where this type of battery can be found, i.e., phones and electric vehicles. The pre-demonstration survey answers for question 2 of words associated with Li-ion batteries, again, had applications stated; however, electrode terminology was stated, i.e., cathode and anode. The post-demonstration survey answers for this question are presented in the Supporting Information. The post-demonstration responses for both questions appear to consider more technical information on what these batteries consist of chemically and greater concerns for the environment. If no responses were given to either of the associated word questions, a question mark was counted as that response. The frequency of an unsure answer reduced after the post-demonstration. Note that, in the energy lecture, Na-ion batteries were mentioned, and hence, this is why some students have written this as an answer. The Likert responses from all groups, university students, adults, and year 9−11 school children, pre- and post-demonstration, will now be considered collectively for the following questions within Table 1.

Question 3: I am confident in explaining which electrode is which in a LiCoO2−graphite cell for a Li-ion battery.

Question 4: I am confident in explaining how Li-ion batteries operate.
Question 5: The battery tower block game is a useful prop in explaining how Li-ion batteries operate. The responses to questions 3 and 4 in the post-activity survey all show a positive trend, with students reporting greater confidence in answering these questions. The responses from the adult group show only a small improvement from the demonstration, but this could be due to the small sample size, the group's lack of a chemical background, and their receiving only a 10 min overview (since this was done in a session where a range of tactile chemistry resources were being discussed), thus requiring more time to grasp the concept. The question of whether the battery tower blocks demonstration is useful for explaining Li-ion battery operation drew extremely positive responses, with no one stating it was not useful at all. The university students appear to have found the battery tower block sets the most useful, but this may be because battery science is a university module. Overall, the results from these surveys, and general feedback from our prior use at a range of events, support the conclusion that the battery tower block game is a useful tool to explain Li-ion battery concepts. More in-depth written feedback was received from some students within the undergraduate group covering how the battery tower block game sets can support their learning from an assessment standpoint. The students' comments centered on the demonstration offering a clear visualization of the structural chemistry of the electrodes, how the demonstration helped support the topic of redox reactions, and, from the application viewpoint, the rechargeable nature of these batteries and how they degrade with time.
From a structural point of view, the students commented that the tower block set was useful in showing "a visual differentiation of the [layers] that isn't always as clear from just diagrams", in addition to commenting that the battery tower block sets do "an excellent job of showing the distinct layers present in the electrodes" and show that "lithium exists in different layers to the cobalt oxide in one electrode and the graphite in another". This student went on to comment that they found the 'white "blanks" between the graphite layers really useful in defining the layers and showing exactly where and how the lithium ions intercalate'. Another student commented on the movement of the lithium ions, which "helped understanding of how a lithium-ion battery works", and said that "it was clear that not all lithium ions from one layer could be removed as then the lithium cobalt oxide would not be able to retain its structure". In terms of the topics which the demonstration can support, redox reactions and (electrochemical) potentials were cited. "Although the activity itself didn't explicitly detail the reduction potentials of the processes", when put "into the context of charging and discharging an electrical device, it became clear where work was being done". In addition, another student commented that "an understanding of the redox processes alone means that these scenarios are a bit difficult to visualize effectively and that's where the demo has its real strength; a visual representation which accurately mirrors concepts that are potentially difficult to grasp otherwise". In terms of explaining battery characteristics (operation, rates of charge, and degradation), the students praised the demonstration highly for the ease of visualizing these different aspects.
"This is a really useful tool for showing lithium charge and discharge, which is a really important concept in the batteries course I have taken this year. I also like how this introduces some important aspects of battery chemistry such as high rate of charge (balancing a fast rate for applications such as charging cars/portables quickly vs. avoiding collapse of the structure when too many lithium ions are removed at once)". "The demonstration involving many charge/discharge cycles is useful in demonstrating that batteries are rechargeable but that their structure will degrade over time" and highlights the "down fall of fast charge cycles and the dangers of overextraction of Li-ions which damage the structure of electrodes". Connecting the tower block set activities to real life observations when explaining these mechanisms to students can reinforce learning, such that one student drew a connection by commenting that the "degradation of the cobalt structure happens due to charging and discharging of the lithium ions, causing the cobalt oxide to become degraded and this is the reason the battery on my phone worsens with the more I charge and use it". Another commented that having a physical representation helped support their lecture material, such that "being told in lectures that many cycles of charge and discharge can result in a breakdown of structure is difficult to visualise. However, moving the lithium blocks in and out of the structure repeatedly resulting in loss of alignment gives a really nice visual aid that I can use in my revision." 
Finally, with the multiple scenarios the tower block sets can show, one student "was surprised how many concepts the set could introduce and it's interesting to see how it could be used as an interactive demonstration taking up an entire exercise in class, or as the starting point for much more complex chemistry in later years, leading into discussion of how removal of lithium is accompanied by a change in cobalt oxidation state and what this means for the battery setup." A suggested improvement to this demonstration was to show the external circuit with the movement of electrons; for this, a pipe cleaner attached to the current collector, topped with moveable beads to represent the electrons, was suggested. The tactile set has been reviewed by two teachers who specialize in the education of visually impaired students. The feedback we received was extremely positive: the set was praised for its use of textures and contrasting colors and for its highly interactive nature, such as when the tower block set collapses when fast charging is performed. From this feedback, and to ensure suitability, an improvement was suggested relating to the reversible set used to show the effect of different charge rates: in this set, the blocks were constructed so that the graphite electrode was on one side and the oxide electrode on the other, as mentioned previously (Figure 5). The feedback suggested that having both electrodes on the same tower (one at the front, the other at the back) may confuse students who rely heavily on exploring the textures. Thus, the recommendation was to have each tower represent a single electrode, with no decoration on the reverse. The only activity impacted by this modification is the rate-charging demonstration, where two students are required.
Instead of running the activity simultaneously, one student should perform a slow charge such that the structure remains intact, and after resetting, a second student uses the same tower block set operating at fast charge. The second recommendation, following this alteration, would be for the educator to play an audible beep to set the contrasting paces of lithium-ion removal from the tower sets. This will help support the students in timing the block removal and prevent any confusion about what they should be aiming for. This addition would benefit sighted students also. We acknowledge that feedback coming directly from blind and visually impaired students will be most beneficial; we plan to conduct this survey at a later date and make suitable adjustments based on students' feedback to ensure the activity is as beneficial as possible for them. ■ CONCLUSIONS To summarize and conclude, we have shown that a standard tower block set is akin to the two electrodes found in a Li-ion battery. The battery tower block sets allow us to show the intercalation chemistry of the lithium ions, through the removal of the blocks from the oxide electrode to the graphite electrode upon charging, with the reverse process occurring on discharging. A range of other battery chemistry effects can be considered by comparing the rate of charge, with an activity involving two students who remove the blocks at different rates. The inability to remove all three lithium-ion blocks from one layer also helps reinforce structural aspects of the LiCoO2 material, whereby not all of the lithium can be removed. Finally, when removing and reinserting blocks into the tower sets, the longitudinal blocks of the purple/red cobalt oxide layers or the gray graphite layers are gradually knocked off center. This effect can help to explain the degradation effects Li-ion batteries can experience from repeated cycling.
A survey was undertaken, and n = 81 responses were collected. Responses came from university students, a specialist adult group, and secondary school children, pre- and post-demonstration. Overall, the battery tower block game received positive feedback for its use in reinforcing battery education. Furthermore, we believe that additional activities can also be developed using this resource, including multiple sets to help explain how the battery management system of a full EV (electric vehicle) pack operates, and this is the subject of our future development in this area. List of items and instructions to reproduce the battery tower block game sets, with the associated components of the battery at the correct dimensions, which can be printed directly onto the recommended sticker paper and affixed to the tower blocks, and additional results from the surveys (PDF, DOCX) |
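The tower block mechanics described above also lend themselves to a toy simulation. The sketch below is purely illustrative — the three-lithium-per-layer limit, the 15% per-cycle misalignment chance, and the collapse rule are invented stand-ins for the physical demonstration, not measured battery behaviour:

```python
import random

def cycle_battery(li_per_layer=3, cycles=20, blocks_per_move=1):
    """Toy model of the battery tower block demo: lithium 'blocks' shuttle
    between the oxide and graphite electrodes on each charge/discharge
    cycle. Pulling a whole layer's lithium out at once (fast charging)
    collapses the tower; every completed cycle risks knocking a layer
    off centre, mimicking degradation."""
    misaligned_layers = 0
    for _ in range(cycles):
        # Fast charge: removing all the lithium from one layer at once
        # is what topples the tower in the classroom demonstration.
        if blocks_per_move >= li_per_layer:
            return "collapsed", misaligned_layers
        # Each slow cycle still nudges the structure out of alignment.
        if random.random() < 0.15:
            misaligned_layers += 1
    return "intact", misaligned_layers

random.seed(0)
print(cycle_battery(blocks_per_move=3))  # ('collapsed', 0)
print(cycle_battery(blocks_per_move=1))  # intact, with some misaligned layers
```

Run over many cycles, the misaligned-layer count grows, the same intuition as a phone battery worsening with repeated charging.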
Hypertension and cardiovascular disease endpoints by ethnic group: the promise of data linkage
Hypertension is the most important risk factor for cardiovascular diseases (CVD), accounting for approximately 45% of global CVD morbidity and mortality.1 Evidence suggests striking differences in blood pressure (BP) and hypertension prevalence between ethnic groups. West African descent adults living in Europe and North America, whether they come directly from Africa or indirectly from the Caribbean, generally have higher BP levels and a higher prevalence of hypertension than European descent populations (henceforth, white individuals), with this being seen at all ages in North America and only from adulthood in the UK.2,3 Chinese-origin people also have slightly higher BP and prevalence of hypertension than white individuals.4,5 The evidence is mixed when it comes to the South-Asian descent populations (ie, Indian, Pakistani, Bangladeshi and Sri Lankan people). In a systematic review in the UK, BP levels among South-Asian individuals were generally similar to those of the UK general population, but there were stark differences among the South-Asian subgroups, with slightly higher BP in Indian individuals, slightly lower BP in Pakistani individuals, and much lower BP in Bangladeshi individuals.6 Studies in The Netherlands7 and Canada,5,8 however, show a higher hypertension prevalence in South-Asian than in white individuals. In the Ontario Health Survey, the age-standardised hypertension prevalence among South-Asian individuals was 30.1% compared with 20.7% among white Canadian people.8 South-Asian individuals were still more likely than white Canadian individuals to have hypertension even after adjustment for age, sex and body mass index. While hypertension remains the most important risk factor for CVD, its contribution to the ethnic differences in CVD outcomes is still sometimes puzzling.
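The "age-standardised" prevalences quoted above come from direct standardisation: each group's age-band rates are weighted by a common reference age structure so that groups with different age profiles can be compared fairly. A minimal sketch, with invented rates and weights (not the Ontario Health Survey data):

```python
def age_standardised_prevalence(band_rates, reference_weights):
    """Direct standardisation: weight each age band's crude prevalence
    by a shared reference population's age distribution."""
    assert abs(sum(reference_weights) - 1.0) < 1e-9
    return sum(r * w for r, w in zip(band_rates, reference_weights))

# Hypothetical crude prevalences for age bands 20-39, 40-59, 60+.
group_a = [0.10, 0.35, 0.60]
group_b = [0.05, 0.25, 0.50]
weights = [0.40, 0.35, 0.25]  # shared reference age structure

print(round(age_standardised_prevalence(group_a, weights), 4))  # 0.3125
print(round(age_standardised_prevalence(group_b, weights), 4))  # 0.2325
```

Because both figures use the same weights, the gap between them cannot be explained by one group simply being older than the other.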
In the UK, although BP levels are similar or lower in the South-Asian population relative to the general population, they have a higher mortality from stroke and |
Hunter Shinkaruk believes he made a point prior to arriving at Team Canada's selection camp, even though the ethos of the whole exercise is that everyone is starting at zero.
Hockey Canada cutting down to a 25-player camp has reduced a lot of uncertainty about who will be around on Boxing Day when the country's obsessed-over team of teenagers opens the IIHF world U20 championship in Malmo, Sweden. There is some mystery about where Shinkaruk fits. The Medicine Hat Tigers captain has been a proven scorer since he bagged 49 goals during his sophomore season in the Western Hockey League, and he also lasted until the final cut with the Vancouver Canucks this fall.
The more complicated part of the narrative is that the 19-year-old has sustained hip and shoulder injuries this fall. Plus there was the juxtaposition at the NHL draft. Shinkaruk was one of the prospects whom the NHL had doing the media whirl, a treat usually saved for surefire top-10 picks, yet he stayed on the board until Vancouver took him at No. 24.
In any event, after being a late cut from the ill-fated 2013 squad, Shinkaruk at least has a clean bill of health.
"Playing through injuries wasn’t fun, but it was something that I felt like I wanted to do because I wanted to get an invite to try to make this team," said Shinkaruk, who's only played in 18 of the Tigers' 31 games. "I took some time off to rehab my body and get it to 100 per cent. I feel good now. It’s my last kick at the can now."
During Friday's first team practice at the MasterCard Centre, Shinkaruk was at left wing on a de facto top line with two fellow Canadian NHL team prospects, Winnipeg Jets-drafted centre Nic Petan and Ottawa Senators first-rounder Curtis Lazar. A keen observer might have wondered if that could also be a comment on the absence of Quebec League dynamo Jonathan Drouin. The 18-year-old Drouin, who is recovering from a mild brain injury caused by a hit from behind on Nov. 29, played left wing during his first two junior seasons before shifting to centre this fall after the Tampa Bay Lightning returned him to the Halifax Mooseheads.
One refrain with coach Brent Sutter is that the final roster could include as many as nine natural centres. The competition for the scoring-line spots is fierce. That puts a spotlight on Shinkaruk, to see whether he can recapture the peak potential he's shown when healthy.
"I felt great when I was in Vancouver, I felt great when I came back in October," he said. "My first six games in Medicine Hat I had 10 points. I got hit awkwardly in a game against Portland. I have played through that until now. Then I got hit from behind in one of our other games.
"Now it probably feels the best it has since a few games after Vancouver."
Sutter has no set deadline on cuts. So everything is kind of vague. The coach did talk about how he would like to fill the 13th-forward slot, but that's not necessarily a solid-as-oak commitment not to take eight defencemen on the final roster.
"It's nice to have a guy that’s just not a natural one-position type guy," he said. "He can play in different situations, He can fit into certain roles if you get into injury troubles. Be a good penalty killer. Get some minutes that he may not get 5-on-5. Usually you don’t use a guy like that on the power play. As you can tell, it’s a pretty unique group of 25 guys."
It's open to question how that might apply to an offensive-oriented talent such as Shinkaruk or QMJHL scoring leader Anthony Mantha, who has 73 points in 35 games. All it can really mean for Shinkaruk is taking nothing for granted.
"We have basically two days to solidify our spot or bump someone out if that’s how the coaches are looking at it," he said.
Neate Sager is a writer for Yahoo! Canada Sports. Follow him on Twitter @neatebuzzthenet. Please address any questions, comments or concerns to [email protected]. |
// Adds declarations for the needed helper functions from the runtime wrappers.
// The types in comments give the actual types expected/returned but the API
// uses void pointers. This is fine as they have the same linkage in C.
void GpuLaunchFuncToGpuRuntimeCallsPass::declareGpuRuntimeFunctions(
Location loc) {
ModuleOp module = getOperation();
OpBuilder builder(module.getBody()->getTerminator());
if (!module.lookupSymbol(kGpuModuleLoadName)) {
builder.create<LLVM::LLVMFuncOp>(
loc, kGpuModuleLoadName,
LLVM::LLVMType::getFunctionTy(
getGpuRuntimeResultType(),
{
getPointerPointerType(),
getPointerType()
},
false));
}
if (!module.lookupSymbol(kGpuModuleGetFunctionName)) {
builder.create<LLVM::LLVMFuncOp>(
loc, kGpuModuleGetFunctionName,
LLVM::LLVMType::getFunctionTy(
getGpuRuntimeResultType(),
{
getPointerPointerType(),
getPointerType(),
getPointerType()
},
false));
}
if (!module.lookupSymbol(kGpuLaunchKernelName)) {
builder.create<LLVM::LLVMFuncOp>(
loc, kGpuLaunchKernelName,
LLVM::LLVMType::getFunctionTy(
getGpuRuntimeResultType(),
{
getPointerType(),
getIntPtrType(),
getIntPtrType(),
getIntPtrType(),
getIntPtrType(),
getIntPtrType(),
getIntPtrType(),
getInt32Type(),
getPointerType(),
getPointerPointerType(),
getPointerPointerType()
},
false));
}
if (!module.lookupSymbol(kGpuGetStreamHelperName)) {
builder.create<LLVM::LLVMFuncOp>(
loc, kGpuGetStreamHelperName,
LLVM::LLVMType::getFunctionTy(getPointerType(), false));
}
if (!module.lookupSymbol(kGpuStreamSynchronizeName)) {
builder.create<LLVM::LLVMFuncOp>(
loc, kGpuStreamSynchronizeName,
LLVM::LLVMType::getFunctionTy(getGpuRuntimeResultType(),
getPointerType(),
false));
}
if (!module.lookupSymbol(kGpuMemHostRegisterName)) {
builder.create<LLVM::LLVMFuncOp>(
loc, kGpuMemHostRegisterName,
LLVM::LLVMType::getFunctionTy(getVoidType(),
{
getPointerType(),
getInt64Type()
},
false));
}
} |
// jmix-security/security/src/main/java/io/jmix/security/constraint/EntityAttributeConstraint.java
/*
* Copyright 2020 Haulmont.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package io.jmix.security.constraint;
import io.jmix.core.accesscontext.EntityAttributeContext;
import io.jmix.core.constraint.EntityOperationConstraint;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.config.BeanDefinition;
import org.springframework.context.annotation.Scope;
import org.springframework.stereotype.Component;
@Component("sec_EntityAttributeConstraint")
@Scope(BeanDefinition.SCOPE_PROTOTYPE)
public class EntityAttributeConstraint implements EntityOperationConstraint<EntityAttributeContext> {
protected PolicyStore policyStore;
protected SecureOperations secureOperations;
@Autowired
public void setPolicyStore(PolicyStore policyStore) {
this.policyStore = policyStore;
}
@Autowired
public void setSecureOperations(SecureOperations secureOperations) {
this.secureOperations = secureOperations;
}
@Override
public Class<EntityAttributeContext> getContextType() {
return EntityAttributeContext.class;
}
@Override
public void applyTo(EntityAttributeContext context) {
if (!secureOperations.isEntityAttrUpdatePermitted(context.getPropertyPath(), policyStore)) {
context.setModifyDenied();
}
if (!secureOperations.isEntityAttrReadPermitted(context.getPropertyPath(), policyStore)) {
context.setViewDenied();
}
}
}
|
/**
* @file workspace tree view
* @author weijiaxun <<EMAIL>>
*/
import React, {useState, useEffect, useMemo} from 'react';
import ReactTreeView, {TreeItem} from 'react-vsc-treeview';
import * as vscode from 'vscode';
import {counterIncreaseEvent, counterDecreaseEvent} from '../events/treeview';
import {getUser, User} from './api';
const Counter = () => {
const [count, setCount] = useState(0);
const contextValue = useMemo(
() => {
const availableCommands: string[] = [];
availableCommands.push(
'counterIncrease',
'counterDecrease'
);
return availableCommands.join('.');
},
[]
);
useEffect(
() => {
const disposable = counterIncreaseEvent.event(() => {
setCount(count + 1);
});
return () => {
disposable.dispose();
};
},
[count, setCount]
);
useEffect(
() => {
const disposable = counterDecreaseEvent.event(() => {
setCount(count - 1);
});
return () => {
disposable.dispose();
};
},
[count, setCount]
);
return (
<TreeItem
label={`Count: ${count}`}
contextValue={contextValue}
/>
);
};
const App = () => {
const [user, setUser] = useState<User | null>(null);
useEffect(() => {
getUser().then((res) => {
setUser(res);
});
}, []);
if (user === null) {
return <TreeItem label="Loading..." />;
}
return (
<>
<Counter />
<TreeItem
label={user.username}
iconPath={vscode.ThemeIcon.Folder}
/>
<TreeItem
label={user.email}
iconPath={vscode.ThemeIcon.Folder}
/>
<TreeItem
label="Operation"
iconPath={vscode.ThemeIcon.Folder}
>
{
user.collection.map((item) => (
<TreeItem key={item} label={item} />
))
}
</TreeItem>
</>
);
};
const treeview = ReactTreeView.render(<App />, 'baidu.tree.example');
export default treeview;
|
"""
Terrain Tiles
Hosted on AWS S3
https://registry.opendata.aws/terrain-tiles/
Description
Gridded elevation tiles
Resource type
S3 Bucket
Amazon Resource Name (ARN)
arn:aws:s3:::elevation-tiles-prod
AWS Region
us-east-1
Documentation: https://mapzen.com/documentation/terrain-tiles/
Attribution
-----------
- Some source adapted from https://github.com/tilezen/joerd
- See required attribution when using terrain tiles:
https://github.com/tilezen/joerd/blob/master/docs/attribution.md
Attributes
----------
TILE_FORMATS : list
list of support tile formats
Notes
-----
See https://github.com/racemap/elevation-service/blob/master/tileset.js
for example skadi implementation
"""
import os
import re
from itertools import product
import logging
from io import BytesIO
import traitlets as tl
import numpy as np
import podpac
from podpac.core.data.rasterio_source import RasterioRaw
from podpac.compositor import TileCompositorRaw
from podpac.interpolators import InterpolationMixin
from podpac.interpolators import RasterioInterpolator, ScipyGrid, ScipyPoint
from podpac.utils import cached_property
from podpac.authentication import S3Mixin
####
# private module attributes
####
# create log for module
_logger = logging.getLogger(__name__)
# Approximate tile extent per zoom level; each entry halves the previous one.
ZOOM_SIZES = [
78271.5169531233,
39135.75848200978,
19567.87924100587,
9783.939620502935,
4891.969810250487,
2445.9849051252454,
1222.9924525636013,
611.4962262818025,
305.7481131408976,
152.8740565714275,
76.43702828571375,
38.218514142856876,
19.109257072407146,
9.554628536203573,
4.777314268103609,
]
class TerrainTilesSourceRaw(RasterioRaw):
"""DataSource to handle individual TerrainTiles raster files
Parameters
----------
source : str
Path to the sourcefile on S3
Attributes
----------
dataset : :class:`rasterio.io.DatasetReader`
rasterio dataset
"""
anon = tl.Bool(True)
@tl.default("crs")
def _default_crs(self):
if "geotiff" in self.source:
return "EPSG:3857"
if "terrarium" in self.source:
return "EPSG:3857"
if "normal" in self.source:
return "EPSG:3857"
def download(self, path="terraintiles"):
"""
Download the TerrainTile file from S3 to a local file.
This is a convenience method for users and not used by PODPAC machinery.
Parameters
----------
path : str
Subdirectory to put files. Defaults to 'terraintiles'.
Within this directory, the tile files will retain the same directory structure as on S3.
"""
filename = os.path.split(self.source)[1] # get filename off of source
joined_path = os.path.join(path, os.path.split(self.source)[0].replace("s3://", "")) # path to file
filepath = os.path.abspath(os.path.join(joined_path, filename))
# make the directory if it hasn't been made already
if not os.path.exists(joined_path):
os.makedirs(joined_path)
# download the file
_logger.debug("Downloading terrain tile {} to filepath: {}".format(self.source, filepath))
self.s3.get(self.source, filepath)
# this is a little crazy, but I get floating point issues with indexing if i don't round to 6 decimal digits
def get_coordinates(self):
coordinates = super(TerrainTilesSourceRaw, self).get_coordinates()
for dim in coordinates:
coordinates[dim] = np.round(coordinates[dim].coordinates, 6)
return coordinates
class TerrainTilesComposite(TileCompositorRaw):
"""Terrain Tiles gridded elevation tiles data library
Hosted on AWS S3
https://registry.opendata.aws/terrain-tiles/
Description
Gridded elevation tiles
Resource type
S3 Bucket
Amazon Resource Name (ARN)
arn:aws:s3:::elevation-tiles-prod
AWS Region
us-east-1
Documentation: https://mapzen.com/documentation/terrain-tiles/
Parameters
----------
zoom : int
Zoom level of tiles, in [0, ..., 14]. Defaults to -1, which automatically determines the zoom level.
WARNING: When automatic zoom is used, evaluating points (stacked lat,lon) uses the maximum zoom level (level 14)
tile_format : str
One of ['geotiff', 'terrarium', 'normal']. Defaults to 'geotiff'
PODPAC node can only evaluate 'geotiff' formats.
Other tile_formats can be specified for :meth:`download`
No support for 'skadi' formats at this time.
bucket : str
Bucket of the terrain tiles.
Defaults to 'elevation-tiles-prod'
"""
# parameters
zoom = tl.Int(default_value=-1).tag(attr=True)
tile_format = tl.Enum(["geotiff", "terrarium", "normal"], default_value="geotiff").tag(attr=True)
bucket = tl.Unicode(default_value="elevation-tiles-prod").tag(attr=True)
sources = [] # these are loaded as needed
urls = tl.List(trait=tl.Unicode()).tag(attr=True) # Maps directly to sources
dims = ["lat", "lon"]
anon = tl.Bool(True)
def _zoom(self, coordinates):
if self.zoom >= 0:
return self.zoom
crds = coordinates.transform("EPSG:3857")
if coordinates.is_stacked("lat") or coordinates.is_stacked("lon"):
return len(ZOOM_SIZES) - 1
steps = []
for crd in crds.values():
if crd.name not in ["lat", "lon"]:
continue
if crd.size == 1:
continue
if isinstance(crd, podpac.coordinates.UniformCoordinates1d):
steps.append(np.abs(crd.step))
elif isinstance(crd, podpac.coordinates.ArrayCoordinates1d):
steps.append(np.abs(np.diff(crd.coordinates)).min())
else:
continue
if not steps:
return len(ZOOM_SIZES) - 1
step = min(steps) / 2
zoom = 0
for z, zs in enumerate(ZOOM_SIZES):
zoom = z
if zs < step:
break
return zoom
def select_sources(self, coordinates, _selector=None):
# get all the tile sources for the requested zoom level and coordinates
sources = get_tile_urls(self.tile_format, self._zoom(coordinates), coordinates)
urls = ["s3://{}/{}".format(self.bucket, s) for s in sources]
# create TerrainTilesSourceRaw classes for each url source
self.sources = self._create_composite(urls)
if self.trait_is_defined("interpolation") and self.interpolation is not None:
for s in self.sources:
if s.has_trait("interpolation"):
s.set_trait("interpolation", self.interpolation)
return self.sources
def find_coordinates(self):
return [podpac.coordinates.union([source.coordinates for source in self.sources])]
def download(self, path="terraintiles"):
"""
Download active terrain tile source files to local directory
Parameters
----------
path : str
Subdirectory to put files. Defaults to 'terraintiles'.
Within this directory, the tile files will retain the same directory structure as on S3.
"""
try:
for source in self.sources[0].sources:
source.download(path)
except tl.TraitError as e:
raise ValueError("No terrain tile sources selected. Evaluate node at coordinates to select sources.") from e
def _create_composite(self, urls):
# Share the s3 connection
sample_source = TerrainTilesSourceRaw(
source=urls[0],
cache_ctrl=self.cache_ctrl,
force_eval=self.force_eval,
cache_output=self.cache_output,
cache_dataset=True,
)
return [
TerrainTilesSourceRaw(
source=url,
s3=sample_source.s3,
cache_ctrl=self.cache_ctrl,
force_eval=self.force_eval,
cache_output=self.cache_output,
cache_dataset=True,
)
for url in urls
]
class TerrainTiles(InterpolationMixin, TerrainTilesComposite):
"""Terrain Tiles gridded elevation tiles data library
Hosted on AWS S3
https://registry.opendata.aws/terrain-tiles/
Description
Gridded elevation tiles
Resource type
S3 Bucket
Amazon Resource Name (ARN)
arn:aws:s3:::elevation-tiles-prod
AWS Region
us-east-1
Documentation: https://mapzen.com/documentation/terrain-tiles/
Parameters
----------
zoom : int
Zoom level of tiles. Defaults to -1 (automatic zoom selection).
tile_format : str
One of ['geotiff', 'terrarium', 'normal']. Defaults to 'geotiff'
PODPAC node can only evaluate 'geotiff' formats.
Other tile_formats can be specified for :meth:`download`
No support for 'skadi' formats at this time.
bucket : str
Bucket of the terrain tiles.
Defaults to 'elevation-tiles-prod'
"""
pass
############
# Utilities
############
def get_tile_urls(tile_format, zoom, coordinates=None):
"""Get tile urls for a specific zoom level and geospatial coordinates
Parameters
----------
tile_format : str
format of the tile to get
zoom : int
zoom level
coordinates : :class:`podpac.Coordinates`, optional
only return tiles within coordinates
Returns
-------
list of str
list of tile urls
"""
# get all the tile definitions for the requested zoom level
tiles = _get_tile_tuples(zoom, coordinates)
# get source urls
return [_tile_url(tile_format, x, y, z) for (x, y, z) in tiles]
############
# Private Utilities
############
def _get_tile_tuples(zoom, coordinates=None):
"""Query for tiles within podpac coordinates
This method allows you to get the available tiles in a given spatial area.
This will work for all :attr:`TILE_FORMAT` types
Parameters
----------
coordinates : :class:`podpac.coordinates.Coordinates`
Find available tiles within coordinates
zoom : int, optional
zoom level
Raises
------
TypeError
If the supplied coordinates do not have both lat and lon dimensions.
Returns
-------
list of tuple
list of tile tuples (x, y, zoom) for zoom level and coordinates
"""
# if no coordinates are supplied, get all tiles for zoom level
if coordinates is None:
# get whole world
tiles = _get_tiles_grid([-90, 90], [-180, 180], zoom)
# down select tiles based on coordinates
else:
_logger.debug("Getting tiles for coordinates {}".format(coordinates))
if "lat" not in coordinates.udims or "lon" not in coordinates.udims:
raise TypeError("input coordinates must have lat and lon dimensions to get tiles")
# transform to WGS84 (epsg:4326) to use the mapzen example for transforming coordinates to tilespace
# it doesn't seem to conform to standard google tile indexing
c = coordinates.transform("epsg:4326")
# point coordinates
if "lat_lon" in c.dims or "lon_lat" in c.dims:
lat_lon = zip(c["lat"].coordinates, c["lon"].coordinates)
tiles = []
for (lat, lon) in lat_lon:
tile = _get_tiles_point(lat, lon, zoom)
if tile not in tiles:
tiles.append(tile)
# gridded coordinates
else:
lat_bounds = c["lat"].bounds
lon_bounds = c["lon"].bounds
tiles = _get_tiles_grid(lat_bounds, lon_bounds, zoom)
return tiles
def _tile_url(tile_format, x, y, zoom):
"""Build S3 URL prefix
The S3 bucket is organized {tile_format}/{z}/{x}/{y}.tif
Parameters
----------
tile_format : str
One of 'terrarium', 'normal', 'geotiff'
zoom : int
zoom level
x : int
x tilespace coordinate
y : int
y tilespace coordinate
Returns
-------
str
Bucket prefix
Raises
------
KeyError
If tile_format is not one of 'geotiff', 'normal', 'terrarium'.
"""
tile_url = "{tile_format}/{zoom}/{x}/{y}.{ext}"
ext = {"geotiff": "tif", "normal": "png", "terrarium": "png"}
return tile_url.format(tile_format=tile_format, zoom=zoom, x=x, y=y, ext=ext[tile_format])
def _get_tiles_grid(lat_bounds, lon_bounds, zoom):
"""
Convert geographic bounds into a list of tile coordinates at given zoom.
Adapted from https://github.com/tilezen/joerd
Parameters
----------
lat_bounds : :class:`np.array` of float
[min, max] bounds from lat (y) coordinates
lon_bounds : :class:`np.array` of float
[min, max] bounds from lon (x) coordinates
zoom : int
zoom level
Returns
-------
list of tuple
list of tuples (x, y, zoom) describing the tiles to cover coordinates
"""
# convert to mercator
xm_min, ym_min = _mercator(lat_bounds[1], lon_bounds[0])
xm_max, ym_max = _mercator(lat_bounds[0], lon_bounds[1])
# convert to tile-space bounding box
xmin, ymin = _mercator_to_tilespace(xm_min, ym_min, zoom)
xmax, ymax = _mercator_to_tilespace(xm_max, ym_max, zoom)
# generate a list of tiles
xs = range(xmin, xmax + 1)
ys = range(ymin, ymax + 1)
tiles = [(x, y, zoom) for (y, x) in product(ys, xs)]
return tiles
def _get_tiles_point(lat, lon, zoom):
"""Get tiles at a single point and zoom level
Parameters
----------
lat : float
latitude
lon : float
longitude
zoom : int
zoom level
Returns
-------
tuple
(x, y, zoom) tile url
"""
xm, ym = _mercator(lat, lon)
x, y = _mercator_to_tilespace(xm, ym, zoom)
return x, y, zoom
def _mercator(lat, lon):
"""Convert latitude, longitude to x, y mercator coordinates
Adapted from https://github.com/tilezen/joerd
Parameters
----------
lat : float
latitude
lon : float
longitude
Returns
-------
tuple
(x, y) float mercator coordinates
"""
# convert to radians
x1, y1 = lon * np.pi / 180, lat * np.pi / 180
# project to mercator
x, y = x1, np.log(np.tan(0.25 * np.pi + 0.5 * y1) + 1e-32)
return x, y
def _mercator_to_tilespace(xm, ym, zoom):
"""Convert mercator to tilespace coordinates
Parameters
----------
xm : float
mercator x coordinate
ym : float
mercator y coordinate
zoom : int
zoom level
Returns
-------
tuple
(x, y) int tile coordinates
"""
tiles = 2 ** zoom
diameter = 2 * np.pi
x = int(tiles * (xm + np.pi) / diameter)
y = int(tiles * (np.pi - ym) / diameter)
return x, y
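A quick sanity check of the two projection helpers above, rewritten with the stdlib `math` module so the example is self-contained (the math is the same as `_mercator` and `_mercator_to_tilespace`):

```python
import math

def mercator(lat, lon):
    # degrees -> web-mercator radians
    x1, y1 = lon * math.pi / 180, lat * math.pi / 180
    return x1, math.log(math.tan(0.25 * math.pi + 0.5 * y1) + 1e-32)

def mercator_to_tilespace(xm, ym, zoom):
    # 2**zoom tiles span the 2*pi-wide mercator square; tile y grows downward
    tiles = 2 ** zoom
    diameter = 2 * math.pi
    return int(tiles * (xm + math.pi) / diameter), int(tiles * (math.pi - ym) / diameter)

# The origin (lat 0, lon 0) sits at the center of the grid, so at zoom 1
# (a 2x2 grid) it falls in tile (1, 1), and at zoom 0 in the single tile (0, 0).
print(mercator_to_tilespace(*mercator(0, 0), 1))  # (1, 1)
print(mercator_to_tilespace(*mercator(0, 0), 0))  # (0, 0)
```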
if __name__ == "__main__":
from podpac import Coordinates, clinspace
c = Coordinates([clinspace(40, 43, 1000), clinspace(-76, -72, 1000)], dims=["lat", "lon"])
c2 = Coordinates(
[clinspace(40, 43, 1000), clinspace(-76, -72, 1000), ["2018-01-01", "2018-01-02"]], dims=["lat", "lon", "time"]
)
print("TerrainTiles")
node = TerrainTiles(tile_format="geotiff", zoom=8)
output = node.eval(c)
print(output)
output = node.eval(c2)
print(output)
print("TerrainTiles cached")
node = TerrainTiles(tile_format="geotiff", zoom=8, cache_ctrl=["ram", "disk"])
output = node.eval(c)
print(output)
# tile urls
print("get tile urls")
print(np.array(get_tile_urls("geotiff", 1)))
print(np.array(get_tile_urls("geotiff", 9, coordinates=c)))
print("done")
package com.github.alex1304.ultimategdbot.modules.reply;
import java.util.Timer;
import java.util.TimerTask;
import java.util.function.Predicate;
import com.github.alex1304.ultimategdbot.utils.Procedure;
import sx.blah.discord.handle.obj.IMessage;
import sx.blah.discord.handle.obj.IUser;
import sx.blah.discord.util.DiscordException;
/**
* Allows users to reply to a message sent by the bot. A reply is specific to a channel and is designed to close
* after a certain time of inactivity
*
* @author Alex1304
*/
public class Reply {
public static final long DEFAULT_TIMEOUT_MILLIS = 600000;
private IMessage initialMessage;
private IUser user;
private Predicate<IMessage> replyHandler;
private long timeout;
private Procedure onSuccess, onFailure, onCancel;
private Timer timer;
/**
* @param initialMessage
* - The message initially sent by the bot to ask the user to
* reply
* @param user
* - the user who is supposed to reply
* @param replyHandler
* - Executes what should happen when the user replies. The
* predicate should return false if the bot received an
* unexpected reply, true otherwise
* @param timeout
* - delay given to the user to reply before timeout, in milliseconds
*/
public Reply(IMessage initialMessage, IUser user, Predicate<IMessage> replyHandler, long timeout) {
this.initialMessage = initialMessage;
this.user = user;
this.replyHandler = replyHandler;
this.timeout = timeout;
this.onSuccess = () -> {};
this.onFailure = () -> {};
this.onCancel = () -> {};
this.timer = null;
}
/**
* @param initialMessage
* - The message initially sent by the bot to ask the user to
* reply
* @param user
* - the user who is supposed to reply
* @param replyHandler
* - Executes what should happen when the user replies. The
* predicate should return false if the bot received an
* unexpected reply, true otherwise
*/
public Reply(IMessage initialMessage, IUser user, Predicate<IMessage> replyHandler) {
this(initialMessage, user, replyHandler, DEFAULT_TIMEOUT_MILLIS);
}
/**
* Starts the timeout, in other words it schedules the deletion of the
* initial message. The timer is cancelled when either {@link Reply#cancel()}
* or {@link Reply#handle(IMessage)} is called
*/
public synchronized void startTimeout() {
if (timer != null)
return;
this.timer = new Timer();
timer.schedule(new TimerTask() {
@Override
public void run() {
Reply.this.cancel();
}
}, timeout);
}
/**
* Cancels the reply. Equivalent to {@code cancel(true)}
*/
public synchronized void cancel() {
this.cancel(true);
}
/**
* Cancels the reply.
*
* @param runOnCancel - Whether to run the onCancel procedure
*/
public synchronized void cancel(boolean runOnCancel) {
if (timer == null)
return;
timer.cancel();
this.timer = null;
if (runOnCancel)
onCancel.run();
}
/**
* Handles the reply given by the user. Executes onSuccess and onFailure accordingly.
* The reply is no longer open after it has been handled.
*
* @param message
*/
public synchronized void handle(IMessage message) {
this.cancel(false);
if (replyHandler.test(message))
onSuccess.run();
else
onFailure.run();
}
/**
* Deletes the initial message. Does nothing if the message is already deleted
* or if the deletion fails
*/
public void deleteInitialMessage() {
try {
if (!initialMessage.isDeleted())
initialMessage.delete();
} catch (DiscordException e) {
return;
}
}
private Procedure emptyProcedureIfNull(Procedure p) {
return p == null ? () -> {} : p;
}
/**
* Gets the initialMessage
*
* @return IMessage
*/
public IMessage getInitialMessage() {
return initialMessage;
}
/**
* Sets the initialMessage
*
* @param initialMessage - IMessage
*/
public void setInitialMessage(IMessage initialMessage) {
this.initialMessage = initialMessage;
}
/**
* Gets the user
*
* @return IUser
*/
public IUser getUser() {
return user;
}
/**
* Sets the user
*
* @param user - IUser
*/
public void setUser(IUser user) {
this.user = user;
}
/**
* Gets the replyHandler
*
* @return Predicate<IMessage>
*/
public Predicate<IMessage> getReplyHandler() {
return replyHandler;
}
/**
* Sets the replyHandler
*
* @param replyHandler - Predicate<IMessage>
*/
public void setReplyHandler(Predicate<IMessage> replyHandler) {
this.replyHandler = replyHandler;
}
/**
* Gets the timeout
*
* @return long
*/
public long getTimeout() {
return timeout;
}
/**
* Sets the timeout
*
* @param timeout - long
*/
public void setTimeout(long timeout) {
this.timeout = timeout;
}
/**
* Gets the onSuccess
*
* @return Procedure
*/
public Procedure getOnSuccess() {
return onSuccess;
}
/**
* Sets the onSuccess. Won't do anything if the timeout is running
*
* @param onSuccess - Procedure
*/
public void setOnSuccess(Procedure onSuccess) {
if (timer == null)
this.onSuccess = emptyProcedureIfNull(onSuccess);
}
/**
* Gets the onFailure
*
* @return Procedure
*/
public Procedure getOnFailure() {
return onFailure;
}
/**
* Sets the onFailure. Won't do anything if the timeout is running
*
* @param onFailure - Procedure
*/
public void setOnFailure(Procedure onFailure) {
if (timer == null)
this.onFailure = emptyProcedureIfNull(onFailure);
}
/**
* Gets the onCancel
*
* @return Procedure
*/
public Procedure getOnCancel() {
return onCancel;
}
/**
* Sets the onCancel. Won't do anything if the timeout is running
*
* @param onCancel - Procedure
*/
public void setOnCancel(Procedure onCancel) {
if (timer == null)
this.onCancel = emptyProcedureIfNull(onCancel);
}
}
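The startTimeout/cancel/handle bookkeeping above is a reusable pattern: a one-shot reply window guarded by an inactivity timer. A hedged Python analogue (the class and callback names are illustrative, not part of the bot):

```python
import threading

class ReplyWindow:
    """One-shot reply window: a reply handler plus an inactivity timeout."""

    def __init__(self, handler, timeout, on_success=None, on_failure=None, on_cancel=None):
        self.handler = handler          # returns True for an expected reply
        self.timeout = timeout          # seconds of allowed inactivity
        self.on_success = on_success or (lambda: None)
        self.on_failure = on_failure or (lambda: None)
        self.on_cancel = on_cancel or (lambda: None)
        self._timer = None
        self._lock = threading.Lock()

    def start_timeout(self):
        with self._lock:
            if self._timer is not None:   # already running
                return
            self._timer = threading.Timer(self.timeout, self.cancel)
            self._timer.start()

    def cancel(self, run_on_cancel=True):
        with self._lock:
            if self._timer is None:
                return
            self._timer.cancel()
            self._timer = None
        if run_on_cancel:
            self.on_cancel()

    def handle(self, message):
        self.cancel(run_on_cancel=False)  # stop the timer without firing on_cancel
        (self.on_success if self.handler(message) else self.on_failure)()

calls = []
reply = ReplyWindow(lambda m: m == "yes", 5.0, on_success=lambda: calls.append("ok"))
reply.start_timeout()
reply.handle("yes")    # handled well before the 5 s timeout fires
print(calls)           # ['ok']
```

As in the Java version, handling a reply cancels the pending timer without running the cancel callback, so exactly one of the three outcomes (success, failure, cancel) fires.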
/**
* Checks if the store is open and throws an exception otherwise.
*/
private void checkOpen() {
if (!opened) {
throw new StoreClosed("The store is closed");
}
}
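The checkOpen method above is the classic fail-fast guard clause; a hypothetical Python equivalent (the Store/StoreClosed names mirror the snippet, everything else is illustrative):

```python
class StoreClosed(RuntimeError):
    """Raised when an operation is attempted on a closed store."""

class Store:
    def __init__(self):
        self.opened = True

    def close(self):
        self.opened = False

    def _check_open(self):
        # fail fast before any state is touched
        if not self.opened:
            raise StoreClosed("The store is closed")

    def get(self, key):
        self._check_open()
        return None  # actual lookup elided
```

Every public operation calls the guard first, so a closed store fails with one consistent error instead of corrupting state.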
/*
* The contents of this file are subject to the license and copyright
* detailed in the LICENSE and NOTICE files at the root of the source
* tree and available online at
*
* http://www.dspace.org/license/
*/
package org.dspace.app.iiif.model.generator;
import java.util.ArrayList;
import java.util.List;
import javax.validation.constraints.NotNull;
import de.digitalcollections.iiif.model.sharedcanvas.Canvas;
import de.digitalcollections.iiif.model.sharedcanvas.Range;
import de.digitalcollections.iiif.model.sharedcanvas.Resource;
import org.dspace.app.iiif.service.RangeService;
/**
* This generator wraps the domain model for IIIF {@code ranges}.
*
* In Presentation API version 2.1.1, adding a range to the manifest allows the client to display a structured
* hierarchy to enable the user to navigate within the object without merely stepping through the current sequence.
*
* This is used to populate the "structures" element of the Manifest. The structure is derived from the iiif.toc
* metadata and the ordered sequence of bitstreams (canvases)
*
* @author <NAME> <EMAIL>
* @author <NAME> (andrea.bollini at 4science.it)
*/
public class RangeGenerator implements IIIFResource {
private String identifier;
private String label;
private final List<Canvas> canvasList = new ArrayList<>();
private final List<Range> rangesList = new ArrayList<>();
private final RangeService rangeService;
/**
* The {@code RangeService} is used for defining hierarchical sub ranges.
* @param rangeService range service
*/
public RangeGenerator(RangeService rangeService) {
this.rangeService = rangeService;
}
/**
* Sets mandatory range identifier.
* @param identifier range identifier
*/
public RangeGenerator setIdentifier(@NotNull String identifier) {
if (identifier.isEmpty()) {
throw new RuntimeException("Invalid range identifier. Cannot be an empty string.");
}
this.identifier = identifier;
return this;
}
public String getIdentifier() {
return identifier;
}
/**
* Sets the optional range label.
* @param label range label
*/
public RangeGenerator setLabel(String label) {
this.label = label;
return this;
}
/**
* Adds canvas to range canvas list.
* @param canvas canvas generator to add
*/
public RangeGenerator addCanvas(CanvasGenerator canvas) {
canvasList.add((Canvas) canvas.generateResource());
return this;
}
/**
* Sets the range identifier and adds a sub range to the ranges list.
* @param range range generator
*/
public void addSubRange(RangeGenerator range) {
range.setIdentifier(identifier + "-" + rangesList.size());
RangeGenerator rangeReference = rangeService.getRangeReference(range);
rangesList.add((Range) rangeReference.generateResource());
}
@Override
public Resource<Range> generateResource() {
if (identifier == null) {
throw new RuntimeException("The Range resource requires an identifier.");
}
Range range;
if (label != null) {
range = new Range(identifier, label);
} else {
range = new Range(identifier);
}
for (Canvas canvas : canvasList) {
range.addCanvas(canvas);
}
for (Range rangeResource : rangesList) {
range.addRange(rangeResource);
}
return range;
}
}
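The generator's shape (canvases plus recursively numbered sub-ranges) can be sketched independently of the IIIF domain model; in this hedged Python analogue, plain dicts stand in for the Range/Canvas resources:

```python
class RangeBuilder:
    """Minimal analogue of RangeGenerator: canvases plus nested sub-ranges."""

    def __init__(self, identifier=None, label=None):
        self.identifier = identifier
        self.label = label
        self.canvases = []
        self.ranges = []

    def set_identifier(self, identifier):
        if not identifier:
            raise ValueError("Invalid range identifier. Cannot be an empty string.")
        self.identifier = identifier
        return self

    def add_canvas(self, canvas):
        self.canvases.append(canvas)
        return self

    def add_sub_range(self, sub):
        # sub-range ids derive from the parent id plus a running index,
        # mirroring addSubRange above
        sub.set_identifier("{}-{}".format(self.identifier, len(self.ranges)))
        self.ranges.append(sub)

    def build(self):
        if self.identifier is None:
            raise ValueError("The Range resource requires an identifier.")
        return {
            "id": self.identifier,
            "label": self.label,
            "canvases": list(self.canvases),
            "ranges": [r.build() for r in self.ranges],
        }

root = RangeBuilder().set_identifier("r0").add_canvas("c0")
root.add_sub_range(RangeBuilder(label="chapter 1"))
print(root.build()["ranges"][0]["id"])  # r0-0
```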
In-home intervention with families in distress: changing places to promote change. This article examines the benefits of in-home family therapy with severely distressed families through the analysis of four cases that demonstrate the creative use of this intervention with families whose children were placed in a full-time day care facility. Although the efficacy of home intervention with distressed families has been documented, the case illustrations here analyze the process more fully--the how and the why it works. The first three cases explicate the contribution of home intervention to the engagement of social worker and client. Each case highlights how home intervention enhances the therapeutic alliance by promoting change from a different starting point--the client (home as a secure base for change), the worker (viewing the client from a different perspective), and the client-worker interaction (power sharing in setting boundaries). The fourth case (in vivo narrative reconstruction) serves as a striking example of how the home--as a multisystemic, intergenerational container of the family's past, present, and future--can be enlisted as a partner in reconstructing silenced chapters of the family narrative.
def upgrade():
with op.batch_alter_table('encounter', schema=None) as batch_op:
batch_op.add_column(sa.Column('time_guid', app.extensions.GUID(), nullable=True))
batch_op.create_index(batch_op.f('ix_encounter_time_guid'), ['time_guid'], unique=False)
batch_op.create_foreign_key(batch_op.f('fk_encounter_time_guid_complex_date_time'), 'complex_date_time', ['time_guid'], ['guid'])
#pragma once
#include <array>
#include "certain/options.h"
#include "network/inet_addr.h"
#include "src/command.h"
#include "utils/array_timer.h"
#include "utils/header.h"
#include "utils/light_list.h"
#include "utils/singleton.h"
#include "utils/thread.h"
namespace certain {
struct EntryInfo;
struct EntityInfo {
uint64_t entity_id;
uint32_t acceptor_num;
uint32_t local_acceptor_id;
uint64_t max_cont_chosen_entry;
uint64_t max_catchup_entry;
uint64_t max_chosen_entry;
uint64_t max_plog_entry;
uint64_t pre_auth_entry;
uint32_t active_peer_acceptor_id;
// The newest msg received for the entity when the entity_info is loading.
std::unique_ptr<PaxosCmd> waiting_msg;
// The current client cmd.
std::unique_ptr<ClientCmd> client_cmd;
uint64_t uuid_base;
int32_t ref_count;
LIGHTLIST(EntryInfo) entry_list;
ArrayTimer<EntityInfo>::EltEntry timer_entry;
uint64_t recover_timestamp_msec = 0;
bool loading;
bool range_loading;
bool recover_pending;
};
std::string ToString(EntityInfo* entity_info);
class EntityInfoMng {
public:
EntityInfoMng(Options* options)
: options_(options), monitor_(options->monitor()) {}
~EntityInfoMng() {}
EntityInfo* FindEntityInfo(uint64_t entity_id);
EntityInfo* CreateEntityInfo(uint64_t entity_id, uint32_t acceptor_num,
uint32_t local_acceptor_id);
void DestroyEntityInfo(EntityInfo* entity_info);
EntityInfo* NextEntityInfo();
bool MakeEnoughRoom() { return true; }
private:
Options* options_;
Monitor* monitor_;
// entity_id -> entity_info
std::unordered_map<uint64_t, std::unique_ptr<EntityInfo>> entity_info_table_;
decltype(entity_info_table_)::iterator table_iter_;
};
class EntityInfoGroup : public Singleton<EntityInfoGroup> {
public:
void RegisterEntityInfo(uint64_t entity_id, EntityInfo* info);
void RemoveEntityInfo(uint64_t entity_id);
int GetMaxChosenEntry(uint64_t entity_id, uint64_t* max_chosen_entry,
uint64_t* max_cont_chosen_entry);
private:
struct Shard {
std::unordered_map<uint64_t, EntityInfo*> table;
ReadWriteLock lock;
};
std::array<Shard, 128> shards_;
};
} // namespace certain
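EntityInfoGroup's 128-way sharding above (hash the entity id to a shard, lock only that shard) is a general technique for reducing lock contention; a hedged Python sketch of the same idea:

```python
import threading

class ShardedRegistry:
    """128-way sharded map: each shard has its own table and lock, so
    operations on different entities rarely contend."""

    NUM_SHARDS = 128

    def __init__(self):
        self._tables = [{} for _ in range(self.NUM_SHARDS)]
        self._locks = [threading.Lock() for _ in range(self.NUM_SHARDS)]

    def _shard(self, entity_id):
        return entity_id % self.NUM_SHARDS

    def register(self, entity_id, info):
        i = self._shard(entity_id)
        with self._locks[i]:
            self._tables[i][entity_id] = info

    def remove(self, entity_id):
        i = self._shard(entity_id)
        with self._locks[i]:
            self._tables[i].pop(entity_id, None)

    def get(self, entity_id):
        i = self._shard(entity_id)
        with self._locks[i]:
            return self._tables[i].get(entity_id)
```

The C++ version uses a reader-writer lock per shard; a plain mutex keeps the sketch short.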
/* Copyright Joyent, Inc. and other Node contributors. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to
* deal in the Software without restriction, including without limitation the
* rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
* sell copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
* IN THE SOFTWARE.
*/
#include <signal.h>
#include "uv.h"
#include "task.h"
#ifndef _WIN32
#include <unistd.h>
#endif
TEST_IMPL(kill_invalid_signum) {
uv_pid_t pid;
pid = uv_os_getpid();
ASSERT(uv_kill(pid, -1) == UV_EINVAL);
#ifdef _WIN32
/* NSIG is not available on all platforms. */
ASSERT(uv_kill(pid, NSIG) == UV_EINVAL);
#endif
ASSERT(uv_kill(pid, 4096) == UV_EINVAL);
MAKE_VALGRIND_HAPPY();
return 0;
}
/* For Windows we test only signum handling */
#ifdef _WIN32
#define NSIG 32
static void signum_test_cb(uv_signal_t* handle, int signum) {
FATAL("signum_test_cb should not be called");
}
TEST_IMPL(win32_signum_number) {
uv_signal_t signal;
uv_loop_t* loop;
loop = uv_default_loop();
uv_signal_init(loop, &signal);
ASSERT(uv_signal_start(&signal, signum_test_cb, 0) == UV_EINVAL);
ASSERT(uv_signal_start(&signal, signum_test_cb, SIGINT) == 0);
ASSERT(uv_signal_start(&signal, signum_test_cb, SIGBREAK) == 0);
ASSERT(uv_signal_start(&signal, signum_test_cb, SIGHUP) == 0);
ASSERT(uv_signal_start(&signal, signum_test_cb, SIGWINCH) == 0);
ASSERT(uv_signal_start(&signal, signum_test_cb, SIGILL) == 0);
ASSERT(uv_signal_start(&signal, signum_test_cb, SIGABRT_COMPAT) == 0);
ASSERT(uv_signal_start(&signal, signum_test_cb, SIGFPE) == 0);
ASSERT(uv_signal_start(&signal, signum_test_cb, SIGSEGV) == 0);
ASSERT(uv_signal_start(&signal, signum_test_cb, SIGTERM) == 0);
ASSERT(uv_signal_start(&signal, signum_test_cb, SIGABRT) == 0);
ASSERT(uv_signal_start(&signal, signum_test_cb, -1) == UV_EINVAL);
ASSERT(uv_signal_start(&signal, signum_test_cb, NSIG) == UV_EINVAL);
ASSERT(uv_signal_start(&signal, signum_test_cb, 1024) == UV_EINVAL);
MAKE_VALGRIND_HAPPY();
return 0;
}
#else
#include <errno.h>
#include <signal.h>
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#define NSIGNALS 10
struct timer_ctx {
unsigned int ncalls;
uv_timer_t handle;
int signum;
};
struct signal_ctx {
enum { CLOSE, STOP, NOOP } stop_or_close;
unsigned int ncalls;
uv_signal_t handle;
int signum;
int one_shot;
};
static void signal_cb(uv_signal_t* handle, int signum) {
struct signal_ctx* ctx = container_of(handle, struct signal_ctx, handle);
ASSERT(signum == ctx->signum);
if (++ctx->ncalls == NSIGNALS) {
if (ctx->stop_or_close == STOP)
uv_signal_stop(handle);
else if (ctx->stop_or_close == CLOSE)
uv_close((uv_handle_t*)handle, NULL);
else
ASSERT(0);
}
}
static void signal_cb_one_shot(uv_signal_t* handle, int signum) {
struct signal_ctx* ctx = container_of(handle, struct signal_ctx, handle);
ASSERT(signum == ctx->signum);
ASSERT(++ctx->ncalls == 1);
if (ctx->stop_or_close == CLOSE)
uv_close((uv_handle_t*)handle, NULL);
}
static void timer_cb(uv_timer_t* handle) {
struct timer_ctx* ctx = container_of(handle, struct timer_ctx, handle);
raise(ctx->signum);
if (++ctx->ncalls == NSIGNALS)
uv_close((uv_handle_t*)handle, NULL);
}
static void start_watcher(uv_loop_t* loop,
int signum,
struct signal_ctx* ctx,
int one_shot) {
ctx->ncalls = 0;
ctx->signum = signum;
ctx->stop_or_close = CLOSE;
ctx->one_shot = one_shot;
ASSERT(0 == uv_signal_init(loop, &ctx->handle));
if (one_shot)
ASSERT(0 == uv_signal_start_oneshot(&ctx->handle, signal_cb_one_shot, signum));
else
ASSERT(0 == uv_signal_start(&ctx->handle, signal_cb, signum));
}
static void start_timer(uv_loop_t* loop, int signum, struct timer_ctx* ctx) {
ctx->ncalls = 0;
ctx->signum = signum;
ASSERT(0 == uv_timer_init(loop, &ctx->handle));
ASSERT(0 == uv_timer_start(&ctx->handle, timer_cb, 5, 5));
}
TEST_IMPL(we_get_signal) {
struct signal_ctx sc;
struct timer_ctx tc;
uv_loop_t* loop;
loop = uv_default_loop();
start_timer(loop, SIGCHLD, &tc);
start_watcher(loop, SIGCHLD, &sc, 0);
sc.stop_or_close = STOP; /* stop, don't close the signal handle */
ASSERT(0 == uv_run(loop, UV_RUN_DEFAULT));
ASSERT(tc.ncalls == NSIGNALS);
ASSERT(sc.ncalls == NSIGNALS);
start_timer(loop, SIGCHLD, &tc);
ASSERT(0 == uv_run(loop, UV_RUN_DEFAULT));
ASSERT(tc.ncalls == NSIGNALS);
ASSERT(sc.ncalls == NSIGNALS);
sc.ncalls = 0;
sc.stop_or_close = CLOSE; /* now close it when it's done */
uv_signal_start(&sc.handle, signal_cb, SIGCHLD);
start_timer(loop, SIGCHLD, &tc);
ASSERT(0 == uv_run(loop, UV_RUN_DEFAULT));
ASSERT(tc.ncalls == NSIGNALS);
ASSERT(sc.ncalls == NSIGNALS);
MAKE_VALGRIND_HAPPY();
return 0;
}
TEST_IMPL(we_get_signals) {
struct signal_ctx sc[4];
struct timer_ctx tc[2];
uv_loop_t* loop;
unsigned int i;
loop = uv_default_loop();
start_watcher(loop, SIGUSR1, sc + 0, 0);
start_watcher(loop, SIGUSR1, sc + 1, 0);
start_watcher(loop, SIGUSR2, sc + 2, 0);
start_watcher(loop, SIGUSR2, sc + 3, 0);
start_timer(loop, SIGUSR1, tc + 0);
start_timer(loop, SIGUSR2, tc + 1);
ASSERT(0 == uv_run(loop, UV_RUN_DEFAULT));
for (i = 0; i < ARRAY_SIZE(sc); i++)
ASSERT(sc[i].ncalls == NSIGNALS);
for (i = 0; i < ARRAY_SIZE(tc); i++)
ASSERT(tc[i].ncalls == NSIGNALS);
MAKE_VALGRIND_HAPPY();
return 0;
}
TEST_IMPL(we_get_signal_one_shot) {
struct signal_ctx sc;
struct timer_ctx tc;
uv_loop_t* loop;
loop = uv_default_loop();
start_timer(loop, SIGCHLD, &tc);
start_watcher(loop, SIGCHLD, &sc, 1);
sc.stop_or_close = NOOP;
ASSERT(0 == uv_run(loop, UV_RUN_DEFAULT));
ASSERT(tc.ncalls == NSIGNALS);
ASSERT(sc.ncalls == 1);
start_timer(loop, SIGCHLD, &tc);
ASSERT(0 == uv_run(loop, UV_RUN_DEFAULT));
ASSERT(sc.ncalls == 1);
sc.ncalls = 0;
sc.stop_or_close = CLOSE; /* now close it when it's done */
uv_signal_start_oneshot(&sc.handle, signal_cb_one_shot, SIGCHLD);
start_timer(loop, SIGCHLD, &tc);
ASSERT(0 == uv_run(loop, UV_RUN_DEFAULT));
ASSERT(tc.ncalls == NSIGNALS);
ASSERT(sc.ncalls == 1);
MAKE_VALGRIND_HAPPY();
return 0;
}
TEST_IMPL(we_get_signals_mixed) {
struct signal_ctx sc[4];
struct timer_ctx tc;
uv_loop_t* loop;
loop = uv_default_loop();
/* 2 one-shot */
start_timer(loop, SIGCHLD, &tc);
start_watcher(loop, SIGCHLD, sc + 0, 1);
start_watcher(loop, SIGCHLD, sc + 1, 1);
sc[0].stop_or_close = CLOSE;
sc[1].stop_or_close = CLOSE;
ASSERT(0 == uv_run(loop, UV_RUN_DEFAULT));
ASSERT(tc.ncalls == NSIGNALS);
ASSERT(sc[0].ncalls == 1);
ASSERT(sc[1].ncalls == 1);
/* 2 one-shot, 1 normal then remove normal */
start_timer(loop, SIGCHLD, &tc);
start_watcher(loop, SIGCHLD, sc + 0, 1);
start_watcher(loop, SIGCHLD, sc + 1, 1);
sc[0].stop_or_close = CLOSE;
sc[1].stop_or_close = CLOSE;
start_watcher(loop, SIGCHLD, sc + 2, 0);
uv_close((uv_handle_t*)&(sc[2]).handle, NULL);
ASSERT(0 == uv_run(loop, UV_RUN_DEFAULT));
ASSERT(tc.ncalls == NSIGNALS);
ASSERT(sc[0].ncalls == 1);
ASSERT(sc[1].ncalls == 1);
ASSERT(sc[2].ncalls == 0);
/* 2 normal, 1 one-shot then remove one-shot */
start_timer(loop, SIGCHLD, &tc);
start_watcher(loop, SIGCHLD, sc + 0, 0);
start_watcher(loop, SIGCHLD, sc + 1, 0);
sc[0].stop_or_close = CLOSE;
sc[1].stop_or_close = CLOSE;
start_watcher(loop, SIGCHLD, sc + 2, 1);
uv_close((uv_handle_t*)&(sc[2]).handle, NULL);
ASSERT(0 == uv_run(loop, UV_RUN_DEFAULT));
ASSERT(tc.ncalls == NSIGNALS);
ASSERT(sc[0].ncalls == NSIGNALS);
ASSERT(sc[1].ncalls == NSIGNALS);
ASSERT(sc[2].ncalls == 0);
/* 2 normal, 2 one-shot then remove 2 normal */
start_timer(loop, SIGCHLD, &tc);
start_watcher(loop, SIGCHLD, sc + 0, 0);
start_watcher(loop, SIGCHLD, sc + 1, 0);
start_watcher(loop, SIGCHLD, sc + 2, 1);
start_watcher(loop, SIGCHLD, sc + 3, 1);
sc[2].stop_or_close = CLOSE;
sc[3].stop_or_close = CLOSE;
uv_close((uv_handle_t*)&(sc[0]).handle, NULL);
uv_close((uv_handle_t*)&(sc[1]).handle, NULL);
ASSERT(0 == uv_run(loop, UV_RUN_DEFAULT));
ASSERT(tc.ncalls == NSIGNALS);
ASSERT(sc[0].ncalls == 0);
ASSERT(sc[1].ncalls == 0);
ASSERT(sc[2].ncalls == 1);
ASSERT(sc[3].ncalls == 1);
/* 1 normal, 1 one-shot, 2 normal then remove 1st normal, 2nd normal */
start_timer(loop, SIGCHLD, &tc);
start_watcher(loop, SIGCHLD, sc + 0, 0);
start_watcher(loop, SIGCHLD, sc + 1, 1);
start_watcher(loop, SIGCHLD, sc + 2, 0);
start_watcher(loop, SIGCHLD, sc + 3, 0);
sc[3].stop_or_close = CLOSE;
uv_close((uv_handle_t*)&(sc[0]).handle, NULL);
uv_close((uv_handle_t*)&(sc[2]).handle, NULL);
ASSERT(0 == uv_run(loop, UV_RUN_DEFAULT));
ASSERT(tc.ncalls == NSIGNALS);
ASSERT(sc[0].ncalls == 0);
ASSERT(sc[1].ncalls == 1);
ASSERT(sc[2].ncalls == 0);
ASSERT(sc[3].ncalls == NSIGNALS);
MAKE_VALGRIND_HAPPY();
return 0;
}
#endif /* _WIN32 */
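The persistent vs. one-shot distinction the tests above exercise can be mimicked in a few lines of Python on POSIX systems (a hedged sketch; CPython exposes only one handler per signal, so "one-shot" is emulated by disarming the handler on first delivery):

```python
import os
import signal

counts = {"usr1": 0, "usr2": 0}

def persistent(signum, frame):
    # stays installed: fires on every delivery
    counts["usr1"] += 1

def one_shot(signum, frame):
    # fires once, then disarms itself (emulating uv_signal_start_oneshot)
    counts["usr2"] += 1
    signal.signal(signal.SIGUSR2, signal.SIG_IGN)

signal.signal(signal.SIGUSR1, persistent)
signal.signal(signal.SIGUSR2, one_shot)

for _ in range(3):
    os.kill(os.getpid(), signal.SIGUSR1)
    os.kill(os.getpid(), signal.SIGUSR2)

print(counts)  # the one-shot handler ran exactly once
```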
The Effect of Changing the Contraction Mode During Resistance Training on mTORC1 Signaling and Muscle Protein Synthesis

Acute resistance exercise (RE) increases muscle protein synthesis (MPS) via activation of mechanistic target of rapamycin complex (mTORC), and chronic resistance exercise training (RT) results in skeletal muscle hypertrophy. Although MPS in response to RE is blunted over time during RT, no effective restorative strategy has been identified. Since eccentric muscle contraction (EC) has the potential to strongly stimulate mTORC1 activation and MPS, changing the muscle contraction mode to EC might maintain the MPS response to RE during chronic RT. Male rats were randomly divided into RE (1 bout of RE) and RT (13 bouts of RE) groups. Additionally, each group was subdivided into isometric contraction (IC) and EC subgroups. The RE groups performed acute, unilateral RE using IC or EC. The RT groups performed 12 bouts of unilateral RE using IC. For bout 13, the RT-IC subgroup performed a further IC bout, while the RT-EC subgroup changed to EC. All muscle contractions were induced by percutaneous electrical stimulation. Muscle samples were obtained at 6 h post exercise in all groups. After the 1st RE bout, the EC group showed significantly higher p70S6K Thr389 phosphorylation than the IC group. However, the phosphorylation of other mTORC1-associated proteins (4E-BP1 and ribosomal protein S6) and the MPS response did not differ between the contraction modes. After the 13th bout of RE, mTORC1 activation and the MPS response were significantly blunted as compared with the 1st bout of RE. Changing from IC to EC did not improve these responses. In conclusion, changing the contraction mode to EC does not reinvigorate the blunted mTORC1 activation and MPS in response to RE during chronic RT.

INTRODUCTION

Resistance exercise is known to stimulate muscle protein synthesis (MPS), and chronic resistance exercise training induces muscle hypertrophy.
Muscle hypertrophy is believed to occur as increased MPS enables new myofibrils to be added to pre-existing muscle fibers. Although the detailed molecular mechanisms of the resistance exercise-induced increase in MPS remain unclear, recent research revealed that the both rapamycin-sensitive and -insensitive mechanistic target of rapamycin complex (e.g., mTORC1 and/or mTORC2) plays a role in this event (;;Ogasawara and Suginohara, 2018). Several studies have reported that acute resistance exercise induces mTORC1 activation, which is typically evaluated by measuring the phosphorylation of its downstream targets, such as p70S6K and 4E-BP1 (a,b). The time course of muscle hypertrophy by resistance training has been well studied. In general, muscle hypertrophy is greater during the early phase of resistance training (e.g., up to 2-3 months) than during the later phase (;b;). Similarly, recent studies have reported that both mTORC1 activation and the MPS response are greater during the early phase of resistance training than during the later phase (a;). Therefore, although the MPS response in the early phase of resistance training may contribute not only to muscle hypertrophy, but also to remodeling of muscle structure (), greater stimulation of mTORC1 and MPS by acute resistance exercise should contribute to continuous muscle hypertrophy during the later phase of resistance training. Essentially, muscle contraction during exercise is classified into three modes: eccentric (EC), concentric, and isometric (IC). A few previous studies have reported that EC has no or slightly positive effects on acute contraction-induced increases in MPS in untrained subjects (;). However, some studies have reported that EC can induce greater mTORC1 activation (i.e., p70S6K phosphorylation) as compared with the other modes of contraction (;;;). 
These observations indicate the possibility that EC reactivates mTORC1 signaling during chronic resistance training, resulting in greater increases in MPS. Therefore, we hypothesized that changing the contraction mode during resistance training could recover post-exercise mTORC1 activation and the MPS response. To test this hypothesis, we evaluated the influence of changing the contraction mode from IC to EC after chronic resistance training on post-exercise mTORC1 activation and MPS using a rat model of resistance exercise.

Animal Experimental Procedures

The study protocol was approved by the Ethics Committee for Animal Experiments at Nippon Sport Science University, Japan. Twenty male Sprague-Dawley rats, aged 10 weeks (350-390 g body weight), were purchased from CREA Japan (Tokyo, Japan). The animals were housed for 1 week in an environment maintained at 22-24°C with a 12 h-12 h light-dark cycle and received food and water ad libitum. The rats were randomly classified into an acute resistance exercise group (one bout of resistance exercise; n = 10) and a chronic resistance training group (13 bouts of resistance exercise; n = 10). Additionally, each of these groups was subdivided into an IC group and an EC group (n = 5 per group). The acute resistance exercise group performed a single bout of unilateral exercise. The chronic resistance training group performed 12 bouts of unilateral training using IC. For the 13th bout of exercise, the IC subgroup performed a further bout of exercise using IC and the EC subgroup performed a further bout using EC (the experimental scheme is shown in Figure 1). The rats in all groups were sacrificed by blood removal from the aorta, and gastrocnemius muscle was removed at 6 h after the final bout of exercise. The tissue samples were quickly frozen in liquid nitrogen and stored at −80°C until analysis.

Acute Resistance Exercise

After an overnight fast, the lower legs of each rat were shaved under inhaled isoflurane anesthesia.
The rats were then positioned with their right foot on a footplate in the prone posture. The triceps surae muscle was stimulated percutaneously with disposable electrodes (Vitrode V, Nihon Kohden, Tokyo, Japan), which were cut to 10 mm × 5 mm size and connected to an electrical stimulator and an isolator. The right gastrocnemius muscle was exercised with five sets of muscle contraction separated by rest intervals of 3 min. Each set comprised 3 s stimulation × 10 contractions, with a 7 s interval between contractions. The non-exercised left gastrocnemius muscle served as the internal control. The voltage (∼30 V) and stimulation frequency (100 Hz) were adjusted to produce maximal isometric tension. The contraction mode was switched by changing the foot-tibia angle during contraction (60°-105°; the joint angular velocity was set at 15°/s in the EC group).

Chronic Resistance Training

Chronic training was performed as previously described. In brief, acute resistance exercise using IC was performed every other day (e.g., either Monday, Wednesday, and Friday or Tuesday, Thursday, and Saturday) for 4 weeks (12 bouts in total). After the overnight fast, a 13th bout of exercise was performed using either IC or EC.

Western Blotting

Western blot analysis was performed as reported previously. Briefly, frozen muscle samples were powdered using a bead crusher (T-12, TAITEC, Saitama, Japan), and 20 mg of each powdered sample was homogenized in 10 volumes of homogenization buffer containing 20 mM Tris-HCl (pH 7.5), 1% Nonidet™ P40, 1% sodium deoxycholate, 1 mM EDTA, 1 mM EGTA, 150 mM NaCl, and Halt™ protease and phosphatase inhibitor cocktail (Thermo Fisher Scientific, Waltham, MA, United States). Homogenates were centrifuged at 10,000 × g for 10 min at 4°C and the supernatants were collected. The protein concentration of each sample was then determined using the Protein Assay Rapid kit (Wako Pure Chemical Industries, Osaka, Japan).
The samples were diluted in Laemmli sample buffer and boiled at 95°C for 5 min. Using 5-20% SDS-polyacrylamide gels, both 20 and 50 µg of protein were separated by electrophoresis and subsequently transferred to polyvinylidene difluoride membranes. After transfer, the membranes were washed in Tris-buffered saline containing 0.1% Tween® 20 and then blocked with the Bullet Blocking One for Western Blotting buffer (Nacalai Tesque, Kyoto, Japan) for 5 min at room temperature. After blocking, the membranes were washed and incubated overnight at 4°C with primary antibodies including p-p70S6K (Thr389, cat#9205; Ser421/Thr424, cat#9204, Cell Signaling Technology, Danvers, MA, United States), p-ribosomal protein S6 (Ser235/236, cat#2211; Ser240/244, cat#2215, Cell Signaling Technology), total-4E-BP1 (cat#9452, Cell Signaling Technology), p-FAK (Tyr397, cat#3283), p-p38MAPK (Thr180/Tyr182, cat#9211, Cell Signaling Technology), and embryonic myosin heavy chain (cat#cs-53091, Santa Cruz Biotechnology, Dallas, TX, United States). The membranes were then washed again in Tris-buffered saline containing 0.1% Tween® 20 and incubated for 1 h at room temperature with the appropriate secondary antibodies. Chemiluminescent reagents (ImmunoStar® LD, Wako Pure Chemical Industries) were used to facilitate the detection of protein bands. Images were scanned using a chemiluminescence detector (C-DiGit® blot scanner, LI-COR Biosciences, Lincoln, NE, United States). Band intensities were quantified using Image Studio™ Lite Ver. 5.2 (LI-COR Biosciences). After the chemiluminescence detection, membranes were stained with Coomassie Brilliant Blue (CBB) solution, and the intensity of each protein band was normalized to that of the stained blot.

Muscle Protein Synthesis

The in vivo SUnSET technique was used for MPS measurements. Briefly, 0.04 µmol puromycin/g body weight (Wako, Tokyo, Japan) diluted in 0.02 M PBS was injected intraperitoneally under anesthesia 15 min before harvest.
Following homogenization as described above and centrifugation at 2,000 × g for 3 min at 4°C, the supernatant was collected and processed for Western blotting. A mouse monoclonal anti-puromycin antibody (cat#MABE343, Millipore, Billerica, MA, United States) was used to detect puromycin-labeled nascent polypeptides, and the sum of the intensities of the protein ladder bands on each Western blot was evaluated. Statistical Analyses Two-way ANOVA (contraction mode × training) was used to evaluate changes in the phosphorylation and/or expression of proteins. Post hoc analyses were performed using t-tests, with Benjamini-Hochberg false discovery rate correction for multiple comparisons when appropriate. All values are expressed as means ± standard error. The level of significance was set at p < 0.05. RESULTS The changes in p70S6K phosphorylation in response to acute muscle contraction at the 1st and 13th bouts of exercise are shown in Figure 2 and Supplementary Figures S1, S2. The Thr389 residue of p70S6K was significantly phosphorylated in both the IC and EC groups at the 1st bout of exercise (Supplementary Figure S1, p < 0.05 vs. control). The level of phosphorylation was significantly higher in the EC group than in the IC group after the 1st bout (Figure 2A, IC: 291 ± 30%, EC: 588 ± 58%, p < 0.05 vs. IC). At the 13th bout of exercise, chronic resistance training significantly reduced Thr389 phosphorylation in response to acute muscle contraction in both the IC and EC groups as compared with their responses at the 1st bout (IC: 195 ± 19%, EC: 265 ± 36%). Moreover, in contrast to the 1st bout of exercise, there was no significant difference in phosphorylation between the IC and EC groups at the 13th bout of exercise (Figure 2A).
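The Benjamini-Hochberg false discovery rate correction used for the post hoc t-tests above can be sketched as follows. This is a minimal illustration with made-up p-values, not the authors' analysis code:

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Return a list of booleans: True where the null hypothesis is
    rejected after Benjamini-Hochberg false discovery rate correction."""
    m = len(p_values)
    # Sort p-values while remembering their original positions
    order = sorted(range(m), key=lambda i: p_values[i])
    # Find the largest rank k with p_(k) <= (k/m) * alpha ...
    max_k = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * alpha:
            max_k = rank
    # ... and reject every hypothesis with rank <= k
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= max_k:
            reject[i] = True
    return reject

# Example with hypothetical post hoc p-values
p = [0.001, 0.008, 0.039, 0.041, 0.20]
print(benjamini_hochberg(p))  # [True, True, False, False, False]
```

Note that, unlike a Bonferroni correction, the step-up procedure compares each ordered p-value to a rank-dependent threshold, which is why 0.008 survives here while 0.039 does not.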
The phosphorylation of the p70S6K Thr421/Ser424 residues significantly increased in both the IC and EC groups in response to the 1st bout of exercise (Supplementary Figure S2, p < 0.05 vs. control). The magnitude of phosphorylation did not differ between the IC and EC groups (IC: 290 ± 59%, EC: 255 ± 46%). At the 13th bout of exercise, no change in Thr421/Ser424 phosphorylation was detected in either group (Figure 2B and Supplementary Figure S2). Additionally, in both groups, Thr421/Ser424 phosphorylation in the exercised leg was significantly reduced as compared with that measured after the 1st bout of exercise (Figure 2B, p < 0.05). The changes in the activity of 4E-BP1 in response to acute muscle contraction at the 1st and 13th bouts of exercise are shown in Figure 3 and Supplementary Figure S3. The expression of the γ isoform of 4E-BP1 significantly increased in both the IC and EC groups after the 1st bout of exercise, with no statistical difference between groups (Supplementary Figure S3, IC: 180 ± 30%, EC: 230 ± 27%, p < 0.05 vs. control). After chronic resistance training, neither the IC nor the EC group showed a difference in the expression of the γ isoform of 4E-BP1 as compared with the contralateral leg (Figure 3 and Supplementary Figure S3). The results for the induction of ribosomal protein S6 phosphorylation by acute resistance exercise at the 1st and 13th bouts of exercise are shown in Figure 4 and Supplementary Figures S4, S5. At the 1st bout of exercise, the Ser240/244 residues of ribosomal protein S6 were significantly phosphorylated in both the IC and EC groups (Supplementary Figure S4, p < 0.05 vs. control). The level of phosphorylation was similar between the IC and EC groups (Figure 4A, IC: 907 ± 122%, EC: 711 ± 52%). At the 13th bout of exercise, both the IC and EC groups exhibited significantly increased phosphorylation of the Ser240/244 residues to the same degree (Figure 4A and Supplementary Figure S4, IC: 361 ± 54%, EC: 311 ± 41%, p < 0.05 vs.
control). In both groups, the magnitude of phosphorylation was significantly lower than that measured at the 1st bout of exercise (Figure 4A, p < 0.05). In addition, the Ser235/236 residues of ribosomal protein S6 were significantly phosphorylated to the same extent in both the IC and EC groups at the 1st bout of exercise (Figure 4B and Supplementary Figure S5, IC: 619 ± 106%, EC: 553 ± 172%). At the 13th bout of exercise, Ser235/236 phosphorylation was significantly lower in both groups as compared with the 1st bout (Figure 4B). DISCUSSION In this study, we investigated the influence of changing the muscle contraction mode from IC to EC after chronic resistance training on the post-exercise responses of mTORC1 activation and MPS. We hypothesized that, because EC might stimulate mTORC1 and MPS to a greater extent than IC, EC might reinvigorate the blunted mTORC1 activation and MPS after chronic training. However, acute EC enhanced only p70S6K phosphorylation, and this did not contribute to the MPS response. Moreover, changing the contraction mode to EC failed to reinvigorate blunted mTORC1 activation and MPS in response to acute exercise during a period of chronic training. Effect of EC on the MPS Response After Acute Resistance Exercise Previous studies have reported that EC potently induces higher mTORC1 activation and MPS than other modes of contraction in rodents and humans (;;). Similarly, we observed in the present study that acute exercise-induced p70S6K Thr389 phosphorylation was higher in the EC group than in the IC group after the 1st bout of exercise. Thus, it is apparent that acute EC potently augments p70S6K Thr389 phosphorylation. Phosphorylation of p70S6K on Thr421/Ser424 is involved in the initiation of kinase activation through the release of kinase autoinhibition ().
In this study, the phosphorylation status of p70S6K Thr421/Ser424 did not statistically differ between the IC and EC groups after the 1st bout of exercise. Additionally, we investigated the downstream substrates of p70S6K and found that the phosphorylation of ribosomal protein S6 Ser240/244 and Ser235/236 increased after acute exercise to a similar extent in both the IC and EC groups after the 1st bout of exercise. Therefore, the present results suggest that although EC induces the phosphorylation of p70S6K Thr389 more strongly than IC does, the activity of p70S6K might not robustly differ between these contraction modes. Secondly, the mTORC1 target substrate 4E-BP1 is usually bound to eIF4E and limits translation at ribosomes (). Acute resistance exercise inactivates 4E-BP1, causing it to dissociate from eIF4E and resulting in increased MPS (;;). In this study, the expression of the 4E-BP1 γ isoform, which indicates 4E-BP1 dissociation from eIF4E, was increased after acute resistance exercise, in agreement with previous observations (Ogasawara et al., 2017). However, unlike p70S6K Thr389 phosphorylation, the expression level of the 4E-BP1 γ isoform did not differ between the IC and EC groups after exercise. In a previous study by another group, it was observed that muscle contraction induced p70S6K phosphorylation and 4E-BP1 inactivation to different degrees (). In addition, we recently demonstrated that p70S6K Thr389 phosphorylation is predominantly regulated by rapamycin-sensitive mTORC, but 4E-BP1 is controlled by both rapamycin-sensitive mTORC1 and rapamycin-insensitive mTORC in skeletal muscle (Ogasawara and Suginohara, 2018;). Therefore, our results indicate that EC might specifically stimulate rapamycin-sensitive mTORC1 via unknown mechanisms after the 1st bout of exercise in this study. In the current study, the rate of MPS did not differ between the IC and EC groups after the 1st bout of exercise, as was also the case in previous studies (;;).
Classically, the level of p70S6K Thr389 phosphorylation is known to correspond with MPS after acute resistance exercise. However, recent studies have observed that the MPS response after resistance exercise does not necessarily reflect the level of p70S6K phosphorylation, suggesting that rapamycin-sensitive mTORC plays a minor role in the regulation of MPS after muscle contraction (Ogasawara and Suginohara, 2018;). Therefore, although EC stimulates rapamycin-sensitive mTORC1 to a greater extent than IC, it does not induce greater increases in MPS. On the other hand, we previously reported that the MPS response to IC gradually increased with set number (i.e., force-time integral) and then plateaued at five sets in our experimental model (). Therefore, the MPS response to muscle contraction may have reached a plateau. Effect of Chronic Resistance Training on the MPS Response After Acute Exercise Muscle hypertrophy in response to resistance training is robust during the early phase of training and subsequently wanes with continued training (a;). It has previously been reported that the induction of mTORC1 activation and MPS in response to acute exercise is blunted by chronic resistance training, and that this phenomenon is associated with a stagnation of muscle hypertrophy during chronic resistance training (a;). Concordantly, we observed in the current study that the activation of downstream targets of mTORC1 in response to acute exercise was significantly reduced after 13 bouts of exercise as compared with the 1st bout. Taken together, previous studies and our current results suggest that reduced activation of mTORC1-regulated molecules is involved in the blunting of the MPS response during continuous resistance training, which may lead to a plateau of muscle hypertrophy. Previous studies have suggested that the MPS response in the early phase of resistance training is accompanied by muscle damage and/or remodeling ().
However, in this study, we could not detect any change in molecules associated with tissue inflammation or muscle structural remodeling (p38MAPK, FAK) in response to acute exercise after either the 1st or the 13th bout of exercise (Supplementary Figures S7, S8). In addition, although we measured embryonic myosin heavy chain as a muscle damage marker, no obvious band could be detected in the present study (data not shown). Thus, our results indicate that these responses may not necessarily be related to the higher mTORC1 activation and MPS during acute exercise in the early phase of training. However, it should be noted that although the sampling point in this study was 6 h post exercise, previous studies have shown that acute resistance exercise increases the phosphorylation of p38MAPK and FAK only during the early phase of recovery (<2 h post exercise) (;;). Previous studies have also observed that EC greatly increases p38MAPK phosphorylation (). In addition, FAK phosphorylation is also strongly stimulated by EC (). Therefore, we cannot exclude the possibility that these molecules contribute to mTORC1 activation and the MPS response during the early phase of recovery after resistance exercise. Future studies are needed to investigate whether the inhibition of these molecules during and after resistance exercise affects mTOR activation and MPS. Effect of Changing Contraction Mode on the MPS Response to Acute Resistance Exercise During Successive Resistance Training The primary purpose of this study was to evaluate whether changing the contraction mode from IC to EC could rescue the acute exercise-induced MPS response during chronic resistance training. Considering previous reports that EC has the potential to stimulate mTORC1 and MPS (;;;), we hypothesized that changing the contraction mode to EC during chronic resistance training might reinvigorate mTORC1 activation and the MPS response to exercise.
In the current study, however, changing the contraction mode did not reinvigorate mTORC1 activation and MPS in response to acute exercise during chronic resistance training. Interestingly, even though EC produced significantly greater p70S6K Thr389 phosphorylation than IC after the 1st bout of exercise, this difference disappeared after chronic resistance training. Furthermore, mTORC1 activation after EC was blunted after 13 bouts compared with after just one bout, even though the first 12 bouts of chronic resistance training were performed using IC. Recent studies have also observed that, in humans, the resistance exercise-induced MPS response is blunted by chronic resistance training irrespective of the contraction mode (). Therefore, these results indicate that chronic resistance training reduces the responsiveness of mTORC1 activation to exercise independently of the contraction mode, and that changing the contraction mode from IC to EC is not suitable for improving mTORC1 activation and the MPS response to acute resistance exercise during continuous resistance training. Previous studies and the results herein show that chronic high-force contractile activity, such as resistance training, diminishes mTORC1 activation and MPS in response to acute exercise (a;;). Nevertheless, the mechanisms underlying these phenomena remain unclear. In contrast, a reduction in contractile activity (detraining or unloading) sensitizes mTORC1 activation and MPS in response to acute resistance exercise or intense muscle contraction (a;). Hence, the constitutive muscle contractile status may modulate the responsiveness of mTORC1 and MPS to acute exercise. Clarification of the mechanisms connecting chronic contractile activity and the sensitivity of the MPS response to acute muscle contraction should provide an efficient way to overcome the stagnation of muscle hypertrophy during chronic resistance training.
ETHICS STATEMENT The study protocol was approved by the Ethics Committee for Animal Experiments at Nippon Sport Science University, Japan. AUTHOR CONTRIBUTIONS KN and RO contributed to the conception and design of the experiments. SA, DT, YI, TS, YM, and RO collected, analyzed, and interpreted the data. SA, KN, and RO drafted the article and revised it critically for important intellectual content. All authors approved the final version of the manuscript and qualify for authorship. |
A 27-Year-Old Brazilian Woman with a History of Left Salpingectomy and Late Diagnosis of an Extra-Uterine Intra-Abdominal Pregnancy and Live Birth at 26 Weeks Gestation Patient: Female, 27-year-old Final Diagnosis: Abdominal pregnancy Symptoms: None Medication: Clinical Procedure: Specialty: Obstetrics and Gynecology Objective: Rare disease Background: Abdominal pregnancy is a special type of ectopic pregnancy, characterized by implantation of the embryo in the peritoneal cavity (tubal, ovarian, and intraligamentary pregnancies excluded), accounting for approximately 1% of all cases. It was first reported in 1708 after an autopsy, and numerous cases have since been reported, with a current incidence of 1:10,000 to 1:30,000 pregnancies. Case Report: We report the case of a 27-year-old woman, resident of the city of Caxias do Sul, Brazil, with an extra-uterine pregnancy diagnosed by ultrasound at 25 weeks and 1 day of gestational age with a live fetus. Conclusions: Abdominal gestation is a rare type of ectopic pregnancy and constitutes a life-threatening situation. Its biggest challenge is early diagnosis, since most cases go unnoticed on the ultrasound performed in the first trimester and, when symptomatic, do not present in a specific way. When necessary, MRI has been shown to greatly elucidate such cases. Moreover, the therapeutic decision also presents some disparities in the literature. Although open surgery is known to be the best option, many doubts remain regarding whether to perform placental extraction, since the removal process can cause abundant bleeding, putting the patient at risk during the surgical procedure, just as its maintenance in situ with drug treatment can also aggravate the patient's clinical condition.
Background Abdominal pregnancy is a special type of ectopic pregnancy, characterized by implantation of the embryo in the peritoneal cavity (tubal, ovarian, and intraligamentary pregnancies excluded), accounting for approximately 1% of all cases. It was first reported in 1708 after an autopsy, and numerous cases have since been reported, with a current incidence of 1:10,000 to 1:30,000 pregnancies. Therefore, we present this case report of a late-diagnosed abdominal pregnancy with a live fetus. Case Report A 27-year-old woman underwent the first ultrasound of the pregnancy at a service external to the General Hospital and was referred to the hospital because of suspicion of an extra-uterine pregnancy. Ultrasound imaging showed a fetus apparently located in the abdominal cavity to the right of the uterus, a placenta inserted behind the uterus, absent amniotic fluid, a fetus with heartbeat, an estimated weight of 670 g, and a gestational age of 25 weeks and 1 day. The patient had a history of pregnancy in the left uterine horn, with resection and suture repair of the horn in 2014, with the tube and ovary preserved. She had no previous comorbidities, except for a diagnosis of gestational diabetes managed with lifestyle changes; other prenatal exams were within the normal range. She arrived at the service without complaints, with good fetal movement, and without adverse events during this pregnancy. On physical examination, it was possible to identify a uterus of increased volume, reaching approximately the umbilical line, with easy palpation of fetal parts and auscultation of fetal heartbeat. On vaginal examination, bulging of the vaginal sac was observed, with anteriorization of the cervix. The patient was admitted to an obstetric inpatient unit to clarify the case and schedule the interruption of the pregnancy. Magnetic resonance imaging of the total abdomen was performed on the day of admission. No topical pregnancy was identified.
There was placental formation in the topography of the left adnexal region, surrounded by the ipsilateral fallopian tube. The fetus was inside the abdominal cavity, predominantly located in the right flank and iliac fossa, with the cephalic portion between the bladder and the rectum and the lower limbs interlocking on the right flank. We found marked pyelocaliceal dilatation with a transition in the caliber of the ureters at the level of the described alterations (Figures 1-3). Aiming at lung maturation, 2 ampoules, each of 1 mL containing 3 mg betamethasone acetate and 3.945 mg betamethasone disodium phosphate, were injected intramuscularly, and a new obstetric ultrasound was performed for a better fetal evaluation, showing an irregular cranial contour. There was a cystic image with thin septa adjacent to the fetal neck measuring 3.5 × 2.7 cm. The heart had 4 cavities and was apparently balanced. The thorax was apparently reduced in size, and it was not possible to evaluate the integrity of the diaphragmatic dome. The feet were in plantar flexion and the bladder was increased in size. After a multidisciplinary discussion with the oncologic surgery team (which was invited to participate in the case due to its complexity), in which maternal and fetal risks and benefits were extensively evaluated, and after explanation and discussion with the patient, it was decided to perform CT angiography of the abdomen with contrast to better evaluate placental implantation on large vessels and to ensure better surgical planning for the patient's safety (Figure 4).
The examination demonstrated increased arterial impregnation of the intrauterine vessels; signs of volumetric increase of the left tubal region, with increased vascularization on contrast; a placenta located in the hypogastrium/right iliac fossa with extension to the mesogastric region/left flank, without a cleavage plane with the posterior body wall of the uterus and the left adnexa; large-caliber vessels arising from the uterine and left ovarian arteries apparently supplying the placenta, as well as signs of venous drainage from the placenta through the left gonadal vein, which presented with increased caliber; and no signs of involvement of the intestinal loops by the placenta. After the CT angiography, the team talked to the patient about interrupting the pregnancy due to the increasing risk to the mother's life as the pregnancy progressed. The vascular surgery, urology, oncologic surgery, and obstetrics teams met. For preoperative preparation, magnesium sulfate (8 mL at 50% diluted in 12 mL of distilled water as an attack dose, with maintenance of 10 mL at 50% in 240 mL of saline solution in an infusion pump at 50 mL/h) was given for fetal neuroprotection, along with ampicillin due to prematurity, as well as an 8-h fast and a supply of blood components (4 units of packed red blood cells, 8 units of plasma, and 4 units of cryoprecipitate). Surgery was performed in 3 stages. Before starting the surgery, spinal anesthesia was performed, with a planned conversion to general anesthesia after delivery. In the first stage, the urological surgery team implanted a double-J catheter bilaterally to facilitate identification of the ureters in case more extensive surgery involving hysterectomy was needed intraoperatively. In the second stage, the vascular surgery team performed catheterization of the right femoral artery with balloon placement in the left hypogastric artery to reduce major bleeding.
In the third and last stage, exploratory laparotomy was performed. A fetus was visualized (Figure 5), without an amniotic sac, between intestinal loops in the abdominal cavity, and was removed in pelvic presentation with immediate clamping of the umbilical cord (Figure 6); the placenta was implanted on the left uterine artery, the left gonadal artery, and the posterior uterine wall, with invasion of the rectosigmoid (Figure 7). Due to massive bleeding after fetal removal and the patient's evolution to hemodynamic instability, uterine preservation and conservative management with placental maintenance and clinical treatment for absorption were not possible, and an en bloc extended total hysterectomy with sigmoidectomy (Figures 8, 9) and colostomy was performed. Intraoperatively, the patient presented hypovolemic shock, requiring replacement of blood products, with good recovery. After being moved to the ICU for postoperative recovery, she was discharged to the ward in less than 24 h. The female neonate was immediately attended to by the neonatal ICU team. She presented an Apgar score of 2 in the first minute and 6 in the fifth minute, weighed 960 g, and had a physical examination compatible with 26 weeks and 3 days of gestational age. She required orotracheal intubation soon after birth. General inspection identified skin remnants on the head and neck and 3 arteries and 1 vein in the umbilical cord stump. On pulmonary auscultation, she presented coarse vesicular breath sounds, with air entry but no chest expansion; furthermore, she remained cyanotic even under mechanical ventilation, with no response to surfactant use (200 mg per kilogram). A chest X-ray identified hypoexpanded lungs. On the day after birth, with less than 24 h of life, she died after cardiorespiratory arrest, with no response to management. The balloon in the left hypogastric artery was removed soon after the procedure.
The patient remained in a ward bed for 4 days, received cabergoline 1 mg, ampicillin 2 g in association with sulbactam 1 g, and diet progression. She was discharged from the hospital for follow-up at the gynecology and oncologic surgery outpatient clinics, with a plan for reconstruction of intestinal transit, which had not yet been performed to date. The double-J catheter was removed at an outpatient visit 2 months after surgery. Discussion Although the etiology of abdominal pregnancy remains unknown, it is known that embryo implantation in the abdominal cavity can be either primary or secondary. Secondary cases are more common and occur after rupture of a tubal pregnancy, tubal abortion, or even after rupture of a hysterotomy scar, a rudimentary horn, or uterine perforation, as in the case of our patient, who had a history of pregnancy in the left uterine horn with resection and suture repair of the horn. It is a life-threatening situation due to high maternal and fetal morbidity and mortality, with a maternal mortality rate of around 0.5% to 18% and a perinatal mortality rate between 40% and 95%. In addition, it is estimated that approximately 21% of newborns from an abdominal pregnancy have some malformation, probably due to compression of the fetus in the absence of amniotic fluid. Typical deformities include limb defects, facial and cranial asymmetry, joint abnormalities, and central nervous system malformations. However, early detection is difficult, with only 45% of diagnoses made prenatally and most cases diagnosed after complications or only intraoperatively. When symptomatic, the clinical signs are not very specific, but some authors believe that certain signs can alert to the possibility of the diagnosis: abdominal pain with disordered bowel movements, abdominal pain during active fetal movements, distension of the abdomen due to an irregular presentation, palpation of fetal parts below the maternal abdominal wall, and failure to trigger labor.
Unfortunately, these signs only appear in advanced abdominal pregnancy. Ultrasonography remains the main test for diagnosis, and the Royal College of Obstetricians and Gynaecologists has recommended the use of the sonographic criteria described by Gerli et al for diagnosis: absence of an intrauterine gestational sac; absence of dilatation of both uterine tubes and of complex ovarian masses; a gestational sac surrounded by intestinal loops and separated from them by the peritoneum; and wide mobility similar to flotation of the sac, particularly evident with pressure from the transvaginal transducer toward the posterior fundus. However, due to the complexity and rarity of the condition, these criteria are not always present and clear even to an experienced sonographer, and it is essential to perform an additional imaging exam to elucidate the case. MRI has proven to be of great help, as it facilitates the diagnosis and helps to localize and identify the relationship between the placenta and adjacent organs and tissues. The location of the placenta can help in deciding whether to continue with the pregnancy and in developing a relatively safe and reasonable treatment option and surgical plan. Open surgery has been the main treatment for advanced abdominal pregnancies due to better control of the risk of bleeding related to placental extraction, which is justified only if the placenta can be easily removed with a low risk of bleeding. It is believed that when the placenta is implanted in regions rich in vessels and with low mobility (pelvic ligaments, the region of the iliac vessels, liver, and spleen), surgery should be meticulous to avoid separation of the placenta, which should be left at its implantation site to be resorbed spontaneously postoperatively whenever possible.
The use of medications to assist the absorption process has been studied, the most prominent being methotrexate, but its use remains controversial because it involves a high risk of infection due to accelerated necrosis. Moreover, in cases where the placenta remains, one should be alert for the appearance of postoperative complications such as intestinal obstruction, infection, hemorrhage, anemia, and fistula, among others. These complications can worsen the maternal prognosis, with a mortality rate above 18%. Conclusions Abdominal pregnancy is a rare type of ectopic pregnancy and is a life-threatening situation. Although it is not a frequent situation in obstetrics, it requires attention and care by specialists so that everyone is prepared to act in the best possible way. Its biggest challenge is early diagnosis, since most cases go unnoticed on the ultrasound performed in the first trimester and, when symptomatic, do not present in a specific way. In our case, we were fortunate because, although our patient had her first ultrasound performed late and at an external service, it was done by an experienced physician and member of our clinical staff, which ensured, besides the diagnosis, an immediate referral to the obstetric center for a better evaluation and investigation of the case. It was a great challenge because, although ours is a reference hospital for high-risk pregnancy, most of the obstetrics team had never had experience with a similar case, which required many team members to seek information to ensure the best care for this patient. To this end, a multi-specialized team was involved, including, besides obstetrics, the radiology sector, surgical oncology, vascular surgery, urological surgery, anesthesiology, the transfusion agency, the adult ICU, and neonatology, since the fetus was alive and viable according to the gestational age.
Unfortunately, from the fetal point of view, we did not have the desired outcome; however, from the maternal point of view, although our first plan was for conservative treatment with methotrexate, since the patient did not have any living children, in view of the severity of the condition, the surgical complexity, and the unforeseen events during the transoperative period, we consider that we had the best possible outcome.
Association between matrix metallopeptidase 1 and type 2 diabetes mellitus coexisting with coronary heart disease in a Han Chinese population. Matrix metallopeptidase 1 (MMP-1) has been reported to be involved in the coexistence of type 2 diabetes mellitus (T2DM) and coronary heart disease (CHD). We sought to examine the association between the MMP-1 gene polymorphism and coexistence of T2DM and CHD in a Han Chinese population. We extracted genomic DNA from the peripheral blood of 794 subjects, including 378 patients with coexisting T2DM and CHD and 416 healthy controls. We selected several single nucleotide polymorphisms of the MMP-1 gene and genotyped them using the MassARRAY system, before analyzing the data with Haploview 4.0 and SPSS 20.0. A statistical difference was found in the distribution of rs1799750 genotypes between the patient and control groups (P = 0.041). The frequency of the 2G/2G genotype was 44.25 and 37.0% among patients and control subjects, respectively. Moreover, the frequency of the 2G allele was 65.9% among patients and 59.6% in the control group, and this difference was found to be significant (P = 0.010). Elevated body mass index was also associated with the 2G/2G genotype. Thus, MMP-1 rs1799750 may be involved in the development of coexisting T2DM and CHD in the Han Chinese population. |
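The allele-frequency comparison reported above (2G allele in 65.9% of 756 patient alleles vs. 59.6% of 832 control alleles) can be checked with a standard 2 × 2 chi-square test. The sketch below reconstructs approximate allele counts from the reported percentages and sample sizes, so the counts are our estimates for illustration, not the authors' raw genotyping data:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Approximate allele counts reconstructed from the abstract:
# 378 patients -> 756 alleles, 65.9% 2G; 416 controls -> 832 alleles, 59.6% 2G
patient_2g = round(0.659 * 756)
patient_1g = 756 - patient_2g
control_2g = round(0.596 * 832)
control_1g = 832 - control_2g

chi2 = chi_square_2x2(patient_2g, patient_1g, control_2g, control_1g)
print(round(chi2, 2))  # chi-square of roughly 6.6, well above the 3.84
                       # critical value for p = 0.05 (1 degree of freedom),
                       # consistent with the reported P = 0.010
```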
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.cassandra.io.sstable;
import java.io.File;
import java.io.IOError;
import java.io.IOException;
import java.util.*;
import java.util.regex.Pattern;
import com.google.common.annotations.VisibleForTesting;
import com.google.common.base.Objects;
import com.google.common.base.Splitter;
import org.apache.cassandra.db.Directories;
import org.apache.cassandra.io.sstable.format.SSTableFormat;
import org.apache.cassandra.io.sstable.format.Version;
import org.apache.cassandra.io.sstable.metadata.IMetadataSerializer;
import org.apache.cassandra.io.sstable.metadata.MetadataSerializer;
import org.apache.cassandra.utils.Pair;
import org.apache.cassandra.utils.UUIDGen;
import static org.apache.cassandra.io.sstable.Component.separator;
/**
* An SSTable is described by the keyspace and column family it contains data
* for, a generation (where higher generations contain more recent data) and
* an alphabetic version string.
*
* A descriptor can be marked as temporary, which influences generated filenames.
*/
public class Descriptor
{
private final static String LEGACY_TMP_REGEX_STR = "^((.*)\\-(.*)\\-)?tmp(link)?\\-((?:l|k).)\\-(\\d)*\\-(.*)$";
private final static Pattern LEGACY_TMP_REGEX = Pattern.compile(LEGACY_TMP_REGEX_STR);
public static String TMP_EXT = ".tmp";
private static final Splitter filenameSplitter = Splitter.on('-');
/** canonicalized path to the directory where SSTable resides */
public final File directory;
/** version has the following format: <code>[a-z]+</code> */
public final Version version;
public final String ksname;
public final String cfname;
public final int generation;
public final SSTableFormat.Type formatType;
private final int hashCode;
/**
* A descriptor that assumes CURRENT_VERSION.
*/
@VisibleForTesting
public Descriptor(File directory, String ksname, String cfname, int generation)
{
this(SSTableFormat.Type.current().info.getLatestVersion(), directory, ksname, cfname, generation, SSTableFormat.Type.current());
}
/**
* Constructor for sstable writers only.
*/
public Descriptor(File directory, String ksname, String cfname, int generation, SSTableFormat.Type formatType)
{
this(formatType.info.getLatestVersion(), directory, ksname, cfname, generation, formatType);
}
@VisibleForTesting
public Descriptor(String version, File directory, String ksname, String cfname, int generation, SSTableFormat.Type formatType)
{
this(formatType.info.getVersion(version), directory, ksname, cfname, generation, formatType);
}
public Descriptor(Version version, File directory, String ksname, String cfname, int generation, SSTableFormat.Type formatType)
{
assert version != null && directory != null && ksname != null && cfname != null && formatType.info.getLatestVersion().getClass().equals(version.getClass());
this.version = version;
try
{
this.directory = directory.getCanonicalFile();
}
catch (IOException e)
{
throw new IOError(e);
}
this.ksname = ksname;
this.cfname = cfname;
this.generation = generation;
this.formatType = formatType;
hashCode = Objects.hashCode(version, this.directory, generation, ksname, cfname, formatType);
}
public Descriptor withGeneration(int newGeneration)
{
return new Descriptor(version, directory, ksname, cfname, newGeneration, formatType);
}
public Descriptor withFormatType(SSTableFormat.Type newType)
{
return new Descriptor(newType.info.getLatestVersion(), directory, ksname, cfname, generation, newType);
}
public File tmpFileFor(Component component)
{
return new File(tmpFilenameFor(component));
}
public String tmpFilenameFor(Component component)
{
return filenameFor(component) + TMP_EXT;
}
/**
* @return a unique temporary file name for given component during entire-sstable-streaming.
*/
public String tmpFilenameForStreaming(Component component)
{
// Use a UUID to handle concurrent streaming sessions on the same sstable.
// TMP_EXT allows temp file to be removed by {@link ColumnFamilyStore#scrubDataDirectories}
return String.format("%s.%s%s", filenameFor(component), UUIDGen.getTimeUUID(), TMP_EXT);
}
public File fileFor(Component component)
{
return new File(filenameFor(component));
}
public String filenameFor(Component component)
{
return baseFilename() + separator + component.name();
}
public String baseFilename()
{
StringBuilder buff = new StringBuilder();
buff.append(directory).append(File.separatorChar);
appendFileName(buff);
return buff.toString();
}
private void appendFileName(StringBuilder buff)
{
buff.append(version).append(separator);
buff.append(generation);
buff.append(separator).append(formatType.name);
}
public String relativeFilenameFor(Component component)
{
final StringBuilder buff = new StringBuilder();
if (Directories.isSecondaryIndexFolder(directory))
{
buff.append(directory.getName()).append(File.separator);
}
appendFileName(buff);
buff.append(separator).append(component.name());
return buff.toString();
}
public SSTableFormat getFormat()
{
return formatType.info;
}
/** Return any temporary files found in the directory */
public List<File> getTemporaryFiles()
{
File[] tmpFiles = directory.listFiles((dir, name) ->
name.endsWith(Descriptor.TMP_EXT));
List<File> ret = new ArrayList<>(tmpFiles.length);
for (File tmpFile : tmpFiles)
ret.add(tmpFile);
return ret;
}
public static boolean isValidFile(File file)
{
String filename = file.getName();
return filename.endsWith(".db") && !LEGACY_TMP_REGEX.matcher(filename).matches();
}
/**
* Parse a sstable filename into a Descriptor.
* <p>
* This is a shortcut for {@code fromFilename(new File(filename))}.
*
* @param filename the filename to a sstable component.
* @return the descriptor for the parsed file.
*
* @throws IllegalArgumentException if the provided {@code filename} does not point to a valid sstable filename. This could
* mean either that the filename doesn't look like a sstable file, or that it is for an old and unsupported
* version.
*/
public static Descriptor fromFilename(String filename)
{
return fromFilename(new File(filename));
}
/**
* Parse a sstable filename into a Descriptor.
* <p>
* SSTable files are all located within subdirectories of the form {@code <keyspace>/<table>/}. Normal sstables
* are directly within that subdirectory structure, while secondary index, backup and snapshot files are each inside an
* additional subdirectory. The files themselves have the form:
* {@code <version>-<gen>-<format>-<component>}.
* <p>
* Note that this method will only successfully parse sstable files of supported versions.
*
* @param file the {@code File} object for the filename to parse.
* @return the descriptor for the parsed file.
*
* @throws IllegalArgumentException if the provided {@code file} does not point to a valid sstable filename. This could
* mean either that the filename doesn't look like a sstable file, or that it is for an old and unsupported
* version.
*/
public static Descriptor fromFilename(File file)
{
return fromFilenameWithComponent(file).left;
}
/**
* Parse a sstable filename, extracting both the {@code Descriptor} and {@code Component} part.
*
* @param file the {@code File} object for the filename to parse.
* @return a pair of the descriptor and component corresponding to the provided {@code file}.
*
* @throws IllegalArgumentException if the provided {@code file} does not point to a valid sstable filename. This could
* mean either that the filename doesn't look like a sstable file, or that it is for an old and unsupported
* version.
*/
public static Pair<Descriptor, Component> fromFilenameWithComponent(File file)
{
// We need to extract the keyspace and table names from the parent directories, so make sure we deal with the
// absolute path.
if (!file.isAbsolute())
file = file.getAbsoluteFile();
String name = file.getName();
List<String> tokens = filenameSplitter.splitToList(name);
int size = tokens.size();
if (size != 4)
{
// This is an invalid sstable file for this version. But to provide a more helpful error message, we detect
// old format sstable, which had the format:
// <keyspace>-<table>-(tmp-)?<version>-<gen>-<component>
// Note that we assume it's an old format sstable if it has the right number of tokens: this is not perfect
// but we're just trying to be helpful, not perfect.
if (size == 5 || size == 6)
throw new IllegalArgumentException(String.format("%s is of version %s which is now unsupported and cannot be read.",
name,
tokens.get(size - 3)));
throw new IllegalArgumentException(String.format("Invalid sstable file %s: the name doesn't look like a supported sstable file name", name));
}
String versionString = tokens.get(0);
if (!Version.validate(versionString))
throw invalidSSTable(name, "invalid version %s", versionString);
int generation;
try
{
generation = Integer.parseInt(tokens.get(1));
}
catch (NumberFormatException e)
{
throw invalidSSTable(name, "the 'generation' part of the name doesn't parse as a number");
}
String formatString = tokens.get(2);
SSTableFormat.Type format;
try
{
format = SSTableFormat.Type.validate(formatString);
}
catch (IllegalArgumentException e)
{
throw invalidSSTable(name, "unknown 'format' part (%s)", formatString);
}
Component component = Component.parse(tokens.get(3));
Version version = format.info.getVersion(versionString);
if (!version.isCompatible())
throw invalidSSTable(name, "incompatible sstable version (%s); you should have run upgradesstables before upgrading", versionString);
File directory = parentOf(name, file);
File tableDir = directory;
// Check if it's a secondary index directory (note that this doesn't exclude it from also being inside a backup or snapshot)
String indexName = "";
if (Directories.isSecondaryIndexFolder(tableDir))
{
indexName = tableDir.getName();
tableDir = parentOf(name, tableDir);
}
// Then it can be a backup or a snapshot
if (tableDir.getName().equals(Directories.BACKUPS_SUBDIR))
tableDir = tableDir.getParentFile();
else if (parentOf(name, tableDir).getName().equals(Directories.SNAPSHOT_SUBDIR))
tableDir = parentOf(name, parentOf(name, tableDir));
String table = tableDir.getName().split("-")[0] + indexName;
String keyspace = parentOf(name, tableDir).getName();
return Pair.create(new Descriptor(version, directory, keyspace, table, generation, format), component);
}
private static File parentOf(String name, File file)
{
File parent = file.getParentFile();
if (parent == null)
throw invalidSSTable(name, "cannot extract keyspace and table name; make sure the sstable is in the proper sub-directories");
return parent;
}
private static IllegalArgumentException invalidSSTable(String name, String msgFormat, Object... parameters)
{
throw new IllegalArgumentException(String.format("Invalid sstable file " + name + ": " + msgFormat, parameters));
}
public IMetadataSerializer getMetadataSerializer()
{
return new MetadataSerializer();
}
/**
* @return true if the current Cassandra version can read the given sstable version
*/
public boolean isCompatible()
{
return version.isCompatible();
}
@Override
public String toString()
{
return baseFilename();
}
@Override
public boolean equals(Object o)
{
if (o == this)
return true;
if (!(o instanceof Descriptor))
return false;
Descriptor that = (Descriptor)o;
return that.directory.equals(this.directory)
&& that.generation == this.generation
&& that.ksname.equals(this.ksname)
&& that.cfname.equals(this.cfname)
&& that.version.equals(this.version)
&& that.formatType == this.formatType;
}
@Override
public int hashCode()
{
return hashCode;
}
}
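The parsing logic above expects modern sstable names of the form `<version>-<generation>-<format>-<component>`. As a minimal standalone sketch of that tokenization (not Cassandra code — the class name and the example filename "nb-1-big-Data.db" are illustrative assumptions), the same split that `fromFilenameWithComponent` performs with Guava's `Splitter` can be reproduced with plain `java.lang.String.split`:

```java
/**
 * Standalone sketch (not part of Cassandra) of the four-token filename
 * scheme that Descriptor.fromFilenameWithComponent expects:
 *     <version>-<generation>-<format>-<component>
 * e.g. the hypothetical name "nb-1-big-Data.db".
 */
public class FilenameSketch
{
    /** Split an sstable file name on '-' into its four expected tokens. */
    public static String[] tokensOf(String name)
    {
        String[] tokens = name.split("-");
        if (tokens.length != 4)
            throw new IllegalArgumentException("not a supported sstable file name: " + name);
        return tokens;
    }

    public static void main(String[] args)
    {
        String[] t = tokensOf("nb-1-big-Data.db");
        System.out.println(java.util.Arrays.toString(t)); // [nb, 1, big, Data.db]
    }
}
```

Running the class prints the four tokens; the real `Descriptor` additionally validates the version and format tokens against the known `SSTableFormat` implementations and walks the parent directories to recover the keyspace and table names.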
|
One-Carbon Metabolism Links Nutrition Intake to Embryonic Development via Epigenetic Mechanisms Beyond energy production, nutrient metabolism plays a crucial role in stem cell lineage determination. Changes in metabolism based on nutrient availability and dietary habits impact stem cell identity. Evidence suggests a strong link between metabolism and epigenetic mechanisms occurring during embryonic development and later life of offspring. Metabolism regulates epigenetic mechanisms such as modifications of DNA, histones, and microRNAs. In turn, these epigenetic mechanisms regulate metabolic pathways to modify the metabolome. One-carbon metabolism (OCM) is a crucial metabolic process involving transfer of the methyl groups leading to regulation of multiple cellular activities. OCM cycles and its related micronutrients are ubiquitously present in stem cells and feed into the epigenetic mechanisms. In this review, we briefly introduce the OCM process and involved micronutrients and discuss OCM-associated epigenetic modifications, including DNA methylation, histone modification, and microRNAs. We further consider the underlying OCM-mediated link between nutrition and epigenetic modifications in embryonic development. Introduction Nutrition encompasses the relationships between development and a multitude of processes such as ingestion and digestion of food for metabolism and synthesis of nutrients and is profoundly influenced by various lifestyle factors and eating habits. Different dietary factors like carbohydrates, proteins, lipids, and microelements are all "fundamental materials" for organism development. These nutrient substances and their metabolites not only supply adequate energy for cell activities but also play regulatory roles in various pathways of basal metabolism. Pregnancy is a critical period of cell division and differentiation occurring in utero. 
The maternal nutritional status greatly influences the fetal development, pregnancy outcome, and further disease development of offspring. In the early stages of fetal development, the stem cell fate determination is regulated by epigenetic modification, which is closely related with the metabolic supply from maternal nutrition intake. The remarkable breakthroughs in exploring epigenetic mechanisms have coincided with the focus on the roles of diet and nutrient metabolites in fetal development. Several recent studies reported a potential interplay between gene expression and metabolic microenvironment, which is involved in modulating and regulating the epigenome of cells during early development and stem cell fate determination. The one-carbon metabolism (OCM) is a vital metabolic process involved in the methyl group donation or transfer during cellular activities. These metabolic pathways utilizing one-carbon unit and related micronutrients provide essential signals involved in the interplay between biochemical pathways and epigenetic mechanisms. In this review, we summarize recent studies on the interaction between epigenetics and nutrition underlying one-carbon metabolism, including their roles in early life development and stem cell fate determination. We also highlight the identification of potential molecular targets, with an update on modulating cell fate as a therapeutic strategy. One-Carbon Metabolism and Related Micronutrients 2.1. One-Carbon Metabolism (OCM). During the process of embryogenesis, metabolites and associated biochemical pathways are essential for cellular activity and stem cell fate determination. Among these metabolic processes, OCM is widely studied for the effect of one-carbon addition, transfer, or removal on cellular activity. OCM is a cyclical network that includes a series of processes such as folate and methionine cycles, nucleotide synthesis, and methyl transferase reactions ( Figure 1). 
Various metabolites in these cycles participate in the methyl (one-carbon units) group transfer and are subsequently involved in major epigenetic and epigenomic mechanisms. Methionine and folate cycles are entwined and contribute to the methyl group transfers in key methylation reactions that may cause epigenetic changes in cells. Under an ATP-driven reaction, methionine, the immediate source of the methyl groups, is initially converted into S-adenosyl methionine (SAM) by methionine adenosyl transferase (MAT). SAM then actively contributes the methyl group to DNA, proteins, and other metabolites, via reactions catalyzed by substrate-specific methyltransferases. The S-adenosyl homocysteine (SAH), a byproduct generated from the methylation cycles, is subsequently reversibly cleaved into homocysteine (Hcy). During these cycles, the released methyl groups become an essential signal participating in cellular methyltransferase reactions feeding into epigenetic mechanisms. Generally, cellular methyltransferases show a higher affinity of binding SAH than SAM. Thus, almost all the SAM-dependent methylation reactions rely on SAH removal. Methionine can be regenerated via the process of folate cycle, which involves remethylation of Hcy by 5-methyltetrahydrofolic acid (5-methyl-THF) to form methionine in the presence of vitamin B 12 as a cofactor. Notably, 5-methyl-THF is a one-carbon donor playing a role in the methyl group transfers underlying the process of amino acid and vitamin metabolism. OCM-Related Micronutrients. Methionine is an essential amino acid and primary methyl donor in the methylation cycle of OCM. Notably, methionine metabolism can be influenced by nutritional deficiencies of relevant cosubstrates and coenzymes derived from vitamin B complex and abnormalities in their metabolism. Vitamin B family consists of eight compounds, which function as coenzymes in synergistic reactions. 
Among these, vitamin B 9 (folate) is the most studied owing to its crucial role in cellular metabolism during embryonic development. Folate in OCM acts as a coenzyme in the formation of tetrahydrofolate (THF), which is involved in the methyl group transfers. Vitamins B 6 (pyridoxine) and B 12 (cobalamin) are also indispensable for their functions in the folate cycle as cofactors in OCM. B 12, as mentioned above, serves as a cofactor during the regeneration of methionine, while B 6 is essential for the transfer of sulfur (thiol) in the transsulfuration pathway of Hcy. Timely and optimal supplementation of vitamin B from food and dietary supplements during the periconceptional period is known to promote neural tube development and protect against birth defects in offspring. Choline and betaine are important metabolites widely present in mammals and plants. Under conditions of folate deficiency, choline and betaine provide the methyl groups and catalyze the conversion of Hcy into methionine in an alternative pathway. Since the concentrations of choline and betaine were found to be higher in the umbilical cord than in the maternal plasma, they are likely required for fetal development. Moreover, studies with animal models suggested that maternal choline deficiency or supplementation has effects on neuron development during the second trimester of gestation and the later development of offspring. The status of folate, cobalamin, choline, and betaine and their interactions during pregnancy have direct effects on OCM and subsequently regulate fetal growth and pregnancy outcome. OCM and its related nutrient substances are ubiquitously present in stem cells during the early stage of fetal development. The maternal dietary intake influences the key metabolic reactions in OCM and potentially participates in subsequent DNA synthesis and epigenetic modification via methylation reactions.
As a result, OCM influences gene expression and cellular functions such as proliferation, metabolism, pluripotency, and cytodifferentiation and may regulate the growth of the embryo and fetus and even affect future disease development in offspring. Mechanisms of Epigenetic Modification Epigenetics involves the study of changes in gene expression without any fundamental alterations in the DNA sequence. The genome can be functionally modified at several levels of regulation without changing the nucleotide sequence that is genetically inherited. The complex epigenetic alterations include DNA methylation, histone modifications, chromatin remodeling, and noncoding RNA (ncRNA) regulation. These epigenetic modifications converge to modulate chromatin structure and transcription programs, allowing or preventing the access of the transcriptional machinery to genomic information. Thus, the expression of gene sequences can be "switched on or off" for timely gene activation or repression during cell lineage determination. Various studies have revealed that the epigenome profiles differ in specific cell types and differentiation stages. 3.1. DNA Methylation. DNA methylation describes a process wherein the methyl groups are added to DNA bases, such as cytosine and adenine. The methylation process does not change the DNA sequence but may affect the activity of a DNA segment. The methylation status of a DNA sequence regulates gene expression by modulating the chromatin structure and consequently regulates the development and maintenance of cellular homeostasis. The pattern of DNA methylation in mammals is mostly erased and then reestablished between generations, with the demethylation and remethylation processes occurring each time during early embryogenesis. It should be noted that the DNA methylation at individual genomic regions is a dynamic pattern influenced by nutritional, environmental, and other factors.
A family of DNA methyltransferases (DNMTs) catalyzes these methylation reactions. DNMTs, associated with the methylation cycle of OCM, attach the methyl groups to the carbon-5 position of cytosine, resulting in the generation of 5-methylcytosine. These epigenetic processes occur during specific stages of organism development and dynamically change during the lifespan. Histone Modification. Nucleosomes, the basic structural units of chromatin, are formed by DNA sequences wrapped around histone proteins (H2A, H2B, H3, and H4). The amino-terminal tails of histones can be biochemically modified in multiple ways, including methylation, phosphorylation, acetylation, and ubiquitination. Posttranslational modifications of histone proteins result in distinct landscapes in the cellular epigenome and determine the cell lineage fate by regulating transcriptional and metabolic activities. Studies have uncovered that the histone modification patterns can be diagnostic for the cell type and differentiation stage in the embryos and embryonic stem cells. Among these modifications, methylation of histones can modulate gene transcription depending on how many methyl groups are attached and which amino acids are in the methylated histones. Histone methylation status is mediated by the histone methyltransferase and demethylases, which donate or transfer the methyl groups as part of OCM. These histone-modifying enzymes are modulated by maternal dietary habits and nutritional intake and are linked to the early development of offspring as discussed below. 3.3. MicroRNA. Noncoding RNA (ncRNA) is a group of regulatory RNAs that do not code for a protein, but rather function to regulate gene expression at multiple regulatory levels, thereby influencing cellular physiology and development. NcRNAs include long noncoding RNA (lncRNA), microRNA (miRNA), and small interfering RNA (siRNA). 
Among these, miRNA is widely studied for its function in various cellular activities including proliferation, differentiation, and apoptosis. miRNA is a category of short (~21 nucleotides) ncRNAs that affect gene expression through a posttranscriptional mechanism, wherein the miRNA directly binds to the 3′-untranslated region (3′-UTR) of a target mRNA for subsequent repression or degradation. Studies have uncovered the expression profiles and regulatory roles of miRNAs during embryogenesis and early life development. Comparative analysis revealed dynamic changes in miRNAs and their targets during embryonic stem cell (ESC) maintenance and differentiation. Notably, miRNAs were secreted and transferred into the uterine fluid, whose contents were proposed to be involved in a crosstalk between the mother and conceptus. The maternal nutritional environment undoubtedly affects the in utero environment and the miRNAs of either origin. Metabolites Play a Role in Epigenetic Mechanism Stem cell fate determination is affected by changes in transcriptional programs, which lead to a defined cell lineage under certain microenvironmental stimuli. The important role of epigenetics in driving stem cell fate has been widely investigated at and between different regulatory levels such as the chromosomal, transcriptional, and posttranscriptional levels. Recent studies reported evidence that the regulation of epigenetics not only affects the chemical modification of DNA and histones but also is closely linked with the nutritional status. An essential role of nutrition and nutrition-related metabolism is generating amino acids and other metabolites in rapidly dividing cells. Furthermore, the metabolite levels in stem cells have a direct influence on the epigenome through histone and DNA modifications and the expression of miRNAs.
Generally, nutrition and micronutrients involved in metabolic pathways can interfere with epigenetic mechanisms in different ways: the methyl groups from OCM are utilized in DNA methylation and histone modification, shifting the activity of methyltransferases. The metabolic status alters miRNA profiles, and in turn, the OCM-related genes can be regulated by miRNAs. For these reasons, micronutrients and metabolic status, influenced by dietary habits, play an essential role in regulating epigenetic modification and stem cell determination during the early stage of fetal development. In humans, micronutrients from the diet influence the production of the methyl groups from OCM and subsequently affect the methylation of DNA. Different feeding strategies of female larvae were found to result in two different phenotypes in honeybees. Barchuk et al. found a total of 240 differentially expressed genes that were activated in early larval stages stimulated by different nutritional statuses. DNA methylation, influenced by the nutritional input, further impacted the honeybee's developmental fate. Among OCM-related micronutrients, methionine is vital for epigenetic reactions that methylate cytosine in CpG islands. High dietary supplementation of methionine alters mammalian OCM and increases the DNA methylation status, thus potentially regulating the expression of epigenetically labile genes. In the folate cycle of OCM, folate is catabolized to a series of metabolites that serve as the methyl group donors, which feed into the methylation cycle and convert Hcy to methionine (Figure 1). Upon feeding murine offspring with a low-folate diet, epigenetic marks were observed to persist into adulthood. Some studies reported that the maternal folate intake can influence the methyl pool in folate-mediated OCM and the patterns of DNA methylation in the placenta. Additionally, other B vitamins also act as cofactors to support methylation reactions.
Maternal vitamin B 12 level in serum was inversely correlated with the global methylation status of offspring at birth. Maternal choline and betaine intake have potential effects on the methylation process in male infants' cord blood. Nutrition can affect the utilization of the methyl groups by shifting the activity of the methyltransferases catalyzing the methylation cycle. SAM and SAH levels can indicate transmethylation potential and methylation status to a certain extent. SAM is converted into SAH by DNMT; conversely, a high SAH concentration inhibits DNMT activity. As described by Yi et al., the high affinity of cellular methyltransferases for SAH results in reduced methylation reactions. It was suggested that a deficiency in the folate cycle might increase SAH levels and thereby negatively affect the cellular methylation reactions. In addition, glycine N-methyltransferase (GNMT) also regulates the ratio of SAM/SAH in the methylation cycle, and its enzymatic activity was further found to be inhibited by 5-methyl-THF in the folate cycle. Thus, the transmethylation pathway is closely related to the methionine and folate-related cycles, which in turn are associated with several micronutrients. If these micronutrient levels are altered, these pathways may cause compensatory changes that influence the DNA methylation status. It has been revealed that the dynamic DNA methylation patterns throughout the life period are regulated by the OCM process. OCM and Histone Modification. Methyl deficiency can also influence the regulation of histone modifications by the OCM pathway. The effects of a methyl-deficient diet on histone methylation patterns were found to be similar to those caused by the alteration of DNA methylation resulting from deficiency of the methyl groups.
Various studies identified that a lack of nutrients like methionine, choline, folic acid, and vitamin B 12 causes aberrant SAM content and impacts the histone modification profiles; as a result, the associated epigenomic changes influence cell activity and lineage fate. The metabolome could regulate epigenetic modifications from preimplantation to postimplantation during the embryonic stem cell transition in early life development. In mouse ESCs, the histone methylation marks can be regulated by threonine deficiency leading to decreased accumulation of SAM. In another study with human ESCs, the depletion of methionine was found to decrease SAM levels, leading to a decrease in H3K4me3 marks and defects in cellular self-renewal. These two studies indicate the crucial role of SAM in regulating ESC differentiation. Mechanistically, these studies focused on threonine and SAM metabolism associated with energy production and acetyl-CoA metabolism. The term "methylation index" was used to describe the ratio of SAM to SAH; the influence of SAM/SAH in embryonic stem cells is an important part of the interaction between micronutrients and epigenetics. Further studies identified that an aberrant SAM/SAH status caused by different levels of dietary methyl donors directly affected histone modifications. Zhou et al. reported that an imbalanced methyl diet resulted in a decrease in SAM level and an upregulation of histone lysine methyltransferase-(KMT-) 8 level in the livers of mice. However, a methyl-deficient diet caused a decrease in histone H3K9me3, H3K9ac, and H4K20me3 in hepatic tissues, as a result of which the cell cycle arrest was released. In intestinal stem cells, deprivation of methionine also resulted in cell proliferation and promoted lineage differentiation. Furthermore, Mentch et al. revealed that methionine metabolism plays a key role in regulating SAM and SAH. This dynamic interplay causes changes in H3K4me3, resulting in altered gene transcription as feedback to regulate OCM.
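The "methylation index" described above is simply the ratio of SAM to SAH concentrations. A minimal sketch of that arithmetic is shown below; the class and method names and the concentration values are hypothetical illustrations, not from the source:

```java
/**
 * Minimal sketch of the "methylation index": the ratio of SAM to SAH
 * concentrations. All names and values here are illustrative assumptions.
 */
public class MethylationIndex
{
    /** Compute SAM/SAH; both concentrations must be in the same units. */
    public static double index(double samConcentration, double sahConcentration)
    {
        if (sahConcentration <= 0)
            throw new IllegalArgumentException("SAH concentration must be positive");
        return samConcentration / sahConcentration;
    }

    public static void main(String[] args)
    {
        // Hypothetical example: 80 nmol/L SAM and 20 nmol/L SAH give an index of 4.0
        System.out.println(index(80.0, 20.0)); // 4.0
    }
}
```

A higher index indicates greater transmethylation potential, consistent with SAH acting as a competitive inhibitor of SAM-dependent methyltransferases.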
Certain amounts of methionine were required in the maintenance of hESCs and induced pluripotent stem cells (iPSCs). Methionine deficiency resulted in reduced intracellular SAM and NANOG expression by triggering the p53-p38 signaling pathway, potentiating the differentiation of hESCs and iPSCs into all three germ layers. Notably, a prolonged period of methionine deficiency resulted in cellular apoptosis. These findings suggest that SAM status in OCM plays a key role in maintaining stem cells in an undifferentiated pluripotent status and in regulating their differentiation process. Additionally, the nuclear lysine-specific demethylase 1 (LSD1), a histone demethylase, was identified to be a folate-binding protein with high affinity. It was suggested that folic acid participates in the demethylation of histones and thereby functions in regulating gene expression. However, its relationship with OCM needs to be further investigated. OCM and miRNA. In mice fed with a methyl-deficient diet, a total of 74 miRNAs were differentially expressed in the liver, suggesting a relationship between the expression of miRNAs and methyl deficiency. To further study the potential ability of miRNA in regulating OCM, a computational Monte Carlo algorithm was used to identify candidate master miRNAs of 42 OCM-related genes. As a result, miR-22 was identified as a novel and top OCM regulator that targeted OCM genes (MAT2A, MTHFR, MTHFD2, SLC19A1, TCblR, and TCN2) involved in the transportation, distribution, and methylation of folate and vitamin B 12. The results also suggested that miR-344-5p/484 and miR-488 function cooperatively as master regulators of the OCM cycle. Using DNA sequencing and by establishing gene network, a total of 48 genes involved in the folate-related OCM pathway were extracted from the KEGG pathway and literature survey. 
Using this information, a comprehensive database was generated including CpGs, miRNAs, copy number variations (CNVs), and single-nucleotide polymorphisms (SNPs) underlying the OCM pathways (http://slsdb.manipal.edu/ocm/). Based on these data, recent studies have focused on the potential mechanisms linking OCM and miRNAs. Song et al. found that the folate exposure of chondrocytes, obtained from individuals with osteoarthritis (OA), caused an increase in levels of hydroxymethyltransferase-(HMT-) 2, methyl-CpG-binding protein-(MECP-) 2, and DNMT-3B. Additionally, they reported that miR-373 and miR-370 may, respectively, target MECP-2 and SHMT-2 to directly regulate OCM. Koturbash et al. and Koufaris et al. demonstrated the inhibitory roles of miR-29b and miR-22 in regulating the expression of OCM-related genes, including methionine adenosyltransferase I, alpha (Mat1a), and 5,10-methylenetetrahydrofolate reductase (MTHFR). These investigations also showed the role of miR-22 as a regulator in stem cell differentiation and cancer development. In recent years, bidirectional analyses of the interplay between miRNA profiles and folate status have been performed, revealing a strong interaction between OCM and miRNA expression. In folate-deficient media, cultured mESCs showed differential expression of 12 miRNAs, failed to proliferate, and underwent apoptosis. In particular, miR-302a was found to mediate these effects of folate by directly targeting the Lats2 gene. Furthermore, maternal folate supplementation during the late stage of development could restore the folate deficiency-associated defects such as the cerebral layer atrophy and interhemispheric suture defects. These findings suggest that folate deficiency-associated consequences might be mediated by miRNAs, indicating their critical roles in mammalian development.
Though multiple lines of evidence clearly show the role of miRNAs in regulating OCM and OCM-related genes, there is still a need to elucidate the direct mechanism linking nutritional status and functional miRNAs, and the potential role of these miRNAs as prognostic factors for diseases. Future of Dietary Epigenetic Modulators For nearly a century, researchers have identified embryonic cells with stable but epigenetically distinct states of pluripotency. Maternal environment and nutrient status can influence the metabolism of the fetus through epigenetic modifications in the early stages of fetal development. OCM is a crucial metabolic process involving methyl transfers from micronutrients in a cyclical process. The donation and transfer of methyl groups link nutrient status to the epigenetic mechanisms involved in the modulation of cellular activities during early development. Notably, epigenetic mechanisms can also modify metabolism and influence the signaling cascades involved in metabolic regulation. In summary, epigenetic factors and metabolic mechanisms form a complex network regulating cell fate determination during developmental processes. Detailed investigation of the potential mechanisms underlying the effect of maternal dietary factors on epigenome modulation in offspring is needed. Furthermore, tailoring dietary components to achieve favorable effects on the epigenetic pattern of the organism may be a promising therapeutic strategy that should be explored. Conflicts of Interest The authors declare that there is no conflict of interest regarding the publication of this paper. 
The present invention relates to a method of manufacturing a connecting rod for coupling the piston and the crankshaft of an automotive reciprocal engine such as a gasoline engine.
Hot-forged medium carbon steel has been used as a material for a conventional connecting rod. In recent years, however, a high-strength sintered material has come to be used for purposes of reducing the machining steps and the machining margin.
Japanese Patent Laid-Open No. 63-128102, for example, discloses a method of manufacturing a sintered connecting rod, in which a provisional formed body of a bearing section made of bearing metal powder is assembled on a provisional formed body of the connecting rod of metal powder, and the resulting assembly is forged or sintered thereby to produce a sintered connecting rod integrated with the bearing section.
The above-mentioned conventional sintered connecting rod, in which the bearing section is integrated with the connecting rod body, can be easily assembled on the crankshaft. However, because the sintering or forging step is performed after a provisional formed body of the bearing section is assembled on a provisional formed body of the connecting rod, the two provisional formed members cannot be easily positioned relative to each other in the course of manufacture, and an increased number of manufacturing steps is required. Also, the low adherence between the connecting rod body and the bearing section makes it difficult to transfer heat away from the bearing section, leading to the problem that the bearing section is easily overheated. These problems of the conventional connecting rod remain unsolved. |
The Anti-Undead Necromancer
The necromancer is an archetype whose power players crave, but whose darkness many gamemasters will look askance at. This is particularly true for gamemasters who want groups of heroes, not simply characters who are a lighter shade of gray than the servants of darkness they’re standing against. Those who deal with the dead are often seen as unclean, secretive, or malicious… but what if they used their powers in service of a greater good?
Like in service to a god of death and fate, for whom the undead are a scourge to be destroyed wherever they’re found? Someone like Midgard’s Charun, for example?
The Mechanics
Before we delve too deep into the story with this one, it’s important to choose what form of necromancer you’re going to become. For example, are you going to go the traditional route with the specialist wizard, gaining command or turn undead? Will you, instead, go the sorcerer route and play a character who is afflicted by the undead bloodline? While there are other ways to focus on necromancy, these are the two biggest archetypes.
Once you’ve decided on your method of achieving the title of necromancer, you need to decide in what ways you’re going to use your powers to fight against the undead. For example, will you use the undead bloodline’s arcana to debuff undead using spells they’re typically immune to? Spells like daze, charm person, or touch of idiocy are all fair game, and they can throw a huge monkey wrench into undead foes’ plans. The same strategy could be adopted by wizards who use the Thanatoptic Spell or Threnodic Spell metamagic feats (Ultimate Magic, p. 157) to overcome an undead foe’s immunity to certain magics.
An alternative strategy is to use the feat Skeleton Summoner (Ultimate Magic, p. 155). This feat modifies your summoned monsters, allowing you to summon skeletons, skeletal champions, and once per day to summon any creature off the list with the skeleton template added onto it for some additional oomph. The benefit of this approach is that, rather than desecrating the dead by raising corpses as zombies, you’re calling out to those from another plane. When their duties are done, they go back where they came from.
The Flavor
You send a thief to catch a thief, and if you need someone to battle the threat of the undead there is no one more savvy or knowledgeable than a necromancer. With a potent array of spells at his or her command, a necromancer who fights against the undead would be able to untie the knots binding them to the material plane, sending them on to where they’re meant to be. Whether they’re priests of a holy order, or simply worshipers who are using their gifts to do the will of a god of death, dealers in death magic are not foes to be taken lightly.
Specialist wizards who choose the Command Undead feat can tear the strings of command away from the puppet masters, forcing the dead to stand aside. Even powerful undead creatures may be forced to bow to the will and authority of the necromancer. Those who have proven their service to a god or goddess of death and fate may even be given the authority to call on the dead for reinforcements. Skeletal beasts may stand before these arcane masters, defending them and their causes. Skeletal warriors fallen in battles both recent and ancient may answer the call from far-off realms, bringing their swords and shields to bear. Even mighty champions of ages past may draw steel on behalf of these necromancers, perhaps hoping to earn mercy when their sins are tallied.
Who your necromancer is, and why he or she serves the Lord or Lady of the afterlife, is up to you. Was he a stillbirth who took a breath as soon as his mother died, granting him a simultaneous familiarity with the graveyard and the birthing room? Is she a dhampir whose heritage has given her terrible magics in addition to an unholy thirst? Was this necromancer a servant made to work in the catacombs? Perhaps the latest in a long line of arcane warriors, trained to fight the undead hordes of an enemy nation?
There are a lot of possibilities. Just remember: not all necromancers wear black.
For more unusual character concepts and tabletop tips, check out Neal F. Litherland’s blog Improved Initiative! You might also be interested in the White Necromancer. |
/*******************************************************************************
* Copyright 2019-2020 Intel Corporation
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*******************************************************************************/
#include <CL/cl.h>
#include "oneapi/dnnl/dnnl_ocl.h"
#include "common/c_types_map.hpp"
#include "common/engine.hpp"
#include "common/stream.hpp"
#include "common/utils.hpp"
#include "gpu/ocl/ocl_engine.hpp"
#include "gpu/ocl/ocl_stream.hpp"
using namespace dnnl::impl;
using namespace dnnl::impl::gpu::ocl;
status_t dnnl_ocl_interop_stream_create(
stream_t **stream, engine_t *engine, cl_command_queue queue) {
bool args_ok = !utils::any_null(stream, engine, queue)
&& engine->runtime_kind() == runtime_kind::ocl;
if (!args_ok) return status::invalid_arguments;
auto *ocl_engine = utils::downcast<ocl_gpu_engine_t *>(engine);
return ocl_engine->create_stream(stream, queue);
}
status_t dnnl_ocl_interop_stream_get_command_queue(
stream_t *stream, cl_command_queue *queue) {
bool args_ok = !utils::any_null(queue, stream)
&& stream->engine()->runtime_kind() == runtime_kind::ocl;
if (!args_ok) return status::invalid_arguments;
auto *ocl_stream = utils::downcast<ocl_stream_t *>(stream);
*queue = ocl_stream->queue();
return status::success;
}
|
/*
* This header is generated by classdump-dyld 1.0
* on Tuesday, November 5, 2019 at 2:45:06 AM Mountain Standard Time
* Operating System: Version 13.0 (Build 17J586)
* Image Source: /System/Library/PrivateFrameworks/SlideshowKit.framework/PlugIns/OpusMarimbaProducer.opplugin/OpusMarimbaProducer
* classdump-dyld is licensed under GPLv3, Copyright © 2013-2016 by <NAME>.
*/
@protocol MRMarimbaHitBlobSupport
@required
-(void)beginGesture:(id)arg1;
-(BOOL)beginLiveUpdateForHitBlob:(id)arg1;
-(CGPoint*)convertPoint:(CGPoint)arg1 toHitBlob:(id)arg2;
-(BOOL)endLiveUpdateForHitBlob:(id)arg1;
-(void)endGesture:(id)arg1;
-(id)blobHitAtPoint:(CGPoint)arg1 fromObjectsForObjectIDs:(id)arg2 localPoint:(CGPoint*)arg3;
-(BOOL)getOnScreenVertices:(CGPoint)arg1 forHitBlob:(id)arg2;
-(void)doGesture:(id)arg1;
-(void)cancelGesture:(id)arg1;
@end
|
def adjustedTransp(self):
    """
    Return the effective TRANSP value for this component: the stored value if
    present, otherwise TRANSPARENT for a date-only (all-day) VEVENT and OPAQUE
    for everything else.
    """
    transp = self.propertyValue("TRANSP")
    if transp is None and self.name() == "VEVENT" and self.propertyValue("DTSTART").isDateOnly():
        return "TRANSPARENT"
    else:
        return "OPAQUE" if transp is None else transp |
Rapid identification and preparative isolation of antioxidant components in licorice. This study employed an online HPLC-2,2'-azinobis(3-ethylbenzothiazoline-6-sulfonate) radical cation (ABTS(+*)) bioassay to rapidly determine antioxidant compounds occurring in the licorice extract of Glycyrrhiza uralensis. The negative peaks of the ABTS(+*) radical scavenging detection system, which indicated the presence of antioxidant activity, were monitored by measuring the decrease in absorbance at 734 nm. The ABTS(+*)-based antioxidant activity profile showed that three peaks exhibited antioxidant activity, and the preparative-scale high-speed counter-current chromatography technique was then successfully applied to separate the three peaks I-III in one step from the licorice extract. The high-speed counter-current chromatography was performed using a two-phase solvent system composed of n-hexane-ethyl acetate-methanol-water (6.5:5.5:6:4, v/v). The yields of the three peaks, dehydroglyasperin C (I, 95.1% purity), dehydroglyasperin D (II, 96.2% purity), and isoangustone A (III, 99.5% purity), were 10.33, 10.43, and 6.7%, respectively. The chemical structures of the purified dehydroglyasperin C (I), dehydroglyasperin D (II), and isoangustone A (III) were identified by ESI-MS and ¹H- and ¹³C-NMR analysis. |
# -*- coding: utf-8 -*-
from django.core.exceptions import ValidationError
from django.forms import ModelForm, Form
from django.forms.models import inlineformset_factory
from django.utils import timezone
from django.utils.translation import ugettext_lazy as _
from betterforms.multiform import MultiModelForm
from collections import OrderedDict
from datetimewidget.widgets import DateTimeWidget
from redactor.widgets import RedactorEditor
from questions.models import *
class QuestionForm(ModelForm):
    def __init__(self, *args, **kargs):
        # Since we don't want to send the client info about the user's request,
        # we created the variable below to store that info and validate it.
        self.form_asker = None
        super(QuestionForm, self).__init__(*args, **kargs)
class Meta:
model = Question
fields = [ 'question', 'context', 'date_begin', 'date_end', 'allow_anonymous_voter', 'data_require_vote', 'hide_data', 'public','asker', 'fromIp',]
exclude = ['hide_data','asker', 'fromIp']
labels = {
'question': _('Question title:'),
'context': _('Write something more:'),
'date_begin': _('Open vote in:'),
'date_end': _('Close vote in:'),
'allow_anonymous_voter': _('Allow anonymous vote.'),
'data_require_vote': _('Require vote to show charts.'),
'hide_data': _('Hide chart.'),
'public': _('Shown in \'Random question\'.'),
}
widgets = {
'date_begin': DateTimeWidget(usel10n = True, bootstrap_version=3),
'date_end': DateTimeWidget(usel10n = True, bootstrap_version=3),
'context': RedactorEditor(
allow_file_upload=False,
allow_image_upload=False,
attrs={'rows': '2', 'cols':'2',}),
#'asker': forms.HiddenInput(),
#'fromIp': forms.HiddenInput(),
}
def clean(self):
################
# RECHECK THIS #
######################################################################################################
# print self.cleaned_data['date_begin']
#
        # It seems that date_begin is a timezone-aware object but doesn't store the zone offset.
        # This makes it impossible to convert it to UTC in order to compare it server-side.
#
# if self.cleaned_data['date_begin'] < timezone.now():
# self.add_error('date_begin', ValidationError(_('Begin date has to be equal or after current time.'), code='error_date_begin'))
"""
print self.cleaned_data
print "Date_begin: ", self.cleaned_data['date_begin']
if self.cleaned_data['date_begin']:
self.add_error('date_begin', ValidationError(_('You need to enter a date to open vote'), code='error_date_begin'))
"""
if 'date_begin' in self.cleaned_data and self.cleaned_data['date_begin'] != None:
if 'date_end' in self.cleaned_data and self.cleaned_data['date_end'] != None:
if self.cleaned_data['date_end'] < self.cleaned_data['date_begin']:
self.add_error('date_end', ValidationError(_('End date has to be after begin date.'), code='error_date_end'))
# Here we use form_asker var that we have created before to validate public in anon question.
if self.form_asker == None and self.cleaned_data['public'] == False:
self.add_error('public', ValidationError(_('Anon questions have to be public.'), code='error_public'))
# When QuestionEditForm inherits from QuestionForm, the update view doesn't load the question
# instance and renders a new object. The overridden __init__ method seems to be causing this issue.
class QuestionEditForm(ModelForm):
class Meta:
model = Question
fields = [ 'question', 'context', 'date_begin', 'date_end', 'hide_data', 'allow_anonymous_voter', 'data_require_vote', 'public','asker', 'fromIp',]
exclude = ['asker', 'fromIp']
labels = {
'question': _('Question title:'),
'context': _('Write something more:'),
'date_begin': _('Open vote in:'),
'date_end': _('Close vote in:'),
'allow_anonymous_voter': _('Allow anonymous voter.'),
'data_require_vote': _('Require vote to show charts.'),
'hide_data': _('Hide chart.'),
'public': _('This question could be shown in \'Random question\'.'),
}
widgets = {
'date_begin': DateTimeWidget(usel10n = True, bootstrap_version=3),
'date_end': DateTimeWidget(usel10n = True, bootstrap_version=3),
#'context': forms.Textarea(attrs={'rows': '2'}),
'context': RedactorEditor(allow_file_upload=False, allow_image_upload=False),
#'asker': forms.HiddenInput(),
#'fromIp': forms.HiddenInput(),
}
def clean(self):
######################################################################################################
# print self.cleaned_data['date_begin']
#
        # It seems that date_begin is a timezone-aware object but doesn't store the zone offset.
        # This makes it impossible to convert it to UTC in order to compare it server-side.
#
# if self.cleaned_data['date_begin'] < timezone.now():
# self.add_error('date_begin', ValidationError(_('Begin date has to be equal or after current time.'), code='error_date_begin'))
if 'date_begin' in self.cleaned_data and self.cleaned_data['date_begin'] != None:
if 'date_end' in self.cleaned_data and self.cleaned_data['date_end'] != None:
if self.cleaned_data['date_end'] < self.cleaned_data['date_begin']:
self.add_error('date_end', ValidationError(_('End date has to be after begin date.'), code='error_date_end'))
# Here we use form_asker var that we have created before to validate public in anon question.
if self.instance.asker == None and self.cleaned_data['public'] == False:
self.add_error('public', ValidationError(_('Anon questions have to be public.'), code='error_public'))
class ReplyForm(ModelForm):
class Meta:
model = Reply
fields = ['replyText', 'question',]
exclude = ['question',]
labels = {
'replyText': _('Reply text'),
}
help_texts = {
#'replyText': _('If you need a long text reply, we recommend you to write here a number/letter reference and make a full description at question level.'),
}
widgets = {
#'question': forms.HiddenInput(),
}
class QuestionReplyMultiForm(MultiModelForm):
form_classes = OrderedDict((
('question', QuestionForm),
('reply', ReplyForm),
))
def save(self, commit=True):
objects = super(QuestionReplyMultiForm, self).save(commit=False)
if commit:
question = objects['question']
question.save()
reply = objects['reply']
reply.question = question
reply.save()
return objects |
# generic imports
import django
import json
import math
import os
import sys
# pyspark imports
# pylint: disable=no-name-in-module
from pyspark.sql.functions import lit
# pylint: enable=no-name-in-module
from pyspark.sql.types import StringType
# check for registered apps signifying readiness, if not, run django.setup() to run as standalone
# pylint: disable=wrong-import-position
if not hasattr(django, 'apps'):
os.environ['DJANGO_SETTINGS_MODULE'] = 'combine.settings'
sys.path.append('/opt/combine')
django.setup()
# import django settings
from django.conf import settings
# import from core
from core.models import CombineBackgroundTask
# import XML2kvp from uploaded instance
try:
from core.xml2kvp import XML2kvp
except:
from xml2kvp import XML2kvp
############################################################################
# Background Tasks
############################################################################
def export_records_as_xml(spark, ct_id):
"""
Function to export multiple Jobs, with folder hierarchy for each Job
Notes:
- exports to s3 as parquet
    - with limited columns, can benefit from parquet's compression
Args:
ct_id (int): CombineBackgroundTask id
"""
# init logging support
spark.sparkContext.setLogLevel('INFO')
log4jLogger = spark.sparkContext._jvm.org.apache.log4j
logger = log4jLogger.LogManager.getLogger(__name__)
# hydrate CombineBackgroundTask
ct = CombineBackgroundTask.objects.get(pk=int(ct_id))
    # clean base path: strip an optional leading "file://" scheme (lstrip would strip
    # characters, not the prefix) and any trailing slash, then re-add the scheme
    clean_path = ct.task_params['output_path']
    if clean_path.startswith('file://'):
        clean_path = clean_path[len('file://'):]
    output_path = "file:///%s" % clean_path.lstrip('/').rstrip('/')
# write DataFrame to S3
if ct.task_params.get('s3_export', False) and ct.task_params.get('s3_export_type', None) == 'spark_df':
# dynamically set credentials
spark.sparkContext._jsc.hadoopConfiguration().set(
"fs.s3a.access.key", settings.AWS_ACCESS_KEY_ID)
spark.sparkContext._jsc.hadoopConfiguration().set(
"fs.s3a.secret.key", settings.AWS_SECRET_ACCESS_KEY)
# init dfs and col_set across all published sets
dfs = []
col_set = set()
# loop through published sets (includes non-set Records)
for folder_name, job_ids in ct.task_params['job_dict'].items():
# get dfs and columns
for job_id in job_ids:
print("Adding job #%s" % job_id)
# get df
df = get_job_as_df(spark, job_id)
# add to total set of columns
col_set.update(df.columns)
# append to dfs
dfs.append(df)
# convert col_set to list
col_set = list(col_set)
logger.info("column final set: %s" % col_set)
# add empty columns to dfs where needed
n_dfs = []
for df in dfs:
n_df = df
for col in col_set:
if col not in df.columns:
n_df = n_df.withColumn(col, lit('').cast(StringType()))
n_dfs.append(n_df)
# get union of all RDDs to write
rdd_to_write = spark.sparkContext.union(
[df.select(col_set).rdd for df in n_dfs])
# repartition
rdd_to_write = rdd_to_write.repartition(
math.ceil(rdd_to_write.count() / settings.TARGET_RECORDS_PER_PARTITION))
# convert to DataFrame and write to s3 as parquet
rdd_to_write.toDF().write.mode('overwrite').parquet(
's3a://%s/%s' % (ct.task_params['s3_bucket'], ct.task_params['s3_key']))
# write to disk
else:
# determine column subset
col_subset = ['document']
# loop through keys and export
for folder_name, job_ids in ct.task_params['job_dict'].items():
# handle single job_id
if len(job_ids) == 1:
# get Job records as df
rdd_to_write = get_job_as_df(
spark, job_ids[0]).select(col_subset).rdd
# handle multiple jobs
else:
rdds = [get_job_as_df(spark, job_id).select(
col_subset).rdd for job_id in job_ids]
rdd_to_write = spark.sparkContext.union(rdds)
# repartition, wrap in XML dec, and write
rdd_to_write.repartition(math.ceil(rdd_to_write.count()/int(ct.task_params['records_per_file'])))\
.map(lambda row: row.document.replace('<?xml version=\"1.0\" encoding=\"UTF-8\"?>', ''))\
.saveAsTextFile('%s/%s' % (output_path, folder_name))
def generate_validation_report(spark, output_path, task_params):
job_id = task_params['job_id']
validation_scenarios = [int(vs_id)
for vs_id in task_params['validation_scenarios']]
# get job validations, limiting by selected validation scenarios
pipeline = json.dumps({'$match': {'job_id': job_id, 'validation_scenario_id': {
'$in': validation_scenarios}}})
rvdf = spark.read.format("com.mongodb.spark.sql.DefaultSource")\
.option("uri", "mongodb://%s" % settings.MONGO_HOST)\
.option("database", "combine")\
.option("collection", "record_validation")\
.option("partitioner", "MongoSamplePartitioner")\
.option("spark.mongodb.input.partitionerOptions.partitionSizeMB", settings.MONGO_READ_PARTITION_SIZE_MB)\
.option("pipeline", pipeline).load()
# get job as df
records_df = get_job_as_df(spark, job_id)
# merge on validation failures
mdf = rvdf.alias('rvdf').join(records_df.alias(
'records_df'), rvdf['record_id'] == records_df['_id'])
# select subset of fields for export, and rename
mdf = mdf.select(
'records_df._id.oid',
'records_df.record_id',
'rvdf.validation_scenario_id',
'rvdf.validation_scenario_name',
'rvdf.results_payload',
'rvdf.fail_count'
)
# if mapped fields requested, query ES and join
if len(task_params['mapped_field_include']) > 0:
# get mapped fields
mapped_fields = task_params['mapped_field_include']
# get mapped fields as df
if 'db_id' not in mapped_fields:
mapped_fields.append('db_id')
es_df = get_job_es(spark, job_id=job_id).select(mapped_fields)
# join
mdf = mdf.alias('mdf').join(es_df.alias(
'es_df'), mdf['oid'] == es_df['db_id'])
# cleanup columns
mdf = mdf.select([c for c in mdf.columns if c != 'db_id']
).withColumnRenamed('oid', 'db_id')
# write to output dir
if task_params['report_format'] == 'csv':
mdf.write.format('com.databricks.spark.csv').option(
"delimiter", ",").save('file://%s' % output_path)
if task_params['report_format'] == 'tsv':
mdf.write.format('com.databricks.spark.csv').option(
"delimiter", "\t").save('file://%s' % output_path)
if task_params['report_format'] == 'json':
mdf.write.format('json').save('file://%s' % output_path)
def export_records_as_tabular_data(spark, ct_id):
"""
Function to export multiple Jobs, with folder hierarchy for each Job
Notes:
- writes to s3 as JSONLines to avoid column names which contain characters
that parquet will not accept
- much less efficient storage-wise, but flexible for the field/column variety
that tabular data has
Args:
ct_id (int): CombineBackgroundTask id
Expecting from CombineBackgroundTask:
output_path (str): base location for folder structure
job_dict (dict): dictionary of directory name --> list of Job ids
- e.g. single job: {'j29':[29]}
- e.g. published records: {'foo':[2,42], 'bar':[3]}
- in this case, a union will be performed for all Jobs within a single key
records_per_file (int): number of XML records per file
fm_export_config_json (str): JSON of configurations to be used
tabular_data_export_type (str): 'json' or 'csv'
"""
# hydrate CombineBackgroundTask
ct = CombineBackgroundTask.objects.get(pk=int(ct_id))
# reconstitute fm_export_config_json
fm_config = json.loads(ct.task_params['fm_export_config_json'])
    # clean base path: strip an optional leading "file://" scheme (lstrip would strip
    # characters, not the prefix) and any trailing slash, then re-add the scheme
    clean_path = ct.task_params['output_path']
    if clean_path.startswith('file://'):
        clean_path = clean_path[len('file://'):]
    output_path = "file:///%s" % clean_path.lstrip('/').rstrip('/')
# write DataFrame to S3
if ct.task_params.get('s3_export', False) and ct.task_params.get('s3_export_type', None) == 'spark_df':
# dynamically set credentials
spark.sparkContext._jsc.hadoopConfiguration().set(
"fs.s3a.access.key", settings.AWS_ACCESS_KEY_ID)
spark.sparkContext._jsc.hadoopConfiguration().set(
"fs.s3a.secret.key", settings.AWS_SECRET_ACCESS_KEY)
# determine column subset
col_subset = ['*']
# loop through keys and export
rdds = []
for folder_name, job_ids in ct.task_params['job_dict'].items():
# handle single job_id
if len(job_ids) == 1:
rdds.extend([get_job_as_df(spark, job_ids[0]).select(
['document', 'combine_id', 'record_id']).rdd])
# handle multiple jobs
else:
rdds.extend(
[get_job_as_df(spark, job_id).select(['document', 'combine_id', 'record_id']).rdd for job_id in
job_ids])
# union all
batch_rdd = spark.sparkContext.union(rdds)
# convert rdd
kvp_batch_rdd = _convert_xml_to_kvp(batch_rdd, fm_config)
# repartition to records per file
kvp_batch_rdd = kvp_batch_rdd.repartition(
math.ceil(kvp_batch_rdd.count() / settings.TARGET_RECORDS_PER_PARTITION))
# convert to dataframe
kvp_batch_df = spark.read.json(kvp_batch_rdd)
# write to bucket as jsonl
kvp_batch_df.write.mode('overwrite').json(
's3a://%s/%s' % (ct.task_params['s3_bucket'], ct.task_params['s3_key']))
# write to disk
else:
# loop through potential output folders
for folder_name, job_ids in ct.task_params['job_dict'].items():
# handle single job_id
if len(job_ids) == 1:
# get Job records as df
batch_rdd = get_job_as_df(spark, job_ids[0]).select(
['document', 'combine_id', 'record_id']).rdd
# handle multiple jobs
else:
rdds = [get_job_as_df(spark, job_id).select(['document', 'combine_id', 'record_id']).rdd for job_id in
job_ids]
batch_rdd = spark.sparkContext.union(rdds)
# convert rdd
kvp_batch_rdd = _convert_xml_to_kvp(batch_rdd, fm_config)
# repartition to records per file
kvp_batch_rdd = kvp_batch_rdd.repartition(
math.ceil(kvp_batch_rdd.count()/int(ct.task_params['records_per_file'])))
# handle json
if ct.task_params['tabular_data_export_type'] == 'json':
_write_tabular_json(
spark, kvp_batch_rdd, output_path, folder_name, fm_config)
# handle csv
if ct.task_params['tabular_data_export_type'] == 'csv':
_write_tabular_csv(spark, kvp_batch_rdd,
output_path, folder_name, fm_config)
def _convert_xml_to_kvp(batch_rdd, fm_config):
"""
Sub-Function to convert RDD of XML to KVP
Args:
batch_rdd (RDD): RDD containing batch of Records rows
fm_config (dict): Dictionary of XML2kvp configurations to use for kvp_to_xml()
Returns
kvp_batch_rdd (RDD): RDD of JSONlines
"""
def kvp_writer_udf(row, fm_config):
"""
Converts XML to kvpjson
"""
# get handler, that includes defaults
xml2kvp_defaults = XML2kvp(**fm_config)
# convert XML to kvp
xml2kvp_handler = XML2kvp.xml_to_kvp(
row.document, return_handler=True, handler=xml2kvp_defaults)
# loop through and convert lists/tuples to multivalue_delim
for k, v in xml2kvp_handler.kvp_dict.items():
if type(v) in [list, tuple]:
xml2kvp_handler.kvp_dict[k] = xml2kvp_handler.multivalue_delim.join(
v)
# mixin other row attributes to kvp_dict
xml2kvp_handler.kvp_dict.update({
'record_id': row.record_id,
'combine_id': row.combine_id
})
# return JSON line
return json.dumps(xml2kvp_handler.kvp_dict)
# run UDF
return batch_rdd.map(lambda row: kvp_writer_udf(row, fm_config))
def _write_tabular_json(spark, kvp_batch_rdd, base_path, folder_name, fm_config):
# write JSON lines
kvp_batch_rdd.saveAsTextFile('%s/%s' % (base_path, folder_name))
def _write_tabular_csv(spark, kvp_batch_rdd, base_path, folder_name, fm_config):
# read rdd to DataFrame
kvp_batch_df = spark.read.json(kvp_batch_rdd)
# load XML2kvp instance
_ = XML2kvp(**fm_config)
# write to CSV
kvp_batch_df.write.csv('%s/%s' % (base_path, folder_name), header=True)
def _write_rdd_to_s3(
spark,
rdd,
bucket,
key,
access_key=settings.AWS_ACCESS_KEY_ID,
secret_key=settings.AWS_SECRET_ACCESS_KEY):
"""
Function to write RDD to S3
Args:
rdd (RDD): RDD to write to S3
bucket (str): bucket string to write to
key (str): key/path to write to in S3 bucket
access_key (str): default to settings, override with access key
secret_key (str): default to settings, override with secret key
"""
# dynamically set s3 credentials
spark.sparkContext._jsc.hadoopConfiguration().set("fs.s3a.access.key", access_key)
spark.sparkContext._jsc.hadoopConfiguration().set("fs.s3a.secret.key", secret_key)
# write rdd to S3
rdd.saveAsTextFile('s3a://%s/%s' % (bucket, key.lstrip('/')))
############################################################################
# Convenience Functions
############################################################################
def get_job_as_df(spark, job_id, remove_id=False):
"""
Convenience method to retrieve set of records as Spark DataFrame
"""
pipeline = json.dumps({'$match': {'job_id': job_id}})
mdf = spark.read.format("com.mongodb.spark.sql.DefaultSource")\
.option("uri", "mongodb://%s" % settings.MONGO_HOST)\
.option("database", "combine")\
.option("collection", "record")\
.option("partitioner", "MongoSamplePartitioner")\
.option("spark.mongodb.input.partitionerOptions.partitionSizeMB", settings.MONGO_READ_PARTITION_SIZE_MB)\
.option("pipeline", pipeline).load()
# if remove ID
if remove_id:
mdf = mdf.select([c for c in mdf.columns if c != '_id'])
return mdf
def get_job_es(spark,
job_id=None,
indices=None,
doc_type='record',
es_query=None,
field_include=None,
field_exclude=None,
as_rdd=False):
"""
Convenience method to retrieve mapped fields from ElasticSearch
Args:
job_id (int): job to retrieve
indices (list): list of index strings to retrieve from
doc_type (str): defaults to 'record', but configurable (e.g. 'item')
es_query (str): JSON string of ES query
        field_include (str): comma-separated list of fields to include in the response
        field_exclude (str): comma-separated list of fields to exclude from the response
as_rdd (boolean): boolean to return as RDD, or False to convert to DF
"""
# handle indices
if job_id:
es_indexes = 'j%s' % job_id
elif indices:
es_indexes = ','.join(indices)
# prep conf
conf = {
"es.resource": "%s/%s" % (es_indexes, doc_type),
"es.output.json": "true",
"es.input.max.docs.per.partition": "10000",
"es.nodes": "%s:9200" % settings.ES_HOST,
"es.nodes.wan.only": "true",
}
# handle es_query
if es_query:
conf['es.query'] = es_query
# handle field exclusion
if field_exclude:
conf['es.read.field.exclude'] = field_exclude
# handle field inclusion
if field_include:
        conf['es.read.field.include'] = field_include
# get es index as RDD
es_rdd = spark.sparkContext.newAPIHadoopRDD(
inputFormatClass="org.elasticsearch.hadoop.mr.EsInputFormat",
keyClass="org.apache.hadoop.io.NullWritable",
valueClass="org.elasticsearch.hadoop.mr.LinkedMapWritable",
conf=conf)
# return rdd
if as_rdd:
return es_rdd
# read json
es_df = spark.read.json(es_rdd.map(lambda row: row[1]))
# return
return es_df
def get_sql_job_as_df(spark, job_id, remove_id=False):
sqldf = spark.read.jdbc(settings.COMBINE_DATABASE['jdbc_url'], 'core_record', properties=settings.COMBINE_DATABASE)
sqldf = sqldf.filter(sqldf['job_id'] == job_id)
# if remove ID
if remove_id:
sqldf = sqldf.select([c for c in sqldf.columns if c != 'id'])
return sqldf
def copy_sql_to_mongo(spark, job_id):
# get sql job
sdf = get_sql_job_as_df(spark, job_id, remove_id=True)
# repartition
sdf = sdf.rdd.repartition(200).toDF(schema=sdf.schema)
# insert
sdf.write.format("com.mongodb.spark.sql.DefaultSource")\
.mode("append")\
.option("uri", "mongodb://%s" % settings.MONGO_HOST)\
.option("database", "combine")\
.option("collection", "record").save()
def copy_sql_to_mongo_adv(spark, job_id, lowerBound, upperBound, numPartitions):
sqldf = spark.read.jdbc(
settings.COMBINE_DATABASE['jdbc_url'],
'core_record',
properties=settings.COMBINE_DATABASE,
column='id',
lowerBound=lowerBound,
upperBound=upperBound,
numPartitions=numPartitions
)
db_records = sqldf.filter(sqldf.job_id == int(job_id))
db_records.write.format("com.mongodb.spark.sql.DefaultSource")\
.mode("append")\
.option("uri", "mongodb://%s" % settings.MONGO_HOST)\
.option("database", "combine")\
.option("collection", "record").save()
|
package com.zahid.factorymethod.matcha;
import java.util.Map;
public class MatchaViewEngine implements ViewEngine {
@Override
public String render(String viewName, Map<String, Object> context) {
return "view rendered by Matcha";
}
}
|
// NewJSONReader returns a json RecordReader which expects to find one json object
// per row of dataset. Using WithChunk can control how many rows are processed
// per record, which is how many objects become a single record from the file.
//
// If it is desired to write out an array of rows, then simply use RecordToStructArray
// and json.Marshal the struct array for the same effect.
func NewJSONReader(r io.Reader, schema *arrow.Schema, opts ...Option) *JSONReader {
rr := &JSONReader{
r: json.NewDecoder(r),
schema: schema,
refs: 1,
chunk: 1,
}
for _, o := range opts {
o(rr)
}
if rr.mem == nil {
rr.mem = memory.DefaultAllocator
}
rr.bldr = NewRecordBuilder(rr.mem, schema)
switch {
case rr.chunk < 0:
rr.next = rr.nextall
case rr.chunk > 1:
rr.next = rr.nextn
default:
rr.next = rr.next1
}
return rr
} |
Complex Langevin dynamics for dynamical QCD at nonzero chemical potential: a comparison with multi-parameter reweighting We study lattice QCD at non-vanishing chemical potential using the complex Langevin equation. We compare the results with multi-parameter reweighting both from $\mu=0$ and phase quenched ensembles. We find a good agreement for lattice spacings below $\approx$0.15 fm. On coarser lattices the complex Langevin approach breaks down. Four flavors of staggered fermions are used on $N_t=4, 6$ and 8 lattices. For one ensemble we also use two flavors to investigate the effects of rooting. I. INTRODUCTION AND OVERVIEW Dense and/or high temperature phases of strongly interacting matter are becoming experimentally accessible nowadays due to heavy ion collision experiments at the Relativistic Heavy Ion Collider, the Large Hadron Collider, and especially the FAIR facility at GSI, as well as astrophysical observations of neutron stars. A theoretical understanding of the dense, strongly interacting phases and the first-principles determination of the phase diagram of QCD as a function of the temperature and chemical potential are still lacking. This is a consequence of the sign problem, which makes lattice calculations at nonzero baryon density challenging. The standard non-perturbative tool for QCD, lattice QCD, is defined by the path integral with the Yang-Mills action $S_{YM}$ of the gluons and the fermion determinant $\det M(\mu)$ on a cubic space-time lattice. At nonzero chemical potential the determinant is non-real, therefore importance sampling methods are not applicable. For a review of ideas to circumvent the sign problem see the literature. One of the ways to avoid the sign problem is using the analyticity of the action and complexifying the field manifolds of the theory with the complex Langevin equation. (See also the related but distinct approach of the Lefschetz thimbles, where the integration contours are pushed into the complex plane.)
After promising initial results, it was noticed that the complex Langevin equation can also deliver convergent but wrong results in some cases. Technical problems could also arise, which are avoided using adaptive step-sizes for the Langevin equation. In the last decade the method has enjoyed increasing attention related to real-time systems, as well as finite-density problems. The method showed remarkable success in the case of the finite density Bose gas or the SU(3) spin model, but the breakdown of the method was also observed a few times. The theoretical understanding of the successes and the failures of the method has improved: it has been proved that provided a few requirements are fulfilled (some 'offline' such as the holomorphicity of the action and the observables, and some 'online' such as the quick decay of the field distributions at infinity) the method will provide correct results. It has recently been demonstrated that complex Langevin simulations of gauge theories are made feasible using the procedure of gauge cooling, which helps to reduce the fluctuations corresponding to the complexified gauge freedom of the theory. This method was first used to solve HDQCD (heavy dense QCD), where the quarks are kept static (their spatial hopping terms are dropped), and it has also been extended to full QCD using light quarks in the staggered as well as the Wilson formulation. Gauge cooling also makes the investigation of QCD with a theta term possible. In this paper we compare results of the reweighting approach and the complex Langevin approach for $N_F = 4$ and $N_F = 2$ QCD using staggered fermions. In Section II we give a brief overview of the complex Langevin method. In Section III we summarize the reweighting method. In Section IV we present our numerical results comparing the reweighting and complex Langevin simulations. Finally, we conclude in Section V. II.
THE COMPLEX LANGEVIN EQUATION The complex Langevin equation is a straightforward generalization of the real Langevin equation. For the link variables $U_{x,\nu}$ of lattice QCD an update with Langevin timestep $\epsilon$ reads: $U_{x,\nu}(t+\epsilon) = \exp\left[ i \lambda_a \left( \epsilon K_{ax,\nu} + \sqrt{\epsilon}\, \eta_{ax,\nu} \right) \right] U_{x,\nu}(t)$, with $\lambda_a$ the generators of the gauge group, i.e. the Gell-Mann matrices, and the Gaussian noise $\eta_{ax,\nu}$. The drift force $K_{ax,\nu}$ is determined from the action $S$ by $K_{ax,\nu} = -D_{ax,\nu} S[U]$, with the left derivative $D_{ax,\nu} f(U) = \partial_\alpha f\!\left( e^{i\alpha\lambda_a} U_{x,\nu} \right) \big|_{\alpha=0}$. In case the drift term is non-real, the manifold of the link variables is complexified to SL(3,C). The original theory is recovered by taking averages of the observables analytically continued to the complexified manifold. For the case of QCD the action of the theory involves the fermionic determinant through the complex logarithm function: $S = S_{YM} - \ln \det M(\mu, U)$. The drift term in turn is given by $K_{ax,\nu} = -D_{ax,\nu} S_{YM} + \mathrm{Tr}\left[ M^{-1}(\mu,U)\, D_{ax,\nu} M(\mu,U) \right]$, where the second term is calculated using one CG inversion per update using noise vectors. The action we are interested in is thus non-holomorphic, and in turn this results in a drift term which has singularities where the fermionic measure $\det M(\mu, U)$ is vanishing. The theoretical understanding of the behavior of the theory with a meromorphic drift term is still lacking, but we have some observations, as detailed below. Such a drift term seems to lead to incorrect results in toy models if the trajectories encircle the origin frequently. In other cases the simulations yield a correct result in spite of a logarithm in the action. An explicit example has been presented where the simulations give correct results in spite of the frequent rotations of the phase of the measure. The condition for correctness is that the distribution of configurations vanishes sufficiently fast (faster than linearly) near the pole. For QCD itself we have a few indications that the poles do not affect the simulations at high temperatures: observing the spectrum of the Dirac operator, comparisons with expansions which use a holomorphic action, and the results presented in this paper.
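The equations above can be illustrated on a toy model. The sketch below is not from the paper: it applies the same kind of update (drift plus real Gaussian noise, with the variable complexified) to a single degree of freedom with the Gaussian 'action' $S(z) = \sigma z^2/2$ and complex $\sigma$, for which the exact result is $\langle z^2 \rangle = 1/\sigma$.

```python
import random

def complex_langevin_gaussian(sigma, eps=0.002, n_therm=50_000, n_steps=1_000_000, seed=0):
    """Toy complex Langevin for S(z) = sigma * z^2 / 2; exact <z^2> = 1/sigma."""
    random.seed(seed)
    z = complex(1.0, 0.0)  # start on the real manifold, like an SU(3) start
    acc = 0j
    for step in range(n_therm + n_steps):
        # drift K = -dS/dz = -sigma * z, plus real noise of variance 2*eps
        z = z - eps * sigma * z + (2 * eps) ** 0.5 * random.gauss(0.0, 1.0)
        if step >= n_therm:
            acc += z * z
    return acc / n_steps

est = complex_langevin_gaussian(1 + 1j)
print(est)  # close to the exact value 1/(1+1j) = 0.5 - 0.5j
```

For $\sigma = 1 + i$ the real-noise process wanders into the complex plane, yet its long-time average reproduces the analytically continued expectation value, just as described above for the complexified link variables.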
It remains to see whether simulations in the confined phase are affected. The 'distance' of a configuration from the original SU(3) manifold can be monitored with the unitarity norm $d = \frac{1}{\Omega} \sum_{x,\nu} \mathrm{Tr}\left( U_{x,\nu} U_{x,\nu}^\dagger - \mathbf{1} \right)$, where $\Omega = N_s^3 N_t$ is the volume of the lattice. In naive complex Langevin simulations this distance grows exponentially, and the simulation breaks down because of numerical problems if it gets too large. This behavior can be countered with gauge cooling, which means that several gauge transformations of the enlarged manifold are performed in the direction of the steepest descent of the unitarity norm. With gauge cooling, the unitarity norm remains bounded at a safe level as long as the $\beta$ parameter of the action is not too small. The value $\beta_{min}$ corresponds to a maximal lattice spacing, which seems to depend weakly on the lattice size, as can be checked easily for the cheaper HDQCD theory. III. REWEIGHTING In the multi-parameter reweighting approach one rewrites the partition function as: $Z(\beta,\mu) = \int \mathcal{D}U\, e^{-S_{YM}(\beta)} \det M(\mu) = \int \mathcal{D}U\, e^{-S_{YM}(\beta_0)} \det M(\mu_0) \left\{ e^{-S_{YM}(\beta) + S_{YM}(\beta_0)} \frac{\det M(\mu)}{\det M(\mu_0)} \right\}$, where $\beta_0$ and $\mu_0$ are chosen such that the second form contains a positive definite measure which can be used to generate the configurations, and the terms in the curly bracket are taken into account as an observable. The expectation value of any observable can then be written in the form $\langle O \rangle_{\beta,\mu} = \langle O\, w \rangle_{\beta_0,\mu_0} / \langle w \rangle_{\beta_0,\mu_0}$, with $w(\beta, \beta_0, \mu, \mu_0)$ being the weights of the configurations defined by the curly bracket above. Note that gauge observables do not explicitly depend on $\mu$, therefore their $\mu$ dependence comes entirely from the weight factors. Fermionic observables, on the other hand, also explicitly depend on the chemical potential. In this paper we use two choices for the original, positive measure ensemble. The first choice is to use $\mu_0 = 0$, i.e. reweighting from zero chemical potential. For any choice of the target $\beta, \mu$ parameters one can find the optimal $\beta_0$ for which the fluctuation of the weights $w(\beta, \mu)$ is minimal. This corresponds to the best reweighting line. We generated configurations at $\mu = 0$ for $\beta$ in the range 4.9 − 5.5.
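The reweighting identity can be checked on a one-dimensional toy integral. In this sketch (an illustration, not the lattice code) samples are drawn from a 'simulation' Gaussian centered at 0 and reweighted to a target Gaussian centered at mu, the weight being the ratio of the two measures:

```python
import math
import random

def reweighted_mean(mu, n=200_000, seed=1):
    """Sample at mu0 = 0 (measure exp(-x^2/2)) and reweight to the target
    measure exp(-(x-mu)^2/2); the exact target mean is mu."""
    random.seed(seed)
    num = den = 0.0
    for _ in range(n):
        x = random.gauss(0.0, 1.0)            # configuration from the mu0 = 0 ensemble
        w = math.exp(mu * x - 0.5 * mu * mu)  # w = p_target(x) / p_0(x)
        num += w * x                          # accumulates <O w>_0 with O = x
        den += w                              # accumulates <w>_0
    return num / den

print(reweighted_mean(0.5))  # close to the exact target mean 0.5
```

For small mu the estimate is accurate; as mu grows, the weight distribution broadens and the average becomes dominated by fewer samples, which is the same overlap problem that limits the lattice reweighting.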
These were then used to reach the entire $\beta, \mu$ plane via multi-parameter reweighting. Our second choice is to use the phase quenched ensemble, i.e. replacing $\det M(\mu_0)$ by $|\det M(\mu)|$ in the formulas above. In this case the reweighting factor contains only the phase of the determinant. For staggered fermions an additional rooting is required; for $N_F$ flavors the weights become $w = \left( \det M(\mu) / |\det M(\mu)| \right)^{N_F/4}$. Since for $N_F < 4$ a fractional power is taken, which has cuts on the complex plane, it is important to choose these cuts such that the weights are analytic for real $\mu$ values. This can be achieved by expressing $\det M(\mu)$ analytically as a function of $\mu$. IV. RESULTS We use the Wilson plaquette action for the gauge sector of the theory and unimproved staggered fermions with $N_F = 4$ flavors if not otherwise noted. We have used three different lattice sizes for this study, $8^3 \times 4$, $12^3 \times 6$ and $16^3 \times 8$, all having the aspect ratio $L_s/L_t = 2$. Our main observables are the plaquette averages, the spatial average of the trace of the Polyakov loop and its inverse $\sum_x \mathrm{Tr}\, P^{-1}(x)/N_s^3$, the chiral condensate and the fermionic density $n$, defined as derivatives of $\ln Z$ with respect to the quark mass and the chemical potential, normalized with the volume of the space-time lattice. We are also interested in the average phase of the fermion determinant, $\langle e^{i\theta} \rangle$, which measures the severity of the sign problem. We perform the complex Langevin simulations using adaptive step-size, with a control parameter which puts the typical step-sizes in the range $\epsilon \approx 10^{-5} - 5 \cdot 10^{-5}$. Using such small step sizes allows us to avoid having to take the $\epsilon \to 0$ limit, as the results agree with the zero Langevin step limit within errors. We use initial conditions on the SU(3) manifold, and allow $\tau = 10 − 30$ Langevin time for thermalization, after which we perform the measurements for another $\tau = 10 − 30$ Langevin time. We checked that proper thermalization is reached by observing that halving the thermalization time leads to consistent results.
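The branch-cut issue mentioned above for the rooted weights can be made concrete with a short numpy experiment (illustrative only, not the analytic continuation used in the paper): taking the principal root of a phase that winds past pi jumps at the cut, while following the phase continuously keeps the rooted weight analytic.

```python
import numpy as np

theta = np.linspace(0.0, 3.0 * np.pi, 3000)  # a phase that winds past pi
det_phase = np.exp(1j * theta)               # stand-in for det M / |det M|

# principal branch: jumps by ~2 where the phase crosses the cut at theta = pi
principal = np.sqrt(det_phase)
# continuous branch: unwrap the phase before halving it
continuous = np.exp(0.5j * np.unwrap(np.angle(det_phase)))

print(np.max(np.abs(np.diff(principal))))    # large: branch-cut discontinuity
print(np.max(np.abs(np.diff(continuous))))   # small: smooth in theta
```

Tracking the phase of the determinant as an analytic function of mu plays the same role as np.unwrap here: it fixes a consistent branch of the fractional power.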
We have determined the pion masses as well as the lattice spacing using the $w_0$ scale for several quark masses, see Table I. One sees that choosing the quark masses $ma = 0.05$ for the $N_t = 4$ lattice, $ma = 0.02$ for the $N_t = 6$ lattice and $ma = 0.01$ for the $N_t = 8$ lattice, in the vicinity of the critical temperature we have $m_\pi/T_c \approx 2.2 − 2.4$. We have additionally investigated the $N_t = 8$ lattice with $am = 0.05$, which corresponds to the rather heavy pion mass of $m_\pi/T_c \approx 4.8$. First we have tested the theory at a fixed $\beta = 5.4$ at $N_t = 4$ as a function of $\mu$, which is well above the deconfinement transition, which at $\mu = 0$ and $ma = 0.05$ is at $\beta_c \approx 5.04$. In Fig. 1 we show the comparison of the gauge observables: plaquette averages and Polyakov loops. We generated $O(10^4)$ independent configurations in the $\mu = 0$ ensemble with the usual HMC algorithm (using every 50th configuration of the Markov chain), and we calculated the reweighting as detailed in Section III. One notes that the reweighting performs well for small chemical potentials $\mu/T < 1 − 1.5$, where there is a nice agreement between reweighting and CLE. The errors of the reweighting approach start to grow large as one increases $\mu$ above $1.5T$, where the average of the reweighting is dominated by a few configurations. This is the manifestation of the overlap problem: the ensemble we have sampled has typical configurations which are not the typical configurations of the ensemble we wish to study. Next we turn to the fermionic observables: the chiral condensate and the fermionic density in Fig. 2. One notes that the reweighting of these quantities is possible to much higher values of $\mu/T$. This is the consequence of their explicit dependence on $\mu$, which dominates their change as the chemical potential is changed. This is in contrast to the gauge observables in Fig. 1, where the change is given entirely by the change in the measure of the path integral.
The downward turn of the Polyakov loop and its inverse around $\mu/T = 3$ is the result of the phenomenon of saturation: at this chemical potential half of all of the available fermionic states on the lattice are filled, as visible in Fig. 2. This lattice artifact can also be observed with static quarks, and even in the strong coupling expansion. Finally, the average phase factor in Fig. 2 is a good indicator of the severity of the sign problem in the theory. One sees that the average phase in the region $\mu/T > 1.5 − 2$ indeed gets very small. Note that to see agreement between CLE and reweighting one has to be careful to choose the observable to be the analytic continuation of an observable on the SU(3) manifold. In this case we define the phase factor from the analytic continuation of the determinants. In Fig. 3 we show the histogram of the absolute value of the weights of the configurations normalized by the biggest weight in the ensemble. This illustrates the overlap problem: the 'further' one tries to reweight from the original ensemble, the smaller the contribution of an average configuration becomes, and the average becomes dominated by very few configurations. Thus the fluctuations of the result become larger, and even the errorbars are not reliable as the distribution of the observables becomes non-Gaussian. As we show below, this situation improves if one chooses an ensemble 'closer' to the target ensemble: in this case taking the phasequenched ensemble ($|\det M(\mu)|$) instead of the $\mu = 0$ ensemble. In Fig. 4 we use a theory with $N_F = 2$ flavors of fermions, by taking the square root of the staggered fermion determinant. We perform reweighting from the $\mu = 0$ ensemble using ≈ 1700 configurations. To maintain analyticity, in the reweighting procedure one must make sure that no cut of the complex square root function is crossed while the chemical potential is changed.
In the complex Langevin simulations the rooting is implemented simply by multiplying the fermion drift terms with an appropriate factor. We observe good agreement for small values of $\mu/T$, similarly to the case of the $N_F = 4$ theory, indicating that the effect of rooting is the same in these different approaches. B. Reweighting from the phasequenched ensemble We have investigated the efficiency of reweighting from the 'phasequenched' ensemble. In Fig. 5 we show the comparison of the plaquette averages as well as the Polyakov loop averages. We have used about 4000-5000 independent configurations at $N_t = 4$ for each $\mu$ value. One notes that the agreement is much better when compared to the reweighting from the $\mu = 0$ ensemble, also for higher $\mu/T$ values (compare with Fig. 1). Note that this comparison is in the deconfined phase, therefore no phase transition corresponding to pion condensation is expected in the phasequenched ensemble, making the reweighting easier. For the $\beta = 5.4$ value used for these plots, the complex Langevin simulation breaks down in the saturation region $\mu/T > 5$ (not shown in the plots), also signaled by a large 'skirt' of the distributions (meaning a slow, typically power law decay) and the disagreement of the reweighting and CLE simulations, most detectable in the plaquette averages. C. Comparisons as a function of $\beta$ Next we have investigated the appearance of a discrepancy between the CLE and reweighting results at smaller $\beta$ values, arising from a 'skirt' of the complexified distributions. In Fig. 6 we compare reweighting and CLE as a function of the $\beta$ parameter at fixed $\mu/T = 1$ on an $8^3 \times 4$ lattice. One observes that the reweighting is nicely reproduced by the complex Langevin simulations as long as $\beta > 5.10 − 5.15$. Below these values the distributions develop a skirt and CL simulations become unstable, also signaled by large unitarity norm and the conjugate gradient algorithm (needed for the calculation of the drift terms in the CLE) failing to converge.
Similar behavior is detected in the fermionic observables in Fig. 7. This behavior has also been observed in HDQCD simulations, where a limit value $\beta_{min} = 5.6 − 5.7$ was seen, independent of the value of $N_t \geq 6$, and $\beta_{min}$ was slightly smaller for $N_t = 4$. This minimal $\beta$ parameter corresponds to a maximal lattice spacing $a_{max} \approx 0.2$ fm in HDQCD. Apparently the limiting value is different in full QCD, but it turns out that the corresponding lattice spacing is roughly equal for $N_F = 4$ with $am = 0.05$: $a_{max} \approx 0.2 − 0.25$ fm. This breakdown is also visible in histograms of various observables. In Fig. 8 we show the histograms of the spatial plaquettes at various $\beta$ values. One notices that the 'skirt' of the distribution is indeed large at $\beta = 5.1$, where the CLE breaks down. Although a small skirt is also present at $\beta = 5.2$, it is not visited frequently enough to change the averages noticeably. A similar behavior is observed on the finer $12^3 \times 6$ lattice, as depicted in Fig. 9. We used 200-300 configurations for the reweighting procedure on $N_t = 6$ lattices at every $\beta$ value. We observe a limiting $\beta_{min} \approx 5.15$ corresponding to $a_{max} \approx 0.15$ fm, which at $N_t = 6$ allows simulations right down to the transition temperature, but not below. Finally we investigated $N_t = 8$ lattices. In Fig. 10 we show the behavior of the gauge observables, in Fig. 11 the fermionic density. We used 200-300 independent configurations at each $\beta$ value to perform the reweighting. At small $\beta$ values the complex Langevin simulations become unstable also on these lattices, which can be observed in Figs. 10 and 11 by the absence of results. One observes that the CLE breaks down above the lattice spacing $a \approx 0.15$ fm. V. CONCLUSIONS In this paper we have compared complex Langevin simulations of finite density QCD with reweighting from the positive measure ensembles of the phasequenched theory and $\mu = 0$. Both methods have a limited region of parameter space where they are applicable.
The complex Langevin method fails for too small $\beta$ parameters, as noted earlier, but this still allows the exploration of the whole phase diagram in HDQCD. Reweighting from $\mu = 0$ breaks down because of the overlap and sign problems around $\mu/T \approx 1 − 1.5$. In contrast, the reweighting from the phasequenched ensemble in the deconfined phase performs better also for large $\mu$, suggesting that the sign problem is not that severe there. We observe good agreement of these two methods in the region where they are both applicable. The failure of both methods can be assessed independently of the comparison: the complex Langevin simulations develop 'skirted' distributions as the gauge cooling loses its effectiveness, and the errors of the reweighting start to grow large, signaling sign and overlap problems. An important question for the applicability of the complex Langevin method to explore the phase diagram of QCD is the behavior of $\beta_{min}$, the lattice parameter below which gauge cooling is not effective. In this study we have determined that using $N_t = 4$, $N_t = 6$ and $N_t = 8$ lattices (with pion mass $m_\pi/T_c \approx 2.2 − 2.4$) this breakdown prevents the exploration of the deconfinement transition and the location of a possible critical point. |
1 'And now, priests, this commandment is for you.
2 If you will not listen, if you will not sincerely resolve to glorify my name, says Yahweh Sabaoth, I shall certainly lay a curse on you and I shall curse your blessing. Indeed I will lay a curse, for none of you makes this resolve.
3 Now, I am going to break your arm and throw offal in your faces -- the offal of your solemn feasts -- and sweep you away with it.
4 Then you will know that I sent this commandment to you, to affirm my intention to maintain my covenant with Levi, says Yahweh Sabaoth.
5 My covenant was with him -- a covenant of life and peace, and these were what I gave him -- a covenant of respect, and he respected me and held my name in awe.
6 The law of truth was in his mouth and guilt was not found on his lips; he walked in peace and justice with me and he converted many from sinning.
7 The priest's lips ought to safeguard knowledge; his mouth is where the law should be sought, since he is Yahweh Sabaoth's messenger.
9 so I in my turn have made you contemptible and vile to the whole people, for not having kept my ways and for being partial in applying the law.
10 'Is there not one Father of us all? Did not one God create us? Why, then, do we break faith with one another, profaning the covenant of our ancestors?
11 Judah has broken faith; a detestable thing has been done in Israel and in Jerusalem. For Judah has profaned Yahweh's beloved sanctuary; he has married the daughter of an alien god.
12 May Yahweh deprive such an offender of witness and advocate in the tents of Jacob among those who present offerings to Yahweh Sabaoth!
13 'And here is something else you do: you cover the altar of Yahweh with tears, with weeping and wailing, because he now refuses to consider the offering or to accept it from you.
14 And you ask, "Why?" Because Yahweh stands as witness between you and the wife of your youth, with whom you have broken faith, even though she was your partner and your wife by covenant.
15 Did he not create a single being, having flesh and the breath of life? And what does this single being seek? God -- given offspring! Have respect for your own life then, and do not break faith with the wife of your youth.
16 For I hate divorce, says Yahweh, God of Israel, and people concealing their cruelty under a cloak, says Yahweh Sabaoth. Have respect for your own life then, and do not break faith.
17 'You have wearied Yahweh with your talk. You ask, "How have we wearied him?" When you say, "Any evil-doer is good as far as Yahweh is concerned; indeed he is delighted with them"; or when you say, "Where is the God of fair judgement now?" |
/**
* A {@link BindingHandler} decorator that resolves the target {@link BindingHandler} in a lazy fashion. Lazy loading is used to ensure the full initialization
* of the target instance.
*/
public class BindingHandlerLazyLoadDecorator<T> implements BindingHandler<T> {
private URI handlerUri;
private ComponentManager componentManager;
private volatile ScopedComponent delegate;
public BindingHandlerLazyLoadDecorator(URI handlerUri, ComponentManager componentManager) {
this.handlerUri = handlerUri;
this.componentManager = componentManager;
}
public QName getType() {
return inject().getType();
}
public void handleOutbound(Message message, T context) {
inject().handleOutbound(message, context);
}
public void handleInbound(T context, Message message) {
inject().handleInbound(context, message);
}
@SuppressWarnings("unchecked")
private BindingHandler<T> inject() {
if (delegate == null) {
synchronized (this) {
if (delegate == null) {
Component component = componentManager.getComponent(handlerUri);
if (component == null) {
throw new ServiceUnavailableException("Handler component not found: " + handlerUri);
}
if (!(component instanceof ScopedComponent)) {
throw new ServiceRuntimeException("Handler component must be a scoped component type: " + handlerUri);
}
delegate = (ScopedComponent) component;
}
}
}
// resolve the instance on every invocation so that stateless scoped components receive a new instance
return (BindingHandler<T>) delegate.getInstance();
}
} |
import type { ICallerInfo } from './ICallerInfo';
export type VoipEvents = {
registered: undefined;
registrationerror: unknown;
unregistered: undefined;
unregistrationerror: unknown;
connected: undefined;
connectionerror: unknown;
callestablished: undefined;
incomingcall: ICallerInfo;
callterminated: undefined;
hold: undefined;
holderror: undefined;
muteerror: undefined;
unhold: undefined;
unholderror: undefined;
stateChanged: undefined;
};
|
// Checks if the bitrate is valid for Celt.
bool ACMCodecDB::IsCeltRateValid(int rate) {
return (rate >= 48000) && (rate <= 128000);
} |
import { Interface } from "./interface.js";
import { TestConfig } from "./TestConfig";
import { Unit } from "./Unit.js";
export class Test {
public units:Unit[];
public interface:Interface;
public units_failed = 0;
public units_succeeded = 0;
constructor(
public name:string,
public config:TestConfig
) {
this.units = this.config.units.map(x => { return new Unit(x.name, x.validation); });
this.interface = new Interface(this);
if(process.stdout.rows < this.interface.lineCount) console.log("\x1b[1m\x1b[33mIf you can't see all of your tests, try increasing the height of your terminal window!\x1b[0m");
this.interface.update();
}
unit(name:string, verification_callback:Function) {
return new Unit(name, verification_callback);
}
async execute() {
for(var unit of this.units) {
unit.status = "executing";
this._updateInterface();
let success = await unit.execute();
if(success) this.units_succeeded += 1;
else this.units_failed += 1;
this._updateInterface();
}
// Log Test Result
let total_execution_time = this.units.map(x => x.execution_time).reduce((acc, curr) => acc + curr, 0); // initial value guards against an empty unit list
console.log("\x1b[1mResult:\x1b[0m");
console.log(` \x1b[32mSucceeded\x1b[0m: ${this.units_succeeded} | \x1b[31mFailed\x1b[0m: ${this.units_failed}`);
console.log(` \x1b[34mTotal Test Time\x1b[0m: ${total_execution_time}ms`);
console.log();
}
private _updateInterface() {
this.interface.update();
}
get max_unit_name_length():number {
return Math.max( ...this.units.map(x => x.name.length) );
}
} |
def view_definition(self, spec):
template = "CREATE OR REPLACE {6} VIEW {0}.{1} AS \n{3}"
with self.conn.cursor() as cur:
sql = self.view_definition_query
_logger.debug("View Definition Query. sql: %r\nspec: %r", sql, spec)
try:
cur.execute(sql, (spec,))
except psycopg2.ProgrammingError:
raise RuntimeError(f"View {spec} does not exist.")
result = cur.fetchone()
view_type = "MATERIALIZED" if result[2] == "m" else ""
return template.format(*result + (view_type,)) |
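To make the positional indices in the template above concrete, here is a made-up result row (the column layout is an assumption for illustration, mirroring a catalog-style query returning schema, name, relkind, and definition): the computed view_type is appended to the row, so the {6} placeholder picks it up while {0}, {1}, {3} come from the query result.

```python
template = "CREATE OR REPLACE {6} VIEW {0}.{1} AS \n{3}"

# Hypothetical 6-column row: (schema, name, relkind, definition, owner, comment)
result = ("public", "sales_summary", "m",
          "SELECT region, sum(total) FROM sales GROUP BY region;", "app", None)
view_type = "MATERIALIZED" if result[2] == "m" else ""

sql = template.format(*result + (view_type,))
print(sql)
```

For a plain view (relkind other than "m") the empty view_type leaves a harmless double space after "REPLACE".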
HOW MUCH RISK IS TOO MUCH?
We keep hearing that the eyes of Wall Street are on Connecticut. Or, more specifically, they're on a Superior Court room in Rockville, where state officials are duking it out with the barons of the financial services industry.
At stake is the reputation of Forstmann Little & Co., a private equity investment company, and about $125 million in Connecticut pension fund investments.
Although the lawsuit filed by Connecticut is grounded in the vocabulary and technical jargon of the world of high finance, state officials say the issue is very simple: The state pension fund and its 165,000 beneficiaries were duped by an unscrupulous company that engaged in overly risky and unauthorized activities.
"You don't have to be an expert to understand what Forstmann Little did," said state Treasurer Denise Nappier, who filed suit along with Attorney General Richard Blumenthal. "Forstmann Little cheated Connecticut workers and retirees of their hard-earned pension money."
The two officials claim that the firm breached both its contractual obligations and its fiduciary responsibilities and violated securities law. How so? The firm sunk $95 million of Connecticut pension money in XO Communications and $31 million in McLeodUSA. The two foundering telecommunications companies later filed for bankruptcy, rendering the investments just about worthless.
Ms. Nappier and Mr. Blumenthal say the contract with Forstmann Little precluded the firm from investing large sums of money in any one venture, particularly a highly speculative one. Moreover, Forstmann Little is said to have misled, misrepresented and concealed information regarding the nature of the investments.
Naturally, the company disagrees. Forstmann Little officials say Ms. Nappier and Mr. Blumenthal are looking for a scapegoat for a deal gone bad. Sometimes investments pan out and sometimes they don't. In this case, they didn't. C'est la vie.
This lawsuit is expected to drag out over the course of the summer. Though the outcome is unclear, one thing seems certain: If Connecticut prevails, a flood of suits will be unleashed by shareholders unhappy about their losses. No wonder the financial services world is eyeing little Rockville. |
/**
* Created by Administrator on 2017/5/25.
*/
public class ShowContactsAdapter extends RecyclerView.Adapter<ShowContactsAdapter.ViewHolder>{
private List<ContactInfo> contactInfos;
private OnItemClickListener onItemClickListener;
private int count = 0;
//private List<Integer> checkboxUserIdList = new ArrayList<>();
//private boolean[] flags;
private List<Integer> checkedItems;
public ShowContactsAdapter(List<ContactInfo> contactInfos) {
this.contactInfos = contactInfos;
checkedItems = OnekeyForHelpActivity.getCheckedItems();
//flags = new boolean[contactInfos.size()];
}
public void setOnItemClickListener(OnItemClickListener onItemClickListener) {
this.onItemClickListener = onItemClickListener;
}
@Override
public ViewHolder onCreateViewHolder(ViewGroup parent, int viewType) {
return new ViewHolder(LayoutInflater.from(parent.getContext()).inflate(R.layout.show_contacts_item_layout, parent, false));
}
@Override
public void onBindViewHolder(final ViewHolder holder, final int position) {
holder.itemView.setTag(Integer.valueOf(position));
final ContactInfo contactInfo = contactInfos.get(position);
String name = contactInfo.getName();
if(name.length() == 1){
holder.rtv_head.setText(name);
}else if(name.length() == 2 || name.length() == 3 ){
holder.rtv_head.setText(name.substring(1));
}else{
holder.rtv_head.setText(name.substring(2));
}
holder.rtv_head.setTextColor(Color.WHITE);
holder.rtv_head.setFillColor(contactInfo.getHeadColor());
holder.contact_name.setText(contactInfo.getName());
holder.contact_phone.setText(contactInfo.getPhone());
holder.tv_address.setText(contactInfo.getAttribute());
//Option 1: save each checkbox's checked state in a Map<Integer, Boolean> http://blog.csdn.net/qq_16265959/article/details/53399466
//Option 2: keep a boolean array indexed by position http://blog.csdn.net/jiang547860818/article/details/53126990
//Option 3: attach the checked state to the item view as a tag http://www.cnblogs.com/CharlesGrant/p/5171133.html
//Option 4: keep a list of checked positions; add the position when checked, remove it when unchecked
/*holder.cb_agree.setOnCheckedChangeListener(null);
holder.cb_agree.setChecked(flags[position]);
holder.cb_agree.setOnCheckedChangeListener(new CompoundButton.OnCheckedChangeListener() {
@Override
public void onCheckedChanged(CompoundButton buttonView, boolean isChecked) {
flags[position] = isChecked;
}
});*/
holder.cb_agree.setChecked(checkedItems.contains(Integer.valueOf(position)));
//绑定点击事件
holder.itemView.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
//When the item is clicked, toggle its checkbox
// 1. Everything could be selected -- but non-mobile numbers must be rejected
// 2. Once selected it could not be deselected -- should toggle: deselect if selected, select otherwise
// 3. No more than three?
if(!contactInfo.getPhone().matches("^1[34578]\\d{9}$")){
Toast.makeText(v.getContext(), "请选择正确的手机号码", Toast.LENGTH_SHORT).show();
return;
}
if(checkedItems.size() == 3){
//Toast.makeText(v.getContext(), "You have already selected the maximum number of contacts", Toast.LENGTH_SHORT).show();
// return;
//Once the limit is reached we may only remove, never add:
//if the clicked item is already checked, uncheck it;
//if it is not checked, selecting it is not allowed
if(holder.cb_agree.isChecked()){
checkedItems.remove(Integer.valueOf(position));
}else{
Toast.makeText(v.getContext(), "You have already selected the maximum number of contacts", Toast.LENGTH_SHORT).show();
return;
}
}
holder.cb_agree.setChecked(!holder.cb_agree.isChecked());
if(holder.cb_agree.isChecked()){
checkedItems.add(position);
}else{
checkedItems.remove(Integer.valueOf(position));
}
/*if(holder.cb_agree.isChecked()){
holder.cb_agree.setChecked(false);
}else{
holder.cb_agree.setChecked(true);
}*/
/*if(count == 3){
if(holder.cb_agree.isChecked()){
holder.cb_agree.setChecked(false);
count--;
}else{
Toast.makeText(v.getContext(), "You have already selected three contacts", Toast.LENGTH_SHORT).show();
}
return;
}
if(holder.cb_agree.isChecked()){
count--;
}else{
//now selected
count++;
}
holder.cb_agree.setChecked(!holder.cb_agree.isChecked());*/
//Item clicks also trigger the external item-click listener
if(onItemClickListener != null){
onItemClickListener.onItemClick(holder);
}
}
});
}
@Override
public int getItemCount() {
return contactInfos == null ? 0 : contactInfos.size();
}
class ViewHolder extends RecyclerView.ViewHolder{
@BindView(R.id.rtv_head)
CircleTextImageView rtv_head;
@BindView(R.id.contact_name)
TextView contact_name;
@BindView(R.id.contact_phone)
TextView contact_phone;
@BindView(R.id.tv_address)
TextView tv_address;
@BindView(R.id.cb_agree)
CheckBox cb_agree;
public ViewHolder(View itemView) {
super(itemView);
ButterKnife.bind(this, itemView);
}
}
} |
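The click-handler rules above (mobile-number validation, toggle on tap, a cap of three contacts) can be condensed into a small language-neutral sketch. This is an illustrative Python rewrite, not part of the adapter; the function name and the return strings are invented for the example:

```python
import re

# Same mainland-mobile pattern the adapter matches against
MOBILE_RE = re.compile(r"^1[34578]\d{9}$")
MAX_SELECTED = 3  # the adapter caps selection at three contacts

def toggle_contact(selected, position, phone):
    """Mirror of the click handler: validate, then toggle with a cap."""
    if not MOBILE_RE.match(phone):
        return "invalid-phone"          # reject non-mobile numbers
    if position in selected:
        selected.remove(position)       # a checked item is unchecked
        return "deselected"
    if len(selected) >= MAX_SELECTED:
        return "limit-reached"          # at the cap, only removal is allowed
    selected.add(position)
    return "selected"

chosen = set()
print(toggle_contact(chosen, 0, "13800138000"))  # selected
```

Keeping the selected positions in a set mirrors the `checkedItems` list while sidestepping the `remove(int)`-vs-`remove(Integer)` boxing pitfall the Java code works around with `Integer.valueOf`.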
Update:
If you read this post (I’m Crazy and I’m gaming for 24 hours for an amazing cause! Sponsor Me!) then you know that I’m doing a bit of fundraising for my favourite dog rescue — CaliCan Rescue.
But I’ve noticed that I’ve still not received any sponsorships, even though I’ve gone to the extraordinary effort of posting once about it on Facebook, and writing the aforementioned blog post. Hmm, I think I need something more to get my friends involved. 🙂
So, unlike Rene (one of CaliCan’s awesome head honchos), I’ll not be wagering anything that brings (or doesn’t bring) a razor near my noggin.
Rather, I thought I’d reach out to some friends at BioWare and see if they can provide assistance.
And they have. Behold!
My friends at Bioware have generously donated the perfect things for gaming into the wee small hours. And I get to give them away to folk who sponsor me!
Toques
Contigo water bottles
Contigo hot drink travellers
A couple of other branded drink containers
Bioware pens
A nice, lightweight Bioware backpack!
The details
Now, given the anticipated high demand for these premier branded items we’re looking for a minimum donation of $20.00. One donation per item. And we’ll figure something out for the pens. 🙂
We’ve only got a couple of each item — what you see in the photo is what we’ve got — so I anticipate the more popular ones will go quickly, especially the toques… come on! Bioware invented the gamer toque (or beanie as it’s called elsewhere)… I know, I was with the company at the time!
But anyway, there you go. Bioware branded swag and the warm fuzzy feeling of supporting one of the best dog rescue organizations on the planet.
Simply by sponsoring me 🙂
(oh, and after you’ve done that (thank you!) just send me an email and we’ll figure out how to get your swag to you — bradblog At gmail.com) |
A real-time database management system for logistics systems: A case study Due to the effect of globalization, supply chain networks have become more complex than before. In order to provide seamless integration among the different parties within a supply chain network, and to achieve high rewards from business activities, real-time information sharing is essential. Numerous studies have therefore focused separately on instant data-capturing techniques and on effective data-management tools for enhancing the performance of logistics operations, such as order picking and storage operations. However, researchers have paid relatively little attention to integrating logistics operations, real-time information-collection techniques and database management. In this paper, a Warehouse Resources Management System, an interactive database management system for logistics operations, is introduced in order to optimize the information retrieval performed by front-end users through Structured Query Language (SQL) within the system. Real-time warehouse resource status is captured by Radio Frequency Identification (RFID), so the system can present interactive, instant responses that assist users in making real-time decisions. A query optimization technique is applied to the system to minimize the expected cost of retrieving the required information.
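The cost-minimizing query optimization the abstract describes can be illustrated with a minimal sketch: estimate an expected cost for each candidate access plan and execute the cheapest. The plan names and the cost model here are invented for illustration, not taken from the paper:

```python
# Illustrative cost-based plan selection: expected cost is modelled as
# I/O cost plus a small per-row CPU weight. All figures are made up.

def estimated_cost(plan):
    return plan["io_cost"] + 0.01 * plan["rows_scanned"]

def choose_plan(plans):
    """Return the candidate plan with the minimum expected cost."""
    return min(plans, key=estimated_cost)

candidate_plans = [
    {"name": "full_table_scan",    "io_cost": 500.0, "rows_scanned": 100_000},
    {"name": "index_seek_on_sku",  "io_cost": 20.0,  "rows_scanned": 150},
    {"name": "index_scan_on_zone", "io_cost": 80.0,  "rows_scanned": 4_000},
]

best = choose_plan(candidate_plans)
print(best["name"])  # index_seek_on_sku
```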
The US House of Representatives Financial Services Committee is holding a hearing on virtual currencies this week.
According to a published memorandum, the Thursday hearing is being hosted by the Terrorism and Illicit Finance Subcommittee, and is entitled “Virtual Currency: Financial Innovation and National Security Implications”.
Set to appear as witnesses are Jerry Brito, executive director of the nonprofit advocacy group Coin Center; Scott Dueweke, who serves as president of the Identity and Payments Association; Kathryn Haun, assistant US Attorney and Digital Currency Coordinator for the Department of Justice; Jonathan Levin, co-founder of blockchain startup Chainalysis; and Luke Wilson, vice president of business development for blockchain startup Elliptic.
The committee, according to the memorandum, will focus on exploring “terrorists and illicit use of … FinTech, the national security implications of virtual currencies such as bitcoin, and the use of ‘blockchain’ technologies to record transactions and uncover illicit activities”.
It goes on: “Witnesses will provide testimony about the exploitation of virtual currency by terrorists and transnational criminal groups, as well as provide risk assessments and policy considerations to mitigate illicit financing but not to impede the development of FinTech innovations.”
The hearing comes as members of Congress turn their attention toward digital currencies, primarily through the lens of terrorism financing.
A House subcommittee focused on intelligence is considering a bill to study the area, and last week, a pair of influential senators introduced a bill of their own that calls for more oversight of digital currency business activities in the US.
Congress image via Shutterstock |
Reading boss Brian McDermott praises his side for their "spirit" and "desire" in the 1-1 draw at Queens Park Rangers.
Reading took the lead in the first half when Kaspars Gorkss scored against his former club, but Djibril Cisse equalised for QPR after the interval.
The result means both teams remain in the Premier League's bottom three, with neither having won a league match this season. |
# -*- coding: utf-8 -*-
r'''
Manage the Windows registry
Hives
-----
Hives are the main sections of the registry and all begin with the word HKEY.
- HKEY_LOCAL_MACHINE
- HKEY_CURRENT_USER
- HKEY_USER
Keys
----
Keys are the folders in the registry. Keys can have many nested subkeys. Keys
can have a value assigned to them under the ``(Default)`` value name.
When passing a key on the CLI it must be quoted correctly depending on the
backslashes being used (``\`` vs ``\\``). The following are valid methods of
passing the key on the CLI:
Using single backslashes:
``"SOFTWARE\Python"``
``'SOFTWARE\Python'`` (will not work on a Windows Master)
Using double backslashes:
``SOFTWARE\\Python``
Values or Entries
-----------------
Values or Entries are the name/data pairs beneath the keys and subkeys. All keys
have a default name/data pair. The name is ``(Default)`` with a displayed value
of ``(value not set)``. The actual value is Null.
Example
-------
The following example is an export from the Windows startup portion of the
registry:
.. code-block:: bash
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run]
"RTHDVCPL"="\"C:\\Program Files\\Realtek\\Audio\\HDA\\RtkNGUI64.exe\" -s"
"NvBackend"="\"C:\\Program Files (x86)\\NVIDIA Corporation\\Update Core\\NvBackend.exe\""
"BTMTrayAgent"="rundll32.exe \"C:\\Program Files (x86)\\Intel\\Bluetooth\\btmshellex.dll\",TrayApp"
In this example these are the values for each:
Hive:
``HKEY_LOCAL_MACHINE``
Key and subkeys:
``SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Run``
Value:
- There are 3 value names:
- `RTHDVCPL`
- `NvBackend`
- `BTMTrayAgent`
- Each value name has a corresponding value
:depends: - salt.utils.win_reg
'''
# When production windows installer is using Python 3, Python 2 code can be removed
from __future__ import absolute_import, print_function, unicode_literals
# Import python libs
import logging
# Import Salt libs
import salt.utils.platform
import salt.utils.win_functions
from salt.exceptions import CommandExecutionError
log = logging.getLogger(__name__)
# Define the module's virtual name
__virtualname__ = 'reg'
def __virtual__():
'''
Only works on Windows systems with PyWin32
'''
if not salt.utils.platform.is_windows():
return (False, 'reg execution module failed to load: '
'The module will only run on Windows systems')
if 'reg.read_value' not in __utils__:
return (False, 'reg execution module failed to load: '
'The reg salt util is unavailable')
return __virtualname__
def key_exists(hive, key, use_32bit_registry=False):
r'''
Check that the key is found in the registry. This refers to keys and not
value/data pairs.
Args:
hive (str): The hive to connect to
key (str): The key to check
use_32bit_registry (bool): Look in the 32bit portion of the registry
Returns:
bool: True if exists, otherwise False
CLI Example:
.. code-block:: bash
salt '*' reg.key_exists HKLM SOFTWARE\Microsoft
'''
return __utils__['reg.key_exists'](hive=hive,
key=key,
use_32bit_registry=use_32bit_registry)
def value_exists(hive, key, vname, use_32bit_registry=False):
r'''
Check that the value/data pair is found in the registry.
.. versionadded:: 3000
Args:
hive (str): The hive to connect to
key (str): The key to check in
vname (str): The name of the value/data pair you're checking
use_32bit_registry (bool): Look in the 32bit portion of the registry
Returns:
bool: True if exists, otherwise False
CLI Example:
.. code-block:: bash
salt '*' reg.value_exists HKLM SOFTWARE\Microsoft\Windows\CurrentVersion CommonFilesDir
'''
return __utils__['reg.value_exists'](hive=hive,
key=key,
vname=vname,
use_32bit_registry=use_32bit_registry)
def broadcast_change():
'''
Refresh the windows environment.
.. note::
This will only affect new processes and windows. Services will not see
the change until the system restarts.
Returns:
bool: True if successful, otherwise False
CLI Example:
.. code-block:: bash
salt '*' reg.broadcast_change
'''
return salt.utils.win_functions.broadcast_setting_change('Environment')
def list_keys(hive, key=None, use_32bit_registry=False):
'''
Enumerates the subkeys in a registry key or hive.
Args:
hive (str):
The name of the hive. Can be one of the following:
- HKEY_LOCAL_MACHINE or HKLM
- HKEY_CURRENT_USER or HKCU
- HKEY_USER or HKU
- HKEY_CLASSES_ROOT or HKCR
- HKEY_CURRENT_CONFIG or HKCC
key (str):
The key (looks like a path) to the value name. If a key is not
passed, the keys under the hive will be returned.
use_32bit_registry (bool):
Accesses the 32bit portion of the registry on 64 bit installations.
On 32bit machines this is ignored.
Returns:
list: A list of keys/subkeys under the hive or key.
CLI Example:
.. code-block:: bash
salt '*' reg.list_keys HKLM 'SOFTWARE'
'''
return __utils__['reg.list_keys'](hive=hive,
key=key,
use_32bit_registry=use_32bit_registry)
def list_values(hive, key=None, use_32bit_registry=False):
r'''
Enumerates the values in a registry key or hive.
.. note::
The ``(Default)`` value will only be returned if it is set, otherwise it
will not be returned in the list of values.
Args:
hive (str):
The name of the hive. Can be one of the following:
- HKEY_LOCAL_MACHINE or HKLM
- HKEY_CURRENT_USER or HKCU
- HKEY_USER or HKU
- HKEY_CLASSES_ROOT or HKCR
- HKEY_CURRENT_CONFIG or HKCC
key (str):
The key (looks like a path) to the value name. If a key is not
passed, the values under the hive will be returned.
use_32bit_registry (bool):
Accesses the 32bit portion of the registry on 64 bit installations.
On 32bit machines this is ignored.
Returns:
list: A list of values under the hive or key.
CLI Example:
.. code-block:: bash
salt '*' reg.list_values HKLM 'SYSTEM\\CurrentControlSet\\Services\\Tcpip'
'''
return __utils__['reg.list_values'](hive=hive,
key=key,
use_32bit_registry=use_32bit_registry)
def read_value(hive, key, vname=None, use_32bit_registry=False):
r'''
Reads a registry value entry or the default value for a key. To read the
default value, don't pass ``vname``
Args:
hive (str): The name of the hive. Can be one of the following:
- HKEY_LOCAL_MACHINE or HKLM
- HKEY_CURRENT_USER or HKCU
- HKEY_USER or HKU
- HKEY_CLASSES_ROOT or HKCR
- HKEY_CURRENT_CONFIG or HKCC
key (str):
The key (looks like a path) to the value name.
vname (str):
The value name. These are the individual name/data pairs under the
key. If not passed, the key (Default) value will be returned.
use_32bit_registry (bool):
Accesses the 32bit portion of the registry on 64bit installations.
On 32bit machines this is ignored.
Returns:
dict: A dictionary containing the passed settings as well as the
value_data if successful. If unsuccessful, sets success to False.
bool: Returns False if the key is not found
If vname is not passed:
- Returns the first unnamed value (Default) as a string.
- Returns None if the first unnamed value is empty.
CLI Example:
The following will get the value of the ``version`` value name in the
``HKEY_LOCAL_MACHINE\\SOFTWARE\\Salt`` key
.. code-block:: bash
salt '*' reg.read_value HKEY_LOCAL_MACHINE 'SOFTWARE\Salt' 'version'
CLI Example:
The following will get the default value of the
``HKEY_LOCAL_MACHINE\\SOFTWARE\\Salt`` key
.. code-block:: bash
salt '*' reg.read_value HKEY_LOCAL_MACHINE 'SOFTWARE\Salt'
'''
return __utils__['reg.read_value'](hive=hive,
key=key,
vname=vname,
use_32bit_registry=use_32bit_registry)
def set_value(hive,
key,
vname=None,
vdata=None,
vtype='REG_SZ',
use_32bit_registry=False,
volatile=False):
'''
Sets a value in the registry. If ``vname`` is passed, it will be the value
for that value name, otherwise it will be the default value for the
specified key
Args:
hive (str):
The name of the hive. Can be one of the following
- HKEY_LOCAL_MACHINE or HKLM
- HKEY_CURRENT_USER or HKCU
- HKEY_USER or HKU
- HKEY_CLASSES_ROOT or HKCR
- HKEY_CURRENT_CONFIG or HKCC
key (str):
The key (looks like a path) to the value name.
vname (str):
The value name. These are the individual name/data pairs under the
key. If not passed, the key (Default) value will be set.
vdata (str, int, list, bytes):
The value you'd like to set. If a value name (vname) is passed, this
will be the data for that value name. If not, this will be the
(Default) value for the key.
The type of data this parameter expects is determined by the value
type specified in ``vtype``. The correspondence is as follows:
- REG_BINARY: Binary data (str in Py2, bytes in Py3)
- REG_DWORD: int
- REG_EXPAND_SZ: str
- REG_MULTI_SZ: list of str
- REG_QWORD: int
- REG_SZ: str
.. note::
When setting REG_BINARY, string data will be converted to
binary.
.. note::
The type for the (Default) value is always REG_SZ and cannot be
changed.
.. note::
This parameter is optional. If ``vdata`` is not passed, the Key
will be created with no associated item/value pairs.
vtype (str):
The value type. The possible values of the vtype parameter are
indicated above in the description of the vdata parameter.
use_32bit_registry (bool):
Sets the 32bit portion of the registry on 64bit installations. On
32bit machines this is ignored.
volatile (bool):
When this parameter has a value of True, the registry key will be
made volatile (i.e. it will not persist beyond a system reset or
shutdown). This parameter only has an effect when a key is being
created and at no other time.
Returns:
bool: True if successful, otherwise False
CLI Example:
This will set the version value to 2015.5.2 in the SOFTWARE\\Salt key in
the HKEY_LOCAL_MACHINE hive
.. code-block:: bash
salt '*' reg.set_value HKEY_LOCAL_MACHINE 'SOFTWARE\\Salt' 'version' '2015.5.2'
CLI Example:
This function is strict about the type of vdata. For instance this
example will fail because vtype has a value of REG_SZ and vdata has a
type of int (as opposed to str as expected).
.. code-block:: bash
salt '*' reg.set_value HKEY_LOCAL_MACHINE 'SOFTWARE\\Salt' 'str_data' 1.2
CLI Example:
In this next example vdata is properly quoted and should succeed.
.. code-block:: bash
salt '*' reg.set_value HKEY_LOCAL_MACHINE 'SOFTWARE\\Salt' 'str_data' vtype=REG_SZ vdata="'1.2'"
CLI Example:
This is an example of using vtype REG_BINARY.
.. code-block:: bash
salt '*' reg.set_value HKEY_LOCAL_MACHINE 'SOFTWARE\\Salt' 'bin_data' vtype=REG_BINARY vdata='Salty Data'
CLI Example:
An example of using vtype REG_MULTI_SZ is as follows:
.. code-block:: bash
salt '*' reg.set_value HKEY_LOCAL_MACHINE 'SOFTWARE\\Salt' 'list_data' vtype=REG_MULTI_SZ vdata='["Salt", "is", "great"]'
'''
return __utils__['reg.set_value'](hive=hive,
key=key,
vname=vname,
vdata=vdata,
vtype=vtype,
use_32bit_registry=use_32bit_registry,
volatile=volatile)
def delete_key_recursive(hive, key, use_32bit_registry=False):
r'''
.. versionadded:: 2015.5.4
Delete a registry key to include all subkeys and value/data pairs.
Args:
hive (str):
The name of the hive. Can be one of the following
- HKEY_LOCAL_MACHINE or HKLM
- HKEY_CURRENT_USER or HKCU
- HKEY_USER or HKU
- HKEY_CLASSES_ROOT or HKCR
- HKEY_CURRENT_CONFIG or HKCC
key (str):
The key to remove (looks like a path)
use_32bit_registry (bool):
Deletes the 32bit portion of the registry on 64bit
installations. On 32bit machines this is ignored.
Returns:
dict: A dictionary listing the keys that deleted successfully as well as
those that failed to delete.
CLI Example:
The following example will remove ``delete_me`` and all its subkeys from the
``SOFTWARE`` key in ``HKEY_LOCAL_MACHINE``:
.. code-block:: bash
salt '*' reg.delete_key_recursive HKLM SOFTWARE\\delete_me
'''
return __utils__['reg.delete_key_recursive'](hive=hive,
key=key,
use_32bit_registry=use_32bit_registry)
def delete_value(hive, key, vname=None, use_32bit_registry=False):
r'''
Delete a registry value entry or the default value for a key.
Args:
hive (str):
The name of the hive. Can be one of the following
- HKEY_LOCAL_MACHINE or HKLM
- HKEY_CURRENT_USER or HKCU
- HKEY_USER or HKU
- HKEY_CLASSES_ROOT or HKCR
- HKEY_CURRENT_CONFIG or HKCC
key (str):
The key (looks like a path) to the value name.
vname (str):
The value name. These are the individual name/data pairs under the
key. If not passed, the key (Default) value will be deleted.
use_32bit_registry (bool):
Deletes the 32bit portion of the registry on 64bit installations. On
32bit machines this is ignored.
Returns:
bool: True if successful, otherwise False
CLI Example:
.. code-block:: bash
salt '*' reg.delete_value HKEY_CURRENT_USER 'SOFTWARE\\Salt' 'version'
'''
return __utils__['reg.delete_value'](hive=hive,
key=key,
vname=vname,
use_32bit_registry=use_32bit_registry)
def import_file(source, use_32bit_registry=False):
'''
Import registry settings from a Windows ``REG`` file by invoking ``REG.EXE``.
.. versionadded:: 2018.3.0
Args:
source (str):
The full path of the ``REG`` file. This can be either a local file
path or a URL type supported by salt (e.g. ``salt://salt_master_path``)
use_32bit_registry (bool):
If the value of this parameter is ``True`` then the ``REG`` file
will be imported into the Windows 32 bit registry. Otherwise the
Windows 64 bit registry will be used.
Returns:
bool: True if successful, otherwise an error is raised
Raises:
ValueError: If the value of ``source`` is an invalid path or otherwise
causes ``cp.cache_file`` to return ``False``
CommandExecutionError: If ``reg.exe`` exits with a non-0 exit code
CLI Example:
.. code-block:: bash
salt machine1 reg.import_file salt://win/printer_config/110_Canon/postinstall_config.reg
'''
cache_path = __salt__['cp.cache_file'](source)
if not cache_path:
error_msg = "File/URL '{0}' probably invalid.".format(source)
raise ValueError(error_msg)
if use_32bit_registry:
word_sz_txt = "32"
else:
word_sz_txt = "64"
cmd = 'reg import "{0}" /reg:{1}'.format(cache_path, word_sz_txt)
cmd_ret_dict = __salt__['cmd.run_all'](cmd, python_shell=True)
retcode = cmd_ret_dict['retcode']
if retcode != 0:
raise CommandExecutionError(
'reg.exe import failed',
info=cmd_ret_dict
)
return True
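All of the functions above address the registry through a (hive, key) pair, using the hive aliases listed in their docstrings. As a standalone illustration (not part of the Salt module), a full path such as `HKLM\SOFTWARE\Salt` splits into that pair like this:

```python
# Standalone sketch: split a full registry path into the (hive, key) pair
# that functions such as reg.read_value expect. The alias table mirrors the
# hive names documented above; the helper itself is illustrative only.

HIVE_ALIASES = {
    "HKLM": "HKEY_LOCAL_MACHINE",
    "HKCU": "HKEY_CURRENT_USER",
    "HKU":  "HKEY_USER",
    "HKCR": "HKEY_CLASSES_ROOT",
    "HKCC": "HKEY_CURRENT_CONFIG",
}

def split_registry_path(path):
    """'HKLM\\SOFTWARE\\Salt' -> ('HKEY_LOCAL_MACHINE', 'SOFTWARE\\Salt')."""
    hive, _, key = path.partition("\\")
    return HIVE_ALIASES.get(hive.upper(), hive), key

print(split_registry_path(r"HKLM\SOFTWARE\Salt"))
# ('HKEY_LOCAL_MACHINE', 'SOFTWARE\\Salt')
```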
|
import type { v1Processor } from './v1Processor';
export declare type v1UpdateProcessorResponse = {
processor?: v1Processor;
};
|
/*******************************************************
* Copyright (c) 2014, ArrayFire
* All rights reserved.
*
* This file is distributed under 3-clause BSD license.
* The complete license agreement can be obtained at:
* http://arrayfire.com/licenses/BSD-3-Clause
********************************************************/
#include <af/device.h>
#include <af/compatible.h>
#include "error.hpp"
namespace af
{
void info()
{
AF_THROW(af_info());
}
void deviceprop(char* d_name, char* d_platform, char *d_toolkit, char* d_compute)
{
AF_THROW(af_deviceprop(d_name, d_platform, d_toolkit, d_compute));
}
int getDeviceCount()
{
int devices = -1;
AF_THROW(af_get_device_count(&devices));
return devices;
}
int devicecount() { return getDeviceCount(); }
void setDevice(const int device)
{
AF_THROW(af_set_device(device));
}
void deviceset(const int device) { setDevice(device); }
int getDevice()
{
int device = 0;
AF_THROW(af_get_device(&device));
return device;
}
bool isDoubleAvailable(const int device)
{
bool temp;
AF_THROW(af_get_dbl_support(&temp, device));
return temp;
}
int deviceget() { return getDevice(); }
void sync(int device)
{
AF_THROW(af_sync(device));
}
///////////////////////////////////////////////////////////////////////////
// Alloc and free host, pinned, zero copy
static unsigned size_of(af::dtype type)
{
switch(type) {
case f32: return sizeof(float);
case f64: return sizeof(double);
case s32: return sizeof(int);
case u32: return sizeof(unsigned);
case u8 : return sizeof(unsigned char);
case b8 : return sizeof(unsigned char);
case c32: return sizeof(float) * 2;
case c64: return sizeof(double) * 2;
default: return sizeof(float);
}
}
void *alloc(size_t elements, af::dtype type)
{
void *ptr;
AF_THROW(af_alloc_device(&ptr, elements * size_of(type)));
// FIXME: Add to map
return ptr;
}
void *pinned(size_t elements, af::dtype type)
{
void *ptr;
AF_THROW(af_alloc_pinned(&ptr, elements * size_of(type)));
// FIXME: Add to map
return ptr;
}
void free(const void *ptr)
{
//FIXME: look up map and call the right free
AF_THROW(af_free_device((void *)ptr));
}
void freePinned(const void *ptr)
{
//FIXME: look up map and call the right free
AF_THROW(af_free_pinned((void *)ptr));
}
#define INSTANTIATE(T) \
template<> AFAPI \
T* alloc(size_t elements) \
{ \
return (T*)alloc(elements, (af::dtype)dtype_traits<T>::af_type); \
} \
template<> AFAPI \
T* pinned(size_t elements) \
{ \
return (T*)pinned(elements, (af::dtype)dtype_traits<T>::af_type); \
}
INSTANTIATE(float)
INSTANTIATE(double)
INSTANTIATE(cfloat)
INSTANTIATE(cdouble)
INSTANTIATE(int)
INSTANTIATE(unsigned)
INSTANTIATE(unsigned char)
INSTANTIATE(char)
}
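The `size_of` switch above maps each `af::dtype` to its element width in bytes, falling back to `sizeof(float)` for anything unrecognised. A quick standalone mirror of that table (illustrative only, not ArrayFire code):

```python
# Bytes per element for each af::dtype case in the C++ switch above;
# complex types carry two components.

DTYPE_BYTES = {
    "f32": 4, "f64": 8,
    "s32": 4, "u32": 4,
    "u8": 1,  "b8": 1,
    "c32": 8, "c64": 16,
}

def size_of(dtype):
    """Unknown types fall back to float width, as in the C++ default branch."""
    return DTYPE_BYTES.get(dtype, 4)

print(size_of("c64"))  # 16
```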
|
export class MetadataAttributeMetadata {
public readonly traitType: string;
public readonly displayType: string;
public readonly probability: number;
public readonly minValue?: number;
public readonly maxValue?: number;
constructor(trait: string, display: string, prob: number, min = -1, max = -1) {
this.traitType = trait;
this.displayType = display;
this.probability = prob;
this.minValue = min;
this.maxValue = max;
}
}
|
import React, { useState } from 'react';
import { RouteComponentProps } from '@reach/router';
import { makeStyles, Container, Typography, TextField, Button, FormControl, InputLabel, Select, MenuItem, Collapse } from '@material-ui/core';
import { FastAnswerOptions } from '../../types/state';
import { postCreateGame } from '../../services/game';
import { baseUrl } from '../../config';
const useStyles = makeStyles(theme => ({
paper: {
marginTop: theme.spacing(8),
display: 'flex',
flexDirection: 'column',
alignItems: 'center'
},
switchLabel: {
marginLeft: '11px',
marginRight: 0,
width: '100%'
},
form: {
width: '100%',
marginTop: theme.spacing(1)
},
textField: {
backgroundColor: 'white'
},
formControl: {
margin: theme.spacing(1),
width: '100%'
},
submit: {
margin: theme.spacing(3, 0, 2)
},
error: {
color: 'red'
}
}));
const CreateGame: React.FC<RouteComponentProps> = ({ navigate }) => {
const [isLoading, setIsLoading] = useState(false);
const [error, setError] = useState('');
const [roomName, setRoomName] = useState('');
const [correctPoints, setCorrectPoints] = useState<number | ''>(10);
const [randomPrizePosition, setRandomPrizePosition] = useState<number | ''>('');
const [fastOption, setFastOption] = useState(FastAnswerOptions.None);
const [fastBonusPoints, setFastBonusPoints] = useState<number | ''>('');
const [fastBonusNumTeams, setFastBonusNumTeams] = useState<number | ''>('');
const handleFastOptionChange = (e: React.ChangeEvent<{ value: FastAnswerOptions }>) => {
setFastOption(e.target.value);
switch (e.target.value) {
case FastAnswerOptions.None:
setFastBonusPoints('');
setFastBonusNumTeams('');
break;
case FastAnswerOptions.FastSingle:
setFastBonusNumTeams('');
break;
}
};
const onSubmit = async (e: React.FormEvent) => {
e.preventDefault();
setIsLoading(true);
setError('');
const res = await postCreateGame({
roomName,
correctPoints: correctPoints || 0,
randomPrizePosition: randomPrizePosition || 0,
fastAnswerMethod: fastOption,
bonusPoints: fastBonusPoints || 0,
bonusNumTeams: fastBonusNumTeams || 0
});
if (!res.success) {
setError('There was a connection error. Please try again.');
} else if (res.gameRoomAlreadyExists) {
setError('Game room name is already taken.');
} else if (res.validationError) {
setError('There was a validation error. Check the values and try again.');
} else {
navigate(`${baseUrl}/game/${roomName.toUpperCase()}`);
return;
}
setIsLoading(false);
};
const classes = useStyles();
return (
<Container component="main" maxWidth="xs">
<div className={classes.paper}>
<Typography component="h1" variant="h5">
Create a new game
</Typography>
{error && <Typography className={classes.error}>{error}</Typography>}
<form className={classes.form} noValidate onSubmit={onSubmit}>
<TextField
variant="outlined"
margin="normal"
required
fullWidth
id="roomname"
label="Game Room Name"
name="roomname"
autoComplete="off"
autoFocus
inputProps={{ className: classes.textField }}
disabled={isLoading}
value={roomName}
onChange={e => setRoomName(e.target.value)}
/>
<TextField
variant="outlined"
margin="normal"
required
fullWidth
id="correctpoints"
label="Points per Correct Answer"
name="correctpoints"
autoComplete="off"
inputProps={{ className: classes.textField }}
disabled={isLoading}
value={correctPoints}
onChange={e => setCorrectPoints(Number(e.target.value) || '')}
/>
<TextField
variant="outlined"
margin="normal"
fullWidth
id="randompos"
label="Random Prize Position"
name="randompos"
autoComplete="off"
inputProps={{ className: classes.textField }}
disabled={isLoading}
value={randomPrizePosition}
onChange={e => setRandomPrizePosition(Number(e.target.value) || '')}
/>
<FormControl className={classes.formControl}>
<InputLabel id="fast-answer-bonus-label">Fast Answer Bonus</InputLabel>
<Select labelId="fast-answer-bonus-label" id="fast-answer-bonus-select" value={fastOption} onChange={handleFastOptionChange} disabled={isLoading}>
<MenuItem value={FastAnswerOptions.None}>None</MenuItem>
<MenuItem value={FastAnswerOptions.FastSingle}>Single Fastest</MenuItem>
<MenuItem value={FastAnswerOptions.FastX}>Fastest X</MenuItem>
<MenuItem value={FastAnswerOptions.Sliding}>Sliding</MenuItem>
<MenuItem value={FastAnswerOptions.Descending}>Descending</MenuItem>
</Select>
</FormControl>
<Collapse in={fastOption === FastAnswerOptions.FastSingle || fastOption === FastAnswerOptions.Sliding} timeout="auto" unmountOnExit>
<TextField
variant="outlined"
margin="normal"
fullWidth
id="bonuspoints"
label="Fastest Answer Bonus"
name="bonuspoints"
autoComplete="off"
inputProps={{ className: classes.textField }}
disabled={isLoading}
value={fastBonusPoints}
onChange={e => setFastBonusPoints(Number(e.target.value) || '')}
/>
</Collapse>
<Collapse in={fastOption === FastAnswerOptions.FastX} timeout="auto" unmountOnExit>
<TextField
variant="outlined"
margin="normal"
fullWidth
id="bonuspoints"
label="Fastest Answer Bonus"
name="bonuspoints"
autoComplete="off"
inputProps={{ className: classes.textField }}
disabled={isLoading}
value={fastBonusPoints}
onChange={e => setFastBonusPoints(Number(e.target.value) || '')}
/>
<TextField
variant="outlined"
margin="normal"
fullWidth
id="bonuspointspct"
label="Number of Teams"
name="bonuspointspct"
autoComplete="off"
inputProps={{ className: classes.textField }}
disabled={isLoading}
value={fastBonusNumTeams}
onChange={e => setFastBonusNumTeams(Number(e.target.value) || '')}
/>
</Collapse>
<Button type="submit" fullWidth variant="contained" color="primary" className={classes.submit} disabled={isLoading}>
{isLoading ? 'Saving...' : 'Save'}
</Button>
</form>
</div>
</Container>
);
};
export default CreateGame;
|
/**
* This job can be used to parse Java jobs on the sql.ru web site.
*/
public class SQLParser implements Job {
private final static Logger LOG = Logger.getLogger(SQLParser.class);
private Properties properties;
private static File file;
private final static String FIRST_LAUNCH_KEY = "Launch.firstLaunch";
private Storage storage = new Storage();
/**
Opens the page, gets the job offer table rows and adds them to the collection as JobOffer
objects. If the connection was not stable, the page is added to the skipped pages collection.
*
* @param page
* @param filterDate
* @return
*/
private boolean filterPage(String page, Timestamp filterDate) {
boolean needToGoFurther = true;
Document doc;
try {
doc = Jsoup.connect(page).get();
} catch (IOException e) {
e.printStackTrace();
storage.addSkippedRaws(page);
LOG.warn(page + "was skipped.");
return true;
}
Element forumTable = doc.select("table.forumTable").first();
Elements raws = forumTable.select("tr");
for (Element e: raws) {
JobOffer offer = new JobOffer();
boolean needToSave = false;
Element topic = e.select("td.postslisttopic").first();
if (topic != null) {
Element href = topic.select("a").first();
if (href != null) {
String s = href.text().toLowerCase();
if (s.contains("java") && !s.contains("script")) {
offer.setTitle(href.text());
offer.setHref(href.attr("href"));
needToSave = true;
}
}
}
if (needToSave) {
Element date = e.select("td:nth-child(6)").first();
offer.setPostDate(date.text());
if (offer.getPostDate().before(filterDate)) {
needToGoFurther = false;
LOG.info(String.format("Offer date %s before filter date %s. Stop loading vacancies.", offer.getPostDate(), filterDate));
break;
}
LOG.info(String.format("Found Java job! Title: %s. Saving...", offer.getTitle()));
storage.addJavaRaws(offer);
}
}
return needToGoFurther;
}
/**
* Gets last page number.
*
* @return int
*/
private int getLastPageNumber() {
Document doc;
try {
doc = Jsoup.connect(Storage.SQL_MAIN_PAGE).timeout(60_000).get();
} catch (IOException e) {
LOG.error("Unable to load the main page, cannot determine the last page number.", e);
throw new IllegalStateException(e);
}
Element pages = doc.select("#content-wrapper-forum > table:nth-child(6) > tbody > tr > td:nth-child(1)").first();
int lastPage = Integer.valueOf(pages.children().last().text());
LOG.info(String.format("Last page number is %d", lastPage));
return lastPage;
}
/**
 * Creates paths for all page numbers, because not all pages are available from the html document.
 *
 * @param lastPageNumber last page number.
 * @return list of all pages paths.
 */
private LinkedList<String> getAllPages(int lastPageNumber) {
LinkedList<String> list = new LinkedList<>();
for (int i = 1; i <= lastPageNumber; i++) {
list.add(Storage.SQL_MAIN_PAGE + "/" + i);
}
return list;
}
/**
 * Cuts the offer id from the reference for each selected job offer and sets it as the forum id.
 */
private void setIDForOffers() {
for (JobOffer o: storage.getJavaRaws()) {
String href = o.getHref();
String[] arr = href.split("/");
o.setForumID(arr[4]);
}
}
/**
 * Returns first launch flag from properties.
 *
 * @return true if this is the first launch.
 */
private boolean isFirstLaunch() {
return Boolean.valueOf(properties.getProperty(FIRST_LAUNCH_KEY));
}
/**
 * Goes through all pages and filters them.
 *
 * @param filterDate offers older than this date stop the crawl.
 * @param pages      pages to process.
 */
private void action(Timestamp filterDate, LinkedList<String> pages) {
for (String s: pages) {
LOG.info(s);
if (!this.filterPage(s, filterDate)) {
break;
}
}
this.setIDForOffers();
}
/**
* Main execution of the job.
*
* @param jobExecutionContext used to get some parameters from scheduler.
*/
@Override
public void execute(JobExecutionContext jobExecutionContext) {
LOG.info("Parser has just started his dirty job.");
try {
properties = (Properties) jobExecutionContext.getScheduler().getContext().get("properties");
file = (File) jobExecutionContext.getScheduler().getContext().get("file");
} catch (SchedulerException e) {
LOG.error(e.getMessage(), e);
}
if (properties == null) {
LOG.error("Cannot resume execution, properties are null.");
return;
}
DataBaseObject dataBaseObject = new DataBaseObject(properties);
Date date = new Date(System.currentTimeMillis());
Timestamp t;
if (isFirstLaunch()) {
t = new Timestamp(DateUtils.truncate(date, Calendar.YEAR).getTime());
LOG.info(String.format("First launch, loading all jobs since %s", t));
dataBaseObject.executeDBScripts("sql/parserDBInit.sql");
action(t, getAllPages(getLastPageNumber()));
markFirstLaunchAsFalse(file);
} else {
t = new Timestamp(DateUtils.truncate(date, Calendar.DATE).getTime());
LOG.info(String.format("Regular launch, loading all jobs since today - %s", t));
action(t, getAllPages(getLastPageNumber()));
}
if (storage.getSkippedPages().size() > 0) {
action(t, storage.getSkippedPages());
}
dataBaseObject.saveDataToDatabase(storage.getJavaRaws());
LOG.info("See you soon space cowboy...");
}
/**
 * Marks first launch as 'false' and saves the parameter back to the file.
 *
 * @param file original file that was supplied on application start.
 */
private void markFirstLaunchAsFalse(File file) {
properties.setProperty(FIRST_LAUNCH_KEY, String.valueOf(false));
try (OutputStream out = new FileOutputStream(file)) {
properties.store(out, "Launch flag was changed to 'false'");
LOG.info("Properties were changed.");
} catch (IOException e) {
LOG.error(e.getMessage(), e);
}
}
}
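The keyword filter inside `filterPage()` above keeps a row only when its link text contains "java" but not "script", so "JavaScript" offers are excluded. That predicate can be exercised in isolation; the sketch below is a reader's illustration rather than part of the parser, and the class and method names are invented:

```java
// Hypothetical stand-alone version of the title check used in filterPage().
public class TitleFilter {

    // True for titles that mention Java but are not JavaScript offers.
    public static boolean isJavaOffer(String title) {
        String s = title.toLowerCase();
        return s.contains("java") && !s.contains("script");
    }

    public static void main(String[] args) {
        System.out.println(isJavaOffer("Senior Java Developer")); // true
        System.out.println(isJavaOffer("JavaScript ninja"));      // false
        System.out.println(isJavaOffer("C++ engineer"));          // false
    }
}
```

Note the deliberate asymmetry: a title such as "Java/JavaScript developer" is rejected, which matches the conservative behavior of the original check.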
/*************************************************************************
* Copyright (C) [2019] by Cambricon, Inc. All rights reserved
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
* OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*************************************************************************/
#include <rapidjson/document.h>
#include <rapidjson/rapidjson.h>
#include <rapidjson/stringbuffer.h>
#include <rapidjson/writer.h>
#include <dirent.h>
#include <dlfcn.h>
#include <errno.h>
#include <getopt.h>
#include <stdio.h>
#include <string.h>
#include <algorithm>
#include <fstream>
#include <iomanip>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>
#include "cnstream_logging.hpp"
#include "cnstream_module.hpp"
#include "cnstream_pipeline.hpp"
#include "cnstream_version.hpp"
static void Usage() {
std::cout << "Usage:" << std::endl;
std::cout << "\t inspect-tool [OPTION...] [MODULE-NAME]" << std::endl;
std::cout << "Options: " << std::endl;
std::cout << std::left << std::setw(40) << "\t -h, --help"
<< "Show usage" << std::endl;
std::cout << std::left << std::setw(40) << "\t -a, --all"
<< "Print all modules" << std::endl;
std::cout << std::left << std::setw(40) << "\t -m, --module-name"
<< "List the module parameters" << std::endl;
std::cout << std::left << std::setw(40) << "\t -v, --version"
<< "Print version information\n"
<< std::endl;
}
static const struct option long_option[] = {{"help", no_argument, nullptr, 'h'},
{"all", no_argument, nullptr, 'a'},
{"module-name", required_argument, nullptr, 'm'},
{"version", no_argument, nullptr, 'v'},
{nullptr, 0, nullptr, 0}};
static void PrintVersion() {
std::cout << "CNStream: " << cnstream::VersionString() << std::endl;
return;
}
static uint32_t GetFirstLetterPos(std::string desc, uint32_t begin, uint32_t length) {
if (begin + length > desc.length()) {
length = desc.length() - begin;
}
for (uint32_t i = 0; i < length; i++) {
if (desc.substr(begin + i, 1) != " ") {
return begin + i;
}
}
return begin;
}
static uint32_t GetLastSpacePos(std::string desc, uint32_t end, uint32_t length) {
if (end > desc.length()) {
end = desc.length();
}
if (end < length) {
length = end;
}
for (uint32_t i = 0; i < length; i++) {
if (desc.substr(end - i, 1) == " ") {
return end - i;
}
}
return end;
}
static uint32_t GetSubStrEnd(std::string desc, uint32_t begin, uint32_t sub_str_len) {
if (begin + sub_str_len < desc.length()) {
return GetLastSpacePos(desc, begin + sub_str_len, sub_str_len);
} else {
return desc.length();
}
}
static void PrintDesc(std::string desc, uint32_t indent, uint32_t sub_len) {
uint32_t len = desc.length();
uint32_t sub_begin = GetFirstLetterPos(desc, 0, sub_len);
uint32_t sub_end = GetSubStrEnd(desc, sub_begin, sub_len);
std::cout << desc.substr(sub_begin, sub_end - sub_begin) << std::endl;
while (sub_begin + sub_len < len) {
sub_begin = GetFirstLetterPos(desc, sub_end, sub_len);
sub_end = GetSubStrEnd(desc, sub_begin, sub_len);
std::cout << std::left << std::setw(indent) << "" << desc.substr(sub_begin, sub_end - sub_begin) << std::endl;
if (sub_end != len && sub_end + sub_len >= len) {
sub_begin = GetFirstLetterPos(desc, sub_end, len - sub_end);
std::cout << std::left << std::setw(indent) << "" << desc.substr(sub_begin, len - sub_begin) << std::endl;
}
}
}
static void PrintAllModulesDesc() {
const uint32_t width = 40;
const uint32_t sub_str_len = 80;
std::vector<std::string> modules = cnstream::ModuleFactory::Instance()->GetRegisted();
cnstream::ModuleCreatorWorker creator;
std::cout << "\033[01;32m"<< std::left << std::setw(width) << "Module Name"
<< "Description" << "\033[01;0m" << std::endl;
for (auto& it : modules) {
cnstream::Module* module = creator.Create(it, it);
std::cout << "\033[01;1m" << std::left << std::setw(width) << it << "\033[0m";
std::string desc = module->param_register_.GetModuleDesc();
PrintDesc(desc, width, sub_str_len);
std::cout << std::endl;
delete module;
}
}
static void PrintModuleCommonParameters() {
const uint32_t width = 30;
const uint32_t sub_str_len = 80;
std::cout << "\033[01;32m" << " " << std::left << std::setw(width) << "Common Parameter"
<< "Description" << "\033[0m" << std::endl;
std::cout << "\033[01;1m" << " " << std::left << std::setw(width) << "class_name" << "\033[0m";
PrintDesc("Module class name.", width + 2, sub_str_len);
std::cout << std::endl;
std::cout << "\033[01;1m" << " " << std::left << std::setw(width) << "parallelism" << "\033[0m";
PrintDesc("Module parallelism.", width + 2, sub_str_len);
std::cout << std::endl;
std::cout << "\033[01;1m" << " " << std::left << std::setw(width) << "max_input_queue_size" << "\033[0m";
PrintDesc("Max size of module input queue.", width + 2, sub_str_len);
std::cout << std::endl;
std::cout << "\033[01;1m" << " " << std::left << std::setw(width) << "next_modules" << "\033[0m";
PrintDesc("Next modules.", width + 2, sub_str_len);
std::cout << std::endl;
}
static void PrintModuleParameters(const std::string& module_name) {
const uint32_t width = 30;
const uint32_t sub_str_len = 80;
std::string name = module_name;
cnstream::ModuleCreatorWorker creator;
cnstream::Module* module = creator.Create(name, name);
if (nullptr == module) {
name = "cnstream::" + name;
module = creator.Create(name, name.substr(10));
if (nullptr == module) {
std::cout << "No such module: '" << module_name << "'." << std::endl;
return;
}
}
auto module_params = module->param_register_.GetParams();
std::cout <<"\033[01;33m" << module_name << " Details:" << "\033[0m" << std::endl;
PrintModuleCommonParameters();
std::cout << "\033[01;32m" << " " << std::left << std::setw(width) << "Custom Parameter"
<< "Description" << "\033[0m" << std::endl;
for (auto& it : module_params) {
std::cout << "\033[01;1m" << " " << std::left << std::setw(width) << it.first << "\033[0m";
PrintDesc(it.second, width + 2, sub_str_len);
std::cout << std::endl;
}
delete module;
}
int main(int argc, char* argv[]) {
int opt = 0;
bool getopt = false;
std::string config_file;
std::string module_name;
std::stringstream ss;
if (argc == 1) {
PrintAllModulesDesc();
return 0;
}
while ((opt = getopt_long(argc, argv, "ham:c:v", long_option, nullptr)) != -1) {
getopt = true;
switch (opt) {
case 'h':
Usage();
break;
case 'a':
PrintAllModulesDesc();
break;
case 'm':
ss.clear();
ss.str("");
ss << optarg;
module_name = ss.str();
PrintModuleParameters(module_name);
break;
case 'v':
PrintVersion();
break;
default:
return 0;
}
}
if (!getopt) {
for (int i = 1; i < argc; i++) {
ss.clear();
ss.str("");
ss << argv[i];
module_name = ss.str();
PrintModuleParameters(module_name);
std::cout << std::endl;
}
}
return 0;
}
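The wrapping strategy implemented by GetFirstLetterPos, GetLastSpacePos and PrintDesc above (skip leading spaces at the start of each chunk, then break at the last space inside the window) is a plain greedy word wrap. Below is a hedged restatement of that idea as a self-contained sketch, written in Java to keep this document's examples in one language; it is not a line-for-line translation of the tool, and all names are mine:

```java
import java.util.ArrayList;
import java.util.List;

public class GreedyWrap {

    // Wrap text into lines of at most width characters, breaking at the
    // last space inside the window when one exists (mirrors GetLastSpacePos)
    // and skipping leading spaces of each chunk (mirrors GetFirstLetterPos).
    public static List<String> wrap(String text, int width) {
        List<String> lines = new ArrayList<>();
        int begin = 0;
        while (begin < text.length()) {
            while (begin < text.length() && text.charAt(begin) == ' ') {
                begin++; // skip leading spaces
            }
            if (begin >= text.length()) {
                break;
            }
            int end = Math.min(begin + width, text.length());
            if (end < text.length()) {
                int space = text.lastIndexOf(' ', end);
                if (space > begin) {
                    end = space; // prefer a word boundary
                }
            }
            lines.add(text.substring(begin, end));
            begin = end;
        }
        return lines;
    }

    public static void main(String[] args) {
        for (String line : wrap("Max size of module input queue used by the pipeline", 20)) {
            System.out.println(line);
        }
        // prints:
        // Max size of module
        // input queue used by
        // the pipeline
    }
}
```

Unlike the original, the sketch returns the lines instead of printing them with indentation, which makes the boundary behavior easy to test.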
// UseTempLogFile creates a temporary file and set it as the logfile name.
func UseTempLogFile(prefix string) error {
file, err := ioutil.TempFile(os.TempDir(), prefix)
if err != nil {
CRITICAL.Println(err)
return err
}
fmt.Println("Logging to file:", file.Name())
LogFileWriter = file
arrangeLoggers()
return nil
}
The present invention relates to a tripod type constant velocity joint, which is disposed between rotating shafts connected at a joint angle with each other in a drive axle of, for example, an automobile, for transmitting a rotational torque.
Tripod type constant velocity joints are one of a number of types of constant velocity joints used in drive axles of, for example, automobiles.
For example, Japanese Laid-Open Patent Application Nos. S63(1988)-186036 and S62(1987)-233522 disclose a tripod type constant velocity joint 10, as shown in FIGS. 18 and 19 (A-A cross sectional view of FIG. 18). This constant velocity joint 10 is provided with a hollow cylindrical housing 13, which is secured to an end of a first rotating shaft 12 serving as a drive shaft or the like on the differential gear side, and a tripod 15, which is secured to an end of a second rotating shaft 14 serving as a driven shaft or the like on the wheel side. Grooves 16 are formed at three locations on the internal face of the housing 13 at even spacing in the circumferential direction and extend outwardly in the radial direction of the housing 13 from said internal face.
On the other hand, the tripod 15 secured at one end of the second rotating shaft 14 comprises a unified form of a boss 17 for supporting the tripod 15 at one end of the second rotating shaft 14, and cylindrical trunnions 18 extending radially from three locations at equal spacing around the boss 17 in the circumferential direction. Around the tip end of the respective trunnions 18, rollers 19 are rotatably supported through a needle bearing 20, while allowing the rollers to be displaced in the axial direction by certain distances. A tripod type constant velocity joint 10 is provided by engaging the respective rollers 19 with the respective guide grooves 16 on an inner face of the housing 13. The respective pairs of side faces 11, on which each of the above guide grooves 16 is provided, are formed as circular recesses. Accordingly, each of the rollers 19 is rotatably and pivotably supported between the respective pairs of the side faces 11.
When the constant velocity joint 10 as described above is used, for example, the first rotational shaft 12 is rotated. The rotational force of the first rotational shaft 12 is transmitted from the housing 13, through the roller 19, the needle bearing 20 and the trunnion 18, to the boss 17 of the tripod 15, thereby rotating the second rotational shaft 14. Further, if the central axis of the first rotational shaft 12 is not aligned with that of the second rotational shaft 14 (namely, the joint angle is not zero in the constant velocity joint 10), each of the trunnions 18 displaces relative to the side face 16a of each of the guide grooves 16 to move around the tripod 15, as shown in FIGS. 18 and 19. At this time, the rollers 19 supported at the ends of the trunnions 18 move along the axial directions of the trunnions 18, respectively, while rolling on the side faces 16a of the guide grooves 16, respectively. Such movements ensure that a constant velocity between the first and second rotational shafts 12 and 14 is achieved.
If the first and second rotational shafts 12 and 14 are rotated with a joint angle present, in the case of the constant velocity joint 10 which is constructed and operated as described above, each of the rollers 19 moves in a complex manner. For example, each of the rollers 19 moves along the side faces 16a of the respective guide grooves 16 in the axial direction of the housing 13, while the orientations of the rollers 19 are being changed, and further the rollers 19 displace in the axial direction of the trunnions 18. Such complex movements of the rollers 19 prevent the relative movement between the peripheral outside face of each of the rollers 19 and each of the side faces 16a of the guide grooves 16 from being smoothly effected. Thus, a relatively large friction occurs between the faces. As a result, in the constant velocity joint 10, an axial force occurs three times per rotation. It is known that an adverse oscillation referred to as "shudder" may occur in some cases, if a large torque is transmitted with a relatively large joint angle present.
In order to solve the above problem, FR275280 discloses a structure as shown in FIG. 20 and Japanese Laid-Open Patent Application No. H3-172619 discloses a structure as shown in FIG. 21. In the structure shown in FIG. 20, a roller is guided parallel to a housing groove and a spherical trunnion 18 can swing and pivot along an inner spherical surface of an inner roller 19b. Further, the contact area between the inner spherical surface of the inner roller 19b and the trunnion 18 when receiving a torque under load is shaped as an ellipse having a large long diameter, because the radius "r" of the longitudinal cross-sectional shape of the spherical trunnion 18 is smaller than the radius "r3" of the trunnion 18. In the structure shown in FIG. 21, a torque under load is received between an inner cylindrical surface of an inner roller 19b and a spherical trunnion 18. Thus, the width (short diameter) "b" of the contact ellipse formed therebetween is smaller, and the contact length "a", in the circumference of the contact area, which corresponds to the long diameter of the contact ellipse, is larger. In fact, the contact ellipse is positioned on the side of the trunnion 18 facing the side face 16a of the guide groove 16, although the contact ellipse is shown at the front side for clarification in FIG. 21. When these joints rotate with joint angles present while receiving loads, as shown in FIG. 22, a pivotal movement (in the direction indicated by an arrow "H") of the trunnion 18 causes a pivotal sliding action to occur on the contact ellipse. This pivotal sliding action operates as a spin moment (in the rotational direction indicated by arrows "B") so as to change the rolling direction of the roller assembly 19 comprising the inner roller 19b and the outer roller 19a, which are assembled together via a needle bearing 21.
As a result, the direction of the roller assembly 19 is changed until it comes into contact with the inner or outer face of the guide groove 16, and in addition the contact force is increased. Moreover, the roller assembly 19 is displaced out of parallel with the guide groove 16. Hence, it is difficult for the roller assembly 19 to roll smoothly, bringing about a significant rolling resistance.
It is contemplated to enlarge the difference between the inner diameter of the inner roller 19b and the outer diameter of the trunnion 18, in order to reduce the long diameter "a" of the contact ellipse. In this case, however, a new problem is raised in that the joint exhibits play in the rotational direction.
The object of the present invention is to overcome the above disadvantages of the prior art, that is, to provide a tripod type constant velocity joint having a simple structure which is both highly strong and durable, which can diminish the spin moment acting on the contact ellipse formed between the outer face of the trunnion and the inner face of the inner roller, due to the pivotal sliding movement of the trunnion axis, and which can minimize the rolling resistance when rotating with any joint angle present.
To solve the above problems, according to the invention, a constant velocity joint of tripod type comprises:
A cylindrical hollow housing defining an opening at one end, and being secured at its opposite end to a first rotating shaft such that a central axis of the housing is aligned with that of the first rotating shaft, an inner face of the housing being provided with three guide grooves extending in an axial direction of the housing and being spaced apart equally in a circumferential direction, each groove having a pair of side faces opposed to each other, extending in the axial direction, and a bottom portion connecting between the side faces; and
A tripod provided at an angle normal to a second rotating shaft and secured to one end of the second rotating shaft, the tripod having three trunnions positioned in the grooves, the trunnions being spaced apart equally in the circumferential direction and extending from the second rotating shaft at a normal angle, with respective inner rollers being mounted on outside end portions of the respective trunnions, and with respective outer rollers being mounted on the outer faces of the inner rollers through a needle bearing, the outer faces of the outer rollers being shaped so as to allow movement only in the axial direction of the guide grooves, the side faces receiving a load, and a part of the bottom portion guiding the rolling of the outer roller.
The constant velocity joint of tripod type is characterized in that the inner rollers each have a spherical inner circumferential surface; and
The trunnions have an elliptical shape in the sectional view normal to each of their axes, respectively, and are positioned so that the short diameter of the ellipse is substantially parallel to the second rotating shaft.
According to the invention as constructed above, the elliptical contact areas formed in transmitting torque between the inner spherical face of the inner roller and the outer face of the generally spherical trunnion can be kept relatively small without significant fluctuation during rotation. Thus, it is possible to diminish the spin moment acting on the contact ellipse due to the pivotal sliding movement of the trunnion axis. Accordingly, the invention brings about advantages including a relatively small contact force between the outer roller and the guide groove, stable rolling of the outer roller, a smaller rolling resistance and a lower axial force of the joint.
These and other objects and advantages of the present invention will be more apparent from the following detailed description and drawings in which:
The present invention provides an aqueous cleaning solution for aluminum-based metals which comprises an inorganic acid in an amount to provide a pH value of 2 or less, an oxidized-form metal ion and a surfactant represented by the following formula (I): R-O-(EO)nH (I), wherein R represents an alkyl group having on average 10 to 18 carbon atoms per molecule, n represents an integer of 8 or greater, and EO represents an ethyleneoxy group which may contain a small proportion of a propyleneoxy group. The degradation of cleaning properties due to the accumulation of lubricating oil or decomposition of surfactants is lessened even when the cleaning operation is carried out for a long period of time.
The core has its peaks and valleys. Among the peaks for me were those moments in the statistics requirement when I understood, in a flash of p-set success, just how powerful regressions and confidence intervals can be. At the other end of the spectrum, I have spent many evenings staring at equations, eyes glazed over in total non-comprehension, wondering how economics can call itself a ‘rigorous discipline’ when its fundamental models assume things like zero population growth, total rationality and two-item economies. If an alien arrived on Earth and found only a p-set from ECON 50, it would think our world was a land full of robots who only consumed apples and bananas.
After talking to other students about core requirements, it seems fair to say that the Economics department is not the only department with a difficult and sometimes overly abstract core curriculum. We are all in some form or another — through PWR, if nothing else — subjected to classes that we may not find intrinsically interesting or meaningful all the time.
This surprised me when I first arrived at Stanford. College, I thought, was supposed to be about learning for the sake of learning, about the immediate gratification of world-expanding insight being taught in the classroom. I was supposed to spend class periods learning about how the Fed responds to labor shocks, how tax structures determine social outcomes and why communism doesn’t work so well in practice from an economic perspective. Instead, lectures often involved people named Bob and Jane, who often lived alone on islands where the only thing they could produce was coconuts. Not exactly the Keynesian response to the 2008 recession I was hoping for.
It was not until this quarter, when I enrolled in my first elective — Development Economics — that I began to see what the Econ core had taught me about the world. During the core, concepts like marginal utility and omitted variable bias felt theoretically logical but practically disconnected. Once I got to Development Economics, however, I saw many of those concepts resurfacing in very real-world applications: the marginal benefit of an additional year of schooling in Uganda, for example. Doing my p-sets, I watched as STATA crunched real numbers about population, health outcomes and gender. Suddenly, all the abstract apples-and-bananas nonsense of the core was not nonsense at all; it was the fundamental structure on which education, public health and microfinance stood.
So, if you are a freshman or sophomore (or junior or senior) still plowing your way through your own core sequence and starting to wonder what it all amounts to, don’t despair. Maybe coconut production on a hypothetical desert island seems a bit too removed to have any instructive value, but as soon as you start thinking about Nobel-Prize winning models for economic growth, coconuts on a desert island will suddenly be the difference between confusion and comprehension. Whatever the subject, remember that the abstract and dry material in your classes exists for a reason: to act as scaffolding for the intricacies of planet Earth and all its people, institutions and inventions.
If you still aren’t convinced — or if you’re simply excited to see what’s ahead — ask your professors about the real-world implications of the theories and models they present in class. I guarantee they will be more than happy to explain how their class material relates to real problems; after all, the intersection of theory and reality is what our faculty have dedicated their lives to. I recently went to an office hours appointment with my macroeconomics professor and asked about how a certain model of utility (Cobb-Douglas utility, for all the Econ people out there) could be used in the real world. She proceeded to explain, with great excitement, that this model happens to closely mirror the behavior of Americans when deciding how much to spend on housing (though interestingly, it differs in her native country of Germany). Had I not gone to office hours and asked the question, I would have gone on thinking that Cobb-Douglas was yet another simplification for undergraduate classes that had no real-world value.
So ask questions, stay engaged and trust that Stanford’s pedagogical approach is not all based on confusing fluff. Core curricula are not perfect, but they exist for a reason. Take advantage of your time here and find out what that reason is.
Contact Avery Rogers with your core requirement complaints at averyr ‘at’ stanford.edu.
I have pretty bad insomnia. This means I think a lot: the Grind section is a place to put all of that overthinking down on paper. I love to make people think and [hopefully] give people some inspiration and solace about whatever they've had on their minds!
// If true, this MIME type can likely be interpreted directly by browsers and
// should not be recorded as a plugin.
bool DetermineAllowedFromMime(const std::string& mime_type) {
for (size_t i = 0; i < arraysize(kAllowedMimeTypes); ++i) {
if (StringCaseStartsWith(mime_type, kAllowedMimeTypes[i])) {
return true;
}
}
return false;
}
package com.sohu.tv.mq.cloud.service;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Properties;
import javax.annotation.PostConstruct;
import org.apache.commons.lang3.math.NumberUtils;
import org.apache.rocketmq.tools.admin.MQAdminExt;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import com.sohu.tv.mq.cloud.bo.Cluster;
import com.sohu.tv.mq.cloud.dao.ClusterDao;
import com.sohu.tv.mq.cloud.util.Result;
/**
 * Cluster service
 *
 * @author yongfeigao
 * @date 2018-10-10
 */
@Service
public class ClusterService {
private final Logger logger = LoggerFactory.getLogger(this.getClass());
@Autowired
private ClusterDao clusterDao;
// cluster holder; cached in memory right after initialization
private volatile Cluster[] mqClusterArray;
@PostConstruct
public void refresh() {
Result<List<Cluster>> clusterListResult = queryAll();
if (clusterListResult.isEmpty()) {
logger.error("no cluster data found!");
return;
}
List<Cluster> list = clusterListResult.getResult();
if (mqClusterArray == null || mqClusterArray.length != list.size()) {
logger.info("cluster config refreshed, old:{} new:{}", Arrays.toString(mqClusterArray), list);
mqClusterArray = clusterListResult.getResult().toArray(new Cluster[list.size()]);
}
}
/**
 * Query all clusters.
 *
 * @return result containing the cluster list.
 */
public Result<List<Cluster>> queryAll() {
List<Cluster> list = null;
try {
list = clusterDao.select();
} catch (Exception e) {
logger.error("queryAll", e);
return Result.getDBErrorResult(e);
}
return Result.getResult(list);
}
public Cluster[] getAllMQCluster() {
return mqClusterArray;
}
/**
 * Get the first cluster id.
 *
 * @return id of the first cluster.
 */
public int getFirstClusterId() {
return mqClusterArray[0].getId();
}
/**
 * Get the trace cluster id.
 *
 * @return id of the first trace-enabled cluster, or -1 if none.
 */
public int getTraceClusterId() {
for (Cluster cluster : mqClusterArray) {
if (cluster.isEnableTrace()) {
return cluster.getId();
}
}
return -1;
}
/**
 * Get the trace cluster id list.
 *
 * @return ids of all trace-enabled clusters.
 */
public List<Integer> getTraceClusterIdList() {
List<Integer> ids = new ArrayList<Integer>();
if (mqClusterArray == null) {
return ids;
}
for (Cluster cluster : mqClusterArray) {
if (cluster.isEnableTrace()) {
ids.add(cluster.getId());
}
}
return ids;
}
/**
 * Find a cluster by id.
 *
 * @param id cluster id.
 * @return the cluster, or null if not found.
 */
public Cluster getMQClusterById(long id) {
for (Cluster mqCluster : getAllMQCluster()) {
if (id == mqCluster.getId()) {
return mqCluster;
}
}
return null;
}
/**
 * Find a cluster by name.
 *
 * @param name cluster name.
 * @return the cluster, or null if not found.
 */
public Cluster getMQClusterByName(String name) {
for (Cluster mqCluster : getAllMQCluster()) {
if (mqCluster.getName().equals(name)) {
return mqCluster;
}
}
return null;
}
/**
 * Save data.
 *
 * @return result containing the number of inserted rows.
 */
public Result<Integer> save(Cluster cluster) {
Integer result = null;
try {
result = clusterDao.insert(cluster);
refresh();
} catch (Exception e) {
logger.error("save:{}", cluster, e);
return Result.getDBErrorResult(e);
}
return Result.getResult(result);
}
/**
 * Update fileReservedTime.
 *
 * @param mqAdmin MQ admin client.
 * @param cid cluster id.
 * @param brokerAddr broker address.
 * @throws Exception
 */
public void updateFileReservedTime(MQAdminExt mqAdmin, int cid, String brokerAddr) throws Exception {
Properties properties = mqAdmin.getBrokerConfig(brokerAddr);
String fileReservedTime = properties.getProperty("fileReservedTime");
update(cid, NumberUtils.toInt(fileReservedTime));
}
public void update(int cid, int fileReservedTime) {
Cluster cluster = getMQClusterById(cid);
if (cluster == null) {
return;
}
if (fileReservedTime > cluster.getFileReservedTime()) {
cluster.setFileReservedTime(fileReservedTime);
}
}
}
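ClusterService above follows a small set of lookup conventions: a linear scan over the cached array, null when an id or name does not match, and -1 when no trace-enabled cluster exists. The sketch below illustrates that contract with invented, dependency-free stand-ins (MiniCluster, ClusterLookup); it is not the real bean or service:

```java
// Invented stand-in for the Cluster bean; field names mirror the service above.
class MiniCluster {
    final int id;
    final String name;
    final boolean enableTrace;

    MiniCluster(int id, String name, boolean enableTrace) {
        this.id = id;
        this.name = name;
        this.enableTrace = enableTrace;
    }
}

public class ClusterLookup {
    private final MiniCluster[] clusters;

    ClusterLookup(MiniCluster... clusters) {
        this.clusters = clusters;
    }

    // Linear scan by id; null signals "not found", as in getMQClusterById().
    MiniCluster byId(long id) {
        for (MiniCluster c : clusters) {
            if (c.id == id) {
                return c;
            }
        }
        return null;
    }

    // First trace-enabled cluster wins; -1 means none, as in getTraceClusterId().
    int traceClusterId() {
        for (MiniCluster c : clusters) {
            if (c.enableTrace) {
                return c.id;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        ClusterLookup lookup = new ClusterLookup(
                new MiniCluster(1, "default", false),
                new MiniCluster(2, "trace", true));
        System.out.println(lookup.byId(2).name);     // trace
        System.out.println(lookup.traceClusterId()); // 2
    }
}
```

Callers of the real service must handle the null and -1 sentinels the same way.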
/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.hadoop.tools.mapred;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.OutputCommitter;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.apache.hadoop.mapreduce.security.TokenCache;
import org.apache.hadoop.tools.DistCpConstants;

import java.io.IOException;

/**
 * The CopyOutputFormat is the Hadoop OutputFormat used in DistCp.
 * It sets up the Job's Configuration (in the Job-Context) with the settings
 * for the work-directory, final commit-directory, etc. It also sets the right
 * output-committer.
 * @param <K>
 * @param <V>
 */
public class CopyOutputFormat<K, V> extends TextOutputFormat<K, V> {

  /**
   * Setter for the working directory for DistCp (where files will be copied
   * before they are moved to the final commit-directory.)
   * @param job The Job on whose configuration the working-directory is to be set.
   * @param workingDirectory The path to use as the working directory.
   */
  public static void setWorkingDirectory(Job job, Path workingDirectory) {
    job.getConfiguration().set(DistCpConstants.CONF_LABEL_TARGET_WORK_PATH,
        workingDirectory.toString());
  }

  /**
   * Setter for the final directory for DistCp (where files copied will be
   * moved, atomically.)
   * @param job The Job on whose configuration the working-directory is to be set.
   * @param commitDirectory The path to use for final commit.
   */
  public static void setCommitDirectory(Job job, Path commitDirectory) {
    job.getConfiguration().set(DistCpConstants.CONF_LABEL_TARGET_FINAL_PATH,
        commitDirectory.toString());
  }

  /**
   * Getter for the working directory.
   * @param job The Job from whose configuration the working-directory is to
   * be retrieved.
   * @return The working-directory Path.
   */
  public static Path getWorkingDirectory(Job job) {
    return getWorkingDirectory(job.getConfiguration());
  }

  private static Path getWorkingDirectory(Configuration conf) {
    String workingDirectory = conf.get(DistCpConstants.CONF_LABEL_TARGET_WORK_PATH);
    if (workingDirectory == null || workingDirectory.isEmpty()) {
      return null;
    } else {
      return new Path(workingDirectory);
    }
  }

  /**
   * Getter for the final commit-directory.
   * @param job The Job from whose configuration the commit-directory is to be
   * retrieved.
   * @return The commit-directory Path.
   */
  public static Path getCommitDirectory(Job job) {
    return getCommitDirectory(job.getConfiguration());
  }

  private static Path getCommitDirectory(Configuration conf) {
    String commitDirectory = conf.get(DistCpConstants.CONF_LABEL_TARGET_FINAL_PATH);
    if (commitDirectory == null || commitDirectory.isEmpty()) {
      return null;
    } else {
      return new Path(commitDirectory);
    }
  }

  /** {@inheritDoc} */
  @Override
  public OutputCommitter getOutputCommitter(TaskAttemptContext context) throws IOException {
    return new CopyCommitter(getOutputPath(context), context);
  }

  /** {@inheritDoc} */
  @Override
  public void checkOutputSpecs(JobContext context) throws IOException {
    Configuration conf = context.getConfiguration();

    if (getCommitDirectory(conf) == null) {
      throw new IllegalStateException("Commit directory not configured");
    }

    Path workingPath = getWorkingDirectory(conf);
    if (workingPath == null) {
      throw new IllegalStateException("Working directory not configured");
    }

    // get delegation token for outDir's file system
    TokenCache.obtainTokensForNamenodes(context.getCredentials(),
        new Path[] {workingPath}, conf);
  }
}
// Godeps/_workspace/src/github.com/sclevine/agouti/matchers/internal/page/have_popup_text_test.go
package page_test
import (
"errors"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
"github.com/sclevine/agouti/matchers/internal/mocks"
. "github.com/sclevine/agouti/matchers/internal/page"
)
var _ = Describe("HavePopupTextMatcher", func() {
var (
matcher *HavePopupTextMatcher
page *mocks.Page
)
BeforeEach(func() {
page = &mocks.Page{}
page.PopupTextCall.ReturnText = "some text"
matcher = &HavePopupTextMatcher{ExpectedText: "some text"}
})
Describe("#Match", func() {
Context("when the actual object is page", func() {
Context("when the expected text matches the actual text", func() {
It("should successfully return true", func() {
page.PopupTextCall.ReturnText = "some text"
Expect(matcher.Match(page)).To(BeTrue())
})
})
Context("when the expected text does not match the actual text", func() {
It("should successfully return false", func() {
page.PopupTextCall.ReturnText = "some other text"
Expect(matcher.Match(page)).To(BeFalse())
})
})
Context("when retrieving the popup text fails", func() {
It("should return an error", func() {
page.PopupTextCall.Err = errors.New("some error")
_, err := matcher.Match(page)
Expect(err).To(MatchError("some error"))
})
})
})
Context("when the actual object is not a page", func() {
It("should return an error", func() {
_, err := matcher.Match("not a page")
Expect(err).To(MatchError("HavePopupText matcher requires a Page. Got:\n <string>: not a page"))
})
})
})
Describe("#FailureMessage", func() {
It("should return a failure message", func() {
page.PopupTextCall.ReturnText = "some other text"
matcher.Match(page)
message := matcher.FailureMessage(page)
Expect(message).To(ContainSubstring("Expected page to have popup text matching\n some text"))
Expect(message).To(ContainSubstring("but found\n some other text"))
})
})
Describe("#NegatedFailureMessage", func() {
It("should return a negated failure message", func() {
page.PopupTextCall.ReturnText = "some text"
matcher.Match(page)
message := matcher.NegatedFailureMessage(page)
Expect(message).To(ContainSubstring("Expected page not to have popup text matching\n some text"))
Expect(message).To(ContainSubstring("but found\n some text"))
})
})
})
from typing import DefaultDict, Dict, List


def _process_cv_results(self, cv_scores: DefaultDict[str, List], scores: Dict[str, float]) -> None:
    """Append this fold's score for each tracked metric to the running per-metric lists."""
    scoring = ['accuracy', 'recall', 'precision', 'f1']
    for name in scoring:
        cv_scores[name].append(scores[name])
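For context, here is a minimal, self-contained sketch of how a helper like this might be driven over several cross-validation folds. The wrapper class, the fold scores, and the averaging step are hypothetical, added purely for illustration; they are not part of the original code.

```python
from collections import defaultdict
from statistics import mean
from typing import DefaultDict, Dict, List


class CVAggregator:
    """Hypothetical wrapper that accumulates per-fold metric scores."""

    def _process_cv_results(self, cv_scores: DefaultDict[str, List],
                            scores: Dict[str, float]) -> None:
        # Same logic as the fragment above: append each metric's fold score.
        scoring = ['accuracy', 'recall', 'precision', 'f1']
        for name in scoring:
            cv_scores[name].append(scores[name])


# Drive it with two made-up folds, then average each metric across folds.
agg = CVAggregator()
cv_scores: DefaultDict[str, List] = defaultdict(list)
for fold_scores in [
    {'accuracy': 0.90, 'recall': 0.80, 'precision': 0.85, 'f1': 0.82},
    {'accuracy': 0.92, 'recall': 0.84, 'precision': 0.87, 'f1': 0.85},
]:
    agg._process_cv_results(cv_scores, fold_scores)

summary = {name: mean(vals) for name, vals in cv_scores.items()}
```

Using a `defaultdict(list)` means the caller never has to pre-create the per-metric lists before the first fold is recorded.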
# chrisba11/stock_portfolio: migrations/versions/18d63d7c4eb2_.py
"""empty message
Revision ID: 18d63d7c4eb2
Revises: <PASSWORD>
Create Date: 2019-03-08 16:45:45.518227
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = '18d63d7c4eb2'
down_revision = 'c<PASSWORD>'
branch_labels = None
depends_on = None
def upgrade():
# ### commands auto generated by Alembic - please adjust! ###
op.add_column('portfolios', sa.Column('user_id', sa.Integer(), nullable=False))
op.drop_index('ix_portfolios_portfolio_name', table_name='portfolios')
op.create_index(op.f('ix_portfolios_portfolio_name'), 'portfolios', ['portfolio_name'], unique=False)
op.create_foreign_key(None, 'portfolios', 'users', ['user_id'], ['id'])
# ### end Alembic commands ###
def downgrade():
# ### commands auto generated by Alembic - please adjust! ###
op.drop_constraint(None, 'portfolios', type_='foreignkey')
op.drop_index(op.f('ix_portfolios_portfolio_name'), table_name='portfolios')
op.create_index('ix_portfolios_portfolio_name', 'portfolios', ['portfolio_name'], unique=True)
op.drop_column('portfolios', 'user_id')
# ### end Alembic commands ###
/**
* Reset everything when cancelling analysis. Called by parent controller.
*/
protected void resetOnCancel() {
cellTracksChartPanels = new ArrayList<>();
rosePlotChartPanels = new ArrayList<>();
cellTracksData = new ArrayList<>();
filteredData = Boolean.FALSE;
speedBoxPlotChartPanel = new ChartPanel(null);
speedBoxPlotChartPanel.setOpaque(false);
directPlotChartPanel = new ChartPanel(null);
directPlotChartPanel.setOpaque(false);
speedKDEChartPanel = new ChartPanel(null);
speedKDEChartPanel.setOpaque(false);
singleCellStatisticsController.resetOnCancel();
analysisPanel.getCellTracksRadioButton().setSelected(true);
}
/**
*
 * Structured data type containing several elements.
 * (Attention: in the catalog this is referred to as TableType, but a TableType in CDS really
 * means several rows!)
*
*
* <p>Java class for StructureType complex type.
*
* <p>The following schema fragment specifies the expected content contained within this class.
*
* <pre>
* <complexType name="StructureType">
* <complexContent>
* <extension base="{http://www.sap.com/ndb/DataModelType.ecore}DataType">
* <sequence>
* <element name="element" type="{http://www.sap.com/ndb/DataModelType.ecore}Element" maxOccurs="unbounded"/>
* <element name="keyElement" type="{http://www.sap.com/ndb/RepositoryModelResource.ecore}Identifier" maxOccurs="unbounded" minOccurs="0"/>
* <element name="displayFolder" type="{http://www.sap.com/ndb/DataModelType.ecore}DisplayFolder" maxOccurs="unbounded" minOccurs="0"/>
* </sequence>
* <attribute name="catalogOnly" type="{http://www.w3.org/2001/XMLSchema}boolean" />
* <attribute name="physicalSchemaName" type="{http://www.w3.org/2001/XMLSchema}string" />
* <attribute name="authoringSchemaName" type="{http://www.w3.org/2001/XMLSchema}string" />
* <attribute name="physicalDatabaseName" type="{http://www.w3.org/2001/XMLSchema}string" />
* <attribute name="authoringDatabaseName" type="{http://www.w3.org/2001/XMLSchema}string" />
* </extension>
* </complexContent>
* </complexType>
* </pre>
*
*
*/
@XmlAccessorType(XmlAccessType.FIELD)
@XmlType(name = "StructureType", propOrder = {
"element",
"keyElement",
"displayFolder"
})
@XmlSeeAlso({
Entity.class
})
public class StructureType
extends DataType
{
@XmlElement(required = true)
protected List<Element> element;
@XmlJavaTypeAdapter(CollapsedStringAdapter.class)
@XmlSchemaType(name = "token")
protected List<String> keyElement;
protected List<DisplayFolder> displayFolder;
@XmlAttribute(name = "catalogOnly")
protected Boolean catalogOnly;
@XmlAttribute(name = "physicalSchemaName")
protected String physicalSchemaName;
@XmlAttribute(name = "authoringSchemaName")
protected String authoringSchemaName;
@XmlAttribute(name = "physicalDatabaseName")
protected String physicalDatabaseName;
@XmlAttribute(name = "authoringDatabaseName")
protected String authoringDatabaseName;
/**
* Gets the value of the element property.
*
* <p>
* This accessor method returns a reference to the live list,
* not a snapshot. Therefore any modification you make to the
* returned list will be present inside the JAXB object.
* This is why there is not a <CODE>set</CODE> method for the element property.
*
* <p>
* For example, to add a new item, do as follows:
* <pre>
* getElement().add(newItem);
* </pre>
*
*
* <p>
* Objects of the following type(s) are allowed in the list
* {@link Element }
*
*
*/
public List<Element> getElement() {
if (element == null) {
element = new ArrayList<Element>();
}
return this.element;
}
/**
* Gets the value of the keyElement property.
*
* <p>
* This accessor method returns a reference to the live list,
* not a snapshot. Therefore any modification you make to the
* returned list will be present inside the JAXB object.
* This is why there is not a <CODE>set</CODE> method for the keyElement property.
*
* <p>
* For example, to add a new item, do as follows:
* <pre>
* getKeyElement().add(newItem);
* </pre>
*
*
* <p>
* Objects of the following type(s) are allowed in the list
* {@link String }
*
*
*/
public List<String> getKeyElement() {
if (keyElement == null) {
keyElement = new ArrayList<String>();
}
return this.keyElement;
}
/**
* Gets the value of the displayFolder property.
*
* <p>
* This accessor method returns a reference to the live list,
* not a snapshot. Therefore any modification you make to the
* returned list will be present inside the JAXB object.
* This is why there is not a <CODE>set</CODE> method for the displayFolder property.
*
* <p>
* For example, to add a new item, do as follows:
* <pre>
* getDisplayFolder().add(newItem);
* </pre>
*
*
* <p>
* Objects of the following type(s) are allowed in the list
* {@link DisplayFolder }
*
*
*/
public List<DisplayFolder> getDisplayFolder() {
if (displayFolder == null) {
displayFolder = new ArrayList<DisplayFolder>();
}
return this.displayFolder;
}
/**
* Gets the value of the catalogOnly property.
*
* @return
* possible object is
* {@link Boolean }
*
*/
public Boolean isCatalogOnly() {
return catalogOnly;
}
/**
* Sets the value of the catalogOnly property.
*
* @param value
* allowed object is
* {@link Boolean }
*
*/
public void setCatalogOnly(Boolean value) {
this.catalogOnly = value;
}
/**
* Gets the value of the physicalSchemaName property.
*
* @return
* possible object is
* {@link String }
*
*/
public String getPhysicalSchemaName() {
return physicalSchemaName;
}
/**
* Sets the value of the physicalSchemaName property.
*
* @param value
* allowed object is
* {@link String }
*
*/
public void setPhysicalSchemaName(String value) {
this.physicalSchemaName = value;
}
/**
* Gets the value of the authoringSchemaName property.
*
* @return
* possible object is
* {@link String }
*
*/
public String getAuthoringSchemaName() {
return authoringSchemaName;
}
/**
* Sets the value of the authoringSchemaName property.
*
* @param value
* allowed object is
* {@link String }
*
*/
public void setAuthoringSchemaName(String value) {
this.authoringSchemaName = value;
}
/**
* Gets the value of the physicalDatabaseName property.
*
* @return
* possible object is
* {@link String }
*
*/
public String getPhysicalDatabaseName() {
return physicalDatabaseName;
}
/**
* Sets the value of the physicalDatabaseName property.
*
* @param value
* allowed object is
* {@link String }
*
*/
public void setPhysicalDatabaseName(String value) {
this.physicalDatabaseName = value;
}
/**
* Gets the value of the authoringDatabaseName property.
*
* @return
* possible object is
* {@link String }
*
*/
public String getAuthoringDatabaseName() {
return authoringDatabaseName;
}
/**
* Sets the value of the authoringDatabaseName property.
*
* @param value
* allowed object is
* {@link String }
*
*/
public void setAuthoringDatabaseName(String value) {
this.authoringDatabaseName = value;
}
}
Increased Iron Sequestration in Alveolar Macrophages in Chronic Obstructive Pulmonary Disease

Free iron in the lung can cause the generation of reactive oxygen species, an important factor in chronic obstructive pulmonary disease (COPD) pathogenesis. Iron accumulation has been implicated in oxidative stress in other diseases, such as Alzheimer's and Parkinson's diseases, but little is known about iron accumulation in COPD. We sought to determine if iron content and the expression of iron transport and/or storage genes in the lung differ between controls and COPD subjects, and whether changes in these correlate with airway obstruction. Explanted lung tissue was obtained from transplant donors, GOLD 2-3 COPD subjects, and GOLD 4 lung transplant recipients, and bronchoalveolar lavage (BAL) cells were obtained from non-smokers, healthy smokers, and GOLD 1-3 COPD subjects. Iron-positive cells were quantified histologically, and the expression of iron uptake (transferrin and transferrin receptor), storage (ferritin) and export (ferroportin) genes was examined by real-time RT-PCR assay. The percentage of iron-positive cells and expression levels of iron metabolism genes were examined for correlations with airflow limitation indices (forced expiratory volume in the first second (FEV1) and the ratio between FEV1 and forced vital capacity (FEV1/FVC)). The alveolar macrophage was identified as the predominant iron-positive cell type in lung tissues. Furthermore, the quantity of iron deposits and the percentage of iron-positive macrophages increased with COPD and emphysema severity. The mRNA expression of the iron uptake and storage genes transferrin and ferritin was significantly increased in GOLD 4 COPD lungs compared to donors (6.9- and 3.22-fold increase, respectively). In BAL cells, the mRNA expression of transferrin, transferrin receptor and ferritin correlated with airway obstruction.
These results support activation of an iron sequestration mechanism by alveolar macrophages in COPD, which we postulate is a protective mechanism against iron-induced oxidative stress.

Introduction

Iron is critical for the maintenance of cell homeostasis, having important roles in respiration, DNA synthesis, energy production, and metabolism. However, excess iron can be detrimental because of its potential to generate harmful free radicals. Because of this, tight regulation of iron metabolism is essential. Perturbation from normal physiologic iron concentrations has been associated with the pathogenesis of aging, neurodegenerative disease, and cancer, presumably via the generation of excess reactive oxygen species (ROS). The role of iron in other diseases in which oxidative stress has been implicated remains to be determined. Chronic obstructive pulmonary disease (COPD), comprising irreversible airway obstruction and alveolar space enlargement or emphysema, is a major cause of mortality and morbidity worldwide. Cigarette smoke is the main etiological factor of COPD, and it triggers an inflammatory response in the lung. Oxidative stress induced by the free radicals in tobacco smoke and produced by inflammatory cells has been strongly implicated in the pathogenesis of COPD. In addition, excess iron accumulation in the lung has been reported in association with cigarette smoke and severe emphysema. Moreover, cigarette smoke can alter lung iron metabolism in animal models. However, it is unknown where iron accumulates in the lungs of COPD subjects, whether expression of iron uptake and storage genes in the lung differs between controls and subjects with COPD, and whether changes in iron metabolism correlate with disease severity.
This study sought to 1) quantify the iron deposits in the lung tissue of lung transplant donors, GOLD 2-3 (moderate to severe COPD), and GOLD 4 (very severe COPD) subjects, and in bronchoalveolar lavage (BAL) cells from smokers, non-smokers, and GOLD 1-3 COPD subjects, 2) identify the iron-accumulating cell types in the lung parenchyma, 3) determine the expression of transferrin and transferrin receptor (iron uptake), ferritin (iron storage) and ferroportin (iron export), and 4) determine correlations of changes in iron metabolism gene expression with airflow limitation indices (forced expiratory volume in the first second (FEV1) and the ratio between FEV1 and forced vital capacity (FEV1/FVC)), which are indicative of COPD severity.

Ethics Statement

The lung parenchyma study was approved by the Human Studies Committee of Washington University, and the bronchoalveolar lavage study was approved by the Institutional Review Board of the University Hospital of Reims.

Subjects, Lung Processing, Sampling, and Collection of BAL

Lung samples were obtained from 20 GOLD 4 COPD subjects receiving lung transplants, 9 GOLD 2-3 COPD subjects undergoing resection of lung cancer (avoiding areas affected by tumor), and 8 non-COPD lung donors obtained following size adjustment for transplantation as controls. The lungs were processed as previously described. BAL samples were obtained from a second set of non-cancer, GOLD 1-3 COPD subjects, healthy smokers and healthy non-smokers who underwent fiberoptic bronchoscopy according to American Thoracic Society recommendations. Briefly, BAL was performed by instilling saline solution into a sub-segmental bronchus, followed by aspiration and discarding of the first 50 ml aliquot. The remaining BAL fluid was centrifuged and the cells were used for this study.
In both subject sets, COPD diagnosis and GOLD classification were based on spirometric pulmonary function tests according to the Global Initiative for Chronic Obstructive Lung Disease consensus statement, and informed, written consent was obtained from each subject.

Quantification and Identification of Iron-positive Cells

Lung sections were stained with Perls-DAB for iron content (brown-black color) and anti-CD68 (red color) to identify macrophages. Dark-staining anthracotic material and macrophages with dark-colored content were assessed on a consecutive serial section stained with nuclear fast red only (American Master Tech Scientific Inc., St. Lodi, CA). Slides were scanned using a NanoZoomer 2.0 (Hamamatsu Photonics, K.K., Japan) and the staining was quantified using Image-Pro Plus Software (Media-Cybernetics, Silver Spring, MD). The area of cells positive for iron was calculated using the following formula:

Iron-positive cell area (%) = (Area of brown-black color on the Perls-DAB-stained slide − Area of brown-black color on the nuclear fast red slide) / (Area of pink color on the nuclear fast red slide) × 100.

The percentage of iron-positive macrophages was determined on Perls-DAB-CD68 co-stained slides from 3 randomly selected 10x fields containing ≥10 macrophages per field.

RNA Isolation and Quantitative Real-time RT-PCR

Total RNA was isolated from human donor and GOLD 4 COPD lung tissue samples, and BAL cells, using TRIzol reagent (Invitrogen, Carlsbad, CA). Quantitative real-time RT-PCR was performed as follows: cDNA was synthesized using SuperScript II reverse transcriptase (Invitrogen).
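The quantification formula above can be written out directly. The following is a small illustrative sketch; the function name and the example area values are ours, not taken from the authors' Image-Pro Plus pipeline.

```python
def iron_positive_cell_area(perls_dark_area: float,
                            fastred_dark_area: float,
                            fastred_pink_area: float) -> float:
    """Percentage of cell area positive for iron.

    Implements: (dark area on the Perls-DAB slide minus dark area on the
    nuclear-fast-red-only slide, i.e. anthracotic material) divided by the
    total cell (pink) area on the nuclear-fast-red slide, times 100.
    """
    return (perls_dark_area - fastred_dark_area) / fastred_pink_area * 100.0


# Example with invented pixel areas:
pct = iron_positive_cell_area(perls_dark_area=1500.0,
                              fastred_dark_area=300.0,
                              fastred_pink_area=60000.0)
# (1500 - 300) / 60000 * 100 = 2.0
```

Subtracting the dark area of the nuclear-fast-red-only serial section is what removes anthracotic material from the count, so only genuine Perls-positive (iron) staining contributes to the percentage.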
Real-time RT-PCR employed the Fast SYBR Green Master Mix (Applied Biosystems, Foster City, CA) and gene-specific primers (Table S1) on an Eco Real-Time PCR System (Illumina, San Diego, CA). Results were standardized using the delta-delta CT method, using the average expression of GAPDH, HPRT1 (hypoxanthine phosphoribosyltransferase-1) and PPIA (peptidylprolyl isomerase-1) for normalization.

Morphological Analysis

For 14 GOLD 4 COPD and 4 non-COPD subjects, CT scans of the frozen lungs were performed and analyzed as previously described. Briefly, the mean radiograph attenuation, expressed in Hounsfield Units (HU), was determined in the CT section corresponding to the lung area of the tissue samples using a separate image processing program (ImageJ; available at: http://rsb.info.nih.gov/ij).

Statistical Analysis

A Student's t test was used to compare two groups, an ANOVA with a Tukey-Kramer post-hoc test was used to compare more than two groups, and Spearman rank correlation was used to test for correlations between variables, with p ≤ 0.05 considered significant. Statistical analysis was performed using Excel 2011 (Microsoft Corporation, Redmond, WA).

Patient Characteristics

Peripheral lung tissue was obtained from 20 GOLD 4 COPD subjects receiving lung transplants for severe emphysema (GOLD 4), 9 GOLD 2-3 COPD subjects undergoing resection of lung cancer (avoiding areas affected by tumor), and 8 non-COPD donor lungs as controls. Their clinical and demographic characteristics are displayed in Table 1. As expected, the donor group consisted of younger, non-smoker subjects. Pulmonary function tests were not obtained from donors. In addition to lung tissue samples, BAL cells were obtained from another set of subjects: 8 healthy non-smokers, 8 healthy smokers, and 10 GOLD 1-3 COPD subjects. Their clinical and demographic information is presented in Table 2.
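As a worked illustration of the delta-delta CT standardization described above, here is a minimal sketch. The Ct values are invented, and we assume the usual 2^(−ΔΔCT) fold-change convention, with the reference Ct taken as the mean of the three housekeeping genes.

```python
from statistics import mean


def fold_change_ddct(ct_target_sample: float, ct_refs_sample: list,
                     ct_target_control: float, ct_refs_control: list) -> float:
    """Relative expression by the delta-delta CT method.

    dCt = Ct(target) - mean Ct(reference genes), computed per condition;
    ddCt = dCt(sample) - dCt(control); fold change = 2 ** -ddCt.
    """
    d_ct_sample = ct_target_sample - mean(ct_refs_sample)
    d_ct_control = ct_target_control - mean(ct_refs_control)
    return 2.0 ** -(d_ct_sample - d_ct_control)


# Invented Ct values: the target amplifies 2 cycles earlier in the sample,
# relative to the reference-gene mean, than in the control -> 4-fold increase.
fc = fold_change_ddct(24.0, [20.0, 21.0, 22.0], 26.0, [20.0, 21.0, 22.0])
# fc == 4.0
```

Averaging several reference genes, as done here, dampens the effect of any single housekeeping gene drifting between conditions.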
As expected, subjects with COPD had increased airflow limitation compared to the healthy non-smoker and healthy smoker groups. The BAL differentials are also presented in Table 2. The total number of inflammatory cells in the BAL fluid of smokers and COPD subjects was higher than in non-smokers. However, in each group the predominant cell type (almost 90%) in the BAL was macrophages.

Increased Iron Deposition in Severe COPD Lungs

Iron accumulation in the lung was examined in 20 GOLD 4 COPD, 9 GOLD 2-3 COPD, and 8 non-COPD lungs by Perls-DAB staining. To distinguish between iron deposits and other dark anthracotic material, consecutive serial sections were stained with nuclear fast red only. To quantify the iron deposits, we calculated the iron-positive cell area by taking the area of brown-black color on the Perls-DAB-stained slide minus the area of brown-black color on the nuclear fast red-stained slide, divided by the area of pink color on the nuclear fast red slide. Iron deposits were rarely found in non-COPD lung parenchyma (Fig. 1A) but were detectable in the parenchyma of GOLD 2-3 COPD lungs and appeared abundant in GOLD 4 COPD lungs (Fig. 1B and 1C, respectively). The iron-positive cell area was significantly increased in severe GOLD 4 COPD lungs (23±16%) compared to non-COPD (1.1±1.5%, p = 1.9×10⁻⁵) or GOLD 2-3 COPD (1.6±1.5%, p = 1.9×10⁻⁵) lungs (Fig. 1D). Moreover, the iron-positive cell area was found to correlate with the mean radiograph attenuation at the level of the lung tissue sample, which is an index of emphysema severity (Fig. 1E). Our data suggest that excess iron accumulation is also associated with COPD and emphysema severity, with an increase in lung iron content in GOLD 4 COPD relative to GOLD 2-3 COPD subjects despite similar smoking pack-years.

Iron is Localized in Macrophages in COPD Lungs

To quantify the prevalence of iron-positive macrophages, we performed co-staining of Perls-DAB and CD68 in lung sections.
As shown in Figure 2B/B' and 2C/C', iron co-localized with macrophages in GOLD 2-3 and GOLD 4 COPD lungs, respectively, but not in non-COPD lungs (Fig. 2A/A'). The percentage of iron-positive macrophages was increased in GOLD 2-3 COPD lungs (26±19%, p = 6.1×10⁻¹²) and GOLD 4 COPD lungs (68±16%, p = 6.1×10⁻¹²) compared to non-COPD lungs (3.5±2.8%). Interestingly, the percentage of iron-positive macrophages correlated with the mean radiograph attenuation of the lung tissue sample (Fig. 2E). These data suggest that the percentage of iron-positive macrophages increases with the severity of COPD and emphysema.

Increased Expression of Iron Uptake Genes in COPD Lungs

Iron metabolism needs to be tightly regulated because of the potential harmful effects of excess free iron. Free iron is bound by transferrin, taken into cells by the transferrin receptor, and stored in cells bound to ferritin. Iron can be exported from cells via ferroportin. The expression of these genes is tightly regulated via the iron-responsive element-binding proteins (IREBs), which are able to interact with the 5′ or 3′ untranslated regions of their mRNAs. Of the two orthologs, IREB2 has been associated with airflow limitation in GWA studies. To determine whether the increased iron accumulation in COPD alveolar macrophages was a result of an increase in the expression of mRNAs encoding iron uptake proteins, we assessed the expression of transferrin and the transferrin receptor by real-time RT-PCR using RNA from 8 non-COPD lung samples and 16 GOLD 4 COPD lung samples. Transferrin expression was significantly increased in GOLD 4 COPD lungs compared to non-COPD lungs (fold increase = 6.9, p = 5.4×10⁻⁶, Fig. 3A). There was no significant difference in the expression of the transferrin receptor between GOLD 4 COPD lungs and non-COPD lungs (Fig. 3B).
To determine which cells in the lungs expressed transferrin and whether the expression of transferrin was associated with iron deposition, lung sections of non-COPD and GOLD 4 COPD subjects were co-stained for Perls-DAB and transferrin. Non-COPD lungs showed scant staining for transferrin, localized mainly to alveolar macrophages based on location and cell morphology (Fig. 3C). However, in GOLD 4 COPD lungs, the majority of the transferrin-positive cells were parenchymal cells, and not the iron-positive alveolar macrophages (Fig. 3D). Together, these data suggest that iron-uptake gene expression is increased in severe COPD lungs compared to non-COPD lungs, but the iron-binding protein transferrin is not expressed by the macrophages that accumulate the iron.

Expression of Iron Retention and Homeostasis Genes in COPD Lungs

Net iron accumulation could also be caused by an increase in cellular iron retention and/or a decrease in iron export. Accordingly, we examined the expression of genes related to iron retention and export, ferritin and ferroportin, respectively, by real-time RT-PCR using RNA from 8 non-COPD lung samples and 16 GOLD 4 COPD lung samples. Ferritin mRNA expression was significantly increased in GOLD 4 COPD lungs compared to non-COPD lungs (fold increase = 3.22, p = 0.031, Fig. 4A), while the expression of ferroportin mRNA was unchanged (Fig. 4D). Consistent with the increased intracellular retention of iron in macrophages in COPD lungs, increased ferritin staining in COPD lungs (Fig. 4B) compared to non-COPD lungs (Fig. 4C) localized to alveolar macrophages. IREB2 mRNA expression was significantly higher in GOLD 4 COPD than in non-COPD lungs (fold increase = 1.6, p = 0.045, Fig. 4E). We also looked for a correlation between the expression of these iron metabolism-related genes and emphysema severity.
We did not find any statistically significant correlation between the expression of these genes and the mean radiograph attenuation of the lung tissue (Table S2). To determine whether iron deposits and ferritin were present in the same macrophages in COPD lungs, we performed CD68/Perls-DAB and CD68/ferritin co-staining on consecutive GOLD 4 COPD lung sections (Fig. 5). Iron-positive macrophages exhibited strong ferritin staining (arrows). Conversely, iron-negative macrophages did not show any ferritin staining (within circle). These data suggest that the iron accumulation in alveolar macrophages in severe COPD lungs may, at least in part, be due to an increase in cellular iron retention mechanisms.

Expression of Iron Metabolism Genes in GOLD 1-3 COPD BAL Cells

The data presented in Figures 3 and 4 were obtained using mRNA from whole lung tissue samples. To better understand iron accumulation in alveolar macrophages of severe COPD lungs, we investigated the expression of iron metabolism genes in BAL cells (Fig. 6). By cytological analysis, nearly 90% of cells in the BAL fluid were macrophages (Table 2). Therefore, the data obtained from these studies may largely reflect gene expression by macrophages. Compared to non-smokers, transferrin expression by BAL cells from non-COPD smokers and COPD subjects was significantly decreased (fold change = 0.52 and 0.13 respectively, p = 5.7×10⁻⁴, Fig. 6A). In contrast, the transferrin receptor was more highly expressed in BAL cells from COPD subjects compared to non-smokers or non-COPD smokers (fold increase = 11 or 14 respectively, p = 6.7×10⁻³, Fig. 6B). When investigating mechanisms of iron retention, we found a significantly higher expression of ferritin by BAL cells from COPD subjects compared to non-smokers (fold increase = 23, p = 0.028, Fig. 6C).
Interestingly, whereas ferroportin expression appeared to be similar between BAL cells from non-smokers and COPD subjects, its expression was significantly higher in BAL cells from non-COPD smokers than in COPD subjects (fold increase = 7.5, p = 0.028, Fig. 6D). IREB2 expression did not differ between the COPD and the other groups (Fig. 6E). These data support those presented above and show that the expression of iron metabolism genes is altered in alveolar macrophages from COPD patients compared to non-COPD patients, which could result in the increased iron accumulation.

Iron Metabolism Gene Expression in BAL Cells Correlates with Airflow Limitation

Next, we investigated correlations between the expression of genes related to iron metabolism by BAL cells and airflow limitation indices (Fig. 7). Expression of transferrin positively correlated with both FEV1 and the FEV1/FVC ratio (Fig. 7A and F). Conversely, expression of the transferrin receptor and ferritin negatively correlated with airflow limitation (Fig. 7B-G and C-H). Finally, the expression of ferroportin and IREB2 did not significantly correlate with airflow limitation. Interestingly, expression of these iron related genes did not correlate with hemoglobin or CRP serum concentrations. IREB2 expression only correlated with subject age (Table S3). Similarly, expression of the studied iron related genes was not influenced by subject sex or the presence of chronic bronchitis (Table S4). The expression of some iron metabolism related genes was associated with dyspnea severity (transferrin), exacerbation rate (transferrin and ferritin) (Table S4) and smoking history, including smoking pack-years (transferrin and ferritin) (Table S3) and smoking status (transferrin, ferritin and ferroportin) (Table S5). Finally, in the COPD patients, no relation was found between the presence of an inhaled treatment and the expression of iron metabolism related genes (data not shown).
Globally, these data demonstrate that changes in macrophage expression of iron metabolism genes correlate with airflow limitation and COPD severity.

Discussion

The main findings of this study are that: 1) iron deposits are localized in macrophages in COPD lungs; 2) the quantity of lung iron deposits increases with COPD and emphysema severity; 3) expression of transferrin (involved in iron uptake) and of ferritin (involved in iron storage) is increased in severe COPD lungs, whereas ferroportin (involved in cellular excretion of iron) is unchanged; 4) in BAL cells from COPD subjects at GOLD stage 1-3, expression of the transferrin receptor and of ferritin is increased; and 5) indices of airflow limitation correlate with expression of transferrin, transferrin receptor and ferritin in BAL cells from healthy non-smokers and smokers, and COPD subjects. Consistent with our results, iron accumulation has been reported in the lungs of cigarette smokers and in severe emphysema, but the mechanisms sustaining the iron accumulation in lungs from COPD patients have never been explored. Iron uptake, storage and sequestration are of interest in COPD. Indeed, in other diseases associated with aging, including atherosclerosis, Parkinson's and Alzheimer's, iron deposits are postulated to contribute to excess oxidative stress, which is now recognized as important in the pathogenesis of COPD. Moreover, free iron accumulation in the lung may promote bacterial growth and influence COPD exacerbations. Therefore, the free iron pool has to be tightly controlled to protect the lung against the harmful properties of iron. Iron bound by transferrin is taken up into cells by the transferrin receptor, and is stored in cells bound to ferritin. Compared to control lungs, higher transferrin and ferritin expression was found in COPD lungs, and transferrin receptor expression was higher in BAL from COPD subjects than in healthy subjects.
Further, the expression of ferroportin, the only known iron exporter, was unchanged with COPD. These findings support active iron sequestration by alveolar macrophages in COPD lungs, which may represent a protective maneuver intended to control free iron and, therefore, the perverse effects of iron. In fact, O. Olakanmi et al. have reported that iron sequestration by alveolar macrophages decreases the formation of the highly toxic hydroxyl radical, and more recently, it has been shown that iron sequestration by macrophages protects A549 cells against iron toxicity. Consistent with these results, we did not find a spatial relationship between the accumulation of 8-hydroxyguanosine, a marker of nucleic acid oxidation, and iron staining in COPD lung specimens in this study (data not shown). While not conclusive, this suggests at least that the iron accumulation process in alveolar macrophages does not contribute locally to increased oxidative stress and may even decrease iron-induced oxidative stress. Interestingly, current cigarette smoke exposure was not found necessary to alter iron metabolism. Indeed, GOLD 4 COPD subjects in this study had ceased cigarette smoking for at least 6 months prior to transplant and processing of their lung tissue. The clinical relevance of these findings is supported by the correlations between expression levels of several iron pathway mRNAs in BAL cells and indices of airflow limitation in this study. Ferritin and transferrin receptor expression were increased in COPD subjects and correlated with a decrease in FEV1 or the FEV1/FVC ratio. IREB2 expression tended to be higher in BAL of COPD subjects and correlated negatively with the FEV1/FVC ratio, supporting a shift in iron metabolism in macrophages in COPD. These studies were limited to smokers, former smokers, and GOLD 2 and 3 COPD subjects, as there is high risk in obtaining BAL from GOLD 4 COPD subjects.
Another aspect of the study which may be a limitation is that distinct study sets were employed for the tissue-based and BAL-based experiments. Alternatively, the fact that iron metabolism was altered in separate cohorts of subjects may lend greater weight to the findings. Altering lung iron uptake in macrophages during cigarette smoke exposure in an animal model could be employed to test whether this impacts overall lung oxidative stress and the progression of alveolar enlargement. In vitro studies in macrophages could further test the relationships between cigarette smoke exposure, free iron, iron sequestration, ROS generation and oxidative stress. Overall, this study demonstrates that macrophages in the lungs of COPD subjects have increased iron uptake and storage, likely through increased expression of transferrin, transferrin receptor and ferritin, while ferroportin expression is unchanged. This should result in a net gain in iron sequestration in COPD lung macrophages, which we postulate is a protective mechanism intended to reduce free iron and its harmful effects. Similar to the robust anti-oxidant response reported in COPD lungs, it seems likely that iron sequestration may be a mechanism that is intended to limit, but fails to eliminate, progression of COPD. Further investigations are needed to elucidate the contribution of iron sequestration in alveolar macrophages to the complex pathophysiology of COPD.
Relationship between chewing tobacco, smoking, consuming alcohol and cognitive impairment among older adults in India: a cross-sectional study

Background

Physical aging increases sensitivity to the effects of substance use, elevating the risk for cognitive impairment among older adults. Since studies on the association of substance use with cognitive ability in later years are scant in India, we aimed to explore the factors associated with cognitive impairment, especially alcohol consumption, smoking, and chewing tobacco later in life.

Methods

The present research used nationally representative data from Building a Knowledge Base on Population Aging in India (BKPAI), conducted in 2011 across seven states of India (N = 9,453). The sample distribution along with the percentage distribution was calculated for cognitive impairment over the explanatory variables. To find the association between cognitive impairment and the explanatory variables, binary logistic regression models were estimated.

Results

About 16.5 percent of older adults in rural areas consumed smoked tobacco compared to 11.7 percent in urban areas. Nearly 23.7 percent of rural older adults consumed smokeless tobacco in comparison to 16 percent in urban areas. Alcohol consumption was higher among rural residents (7.9%) than their urban counterparts (6.7%). The prevalence of cognitive impairment was 62.8% and 58% among older adults from rural and urban areas, respectively. Older adults who smoked tobacco had a 24 percent significantly higher likelihood of cognitive impairment with reference to older adults who did not smoke. Moreover, older adults who consumed alcohol had a 30 percent significantly higher likelihood of cognitive impairment. It was also found that older adults who smoked along with consuming alcohol were at risk of worse cognitive outcomes than those who neither smoked nor drank alcohol or consumed either of them, unlike consuming smokeless tobacco only.
Conclusion

The encouragement of older people to stop smoking and smokeless tobacco use could be considered as part of a strategy to reduce the incidence of cognitive impairment. Further, appropriate measures should be taken for the detection of early stages of cognitive decline in older individuals, and efforts should be made to improve the availability and quality of care for dementing older adults.

Background

The longer people live, the greater the role of aging biology in determining both length and quality of life. Older adults are placed at increased risk of substance use disorders due to environmental and social factors associated with aging. Similarly, physical aging and commonly used medications can result in increased sensitivity to the effects of substance use, elevating the risk for cognitive impairment. In 2016, Alzheimer's disease and other dementias appeared for the first time, ranked fifth, among the World Health Organization (WHO) top 10 causes of death globally. A systematic review revealed that the proportion of dementia, characterized as a severe decline in cognitive functioning, and the prevalence of cognitive impairment are increasing in developing countries. Moreover, the study observed that although it is difficult to separate the effects of normal aging from those of disease, it appears that intellectual decline in some cognitive domains is an inevitable consequence of aging. Further, the literature shows that the link between substance use disorders and objective cognitive outcomes has been well covered. However, whether or not alcohol use is associated with cognition among older individuals remains an unresolved issue.

Alcohol drinking and tobacco use in later life

One study found that heavy alcohol consumption in older adults has been associated with a faster decline in cognition in late middle age, particularly in men.
At the same time, evidence shows that consuming alcohol has a positive effect on several health outcomes. Interestingly, studies from developed as well as developing countries have shown that consuming alcohol at a mild to moderate level acts as a protective factor against cognitive decline among older adults, compared to excessive consumption of alcohol. Although a number of studies have shown inconsistent results, light to moderate alcohol use among older people in some studies appeared to reduce the risk of dementia and Alzheimer's disease. However, other researchers have found that alcohol had a negative cognitive impact even at moderate levels. Many authors have argued about the causal role of substance use in exacerbating psychotic reactions. Further, studies have also shown that in psychotic patients, alcohol drinking and smoking may aggravate thought disturbances. Similarly, as a result of increased alcohol dependence, family life is harmed and relationships with other family members as well as with neighbours become strained due to inappropriate behavior, which may lead to poor health outcomes, including cognition. Besides, a history of alcohol dependence, even when it is not associated with current heavy alcohol use, can also be associated with persistent cognitive impairment. Furthermore, a recent study found that between 2010 and 2017, alcohol consumption in India increased by 38 percent, from 4.3 to 5.9 liters per adult per year. On the other hand, according to the Global Adult Tobacco Survey India (GATS India, 2016-2017), 41.4 percent of Indian adults aged 65 years and older currently use tobacco (smoked and/or smokeless). The study also observed that tobacco use has been socially accepted among adults and older adults in most Indian societies and is a source of social interaction and recreational pursuit. Unlike western countries, India has a higher number of smokeless tobacco consumers than smokers.
Studies in India showed that smoking and chewing tobacco significantly correlated with the prevalence of coronary heart disease and hypertension, and that higher quality of life could be achieved by avoiding such habits. It has also been demonstrated that tobacco use arguably accounts for far more medical disability and mortality in the older population than abuse of all other substances combined. These findings serve to determine the effects of regular smoking on adverse health outcomes. Although daily smoking seemed to be associated with increased failures, it is difficult to disentangle the effects of cumulative use of tobacco over a lifetime on cognitive ability among older adults. There is also great variation in levels of alcohol consumption, smoking and chewing tobacco and substance disorders across different demographic and socioeconomic groups in India. For instance, a comparative study of Bangladesh, India, and Nepal revealed that the prevalence of smokeless tobacco use is higher among adult men in India than among women. Previous work on mental, neurological and substance use disorders has posited that low education and poverty were associated with a higher occurrence of dementia in both China and India. Again, people from the lowest socioeconomic groups were prone to report higher rates of smoking than their counterparts. Studies also suggest that populations with the poorest income status who increasingly purchase smokeless tobacco use up the scarce resources available to them, which may have important indirect effects on their overall health. Additionally, as a result of huge socioeconomic variations in the population, a recent study in India also found that several socio-economic and health factors, such as increasing age, no schooling, and bedridden status for the past six months, were significantly associated with higher cognitive impairment among the older population.
However, studies on the association of substance use with cognitive impairment among older adults are scant, especially in a country with low educational attainment. Thus, we aimed to explore the factors associated with the decline in cognitive functioning, especially alcohol consumption, smoking, and chewing tobacco. The study also sought to determine whether smoking/chewing tobacco and alcohol consumption together are associated with greater impairments in cognitive functioning in old age. The study hypothesizes that: 1) chewing tobacco, alcohol consumption, and smoking are positively associated with cognitive impairment in an aging population; and 2) there is a significant interaction between smoked/smokeless tobacco use and alcohol consumption on cognitive impairment among older adults.

Data

The present research used data from Building a Knowledge Base on Population Aging in India (BKPAI), a nationally representative survey conducted in 2011 across seven states of India. The survey was sponsored by the Tata Institute of Social Sciences (TISS), Mumbai, the Institute for Economic Growth (IEG), Delhi, the Institute for Social and Economic Change (ISEC), and UNFPA (United Nations Population Fund), New Delhi. The survey gathered information on various socio-demographic, economic and health aspects of aging among households with members aged 60 years and above. Data were collected from seven states representing the various regions of India: Punjab and Himachal Pradesh represent the northern part, Kerala and Tamil Nadu the southern part, Orissa and West Bengal the eastern part, and Maharashtra the western part of the country. Being a survey of older adults, the sample size was equally split between urban and rural areas, irrespective of the proportion of the urban and rural population.
Eighty Primary Sampling Units (PSUs), i.e. villages or urban wards (40 urban and an equal number rural), with 16 households having an older person per PSU, were covered in the survey. In all, 9850 older adults aged 60 years and above were interviewed from 8329 households. After dropping missing data (397 older adults) and outliers, the sample included in the analysis was 9453 older adults.

Variable description

Outcome variable

Cognitive impairment was measured by the number of words recalled. To measure cognitive impairment, a scale of 0 to 10 was prepared; a higher score represents lower cognitive impairment and vice versa. The words used for testing cognitive impairment were Bus, House, Chair, Banana, Sun, Bird, Cat, Saree, Rice, and Monkey. A score of five or more words was recoded as 0 "low", representing lower cognitive impairment, and a score of four or less was recoded as 1 "high", representing higher cognitive impairment. High cognitive impairment represents cognitive disability among older adults in the present study. Place of residence was recoded as rural and urban, and the study was stratified by rural and urban place of residence. However, during multivariate analysis, place of residence was used as a control variable to obtain the adjusted effects.

Explanatory variables

There were three main explanatory variables for the study: 1) smoking tobacco, recoded as 0 "no" and 1 "yes"; 2) chewing tobacco, recoded as 0 "no" and 1 "yes"; and 3) alcohol consumption, recoded as 0 "no" and 1 "yes". The questions assessed 'ever use of smoking tobacco', 'chewing tobacco' and 'alcohol consumption'. Age was recoded as 60-69, 70-79, and 80+ years. Sex was recoded as men and women. Educational status was recoded as no schooling, below five years of schooling, 6-10 years of schooling, and 11 and above years of schooling. Working status was recoded as "yes", "no" and "retired".
Marital status was recoded as currently in union and not in union (including never married, widowed, divorced and separated). Five questions on community involvement were asked and used to create a variable measuring social capital. The score developed ranges from 0 to 5; a score of 1 to 5 was recoded as 0 "community involvement" and a score of 0 was recoded as 1 "no community involvement". The other question, "do you have someone you can trust and confide in?", was recoded into binary form as 0 "yes" and 1 "no". Living arrangement was recoded as "living alone and with spouse" and "others". Self-rated health had a scale of 1 to 5 ("poor to excellent") and was recoded as 0 "good" (representing good, very good, and excellent) and 1 "poor" (representing poor or fair). Chronic morbidity was recoded as 0 "no" and 1 "yes". Ability to do activities of daily living had a scale of 0 to 6, where a higher score represents higher independence. A score of 6 was recoded as 0 "high", representing complete independence, and 5 or less was recoded as 1 "low", representing not being completely independent in activities of daily living (Cronbach's alpha: 0.93). Ability to do instrumental activities of daily living had a scale of 0 to 8, with a higher score representing higher independence. A score of 6+ was recoded as 0 "high", representing high IADL, and a score of 5 or less was recoded as 1 "low", representing low IADL. The ADL and IADL were calculated on the framework proposed by the International Classification of Functioning, Disability, and Health (ICF). The Activities of Daily Living (ADL) is an umbrella term relating to self-care, comprising those activities that people undertake routinely in their everyday life. The activities can be subdivided into personal care (ADL) and domestic and community activities (Instrumental ADL, IADL).
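The binary recodings described above are simple threshold rules. As a sketch, they can be written as small functions; the cut-offs are taken from the text, while the function and variable names are illustrative:

```python
# Sketch of the binary recodings described in the text. Cut-off values
# come from the paper; names are illustrative, not the survey's codebook.

def cognitive_impairment(words_recalled):
    """0-10 word-recall score -> 0 'low' (5+ words) or 1 'high' (4 or fewer)."""
    return 0 if words_recalled >= 5 else 1

def adl_category(adl_score):
    """0-6 ADL score -> 0 'high' (complete independence, 6) or 1 'low' (5 or less)."""
    return 0 if adl_score == 6 else 1

def iadl_category(iadl_score):
    """0-8 IADL score -> 0 'high' (6+) or 1 'low' (5 or less)."""
    return 0 if iadl_score >= 6 else 1

print(cognitive_impairment(5), cognitive_impairment(4))  # 0 1
print(adl_category(6), adl_category(5))                  # 0 1
print(iadl_category(7), iadl_category(3))                # 0 1
```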
The ADL and IADL have emerged as the most common approaches in empirical assessments of functionality among older adults and are considered befitting to the ICF framework. Caste was recoded as non-Scheduled Caste/Scheduled Tribe and Scheduled Caste/Scheduled Tribe (SC/ST). Religion was recoded as Hindu, Muslim, and others. Wealth status was computed using 30 household assets and was divided into 5 quintiles: poorest, poorer, middle, richer, richest. The wealth index was then divided into poor ("poorest/poorer"), middle and rich ("richer/richest"). Data for all seven states mentioned in the Data section were available.

Statistical analysis

Using STATA 14, the sample distribution along with the percentage distribution was calculated for cognitive impairment over the explanatory variables. To find the association between cognitive impairment and the explanatory variables, a binary logistic regression model was used. The outcome variable was cognitive impairment, coded as "low" and "high", and the main explanatory variables were consumption of tobacco (smoking and chewing) and consumption of alcohol. The binary logistic regression model is usually put into the compact form

ln[p / (1 − p)] = β0 + βX + ε

where p is the probability of high cognitive impairment. The parameter β0 estimates the log odds of cognitive impairment for the reference group, while β estimates, by maximum likelihood, the differential log odds of cognitive impairment associated with the set of predictors X, as compared to the reference group, and ε represents the residual in the model. The multivariate analysis had several models to explain the unadjusted and adjusted estimates. Model-1 was used to provide the independent effect of smoking tobacco, chewing tobacco, and alcohol consumption on cognitive impairment of older adults. Model-2 was an adjusted model (full-effect model) providing adjusted estimates; it was adjusted for socio-economic and background characteristics.
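The correspondence between the model's coefficients and the odds ratios reported in the Results can be illustrated in miniature. With a single binary predictor the logistic model is saturated, so the maximum-likelihood estimates have a closed form; the counts below are hypothetical, not the survey's data:

```python
import math

# Hypothetical illustrative counts (NOT the survey's actual data):
# high cognitive impairment by smoking status.
impaired_smoker, total_smoker = 140, 200        # smokers
impaired_nonsmoker, total_nonsmoker = 110, 200  # non-smokers

def log_odds(impaired, total):
    """Log odds of impairment within a group."""
    p = impaired / total
    return math.log(p / (1 - p))

# For logit(p) = b0 + b1*X with one binary predictor X (smoking),
# the maximum-likelihood estimates are simply:
b0 = log_odds(impaired_nonsmoker, total_nonsmoker)   # reference-group log odds
b1 = log_odds(impaired_smoker, total_smoker) - b0    # differential log odds

odds_ratio = math.exp(b1)  # exp(b1) is the unadjusted odds ratio
print(round(b0, 3), round(b1, 3), round(odds_ratio, 3))
```

In the adjusted models the same interpretation holds coefficient by coefficient: exponentiating a coefficient gives the odds ratio for that predictor, holding the control variables fixed.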
Model-3, Model-4, and Model-5 provide interaction effects on cognitive impairment among older adults for those who smoke tobacco and consume alcohol, those who chew tobacco and consume alcohol, and those who smoke and chew tobacco, respectively.

Results

Table 1 represents the socio-economic profile of older adults in India. About 16.5 percent of older adults in rural areas consumed smoked tobacco compared to 11.7 percent in urban areas. Nearly 23.7 percent of rural older adults consumed chewing tobacco in comparison to 16 percent in urban areas. About 11.4 percent of older adults in rural areas were in the 80+ years age group, whereas in urban areas the percentage was 9.3 percent. Nearly 57.8 percent of older adults in rural areas had no education, in comparison to 32 percent in urban areas. About 6.1 percent of older adults in rural areas were retired, compared to 15.3 percent in urban areas. Nearly 38.1 percent of older adults in rural areas were not in a marital union, in comparison to 43.2 percent in urban areas. About 22 percent of older adults from rural areas had no community involvement, whereas the proportion was lower among urban residents (17.7%). Nearly 18.6 percent and 13 percent of older adults from rural and urban areas, respectively, had no one they could trust. Nearly 23.2 percent of older adults in rural areas lived alone or with only their spouse, whereas the proportion was lower for urban residents (19.5%). About 56.8 percent and 51.5 percent of older adults from rural and urban areas, respectively, reported poor self-rated health. About 7.7 percent and 59.5 percent of older adults from rural areas, and 6.6 percent and 48.6 percent from urban areas, reported low ADL and IADL respectively. Table 2 represents the percentage distribution of cognitive impairment among older adults by their background characteristics. In urban areas, older adults who smoked tobacco had a higher prevalence of cognitive impairment (58.4%).
Older adults who consumed chewing tobacco had a higher prevalence of cognitive impairment (rural: 68.2% and urban: 62.3%). Alcohol consumption had a positive association with cognitive impairment among older adults from urban areas (59.6%). Older adults aged 80 years and above had a higher prevalence of cognitive impairment (rural: 79.1% and urban: 76.5%). Older women had a higher prevalence of cognitive impairment (rural-69.

Discussion

The increased life expectancy has led populations to use substances for longer, until the compounding effect of substance use and declining health leads to significant morbidity and mortality. Similarly, smoking and chewing tobacco increase morbidity and result in bad health outcomes among both adults and older individuals. In agreement with this, the present study also confirms the first hypothesis we tested, that alcohol consumption and tobacco use are positively associated with cognitive impairment in later years. The natural aging process brings notable cognitive challenges, and cognitive impairment adversely affects wellbeing and is a strong predictor of chronic disease progression and subsequent mortality. On the other hand, while substance use generally declines in later adulthood, even small amounts of alcohol use can have serious consequences. Our results suggest that risk factors for cognitive impairment among older adults include higher age and female sex, while a higher educational level, working status and community involvement are protective factors. Compared to the results from studies conducted in the urban population in India, the rates of cognitive impairment in rural areas are substantially higher, considering that a rural location is an established risk factor for dementia in Asian countries. Also, the regression results show that lower educational attainment is linked to lower cognitive ability; more years of education translate into a greater cognitive reserve.
Older men and women who are not currently in a marital union were found to have lower cognitive ability compared to those who are currently in a marital union. This is consistent with earlier studies suggesting that older adults in a stressful marital relationship or in widowhood may be likely to engage in health-related substance use behaviours due to increased stress from loss of financial and/or emotional resources, and may face more severe cognitive impairments. Similar to the patterns of cognitive impairment observed in Western studies that found an association of cognitive functioning with chronic diseases and somatic comorbidities, our results also found that cognitive impairment was significantly associated with chronic morbidity. In concordance with previous studies in India, our study also observed that a substantial proportion of older adults with higher functional ability (ADL and IADL) reported lower cognitive impairment, supporting the concept of successful cognitive aging. A couple of cohort studies also showed a positive association between self-rated health and current cognitive function among older adults and found poor self-rated health to be a predictor of higher cognitive impairment at advanced ages. Furthermore, a recent study also found that older adults with chronic kidney disease and poor functional and general health status were at increased risk for cognitive impairment. Consistently, the results of the current study showed that those who reported poor self-rated health had lower cognitive ability than their counterparts. Our findings also support the notion that smoking or chewing tobacco seems to be a risk factor for cognitive impairment. The current analysis found that smokers were more likely to be cognitively impaired than never smokers after adjusting for control variables such as age, sex, education, working status, and other social and health-related variables.
Further, this study also observed that alcohol use is positively associated with cognitive impairment. This is consistent with the view that alcohol consumption and substance use, which contribute to vascular diseases, could increase the risk for dementia and cognitive impairment. On the other hand, it is also likely that individuals who maintain cognitive function display more adaptive qualities and abstain from substance use. Besides, the potential confounding effects of depressive symptoms, prescription drugs, and other substances are not taken into consideration in the current analysis. And finally, we cannot say for certain that maintaining cognitive function is causally related to not smoking or not consuming alcohol. In the final model with interaction analyses, a significant interaction effect between the use of smoked tobacco and alcohol consumption on cognitive impairment was found. The older adults who smoked and consumed alcohol exhibited greater cognitive deficits than those who only smoked or only consumed alcohol. Surprisingly, the odds of cognitive impairment were higher among those who used smokeless tobacco and consumed alcohol than among those who consumed alcohol only, unlike those who used smokeless tobacco only. Hence, though the results were not significant, the interaction effect also showed that smokeless tobacco is a predictor of cognitive impairment if the person consumed alcohol and used smokeless tobacco, or used both forms of tobacco, that is, smoked and smokeless tobacco jointly, leading to the partial confirmation of our second hypothesis. However, different mechanisms may underlie the adverse effects of heavy drinking and the beneficial effects of light to moderate drinking, and such mechanisms may also partly explain why certain differences are reported in the decline in cognitive ability in later years. The study suffers from certain limitations that need to be mentioned.
Firstly, the results cannot be generalized to the present scenario, as the survey was conducted in 2011. Secondly, seven states of India representing its different regions were included to provide estimates at the national level; therefore, one should be cautious while generalizing them for pan-India. Another limitation of the study is that, since it was cross-sectional rather than longitudinal, the findings should be interpreted with caution and cannot establish a causal direction for the relationship between substance use and cognitive impairment. Moreover, given the benefits of moderate alcohol consumption and the risks of excessive drinking and smoking, the interplay between alcohol use, other substance use, and cognitive ability requires further study to formulate a more crystallized understanding of the effect of substance use on cognitive health outcomes. Finally, the dose, duration, and consumption patterns of alcohol and tobacco were not taken into account in the current study. Apart from these limitations, however, the BKPAI survey is one of the few surveys in the Indian context that provides such reliable estimates at the national level. Moreover, it is one of the most recent surveys in the Indian scenario that provides estimates of substance use among older adults and also reports their cognitive functioning ability.

Conclusions

The findings of this study have implications for societies that are aging and consuming alcohol or chewing/smoking tobacco. When planning geriatric health care for older adults, priority must be given to the oldest-old, women, the illiterate, and those older adults who are socially less involved and have poor health outcomes, as they are more vulnerable to impaired cognitive function. Thus, encouraging older people to stop using smoked and smokeless tobacco could be considered part of a strategy to reduce the incidence of cognitive impairment.
Moreover, although the benefit of earlier diagnosis is well established, most older adults with cognitive impairment do not receive a diagnosis, and if they do, it happens late in their disease course. Hence, appropriate measures should be taken for the detection of early stages of cognitive decline in older individuals, and efforts should be made to improve the availability and quality of care for older adults with dementia. Furthermore, preventive measures for cognitive impairment should focus on countering the risk factors suggested by the current evidence, such as alcohol drinking, smoking, and chewing tobacco, among other factors. Other measures include increased taxation of tobacco products and bans on their advertisement. Meanwhile, more research is warranted to identify the modifiable risk factors for declining cognitive ability among older adults.
Obesity in the Otsuka Long Evans Tokushima Fatty Rat: Mechanisms and Discoveries

Understanding the neural systems underlying the controls of energy balance has been greatly advanced by identifying the deficits and underlying mechanisms in rodent obesity models. The current review focuses on the Otsuka Long Evans Tokushima Fatty (OLETF) rat obesity model. Since its recognition in the 1990s, significant progress has been made in identifying the causes and consequences of obesity in this model. Fundamental is a deficit in the cholecystokinin (CCK)-1 receptor gene resulting in the absence of CCK-1 receptors in both the gastrointestinal tract and the brain. OLETF rats have a deficit in their ability to limit the size of meals and, in contrast to CCK-1 receptor knockout mice, do not compensate for this increase in the size of their spontaneous meals, resulting in hyperphagia. Prior to becoming obese and in response to pair feeding, OLETF rats have increased expression of neuropeptide Y (NPY) in the compact region of the dorsomedial hypothalamus (DMH), and this overexpression contributes to their overall hyperphagia. Study of the OLETF rats has revealed important differences in the organization of the DMH in rats and mice and elucidated previously unappreciated roles for DMH NPY in energy balance and glucose homeostasis.

Keywords: cholecystokinin, neuropeptide Y, CCK-1 receptor, dorsomedial hypothalamic nucleus, food intake, obesity

INTRODUCTION

Rodent obesity models have been critical to our understanding of the neural systems involved in the controls of food intake and body weight. Dissection of the genetics underlying the obesity of ob/ob and db/db mice led not only to the discovery of leptin but also contributed greatly to the understanding of multiple hypothalamic peptide systems involved in energy balance. Another example of a genetic model that has increased our understanding of the neural systems involved in energy balance is the Otsuka Long Evans Tokushima Fatty (OLETF) rat. This rat obesity model was derived from a spontaneous obesity in an outbred colony of Long Evans rats. OLETF and control Long Evans Tokushima Otsuka (LETO) lines were then developed by selective breeding. OLETF rats were initially studied primarily as a model of late-onset type 2 diabetes, as older OLETF rats were not only obese but also hyperglycemic and insulin resistant. Characterization of overall pancreatic function in OLETF rats demonstrated the absence of a pancreatic amylase response to administration of the brain-gut peptide cholecystokinin (CCK).
Further studies revealed that OLETF rats had a >6 kbp deletion in the gene for the CCK-1 receptor that spanned the first and second exons and resulted in the absence of expression of a functional CCK-1 receptor. Thus, the OLETF rat is a CCK-1 receptor knockout model.

CHOLECYSTOKININ AND CHOLECYSTOKININ RECEPTORS

Cholecystokinin is a gut/brain peptide that plays a variety of roles. Gut CCK is released from I cells in the upper intestine in response to the intraluminal presence of nutrients and plays a variety of roles in overall digestive function. Exogenously administered and endogenously released CCK slow gastric emptying, modulate intestinal motility, and stimulate gall bladder and pancreatic secretions. CCK also plays a role in the control of food intake by contributing to meal termination. Exogenously administered CCK reduces food intake and does so by reducing meal size. A role for endogenously released CCK in the control of meal size is demonstrated by the ability of CCK receptor antagonists to increase food intake by prolonging eating, increasing meal duration and size. The primary mechanism of action of CCK in the inhibition of food intake is paracrine, acting on local vagal afferent terminals in close apposition to the intestinal I cells. CCK receptors are expressed in vagal afferent cell bodies in the nodose ganglion and transported to abdominal vagal endings. CCK both directly activates vagal afferent fibers and sensitizes vagal fibers to signals transmitting information about gastric and intestinal luminal volume. In the brain, CCK acts as a neurotransmitter/neuromodulator. CCK-producing neurons are widely distributed in the brain, and CCK neurons have been reported to be the most ubiquitous of all peptidergic neurons. Cell bodies are found throughout all layers of the cerebral cortex and are widely distributed throughout olfactory and limbic systems and in multiple hypothalamic nuclei.
In the midbrain, CCK cell bodies are found in the substantia nigra, the ventral tegmental area, and the raphe nucleus, and CCK modulates both dopaminergic and serotonergic function. There are two CCK receptor subtypes. These were initially identified based on their relative affinity for various CCK fragments and analogs. CCK-1 receptors require the sulfated tyrosine, and these were originally characterized in rat and guinea pig pancreas. CCK-1 receptors exist in both low-capacity, high-affinity and high-capacity, low-affinity states. CCK-2 receptors have high affinity for unsulfated CCK and various CCK fragments and were initially characterized in brain. Both receptors are members of the G protein-coupled receptor superfamily. As well as being found in the pancreas and gall bladder, CCK-1 receptors are expressed in the nodose ganglion (and transported in vagal afferent fibers) and in a number of specific brain sites, including the dorsomedial hypothalamus (DMH). There are important species-specific differences in the expression patterns of CCK-1 and CCK-2 receptors, including the expression of CCK-2 rather than CCK-1 receptors in human pancreas. However, the expression of CCK-1 receptors in vagal afferent neurons and in specific brain sites appears to be similar in rat and man (but not in the mouse, as will be discussed later). The satiety actions of CCK depend on interactions with CCK-1 receptors. Sulfated CCK-8 or sulfated longer forms (i.e., CCK-33, CCK-58) inhibit food intake in a dose-related fashion, while unsulfated CCK or shorter CCK fragments do not. Furthermore, specific CCK-1 antagonist administration increases food intake while CCK-2 antagonists do not. This pharmacological specificity has been demonstrated across multiple species.

CHARACTERIZATION OF THE HYPERPHAGIA IN OLETF RATS

The initial discovery that OLETF rats had a deletion in the gene for the CCK-1 receptor led to experiments examining whether CCK could inhibit their food intake.
OLETF rats lacking functional CCK-1 receptors were shown to be insensitive to the feeding inhibitory actions of exogenously administered CCK. Characterization of their daily food intake revealed that OLETF rats ate meals that were about twice as large as those of LETO controls and, in response to this increase in the size of their meals, they ate fewer meals. However, the decrease in meal frequency was not sufficient to normalize their food intake, resulting in a chronic hyperphagia or overconsumption (Figure 1). The hyperphagia is evident even prior to weaning. In independent ingestion tests, in which rat pups consume milk off the floor of a test chamber, OLETF pups as young as 2 days of age consume significantly more sweetened milk than age-matched LETO controls. In tests assessing nursing behavior, OLETF pups also gain more weight during a suckling bout, indicative of increased intake. The food intake of OLETF rats is also characterized by higher preferences for high fat, sucrose, and other sweet tastes. This can be demonstrated in both real feeding and sham feeding paradigms, implicating taste mechanisms in the preferences. Pair feeding experiments, in which the daily intake of OLETF rats was limited to that of paired LETO control rats, revealed that the obesity in the OLETF rats was completely attributable to their hyperphagia. Pair feeding completely normalized their rates of body weight gain (Figure 2) as well as the size of their fat mass and their glucose regulation. Thus, the OLETF rat is an obesity model of disordered food intake.

CHARACTERIZATION OF HYPOTHALAMIC FUNCTION IN OLETF RATS

The lack of compensation for the increase in meal size in OLETF rats requires explanation. Chronic administration of CCK at meal onset results in chronic decreases in meal size but an increase in meal frequency such that overall food intake is not affected. These data suggest a role for CCK in meal termination, but not in overall food intake.
Knockout of CCK-1 receptors in the mouse produces results that are consistent with this interpretation. CCK-1 knockout mice have increased meal size, but the decrease in meal frequency compensates for this so that CCK-1 KO mice have normal body weight. Why does the absence of CCK-1 receptors result in obesity in the OLETF rat, but not in a mouse KO? Part of the answer comes from the examination of hypothalamic signaling in the OLETF rat. While mRNA expression for arcuate POMC and neuropeptide Y (NPY) was appropriate in obese or lean pair-fed OLETF rats, NPY expression in the compact subregion of the DMH was significantly elevated in pair-fed OLETF rats and normalized in ad lib-fed rats. These data suggested the possibility that elevations in DMH NPY might be driving the hyperphagia of OLETF rats. Analyses of NPY expression levels in juvenile OLETF rats prior to obesity were consistent with such an explanation. Five-week-old pre-obese OLETF rats had greatly elevated DMH NPY expression. Importantly, the same neurons expressing NPY in the DMH also expressed CCK-1 receptors, representing one of the populations of brain CCK-1 receptors identified in the original autoradiography studies. Furthermore, direct injection of CCK into the DMH both reduces food intake and downregulates NPY mRNA expression without affecting ARC NPY expression, suggesting a role for CCK in modulating DMH NPY. In the absence of CCK-1 receptors, DMH NPY is upregulated. An examination of NPY expression in the mouse revealed that although NPY expression was evident in the ARC, it was not evident in the compact region of the DMH. NPY receptors are evident in the dorsal and ventral medial subregions of the DMH, and NPY expression increases in response to exposure to a high-fat diet. A role for these in the lasting hyperphagia that occurs in diet-induced obesity has been suggested.
In contrast to rats, the mouse DMH does not contain CCK-1 receptors, as neither binding activity nor mRNA expression for CCK-1 receptors is detected in the DMH. These data have led to the hypothesis that the obesity in OLETF rats results from a combination of disordered satiety signaling due to the lack of vagal afferent CCK-1 receptors and an upregulation of DMH NPY that prevents complete compensation for the increased meal size. The CCK-1 receptor knockout mouse has similar deficits in the control of meal size but, in the absence of altered DMH signaling, appropriately compensates for chronically consuming larger meals. This hypothesis was directly tested in the rat using viral-mediated knockdown of DMH NPY in OLETF rats. Forty percent knockdown of DMH NPY mRNA expression in response to bilateral administration of an AAV expressing short hairpin RNA (AAVshNPY) significantly reduced the food intake and weight gain trajectory of OLETF rats. The alteration in food intake was expressed as a partial reduction in the size of consumed meals, such that the meal size deficit in OLETF rats with DMH injections of AAVshNPY was similar to the meal size deficits in CCK-1 receptor KO mice. DMH NPY overexpression in control rats had the opposite effect. Overexpressing DMH NPY resulted in increased food intake, especially on a high-fat diet, and significantly elevated weight gain.

EXERCISE AND OLETF OBESITY

The study of the OLETF rat has led to a number of important insights about interactions between exercise and food intake and the role of DMH signaling in energy balance. Providing OLETF rats access to a running wheel results in a normalization of their body weight and prevention of hyperinsulinemia. This is not simply due to the increased energy expenditure, as their daily food intake is also greatly reduced by running wheel access and their meal patterns are normalized. The long-term effects of running wheel activity depend upon the timing of access.
In adult OLETF rats, running wheel access normalizes food intake and body weight, but at the cessation of access, food intake greatly increases and body weight returns to the levels of comparably aged OLETF rats that did not have access to running wheels. Thus, the effects of exercise are temporary and only evident during the time of running wheel access. In contrast, providing access to running wheels for a 6-week period beginning at 8 weeks of age had long-lasting effects on both food intake and body weight in OLETF rats. Although food intake and body weight increased somewhat when access to the running wheels was stopped, OLETF rats did not regain weight to the levels of control OLETF rats without running wheel access. Effects of exercise on other rodent obesity phenotypes have now been demonstrated as well. The age-dependent aspect of the effects of exercise may depend on epigenetic effects in pathways undergoing maturation, increasing the possibility of lasting effects when the exposure occurs at a younger age.

NOVEL ACTIONS OF DMH NPY

The observation of altered DMH NPY signaling in the OLETF rat, and of how DMH knockdown rescues the obese phenotype, has led to extensive studies of the roles of DMH NPY in various aspects of energy balance. As mentioned above, overexpression of DMH NPY leads to increased food intake and body weight, especially when rats are presented with a high-fat diet. These data led to a more careful examination of the consequences of altered DMH NPY signaling. Knockdown of NPY in the DMH of normal weight Sprague-Dawley rats has been demonstrated to reduce the size of fat depots and ameliorate high-fat diet-induced hyperphagia and obesity. Furthermore, DMH NPY knockdown resulted in the development of brown adipocytes in inguinal white adipose tissue, characterized by increased uncoupling protein 1 expression. DMH NPY knockdown also increased energy expenditure and enhanced the thermogenic response to a cold environment.
This knockdown also enhanced insulin sensitivity. These data identified novel roles for DMH NPY in modulating adipose tissue, thermogenesis, insulin sensitivity, and energy expenditure. Further work has revealed a novel modulator of DMH NPY signaling. Gene expression profiling of the DMH in response to exercise revealed elevated expression of transthyretin (TTR), best known as a blood and cerebrospinal fluid transporter of thyroxine and retinol. To test the hypothesis that TTR may play a role in modulating energy balance-related signaling in the DMH, we examined the effects of brain TTR on food intake and body weight and further determined the hypothalamic signaling that may underlie its feeding effect in rats. We found that icv administration of TTR in normal growing rats decreased food intake and body weight. Furthermore, TTR administration decreased NPY levels in the DMH. Chronic icv infusion of TTR in OLETF rats reversed their hyperphagia and obesity. Overall, these studies examining factors that might modulate DMH NPY demonstrated a novel anorectic action of central TTR in the control of energy balance, providing a potential novel target for obesity treatment.

SUMMARY

Work with the OLETF rat has not only focused on identifying the mechanisms underlying its obesity but has also served as a vehicle for uncovering multiple novel mechanisms involved in the overall controls of energy balance.
/*
* #%L
* BroadleafCommerce Profile
* %%
* Copyright (C) 2009 - 2013 Broadleaf Commerce
* %%
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
* #L%
*/
package org.broadleafcommerce.profile.core.domain;
import java.util.HashMap;
import java.util.Map;
import javax.persistence.CascadeType;
import javax.persistence.CollectionTable;
import javax.persistence.Column;
import javax.persistence.ElementCollection;
import javax.persistence.Entity;
import javax.persistence.EntityListeners;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Inheritance;
import javax.persistence.InheritanceType;
import javax.persistence.JoinColumn;
import javax.persistence.Lob;
import javax.persistence.ManyToOne;
import javax.persistence.MapKeyColumn;
import javax.persistence.Table;
import javax.persistence.UniqueConstraint;
import org.broadleafcommerce.common.copy.CreateResponse;
import org.broadleafcommerce.common.copy.MultiTenantCopyContext;
import org.broadleafcommerce.common.presentation.AdminPresentation;
import org.broadleafcommerce.common.presentation.AdminPresentationClass;
import org.broadleafcommerce.common.presentation.AdminPresentationMap;
import org.broadleafcommerce.common.presentation.PopulateToOneFieldsEnum;
import org.broadleafcommerce.common.presentation.override.AdminPresentationMergeEntry;
import org.broadleafcommerce.common.presentation.override.AdminPresentationMergeOverride;
import org.broadleafcommerce.common.presentation.override.AdminPresentationMergeOverrides;
import org.broadleafcommerce.common.presentation.override.PropertyType;
import org.broadleafcommerce.common.time.domain.TemporalTimestampListener;
import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;
import org.hibernate.annotations.Cascade;
import org.hibernate.annotations.GenericGenerator;
import org.hibernate.annotations.MapKeyType;
import org.hibernate.annotations.Parameter;
import org.hibernate.annotations.Type;
@Entity
@EntityListeners(value = { TemporalTimestampListener.class })
@Inheritance(strategy = InheritanceType.JOINED)
@Table(name = "BLC_CUSTOMER_PAYMENT", uniqueConstraints = @UniqueConstraint(name = "CSTMR_PAY_UNIQUE_CNSTRNT", columnNames = { "CUSTOMER_ID", "PAYMENT_TOKEN" }))
@AdminPresentationMergeOverrides(
{
@AdminPresentationMergeOverride(name = "billingAddress.addressLine1", mergeEntries =
@AdminPresentationMergeEntry(propertyType = PropertyType.AdminPresentation.PROMINENT, booleanOverrideValue = true)),
@AdminPresentationMergeOverride(name = "billingAddress.", mergeEntries = {
@AdminPresentationMergeEntry(propertyType = PropertyType.AdminPresentation.TAB, overrideValue = CustomerPaymentImpl.Presentation.Tab.Name.BILLING_ADDRESS),
@AdminPresentationMergeEntry(propertyType = PropertyType.AdminPresentation.TABORDER, intOverrideValue = CustomerPaymentImpl.Presentation.Tab.Order.BILLING_ADDRESS)
})
})
@AdminPresentationClass(populateToOneFields = PopulateToOneFieldsEnum.TRUE)
public class CustomerPaymentImpl implements CustomerPayment, AdditionalFields {
private static final long serialVersionUID = 1L;
@Id
@GeneratedValue(generator = "CustomerPaymentId")
@GenericGenerator(
name = "CustomerPaymentId",
strategy = "org.broadleafcommerce.common.persistence.IdOverrideTableGenerator",
parameters = {
@Parameter(name = "segment_value", value = "CustomerPaymentImpl"),
@Parameter(name = "entity_name", value = "org.broadleafcommerce.profile.core.domain.CustomerPaymentImpl")
})
@Column(name = "CUSTOMER_PAYMENT_ID")
protected Long id;
@ManyToOne(cascade = { CascadeType.PERSIST, CascadeType.MERGE }, targetEntity = CustomerImpl.class, optional = false)
@JoinColumn(name = "CUSTOMER_ID")
@AdminPresentation(excluded = true)
protected Customer customer;
@ManyToOne(cascade = { CascadeType.PERSIST, CascadeType.MERGE }, targetEntity = AddressImpl.class, optional = true)
@JoinColumn(name = "ADDRESS_ID")
protected Address billingAddress;
@Column(name = "PAYMENT_TOKEN")
@AdminPresentation(friendlyName = "CustomerPaymentImpl_paymentToken",
tooltip = "CustomerPaymentImpl_paymentToken_tooltip",
tab = Presentation.Tab.Name.PAYMENT,
tabOrder = Presentation.Tab.Order.PAYMENT,
group = Presentation.Group.Name.PAYMENT,
groupOrder = Presentation.Group.Order.PAYMENT)
protected String paymentToken;
@Column(name = "IS_DEFAULT")
@AdminPresentation(friendlyName = "CustomerPaymentImpl_isDefault",
tab = Presentation.Tab.Name.PAYMENT,
tabOrder = Presentation.Tab.Order.PAYMENT,
group = Presentation.Group.Name.PAYMENT,
groupOrder = Presentation.Group.Order.PAYMENT)
protected boolean isDefault = false;
@ElementCollection
@MapKeyType(@Type(type = "java.lang.String"))
@Lob
@Type(type = "org.hibernate.type.StringClobType")
@CollectionTable(name = "BLC_CUSTOMER_PAYMENT_FIELDS", joinColumns = @JoinColumn(name = "CUSTOMER_PAYMENT_ID"))
@MapKeyColumn(name = "FIELD_NAME", nullable = false)
@Column(name = "FIELD_VALUE")
@Cascade(org.hibernate.annotations.CascadeType.ALL)
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE, region = "blStandardElements")
@AdminPresentationMap(friendlyName = "CustomerPaymentImpl_additionalFields",
tab = Presentation.Tab.Name.PAYMENT,
tabOrder = Presentation.Tab.Order.PAYMENT,
keyPropertyFriendlyName = "CustomerPaymentImpl_additional_field_key",
forceFreeFormKeys = true)
protected Map<String, String> additionalFields = new HashMap<String, String>();
@Override
public void setId(Long id) {
this.id = id;
}
@Override
public Long getId() {
return id;
}
@Override
public Customer getCustomer() {
return customer;
}
@Override
public void setCustomer(Customer customer) {
this.customer = customer;
}
@Override
public Address getBillingAddress() {
return billingAddress;
}
@Override
public void setBillingAddress(Address billingAddress) {
this.billingAddress = billingAddress;
}
@Override
public String getPaymentToken() {
return paymentToken;
}
@Override
public void setPaymentToken(String paymentToken) {
this.paymentToken = paymentToken;
}
@Override
public boolean isDefault() {
return isDefault;
}
@Override
public void setDefault(boolean aDefault) {
this.isDefault = aDefault;
}
@Override
public Map<String, String> getAdditionalFields() {
return additionalFields;
}
@Override
public void setAdditionalFields(Map<String, String> additionalFields) {
this.additionalFields = additionalFields;
}
@Override
public <G extends CustomerPayment> CreateResponse<G> createOrRetrieveCopyInstance(MultiTenantCopyContext context) throws CloneNotSupportedException {
CreateResponse<G> createResponse = context.createOrRetrieveCopyInstance(this);
if (createResponse.isAlreadyPopulated()) {
return createResponse;
}
CustomerPayment cloned = createResponse.getClone();
// don't deep-clone the customer reference
cloned.setCustomer(customer);
if (billingAddress != null) {
cloned.setBillingAddress(billingAddress.createOrRetrieveCopyInstance(context).getClone());
}
cloned.setDefault(isDefault);
cloned.setPaymentToken(paymentToken);
for (Map.Entry<String, String> entry : additionalFields.entrySet()) {
cloned.getAdditionalFields().put(entry.getKey(), entry.getValue());
}
return createResponse;
}
public static class Presentation {
public static class Group {
public static class Name {
public static final String PAYMENT = "CustomerPaymentImpl_payment";
}
public static class Order {
public static final int PAYMENT = 1000;
}
}
public static class Tab {
public static class Name {
public static final String PAYMENT = "CustomerPaymentImpl_payment";
public static final String BILLING_ADDRESS = "CustomerPaymentImpl_billingAddress";
}
public static class Order {
public static final int PAYMENT = 1000;
public static final int BILLING_ADDRESS = 2000;
}
}
}
}
|
// ListCustomizeResourceRangeStats lists stats data within the given time range for all online jobs
func (c *customizeStoreManager) ListCustomizeResourceRangeStats(start, end time.Time,
count int) (map[string][]*Metric, error) {
metrics := make(map[string][]*Metric)
c.lock.RLock()
defer c.lock.RUnlock()
for k, cache := range c.caches {
var values []*Metric
ms := cache.InTimeRange(start, end, count)
if len(ms) == 0 {
continue
}
for _, m := range ms {
values = append(values, m.(*Metric))
}
metrics[k] = values
}
return metrics, nil
} |
//-----------------------------------------------------------------------------
// luna2d engine
// Copyright 2014-2017 <NAME>
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to
// deal in the Software without restriction, including without limitation the
// rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
// sell copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
// IN THE SOFTWARE.
//-----------------------------------------------------------------------------
#include "luascript.h"
#include "luatable.h"
#include "lunaengine.h"
#include "lunafiles.h"
using namespace luna2d;
static const bool IS_64BIT_ARCH = sizeof(size_t) == 8; // Is current binary built for 64-bit architecture
LuaScript::LuaScript()
{
Open();
}
LuaScript::~LuaScript()
{
Close();
}
// Custom module loader for load modules from assets
int LuaScript::ModuleLoader(lua_State *luaVm)
{
LuaScript* lua = LuaScript::FromLuaVm(luaVm);
std::string moduleName = LuaStack<std::string>::Pop(luaVm, 1) + ".lua";
// Get path to script file where was called "require" function
lua_Debug info;
lua_getstack(luaVm, 2, &info);
lua_getinfo(luaVm, "S", &info);
std::string sourcePath = LUNAEngine::SharedFiles()->GetParentPath(info.source) + "/";
// Try load module by relative path
if(lua->LoadFile(sourcePath + moduleName)) return 1;
// Try load module by global path
if(lua->LoadFile("scripts/" + moduleName)) return 1;
return 0;
}
// Wrap some default lua functions
void LuaScript::WrapDefault()
{
// Wrap module loader for load modules from assets
LuaTable searchers = GetGlobalTable().GetTable("package").GetTable("searchers");
lua_rawgeti(luaVm, LUA_REGISTRYINDEX, searchers.GetRef()->GetRef());
LuaStack<lua_CFunction>::Push(luaVm, &LuaScript::ModuleLoader);
lua_rawseti(luaVm, -2, 2);
lua_pop(luaVm, 1);
}
void LuaScript::MakeWeakRegistry()
{
lua_createtable(luaVm, 0, 0);
lua_createtable(luaVm, 0, 1);
lua_pushliteral(luaVm, "__mode");
lua_pushliteral(luaVm, "v");
lua_rawset(luaVm, -3);
lua_setmetatable(luaVm, -2);
weakRegistryRef = luaL_ref(luaVm, LUA_REGISTRYINDEX);
}
// Get pointer to lua VM
lua_State* LuaScript::GetLuaVm()
{
return luaVm;
}
// Open lua state
void LuaScript::Open()
{
luaVm = luaL_newstate();
luaL_openlibs(luaVm); // Open standard lua libs
WrapDefault(); // Wrap default functions
MakeWeakRegistry();
// Store pointer to LuaScript instance in lua registry
lua_pushliteral(luaVm, "_L");
lua_pushlightuserdata(luaVm, this);
lua_rawset(luaVm, LUA_REGISTRYINDEX);
}
// Close lua state
void LuaScript::Close()
{
lua_close(luaVm);
}
void LuaScript::DoString(const std::string& str)
{
lua_pushcfunction(luaVm, &LuaScript::ErrorHandler); // Set error handler
luaL_loadstring(luaVm, str.c_str());
lua_pcall(luaVm, 0, LUA_MULTRET, -2); // Call using the error handler
}
bool LuaScript::DoFile(const std::string& filename)
{
lua_pushcfunction(luaVm, &LuaScript::ErrorHandler); // Set error handler
if(!LoadFile(filename))
{
lua_pop(luaVm, 1); // Remove error handler from stack
return false;
}
lua_pcall(luaVm, 0, LUA_MULTRET, -2); // Call with using error handler
return true;
}
// Load a file without running it
bool LuaScript::LoadFile(const std::string& filename)
{
// On 64-bit platforms, try to load the 64-bit bytecode version of the file if it exists
bool use64bit = IS_64BIT_ARCH && LUNAEngine::SharedFiles()->IsExists(filename + "64");
// "luaL_dofile" cannot open files from assets (e.g. inside an .apk),
// so load the file into a buffer and run it using "luaL_loadbuffer"
std::string buffer = LUNAEngine::SharedFiles()->ReadFileToString(use64bit ? filename + "64" : filename);
if(buffer.empty()) return false;
luaL_loadbuffer(luaVm, buffer.c_str(), buffer.size(), filename.c_str());
return true;
}
// Get global table
LuaTable LuaScript::GetGlobalTable()
{
lua_pushglobaltable(luaVm);
int ref = luaL_ref(luaVm, LUA_REGISTRYINDEX);
return LuaTable(luaVm, ref);
}
int LuaScript::GetWeakRegistryRef()
{
return weakRegistryRef;
}
// Lua error handler
int LuaScript::ErrorHandler(lua_State *luaVm)
{
// Log error
LUNA_LOGE("%s", lua_tostring(luaVm, 1));
// Log stack trace
LUNA_LOGE("Stack trace:");
lua_Debug info;
int depth = 0;
while(lua_getstack(luaVm, depth, &info))
{
lua_getinfo(luaVm, "Sln", &info);
if(info.currentline > -1) // Skip lines with native lua functions
{
LUNA_LOGE("%s:%d: %s\n", info.source, info.currentline, info.name ? info.name : "");
}
depth++;
}
return 0;
}
// Get pointer to LuaScript instance from lua_State
LuaScript* LuaScript::FromLuaVm(lua_State* luaVm)
{
lua_pushliteral(luaVm, "_L");
lua_rawget(luaVm, LUA_REGISTRYINDEX);
LuaScript* lua = static_cast<LuaScript*>(lua_touserdata(luaVm, -1));
lua_pop(luaVm, 1); // Remove pointer from stack
return lua;
}
LuaScript::operator lua_State*()
{
return GetLuaVm();
}
|
import { Injectable } from '@nestjs/common';
import { hash } from 'src/utils';
import { Connection } from 'typeorm';
import { Auth } from './models/auth.entity';
@Injectable()
export class AuthService {
constructor(private readonly connection: Connection) {}
async createToken(openId: string, userId: number): Promise<string> {
const token = hash(openId);
const auth = new Auth();
auth.token = token;
auth.open_id = openId;
auth.user_id = userId;
    // Delete any stale tokens for this openId first, then save the new one.
    // Running the delete and the save concurrently (e.g. via Promise.all)
    // could remove the freshly saved row, since both target the same open_id.
    await this.connection
      .createQueryBuilder()
      .delete()
      .from(Auth)
      .where('open_id = :openId', { openId })
      .execute();
    await this.connection.manager.save(auth);
return token;
}
}
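A minimal, framework-free sketch of the ordering concern in `createToken`: if the delete of old tokens and the save of the new one run concurrently, the delete can remove the freshly saved row. `FakeTable`, `createTokenSafe` and the `'tok:'` prefix below are illustrative stand-ins, not part of the service or of TypeORM.

```typescript
// Illustrative in-memory stand-in for the Auth table (not TypeORM).
type Row = { token: string; open_id: string };

class FakeTable {
  rows: Row[] = [];
  async save(r: Row): Promise<void> {
    this.rows.push(r);
  }
  async deleteByOpenId(openId: string): Promise<void> {
    this.rows = this.rows.filter(r => r.open_id !== openId);
  }
}

// Safe ordering: clear stale tokens first, then persist the new one.
async function createTokenSafe(table: FakeTable, openId: string): Promise<string> {
  const token = 'tok:' + openId; // stand-in for hash(openId)
  await table.deleteByOpenId(openId);
  await table.save({ token, open_id: openId });
  return token;
}
```

Calling `createTokenSafe` repeatedly for the same `openId` leaves exactly one row, which is the invariant the real service appears to want.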
|
Observation of strong intrinsic pinning in MgB2 films By using three types of MgB2 superconductors, namely c-axis-oriented single-crystal films, c-axis-oriented columnar-structure films and films without c-axis orientation perpendicular to the substrate surface, we have investigated the intrinsic pinning effect in MgB2 superconductors. Strong in-field performance of Jc was observed on turning the orientation of grains from the c axis to the a axis. Non-c-axis-oriented MgB2 films showed a noticeable increase of Jc at high fields compared with c-axis-oriented films, whether they had columnar structures or not. Our results clearly show that MgB2 has strong intrinsic pinning caused by the large anisotropy of the superconducting energy gap in the boron layers, like high-Tc cuprate superconductors with a layered structure. |
// chapters/7/7_11_2.cpp
#include <cstdlib> // for EXIT_SUCCESS
#include <iostream>
#include "Sales_data.h"
using std::cin; using std::cout; using std::endl;
using std::cerr;
int main(){
Sales_data s1;
Sales_data s2("9787121155352");
Sales_data s3("9787121155352", 100, 129.99);
Sales_data s4(cin);
return EXIT_SUCCESS;
} |
/**
 * This creates a new window to set the Min source site distance
*
*/
private void initDistanceControl() {
SetMinSourceSiteDistanceControlPanel distanceControlPanel = new SetMinSourceSiteDistanceControlPanel(this.getGlassPane());
distanceControlPanel.pack();
distanceControlPanel.setVisible(true);
} |
'''
The analyzer turns an expression into a self-evaluating function,
potentially saving the effort of parsing the expression every time it executes.
In practice it may not buy much, since we have already parsed the list expression,
but it's fun, so let's implement it anyway.
That said, I find deeply nested functions harder to debug than deeply nested expressions,
so I do not recommend this step.
'''
import inspect
from typing import Any, Callable, Dict, List, Optional, Type, Union
from sicp414_evaluator import AndExpr, SequenceExpr, BooleanExpr, BooleanVal, CallExpr, DefineProcExpr, DefineVarExpr, \
Environment, Expression, GenericExpr, IfExpr, LambdaExpr, NilExpr, NilVal, NotExpr, NumberExpr, NumberVal, OrExpr, \
PrimVal, ProcVal, QuoteExpr, SchemePanic, SchemeRuntimeError, SchemeVal, SequenceExpr, SetExpr, SymbolVal, UndefVal, \
StringExpr, StringVal, SymbolExpr, Token, env_define, find_type, install_is_equal_rules, install_parse_expr_rules, install_primitives, \
install_stringify_expr_rules, install_stringify_value_rules, is_truthy, make_global_env, parse_expr, parse_tokens, pure_check_proc_arity, \
pure_eval_call_invalid, pure_eval_call_prim, pure_eval_call_proc_extend_env, pure_eval_define_var, quote_token_combo, \
scan_source, scheme_flush, scheme_panic, stringify_token_full, stringify_value
from sicp416_resolver import ResDistancesType, install_resolver_rules, pure_resolved_eval_set, pure_resolved_eval_symbol, resolve_expr
class SchemeAnalysisError(Exception):
def __init__(self, token: Token, message: str):
self.token = token
self.message = message
def __str__(self):
return 'analysis error at %s in line %d: %s' % (stringify_token_full(self.token), self.token.line+1, self.message)
EvaluableType = Callable[[Environment], SchemeVal]
AnalRecurFuncType = Callable[[Expression], EvaluableType]
AnalFuncType = Callable[[Expression, AnalRecurFuncType,
ResDistancesType], EvaluableType]
_analyzer_rules: Dict[Type, AnalFuncType] = {}
def update_analyzer_rules(rules: Dict[Type, AnalFuncType]):
_analyzer_rules.update(rules)
def analyze_expr(expr: SequenceExpr, distances: ResDistancesType):
def analyze_recursive(expr: Expression):
t = find_type(type(expr), _analyzer_rules)
f = _analyzer_rules[t]
return f(expr, analyze_recursive, distances)
try:
eval = analyze_recursive(expr)
def _evaluate(env: Environment):
try:
res = eval(env)
except SchemeRuntimeError as err:
scheme_panic(str(err))
return res
except SchemeAnalysisError as err:
scheme_panic(str(err))
return _evaluate
'''analysis list rule definitions'''
AnalRuleType = Union[
Callable[[], EvaluableType],
Callable[[GenericExpr], EvaluableType],
Callable[[GenericExpr, AnalRecurFuncType], EvaluableType],
Callable[[GenericExpr, AnalRecurFuncType, ResDistancesType], EvaluableType]
]
def analyzer_rule_decorator(rule_func: AnalRuleType):
arity = len(inspect.getfullargspec(rule_func).args)
def _analyzer_rule_wrapped(expr: Expression, analyze: AnalRecurFuncType, distances: ResDistancesType):
args: List[Any] = [expr, analyze, distances]
return rule_func(*args[0:arity])
return _analyzer_rule_wrapped
@analyzer_rule_decorator
def analyze_symbol(expr: SymbolExpr, analyze: AnalRecurFuncType, distances: ResDistancesType):
def _evaluate(env: Environment):
return pure_resolved_eval_symbol(expr, env, distances)
return _evaluate
@analyzer_rule_decorator
def analyze_string(expr: StringExpr):
return lambda env: StringVal(expr.value.literal)
@analyzer_rule_decorator
def analyze_number(expr: NumberExpr):
return lambda env: NumberVal(expr.value.literal)
@analyzer_rule_decorator
def analyze_boolean(expr: BooleanExpr):
return lambda env: BooleanVal(expr.value.literal)
@analyzer_rule_decorator
def analyze_nil():
return lambda env: NilVal()
@analyzer_rule_decorator
def analyze_sequence(expr: SequenceExpr, analyze: AnalRecurFuncType):
evls: List[EvaluableType] = []
for subexpr in expr.contents:
evl = analyze(subexpr)
evls.append(evl)
def _evaluate(env: Environment):
res: SchemeVal = UndefVal()
for evl in evls:
res = evl(env)
return res
return _evaluate
@analyzer_rule_decorator
def analyze_quote(expr: QuoteExpr):
return lambda env: quote_token_combo(expr.content)
class ProcAnalyzedVal(ProcVal):
'''procedure body is a EvaluableType'''
def __init__(self, name: str, pos_paras: List[str], rest_para: Optional[str], body: EvaluableType, env: Environment):
super().__init__(name, pos_paras, rest_para, env)
self.body = body
def pure_eval_call_proc_analyzed(paren: Token, operator: ProcAnalyzedVal, operands: List[SchemeVal]):
pure_check_proc_arity(paren, operator, operands)
new_env = pure_eval_call_proc_extend_env(operator, operands)
return operator.body(new_env)
@analyzer_rule_decorator
def analyze_call(expr: CallExpr, analyze: AnalRecurFuncType):
operator_evl = analyze(expr.operator)
operand_evls = [analyze(subexpr) for subexpr in expr.operands]
def _evaluate(env: Environment):
operator = operator_evl(env)
operands = [evl(env) for evl in operand_evls]
if isinstance(operator, PrimVal):
return pure_eval_call_prim(expr.paren, operator, operands)
elif isinstance(operator, ProcAnalyzedVal):
return pure_eval_call_proc_analyzed(expr.paren, operator, operands)
else:
return pure_eval_call_invalid(expr.paren, operator)
return _evaluate
@analyzer_rule_decorator
def analyze_set(expr: SetExpr, analyze: AnalRecurFuncType, distances: ResDistancesType):
initializer_evl = analyze(expr.initializer)
def _evaluate(env: Environment):
initializer = initializer_evl(env)
return pure_resolved_eval_set(expr, initializer, env, distances)
return _evaluate
@analyzer_rule_decorator
def analyze_define_var(expr: DefineVarExpr, analyze: AnalRecurFuncType):
initializer_evl = analyze(expr.initializer)
def _evaluate(env: Environment):
initializer = initializer_evl(env)
return pure_eval_define_var(expr.name, initializer, env)
return _evaluate
@analyzer_rule_decorator
def analyze_define_proc(expr: DefineProcExpr, analyze: AnalRecurFuncType):
body_evl = analyze(expr.body)
def _evaluate(env: Environment):
proc_obj = ProcAnalyzedVal(expr.name.literal, [p.literal for p in expr.pos_paras], expr.rest_para.literal if expr.rest_para is not None else None, body_evl, env)
env_define(env, expr.name.literal, proc_obj)
return SymbolVal(expr.name.literal)
return _evaluate
@analyzer_rule_decorator
def analyze_if(expr: IfExpr, analyze: AnalRecurFuncType):
pred_evl = analyze(expr.pred)
then_evl = analyze(expr.then_branch)
else_evl = None
if expr.else_branch is not None:
else_evl = analyze(expr.else_branch)
def _evaluate(env: Environment):
if is_truthy(pred_evl(env)):
return then_evl(env)
elif else_evl is not None:
return else_evl(env)
else:
return UndefVal()
return _evaluate
@analyzer_rule_decorator
def analyze_lambda(expr: LambdaExpr, analyze: AnalRecurFuncType):
body_evl = analyze(expr.body)
def _evaluate(env: Environment):
return ProcAnalyzedVal('lambda', [p.literal for p in expr.pos_paras], expr.rest_para.literal if expr.rest_para is not None else None, body_evl, env)
return _evaluate
@analyzer_rule_decorator
def analyze_and(expr: AndExpr, analyze: AnalRecurFuncType):
evls = [analyze(subexpr) for subexpr in expr.contents]
    def _evaluate(env: Environment):
        # (and) with no operands evaluates to #t
        res: SchemeVal = BooleanVal(True)
        for evl in evls:
            res = evl(env)
            if not is_truthy(res):
                return res
        return res
return _evaluate
@analyzer_rule_decorator
def analyze_or(expr: OrExpr, analyze: AnalRecurFuncType):
evls = [analyze(subexpr) for subexpr in expr.contents]
    def _evaluate(env: Environment):
        # (or) with no operands evaluates to #f
        res: SchemeVal = BooleanVal(False)
        for evl in evls:
            res = evl(env)
            if is_truthy(res):
                return res
        return res
return _evaluate
@analyzer_rule_decorator
def analyze_not(expr: NotExpr, analyze: AnalRecurFuncType):
evl = analyze(expr.content)
def _evaluate(env: Environment):
res = evl(env)
return BooleanVal(False) if is_truthy(res) else BooleanVal(True)
return _evaluate
def install_analyzer_rules():
rules = {
SequenceExpr: analyze_sequence,
SymbolExpr: analyze_symbol,
StringExpr: analyze_string,
NumberExpr: analyze_number,
BooleanExpr: analyze_boolean,
NilExpr: analyze_nil,
QuoteExpr: analyze_quote,
CallExpr: analyze_call,
SetExpr: analyze_set,
DefineVarExpr: analyze_define_var,
DefineProcExpr: analyze_define_proc,
IfExpr: analyze_if,
LambdaExpr: analyze_lambda,
AndExpr: analyze_and,
OrExpr: analyze_or,
NotExpr: analyze_not
}
update_analyzer_rules(rules)
def install_rules():
install_parse_expr_rules()
install_stringify_expr_rules()
install_stringify_value_rules()
install_is_equal_rules()
install_resolver_rules()
install_analyzer_rules()
install_primitives()
def test_one(source: str, **kargs: str):
'''
each test tries to execute the source code as much as possible
capture the output, panic and result
print them and compare to expected value
'''
# source
source = source.strip()
print('* source: %s' % source)
try:
# scan
tokens = scan_source(source)
# parse
combos = parse_tokens(tokens)
expr = parse_expr(combos)
# resolve
distances = resolve_expr(expr)
# analyze
evl = analyze_expr(expr, distances)
# evaluate
glbenv = make_global_env()
result = evl(glbenv)
result_str = stringify_value(result)
output_str = scheme_flush()
if len(output_str):
print('* output: %s' % output_str)
if 'output' in kargs:
assert output_str == kargs['output']
print('* result: %s' % result_str)
if 'result' in kargs:
assert result_str == kargs['result']
except SchemePanic as err:
# any kind of panic
print('* panic: %s' % err.message)
assert err.message == kargs['panic']
print('----------')
def test():
# use before intialization in different scopes
test_one(
'''
(define (f)
(define a (cons 1 (lambda () a)))
(car ((cdr a))))
(f)
''',
result='1'
)
# global redefinition
test_one(
'''
(define x 1)
(define x 2)
x
''',
result='2'
)
# local variable shadows outer definitions
test_one(
'''
(define x 1)
(define (f)
(define x 2)
x)
(f)
''',
result='2'
)
# if
test_one(
'''
(define x (if #t 1 2))
(if (= x 1) (display "a"))
(if (= x 2) (display "b"))
''',
output='a',
result='#<undef>'
)
# if, begin and set
test_one(
'''
(define x 1)
(define (f)
(if (= x 1) (begin (set! x 2) x) (x)))
(f)
''',
result='2'
)
# single iteration
test_one(
'''
(define (run)
(define (factorial n)
(define (fact-iter product counter)
(if (> counter n)
product
(fact-iter (* counter product)
(+ counter 1))))
(fact-iter 1 1))
(factorial 5))
(run)
''',
result='120'
)
# single recursion
test_one(
'''
(define (run)
(define (factorial n)
(if (= n 1)
1
(* n (factorial (- n 1)))))
(factorial 5))
(run)
''',
result='120'
)
# mutual recursion
test_one(
'''
(define (f)
(define (even n) (if (= n 0) #t (odd (- n 1))))
(define (odd n) (if (= n 0) #f (even (- n 1))))
(even 5))
(f)
''',
result='#f'
)
if __name__ == '__main__':
install_rules()
test()
|
Subjective Predictors of Psychological Well-being of Gifted Adolescents The paper presents the results of empirical verification of the theoretical model of subjective predictors for the psychological well-being of gifted adolescents, including subjectivity, hardiness, self-efficacy, the general emotional background, represented by the level of personal anxiety, and characteristics of the self-concept of adolescents. It was assumed that the factors moderating the relationship of these variables are the attitude of adolescents to their own giftedness, as well as the specificity of the activity in which adolescents show signs of giftedness, and the level of their achievements in it. The sample consisted of 422 adolescents aged 15-17 years enrolled in specialized educational programs for adolescents who show academic, mathematical, leadership and sports talent. The collection of empirical data was carried out using questionnaires and testing (Scale of psychological well-being; Frankfurt scales of self-assessment; Questionnaire of subjectivity; Test of hardiness; The self-efficacy scale; Test for determining self-efficacy; Integrative test of anxiety). Structural equation modeling, carried out using the IBM SPSS Statistics ver. 23 software package with the AMOS module, made it possible to recognize subjectivity (p<0.001) as a predictor of the psychological well-being of gifted adolescents; the influence of subjectivity and hardiness is mediated by the attitude of adolescents to their own giftedness (p<0.01), which, in turn, is determined by the level of their achievements (p<0.01). The characteristics of the self-concept, as well as such factors as "the type of activity in which the signs of giftedness are manifested" and "gender", were not included in the empirical model. The prognostic potential of the model and the possibility of solving, on its basis, tasks associated with the psychological support of the personal development of gifted adolescents are discussed. |
Observable induced gravitational waves from an early matter phase Assuming that inflation is succeeded by a phase of matter domination, which corresponds to a low temperature of reheating $T_r<10^9\rm{GeV}$, we evaluate the spectra of gravitational waves induced in the post-inflationary universe. We work with models of hilltop inflation with an enhanced primordial scalar spectrum on small scales, which can potentially lead to the formation of primordial black holes. We find that a lower reheat temperature leads to the production of gravitational waves with energy densities within the ranges of both space and earth based gravitational wave detectors. Introduction Induced gravitational waves are produced as a result of the interaction between scalar perturbations at second order in the post-inflationary universe. The amplitude of their spectra depends on the square of the primordial scalar spectrum, and a relatively large induced gravitational wave spectrum is expected from the generation of Primordial Black Holes (PBHs). In a previous paper Ref. we evaluated the spectra of induced gravitational waves generated during a radiation dominated era from the hilltop-type and running mass models, which have been shown to be the only models which can lead to Primordial Black Holes. We showed that these models lead to an induced gravitational wave signature within the sensitivity ranges of the planned gravitational wave detectors DECIGO and BBO. We also found that the running mass model predicted spectra within the sensitivity of eLISA, under the proviso that inflation is terminated early, with the intriguing factor that if we could motivate $N \sim 40$ we would get a detectable signature of PBHs with a mass compatible with Dark Matter. In this paper, we assume that the universe undergoes a phase of early matter domination, which lowers the reheat temperature as well as the number of allowed e−folds of inflation. 
The source term during the matter phase is constant, and as a result of the absence of pressure the density contrast grows. This may result in perturbations entering the non-linear regime and decoupling from the Hubble flow. In our analysis we assume only a linear evolution of perturbations, and we account for the non-linear evolution by cutting off our analysis at some critical scale. To calculate this critical scale we begin by stating that perturbations in the early matter phase obey the Poisson equation, which on sub-horizon scales gives $$-k^2\Phi = 4\pi G a^2\rho_m\,\delta_m \quad\Longrightarrow\quad \delta_m = -\frac{2}{3}\left(\frac{k}{\mathcal{H}}\right)^2\Phi, \qquad (1.1)$$ where $\rho_m$ is the energy density of matter, $\mathcal{H}$ is the conformal Hubble parameter and $\Phi$ is the gravitational potential; with $\Phi$ constant during matter domination, $\delta_m$ grows $\propto a$. The evolution of the perturbations is therefore linear until the density contrast becomes of order unity, which occurs at the scale $$\frac{k_{NL}}{k_r} \simeq \mathcal{P}^{-1/4}, \qquad (1.2)$$ where $k_r$ is the scale which re-enters the horizon at the time of reheating, $\mathcal{P}$ is the primordial spectrum and $k_{NL}$ is the critical scale at which we terminate our calculation. This paper is organised as follows: in section 2 we review the parameters of inflation; in section 3 we present the thermal history of the universe, relating the temperature of reheating to the relevant scale of reheating; in section 4 we calculate the bounds on the primordial spectrum from PBHs; in section 5 we review the spectrum of induced gravitational waves produced during the early matter phase; in section 6 we review the models of inflation that can lead to a detectable level of induced gravitational waves; in section 7 we present the results; and the final discussion is presented in section 8. The following conventions are utilised in this paper: $\eta$ refers to conformal time and is related to proper time $t$ as $d\eta = dt/a$; $a$ is the scale factor; and the conformal Hubble parameter $\mathcal{H}$ is related to the Hubble parameter $H \equiv \dot{a}/a$ as $\mathcal{H} = aH$. Scales are denoted by $k$, are given in units of inverse megaparsec $\mathrm{Mpc}^{-1}$, and are related to physical frequency $f$ as $f = ck/(2\pi a_0)$, where $c$ is the speed of light. 
We assume a radiation dominated universe at the time of the formation of the gravitational waves, in which case we have $a = a_0(\eta/\eta_0)$, $\mathcal{H} = \eta^{-1}$, and the scale at re-entry is $k = \eta^{-1}$. Inflationary Parameters Models of inflation can be parametrised by the slow roll parameters: $$\epsilon = \frac{M_G^2}{2}\left(\frac{V'}{V}\right)^2, \qquad \eta_V = M_G^2\,\frac{V''}{V}, \qquad \xi^2 = M_G^4\,\frac{V'V'''}{V^2},$$ where $V$ is the potential, and derivatives are with respect to the inflaton field. These are related to the observational parameters, the spectral index $n_s$, the running of the spectral index $n_s'$ and the scalar spectrum $\mathcal{P}$, as: $$n_s - 1 = 2\eta_V - 6\epsilon, \qquad n_s' = 16\epsilon\,\eta_V - 24\epsilon^2 - 2\xi^2, \qquad \mathcal{P} = \frac{V}{24\pi^2 M_G^4\,\epsilon}.$$ We use a time re-parametrisation, $N = \ln\frac{a_e}{a_*}$, where the subscripts $e$ and $*$ denote the end of inflation and the time of horizon exit respectively. This is related to the potential in the slow roll limit as $$N \simeq \frac{1}{M_G^2}\int_{\phi_e}^{\phi_*}\frac{V}{V'}\,d\phi,$$ and to the scale at horizon exit as $$N(k) = \ln\frac{k}{k_0},$$ where $k_0 = 0.002\,\mathrm{Mpc}^{-1}$ is the pivot scale, and in this paper we effectively take $N(k_0) = 0$. We use the latest data release from the WMAP mission, for the WMAP data combined with BAO and H0 data with a null tensor prior. Throughout this paper we take $n_s = 0.96$ and $n_s' \leq 0.0062$. The temperature of reheating The number of e−folds can be related to the temperature of reheating $T_r$, where we have taken the energy scale of inflation to be the SUSY GUT scale $\sim 10^{16}\,\mathrm{GeV}$. Assuming SUSY means that $N_{max} = 56$; otherwise we can push this estimate up to $N_{max} \sim 60$. In this work we are only interested in reducing the number of e−folds via the inclusion of an early matter phase. The thermal history of the universe can support a reheat temperature down to 1 MeV, as this is the temperature below which neutrinos fail to thermalise and affect big-bang nucleosynthesis, and hence $N \gtrsim 37$. 
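A quick numerical sketch of the inflationary parametrisation in this section, assuming the standard slow-roll expressions $n_s - 1 = 2\eta_V - 6\epsilon$ and $n_s' = 16\epsilon\eta_V - 24\epsilon^2 - 2\xi^2$, together with $k(N) = k_0 e^N$ (the latter is implied by the relation $k_{max} = k_{pivot}e^{N_{max}}$ used later in the text). The sample values of $\epsilon$ and $\eta_V$ are arbitrary, chosen only so that $n_s$ lands near the quoted 0.96; they are not fitted to any model in the paper.

```python
import math

def observables(eps, eta, xi2):
    """Slow-roll parameters -> (n_s, running of n_s)."""
    n_s = 1.0 + 2.0 * eta - 6.0 * eps
    running = 16.0 * eps * eta - 24.0 * eps ** 2 - 2.0 * xi2
    return n_s, running

def k_of_N(N, k0=0.002):
    """Comoving scale (Mpc^-1) exiting the horizon N e-folds after the pivot."""
    return k0 * math.exp(N)

# Arbitrary illustrative values giving n_s = 0.96 with zero running:
n_s, running = observables(eps=0.0, eta=-0.02, xi2=0.0)
print(n_s, running)        # 0.96 0.0
print(k_of_N(56.0))        # ~4.2e21 Mpc^-1: the smallest scale for N_max = 56
```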
To relate $T_r$ to the scale $k_r$ we assume the conservation of entropy, which gives $$k_r = \frac{a_r H_r}{a_0}, \qquad H_r^2 = \frac{\pi^2 g_*}{90}\,\frac{T_r^4}{M_G^2}, \qquad \frac{a_r}{a_0} = \left(\frac{g_{*s,0}}{g_{*s}}\right)^{1/3}\frac{T_0}{T_r},$$ where $g_{*s}$ is the number of degrees of freedom and $M_G$ is the gravitational scale ($M_G = M_p/\sqrt{8\pi} \simeq 2.4\times10^{18}\,\mathrm{GeV}$), and since $k = aH/a_0$ we get the scale which re-enters the horizon at the end of reheating, $k_r$: $$k_r \sim 1.7\times10^{16}\,\mathrm{Mpc}^{-1}\left(\frac{T_r}{10^9\,\mathrm{GeV}}\right)\left(\frac{g_{*s}}{106.75}\right)^{1/6}.$$ If the primordial spectrum of perturbations towards the end of inflation is large enough, i.e. if the density contrast exceeds $\delta \approx 1/3$, perturbations can collapse to form primordial black holes. Based on this, constraints can be placed on the spectrum from astrophysical phenomena. In our previous paper, we numerically converted the mass fraction of the PBHs into a power spectrum. To perform this analysis we assumed a gaussian distributed energy perturbation and a very large reheat temperature $T_r \gtrsim 10^{10}\,\mathrm{GeV}$. In this paper, we update this calculation for lower reheat temperatures. $M_{BH}(T)$ and $k(M_{BH})$ The comoving wavenumber corresponding to the Hubble radius at temperature $T$ is $$k(T) = \frac{a(T)H(T)}{a_0}.$$ On the other hand, the mass of PBHs produced at temperature $T$ is $$M_{BH}(T) = \gamma\,\frac{4\pi M_G^2}{H(T)},$$ where $\gamma$ is a numerical factor and $M_\odot = 1.989\times10^{33}\,\mathrm{g}$. Eliminating $T$, we find $$k(M_{BH}) \sim 1.7\times10^{9}\,\mathrm{Mpc}^{-1}\left(\frac{M_{BH}}{0.946\times10^{28}\,\mathrm{g}}\right)^{-1/2}.$$ In our numerical calculation we adopt $\gamma = 1$ and $g_{*s} = 106.75$ for all values of $T$, which implies $$M_{BH} \approx 0.946\times10^{28}\,\mathrm{g}\left(\frac{T}{10^2\,\mathrm{GeV}}\right)^{-2}. \qquad (4.4)$$ This is only precise for $M_{BH} \lesssim 10^{28}\,\mathrm{g}$, but the error will not be very large even for larger PBHs. $T_r$ sets a cut-off Our setup is such that the universe after inflation is first dominated by an oscillating scalar field and then reheated to the temperature $T_r$. We neglect any PBHs produced before reheating (i.e., during the early matter domination) and even those produced after reheating if the production happens on sub-horizon scales. This gives us conservative PBH constraints. In Ref., the authors discussed PBH formation in a matter-dominated universe. Assuming a spherical collapse of a dense region into a PBH, they obtain the result that more PBHs tend to be produced. 
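A short numerical sketch of the scalings in this section. The prefactors follow the $k_r(T_r)$ and $M_{BH}(T)$ expressions quoted in the text ($1.7\times10^{16}\,\mathrm{Mpc}^{-1}$ and $0.946\times10^{28}\,\mathrm{g}$, with $\gamma = 1$); the frequency conversion assumes the standard reading $f = ck/(2\pi a_0)$, with $a_0 = 1$, of the convention stated in the introduction.

```python
import math

MPC_IN_M = 3.0857e22    # metres per megaparsec
C_LIGHT = 2.99792458e8  # speed of light, m/s

def k_r(T_r_GeV, g_s=106.75):
    """Comoving scale (Mpc^-1) re-entering the horizon at reheating."""
    return 1.7e16 * (T_r_GeV / 1e9) * (g_s / 106.75) ** (1.0 / 6.0)

def m_bh_grams(T_GeV):
    """PBH mass (grams) formed at temperature T, with gamma = 1."""
    return 0.946e28 * (T_GeV / 1e2) ** (-2.0)

def freq_today_hz(k_mpc_inv):
    """Present-day GW frequency for a comoving wavenumber k (a0 = 1)."""
    return C_LIGHT * (k_mpc_inv / MPC_IN_M) / (2.0 * math.pi)

# T_r = 10^9 GeV puts k_r near the ground-based detector band (tens of Hz):
print(freq_today_hz(k_r(1e9)))   # ~26 Hz
print(m_bh_grams(1e2))           # 0.946e28 g at T = 100 GeV
```

Lowering the reheat temperature moves $k_r$ (and hence the peak of the induced spectrum) to smaller wavenumbers, i.e. toward space-based and pulsar-timing frequencies, which is the qualitative point of the paper.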
This would be reasonable since the pressure $P = 0$. However, they did not consider any non-spherical effects, which would cause non-spherical morphologies to evolve during the collapsing phase and would have prevented a further collapse. To get a rigorous bound on the PBH spectrum for the early matter phase, we need numerical simulations to obtain the correct criterion for matter collapse in matter domination, analogous to the radiation domination criterion that $1/3 < \delta < 1$ for matter to collapse into a PBH. In fact, in this treatment there arises a cut-off PBH mass determined by the reheat temperature: $$M_{BH}^{co} \approx 0.946\times10^{28}\,\mathrm{g}\left(\frac{T_r}{10^2\,\mathrm{GeV}}\right)^{-2}, \qquad (4.5)$$ which corresponds to a cut-off wavenumber $k_{co}(T_r) \equiv k(M_{BH}^{co})$. The scalar perturbations with $k > k_{co}(T_r)$ may not be constrained by PBHs; see Fig. 1. The spectra of induced gravitational waves We follow the analysis of Ref. with much of the notation of Ref.. The spectrum of induced gravitational waves generated during an early matter phase was first calculated in Ref.; in it, $x = k\eta$, where $k$ is the scale and $\eta$ is conformal time; $v = \tilde{k}/k$ is the ratio of the incoming to outgoing scales; $y^2 = 1 + v^2 - 2v\mu$, where $\mu$ is the cosine of the angle between the incoming and outgoing scales; $\mathcal{P}$ is the spectrum of primordial fluctuations, in this case generated during inflation; and $I_{MDS}$ is the time integral appearing in the spectrum. We have taken the lower limit to be $x = 0$, and terminated the integration at $x = k\eta_r$, where $\eta_r$ is the conformal time at the end of the reheating era. The spectrum of induced gravitational waves during radiation domination involves the time integrals used in this paper, which are derived in Ref. and given in appendix A. Analytical Estimate for a Flat Spectrum In this section we assume a flat spectrum and set $\mathcal{P}(k) = \Delta^2_{\mathcal{R}} \sim 10^{-9}$. Then, considering the term $I_{MDS}/x^{3/2}$, it is clear that for $x \gg 1$, $I_{MDS}$ approaches a constant. 
Substituting this into our equation for the spectrum, and recalling that $x = k\eta$, where $\eta$ is the limit of our time integral, which we take to be the end of the reheating phase, $\eta_r = 2/k_r$, we see that $x_r = k\eta_r$ is much greater than 1 for most of the scales we consider. Hence we can pull $I_{MDS}$ out of the integral, and the integrals can be performed analytically. We find that the analytical equation is compatible with the numerical calculation for a flat spectrum. For scales $k_r < k < k_{max}$ we take the upper limit on $v$ to be $v_{max} = k_{max}/k$ and the lower limit to be $v_{min} = k_{min}/k$, with $k_{min} = k_r$, where $k_r$ is the scale that re-entered the horizon at the end of the matter era; we have also taken $I_{MDS} \approx 1$. By only considering scales which re-enter the horizon near $k \sim k_r$, those whose amplitudes have grown the most, Eq. (5.8) can be reduced to $\sim 356\,(16k_{max}/(15k))$, which for $k_{max} = k_{NL} \sim 141k_r$ and $k \sim k_r$ is $\sim 10^5$. Taking instead $k_{max} = 10^3 k_r$ leads to a spectrum maximum of $\approx 10^6$, as is confirmed in the full numerical calculation shown in Fig. 4. The evolution of the tensor mode Defining $v_k = ah_k$, the equation of motion for the tensor modes, $$v_k'' + \left(k^2 - \frac{a''}{a}\right)v_k = aS_k,$$ can be solved approximately for the full evolution of the universe. Using step and boxcar functions, the source term $aS_k$ can be written out piecewise, where we take $S \propto \eta^{-3}$ during radiation domination, $\Theta$ is the Heaviside step function and $\Theta_{req} = \Theta(\eta - \eta_r) - \Theta(\eta - \eta_{eq})$ is the boxcar function. The scale factor can be written out in a similar fashion. For sub-horizon modes, $k \gg 1/\eta$, we obtain the solution plotted black in Fig. 2. In this scenario, inflation gives way to an early phase of matter domination which ends when $\eta = \eta_r$ and is followed by a phase of radiation domination that is overtaken by matter at $\eta = \eta_{eq}$. The source term is at first constant; then, when $\eta_r < \eta < \eta_{eq}$, it decays at a rate $\propto \eta^{-3}$, and it is constant again for $\eta > \eta_{eq}$. 
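A numeric check of the reduced flat-spectrum estimate quoted above, $\sim 356\,(16k_{max}/(15k))$, and of the consistency of the quoted non-linear cutoff $k_{NL} \sim 141k_r$ with the $\mathcal{P}^{-1/4}$ scaling of the cutoff, assuming a flat primordial amplitude $\mathcal{P} \approx 2.4\times10^{-9}$ (the WMAP-era normalisation; the text only says $\sim 10^{-9}$):

```python
def reduced_estimate(k_over_kr, kmax_over_kr):
    """Reduced form of Eq. (5.8) quoted in the text: ~356 * (16 k_max / (15 k))."""
    return 356.0 * 16.0 * kmax_over_kr / (15.0 * k_over_kr)

# k ~ k_r with k_max = 141 k_r -> ~5.4e4, i.e. of order 10^5 as stated:
print(reduced_estimate(1.0, 141.0))
# k_max = 10^3 k_r -> ~3.8e5, approaching the ~10^6 numerical maximum:
print(reduced_estimate(1.0, 1000.0))
# Cutoff consistency: P^(-1/4) for P = 2.4e-9 gives ~143, close to the quoted 141:
print((2.4e-9) ** (-0.25))
```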
The amplitude of the sub-horizon tensor modes which re-enter the horizon during early matter domination is held at a constant value until $\eta_r$, when the mode begins to freely propagate and decay at a rate $\propto a^{-1}$, until it becomes equal to the source term and is again held at a constant value. The super-horizon modes grow until $\eta = \eta_r$, are held at a constant for $\eta_r < \eta < \eta_{eq}$, and grow again for $\eta > \eta_{eq}$. The accuracy of the sudden transition approximation Throughout this paper we have utilised the sudden transition approximation between an early matter phase and radiation. In this section we investigate the effect a smoother turnover has on the tensor modes generated during the early matter phase. For this we make the following approximations for the scale factor and source term, where $n$ is an integer. We plot the results for $n = 1$, $n = 8$ and the sudden transition approximation in Fig. 3. As is clear from the figure, in all cases the tensor modes approach the freely oscillating stage during radiation domination; however, the smooth turnover results in an amplitude smaller by an order of magnitude. We also note that $n = 8$ is very close to the sudden transition approximation, and that $n = 1$ is less than an order of magnitude smaller than it. This phenomenon requires further investigation. Figure 3. The solid lines represent the source term: the red is the sudden transition approximation, Eq. (5.10), the green is the smooth turnover with $n = 1$, and the black is $n = 4$. The dashed lines represent the tensor modes. As we can see, the tensor modes do approach the freely oscillating limit, but there is some loss of amplitude with respect to the sudden transition approximation. Transfer Function Detectors of gravitational waves will place a bound on the energy density of gravitational waves, defined as $$\Omega_{GW}(f) = \frac{1}{\rho_c}\frac{d\rho_{GW}}{d\ln f},$$ where $f$ is the frequency, $\rho_c$ is the critical energy density defining the coasting solution of the Friedmann equation and $\rho_{GW}$ is the energy density of gravitational waves. 
This is related to the primordial spectrum via a transfer function, $\mathcal{P}_h(k,\eta) = t^2(k, \eta)\mathcal{P}_h(k)$. Scales that re-enter the horizon during a period of early matter domination experience a constant source term, and the amplitude of the tensor mode is kept at its super-horizon value. However, once the universe enters the radiation epoch the source term decays, as does the amplitude of the tensor modes, as is depicted in Fig. 2. In Ref. they show that for scales smaller than some critical scale, which our scales of interest are, the transfer function is $t(k,\eta) = a_k/a(\eta)$, where $a_k$ is the scale factor when the scale $k$ enters the horizon. To be precise, it is the scale factor at the time when the source term at that scale begins to decay. That is, for scales that enter the horizon during early matter domination, $a_k = a_r$. Our transfer function is then $$t(k,\eta) = \frac{a_r}{a_{eq}}\,\frac{a_{eq}}{a(\eta)} = \frac{k_{eq}}{k_r}\,\frac{a_{eq}}{a(\eta)}, \qquad (5.15)$$ where the subscript eq denotes radiation-matter equality, and the relative energy of scalar-induced gravitational waves then follows, with $z$ the redshift. For scales which re-enter the horizon during radiation domination, the relative energy of induced gravitational waves follows analogously. Full numeric results for a flat spectrum To get the full spectrum of induced gravitational waves for an early matter phase followed by a radiation phase, we evaluate the sum of the two contributions, where $F(v, y, \eta)$ is the integral over $v, y$ in Eq. (5.3), and we drop the cross terms arising from the matter-radiation convolution in the last line. This approximation is reasonable since the cross terms are only of significance at $k \sim k_r$. Figure 4 is a depiction of the spectrum of induced gravitational waves arising from a flat primordial spectrum of density perturbations, $n_s = 1$. This figure was generated mainly for illustrative purposes, and as such we have chosen $k_{NL}$ to be $10^3$ times as large as the value calculated using Eq. (1.2). We assume that modes with $k > k_{NL}$ do not experience the constant source term. 
Scales which are still super-horizon at the end of the early-matter phase have a spectrum $P_h \propto k^3$ and therefore become rapidly smaller than the spectrum generated by the pure radiation source term. That means that for modes which enter the horizon soon after $\eta_r$ we need only consider the convolution of modes with those that enter during the radiation era. We present the results for a flat spectrum at various reheat temperatures in Fig. 5, where we have taken the limits on $v$ to be $v_{min} = k_r/k$ and $v_{max} = k_{NL}/k$, with the latter upper bound accounting for the non-linear cutoff. We could have modified the calculation and checked for each $v$ and $y$ that $\tilde{k} < k_{NL}$ and $|k - \tilde{k}| < k_{NL}$; however, simply modifying the limits of $v$ has the same effect. Figure 4. We plot the spectrum of induced gravitational waves for a flat primordial spectrum with $T_r = 10^9\,\mathrm{GeV}$. The spectrum for scales that re-enter the horizon deep in the radiation era is flat, due to the fact that the source term is a decaying function and thus the modes oscillate freely, and is represented by the solid blue line. The solid black line represents the modes that re-enter the horizon during the early matter phase ($k > k_r$). The red dashed line is the complete spectrum, assuming an early matter phase followed by a phase of radiation domination. The spectrum behaves as $P_h \propto 1/k^4$ for $k > k_{NL}$, as $P_h \propto 1/k$ for $k_r < k < k_{NL}$, as $P_h \propto k^3$ for $k \lesssim k_r$, and as $P_h \sim$ constant for $k \ll k_r$. One can think of this as follows: modes that re-enter the horizon during the radiation phase but with $k \sim k_r$, i.e. near the EMD phase, will interact with modes that re-entered during EMD, and hence their characteristics are modified from the instant reheating scenario. We have utilised a simplified analysis, in that $P_h(k) = P_{h,matter}(k) + P_{h,rad}(k)$. Figure 5. We plot the spectrum of induced gravitational waves for a flat primordial spectrum with various reheat temperatures and the cutoff scale $k_{max} = k_{NL}$. The black dashed lines correspond to the PBH bound assuming (from left pseudo-vertical line to right pseudo-vertical line) $T_r = 1\,\mathrm{GeV}$, $10\,\mathrm{GeV}$, $10^2\,\mathrm{GeV}$, $10^3\,\mathrm{GeV}$, $10^4\,\mathrm{GeV}$, $10^5\,\mathrm{GeV}$, $10^6\,\mathrm{GeV}$, $10^7\,\mathrm{GeV}$, $10^8\,\mathrm{GeV}$ and $10^9\,\mathrm{GeV}$. 
The black dashed lines correspond to the PBH bound assuming (from the leftmost to the rightmost pseudo-vertical line) T_r = 1 GeV, 10 GeV, 10^2 GeV, 10^3 GeV, 10^4 GeV, 10^5 GeV, 10^6 GeV, 10^7 GeV, 10^8 GeV and 10^9 GeV. The cluster of solid lines in the top right corner are the sensitivity ranges of the ground-based detectors LIGO (S5 and S6) and KAGRA, while the thick horizontal salmon pink line is the forecast sensitivity of Advanced LIGO. Also shown is the sensitivity limit of the Square Kilometre Array (SKA). The green, red, blue and purple solid lines correspond to taking T_r = 1 MeV, 1 GeV, 10^4 GeV, and 10^8 GeV. The blue dashed line is the spectrum for T_r = 10^4 GeV without terminating at k_NL. It is interesting to note that even a flat primordial spectrum can lead to a spectrum of induced gravitational waves detectable by cross-correlated DECIGO.

The Models of Inflation

The spectrum of induced gravitational waves is directly proportional to the square of the primordial spectrum, so it is clear that an enhanced spectrum of induced gravitational waves requires an enhanced primordial spectrum. Since the spectrum is tightly constrained by CMB data at the pivot scale, we need to go beyond this and consider models which enhance the spectrum on small scales. Phenomenologically, two models of inflation that exhibit this property are the running mass model and the hilltop model, depicted in Fig. 6.

The Hilltop type model

Identified in Ref. as the phenomenological form necessary for PBH formation, this model has the potential: where the coupling terms are less than 1 and p < q, giving the shape in Fig. 6. Certain realisations in supergravity can be found; see for example Refs. [11,. This model is very compatible with WMAP data, and as such many of the model's terms are not ruled out, but in this analysis we add the extra requirement that the spectrum is maximised at small scales while still remaining within the PBH bound.
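The potential itself is not reproduced in this excerpt. As an illustration only, the sketch below assumes a generic hilltop form V(φ) = V₀(1 + η_p φ^p − η_q φ^q); the coupling names η_p, η_q and all numerical values are placeholders of ours, not the paper's parameters. It locates the hilltop, i.e. the point where V′(φ) = 0 and the first slow-roll parameter vanishes:

```python
def hilltop_potential(phi, v0=1.0, eta_p=0.1, eta_q=0.05, p=2, q=3):
    """Generic hilltop form V(phi) = V0 * (1 + eta_p*phi**p - eta_q*phi**q).

    All numerical values here are illustrative placeholders, not the
    parameters used in the paper.
    """
    return v0 * (1.0 + eta_p * phi ** p - eta_q * phi ** q)


def slow_roll_epsilon(phi, h=1e-6, **kw):
    """First slow-roll parameter eps = (V'/V)**2 / 2 in M_pl = 1 units,
    with V' estimated by a central finite difference."""
    v = hilltop_potential(phi, **kw)
    dv = (hilltop_potential(phi + h, **kw) - hilltop_potential(phi - h, **kw)) / (2.0 * h)
    return 0.5 * (dv / v) ** 2


# The hilltop sits where V'(phi) = 0:
#   p*eta_p*phi**(p-1) = q*eta_q*phi**(q-1)  =>  phi_max = (p*eta_p/(q*eta_q))**(1/(q-p))
phi_max = (2 * 0.1 / (3 * 0.05)) ** (1.0 / (3 - 2))
```

Inflation near φ_max is of the slow-roll "hilltop" type precisely because ε → 0 there.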
We also take the basic number of e-folds to be N = 56. Parameter selection criteria are explained in more detail in Ref..

The Running Mass Model

This model is the basic φ² model with a varying mass term that arises from the renormalisation group equations, and is given as [8, In this case we select parameters which satisfy n_s = 0.96, with runnings n'_s = 0.0039 and n'_s = 0.0043, which for T_r > 10^9 GeV are terminated at N = and N = respectively.

Results

We plot results for the hilltop model for p = 2 and q = 2.3, 3, 4, and for the running mass models which satisfy n_s = 0.96 and n'_s < 0.0062 with N ∼ 57. These are plotted in Figs. 7-12 for a range of reheat temperatures 1 GeV < T_r < 10^9 GeV. Each reheat temperature modifies the maximum allowed number of e-folds N_max, and therefore we have integrated only up to k_max = k_pivot e^{N_max}, except when k_max > k_NL, in which case we only integrate up to k_NL. In the final figure, Fig. 12, we have plotted the results of the hilltop and running mass models for a reheat temperature of T_r = 10^6 GeV. In our previous paper, Ref., we calculated the spectrum of induced gravitational waves for the running mass models with large running 0.0067 < n'_s < 0.012, which is no longer supported by the latest WMAP release. On a related note, motivating N < 37 by modifying the reheat temperature would require T_r < 1 MeV, which is unsupported by theory. We also find that for the running mass model satisfying N = 38 e-folds, the corresponding induced gravitational wave spectra are not within the sensitivity ranges of any of the experiments. However, if the k_NL cutoff can be relaxed, the spectra may well be within the ranges of SKA and PULSAR. Fig. 5. The red line represents the induced gravitational wave spectrum for T_r ≳ 10^10 GeV and N = 55 e-folds. From right to left, the black solid lines correspond to the induced gravitational wave spectrum for T_r = 10^8 GeV, T_r = 10^5 GeV, and T_r = 1 GeV.
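The cutoff k_max = k_pivot e^{N_max} used above is straightforward to evaluate. The sketch below assumes a pivot scale of k_pivot = 0.002 Mpc⁻¹ (a common WMAP-era choice, not stated in this excerpt) and simply exponentiates the quoted e-fold numbers:

```python
import math

K_PIVOT = 0.002  # Mpc^-1; an assumed WMAP-era pivot scale, not stated in this excerpt


def k_max(n_efolds, k_pivot=K_PIVOT):
    """Comoving cutoff k_max = k_pivot * exp(N_max), where N_max is the
    maximum allowed number of e-folds for the given reheat temperature."""
    return k_pivot * math.exp(n_efolds)


# For the N = 56 baseline quoted in the text:
print(f"k_max(N=56) = {k_max(56):.3e} Mpc^-1")
```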
For illustration, we include the spectrum for a reheat temperature of T_r = 10^5 GeV integrated up to the maximum scale instead of the non-linear cutoff (dashed blue line).

Figure 9. The spectra of induced gravitational waves for the hilltop model with p = 2 and q = 2.3, with k_max = k_NL. The red line represents the induced gravitational wave spectrum for T_r ≳ 10^10 GeV and N = 55 e-folds. From right to left, the black solid lines correspond to the induced gravitational wave spectrum for T_r = 10^8 GeV, T_r = 10^5 GeV, and T_r = 1 GeV. The black dashed lines and the cluster of solid lines in the right hand corner are defined in Fig. 5.

Figure 10. The spectra of induced gravitational waves for the running mass model satisfying n'_s = 0.0039 and N = 57, with k_max = k_NL. From right to left, the black solid lines correspond to the induced gravitational wave spectrum for T_r = 1 GeV, T_r = 10^5 GeV, and T_r = 10^9 GeV. The black dashed lines and the cluster of solid lines in the right hand corner are defined in Fig. 5.

Figure 11. The spectra of induced gravitational waves for the running mass model satisfying n'_s = 0.0043 and N = 57, with k_max = k_NL. From right to left, the black solid lines correspond to the induced gravitational wave spectrum for T_r = 1 GeV, T_r = 10^5 GeV, and T_r = 10^8 GeV. The black dashed lines and the cluster of solid lines in the right hand corner are defined in Fig. 5.

Figure 12. The spectra of induced gravitational waves for all the models plotted in the previous figures, assuming a reheat temperature of T_r = 10^5 GeV, with k_max = k_NL. The solid lines are hilltop with p = 2, q = 2.3 (red), p = 2, q = 3 (black) and p = 2, q = 4 (blue); the dashed lines are the running mass models with n'_s = 0.0043 (blue) and n'_s = 0.0039 (red). The black dashed lines and the cluster of solid lines in the right hand corner are defined in Fig. 5.

Discussion

In this work we assumed a sudden transition between early matter domination and radiation domination.
In the early universe, however, the Hubble rate can be of the order of the decay rate of matter to radiation. In this case modes re-entering the horizon towards the end of matter domination will also experience a decaying source term, and the effect shown here may be reduced, depending on how long the transition phase lasts, as shown in Fig. 3. The results presented here are therefore upper bounds on what the induced gravitational wave spectrum could be, with the actual spectrum possibly being an order of magnitude or two smaller than what we calculated. Our conclusions thus depend on the condition that the tensor modes generated during an early matter phase survive the transition into radiation. Under this proviso, we have shown that assuming an early matter phase with T_r = 10^5 GeV results in a spectrum of induced gravitational waves with energy densities within the range of BBO/DECIGO and cross-correlated DECIGO. Since the assumption of an early matter phase truncates the PBH bound at a smaller value of k, we have shown that the running mass model generates induced gravitational waves detectable by ground-based gravitational wave detectors such as LIGO and KAGRA. This means that the running mass model with a running of n'_s = 0.0043 and a reheat temperature of 10^8 GeV, as well as the model with n'_s = 0.0039 and T_r = 10^9 GeV, can be ruled out, since LIGO has failed to detect a gravitational wave signature.

The Si and ci terms are the sine and cosine integrals respectively, defined as Si(x) = ∫_0^x (sin t / t) dt and ci(x) = −∫_x^∞ (cos t / t) dt. Table 2. This table gives the expressions of the coefficients of the sine and cosine integrals in Eq. (A.1). Each coefficient has the same parameters as the others, but the parameters differ in their respective signs. The columns to the right of the parameters give the sign of the parameter defined in the column header.
For example, we can then read off the first coefficient as −1 + v^4 + 4v^3 + 4v^2 − 3y^4 − 4y^3 − 2y^2 v^2.
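The sine and cosine integrals appearing in the appendix are standard special functions. A minimal stdlib-only numerical sketch (simple trapezoidal quadrature; not the paper's code):

```python
import math


def si(x, n=20000):
    """Sine integral Si(x) = ∫_0^x sin(t)/t dt via the composite
    trapezoidal rule; the integrand sin(t)/t -> 1 as t -> 0."""
    h = x / n
    total = 0.5 * (1.0 + math.sin(x) / x)  # endpoint terms of the trapezoid rule
    for i in range(1, n):
        t = i * h
        total += math.sin(t) / t
    return total * h


def ci(x, n=20000, euler_gamma=0.5772156649015329):
    """Cosine integral ci(x) = γ + ln(x) + ∫_0^x (cos(t) - 1)/t dt,
    equivalent to -∫_x^∞ cos(t)/t dt for x > 0."""
    h = x / n
    total = 0.5 * (0.0 + (math.cos(x) - 1.0) / x)  # (cos t - 1)/t -> 0 as t -> 0
    for i in range(1, n):
        t = i * h
        total += (math.cos(t) - 1.0) / t
    return euler_gamma + math.log(x) + total * h
```

In production one would use a library routine (e.g. `scipy.special.sici`) instead of hand-rolled quadrature.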
Changes in antipsychotic use among patients with severe mental illness after a Food and Drug Administration advisory A 2003 Food and Drug Administration advisory warned of increased hyperlipidemia and diabetes risk for patients taking second-generation antipsychotics (SGAs). After the advisory, a professional society consensus statement provided treatment recommendations and stratified SGAs into high, intermediate, and low metabolic risk. We examine subsequent changes in incident and prevalent SGA use among individuals with severe mental illness.
Based in airy Farringdon offices, with splashes of colour everywhere - from brightly coloured rugs to huge posters featuring scantily clad women on the cover of client Nuts magazine - Resonate is every inch the creative, consumer shop.
Similarly, its managing director - Michael Frohlich - is consumer PR through and through, with a flamboyant, effervescent manner, a great sense of humour and heaps of restless energy.
But though Frohlich appears to run Resonate as a small hotshop, it has been part of Chime Communications' corporate PR behemoth The Bell Pottinger Group since its launch by Frohlich and Graham Drew in 2003.
It was nine-strong by the end of 2006, but a merger with Bell Pottinger Consumer at the beginning of the year saw staff numbers rocket from nine to 25. All staff now work under the Resonate moniker and report to Frohlich.
Frohlich played an important role in the growth of Shine, where he was managing director during a period of ‘tremendous growth', but he is diplomatic enough to avoid discussing the finer details of his time at the agency.
Similarly, the usually open and chatty Frohlich will not be drawn on whether or not he owns part of Resonate. He does say, though, that ‘word got to Lord Bell' that he was interested in setting up alone, and then Bell Pottinger offered to back him.
Fittingly, he leaves nothing to chance where his own interview is concerned. He cannot help taking a peek at the camera's screen to check the results of the photo shoot, and he prepares four sheets of A4 to ensure he has all the relevant facts and figures to hand.
The figures include the fact that Resonate has won around £750,000 ‘gross new business' so far this year, and that - he says - they would be in the top 12 of PRWeek's 2006 Top 50 consumer league table, implying the agency is above the £2.7m fee income mark.
The merger has been very successful, he says, but Resonate's ambitions go well beyond being a successfully merged entity. ‘The group vision is to have the top agency in each sector,' he says. ‘My goal is to become the top consumer PR offering in the UK.' He plans to be in the Top 10 of the PRWeek consumer league table by 2008, and eventually to employ around 100 staff.
‘I don't want it to come across that it's all about megalomania! I don't think there are many people who have the same opportunity to actually have the structure and the support to be able to grow a serious agency. I'm so grateful for the opportunity and I want people to be proud of the agency - that's what I'm driving for,' he says.
Frohlich is 35, married with two boys and lives in Pinner in Middlesex. Although he comes across as a very sociable man, and spends a lot of time at lunches and dinners with clients and journalists, he says he can be ‘an unsociable, grumpy old sod' at the weekend.
Frohlich identifies his strengths as his ‘ability to motivate people' and his ‘passion and hunger', which is just as well - such qualities will be required in abundance if he is to achieve the Bell Pottinger Group's vision. |
package Imm.AST.Expression;
import java.util.ArrayList;
import java.util.List;
import Ctx.ContextChecker;
import Exc.CTEX_EXC;
import Exc.OPT0_EXC;
import Imm.AST.SyntaxElement;
import Imm.TYPE.TYPE;
import Opt.AST.ASTOptimizer;
import Snips.CompilerDriver;
import Tools.ASTNodeVisitor;
import Util.Source;
import Util.Util;
public class StructSelect extends Expression {

	/* ---< FIELDS >--- */
	public Expression selector;

	public boolean deref;

	public Expression selection;

	/* ---< CONSTRUCTORS >--- */
	/**
	 * Default constructor.
	 * @param source See {@link #source}
	 */
	public StructSelect(Expression selector, Expression selection, boolean deref, Source source) {
		super(source);
		this.selection = selection;
		this.selector = selector;
		this.deref = deref;
	}

	/* ---< METHODS >--- */
	public void print(int d, boolean rec) {
		CompilerDriver.outs.println(Util.pad(d) + "Struct" + ((this.deref)? "Pointer" : "") + "Select");
		if (rec) {
			this.selector.print(d + this.printDepthStep, rec);
			this.selection.print(d + this.printDepthStep, rec);
		}
	}

	public TYPE check(ContextChecker ctx) throws CTEX_EXC {
		ctx.pushTrace(this);
		TYPE t = ctx.checkStructSelect(this);
		ctx.popTrace();
		return t;
	}

	public Expression opt(ASTOptimizer opt) throws OPT0_EXC {
		return opt.optStructSelect(this);
	}

	public <T extends SyntaxElement> List<T> visit(ASTNodeVisitor<T> visitor) {
		List<T> result = new ArrayList<>();

		if (visitor.visit(this))
			result.add((T) this);

		result.addAll(this.selector.visit(visitor));
		result.addAll(this.selection.visit(visitor));

		return result;
	}

	public void setContext(List<TYPE> context) throws CTEX_EXC {
		this.selector.setContext(context);
		this.selection.setContext(context);
	}

	public Expression clone() {
		StructSelect ss = new StructSelect(this.selector.clone(), this.selection.clone(), this.deref, this.getSource().clone());
		if (this.getType() != null)
			ss.setType(this.getType().clone());
		ss.copyDirectivesFrom(this);
		return ss;
	}

	public String codePrint() {
		String s = "";
		StructSelect c = this;
		while (c != null) {
			s = c.selection.codePrint() + s;

			/* Use the deref flag of the current link in the chain, not the outermost one */
			if (c.deref)
				s = "->" + s;
			else s = "." + s;

			if (c.selector instanceof StructSelect) {
				c = (StructSelect) c.selector;
			}
			else {
				s = c.selector.codePrint() + s;
				break;
			}
		}

		return s;
	}

}
|
Watershed prioritization based on soil and water hazard model using remote sensing, geographical information system and multi-criteria decision analysis approach Abstract This study applies analytical hierarchy process (AHP) based multi-criteria decision analysis (MCDA) to prioritise vulnerable areas of a watershed for soil and water conservation measures, based on an impact analysis of different soil and water hazard conservation factors. To prioritise the vulnerable areas, the Bindra watershed, which is part of Chhattisgarh State in India, was selected and divided into 16 sub-watersheds. A soil and water hazard rate index (SWHRI) was computed in a GIS environment using soil loss (SL), sediment yield (SY), sediment transport index (STI), runoff potential (RP), and land capability class (LCC). The study found that, of the 16 sub-watersheds, 3 (SW-11, SW-12, and SW-16) fall in the very high priority zone and 2 (SW-1 and SW-4) fall in the very low priority zone; soil conservation measures in the high-priority areas should therefore be implemented immediately through scientifically developed techniques.
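The AHP weighting step described in the abstract can be sketched as follows. The pairwise-comparison matrix below is purely illustrative (invented Saaty-scale judgements, not the study's data); the weights come from the principal eigenvector obtained by power iteration, and SWHRI is then a weighted sum of normalised factor scores:

```python
FACTORS = ["SL", "SY", "STI", "RP", "LCC"]

# Illustrative Saaty-scale pairwise comparisons (row factor vs column factor).
# These judgements are invented for the example, not taken from the study.
A = [
    [1.0, 2.0, 3.0, 2.0, 4.0],
    [0.5, 1.0, 2.0, 1.0, 3.0],
    [1.0 / 3.0, 0.5, 1.0, 0.5, 2.0],
    [0.5, 1.0, 2.0, 1.0, 3.0],
    [0.25, 1.0 / 3.0, 0.5, 1.0 / 3.0, 1.0],
]


def ahp_weights(matrix, iters=100):
    """AHP weight vector: principal eigenvector of the pairwise matrix,
    found by power iteration and normalised to sum to 1."""
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]
    return w


def swhri(scores, weights):
    """Composite soil and water hazard rate index: weighted sum of
    normalised factor scores (one score per factor, each in [0, 1])."""
    return sum(s * wt for s, wt in zip(scores, weights))
```

Ranking the 16 sub-watersheds then amounts to computing `swhri` for each one's normalised factor scores and sorting.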
import postgraphile from './postgraphile';
import { createPostGraphileSchema, watchPostGraphileSchema } from 'postgraphile-core';
import withPostGraphileContext from './withPostGraphileContext';
export { postgraphile, createPostGraphileSchema, watchPostGraphileSchema, withPostGraphileContext };
|