// Using an array of these makes it easy to add or remove buttons as desired.
private static class Button
{
    java.awt.Rectangle bounds; // hit-test area, in screen coordinates
    String text;               // label painted inside the bounds
    ActionListener click;      // callback invoked when the button is activated

    void performClick()
    {
        click.actionPerformed(null);
    }
}
|
"""
n,k=map(int,input().split())
s=input()
dp=[0]*(n+1)
kk=dict()
global ct
ct=0
def rec(s):
dp[len(s)]+=1
global ct
ct+=1
if(ct>k+10):
return
if(len(s)==0):
return
for i in range(0,len(s)):
st=s[:i]+s[i+1:]
if(kk.get(st)==None):
#dp[len(st)]+=1
#print(st)
kk[st]=1
rec(st)
rec(s)
print(dp)
tot=0
for i in range(0,len(dp)):
tot+=dp[i]
if(tot<k):
print(-1)
else:
sumu=0
c=0
for i in range(n,-1,-1):
req=min(k-c,dp[i])
c+=req
sumu+=(req*(n-i))
if(c>=k):
break
print(sumu)
"""
# BFS over distinct subsequences of s, longest first: deleting one character
# costs 1, so a subsequence of length L costs n - L in total.
from collections import deque

n, k = map(int, input().split())
s = input()
dp = [0] * (n + 1)   # dp[L] = number of distinct subsequences of length L found
kk = dict()          # already-enqueued strings
ct = 0
q = deque()
q.append(s)
while q:
    r = q.popleft()
    dp[len(r)] += 1
    ct += 1
    if ct > k + 10:  # we never need more than k distinct subsequences
        break
    if len(r) > 0:
        for i in range(0, len(r)):
            st = r[:i] + r[i + 1:]
            if kk.get(st) is None:
                kk[st] = 1
                q.append(st)

tot = sum(dp)
if tot < k:
    print(-1)        # fewer than k distinct subsequences exist at all
else:
    # Greedy: take the longest (cheapest) subsequences first.
    sumu = 0         # total cost
    c = 0            # subsequences taken so far
    for i in range(n, -1, -1):
        req = min(k - c, dp[i])
        c += req
        sumu += req * (n - i)
        if c >= k:
            break
    print(sumu)
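A quick sanity check of the greedy step, worked by hand under the reading that the task is to collect k distinct subsequences of s, where a subsequence of length L costs n - L deletions: for n = 4, k = 5, s = "asdf", the BFS finds one string of length 4 (cost 0) and four distinct strings of length 3 ("sdf", "adf", "asf", "asd", cost 1 each), so the greedy loop takes all five and the program prints 0 + 4*1 = 4.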
|
In vitro digestion of Pickering emulsions stabilized by soft whey protein microgel particles: influence of thermal treatment.
Emulsions stabilized by soft whey protein microgel particles have gained research interest due to their combined advantages of biocompatibility and a high degree of resistance to coalescence. We designed Pickering oil-in-water emulsions using whey protein microgels via a facile route of heat-set gel formation followed by mechanical shear, and studied the influence of heat treatment on emulsions stabilized by these particles. The aim of this study was to compare the barrier properties of the microgel particles and of heat-treated fused microgel particles at the oil-water interface in delaying the digestion of the emulsified lipids, using an in vitro digestion model. A combination of transmission electron microscopy and surface coverage measurements revealed an increased coverage of heat-treated microgel particles at the interface. The heat-induced microgel particle aggregation and, therefore, the fused network at the oil-water interface were more beneficial in delaying the rate of digestion in the presence of pure lipase and bile salts than intact whey protein microgel particles, as shown by measurements of zeta potential and free fatty acid release, plus theoretical calculations. However, simulated gastric digestion with pepsin significantly impacted such barrier effects, due to the proteolysis of the particle network at the interface irrespective of the heat treatment, as visualized using sodium dodecyl sulfate polyacrylamide gel electrophoresis measurements.
|
Students on stage listen to Bernie Sanders speak to their classmates at Roosevelt High School in Des Moines, Iowa, on January 28, 2016. (Photo: Phil Roeder)
One of the most recurrent allegations leveled at Bernie Sanders supporters is that they are young. On the surface it is difficult to imagine that accusing a supporter base of populating an age bracket could be advanced as a serious critique, but such is the frequency with which the observation is made that one has to assume that there is some popular appeal to the coupling of youth with political illegitimacy.
In reality, the disenfranchising of young people is an easy way to dismiss their grievances, a sort of crowd-directed ad hominem attack. This narrative has not only appeared in mainstream media; both Hillary and Bill Clinton have at points sought to publicly ridicule the political participation of young people.
On April 3, when given an opportunity to counter the proposition that accepting campaign contributions from the fossil fuel industry (nearly $6.9 million, according to Greenpeace) constrains her capacity to sincerely tackle the climate crisis, Hillary Clinton remarked that she “felt sorry for young people” and that they should “do their research.” On April 16, Bill Clinton jovially accused young students of wanting to “shoot every third person on Wall Street.” The implications of these comments are that young people are naive and fanatical; these comments exhibit a kind of ageist let-them-eat-cake-ery that reveals a basic misapprehension of who this demographic actually is.
The fact is many Sanders advocates are young, and this is integral to their capacity to progress a dynamic agenda. Often depicted as self-absorbed and apathetic, young people have in reality been the principal collateral damage of neoliberalism. Not only are they poor, but as a result of impossible student debt burdens, no expectation of job security, falling real wages and the constant threat of impending global economic collapse, they are the future poor. They are the most incarcerated generation in history (over 50 percent of people incarcerated in state prisons are between the ages of 20 and 30), living in a country where rising national debt levels are everywhere cited as the cause of crumbling infrastructure, and where nearly half of all young people admit to avoiding seeking medical treatment because the cost is prohibitive. For most millennials, their entire adulthood has been defined by permanent war, embedded inequality, omnipresent poverty, routine corruption and a planet that is dying as they stand upon it. As the 2016 State of the Millennial report noted in its assessment of the challenges facing young people, "48 percent of Millennials now believe that the American Dream is dead."
Compounding these deep-set infrastructural problems, voter suppression in Arizona and New York, and the lackluster response to it, has only intensified suspicions that the democratic process is neither free nor fair, and that interference in the most rudimentary exercise in political participation is commonplace. This has exacerbated a sense that somehow young people do not exist in a full enough capacity as citizens to expect meaningful participation or recognition in the electoral process. In a climate of such heightened tensions, it is indeed risky to imply that young people are petulant or reckless when they refuse to fall in line behind Hillary Clinton.
Bernie-or-busters are not inherently malevolent. On the contrary, they are acutely aware of having been muzzled and are carefully weighing up the likelihood that they can survive eight more years of the establishment regime. Menacing them with threats about how much worse it can get only crystallizes the impression they already have of a detached liberal class that remains oblivious to just how bad it is right now.
Illustrating this very point, on April 20, Prof. Robert Reich outlined the “new common ground between populist left and right” in which he points out that both constituencies are against crony capitalism, bank bailouts, Citizens United and corporate welfare. If there is any truth to Bill Clinton’s awkward dig that young Sanders advocates want to “shoot every third person on Wall Street,” it is probably worth noting that they are not the only ones.
The concerns of Sanders millennials are reflective mostly of a desire to preserve some of the world they will inherit. Refusing to take seriously their grievances reveals a willingness to ignore the terrifying enormity, perhaps even impossibility, of that task. If an account is not given for voter suppression and penance not made to restore the confidence of a disenfranchised demographic in democracy, there is no telling what the fallout will be. But if history is to be any instruction, willful disregard of the suffering of poverty without recourse to political participation usually ends in violence.
Far from being myopic and disconnected, millennials are media savvy and politically shrewd, and they are not going to be placated by the type of windy rhetoric that sated the Obama electorate. If Hillary Clinton wants to bring them to the ballot box behind her, she is going to have to do more than talk in sweeping terms about uniting a party against a common enemy: She is going to have to convincingly change her politics, and in an atmosphere of high mistrust, that may prove complicated.
|
Prediction of Dynamical Systems by Symbolic Regression
We study the modeling and prediction of dynamical systems based on conventional models derived from measurements. Such algorithms are highly desirable in situations where the underlying dynamics are hard to model from physical principles, or where simplified models need to be found. We focus on symbolic regression methods as a part of machine learning. These algorithms are capable of learning an analytically tractable model from data, a highly valuable property. Symbolic regression methods can be considered as generalized regression methods. We investigate two particular algorithms: the so-called fast function extraction, which is a generalized linear regression algorithm, and genetic programming, which is a very general method. Both are able to combine functions in a certain way such that a good model for the prediction of the temporal evolution of a dynamical system can be identified. We illustrate the algorithms by finding a prediction for the evolution of a harmonic oscillator based on measurements, by detecting an arriving front in an excitable system, and, as a real-world application, by the prediction of solar power production based on energy production observations at a given site together with the weather forecast.
I. INTRODUCTION
The prediction of the behavior of dynamical systems is of fundamental importance in all scientific disciplines. Since ancient times, philosophers and scientists have tried to formulate observational models and infer future states of such systems. Applications include topics as diverse as weather forecasting, the prediction of the motion of the planets, or the estimation of quantum evolution. The common ingredient of such systems, at least in the natural sciences, is the existence of an underlying mathematical model which can be applied as the predictor. In recent years, the use of artificial intelligence (AI) or machine learning (ML) methods has complemented the formulation of such mathematical models through the application of advanced data analysis algorithms that allow accurate estimation of the observed dynamics by learning automatically from the given observations and building models in terms of their own modeling languages. Artificial Neural Networks (ANNs) are one example of such techniques that are popularly applied to model dynamic phenomena. ANNs are structured as networks of soft weights organized in layers of so-called neurons or hidden units. One problem of ANN-type approaches is the difficult-to-interpret, black-box nature of the learnt models. Symbolic regression-based approaches, such as Genetic Programming (GP), provide alternative ML methods that have recently been gaining popularity. These methods, similar to other ML counterparts, learn models from observed data and act as good predictors of the future states of dynamical systems. Their added advantages over other methods include the interpretable nature of their learnt models and a flexible, weakly-typed modeling language that allows them to be applied to a variety of domains and problems. Undoubtedly, the methods used most often in ML are neural networks. These involve deep learning, in the sense that several layers are used and interpreted as the organization of patterns, as one imagines the human brain to work. In the present study, involving deterministic systems, we want to use a certain branch of ML, namely symbolic regression. This technique joins the classical, equation-oriented approach with its computer-scientific upstart.
In this publication we do not present any major improvements to the algorithms; rather, we demonstrate how one can apply symbolic regression to identify and predict the future state of dynamical systems. Symbolic regression algorithms work by exploring a function space, which is generally bounded by a preselected set of mathematical operators and operands (variables, constants, etc.), using a population of randomly generated candidate solutions. Each candidate solution, encoded as a tree, essentially works as a function and is evaluated based on its fitness, in other words its ability to match the observed output. These candidate solutions are evolved using a fitness-weighted selection mechanism and different recombination and variation operators. One common problem in symbolic regression is the bloating effect, which is caused by excessive lengthening of individual solutions or by the filling of the population with a large number of solutions of low fitness. In this work we use a multi-objective function evaluation mechanism to avoid this problem, including the minimization of the solution length as an explicit objective in the fitness function. Symbolic regression subsumes linear regression, generalized linear regression, and generalized additive models into a larger class of methods. Such methods have been used with success to infer equations of dynamical systems directly from data. One problem with deterministic chaotic systems is the sampling of phase space using embedding. For a high-dimensional system, this leads to prohibitively long sampling times. Typical reconstruction methods use delay coordinates and the associated differences; this results in mapping models for the observed systems. Mathematically, differential coordinates are better suited for modeling, but they are not always accessible from data. Both approaches, difference and differential embedding, have been discussed in the literature, together with numerical methods to obtain suitable differential variables from data. Modern methods like diffusion maps or local linear embedding, including the analysis of stochastic systems, circumvent the curse of dimensionality by working directly on the manifold of the dynamical system. Apart from the prediction and identification of dynamical systems, the symbolic regression approach has recently been used for the control of turbulent flow systems. Here, we demonstrate how to find the symbolic equations in a very general form, combined with subsequent automatic simplification and multiobjective optimization. This yields interpretable equations of a complexity that we can select. We use open-source Python packages for the analysis. Symbolic regression is conducted using an elastic net method provided by the fast function extraction package (FFX) for quick tests, and the more general, but usually slower, method implemented as a genetic programming algorithm (GP) based on the deap package. Subsequent simplification is obtained using sympy. Of course, any other programming framework with similar functionality will do. For a systematic study we examine numerically generated data from a harmonic oscillator as the simplest system to be predicted, and a more involved system of coupled FitzHugh-Nagumo oscillators, which are known to produce complex behaviour and may serve as a very simple model for neurons. We investigate the capacity of the ML approach to detect an incoming front of activity, and give exact equations for the regression.
We compare different sampling and spatio-temporal embedding methods and discuss the results: it is shown that a space-time embedding has advantages over time-only and space-only embedding. Our final example concerns a real-world application, the short-term and medium-term forecasting of solar power production. In principle, this could be achieved trivially by a high-resolution weather forecast and knowledge of the transfer of solar energy to solar cells, a very well-understood process. However, such a highly resolved weather forecast does not exist, because it is prohibitively expensive: even the largest meteorological computers are still unable to compute the weather on small spatial scales, let alone with a long time horizon at high accuracy. As the dynamical systems community identified a long time ago, this is mainly due to uncertainties in the initial conditions, as demonstrated by the celebrated Lorenz equations. Consequently, we follow a data-based approach and improve upon weather predictions using local energy production data as a time series. We are aware that use of the full set of weather data would improve the reported forecast, but increasing the resolution is not our interest here; rather, our interest is the proof of concept of the ML method and its applicability to real-world problems. The rest of this paper is organized as follows. In Sec. II we discuss the methods and explain our approach. This is followed by a longer section, Sec. IV, where results are presented for the above-mentioned example systems. We end the paper with a summary and conclusions, Sec. V.
II. METHODS
In the field of dynamical systems (DS), and in particular nonlinear dynamical systems, the reconstruction of the characteristics of an observed system from data has been and is a fundamental scientific topic. In this regard, one can distinguish parameter and structure identification. We first discuss the existing literature on parameter identification, which is the easier task in that there is an established mathematical framework for fitting coefficients to known curves representing experimental data, which in turn result from known dynamics. This can be conducted for linear or non-linear functions. For deterministic systems, with the advent of modern computers, quantities like fractal dimensions, Lyapunov exponents and entropies can also be computed to make systems comparable in their dynamics. These analyses further allow the rough characterization of the type and number of orbits of a DS. On the other hand, embedding techniques have been developed to reconstruct the dynamics of a high-dimensional system from lower-dimensional time series. These techniques have a number of limitations with respect to accuracy and the amount of data needed for making good predictive models. A chaotic system with positive Lyapunov exponents has a prediction horizon which depends heavily on the accuracy and precision of the data, since chaos "destroys" information. This can be seen very clearly in the shift map example. However, a system on a regular orbit, even one governed by complicated equations, might be predicted accurately. For high-dimensional systems, one needs a large amount of data to address the "curse of dimensionality". In fact, it can be shown that for each added dimension, the number of data points needed increases on a power-law basis.
Eventually, the direct inference of the underlying equations of motion from data can be approached using regression methods, like Kalman filtering, general linear models (GLM), generalized additive models (GAM), or more general schemes; see the references therein. Apart from the equations themselves, partial derivatives often have to be estimated, which is an additional problem for low-precision data. We also consider structure identification, which as mentioned above is a more complicated task. In the last 10-15 years, powerful new methods from computer science have been applied to this purpose. This includes numerous studies on diffusion maps, local linear embedding, manifold learning, support vector machines, artificial neural networks, and symbolic regression. Here, we focus on symbolic regression. It must be emphasized that most methods are not unique, and their success can only be tested based on their predictive power.
A. Symbolic Regression
One drawback of many computation-oriented methods is the lack of equations that can be analyzed mathematically in the neighborhood of analyzed trajectories. Symbolic regression is a way to produce such equations. It includes methods that identify the structure or the parameters of the searched equation, or both of them simultaneously, with respect to one or more objective functions $\Gamma_i$. This means that methods like GLM or GAM are contained in such a description. A recent implementation of GLMs is Fast Function Extraction (FFX), which is explained briefly below. Genetic programming, explained in detail below, is another intuitive method often used for symbolic regression. Here, the algorithm searches the function space through random combinations and mutations of functions, chosen from a basic set of equations. Symbolic regression is supposed to be form-free and thus unbiased towards human perception. However, human knowledge enters through the meta-rules imposed on the model, i.e. the basic building blocks and the rules for how they can be combined. Thus, the optimal model is always conditioned on the underlying meta-rules.
Genetic Programming
Genetic programming is an evolutionary algorithm to find an optimal algorithm or program. The term "programming" in optimization is used synonymously with "plan" or algorithm; it was first used in this sense by Dantzig, the inventor of linear programming, at a time when computer programs did not exist as we know them today. The algorithm seeks an optimal algorithm, in our case a function, using evolutionary, or "genetic", strategies, as explained below. The pioneering work in this field is usually attributed to Koza. We can briefly describe the approach as follows: in GP we represent formulae as expression trees, such as that shown in Fig. 1. Non-terminal nodes are filled with elements from a basic function set defined by the meta-rules. Terminal nodes consist of variables or parameters. Given the optimization problem, we seek the optimal solution $f^*$ by optimizing (minimizing or maximizing, or for some cost functionals, finding the supremum or infimum of) the fitness (or cost) functional. To find the optimal solution, GP uses a whole population of candidate solutions in parallel, which are evolved iteratively through fitness-proportionate selection, recombination and mutation operations.
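To make the expression-tree representation concrete, here is a minimal sketch in Python (our own illustration, not code from the paper; the tuple encoding and the helper names evaluate_tree, node_count and fitness are ours). It evaluates a tree on data and returns the two objectives used later, an error metric and the tree size:

import numpy as np

# A candidate solution encoded as a nested tuple, e.g.
# ("add", ("mul", "x", "x"), ("sin", "y")) represents x*x + sin(y).
# Leaves are variable names (strings) or numeric constants.
OPS = {"add": np.add, "sub": np.subtract, "mul": np.multiply,
       "sin": np.sin, "cos": np.cos}

def evaluate_tree(node, env):
    """Recursively evaluate an expression tree on arrays stored in env."""
    if isinstance(node, tuple):
        op, *args = node
        return OPS[op](*(evaluate_tree(a, env) for a in args))
    return env[node] if isinstance(node, str) else node  # variable or constant

def node_count(node):
    """Total number of nodes in the tree (the complexity objective)."""
    if isinstance(node, tuple):
        return 1 + sum(node_count(a) for a in node[1:])
    return 1

def fitness(tree, env, y):
    """Pair of objectives: normalized RMS error and tree size."""
    pred = evaluate_tree(tree, env)
    nrmse = np.sqrt(np.mean((pred - y) ** 2)) / np.ptp(y)
    return nrmse, node_count(tree)

# Example: a tree that matches the target exactly has error 0 and size 5.
x, y = np.linspace(0, 1, 50), np.linspace(0, 2, 50)
target = x * x + np.sin(y)
print(fitness(("add", ("mul", "x", "x"), ("sin", "y")), {"x": x, "y": y}, target))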
The initial generation is created randomly. Afterwards, the algorithm cycles through the following loop until it reaches its convergence or stopping criteria:
breed: based on the current generation $G_t$, a new set of alternative candidate solutions, the offspring $O_t$, is created. Several problem-dependent operators are used for this tweaking step, e.g. changing parts of a candidate solution (mutation) or combining two solutions into two new ones (crossover). These tweaking operations may include selection pressure, so that the "fitter" solutions are more likely to produce offspring.
evaluate: the offspring $O_t$ are evaluated, i.e. their fitness is calculated.
select: based on the fitness values, the members of the next generation are selected.
This scheme fits the requirements of symbolic regression. Mutation is typically conducted by replacing a random subtree with a new tree; crossover takes two trees and swaps random subtrees between them. This procedure is illustrated in Fig. 1. The fitness function uses a typical error metric, e.g. least squares or the normalized root mean-squared error. The random mutations sample the vicinity of their parent solution in function space. As a random mutation is likely to lead to a less optimal solution, mutation alone does not ensure a bias towards optimality. This is instead achieved by the selection, which ensures that favourable mutations are kept in the set while others are not considered in further iterations. By design, and when based on similar meta-rules, GP includes other algorithms like GLMs or linear programming.
FFX and the Elastic Net
Here we briefly summarize the FFX algorithm of McConaghy et al. This is a symbolic regression algorithm based on a combined generalized linear model and elastic net approach,
$$\hat y(\mathbf x) = a_0 + \sum_{i=1}^{N_B} a_i\,\phi_i(\mathbf x),$$
where $\{a_i\}$ are a set of coefficients to be determined, and $\{\phi_i\}$ are an overdetermined set of basis functions described by a heuristic, simplicity-driven set of rules (e.g. the highest allowed polynomial exponent, products, non-linear functions, ...). In the elastic net method, a least-squares criterion is used to solve the fitting problem. To avoid overfitting, i.e. a high sensitivity of the model to the training data, two regularizing terms are added: the $\ell_1$ and $\ell_2$ norms of the coefficient vector. The $\ell_1$ norm favors a sparse model (few coefficients) and simultaneously avoids large coefficients. The resulting objective reads
$$\min_{\mathbf a}\ \| y - \hat y(\mathbf x) \|_2^2 + \lambda \big( \rho\, \|\mathbf a\|_1 + (1 - \rho)\, \|\mathbf a\|_2^2 \big),$$
where $y$ are the data, $\lambda \ge 0$ is the regularization weight, and $\rho \in [0, 1]$ is the mixing between the $\ell_1$ and $\ell_2$ norms. A benefit of the regularized objective function is that it implicitly gives rise to models of different complexity, i.e. with different numbers of bases $N_B$. For large values of $\lambda$, the predicted coefficients will all be zero. Reducing $\lambda$ will result in more complicated combinations of non-zero coefficients. For every point on the $(\lambda, \rho)$-grid, the "elastic net", one can obtain a single optimal model using a standard solver like coordinate descent to determine the optimal coefficients $\mathbf a^*$. A small change in the elastic net parameters leads to a small change in $\mathbf a^*$, such that one can use the already obtained solution of a neighboring grid point to restart coordinate descent with the new parameters. For the obtained models we can calculate the normalized root mean-squared error and the model complexity (the number of basis functions used). The FFX algorithm is based purely on deterministic calculations; hence its runtime is significantly shorter than that of a comparable GP algorithm. However, the meta-rules are more stringent.
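The FFX idea can be imitated in a few lines with scikit-learn: enumerate a heuristic basis-function library, sweep the elastic net over a $(\lambda, \rho)$-grid, and record error and complexity (number of non-zero coefficients) for each fitted model. This is a rough sketch of the principle on synthetic data, not the actual FFX package; the small basis library below is an arbitrary example of such meta-rules:

import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
x1, x2 = rng.uniform(-1, 1, 200), rng.uniform(-1, 1, 200)
y = 0.5 * x1 + np.sin(x2) + 0.01 * rng.normal(size=200)

# Heuristic, simplicity-driven basis library: raw inputs, squares,
# a couple of non-linear transforms, and one product term.
names, cols = [], []
for name, v in (("x1", x1), ("x2", x2)):
    for label, col in ((name, v), (name + "^2", v ** 2),
                       ("sin(" + name + ")", np.sin(v)),
                       ("abs(" + name + ")", np.abs(v))):
        names.append(label)
        cols.append(col)
names.append("x1*x2")
cols.append(x1 * x2)
X = np.column_stack(cols)

models = []
for lam in np.logspace(-4, 0, 20):      # regularization weight (lambda)
    for rho in (0.5, 0.95):             # l1/l2 mixing (rho)
        fit = ElasticNet(alpha=lam, l1_ratio=rho, max_iter=50_000).fit(X, y)
        err = np.sqrt(np.mean((fit.predict(X) - y) ** 2)) / np.ptp(y)
        models.append((err, int(np.count_nonzero(fit.coef_)), fit))
# Each (error, complexity) pair is one point from which the Pareto front
# of non-dominated models can be extracted (see the helper further below).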
B. Multiobjective Fitness
As mentioned above, the solution of the regression problem is not unique in general. A major factor which motivates symbolic regression is its comprehensible white-box nature, as opposed to the black-box nature of, for example, neural networks. Invoking Ockham's razor (lex parsimoniae), a simple solution is considered superior to a complicated one, as it is easier to comprehend. In addition, more complicated functions are prone to overfitting. This means that complexity should be a criterion in the function search, such that more complex functions are considered less optimal. We therefore seek a solution which satisfies two objectives. Comparing solutions by more than one metric $\Gamma_i$ is not straightforward. One possible approach is to combine these metrics into a single weighted objective,
$$\Gamma = \sum_i w_i\,\Gamma_i,$$
making different candidate solutions easily comparable. The elastic net, Eq. 3, uses such a composite metric. However, it is assumed a priori that there is a linear trade-off between the individual objectives. This has three major flaws: (i) one needs to determine suitable (problem-dependent) weights $w_i$; (ii) one does not account for non-linear trade-offs (e.g. all-or-nothing in one objective); and (iii) instead of a single optimal solution there may be a set of optimal solutions defining the compromise between conflicting objectives (here, error vs. complexity). This optimal set is also called the Pareto front: the set of non-dominated candidate solutions, i.e. candidate solutions that are not worse than any other solution in the population when compared on all objectives. For the FFX algorithm explained above, one can obtain the (Pareto-)optimal set of candidate solutions by sorting the models. The mapping from parameter space to the Pareto-optimal set is called Pareto filtering. Interestingly, the concept of non-domination already partly solves the sorting problem in higher dimensions, as it maps from $\mathbb{R}^N$ to ordered one-dimensional manifolds: candidate solutions in the Pareto front are of rank 0. Similarly, one can find models of rank 1, i.e. all models that are dominated only once (in other words, the non-dominated models of the population with the original Pareto front removed). A model $f_1$ can be said to be better than a model $f_2$ if its rank is lower. To compare models of the same rank, one has to introduce an additional heuristic criterion, for which there are several choices. Usually the criterion promotes the uniqueness of a candidate solution, to ensure diversity of the population and to avoid becoming trapped in a local minimum. As the uniqueness of a solution may depend on its representation and is usually costly to compute, its projection to fitness space is often used instead. This is conducted to ensure an effective spread of candidate solutions along the Pareto front. For example, the non-dominated sorting genetic algorithm II (NSGA-II) uses a heuristic metric called crowding distance, or sparsity, to compare two models of the same rank: the scaled Euclidean distance in fitness space to the neighboring models is used to describe the uniqueness of a model. For NSGA-II, out of the current generation and its offspring, $G_t \cup O_t$, the best solutions in terms of rank and crowding distance are chosen for the next generation $G_{t+1}$. This selection method ensures elitism, i.e. the best solutions found so far are carried forward to the next generations. Looking at the high-level description in Algorithm 1, $G_t$ can be seen as an archive which keeps old members as long as they are not dominated by a new solution from the current offspring $O_t$. The different selection strategies were first studied in the context of genetic algorithms, but more recently they have been successfully applied to symbolic regression.
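Rank-0 (non-dominated) filtering itself takes only a few lines; this small helper, an illustration of ours rather than anything from the paper, extracts the Pareto front from (error, complexity) pairs such as those collected in the elastic-net sketch above:

def pareto_front(points):
    """Return the non-dominated subset (rank 0) of (error, complexity) pairs."""
    def dominates(q, p):
        # q dominates p if it is no worse in both objectives and
        # strictly better in at least one.
        return q[0] <= p[0] and q[1] <= p[1] and (q[0] < p[0] or q[1] < p[1])
    return [p for p in points if not any(dominates(q, p) for q in points)]

print(pareto_front([(0.30, 1), (0.10, 4), (0.12, 6), (0.05, 9)]))
# -> [(0.3, 1), (0.1, 4), (0.05, 9)]; (0.12, 6) is dominated by (0.10, 4)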
III. OUR GP SETUP
For all applications below, our function set is {+, −, *, /, sin, cos, exp, log, √·, (·)²}. All discontinuities are defined as zero. Our terminal set consists of the input data $x_i$ as well as symbolic constants $c_i$ which are determined during evaluation. We set up our multiple objectives as follows: the algorithm runs until the error of the most accurate model is below 0.1%, or for at most 100 generations. The population size, as well as the number of offspring per generation, is set to 500. The depth of the individuals of the initial populations varies randomly between 1 and 4; with equal probability we generate the corresponding expression trees such that each leaf may have a different depth or all leaves are forced to have the same depth. For mutation, we randomly pick a subtree and replace it with a new tree, again using the half-and-half method, with minimum size 0 and maximum size 2. Crossover is conducted by randomly picking a subtree in each of two individuals and exchanging them. Our breeding step is composed of randomly choosing two individuals from the current population, performing crossover on them with probability p = 0.5, and afterwards always mutating them. Our multiobjective cost functional has the following components: $\Gamma_1 = \mathrm{NRMSE}(y, \hat y)$, the normalized root mean-squared error of the observed data $y$ and its predictor $\hat y = f(\mathbf x)$, and $\Gamma_2$, which is simply the total number of nodes in the expression tree of $f$. Selection is conducted according to NSGA-II. In this paper, a model is called accurate if its error metric $\Gamma_1$ is small, where "small" depends on the context: for example, numerical data might be modeled accurately if $\Gamma_1 \le 0.05$, while measured data might be modeled accurately if $\Gamma_1 \le 0.20$. Similarly, a model is called complicated if its complexity $\Gamma_2$ is relatively large. "Good" and its comparatives are to be understood in the sense of these metrics. During the generation of the initial population and during selection, we force diversity by prohibiting identical solutions. It is very unlikely to randomly create identical solutions; however, offspring may be nearly identical in structure as well as fitness, and consequently a crossover between a parent and a child solution may produce an identical grandchild solution. The probability of such an event grows exponentially with the number of identical solutions in a population, which reduces the diversity of the population in the long term and risks a premature convergence of the algorithm. By prohibiting identical solutions, the population has a transient period until it reaches its maximum capacity; this also reduces the effective number of offspring per generation. This change reduces the probability of becoming trapped in a local minimum due to a steady state in the evolutionary loop. Our main emphasis is the treatment of the model parameters $c_i$. In standard implementations, such as those already mentioned, the parameters are mutated randomly, like all other nodes. Here, using modern computational power, we are able to use traditional parameter optimization algorithms; the calculation of $\Gamma_1$ thus becomes another optimization task given the current model $f_j$, namely $\Gamma_1(f_j) = \min_{\{c_i\}} \mathrm{NRMSE}\big(y, f_j(\mathbf x; \{c_i\})\big)$. The initial guess for each $c_i$ is either inherited or set to one. Thus, we effectively have two combined optimization layers. Each run is conducted using 10 restarts of the algorithm; the Pareto front is the joined front of the individual runs. Finally, we can use algebraic frameworks to simplify the obtained formulae. This is useful, since a formula (phenotype, macrostate) may be represented by many different expression trees (genotypes, microstates).
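Since the paper's GP is implemented on top of the deap package, the setup just described can be sketched roughly as below. This is a toy configuration of ours, not the authors' actual code: the data, the protected-division helper pdiv, and the reduced function set are assumptions, while the population size (500), crossover probability (0.5), the two minimized objectives and NSGA-II selection follow the numbers quoted in the text:

import math, operator, random
import numpy as np
from deap import algorithms, base, creator, gp, tools

def pdiv(a, b):  # protected division: the discontinuity is defined as zero
    return a / b if abs(b) > 1e-12 else 0.0

pset = gp.PrimitiveSet("MAIN", 2)          # two input features
pset.addPrimitive(operator.add, 2)
pset.addPrimitive(operator.sub, 2)
pset.addPrimitive(operator.mul, 2)
pset.addPrimitive(pdiv, 2)
pset.addPrimitive(math.sin, 1)
pset.addPrimitive(math.cos, 1)
pset.addEphemeralConstant("c", lambda: random.uniform(-1.0, 1.0))

creator.create("FitnessMin", base.Fitness, weights=(-1.0, -1.0))  # (NRMSE, size)
creator.create("Individual", gp.PrimitiveTree, fitness=creator.FitnessMin)

X = np.random.uniform(-1, 1, (100, 2))     # toy data: y = x1 + sin(x2)
y = X[:, 0] + np.sin(X[:, 1])

toolbox = base.Toolbox()
toolbox.register("expr", gp.genHalfAndHalf, pset=pset, min_=1, max_=4)
toolbox.register("individual", tools.initIterate, creator.Individual, toolbox.expr)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)
toolbox.register("compile", gp.compile, pset=pset)

def evaluate(ind):
    f = toolbox.compile(expr=ind)
    pred = np.array([f(*row) for row in X])
    nrmse = np.sqrt(np.mean((pred - y) ** 2)) / np.ptp(y)
    return nrmse, len(ind)                 # two objectives: error and node count

toolbox.register("evaluate", evaluate)
toolbox.register("mate", gp.cxOnePoint)    # subtree crossover
toolbox.register("expr_mut", gp.genHalfAndHalf, min_=0, max_=2)
toolbox.register("mutate", gp.mutUniform, expr=toolbox.expr_mut, pset=pset)
toolbox.register("select", tools.selNSGA2) # Pareto rank + crowding distance

pop = toolbox.population(n=500)
for ind in pop:
    ind.fitness.values = toolbox.evaluate(ind)
for gen in range(100):
    offspring = algorithms.varAnd(pop, toolbox, cxpb=0.5, mutpb=1.0)
    for ind in offspring:
        ind.fitness.values = toolbox.evaluate(ind)
    pop = toolbox.select(pop + offspring, k=500)  # elitist NSGA-II selection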
IV. CASE STUDIES
We present here results for three systems of increasing difficulty: first, we demonstrate the principles using a very simple system, the harmonic oscillator; second, we infer a predictive model for a set of coupled oscillators; and finally, we show how we can predict a very applied system, namely the power production of a solar panel. For the first two examples we use numerically produced data, where we have full control over the system, while for the demonstration of applicability we use data from a small solar power station.
A. Harmonic Oscillator
In this subsection we describe the first test of our methodology: an oscillator should be identified correctly, and an accurate prediction must be possible. Consequently, we investigate the identification of a prediction model, not necessarily using a differential formalism. This might be interpreted as finding an approximation to the solution of the underlying equation by data analysis. A deep investigation of the validity of the solution for certain classes of systems is rather mathematical and beyond the scope of this investigation. Our system reads $\dot x = \omega y$, $\dot y = -\omega x$, where $x$ and $y$ are the state variables and $\omega$ is a constant. We use the particular analytical solution $x(t) = x_0 \sin(\omega t)$, $y(t) = x_0 \cos(\omega t)$. The prediction target is $x(t + \tau)$. Since the analytical solution is a linear combination of the feature inputs, just $N = 2$ data points are needed to train the model. This holds for infinite accuracy of the data and serves as a trivial test of the method. In general, a learning algorithm is "trained" on some data and the validity of the result is tested on another set that is as independent as possible; that way, overfitting is avoided. For the same reason one needs to define a stopping criterion for the algorithm: if, e.g., the data accuracy is $10^{-5}$, it is useless and even counterproductive to run the algorithm until a root mean-squared error of $10^{-10}$ (the cost function used here) is achieved. For the example under consideration, we stop the training once the training error is smaller than 1%. Typically, a realistic scenario should include the effect of noise, e.g. in the form of measurement uncertainties. We consequently add "measurement" Gaussian noise with mean zero and amplitude proportional to the signal amplitude, $\eta_1 \sim \mathcal N(0, (\sigma x_0)^2)$, $\eta_2 \sim \mathcal N(0, (\sigma x_0)^2)$, hence $\tilde x = x + \eta_1$, $\tilde y = y + \eta_2$. The training and testing data sets were created as follows: the data are generated on a fixed time interval; out of the first half, we chose N values at random for training, and for testing purposes we use the second half. We study the parameter space spanned by the training-set size N, the noise amplitude $\sigma$ and the prediction horizon $\tau$, and average the testing errors over 10 realizations for each parameter set. In Fig. 2 we display the normalized root mean-squared error of the prediction using FFX (measured against the noisy data) as a function of the noise amplitude. Given $x(t)$ and $y(t)$, the analytical solution for the noise-free system is just a linear combination, i.e. $x(t + \tau) = \cos(\omega\tau)\,x(t) + \sin(\omega\tau)\,y(t)$, and has a complexity of two. During training we aim for an NRMSE of 1%. Thus, we find the analytical solution in the limit of small noise amplitude; see Fig. 2 and Fig. 4. Strong noise covers the signal, and thus the error saturates. The length of the analyzed data is another important parameter: typically one expects convergence of the error as $\sim 1/\sqrt{N}$ with more data. A "vertical" cut through the data in Fig. 2 is shown in Fig. 3: the training set length N has a much lower impact than the classical scaling suggests. Crucial for this scaling are the form-free structure as well as the heuristic used to select the final model. For demonstration purposes, we chose the most accurate model on the testing set, which is of course vulnerable to overfitting. The average complexity of the final model as a function of the noise amplitude is shown in Fig. 4, where we recover the three regimes of Fig. 2: for small noise, the analytical and numerical solutions agree; in the intermediate regime we find, on average, more complex models (in comparison to the analytical solution); very strong noise hides the signal and a good prediction is impossible, so the optimal solution tends to be a single constant, i.e. for high $\sigma$ the complexity tends to smaller values, as seen in Fig. 4. (Fig. 4 caption: the form-free structure allows for overfitting; for small noise the true solution is found with complexity 2, while for higher noise levels the algorithm starts to fit the noise and more terms are added, reflected by a higher complexity.) The prediction error has two components: 1) given a structure, noisy data will lead to uncertain parameters, and 2) due to the form-free nature of symbolic regression, noisy data will also lead to an uncertain structure, further increasing the uncertainty in the parameters. Thus, the final model selection has to be performed carefully, especially when dealing with noisy data. A detailed study is presented for the example of coupled oscillators.
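Because the noise-free target is exactly the linear combination $x(t+\tau) = \cos(\omega\tau)\,x(t) + \sin(\omega\tau)\,y(t)$, the coefficient recovery that FFX achieves in the small-noise limit can be reproduced in miniature with a plain least-squares fit; a sketch of ours with assumed values $\omega = 1$, $\tau = 2$, $\sigma = 0.01$:

import numpy as np

rng = np.random.default_rng(1)
omega, tau, x0, sigma = 1.0, 2.0, 1.0, 0.01   # assumed parameters
t = np.sort(rng.uniform(0.0, 20.0, 200))

# Noisy features and the (noise-free) prediction target x(t + tau).
x = x0 * np.sin(omega * t) + sigma * x0 * rng.normal(size=t.size)
y = x0 * np.cos(omega * t) + sigma * x0 * rng.normal(size=t.size)
target = x0 * np.sin(omega * (t + tau))

coef, *_ = np.linalg.lstsq(np.column_stack([x, y]), target, rcond=None)
print(coef)                                   # ~ [cos(omega*tau), sin(omega*tau)]
print(np.cos(omega * tau), np.sin(omega * tau))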
B. Coupled Oscillators
The harmonic oscillator is an easy case for our methods; we now extend the analysis by adding a spatial dimension. We study a model of FitzHugh-Nagumo oscillators on a ring. The oscillators are coupled and generate traveling pulse solutions. The model was originally derived as a simplification of the Hodgkin-Huxley model to describe spikes in axons, and nowadays serves as a paradigm for excitable dynamics. Here, its spiky behavior is used as an abstraction of a front, as observed in real-world applications like the human brain, modeled by connected neurons, or a wind power plant network, where fronts of different pressure pass through the locations of the wind power plants. The aim is to show that temporal and/or spatial information on the state of some network sites enables an increase in the predictability of a chosen site, or eventually (if there are waves in the network) the detection of the front. The model for the $i$th oscillator is
$$\dot v_i = v_i - \frac{v_i^3}{3} - w_i + I_i + D \sum_{j=1}^{N} A_{ij}\,(v_j - v_i), \qquad \dot w_i = \varepsilon\,(v_i + a - b\,w_i),$$
where $v_i$ and $w_i$, $i, j = 1, \dots, N$, denote the fast and slower state variables, $I_i$ is an external driving force, $D$ is the coupling strength parameter, and $A_{ij} \in \{0, 1\}$ describes the coupling structure between nodes $i$ and $j$. The constant parameters $\varepsilon$, $a$ and $b$ determine the dynamics of the system: $\varepsilon^{-1}$ is the time scale of the slower "recovery variable", while $a$ and $b$ set the position of the fixed point(s). For $A_{ij}$ we choose diffusive coupling on a ring, i.e. periodic boundary conditions. With the external current $I_i$ we can locally pump energy into the system to create two pulses which travel with the same speed but in opposite directions, annihilating when they meet. Using different spatio-temporal sampling strategies, the aim is to detect and predict the arrival of a spike train at a location far enough away from the excitation center (i.e. farther than the diameter of the wave train). We mark this special location with the index zero.
Note that we do not aim to find a model for a spatio-temporal differential equation, since this would involve the estimation of spatial derivatives, which in turn requires a fine sampling; that is definitely not the scope here. Rather, we focus on the more application-relevant question of making a prediction based on an equation. The construction of the data set is similar to the single-oscillator case: sensors are restricted to the $v_i$ variables. We can record the time series of $v_0$ and use time-delayed features for the prediction; another option is to use information from non-local sensors. We prepare and integrate the system as follows: we consider a ring of N = 200 oscillators. The constants are chosen as $a = 0.7$, $b = 0.8$, $\varepsilon^{-1} = 12.5$ and $D = 1$. The system is initialized with $v_i = 0$ and $w_i = -1.5$. With the characteristic function $\chi_T(x) = 1$ if $x \in T$, else $0$, we can write the space- and time-dependent perturbation as $I_i(t) = 5\,\chi_{\{t - \lfloor t \rfloor \le 0.4\}}(t)\,\chi_{\{t \le 40\}}(t)\,\chi_{\{-50, -49\}}(i)$. This periodic perturbation leads to a pair of traveling waves. The data were sampled at times $t_n = n\,\Delta t$ with $\Delta t = 0.1$. The system has multiple time scales: two are associated with the on-site FitzHugh-Nagumo oscillator ($\tau_{fast} = 1$, $\tau_{slow} = \varepsilon^{-1}$), while two more are due to the diffusive coupling (set by $D$) and the perturbation (set by $I_i(t)$ as described above). The temporal width of the pulse traveling through a particular site, $\tau_P = 8.4$ sampling units, corresponds to the full width at half maximum of the pulse. In Fig. 5 we show the evolution of the oscillator network; the state of $v_i$ is color-coded. The horizontal width of the yellow stripe corresponds to the spatial pulse width of 10.75 sites. The speed of the spike, or front, is consequently $v_{front} \sim 10.75/8.4 \approx 1.28$. An animation of this can be found in the supplemental material. The training data, i.e. the feature set, were recorded in three different ways. site-only: only $v_0$ is recorded, and the time-delayed features $v_{0,\Delta n} = v_0(t = (n - \Delta n)\Delta t)$ are included with $\Delta n\,\Delta t = -1, -2, -3, -4$. spatially extended: the current states of sites in the neighborhood of the target site are recorded. mixed: this combines the two approaches above; for each site we also include the time-delayed features. To avoid introducing additional symbols, we use state variables with double subscripts for discrete times, where the second index refers to time, and a single subscript for continuous time; the respective usage is evident from the context. We choose to predict the state two time units ahead given the data described above; in other words, the prediction target is $v_0(t_n + \tau)$ with $\tau = 20\,\Delta t \approx 2.5\,\tau_P$, corresponding to the requirement to be far enough from the excitation point. Of course, this implies a distance of $\Delta x \sim 2.5$ pulse widths. The testing and training sets were selected by using every second point of the recorded time series.
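This simulation and embedding pipeline is compact enough to sketch end to end. The snippet below, an illustration of ours rather than the authors' code, integrates the ring with scipy under the stated parameters and then builds the three feature sets by array slicing; the exact neighbor sites used for the spatial features are an assumption, guided by the models reported below:

import numpy as np
from scipy.integrate import solve_ivp

N, D, a, b, eps = 200, 1.0, 0.7, 0.8, 1.0 / 12.5

def rhs(t, state):
    v, w = state[:N], state[N:]
    lap = np.roll(v, 1) - 2.0 * v + np.roll(v, -1)   # diffusive ring coupling
    I = np.zeros(N)
    if t <= 40.0 and (t % 1.0) <= 0.4:               # pulsed drive at sites -50, -49
        I[[N - 50, N - 49]] = 5.0
    return np.concatenate([v - v**3 / 3 - w + I + D * lap,
                           eps * (v + a - b * w)])

state0 = np.concatenate([np.zeros(N), -1.5 * np.ones(N)])
sol = solve_ivp(rhs, (0.0, 200.0), state0, t_eval=np.arange(0.0, 200.0, 0.1))
v = sol.y[:N]                                        # v[i, n] = v_i(t_n)

def make_features(v, horizon=20, delays=(10, 20, 30, 40), sites=(-2, -4)):
    """site-only, spatially extended and mixed feature sets for site 0."""
    rows = range(max(delays), v.shape[1] - horizon)
    target = np.array([v[0, n + horizon] for n in rows])   # v_0(t + tau)
    site_only = np.array([[v[0, n]] + [v[0, n - d] for d in delays] for n in rows])
    spatial = np.array([[v[0, n]] + [v[s, n] for s in sites] for n in rows])
    mixed = np.hstack([site_only, spatial[:, 1:]])
    return site_only, spatial, mixed, target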
FFX Results
We first discuss the results obtained by FFX (Sec. II A 2). In Fig. 6 we display the Pareto fronts obtained with the three different feature sets on the training data. All curves have one point in common, which represents the best-fitting constant (complexity 0). As one would expect, the site-only data do not contain enough information to detect a front: even high-complexity models cannot reach an error below 4%, and the required error of 1% is never met. With the two other data sets, the algorithm has the possibility of finding a combination of spatial and temporal inputs to account for the front velocity. Note that the shape of the Pareto front strongly depends on the internal parameters of the elastic net, Eq. 3. More information should not lead to a decrease in predictability; thus, the Pareto front of a data set richer in features should dominate the corresponding Pareto front of a data set with fewer features. Counter-intuitively, with $\rho = 0.95$ the front for the mixed data set becomes non-convex, as some well-fitting models are hidden by the regularizer; thus, we can use $\rho$ to influence the shape of the front. Despite this, the most accurate model of the mixed data set is still the most accurate model overall. In the following we discuss the results for the best models for each feature set. If we take the perspective of an observer sitting at i = 0, we see the spike passing: first the state is zero, then a slow increase is observed, followed by a rapid increase and decrease around the spike maximum; eventually the state returns slowly to zero. Statistically, the algorithm is trained on long quiet times and a short, complicated spike form which is hard to model with a reduced set of state variables. This is illustrated in Fig. 7a, where for every feature set the biggest differences occur in the spike region. Apparently, the model with site-only variables shows worse results than the spatial one, and the spatio-temporal set models the passing spike best. We note that in a direct confrontation, the true and modeled signals would be hard to distinguish. In Fig. 7b we confront the time derivatives for the model built from mixed variables; the true and modeled spikes are indistinguishable by eye. The formulae of the most accurate models are shown in Table I. For site-only features, quadratic combinations of points at different times occur; this reflects the approximation of the incoming front by a quadratic term. If, however, only spatial points are used, the dynamics far away are used to predict the incoming front: neglecting the small terms, the model consists of the signal at the target site itself and at the site two units away (−2), which carries the largest weight. Physically, this means that despite being far away, the front is already felt two sites ahead of the target. Since the front is stationary in a co-moving frame, spatio-temporal embedding works best, namely sampling the spike train in space and moving in time with the train velocity; then we have a simple and compact linear dependence, as seen in the last row of Table I. Let us inspect the possible physics in the model, approximating the constants $a_0, a_1, a_2, a_3$ roughly as $0, 0.45, 0.35, 0.175$, such that $a_2 = 2 a_3$. We first notice that $\tau_P = 8.4 \approx 10$ sampling units. The last terms can then be recombined into a mean value of the state at site −2 over a time distance of approximately one typical time scale. The state at time lag −30 is at the back side of the front; together, the most important information, namely the increase and decrease of the incoming signal, is selected by the model. Alternatively, since $v(0, t) = v(-v_f\,\tau_P,\; t - \tau_P)$, the best model in Table I can be interpreted as the weighted average of the closest combination $(\Delta i, \Delta t)$ that represents the front velocity ($\Delta i/\Delta t = 4/3 \approx v_f$). This demonstrates how powerfully the algorithm selects important features.
GP Results
We again examine the Pareto-optimal models, illustrated in Fig. 8. For each feature set we obtain a non-convex Pareto front. The shapes and values of the fronts are broadly similar to the results obtained by FFX.
Because GP is an evolutionary method and relies on random breeding rules, we display averaged results: we initialize the algorithm with different seeds of the random number generator, calculate the Pareto fronts, and average the errors of the non-dominated models of the same complexity. Note that not all complexities occur on each particular front. This way, we obtain a generic Pareto front and avoid atypical models which may occur by chance. (Fig. 8 caption: for the spatially extended and mixed data sets the errors are smaller than the circle size; the models are re-evaluated on the testing set. Fragment of the tabulated site-only model: $-0.0273 + 3.34\,v_{0,0} - 2.41\,v_{0,0}v_{0,-10} - 2.09\,v_{0,-40}v_{0,-10} + \dots$) The specific models given below in the tables are not averaged, but are the best results for one specific seed. The errors of the models reachable with the different sets again decrease from site-only over spatially extended to mixed. However, the mixed model reaches almost zero error, which is quite remarkable! The difference plots for this method are given in Fig. 9. While the site-only set is not able to give a convincing model for an incoming front, the spatially extended set gives a reasonable model with little error. The mixed model is very good, with perfect coincidence of model and true dynamics; this model cannot be distinguished by eye from the observed signal. The models provided by the GP algorithm with seed 42 are given in Table II. Due to the very general character of GP, these can be overwhelming at first glance; however, we can simplify them by using computer algebra systems like sympy or Mathematica (here we use sympy). The interpretation of the GP results requires a bit more thought. In essence, they follow a logic similar to the FFX results. The site-only model is complicated, and instead of a square operator a trigonometric function is used to mimic the incoming pulse: since the data do not directly include all of the information needed, the algorithm tries to fit unphysical functions. This is clearly a non-deterministic and overfitted result, mirrored by the high complexity of the functions involved. For the spatially extended models, we obtain linear and sinusoidal components, and the model uses only three features, namely the on-site values and those at two and four units to the left of the site under consideration. Remarkably, a sinusoidal behavior with an exponential decrease is detected, which matches our intuition. Eventually, the spatio-temporal embedding yields a very simple model which approximates the front velocity $v_f$ to lie between 1 and $4/3$; the accuracy of this model is very high. Summarizing, when given enough input information, both methods find a linear model for the predictor $v_0(t + \tau)$ by finding the most suitable combination of temporal and spatial shifts to mimic the constant front velocity. (Table II: coupled spiking oscillators, method GP; formulas of the most accurate models for seed 42.) If this information is not available in the input data, nonlinear functions are used instead.
C. Solar Power Data
In this section we describe the results obtained for one-day-ahead forecasting of solar power production. The input data used for training are taken from the unisolar solar panel installation at Potsdam University, with about 30 kW installed; details can be found online. We join the solar power data with meteorological forecast data from the freely available European Centre for Medium-Range Weather Forecasts (ECMWF) portal, as well as with the actual observed weather data.
These public data are of limited quality and serve for our proof of concept with real data and all their deficiencies. The solar panel data $P(t)$ were recorded every five minutes, at geoposition 52.41° latitude, 12.98° longitude. The information about the weather can be split into two categories: the weather observations $W(t)$ of a station near the power source, and the weather forecast $\hat W(t + \tau)$, where $\tau$ is the time difference to the prediction target. We do not have weather data from the station directly, but can use data from a weather station nearby (ID: 10379). The weather forecast data are obtained every six hours for the closest publicly accessible location, 52.5° latitude and 13° longitude. Typical meteorological data contain, but are not limited to, the wind speed and direction, the pressure at different levels, the irradiation, the cloud coverage, the temperature and the humidity. In this example, however, we only use temperature and cloudiness, as well as their forecasts, as features for our model. The model is obtained by minimizing the NRMSE, with $f$ the model under consideration. Our prediction target is $\hat P(t + \tau)$ with $\tau = 24\,\mathrm h$, the one-day-ahead power production. We create our data sets with a sampling of 1 h. While additional information from the solar power data remains unused, the prediction variables have to be interpolated. The quality of the forecast depends on the quality of the weather measurement and of the weather forecast. As we use publicly available data, we can only demonstrate the procedure, and cannot attain errors as low as those of commercial products, which will be discussed elsewhere. The features of the data set are listed in Table IV; each feature is rescaled to have its minimum equal to zero and its maximum equal to one. (Fig. 10 caption: solar power study, average Pareto front obtained using GP. With increasing complexity, training and testing errors first behave similarly; then the testing error deviates strongly, indicating overfitting or a too-small testing data set, respectively. The peaks around complexity 20 are due to two reasons: there are only few models on the fronts, and one of them is an extreme outlier.) The models are trained with data from June and July of 2014; testing is conducted on August 2014. To obtain a first impression (assuming no prior knowledge), we calculate the mutual correlations of the data. The power produced the next day is heavily correlated with the predicted solar irradiation. This confirms that the physics involved is mirrored in the model and that the weather prediction is good on average. Quantitative statements on the quality of weather prediction are not easy to make; they can be found in the literature.
GP Results
Let us consider the results of our forecasting with GP, shown in Fig. 10. The Pareto fronts are shown for both the training and the testing set. As above for the coupled oscillators, we have conducted 10 runs with different seeds and display the averaged result. Of course, for the training set (filled diamonds), increasing complexity means decreasing error. We see a strong deviation for very complicated models on the testing data (filled circles). This may be an indication of a small testing sample, or may indicate overfitting. The outlier at complexity 18 is a result of the particular realization of the evolutionary optimization; with a different setting, e.g. more iterations or multiple runs, such outliers are eliminated. To clarify this question, we show the functions found by our procedure, with increasing complexity and for one specific seed, in Table IV.
From Table IV we see that GP follows a very reasonable strategy: first, it recognizes that the persistence method, with tomorrow's production being the same as today's ($x_1 = P(t)$), is a very reasonable predictor. Up to a complexity of 5, the identified models depend only on the solar power $x_1$ and describe, with increasing accuracy, the conditioned average daily profile. The more complex models include the weather data and forecast; the geometric mean of the current power and the predicted temperature is present. However, due to the low quality of the weather forecast, as well as the seasonal weather difference between training and testing data, there is no net gain in prediction quality. Without any further analysis, the model with the lowest testing error is chosen. In Fig. 11(a) we confront the real time series with the GP prediction for the model of complexity 4. One clearly finds the already mentioned conditioned average profile, which predicts the production onset a bit too early. The error distribution is shown in Fig. 11(b), where we recognize an asymmetric error distribution, with underprediction more probable than overprediction.
FFX Results
The results of the FFX method are shown in Figs. 12 and 13, and the models in Table V. As shown, the FFX method is less capable of predicting longer inactive periods, such as nights, when no solar power is produced; this is clearly visible in Fig. 13. Analyzing the equations of Table V, we notice that the best FFX function is a quadratic form, combined with max operations to keep the signal above zero. This amounts to recovering the mean shape of the signal as a quadratic function. Unfortunately, this seems almost trivial, since one could obtain this mean shape from purely geometrical considerations with a factor for the cloud coverage. Summarizing the results for the solar power curves, both methods are able to reproduce the true curve to within approximately 20%, which is reasonable for a non-optimized method. The detection of changes when a clear sky switches to a partially or fully clouded one is not entirely satisfactory, and one would need to investigate improved weather predictions for a single location. As said in the introduction, a perfect weather prediction with high resolution would render this work useless for power production forecasting (although not for other questions). Nevertheless, we note that results in the form of analytic models are highly valuable, because interpretations and further mathematical analysis are possible. (Fig. 12 caption: solar power study, Pareto front obtained using FFX. The results for FFX are as accurate as the ones obtained with GP; the testing and training sets are, however, nicely aligned. This demonstrates not only the consistency of the models, but also the lower variability of the models found.)
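The persistence baseline that GP rediscovers at low complexity, together with the NRMSE used throughout the paper, is worth writing out explicitly; a small sketch of ours, assuming an hourly production series P:

import numpy as np

def nrmse(observed, predicted):
    """Normalized root mean-squared error, the error metric of the paper."""
    return np.sqrt(np.mean((observed - predicted) ** 2)) / np.ptp(observed)

def persistence_forecast(P, horizon=24):
    """Tomorrow's production equals today's: P_hat(t + 24 h) = P(t)."""
    return P[horizon:], P[:-horizon]   # (observed, forecast) pairs

# Usage with an hourly series P of shape (n_hours,):
# observed, forecast = persistence_forecast(P)
# print(nrmse(observed, forecast))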
On the other hand, the methods stand in line with a large collection of methods from regression and classification, and one can use much of this previous knowledge. In our opinion, the multiobjective analysis is crucial to identify models to a degree such that they can be used in practice. Probably, this approach will prove very helpful if used in combination with scale analysis, e.g. by prefiltering the data on a selected spatio-temporal scale and then identifying equations for this level. We have tried to show the possible power by three examples of increasing complexity: a trivial one, the harmonic oscillator, with an almost perfect predictive power; a collection of excitable oscillators, where we demonstrated that the methods can perform a kind of multiscale analysis based on the data; and thirdly, the one-day-ahead forecasting of solar power production, where we have shown that even for messy data we can improve on the classical methods by a few percent (in NRMSE). For theoretical considerations, this might be negligible; for real world applications, a few percent might translate into a considerable advantage, since the usage of rare resources can be optimized.

Figure 13. Solar power study, method FFX: a) Time series of the predicted (P̂) and observed (P) data. We display the results of the first week of August 2015. Similar to the GP prediction, extrema are not particularly well predicted. For the linear model, even the zero values are not well hit. The reason for this is the regression to mean values and the inability of powers to stay at zero for a sufficient time. b) Histogram of the residuals ε = P − P̂. Despite different formulas, the histogram of the residuals is asymmetric around zero, with a trend to underpredict as well.

A question for further research is how we can use simplification during the GP iteration to alter the complexity. It may even be a viable choice to control the complexity growth over time, the so-called bloat, in single-objective genetic programming - a topic of ongoing interest. Additionally, we introduced an intermediate step to only allow one of many identical solutions for further evolution. One could consider expanding the idea of identical expression trees to include symmetries. We conclude that symbolic regression is very useful for the prediction of dynamical systems, based on observations only. Our future research will focus on the use of the found equations to couple the systems to other macroscopic ones (e.g. finance, in the case of wind power), and on the analysis of system stability and other fundamental properties, which is scientifically a very crucial point.
|
Synaptosome behaviour is unaffected by weak pulsed electromagnetic fields The present study examined the effect on rat cortical synaptosomes of a 2 h exposure to 50 Hz electromagnetic fields (EMFs) with a peak magnetic field of 2 mT. We measured modifications of synaptosomal mitochondrial respiration rate, ATP production, membrane potential, intrasynaptosomal Ca2+ concentration and free iron release. The O2 consumption remained unvaried in exposed synaptosomes at about 2 nM O2/min/mg proteins; ATP production was also unchanged. The intrasynaptosomal Ca2+ concentration decreased slowly and there was a slight, but nonsignificant, depolarisation of the synaptosomal membrane. Finally, the free iron release by synaptosomal suspensions, a useful predictor of neurodevelopmental outcome, remained unchanged after EMF exposure. On the whole, our results indicate that the physiological behaviour of cortical synaptosomes is not affected by weak pulsed EMFs. Bioelectromagnetics 28:477-483, 2007. © 2007 Wiley-Liss, Inc.
|
Comments on the Discussion Forum: Oromucosal immunomodulation as clinical spectrum mitigating factor in SARS-CoV-2 infection

Abstract: The mammalian lactoperoxidase system, consisting of lactoperoxidase and the H2O2-producing enzyme duox, is our first line of defence against airborne microbes. This system catalyses the production of hypoiodite and hypoiodous acid in the presence of sufficient iodine. These products are highly efficient at destroying the H1N1 virus and the respiratory syncytial virus (RSV). Japan has not been affected as much as other nations during the COVID-19 pandemic (death rate about 10% of the United States), and we think this is due to a diet high in iodine. With this in mind, we suggest four actions to prevent SARS-CoV-2 infections. First, health professionals should study the preventative effect of increasing iodine in the diets of the aged, institutionalized, diabetics and smokers. Second, the recommended daily intake (RDI) for iodine should be significantly increased, to at least double the current RDI. Governments should encourage the use and distribution of cheap iodized salts, kelp and seaweed. Third, more research should be done around the physiology and the protective effects of the lactoperoxidase system. Finally, the degradation products of the SARS-CoV-2 viral particle by hypoiodite and hypoiodous acid should be characterized; portions of the damaged particle are likely to elicit stronger immunity and better vaccines.

INTRODUCTION

We read the excellent presentation by Rodriguez-Argente and coauthors and wish to add some thoughts to the discussion. 1 The lactoperoxidase (LPO) system is actually the first line of mammalian immunological defence against airborne bacterial and viral infections, including influenza and the respiratory syncytial virus (RSV). 2,3 This is not well known, and some textbooks of immunology do not even mention this system. LPO is a haem enzyme that is extruded into human lung mucosa, nasal linings and saliva and is a critical component of innate immunity. 4 LPO is a member of a small enzyme family which includes thyroid peroxidase, important in the biosynthesis of the vital thyroid hormones T4 and T3; eosinophil peroxidase, active in mammalian defence against invading parasites; and myeloperoxidase, the key enzyme of natural killer (NK) cells, also responsible for the destructive oxidation of virus particles, bacteria and some transformed cells. 5 LPO has been studied in detail at the biochemical level, and the three-dimensional structure is known at atomic resolution. 6 LPO acts using thiocyanate (SCN−) and hydrogen peroxide (H2O2), catalysing these into the highly reactive product hypothiocyanite (OSCN−). What is not so well known is that LPO also catalyses the oxidation of iodide into the microbiocidal chemicals hypoiodite (IO−) and
hypoiodous acid (HOI). Hypoiodous acid is a weak acid but releases atomic oxygen or I+, incredibly powerful oxidizing agents. The structure of LPO complexed with the I− substrate has been solved and shows that iodide ions and H2O2 bind at many sites, including the active substrate-binding site. 6 The products OSCN−, HOI and IO− are all potent, nonspecific and general viracidal agents which are lethal to the influenza virus. 2 Most mammals also biosynthesize the detrimental compound H2O2, required for LPO function, by the duox ensemble present in lung mucous. 7 The entire duox-LPO system is compartmentalized to the lung and nasal mucous linings, saliva and tears. Nature thus restricts these enzyme activities to surfaces and fluids outside living tissues. The initial COVID-19 infection rate was surprisingly low in Japan in early 2020. Japan has not enforced a strict, nationwide lockdown despite being a densely populated country, located geographically close to the COVID-19 origin, where many people live and work in crowds. In June 2021, only about 40 000 of 125 M inhabitants had been infected, with a running total of about 15 000 deaths. Even with just 60% of the populace fully vaccinated, the mortality rate is about one-twelfth of that of the United States. 8 The typical protein source in the Japanese diet is squid and fish, and for flavouring most use kelp and seaweed, all high in iodine. Adults consume more than double the US RDA for iodine (150 μg), averaging about 413 μg/day for women and 312 μg/day for men, both well under the recommended daily upper limit of 1100 μg. 9 There is an inverse correlation between the high iodine content of the Japanese diet and the low COVID-19 infection rate, and we think this is more than a correlation: it is a causal relation. We suggest a reason to explain this inverse correlation: the viracidal activity of the protective LPO system present in human airways and lungs is enhanced by an iodine-rich diet. It has been shown that increasing the iodine concentration in mammalian airways enhances the LPO system performance and virus destruction. 10 In addition, and very importantly, both IO− and HOI are nonspecific viracidal agents; the protective activity will likely be independent of the SARS-CoV-2 type. Mutant forms are now dominating new, widespread infections, with predictions that thousands of vicious varieties are on the horizon and are very likely to complicate our future. 11 Since the median longevity of the Japanese is the highest of any industrialized nation, the long-term health effects of increasing the iodine RDI should not be serious compared to our current situation. A study of individuals in Japan who ingested much more iodine than the recommended upper limit suggested that high iodine levels can be problematic, but only for a few people with previous underlying thyroid problems.
We here propose that increasing the consumption of iodide supplements and iodine-rich foods such as kelp and seaweed, which are cheap and readily available to many billions of people, will help prevent infection. A major problem now facing humanity is that vaccines against SARS-CoV-2 infection are currently available in developed nations for the common variants, but it will take years before billions can be immunized, and annual re-vaccinations to deal with new varieties may be needed. It is estimated that at least 2 billion people worldwide suffer serious iodine insufficiency. The COVID-19 pandemic has also wreaked havoc with elderly and institutionalized people, who typically have a diet deficient in iodine. 12 Additionally, nothing is known about the age dependence of LPO activity in humans, which may disappear over time. Smokers of all ages have been hard hit by coronavirus. We have previously presented strong evidence that CO, a major gaseous constituent of tobacco smoke, binds to LPO at the active site, thus inhibiting enzyme activity. 13 Inactivation of this key enzyme correlates with the higher mortality among smokers from lung infections of all types. Unfortunately, many people will remain susceptible and will die in countries which cannot afford expensive vaccine programmes. Some countries now enforce economic shutdown with strict social distancing, but in the face of political pressure these measures cannot be sustained. Many decades ago, it was reported that povidone-iodine application to the nasopharyngeal region was beneficial against an influenza pandemic in India. 14 Recent clinical trials indicate that povidone-iodine application reduces coronavirus infections. 15 We propose a more convenient and cheaper method for containing COVID-19: the use and distribution of iodized salt, kelp and seaweed to at least double the current iodine RDI. This simple method should encourage compliance by all but the most stubborn. We realize that increasing iodine supplementation will not completely eliminate the pandemic, but it should decrease the infection and transmission rates. If our simple suggestion is followed, it may have immediate and positive results for people of all cultures. First, we hope that health professionals will study this correlation closely. Populations having diets high in iodine should display resistance against coronavirus and influenza infections in general. Studies should also be quickly performed on the preventative effect of increasing iodine in the diets of the aged and institutionalized, diabetics and smokers. Larger studies must be performed to bracket the dietary levels of iodine which, when met, allow humans to best avoid coronavirus infections. Second, we encourage health professionals to consider recommending an increase, a doubling, of the RDI, by either the use of iodized salts or the increased availability of cheap seafood, including kelp and seaweed. Small tablets are easily manufactured and distributed without the strict and expensive requirements of medical products. Such measures can be quickly instituted by many organizations, especially in low-income nations, and the positive effects should be observed immediately. Third, more research into the physiology of LPO needs to be done, including the age dependence of LPO activity and its concentrations in human airways. Like many human enzymes, there may be several isomeric forms of LPO: some with higher and some with lower activities, some expressed more than others.
Study should be made of the genetics of LPO and the duox enzymes to uncover possible multiple forms. On the positive side, the resistance of COVID-19 variants to the LPO system can be easily studied using the in vitro assay techniques already made public. 2 Health professionals can be quickly informed of the relative resistance of these mutants to iodine supplementation. There must exist a mechanism for the delivery of the LPO system components from lung tissues to the mucous lining. It is likely that special transport enzymes are involved in this activity, which should be discovered and characterized. Defects are likely to have dire consequences for the defence of some humans against airborne microbes. Fourth, the structures of LPO complexes with several iodide ions and H2O2 have been recently published, 6 so elucidation of the active LPO-HOI complex should be undertaken to better understand the actions of the protective duox-LPO-iodine ensemble. It is extremely important that the oxidation products of the SARS-CoV-2 viral particle by hypoiodite and hypoiodous acid be characterized. It is likely that many breakdown particulates make their way from the mucous lining into lung tissues, where they elicit both immune and inflammation responses. Perhaps some of these particulates could be used to develop better, more broad-spectrum and longer-lasting vaccines. The lactoperoxidase system is the first line of mammalian defence against many airborne pathogens. It is probable that increasing iodine supplementation will enable the lactoperoxidase system to more effectively destroy invading influenza viral particles, including SARS-CoV-2. Agencies and governments should encourage the use and distribution of cheap iodized salts, kelp and seaweed to this end. Studies across cultural and geographical borders should be done to better understand the protective activities of the lactoperoxidase system as a function of dietary iodine.
|
Critical roles for polymerase zeta in cellular tolerance to nitric oxide-induced DNA damage. Nitric oxide (NO), a signal transmitter involved in inflammation and regulation of smooth muscle and neurons, seems to cause mutagenesis, but its mechanisms have remained elusive. To gain insight into NO-induced genotoxicity, we analyzed the effect of NO on a panel of chicken DT40 clones deficient in DNA repair pathways, including base and nucleotide excision repair, double-strand break repair, and translesion DNA synthesis (TLS). Our results show that cells deficient in Rev1 and Rev3, a subunit essential for DNA polymerase zeta (Polζ), are hypersensitive to killing by two chemical NO donors, spermine NONOate and S-nitroso-N-acetyl-penicillamine. Mitotic chromosomal analysis indicates that the hypersensitivity is caused by a significant increase in the level of induced chromosomal breaks. The data reveal the critical role of TLS polymerases in cellular tolerance to NO-induced DNA damage and suggest the contribution of these error-prone polymerases to the accumulation of single base substitutions.
|
Adipose tissue foam cells are present in human obesity. CONTEXT Adipose tissue macrophages (ATMs) are thought to engulf the remains of dead adipocytes in obesity, potentially resulting in increased intracellular neutral lipid content. Lipid-laden macrophages (foam cells, FCs) have been described in atherosclerotic lesions and have been proposed to contribute to vascular pathophysiology, which is enhanced in obesity. OBJECTIVE The objective of this study was to determine whether a subclass of lipid-laden ATMs (adipose FCs) develop in obesity and to assess whether they may uniquely contribute to obesity-associated morbidity. SETTING AND PATIENTS Patients undergoing elective abdominal surgery from the Beer-Sheva (N = 94) and the Leipzig (N = 40) complementary cohorts were recruited. Paired abdominal subcutaneous (SC) and omental (Om) fat biopsy samples were collected and analyzed by histological and flow cytometry-based methods. Functional studies in mice included coculture of ATMs or FCs with adipose tissue. RESULTS ATM lipid content was increased 3-fold in Om compared with SC fat, particularly in obese persons. FCs could be identified in some patients and were most abundant in Om fat of obese persons, particularly those with intra-abdominal fat distribution. Stepwise multivariate models demonstrated depot differential associations: fasting glucose with SC FCs (β = 0.667, P < .001) and fasting insulin (β = 0.413, P = .006) and total ATM count (β = 0.310, P = .034) with Om FCs in models including age, body mass index, high-density lipoprotein cholesterol, and high-sensitivity C-reactive protein. When cocultured with adipose explants from lean mice, FCs induced attenuated insulin responsiveness compared with adipose explants cocultured with control ATMs with low lipid content. CONCLUSIONS FCs can be identified as an ATM subclass in human SC and Om adipose tissues in 2 independent cohorts, with distinct depot-related associations with clinical parameters. Once formed, they may engage in local cross-talk with adipocytes, contributing to adipose insulin resistance.
|
from os import path
import sys

# Add the project's src directory (and this test directory) to the import
# path so the modules under test can be imported when run directly.
modulePath = path.abspath(path.join(path.dirname(__file__), '../src'))
sys.path.append(modulePath)
sys.path.append(path.abspath(path.dirname(__file__)))
|
July 8, 2016, 9:04 AM GMT / Updated July 8, 2016, 9:04 AM GMT / Source: Reuters
SEOUL – The United States and South Korea said Friday they had decided to deploy an advanced missile defense system to counter North Korea's missile threat.
The Terminal High Altitude Area Defense (THAAD) anti-missile system will be deployed solely to counter the threat from the North, the South's Defence Ministry and the U.S. Defense Department said in a joint statement.
China reacted angrily, lodging protests with the American and South Korean ambassadors. It has supported sanctions on North Korea but objects to the THAAD deployment because the system’s radar can reach into its territory.
South Korean Defense Ministry's Deputy Minister Yoo Jeh-seung shakes hands with the commander of U.S. Forces Korea's Eighth Army Lieutenant General Thomas Vandal Friday. STRINGER / Reuters
South Korea said it aims for a deployment "soon". The Yonhap news agency said the system was expected to be in operation by the end of 2017 at the latest, citing the South's defense ministry.
"South Korea and the United States made an alliance decision to deploy THAAD to USFK as a defensive measure to ensure the security of the South and its people, and to protect alliance military forces from North Korea's weapons of mass destruction and ballistic missile threats," the joint statement said.
USFK stands for U.S. Forces Korea, which includes 28,500 U.S. troops based in South Korea.
"When the THAAD system is deployed to the Korean Peninsula, it will be focused solely on North Korean nuclear and missile threats and would not be directed towards any third party nations," the statement said.
A joint South Korea-U.S. working group is preparing to determine the best location for deploying THAAD, according to the joint statement.
A US Department of Defense handout photo shows two THAAD interceptors and a Standard-Missile 3 Block IA missile during a test in 2013. HANDOUT / AFP - Getty Images
The move comes after a North Korean rocket launch in February put an object into space orbit. That was condemned by the U.N. Security Council as a test of a long-range missile in disguise, which the North is prohibited from doing under several Security Council resolutions.
North Korea rejects the ban, saying it is an infringement on its sovereignty and its right to space exploration.
|
// Copyright (C) 2016 <NAME> <<EMAIL>>. All rights reserved.
// Use of this source code is governed by the MIT license,
// which can be found in the LICENSE file.
package takuzu
// This file contains the takuzu validation functions and methods.
// checkRange returns true if the range is completely defined, and an error
// if it doesn't follow the rules for a takuzu line or column
// Note that the boolean might be invalid if the error is not nil.
func checkRange(cells []Cell) (bool, error) {
full := true
size := len(cells)
counters := []int{0, 0}
var prevCell Cell
var prevCellCount int
for _, c := range cells {
if !c.Defined {
full = false
prevCell.Defined = false
prevCellCount = 0
continue
}
counters[c.Value]++
if prevCellCount == 0 {
prevCellCount = 1
} else {
if c.Value == prevCell.Value {
prevCellCount++
if prevCellCount > 2 {
v := c.Value
return full, validationError{
ErrorType: ErrorTooManyAdjacentValues,
CellValue: &v,
}
}
} else {
prevCellCount = 1
}
}
prevCell = c
}
if counters[0] > size/2 {
v := 0
return full, validationError{
ErrorType: ErrorTooManyValues,
CellValue: &v,
}
}
if counters[1] > size/2 {
v := 1
return full, validationError{
ErrorType: ErrorTooManyValues,
CellValue: &v,
}
}
return full, nil
}
// CheckRangeCounts returns true if all cells of the provided range are defined,
// as well as the number of 0s and the number of 1s in the range.
func CheckRangeCounts(cells []Cell) (full bool, n0, n1 int) {
counters := []int{0, 0}
full = true
for _, c := range cells {
if c.Defined {
counters[c.Value]++
} else {
full = false
}
}
return full, counters[0], counters[1]
}
// CheckLine returns an error if the line i fails validation
func (b Takuzu) CheckLine(i int) error {
_, err := checkRange(b.GetLine(i))
return err
}
// CheckColumn returns an error if the column i fails validation
func (b Takuzu) CheckColumn(i int) error {
_, err := checkRange(b.GetColumn(i))
return err
}
// Validate checks a whole board for errors (not completeness)
// Returns true if all cells are defined.
func (b Takuzu) Validate() (bool, error) {
finished := true
computeVal := func(cells []Cell) (val int) {
for i := 0; i < len(cells); i++ {
val += cells[i].Value * 1 << uint(i)
}
return
}
lineVals := make(map[int]bool)
colVals := make(map[int]bool)
for i := 0; i < b.Size; i++ {
var d []Cell
var full bool
var err error
// Let's check line i
d = b.GetLine(i)
full, err = checkRange(d)
if err != nil {
err := err.(validationError)
err.LineNumber = &i
return false, err
}
if full {
hv := computeVal(d)
if lineVals[hv] {
err := validationError{
ErrorType: ErrorDuplicate,
LineNumber: &i,
}
return false, err
}
lineVals[hv] = true
} else {
finished = false
}
// Let's check column i
d = b.GetColumn(i)
full, err = checkRange(d)
if err != nil {
err := err.(validationError)
err.ColumnNumber = &i
return false, err
}
if full {
hv := computeVal(d)
if colVals[hv] {
err := validationError{
ErrorType: ErrorDuplicate,
ColumnNumber: &i,
}
return false, err
}
colVals[hv] = true
} else {
finished = false
}
}
return finished, nil
}
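// Example usage (illustrative only; assumes a Takuzu value with the Size,
// GetLine and GetColumn accessors used above, e.g. from a hypothetical
// NewTakuzu constructor defined elsewhere in this package):
//
//	b := NewTakuzu(4)
//	if err := b.CheckLine(0); err != nil {
//	    fmt.Println("line 0 invalid:", err)
//	}
//	full, err := b.Validate()
//	if err != nil {
//	    fmt.Println("board breaks a takuzu rule:", err)
//	} else if full {
//	    fmt.Println("board is complete and valid")
//	}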
|
The present invention relates to optical devices and, more specifically, to optical devices that enable control of dispersion in optical communication systems.
The demand for greater bandwidth in optical communications is driving the fiber-optic telecommunications industry to explore technologies to achieve faster transmission speeds and increased capacity. The increase in bandwidth, however, is limited by a number of fundamental factors such as attenuation, noise and dispersion.1,2 In particular, dispersion is problematic because it distorts and/or broadens the optical pulses used to carry information through the optical communication system, thereby leading to data transmission error, especially in long haul and/or high speed systems.
Various attempts have been made to control or counteract dispersion in optical communication networks. For example, dispersion compensating fibers (DCFs) are available from companies such as Lucent Technologies/OFS and Corning to provide a negative dispersion across a specific operating band.1,2 However, since DCFs provide essentially constant negative dispersion, DCFs are generally useful only for dispersion correction at one wavelength at a time. That is, a series of DCFs are needed to control dispersion over the full range of wavelengths used in the optical communication system. Therefore, dispersion compensation solutions based on DCFs tend to be complicated and expensive.
Another approach to dispersion control is the use of fiber Bragg gratings.1,2 A fiber Bragg grating includes a chirped Bragg grating or a number of Bragg gratings designed to reflect different wavelengths all formed in a length of fiber so as to provide dispersion compensation on input light. Like DCFs, however, fiber Bragg gratings are limited in the range of wavelengths over which they are effective. Therefore, several gratings are needed to provide dispersion compensation over the optical communication wavelength range. Fiber Bragg gratings can also induce dispersion ripple, which leads to undesirable distortion of the optical signals.
Still other dispersion compensation schemes involve the use of all-pass filters.3-6 All-pass filters are optical filters designed to provide phase compensation without affecting the amplitude of input light.3 For example, in U.S. Pat. No. 6,289,151 B1, Kazarinov et al. (hereinafter, Kazarinov) describes an all-pass filter based on a number of ring resonators in a plurality of feedback loops. The all-pass filter of Kazarinov compensates for optical signal dispersion by applying a frequency-dependent time delay to portions of the optical signal in the feedback loops. The frequency-dependent time delay is provided by cascaded or series ring resonators, each of the ring resonators having a different phase. One problem with the all-pass filter of Kazarinov, however, is that a plurality of ring resonators and couplers are needed to provide dispersion compensation over the optical communication bandwidth. In addition, high manufacturing tolerances are required to ensure balanced performance of the device in compensating the dispersion of optical signals over a range of frequencies.
As another example of an all-pass filter, J. Ip in U.S. Pat. No. 5,557,468 (hereinafter, Ip) discloses a dispersion compensation device based on a reflective Fabry-Perot etalon.7 The all-pass filter of Ip includes a Fabry-Perot etalon including two reflectors. Each reflector includes a single uniform reflectance value that is different from the reflectance value of the other reflector so as to provide an input port and a separate output port for monitoring, for example, the frequency of the signal output of the all-pass filter. Again, the range of frequencies over which the all-pass filter of Ip is effective remains limited. Ip suggests the use of two or more Fabry-Perot etalons with dissimilar reflectivity characteristics and offset center frequency response, but it is submitted that the manufacturing tolerances for such a multi-stage cascaded device make the device impractical.
A Fabry-Perot etalon including a 100% reflectance mirror as one of its reflectors (also known as a Gires-Tournois interferometer) is also used as an all-pass filter. However, since the Fabry-Perot etalon generally provides an output in the form of a series of Gaussian peaks, it is difficult to manufacture a single-stage Gires-Tournois interferometer exhibiting the desired phase response over a desired range of wavelengths.
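To illustrate why such a single-stage device has a strongly wavelength-dependent phase response, the following numerical sketch (our illustration, not part of the patent; all values and the sign convention are assumptions) computes the reflected phase and group delay of an ideal Gires-Tournois etalon with front-mirror amplitude reflectivity r and a perfect back mirror:

import numpy as np

c = 3.0e8                 # speed of light (m/s)
n, d = 1.5, 1.0e-3        # assumed cavity index and thickness
r = 0.6                   # assumed front-mirror amplitude reflectivity

f = np.linspace(193.0e12, 193.2e12, 2001)            # optical frequency (Hz)
delta = 4 * np.pi * n * d * f / c                    # round-trip phase
r_gt = (-r + np.exp(-1j * delta)) / (1 - r * np.exp(-1j * delta))

phase = np.unwrap(np.angle(r_gt))                    # |r_gt| = 1: pure phase (all-pass)
omega = 2 * np.pi * f
group_delay = -np.gradient(phase, omega)             # tau_g = -dphi/domega, in seconds

The group delay is strongly peaked near the etalon resonances, which is exactly the narrow-band behaviour that makes broadband dispersion compensation with a single stage difficult.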
Still another example of an all-pass filter for dispersion compensation is a thin film-based coupled cavity all-pass (CCAP) filter as discussed, for example, by Jablonski et al.8 The CCAP filter of Jablonski et al. is essentially a series of interference filters cascaded together. The CCAP filter of Jablonski et al. is similar to the aforedescribed Kazarinov approach in that the CCAP filter consists of two or more cavities disposed between reflectors and cascaded together to form a single filter. The thin film-based CCAP filter includes a plurality of alternating low index and high index thin films designed to form a stack of reflector sections separated by low index "cavity" sections. The thin film configuration allows the device to be compact compared to the use of a series of adjacent Fabry-Perot filters. However, the design of the thin film-based CCAP filter including more than two cavities is submitted to be mathematically problematic and, further, since the number of materials available for use as the low index and high index materials is limited, the filter is difficult to implement as a practical device.
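The behaviour of such thin-film stacks is conventionally analysed with the transfer-matrix method. As a minimal sketch (our illustration, not the patent's or Jablonski's method; indices and thicknesses are assumptions), the complex amplitude reflectance of a stack at normal incidence can be computed as:

import numpy as np

def layer_matrix(n, d, lam):
    # characteristic matrix of one homogeneous dielectric layer
    delta = 2 * np.pi * n * d / lam        # phase thickness
    return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                     [1j * n * np.sin(delta), np.cos(delta)]])

def stack_reflectance(layers, n_in, n_out, lam):
    # layers: list of (refractive_index, physical_thickness) tuples
    M = np.eye(2)
    for n, d in layers:
        M = M @ layer_matrix(n, d, lam)
    B, C = M @ np.array([1.0, n_out])
    return (n_in * B - C) / (n_in * B + C)  # complex amplitude reflectance

# illustrative quarter-wave stack of 8 high/low index pairs at 1550 nm
lam0 = 1.55e-6
layers = [(2.1, lam0 / (4 * 2.1)), (1.45, lam0 / (4 * 1.45))] * 8
r = stack_reflectance(layers, 1.0, 1.45, lam0)

Sweeping lam around lam0 and taking np.angle of the result gives the phase response whose design difficulty for more than two cavities is discussed above.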
The present invention provides an optical device for dispersion compensation which serves to reduce or eliminate the foregoing problems in a highly advantageous and heretofore unseen way and which provides still further advantages.
1. K. Slocum et al., "Dispersion Compensators," Wit SoundView Corp. Report, May 29, 2001.
2. J. Jungjohann et al., "Will Dispersion Kill Next Generation 40 Gigabit Networks?" CIBC World Markets Equity Research, Jun. 19, 2001.
3. R. Kazarinov et al., "All-Pass Optical Filters," U.S. Pat. No. 6,289,151 B1, issued Sep. 11, 2001.
4. G. Lenz et al., "Optical Communication System including Broadband All-Pass Filter for Dispersion Compensation," U.S. Pat. No. 6,259,847 B1, issued Jul. 10, 2001.
5. C. K. Madsen et al., "Integrated Optical Allpass Filters for Dispersion Compensation," OSA TOPS vol. 29, WDM Components, pp. 142-149.
6. G. Lenz et al., "Optical Filter Dispersion in WDM Systems: A Review," OSA TOPS vol. 29, WDM Components, pp. 246-253.
7. J. Ip, "Chromatic Dispersion Compensation Device," U.S. Pat. No. 5,557,468, issued Sep. 17, 1996.
8. M. Jablonski et al., "The Realization of All-Pass Filters for Third-Order Dispersion Compensation in Ultrafast Optical Fiber Transmission Systems," Journal of Lightwave Technology, vol. 19, no. 8, pp. 1194-1205, August 2001.
As will be disclosed in more detail hereinafter, there is disclosed herein an optical device for receiving input light and for acting on the input light to produce output light. The optical device includes a first reflector and a second reflector supported in a spaced-apart, confronting relationship with the first reflector such that the input light received by the optical device, at least potentially, undergoes multiple reflections between the first and second reflectors. At least a selected one of the first and second reflectors is configured to subject each one of a plurality of different portions of the input light to one of a plurality of different reflectance values to produce an emitted light passing through at least the selected reflector in a way which is combinable to generate the output light.
In another aspect of the invention, there is disclosed a dispersion compensation module including the aforedescribed optical device.
In still another aspect of the invention, a method for use in an optical device for receiving input light and for acting on the input light to produce output light is disclosed. The method includes the steps of supporting a first reflector and a second reflector in a spaced-apart, confronting relationship and configuring the first and second reflectors such that the input light received by the optical device, at least potentially, undergoes multiple reflections between the first and second reflectors. The method also includes the step of configuring at least a selected one of the reflectors to include a plurality of different reflectance values. The method further includes the step of subjecting a plurality of different portions of the input light, during the multiple reflections, to a plurality of different reflectance values at the selected one of the reflectors to produce an emitted light passing through the selected reflector in a way which is combinable to generate the output light.
|
Prevalence of hepatitis delta virus infection in Malaysia. The prevalence of coinfection, superinfection and chronic infection with the hepatitis delta virus (HDV) was studied in 324 hepatitis B surface antigen (HBsAg)-positive Malaysians. Of these, 10.0% (5/50) had coinfection, 5.7% (11/194) had superinfection, but none of the 80 patients with chronic liver disease (CLD) or primary hepatocellular carcinoma (PHC) had chronic infection with HDV. The overall HDV infection was 4.9% (16/324). One of the coinfection cases acquired the HDV infection as early as 1982. HDV superinfection was detected mainly among IV drug abusers (20% or 7/35) and promiscuous males and females (13.6% or 3/22). They were all asymptomatic. Only 0.8% (1/125) apparently healthy blood donors was infected with HDV. None of the 12 multi-transfused patients examined were positive. Malaysia is the only Southeast Asian country examined so far in which HDV infection was detected. The reason could be that the IV drug abusers and the sexually promiscuous groups missed being examined in the other countries. Comparing the HDV infection rates in 4 categories of infected Malaysians (viz. acute hepatitis B patients, IV drug abusers, blood donors and CLD patients) with those of other countries, it was noted that the Malaysian rates were similar to the lowest in the range of prevalence rates of each category in the latter group. The rate of coinfection in a preliminary study in 1982-84 (9.0% or 1/11) was not very different from that obtained to date (10.0% or 5/50).(ABSTRACT TRUNCATED AT 250 WORDS)
|
#pragma once

// The include path below is an assumption; adjust it to wherever the
// project defines the Scene base class.
#include "Scene.h"

// A forward declaration is enough for the pointer member below.
class ParticleSystem;

class ParticleToolScene : public Scene
{
private:
ParticleSystem* particleSystem;
public:
ParticleToolScene();
~ParticleToolScene();
virtual void Update() override;
virtual void PreRender() override;
virtual void Render() override;
virtual void PostRender() override;
virtual void GUIRender() override;
};
|
Burbank High School had a 41 percent opt-out rate on this year's state standardized tests. That's much higher than the statewide percentage.
California State University management and the union that represents faculty say they may announce a settlement on Friday to their contract dispute.
Cal State campuses are preparing for the education and safety logistics of a five-day faculty strike. Campuses say they'll be open for business.
Lawyers who sued Compton Unified last year say childhood trauma experts are helping school officials craft reforms for the entire school district.
The lead plaintiff in the case, Rebecca Friedrichs, is a local teacher who says she plans to continue the fight.
The non-binding report acknowledges the Great Recession "severely impacted" CSU, but it recommends a 5-percent raise for faculty over two years.
Officials made changes this year aimed at making Smarter Balanced tests more accommodating for the 300,000 special education students who sit for the exams.
Teen students train shelter dogs to help the dogs get adopted. The dogs help the students cope with the trauma of living in one of the most violent parts of L.A.
Faculty point to data they gathered that suggests all but three of the 23 campuses have ratios significantly higher than what’s recommended by experts.
California State University campuses are now using Smarter Balanced test scores to ensure incoming students are ready for college-level work.
Scores were low last year on California's standardized tests of English and math. This year, educators will be looking closely for any improvement.
Inglewood Unified has a new state administrator running the schools. Balancing the budget and turning around dropping student enrollment are his key tasks.
Teachers in Compton have called in sick twice in one week as their union and the school district are at odds over a pay increase.
Student hunger and homelessness is growing at the California State University campuses. Some campuses do a lot more than others to help.
A proposal by a state senator would freeze tuition for California State University students who pledge to finish their undergraduate degree in four years.
|
import { AsInputs } from '@pulumi-utils/sdk';
import { PipelineProps } from '../pipeline';
import { CustomResource, Input, Output, ID, CustomResourceOptions, Inputs, output } from '@pulumi/pulumi';
import { IntegrationRef, TriggerCondition, Variable } from '../common';
import { Integration } from '../integration';
export interface RunDockerContainerState {
project_name: string;
pipeline_id: number;
/**
* The name of the Docker image.
*/
docker_image_name: string;
/**
* The tag of the Docker image.
*/
docker_image_tag: string;
/**
* The commands that will be executed.
*/
inline_commands: string;
/**
* The name of the action.
*/
name: string;
/**
* Specifies when the action should be executed. Can be one of `ON_EVERY_EXECUTION`, `ON_FAILURE` or `ON_BACK_TO_SUCCESS`. The default value is `ON_EVERY_EXECUTION`.
*/
trigger_time: 'ON_EVERY_EXECUTION' | 'ON_FAILURE' | 'ON_BACK_TO_SUCCESS';
/**
* The numerical ID of the action, after which this action should be added.
*/
after_action_id?: number;
/**
* When set to `true` the action is disabled. By default it is set to `false`.
*/
disabled?: boolean;
/**
* The ID of the action which built the desired Docker image. If set to 0, the image will be taken from previous pipeline action. Can be used instead of `docker_build_action_name`.
*/
docker_build_action_id?: number;
/**
* The name of the action which built the desired Docker image. Can be used instead of `docker_build_action_id`.
*/
docker_build_action_name?: string;
/**
* Default command to execute at runtime. Overwrites the default entrypoint set by the image.
*/
entrypoint?: string;
/**
* Defines the export path of the container’s filesystem as a tar archive.
*/
export_container_path?: string;
/**
* If set to `true` the execution will proceed, mark action as a warning and jump to the next action. Doesn't apply to deployment actions.
*/
ignore_errors?: boolean;
/**
* The integration. Required for using the image from the Amazon ECR, Google GCR and Docker Hub.
*/
integration?: IntegrationRef | Integration;
/**
* The username required to connect to a private registry.
*/
login?: string;
/**
* Defines whether or not to mount the filesystem to the running container.
*/
mount_filesystem_disable?: boolean;
/**
* The password required to connect to a private registry.
*/
password?: string;
/**
* The name of the Amazon S3 region. Required for using the image from the Amazon ECR. The full list of regions is available here.
*/
region?: string;
/**
* The url to the Docker registry or GCR. Required for Google GCR.
*/
registry?: string;
/**
* Number of retries if the action fails.
*/
retry_count?: number;
/**
* Delay time between auto retries in minutes.
*/
retry_delay?: number;
/**
* All build commands are run as the default user defined in the selected Docker image. Can be set to another username (on the condition that this user exists in the selected image).
*/
run_as_user?: string;
/**
* When set to `true`, the subsequent action defined in the pipeline will run in parallel to the current action.
*/
run_next_parallel?: boolean;
/**
* Defines whether the action should be executed on each failure. Restricted to and required if the `trigger_time` is `ON_FAILURE`.
*/
run_only_on_first_failure?: boolean;
/**
* The timeout in seconds.
*/
timeout?: number;
/**
* The list of trigger conditions to meet so that the action can be triggered.
*/
trigger_conditions?: TriggerCondition[];
/**
* If set to `true` the Docker image will be taken from action defined by `docker_build_action_id`.
*/
use_image_from_action?: boolean;
/**
     * The list of variables you can use in the action.
*/
variables?: Variable[];
/**
* The path preceding the colon is the filesystem path (the folder from the filesystem to be mounted in the container). The path after the colon is the container path (the path in the container, where this filesystem will be located).
*/
volume_mappings?: string[];
}
export type RunDockerContainerArgs = AsInputs<RunDockerContainerState>;
export interface RunDockerContainerProps {
url: string;
html_url: string;
action_id: number;
docker_image_name: string;
docker_image_tag: string;
inline_commands: string;
name: string;
trigger_time: 'ON_EVERY_EXECUTION' | 'ON_FAILURE' | 'ON_BACK_TO_SUCCESS';
type: 'RUN_DOCKER_CONTAINER';
after_action_id?: number;
disabled?: boolean;
docker_build_action_id?: number;
docker_build_action_name?: string;
entrypoint?: string;
export_container_path?: string;
ignore_errors?: boolean;
integration?: IntegrationRef | Integration;
login?: string;
mount_filesystem_disable?: boolean;
    password?: string;
region?: string;
registry?: string;
retry_count?: number;
retry_delay?: number;
run_as_user?: string;
run_next_parallel?: boolean;
run_only_on_first_failure?: boolean;
timeout?: number;
trigger_conditions?: TriggerCondition[];
use_image_from_action?: boolean;
variables?: Variable[];
volume_mappings?: string[];
pipeline: PipelineProps;
project_name: string;
pipeline_id: number;
}
/**
* Required scopes in Buddy API: `WORKSPACE`, `EXECUTION_MANAGE`, `EXECUTION_INFO`
*/
export class RunDockerContainer extends CustomResource {
static __pulumiType = 'buddy:action:RunDockerContainer';
static get(name: string, id: Input<ID>, state?: Partial<RunDockerContainerState>, opts?: CustomResourceOptions) {
return new RunDockerContainer(name, state as any, { ...opts, id });
}
static isInstance(obj: any): obj is RunDockerContainer {
if (null == obj) {
return false;
}
return obj['__pulumiType'] === RunDockerContainer.__pulumiType;
}
project_name!: Output<string>;
pipeline_id!: Output<number>;
action_id!: Output<number>;
docker_image_name!: Output<string>;
docker_image_tag!: Output<string>;
inline_commands!: Output<string>;
name!: Output<string>;
trigger_time!: Output<'ON_EVERY_EXECUTION' | 'ON_FAILURE' | 'ON_BACK_TO_SUCCESS'>;
type!: Output<'RUN_DOCKER_CONTAINER'>;
after_action_id!: Output<number | undefined>;
disabled!: Output<boolean | undefined>;
docker_build_action_id!: Output<number | undefined>;
docker_build_action_name!: Output<string | undefined>;
entrypoint!: Output<string | undefined>;
export_container_path!: Output<string | undefined>;
ignore_errors!: Output<boolean | undefined>;
integration!: Output<IntegrationRef | Integration | undefined>;
login!: Output<string | undefined>;
mount_filesystem_disable!: Output<boolean | undefined>;
password!: Output<string | undefined>;
region!: Output<string | undefined>;
registry!: Output<string | undefined>;
retry_count!: Output<number | undefined>;
retry_delay!: Output<number | undefined>;
run_as_user!: Output<string | undefined>;
run_next_parallel!: Output<boolean | undefined>;
run_only_on_first_failure!: Output<boolean | undefined>;
timeout!: Output<number | undefined>;
trigger_conditions!: Output<TriggerCondition[] | undefined>;
use_image_from_action!: Output<boolean | undefined>;
variables!: Output<Variable[] | undefined>;
volume_mappings!: Output<string[] | undefined>;
constructor(name: string, argsOrState: RunDockerContainerArgs | RunDockerContainerState, opts?: CustomResourceOptions) {
const inputs: Inputs = {};
if (!opts) {
opts = {};
}
if (opts.id) {
const state = argsOrState as RunDockerContainerState | undefined;
inputs['project_name'] = state?.project_name;
inputs['pipeline_id'] = state?.pipeline_id;
inputs['docker_image_name'] = state?.docker_image_name;
inputs['docker_image_tag'] = state?.docker_image_tag;
inputs['inline_commands'] = state?.inline_commands;
inputs['name'] = state?.name;
inputs['trigger_time'] = state?.trigger_time;
inputs['after_action_id'] = state?.after_action_id;
inputs['disabled'] = state?.disabled;
inputs['docker_build_action_id'] = state?.docker_build_action_id;
inputs['docker_build_action_name'] = state?.docker_build_action_name;
inputs['entrypoint'] = state?.entrypoint;
inputs['export_container_path'] = state?.export_container_path;
inputs['ignore_errors'] = state?.ignore_errors;
inputs['integration'] = state?.integration instanceof Integration ? { hash_id: state.integration.hash_id } : state?.integration;
inputs['login'] = state?.login;
inputs['mount_filesystem_disable'] = state?.mount_filesystem_disable;
inputs['password'] = state?.password;
inputs['region'] = state?.region;
inputs['registry'] = state?.registry;
inputs['retry_count'] = state?.retry_count;
inputs['retry_delay'] = state?.retry_delay;
inputs['run_as_user'] = state?.run_as_user;
inputs['run_next_parallel'] = state?.run_next_parallel;
inputs['run_only_on_first_failure'] = state?.run_only_on_first_failure;
inputs['timeout'] = state?.timeout;
inputs['trigger_conditions'] = state?.trigger_conditions;
inputs['use_image_from_action'] = state?.use_image_from_action;
inputs['variables'] = state?.variables;
inputs['volume_mappings'] = state?.volume_mappings;
} else {
const args = argsOrState as RunDockerContainerArgs | undefined;
if (!args?.project_name) {
throw new Error('Missing required property "project_name"');
}
if (!args?.pipeline_id) {
throw new Error('Missing required property "pipeline_id"');
}
if (!args?.docker_image_name) {
throw new Error('Missing required property "docker_image_name"');
}
if (!args?.docker_image_tag) {
throw new Error('Missing required property "docker_image_tag"');
}
if (!args?.inline_commands) {
throw new Error('Missing required property "inline_commands"');
}
if (!args?.name) {
throw new Error('Missing required property "name"');
}
if (!args?.trigger_time) {
throw new Error('Missing required property "trigger_time"');
}
inputs['docker_image_name'] = args.docker_image_name;
inputs['docker_image_tag'] = args.docker_image_tag;
inputs['inline_commands'] = args.inline_commands;
inputs['name'] = args.name;
inputs['trigger_time'] = args.trigger_time;
inputs['after_action_id'] = args.after_action_id;
inputs['disabled'] = args.disabled;
inputs['docker_build_action_id'] = args.docker_build_action_id;
inputs['docker_build_action_name'] = args.docker_build_action_name;
inputs['entrypoint'] = args.entrypoint;
inputs['export_container_path'] = args.export_container_path;
inputs['ignore_errors'] = args.ignore_errors;
inputs['integration'] = output(args.integration as Output<IntegrationRef | Integration>).apply(integration =>
integration instanceof Integration ? { hash_id: integration.hash_id } : integration
);
inputs['login'] = args.login;
inputs['mount_filesystem_disable'] = args.mount_filesystem_disable;
inputs['password'] = args.password;
inputs['region'] = args.region;
inputs['registry'] = args.registry;
inputs['retry_count'] = args.retry_count;
inputs['retry_delay'] = args.retry_delay;
inputs['run_as_user'] = args.run_as_user;
inputs['run_next_parallel'] = args.run_next_parallel;
inputs['run_only_on_first_failure'] = args.run_only_on_first_failure;
inputs['timeout'] = args.timeout;
inputs['trigger_conditions'] = args.trigger_conditions;
inputs['use_image_from_action'] = args.use_image_from_action;
inputs['variables'] = args.variables;
inputs['volume_mappings'] = args.volume_mappings;
inputs['project_name'] = args.project_name;
inputs['pipeline_id'] = args.pipeline_id;
}
if (!opts.version) {
opts.version = require('../package').version;
}
opts.ignoreChanges = ['project_name', 'pipeline_id', ...(opts.ignoreChanges || [])];
inputs['type'] = 'RUN_DOCKER_CONTAINER';
inputs['url'] = undefined;
inputs['html_url'] = undefined;
inputs['action_id'] = undefined;
super(RunDockerContainer.__pulumiType, name, inputs, opts);
}
}
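// Illustrative usage sketch (the values and the surrounding program are
// assumptions, not part of this file):
//
// const runTests = new RunDockerContainer('run-tests', {
//     project_name: 'my-project',
//     pipeline_id: 12345,
//     docker_image_name: 'library/node',
//     docker_image_tag: '18',
//     inline_commands: 'npm ci && npm test',
//     name: 'Run tests in a container',
//     trigger_time: 'ON_EVERY_EXECUTION'
// });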
|
When Mark Zuckerberg created Facebook in his Harvard dorm room, he didn’t need to ask Comcast, Verizon, or other internet service providers to add Facebook to their networks. He also didn’t have to pay these companies extra fees to ensure that Facebook would work as well as the websites of established companies. As soon as he created the Facebook website, it was automatically available from any internet-connected computer in the world.
This aspect of the internet is network neutrality. And a lot of network neutrality supporters fear it's in danger. President Obama pledged to support net neutrality on the campaign trail in 2007, and this week he unveiled a detailed net neutrality proposal. But authority over internet regulation doesn't rest with Obama, it rests with the Federal Communications Commission, an independent agency chaired by former cable industry lobbyist Tom Wheeler. And according to a Tuesday story in the Washington Post, Wheeler has signaled that he won't be adopting Obama's proposal. Instead, he's looking for a way to "split the baby," finding a way to protect internet openness without declaring broadband to be a public utility. That might look something like his May proposal, which would let broadband companies treat content differently provided it did so in a "commercially reasonable" fashion.

Yet the prospect of weak network neutrality regulations isn't even the biggest threat to a level playing field online. The internet itself is changing in ways that threaten to make the conventional net neutrality debate almost irrelevant. Earlier this year, Netflix agreed to pay first Comcast and then Verizon for private connections directly to their respective networks. Netflix signed these deals under protest, charging that it had been coerced to pay "tolls" just to deliver content to its own customers. That might sound like a net neutrality violation, but the practice doesn't actually run afoul of the network neutrality rules advocates have been pushing for the last decade. Those rules ban "fast lanes" for content that arrives over the internet backbone, the shared information super highway that carries the bulk of the internet traffic today. But what Netflix paid Comcast and Verizon for amounts to a new, private highway just for Netflix traffic. Conventional network neutrality rules don't regulate this kind of deal.

These private connections are going to be increasingly important to the American internet in the coming years. That might force net neutrality proponents to go back to the drawing board. Otherwise they might win the battle for net neutrality and still lose the war for a level playing field on the internet.

The problem with fast lanes

It's a typical weekday evening and you and your neighbors are all using the internet in various ways. You're watching Netflix videos, playing World of Warcraft, checking email, downloading podcasts, and reading cardstacks on Vox.com. The information required to display all this content is sent from servers all over the world. But it quickly finds its way to your internet service provider, the company that provides you and your neighbors with home internet access. Internet usage is particularly heavy this evening, and your ISP doesn't have the capacity to handle all the data you and your neighbors are downloading. So your neighbor's World of Warcraft game starts to stutter. Another's House of Cards episode freezes up and starts buffering. Your Skype video chat to your sister becomes pixelated and jerky. A digital traffic jam is ruining everyone's internet experience.
Some people think a "fast lane" arrangement is the solution to this problem. Some applications are more affected by congestion than others. Some applications are more valuable to users than others. So maybe the network ought to give top priority to applications that need it the most. And why shouldn't your ISP use the same method as FedEx to decide who gets the fastest delivery? Applications that need faster delivery can pay extra for it. The network might look like this:
Here, MyFlix has paid AT&Tcast to give its content priority over the content of its competitors. MyFlix customers get an excellent experience, but using YouBook or FaceTube might not be as pleasant.

But this isn't how the internet works right now. For the most part, internet connections work on a first-come, first-served basis, with no one's packets getting special treatment. And net neutrality supporters think that's a good thing.

There are several arguments for this neutral internet model. One is simplicity. There are thousands of networks around the world. The miracle of the internet is that anyone can set up a web server, anywhere in the world, and instantly reach everyone else, no matter where they are or what network they're using. But if broadband providers started dividing their networks up into fast lanes and slow lanes, things could get more complicated. To get satisfactory service for your website, you might have to negotiate fast-lane agreements with thousands of ISPs all over the world. Companies that didn't have the money — or the manpower — to do that would be at a competitive disadvantage.

There's also a danger that large internet service providers will abuse their monopoly power. Most of the leading American broadband companies also sell paid television services that compete directly with online streaming services such as Netflix and Amazon Instant Video. Network owners might be tempted to relegate online video services to the slow lane to prevent them from becoming a competitive threat to their lucrative paid television businesses. Or they might charge competing services a big markup for access to the fast lane, ensuring that they won't be able to undercut them on price.

A final problem is that a multi-tiered business model could give ISPs perverse incentives. An ISP might be tempted to make its slow lane slower — or at least not upgrade it very quickly — to encourage content companies to pony up for fast-lane status.

At root, all of these arguments are about ensuring that the internet remains a fertile ground for new innovations. When Steve Chen, Chad Hurley, and Jawed Karim invented YouTube in 2005, they didn't have to negotiate special fast-lane contracts with ISPs around the world. They also didn't have to worry that incumbent broadband providers would view them as a threat to their cable services and relegate them to the slow lane — or demand fast-lane fees so high they couldn't afford to pay them. YouTube could compete with much larger companies on a level playing field. Network neutrality advocates want to make sure it stays that way.

The changing internet

When an ISP receives the bulk of its traffic through one big transit provider, as in the figure above, network neutrality is relatively easy to define. It just means that the ISP needs to handle the packets it receives over that big pipe on a first-come, first-served basis. That's how things worked when Tim Wu, an academic who's now a law professor at Columbia, coined the term network neutrality in 2002. ISPs purchased connectivity from a handful of companies that operated long-distance networks known as the internet's backbone. Companies that provided this service, known in industry lingo as "transit," acted as middle-men, carrying data between ISPs and content providers.

But the internet's structure is changing. Both residential ISPs and content companies have been growing larger and more sophisticated.
And increasingly, they are cutting out the middlemen. Instead of relying on transit providers to carry traffic between them, they're connecting to each other directly. As a result, the internet increasingly looks like this:
|
/*
* Copyright (C) Research In Motion Limited 2010. All rights reserved.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Library General Public
* License as published by the Free Software Foundation; either
* version 2 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Library General Public License for more details.
*
* You should have received a copy of the GNU Library General Public License
* along with this library; see the file COPYING.LIB. If not, write to
* the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor,
* Boston, MA 02110-1301, USA.
*/
#include "config.h"
#include "core/layout/svg/SVGResourcesCache.h"
#include "core/HTMLNames.h"
#include "core/layout/svg/LayoutSVGResourceContainer.h"
#include "core/layout/svg/SVGResources.h"
#include "core/layout/svg/SVGResourcesCycleSolver.h"
#include "core/svg/SVGDocumentExtensions.h"
namespace blink {
SVGResourcesCache::SVGResourcesCache()
{
}
SVGResourcesCache::~SVGResourcesCache()
{
}
void SVGResourcesCache::addResourcesFromLayoutObject(LayoutObject* object, const ComputedStyle& style)
{
ASSERT(object);
ASSERT(!m_cache.contains(object));
const SVGComputedStyle& svgStyle = style.svgStyle();
// Build a list of all resources associated with the passed LayoutObject
OwnPtr<SVGResources> newResources = SVGResources::buildResources(object, svgStyle);
if (!newResources)
return;
// Put object in cache.
SVGResources* resources = m_cache.set(object, newResources.release()).storedValue->value.get();
// Run cycle-detection _afterwards_, so self-references can be caught as well.
SVGResourcesCycleSolver solver(object, resources);
solver.resolveCycles();
// Walk resources and register the layout object with each resource.
HashSet<LayoutSVGResourceContainer*> resourceSet;
resources->buildSetOfResources(resourceSet);
for (auto* resourceContainer : resourceSet)
resourceContainer->addClient(object);
}
void SVGResourcesCache::removeResourcesFromLayoutObject(LayoutObject* object)
{
OwnPtr<SVGResources> resources = m_cache.take(object);
if (!resources)
return;
// Walk resources and unregister the layout object from each resource.
HashSet<LayoutSVGResourceContainer*> resourceSet;
resources->buildSetOfResources(resourceSet);
for (auto* resourceContainer : resourceSet)
resourceContainer->removeClient(object);
}
static inline SVGResourcesCache* resourcesCacheFromLayoutObject(const LayoutObject* layoutObject)
{
Document& document = layoutObject->document();
SVGDocumentExtensions& extensions = document.accessSVGExtensions();
SVGResourcesCache* cache = extensions.resourcesCache();
ASSERT(cache);
return cache;
}
SVGResources* SVGResourcesCache::cachedResourcesForLayoutObject(const LayoutObject* layoutObject)
{
ASSERT(layoutObject);
return resourcesCacheFromLayoutObject(layoutObject)->m_cache.get(layoutObject);
}
void SVGResourcesCache::clientLayoutChanged(LayoutObject* object)
{
SVGResources* resources = SVGResourcesCache::cachedResourcesForLayoutObject(object);
if (!resources)
return;
// Invalidate the resources if either the LayoutObject itself changed,
// or we have filter resources, which could depend on the layout of children.
if (object->selfNeedsLayout() || resources->filter())
resources->removeClientFromCache(object);
}
static inline bool layoutObjectCanHaveResources(LayoutObject* layoutObject)
{
ASSERT(layoutObject);
return layoutObject->node() && layoutObject->node()->isSVGElement() && !layoutObject->isSVGInlineText();
}
void SVGResourcesCache::clientStyleChanged(LayoutObject* layoutObject, StyleDifference diff, const ComputedStyle& newStyle)
{
ASSERT(layoutObject);
ASSERT(layoutObject->node());
ASSERT(layoutObject->node()->isSVGElement());
if (!diff.hasDifference() || !layoutObject->parent())
return;
// In this case the proper SVGFE*Element will decide whether the modified CSS properties require a relayout or paintInvalidation.
if (layoutObject->isSVGResourceFilterPrimitive() && !diff.needsLayout())
return;
// Dynamic changes of CSS properties like 'clip-path' may require us to recompute the associated resources for a layoutObject.
// FIXME: Avoid passing in a useless StyleDifference, but instead compare oldStyle/newStyle to see which resources changed
// to be able to selectively rebuild individual resources, instead of all of them.
if (layoutObjectCanHaveResources(layoutObject)) {
SVGResourcesCache* cache = resourcesCacheFromLayoutObject(layoutObject);
cache->removeResourcesFromLayoutObject(layoutObject);
cache->addResourcesFromLayoutObject(layoutObject, newStyle);
}
LayoutSVGResourceContainer::markForLayoutAndParentResourceInvalidation(layoutObject, false);
}
void SVGResourcesCache::clientWasAddedToTree(LayoutObject* layoutObject, const ComputedStyle& newStyle)
{
if (!layoutObject->node())
return;
LayoutSVGResourceContainer::markForLayoutAndParentResourceInvalidation(layoutObject, false);
if (!layoutObjectCanHaveResources(layoutObject))
return;
SVGResourcesCache* cache = resourcesCacheFromLayoutObject(layoutObject);
cache->addResourcesFromLayoutObject(layoutObject, newStyle);
}
void SVGResourcesCache::clientWillBeRemovedFromTree(LayoutObject* layoutObject)
{
if (!layoutObject->node())
return;
LayoutSVGResourceContainer::markForLayoutAndParentResourceInvalidation(layoutObject, false);
if (!layoutObjectCanHaveResources(layoutObject))
return;
SVGResourcesCache* cache = resourcesCacheFromLayoutObject(layoutObject);
cache->removeResourcesFromLayoutObject(layoutObject);
}
void SVGResourcesCache::clientDestroyed(LayoutObject* layoutObject)
{
ASSERT(layoutObject);
SVGResources* resources = SVGResourcesCache::cachedResourcesForLayoutObject(layoutObject);
if (resources)
resources->removeClientFromCache(layoutObject);
SVGResourcesCache* cache = resourcesCacheFromLayoutObject(layoutObject);
cache->removeResourcesFromLayoutObject(layoutObject);
}
void SVGResourcesCache::resourceDestroyed(LayoutSVGResourceContainer* resource)
{
ASSERT(resource);
SVGResourcesCache* cache = resourcesCacheFromLayoutObject(resource);
// The resource itself may have clients, that need to be notified.
cache->removeResourcesFromLayoutObject(resource);
for (auto& objectResources : cache->m_cache) {
objectResources.value->resourceDestroyed(resource);
// Mark users of destroyed resources as pending resolution based on the id of the old resource.
Element* resourceElement = resource->element();
Element* clientElement = toElement(objectResources.key->node());
SVGDocumentExtensions& extensions = clientElement->document().accessSVGExtensions();
extensions.addPendingResource(resourceElement->fastGetAttribute(HTMLNames::idAttr), clientElement);
}
}
}
|
A. Field of Invention
This invention relates generally to a rate-responsive pacemaker, and, in particular, to a rate-responsive pacemaker in which the rate control parameter is minute volume and the sensing is accomplished by dual chamber unipolar leads.
B. Description of the Prior Art
U.S. Pat. No. 4,702,253, entitled "Metabolic-Demand Pacemaker and Method of Using the Same to Determine Minute Volume", which issued Oct. 27, 1987, to Nappholz et al., discloses a rate-responsive pacemaker which employs minute volume as a rate control parameter. Minute volume is a measure of the amount of air inspired by a person as a function of time. The greater the amount of air inspired, the greater the need for a higher heart pacing rate. The pacemaker of the aforesaid patent (hereinafter called "the '253 pacemaker") measured minute volume by providing a three-electrode lead which employs one electrode referenced to a pacemaker case to sense heart signals and pace the patient's heart in the conventional manner and which employs the remaining two electrodes to perform the minute volume measurement. The two electrodes for measuring minute volume were located in the superior vena cava blood vessel and/or in a cardiac chamber in the vicinity of the patient's pleural cavity. The '253 pacemaker periodically applied current pulses between one of the electrodes and the pacemaker case, and measured the voltage which resulted from the applied current between the other electrode and the pacemaker case. The measured voltage was a function of the blood impedance in the vessels in and around the pleural cavity which, in turn, depended upon the pleural pressure. The '253 pacemaker determined the minute volume by monitoring the variation in the impedance measurement.
One problem with the '253 pacemaker is that it requires a lead having at least three electrodes while the industry has standardized unipolar (single electrode) and bipolar (dual electrode) leads. There are many patients with implanted old pacemaker systems having unipolar and bipolar leads, and if a three-electrode lead is required for a new pacemaker, then the new pacemaker cannot readily replace an old pacemaker and use the old leads. Furthermore, there are physicians who like the feel of the existing, standard leads they have been using in the past, and one factor which weighs against implanting a rate-responsive pacemaker might be that it requires a new lead type having a new feel.
U.S. Pat. No. 4,901,725, entitled "Minute Volume Rate-Responsive Pacemaker", which issued Feb. 20, 1990, to Nappholz et al., disclosed an improved minute volume rate-responsive pacemaker (hereinafter called "the '725 pacemaker") which can be used with a conventional bipolar lead. This bipolar lead had two electrodes for sensing and pacing the heart. In the '725 pacemaker, the standard ring electrode was used additionally to apply a current which flows to the pacemaker case. The tip electrode was used to measure the blood impedance between the tip and the case in response to the current pulse applied through the ring electrode. The '725 pacemaker utilized the measured blood impedance to derive an appropriate pacing rate.
Although the '725 pacemaker used a bipolar lead which is standard in cardiac pacing, it had a limitation in that this pacemaker could not be used in the many patients who had previously implanted unipolar leads. Unipolar leads, which have a single tip electrode, are also standard in the art of cardiac pacing. If a bipolar lead is required when a patient has a new pacemaker implanted, then a non-rate-responsive pacemaker that is connected to a unipolar lead cannot be replaced by a rate-responsive pacemaker embodying the invention of the '725 pacemaker simply by exchanging pacemakers and using the same lead.
Previous attempts have been made to perform minute volume rate-responsive pacing in a pacemaker using unipolar leads. These attempts failed, primarily because the blood impedance signal measured from unipolar leads was too weak in comparison to system noise and other unwanted signals present. Additionally, U.S. Pat. No. 5,201,808 (hereinafter called "the '808 pacemaker") entitled "Minute Volume Rate-Responsive Pacemaker Employing Impedance Sensing on a Unipolar Lead," which issued Apr. 13, 1993, to Steinhaus et al., discloses calculating minute volume through the use of one unipolar lead. The '808 pacemaker did not measure impedance between the tip electrode and the case as was previously done in prior art devices; instead, the impedance measurement was taken between the input of the lead from the pacemaker and the pacemaker case. The pacemaker therefore senses the impedance of the body tissues along the entire length of the lead by providing a high-frequency excitation to the lead. Consequently, the pacemaker requires very complex circuitry.
Accordingly, it is desirable to develop a minute volume rate-responsive pacemaker device for use with conventional unipolar leads that does not significantly increase the complexity of the circuitry involved, and does not increase the cost of manufacture of the device.
|
Dynamic Trilateral Game Model for Attack Graph Security Game

Internal threats have a huge impact on the attack graph security game: defence measures of the MTD model can fail because internal users hold certain permissions. A dynamic trilateral game model was proposed to extend the original two-party game model. By materializing internal threats, the uncertainty of the two-party game model, previously expressed as a probability equation used by the players in the state-observation process, was eliminated, and the relationship between the offensive and defensive sides became indirect. A user strategy based on a mixed-strategy game model was proposed to increase the coupling between stealth attacks and internal threats. The income matrix was dynamically constructed to measure the behavioural outcomes of users and attackers, and user behavioural references were obtained through dynamic programming. For the defender, the heuristic strategy in the model reduces the complexity of the parties' behaviour through random sampling. Experiments were carried out on the attack graph model under various game settings. Compared with the two-party game model, our model's experimental results showed that cyber security risk was reduced by 17.9% and 18.8% on the strongly and weakly structured attack graphs, respectively.

Introduction

Existing frameworks do not provide a good defence strategy against the possible attack situations of attackers. Relevant work in this field is as follows. Attack graph technology can be used to analyze the vulnerabilities existing in the target network. Research on attack graphs divides into three aspects: attack graph construction, attack graph analysis, and target model construction. Target model construction mainly concerns modelling the attacker; attack graph construction builds a corresponding attack graph for the target network topology; and the main direction of attack graph analysis is to analyze the security of the network based on the attack graph. The difficulty in generating a security policy is the multi-step nature of malicious attacks. Specific information about internal vulnerabilities, which the attacker cannot obtain for lack of permissions, may be exposed by internal users. In this regard, relevant scholars have proposed a number of solutions. Attack graph analysis research divides into static-strategy and dynamic-strategy research. However, frameworks based on static defence strategies cannot adapt to real-time network environments that undergo major changes. In order to cope with the dynamic changes of information systems, integrating MTD (Moving Target Defence) into attack graph technology is a very effective method. The main idea is to improve the attack resistance of information systems by constructing, evaluating, and deploying a variety of continuously transforming mechanisms and strategies; existing research has explored such mechanisms and combined them with advanced models. However, the above models share a common problem: they ignore the influence of users in the offensive-defensive game. Some scholars used an uncertain variable to represent the unknown effects that may exist in a two-party game, effects caused by a third party participating in the game. The results obtained in this way were not accurate.
In summary, what hinders further development of defence strategies is that the game framework is not comprehensive enough to obtain accurate results. At the same time, traditional multi-party game models cannot adapt to game problems in the security field, so it is very important to propose a new security game framework.

Dynamic Trilateral Game Model for Bayesian Attack Graph

The multi-step nature of attack behaviour brings many difficulties for security protection. In order to accurately calculate the possibility of nodes being attacked in the network, this section extends the definition of the Bayesian attack graph.

Bayesian attack graph: a Bayesian attack graph is a directed acyclic graph, denoted as G = (V, E, γ, p), where:
1. V is a non-empty set of nodes, the set of all vulnerable nodes of the system.
2. E is the set of all edges in the attack graph, indicating the associations between vulnerable nodes. An edge e(u,v) ∈ E is the attack path from node u to node v; pa(v) = {u | e(u,v) ∈ E} denotes the set of parent nodes of v, and ch(v) = {w | e(v,w) ∈ E} denotes the set of child nodes of v.
3. γ(v) ∈ {∧, ∨} represents the node type for each node v ∈ V, where ∧ marks "and"-type nodes and ∨ marks "or"-type nodes.
4. p(e(u,v)) ∈ [0, 1] represents the probability of successfully attacking node v from node u.

The state of the dynamic attack graph changes as the discrete time metric changes. Between t = 0 and t = T, each game subject determines its behaviour at each moment by observing the state of the attack graph at the previous moment, and the behaviours of these subjects jointly determine the state of the attack graph at the next moment. The state of the dynamic attack graph can be represented by the following tuple:
1. t ∈ {0, ..., T}, the time of the attack graph state;
2. S_t = (A_t, D_t, U_t), the strategy information of all game subjects at this moment;
3. SA_t, the security situation value of the attack graph at time t.

The behavioural constraints of each strategic decision maker are as follows. Attacker constraint: at time t, the attacker strategy set is defined as A_t. First, if the attacker intends to add a node v that is currently defended to the set A_t, node v is moved out of A_t. Second, a node v that is neither activated nor a child of an activated node cannot be added to A_t. Defender constraint: at time t, the defender strategy set is defined as D_t. First, if the defender intends to add an already-compromised node v to the set D_t, this defence is invalid and node v is removed from D_t. Second, a node v already contained in the other strategy sets cannot be added to D_t. User constraint: at time t, the user strategy set is defined as U_t; a node v already contained in the other strategy sets cannot be added to U_t. At time t, every strategic decision maker can add only one node to its own strategy set.

Game State Transition Model

The three-party game can be seen as a dynamic non-cooperative, non-zero-sum game with complete information among the three roles. The attacker's behaviour is to spread the threat to inactive nodes, the defender's behaviour is that the administrator fixes the vulnerability on a system node, and the user's behaviour is to gain revenue by exposing system information. The parties in the game dynamically select their own behaviour according to the state of the system and the behavioural strategies of the other parties. A minimal sketch of these state structures is given below.
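The following Python sketch, with illustrative class and field names not taken from the paper, makes the extended attack graph G = (V, E, γ, p) and the per-step game state (t, S_t = (A_t, D_t, U_t), SA_t) concrete.

from dataclasses import dataclass, field
from enum import Enum

class NodeType(Enum):
    AND = "and"   # activated only after all parents are active
    OR = "or"     # activated by any single active parent

@dataclass
class AttackGraph:
    nodes: set        # V: vulnerable nodes
    edges: dict       # E: (u, v) -> p(e(u,v)) in [0, 1]
    node_type: dict   # gamma: v -> NodeType

    def parents(self, v):
        return {u for (u, w) in self.edges if w == v}

    def children(self, v):
        return {w for (u, w) in self.edges if u == v}

@dataclass
class GameState:
    t: int                                        # time step in {0, ..., T}
    attacker: set = field(default_factory=set)    # A_t
    defender: set = field(default_factory=set)    # D_t
    user: set = field(default_factory=set)        # U_t
    security_value: float = 0.0                   # SA_t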
The system transitions to the next state according to the combined influence of each party's behaviour.

Attacker Strategy Based On Equilibrium Value Propagation

The vulnerability nodes in the network topology interact with each other. The attacker is a rational decision-maker who focuses only on exploiting the target vulnerability nodes and expects to profit from them, while other nodes are not part of the attacker's intention. Therefore, the value metric of each vulnerability node is only related to the target nodes. Since the attacker's strategy is constrained by the activated nodes, the attacker can only attack the child nodes of activated nodes. An "or" node can be activated directly, while an "and" node can only be activated after all of its parent nodes have been activated. The attack candidate node set S is constructed accordingly from the children of the activated nodes.

Attack value calculation: for each node in the attack graph, its value is always related to the target nodes. At any moment, the attacker always prefers to select, from the candidate nodes, those that are most beneficial for reaching the targets. Nguyen et al. consider the value of a non-target node to be indirect value, but take the node value to be the highest value over any single target node, without taking into account that every target node has an impact. Based on the above considerations, this research improves the node vulnerability-propagation algorithm so that the contributions of all target nodes are aggregated into a node's value.

Due to the uncertainty of the attacker strategy, and in order to verify the impact of different attacker strategies on the overall network security situation, this research uses the following three attacker strategies:
1. Random attack strategy. At time t, the attacker randomly selects an attack node from the candidate nodes as the next attack target.
2. Probability attack strategy. This selection method computes an attack probability for every candidate node from its value; the selection probability of each candidate attack node v is p(v) = val(v) / Σ_{u∈S} val(u).
3. Greedy attack strategy. This selection method computes the attack value of all candidate nodes and activates the node with the highest value.
A minimal sketch of these three strategies is given below.
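In the Python sketch below, node values are assumed to be precomputed in a dict val and S is the candidate set built from children of activated nodes; the function names and numbers are illustrative, not taken from the paper.

import random

def random_strategy(candidates):
    # Pick any candidate node uniformly at random.
    return random.choice(sorted(candidates))

def probability_strategy(candidates, val):
    # Pick a candidate with probability proportional to its value:
    # p(v) = val(v) / sum_{u in S} val(u)
    nodes = sorted(candidates)
    weights = [val[v] for v in nodes]
    return random.choices(nodes, weights=weights, k=1)[0]

def greedy_strategy(candidates, val):
    # Always pick the candidate with the highest value.
    return max(candidates, key=lambda v: val[v])

# Example: three candidate nodes with illustrative values.
val = {"n1": 0.2, "n2": 0.5, "n3": 0.3}
S = {"n1", "n2", "n3"}
print(probability_strategy(S, val), greedy_strategy(S, val))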
User Strategy Based On Mixed Strategy Game

The user has the corresponding authority to scan and analyze vulnerabilities within the scope of that authority. Users are not concerned with the security of the entire system; they consider only their own gains and losses, and their means of profit is to expose the analyzed vulnerability information to the attacker. For each vulnerability node accessible to the user, the revenue is the value of the node and the cost is a legal cost. The legal cost of exposing vulnerability information, and the gains obtained, are related to the attacker's behaviour. For this problem, a mixed-strategy game can be used to model the attacker-user interaction; the resulting mixed-strategy Nash equilibrium serves as the user's behavioural guidance.

Legal cost: exposing information about a vulnerability node carries a legal cost for the user, and this cost is positively related to the distance between the node and the target node. The legal cost of a node is the sum of the risk-propagation values of all the important nodes involved.

Full benefit: when the user's exposure policy and the attacker's attack strategy coincide on the same node v, the net income the user can obtain is recorded as r_v = b_v − c_v, where b_v is the user benefit of node v and c_v is the legal cost of node v.

Revenue matrix: for each node, different behaviours of attackers and users yield different income functions. The income matrix Q is of size n × n, where n represents the number of vulnerable nodes; its entries depend on k, the distance between the attacked node and the node exposed by the user.

User-attacker game model: the outcome of each choice of the attacker and the user is represented by the revenue matrix, and the user seeks a strategy that maximizes the expected value of the mixed-strategy game. The user's strategy set U is a probability vector x = (x_1, ..., x_n) over the n node choices, where each element represents the user's propensity to select a node and the propensities sum to 1. The user's objective can be expressed as maximizing the mixed-strategy expectation v subject to Σ_i x_i Q_ij ≥ v for every attacker choice j. Since v is a number greater than 0, dividing the constraints by v and letting u_i = x_i / v converts the objective into minimizing Σ_i u_i = 1/v subject to Σ_i u_i Q_ij ≥ 1 and u_i ≥ 0. This linear program can be solved to obtain the mixed-strategy expectation value and the user strategy set, where the strategy with the largest propensity in the user strategy set is the strategy the user selects. A small numerical sketch of this linear program is given below.
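The transformed program can be solved with an off-the-shelf LP solver; below is a sketch using scipy.optimize.linprog with an illustrative 3 × 3 income matrix Q (positive entries, so the game value is positive). The matrix values are assumptions for demonstration only.

import numpy as np
from scipy.optimize import linprog

# Illustrative income matrix Q (rows: user choices, columns: attacker choices).
Q = np.array([[3.0, 1.0, 2.0],
              [1.0, 3.0, 1.0],
              [2.0, 1.0, 3.0]])

n = Q.shape[0]
# Minimize sum(u) subject to Q^T u >= 1, u >= 0 (written as -Q^T u <= -1).
res = linprog(c=np.ones(n), A_ub=-Q.T, b_ub=-np.ones(n), bounds=[(0, None)] * n)

v = 1.0 / res.x.sum()   # game value (mixed-strategy expectation)
x = res.x * v           # user's mixed strategy, sums to 1
print("value:", v, "strategy:", x)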
Defender Strategy Based On Trilateral Game

As the defensive party, the defender always expects a better security posture for the entire network structure: after the network topology is reinforced, it has a higher capability to resist risk. The defender needs to consider the user's exposure strategy when deploying a defence strategy against the attacker's attack strategy. The defender's strategy can be expressed in the following steps.
1. At time t+1, randomly select N attack paths; let the set A = {A_1, ..., A_k, ..., A_N}, k = 1, 2, ..., N, represent these attack paths.
2. For the blocking of each attack path, a security situation assessment is conducted. The security situation assessment in this paper uses the evaluation indicators of Yang to evaluate the overall network security situation, combining the realization probability of the attack stage (the possibility that the attacker has successfully reached the state of a certain stage), Impact(v) (the score of all the individual vulnerability threats used in the attack phase), and Weight (the node weight value during the attack phase).
3. Select the path with the largest value from A, and from the intersection of that path's node set and the attack candidate node set, select a non-exposed node; if there is no other node in the intersection, move the path out of A and repeat step 3 until a corresponding defence hardening node is selected.

Security Situation Algorithm Based On Trilateral Game

Exploit probability: for the path e(u,v), the probability of attack along the path is the probability of exploiting the corresponding vulnerability. The probability is calculated from the three CVSS attributes AV, AC, and AU as follows: p(e(u,v)) = 2 × AV × AC × AU.

Node threat score: according to the official CVSS document, the threat score for the target node is calculated as Impact(v) = 10.41 × (1 − (1 − C)(1 − I)(1 − A)).

Node weight value: in the entire network topology, every node has a certain weight reflecting its importance within the topology, so that the node importance of each attack phase can be comprehensively quantified.

Node attack cost: the attacker's attack incurs a cost, determined mainly by the attacker's understanding of the vulnerability and how the vulnerability is exposed on the network. For known vulnerabilities, the attack cost is mainly considered from three aspects: 1. whether the exploit method has been released; 2. whether an exploit tool is spreading on the network; 3. what conditions are needed to exploit the vulnerability. A comprehensive measurement of these three aspects can be used as the node attack cost indicator.

Figure 1. Schematic diagram of the state change of the tripartite game.

At time t, the three parties simultaneously observe the attack graph. Through the observed state of the attack graph, the behaviours of the parties jointly determine its next state; the game state changes as shown in Figure 1. The game is played as follows:
1. According to the behavioural constraints, the defender selects the defence strategy D_t.
2. Under D_t, the user observes the corresponding attack graph state and selects U_t.
3. Under the joint action of D_t and U_t, the attacker observes the current attack graph state and performs policy selection to obtain A_t.
The security situation value of the network attack topology at each moment of the game is computed, until the maximum time T is reached, as the average SA_t = (1/n) Σ_k f(A_k), where A_k is an attack path at the current time, n is the number of attack paths, and f is the path evaluation from step 2 above.

Experiment

In order to eliminate random conditions, this section conducts experiments on random network structures. The control experiment applies the two-party game method to the same framework for comparison.

Random Network Validation

The experiments are divided into two groups using different network topologies, which are mainly used to verify the validity of the model in the large-scale ordered and large-scale disordered cases. The two topologies are: (i) a layered directed attack graph model; (ii) a random directed attack graph.

Layered Attack Graph

The structural model is an ordered structure: a 5-layer directed attack graph, where each of layers 1, ..., 5 has 25 × 0.8 nodes, and the last layer's nodes are the target nodes. Each layer's nodes are connected to 50% of the next layer's nodes, with the connected nodes selected randomly. Since node attributes have a certain influence on the behaviour of the game, attack graphs with three different proportions of "and" nodes (including 30% and 50%) are experimentally verified, as shown in Figure 2. The x-axis in each subgraph is the proportion of root nodes in the attack graph, and the y-axis is the security situation value; the smaller the security situation value, the smaller the damage to the system.
In order to prevent extreme experimental data from appearing, each reported result is the average of 100 runs in the same experimental environment. In this structure, there are three types of attacker attack strategies, and the results of these attack strategies are verified. Combining the results of each strategy, the model improves the security of the layered directed attack graph by 17.9% compared to the two-player game model.

Random Attack Graph

The structural model is an unordered structure, and the scale of the attack graph is set to |V| = 100 and |E| = 300. In this section, 15 nodes are randomly selected as attack target nodes in the attack graph. As in the layered attack graph experiment, attack graphs with three different proportions of "and" nodes (including 30% and 50%) are experimentally verified, as shown in Figure 3; each reported result is again the average of 100 runs. Combining the results of the three attack strategies, the model improves the security of the random directed attack graph by 18.8% compared to the two-player game model. A sketch of the layered topology generation used here is given below.
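As an illustration of the experimental setup, the following Python sketch generates a layered directed attack graph of the kind described above (5 layers, each node wired to a random 50% of the next layer). The layer size of 20 nodes, the edge-probability range, and the use of networkx are assumptions for the sketch, not details confirmed by the paper.

import random
import networkx as nx

def layered_attack_graph(layers=5, nodes_per_layer=20, link_ratio=0.5,
                         and_ratio=0.3, seed=0):
    rng = random.Random(seed)
    g = nx.DiGraph()
    layer_nodes = []
    for layer in range(layers):
        nodes = [f"L{layer}_{i}" for i in range(nodes_per_layer)]
        layer_nodes.append(nodes)
        for v in nodes:
            # Mark a fraction of nodes as "and"-type, the rest as "or"-type.
            g.add_node(v, kind="and" if rng.random() < and_ratio else "or")
    for layer in range(layers - 1):
        for u in layer_nodes[layer]:
            # Connect each node to a random 50% of the next layer.
            targets = rng.sample(layer_nodes[layer + 1],
                                 int(nodes_per_layer * link_ratio))
            for v in targets:
                g.add_edge(u, v, p=rng.uniform(0.1, 0.9))  # exploit probability
    return g, layer_nodes[-1]  # graph and target nodes (last layer)

g, targets = layered_attack_graph()
print(g.number_of_nodes(), g.number_of_edges(), len(targets))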
|
package com.imrenagi.service_auth.utils;
/**
* Created by imrenagi on 5/13/17.
*/
public class UserUtility {
public static final Long ROLE_SUPER_ADMIN = 1L;
public static final Long ROLE_ADMIN = 2L;
public static final Long ROLE_USER = 3L;
public static final Long ROLE_VENDOR = 4L;
}
|
Chef Stephanie Izard is seen at her restaurant Duck Duck Goat, 857 W. Fulton Market.
Apparently running three restaurants and preparing to open a fourth (in Los Angeles) leaves Stephanie Izard with time on her hands, because the chef is launching a pop-up restaurant this month.
Tiny Goat, located in the upstairs private dining room in Little Goat Diner (820 W. Randolph St.), will serve chef-designed tasting menus exactly two nights each month. It will kick off Dec. 30 and 31 (yes, New Year’s Eve) with a nine-course menu of Izard’s favorite 2018 dishes from Girl & the Goat, Duck Duck Goat and Little Goat (plus a couple of all-time favorites).
“I’m setting it up like a small little restaurant,” said Izard, by which she means that reservations will be staggered throughout the evening, rather than offering only one or two seating times.
Future dates have not been determined, but Izard plans to stick to a Friday-Saturday or Thursday-Friday format. “Eventually, we’ll try to get the word out two months in advance, so people can put it on their calendars,” she said.
Tickets for the Dec. 30 and 31 dinners will be $125. Prepaid reservations are available through the Tock system.
|
Not many games like this out there, which makes it a truly unique experience.
The story is amazing, mature and intelligently written. A mystery so fragmented it makes you use your brain and keeps you curious only to blow your mind with some unexpected plot twists near the end.
The voice acting is out of this world, I can't remember the last time I've seen such honest and heartfelt character portrayals in a game. The voice actors did an excellent job.
Not only is the music chill inducing but it's implemented in such a way that it plays in rhythm with things happening on screen and dynamically changing in response to different situations. Sound design is also masterful.
The only reason I'm giving it a 9 is because as a shooter it performs way below average. The gunplay is kind of clunky and the stealth system is extremely frustrating. For a game that encourages you to be stealthy and kill as few people as possible, it makes stealth an extreme pain in the ass. Enemies sometimes spot you through walls or bushes. If one guard sees you, all guards instantly know where you are and start attacking, even if some of them are miles away when you're spotted. After hundreds of restarts I just gave up and went all guns blazing.
In spite of this, it's damn near perfect.
|
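// Excerpt from Hadoop YARN's LeveldbRMStateStore: persists an RM delegation
// token (and, on first store, its sequence number) atomically in one leveldb
// write batch. The surrounding fields (db, LOG, RM_DT_SEQUENCE_NUMBER_KEY) and
// the getRMDTTokenNodeKey helper are defined elsewhere in the original class.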
public class xxx {
private void storeOrUpdateRMDT(RMDelegationTokenIdentifier tokenId,
Long renewDate, boolean isUpdate) throws IOException {
String tokenKey = getRMDTTokenNodeKey(tokenId);
RMDelegationTokenIdentifierData tokenData =
new RMDelegationTokenIdentifierData(tokenId, renewDate);
if (LOG.isDebugEnabled()) {
LOG.debug("Storing token to " + tokenKey);
}
try {
WriteBatch batch = db.createWriteBatch();
try {
batch.put(bytes(tokenKey), tokenData.toByteArray());
if(!isUpdate) {
ByteArrayOutputStream bs = new ByteArrayOutputStream();
try (DataOutputStream ds = new DataOutputStream(bs)) {
ds.writeInt(tokenId.getSequenceNumber());
}
if (LOG.isDebugEnabled()) {
LOG.debug("Storing " + tokenId.getSequenceNumber() + " to "
+ RM_DT_SEQUENCE_NUMBER_KEY);
}
batch.put(bytes(RM_DT_SEQUENCE_NUMBER_KEY), bs.toByteArray());
}
db.write(batch);
} finally {
batch.close();
}
} catch (DBException e) {
throw new IOException(e);
}
}
}
|
declare module "love.physics" {
/**
* Allow two Bodies to revolve around a shared point.
* @link [RevoluteJoint](https://love2d.org/wiki/RevoluteJoint)
*/
export interface RevoluteJoint extends Joint<"RevoluteJoint"> {
/**
* Checks whether the limits are enabled.
*
* @return enabled, True if enabled, false otherwise.
* @link [RevoluteJoint:areLimitsEnabled](https://love2d.org/wiki/RevoluteJoint:areLimitsEnabled)
*/
areLimitsEnabled(): boolean;
/**
* Enables or disables the joint limits.
*
* @param enable True to enable, false to disable.
* @link [RevoluteJoint:setLimitsEnabled](https://love2d.org/wiki/RevoluteJoint:setLimitsEnabled)
*/
setLimitsEnabled(enable: boolean): void;
/**
* Starts or stops the joint motor.
*
* @param enable True to enable, false to disable.
* @link [RevoluteJoint:setMotorEnabled](https://love2d.org/wiki/RevoluteJoint:setMotorEnabled)
*/
setMotorEnabled(enable: boolean): void;
/**
* Get the current joint angle.
*
* @return angle, The joint angle in radians.
* @link [RevoluteJoint:getJointAngle](https://love2d.org/wiki/RevoluteJoint:getJointAngle)
*/
getJointAngle(): number;
/**
* Get the current joint angle speed.
*
* @return s, Joint angle speed in radians/second.
* @link [RevoluteJoint:getJointSpeed](https://love2d.org/wiki/RevoluteJoint:getJointSpeed)
*/
getJointSpeed(): number;
/**
* Gets the joint limits.
*
* @return lower, The lower limit, in radians.
* @return upper, The upper limit, in radians.
* @tupleReturn
* @link [RevoluteJoint:getLimits](https://love2d.org/wiki/RevoluteJoint:getLimits)
*/
getLimits(): [number, number];
/**
* Gets the lower limit.
*
* @return lower, The lower limit, in radians.
* @link [RevoluteJoint:getLowerLimit](https://love2d.org/wiki/RevoluteJoint:getLowerLimit)
*/
getLowerLimit(): number;
/**
* Gets the maximum motor force.
*
* @return f, The maximum motor force, in Nm.
* @link [RevoluteJoint:getMaxMotorTorque](https://love2d.org/wiki/RevoluteJoint:getMaxMotorTorque)
*/
getMaxMotorTorque(): number;
/**
* Gets the motor speed.
*
* @return s, The motor speed, radians per second.
* @link [RevoluteJoint:getMotorSpeed](https://love2d.org/wiki/RevoluteJoint:getMotorSpeed)
*/
getMotorSpeed(): number;
/**
* Get the current motor force.
*
* @return f, The current motor force, in Nm.
* @link [RevoluteJoint:getMotorTorque](https://love2d.org/wiki/RevoluteJoint:getMotorTorque)
*/
getMotorTorque(): number;
/**
* Gets the upper limit.
*
* @return upper, The upper limit, in radians.
* @link [RevoluteJoint:getUpperLimit](https://love2d.org/wiki/RevoluteJoint:getUpperLimit)
*/
getUpperLimit(): number;
/**
* Checks whether limits are enabled.
* @return enabled, True if enabled, false otherwise.
* @link [RevoluteJoint:hasLimitsEnabled](https://love2d.org/wiki/RevoluteJoint:hasLimitsEnabled)
* @deprecated since 11.0. This function has been renamed to RevoluteJoint:areLimitsEnabled.
*/
hasLimitsEnabled(): boolean;
/**
* Checks whether the motor is enabled.
*
* @return enabled, True if enabled, false if disabled.
* @link [RevoluteJoint:isMotorEnabled](https://love2d.org/wiki/RevoluteJoint:isMotorEnabled)
*/
isMotorEnabled(): boolean;
/**
* Sets the limits.
*
* @param lower The lower limit, in radians.
* @param upper The upper limit, in radians.
* @link [RevoluteJoint:setLimits](https://love2d.org/wiki/RevoluteJoint:setLimits)
*/
setLimits(lower: number, upper: number): void;
/**
* Sets the lower limit.
*
* @param lower The lower limit, in radians.
* @link [RevoluteJoint:setLowerLimit](https://love2d.org/wiki/RevoluteJoint:setLowerLimit)
*/
setLowerLimit(lower: number): void;
/**
* Set the maximum motor force.
*
* @param f The maximum motor force, in Nm.
* @link [RevoluteJoint:setMaxMotorTorque](https://love2d.org/wiki/RevoluteJoint:setMaxMotorTorque)
*/
setMaxMotorTorque(f: number): void;
/**
* Sets the motor speed.
*
* @param s The motor speed, radians per second.
* @link [RevoluteJoint:setMotorSpeed](https://love2d.org/wiki/RevoluteJoint:setMotorSpeed)
*/
setMotorSpeed(s: number): void;
/**
* Sets the upper limit.
*
* @param upper The upper limit, in radians.
* @link [RevoluteJoint:setUpperLimit](https://love2d.org/wiki/RevoluteJoint:setUpperLimit)
*/
setUpperLimit(upper: number): void;
}
}
|
'''
CATEGORIES TO M SCRIPT - CREATE CONDITIONAL STATEMENT CODE FOR POWER BI
-
a dynamoPython script, visit the website for more details
https://github.com/Amoursol/dynamoPython
'''
__author__ = '<NAME> - <EMAIL>'
__twitter__ = '@adambear82'
__github__ = '@adambear82'
__version__ = '1.0.0'
'''
for large projects with lots of clashes it is useful to analyse in
a business inteligence or data visualisation tool such as ms power bi.
creating the conditonal statement in power bi can take a long time if
there are a lot of categories to include
'''
# ------------------------
# import modules
# ------------------------
# refer to the clipboard
import clr
clr.AddReference('System.Windows.Forms')
from System.Windows.Forms import Clipboard
# refer to the document manager
clr.AddReference('RevitServices')
import RevitServices
from RevitServices.Persistence import DocumentManager
doc = DocumentManager.Instance.CurrentDBDocument
# refer to the revit API
clr.AddReference('RevitAPI')
import Autodesk
from Autodesk.Revit.DB import *
# ------------------------
# inputs & variables
# ------------------------
# some categories exported from navisworks are not included as
# categories in visibility graphics, for example
# Handrails, Landings, Pads, Runs, Slab Edges, Top Rails, Wall Sweeps
# remove single and double spaces after commas and split into list
catsInput = IN[0]
catsReplace1 = catsInput.replace(', ', ',')
catsReplace2 = catsReplace1.replace(', ', ',')
catsManual = catsReplace2.split(',')
catsManual.sort()
# provide reference strings
hashtag = 'Renamed Columns1'
pathlink = 'pathlink'
filterIn = 'filter_in'
filterOut = 'filter_out'
# ------------------------
# get categories
# ------------------------
# get categories that can add sub categories
# ie the categories which appear in vis graphics
# annotated from forum post with kudos to <NAME>
# https://forum.dynamobim.com/t/get-all-elements-in-model-categories/9447/7
modelCats = []
for cat in doc.Settings.Categories :
if cat.CategoryType == CategoryType.Model and cat.CanAddSubcategory:
modelCats.append(cat.Name)
# only append extra categories if they have been defined in input
if catsInput :
for cat in catsManual :
modelCats.append(cat)
# sort alphabetically so its easier to read
cats = sorted(modelCats)
# ------------------------
# strings
# ------------------------
# the 1st line adds a column to the table based on a filter on the hash
table = ''.join(('= Table.AddColumn(#"', hashtag, '", "filter",'))
# define strings to be used in M code
each = 'each if ['
elif0 = 'else if ['
elif1 = '] = "'
elif2 = '" then "'
elif3 = '"'
# the 2nd line is a special case
# where cats[0] requires 'each' instead of 'else if'
catJoin = each, pathlink, elif1, cats[0], elif2, filterIn, elif3
temp = ''.join(catJoin)
listLines = []
listLines.append(temp)
# the 3rd line and onwards starts with else if
# each row is checked if it is equal to one of the remaining cats
# cats is sliced by [1:] to return items from index 1 to the last index
for c in cats[1:] :
catJoin = elif0, pathlink, elif1, c, elif2, filterIn, elif3
temp = ''.join(catJoin)
listLines.append(temp)
lines = '\r\n'.join(listLines)
# the final line starts with else
# rows not in cats are given the filterOut value
strElse = ''.join(('else "', filterOut, '")'))
# the code is brought together with new lines between each line
code = '\r\n'.join((table, lines, strElse))
# ------------------------
# send to clipboard
# ------------------------
# annotated with kudos to bakery 'by send to clipboard from revit' (sic)
# https://github.com/LukeyJohnson/BakeryForDynamo/blob/97e5622db7ba14cd42caac9b8bd4fdba6b66871e/nodes/bv%20Send%20to%20Clipboard%20from%20Revit.dyf#L5-L12
# try to copy the code, provide a message if it fails
try:
Clipboard.SetText(code)
copyMsg = code
except:
copyMsg = 'Data could not be copied to clipboard'
# ------------------------
# output
# ------------------------
OUT = copyMsg
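Outside of Revit, the string-building logic above can be exercised on its own; the snippet below is a minimal sketch that feeds a hypothetical category list through the same join pattern and prints the resulting Power BI M conditional.

# standalone sketch of the M-code builder above (no Revit required)
cats = sorted(['Walls', 'Doors', 'Handrails'])
hashtag, pathlink = 'Renamed Columns1', 'pathlink'
filterIn, filterOut = 'filter_in', 'filter_out'

table = '= Table.AddColumn(#"{}", "filter",'.format(hashtag)
lines = ['each if [{}] = "{}" then "{}"'.format(pathlink, cats[0], filterIn)]
for c in cats[1:]:
    lines.append('else if [{}] = "{}" then "{}"'.format(pathlink, c, filterIn))
code = '\n'.join([table] + lines + ['else "{}")'.format(filterOut)])
print(code)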
|
/**
* Supports Vector input and String input.
* String: Split the string by character.
* Vector: return itself.
*/
public static <T> Vector<String> splitStringToWords(T str){
if(str instanceof Vector){
return (Vector)str;
} else if(str instanceof String){
Vector<String> res = new Vector<>();
String s = (String) str;
for(int i = 0; i < s.length(); i++){
res.add(s.substring(i, i + 1));
}
return res;
} else{
throw new RuntimeException("Only support Vector and String.");
}
}
|
#include <fstream>
#include <iostream>
#include "../ch07/sales_data.h"
using namespace std;
int main(int argc, char **argv)
{
ifstream input(argv[1]);
ofstream output(argv[2]);
Sales_data total;
if(read(input, total)) {
Sales_data trans;
while(read(input, trans)) {
if(total.isbn() == trans.isbn()) {
total.combine(trans);
} else {
print(output, total);
total = trans;
}
}
print(output, total);
} else {
cerr << "no data?" << endl;
}
return 0;
}
|
def raw_command(self, botengine, name, value):
    """Handle a raw device command: a truthy status value switches the device on, a falsy one switches it off."""
    if name == self.MEASUREMENT_NAME_STATUS:
        if value:
            self.on(botengine)
        else:
            self.off(botengine)
|
1. Field of the Invention
The present invention relates to phase change memory devices composed using phase change materials such as chalcogenide.
The present application claims priority on Japanese Patent Application No. 2007-176044, the content of which is incorporated herein by reference.
2. Description of the Related Art
Dynamic random-access memories (DRAM) are currently used in various electronic devices; however, they are volatile memories that cannot store data without a power supply. They are disadvantageous in that refreshing is required to hold data even during power supply.
Nonvolatile memories are conventionally known as a way to solve the drawbacks of volatile memories, flash memory being a typical example. Compared with DRAM, however, flash memory is restricted in that it needs a relatively long time for writing and erasing data and consumes a relatively high electric current.
Recently, phase change random-access memory (PRAM) composed using phase change materials such as chalcogenide has been developed as a new type of nonvolatile memory. In the PRAM (simply referred to as phase change memory), different write currents are applied to the phase change material, varying its crystalline state so as to store data. The PRAM can be used as a nonvolatile memory and is expected to be a promising replacement for the conventionally-known DRAM because it does not need refreshing to hold data.
A write circuit of the conventionally-known phase change memory device needs an electric current of several hundreds of micro-amperes (μA) in order to write data into memory cells. It is very difficult to adequately produce such a high write current based on the existing voltage supply; hence, it is necessary to use a high potential power source, which produces a high write current by way of a potential switch circuit. The potential switch circuit is a complex circuit having a relatively large scale of circuitry, thus increasing the overall scale of circuitry of the phase change memory device.
Various types of phase change memory devices have been disclosed in various documents such as Patent Document 1 and Patent Document 2. Patent Document 1: Japanese Unexamined Patent Application Publication No. 2007-26644 Patent Document 2: Japanese Patent Application Publication No. 2005-514719
Patent Document 1 teaches a phase change memory device capable of changing a drive voltage level thereof, which includes a write booster circuit and a write driver. In a first mode, the write booster circuit boosts a first voltage to produce a first control voltage in response to a control signal. In a second mode or a third mode, it boosts the first voltage to produce a second control voltage in response to the control signal.
Patent Document 2 teaches a programmable conductor random-access memory (PCRAM), to which an adequate voltage is applied so as to write data into chalcogenide memory cells by setting prescribed resistances thereto.
Both Patent Document 1 and Patent Document 2 differ from the present invention in terms of object and constitution, because the present invention aims at a reduction of the scale of circuitry by eliminating the potential switch circuit in the write circuit for writing data into phase change memory cells.
|
Patterns and Predictors of Cognitive Function Among Virally Suppressed Women With HIV

Cognitive impairment remains frequent and heterogeneous in presentation and severity among virally suppressed (VS) women with HIV (WWH). We identified cognitive profiles among 929 VS-WWH and 717 HIV-uninfected women from 11 Women's Interagency HIV Study sites at their first neuropsychological (NP) test battery completion comprised of: Hopkins Verbal Learning Test-Revised, Trail Making, Symbol Digit Modalities, Grooved Pegboard, Stroop, Letter/Animal Fluency, and Letter-Number Sequencing. Using 17 NP performance metrics (T-scores), we used Kohonen self-organizing maps to identify patterns of high-dimensional data by mapping participants to similar nodes based on T-scores and clustering those nodes. Among VS-WWH, nine clusters were identified (entropy = 0.990) with four having average T-scores ≥45 for all metrics and thus combined into an "unimpaired" profile (n = 311). Impaired profiles consisted of weaknesses in: sequencing (Profile-1; n = 129), speed (Profile-2; n = 144), learning + recognition (Profile-3; n = 137), learning + memory (Profile-4; n = 86), and learning + processing speed + attention + executive function (Profile-5; n = 122). Sociodemographic, behavioral, and clinical variables differentiated profile membership using Random Forest models. The top 10 variables distinguishing the combined impaired vs. unimpaired profiles were: clinic site, age, education, race, illicit substance use, current and nadir CD4 count, duration of effective antiretrovirals, and protease inhibitor use. Additional variables differentiating each impaired from unimpaired profile included: depression, stress-symptoms, income (Profile-1); depression, employment (Profile 2); depression, integrase inhibitor (INSTI) use (Profile-3); employment, INSTI use, income, atazanavir use, non-ART medications with anticholinergic properties (Profile-4); and marijuana use (Profile-5). Findings highlight consideration of NP profile heterogeneity and potential modifiable factors contributing to impaired profiles.
INTRODUCTION

Early in the HIV epidemic, people with HIV (PWH) frequently exhibited distinct clinical features including cognitive, behavioral, and motor dysfunction characteristic of a subcortical dementia. The clinical syndrome was progressive, severe, and included slow mental processing, memory impairment, gait disturbance, tremors, apathy, and depressive symptoms. Since the advent of effective and accessible antiretroviral therapy (ART), PWH are living longer and may be more likely to develop comorbidities that include hypertension, diabetes, cardiovascular disease, chronic liver and renal disease, and malignancies. Although it remains unclear whether these comorbidities accelerate and/or potentiate CNS dysfunction, different combinations of comorbidities are likely to result in diverse patterns of cognitive function. Thus, in PWH there is a need to understand cognitive profiles and their correlates, including sociodemographic, clinical, and behavioral factors in the context of viral suppression. Cognitive phenotyping in NeuroHIV research may facilitate a better understanding of the underlying pathophysiological mechanisms of each specific cognitive profile. Several studies using different methodological approaches focus on patterns and predictors of cognitive function in PWH. Cognitive patterns in PWH were first investigated by Lojek and Bornstein, who identified four patterns in 162 predominately White (93%), young (mean age = 34 years), and educated (mean years of education = 14) men at various stages of HIV infection. Using dimension reduction (factor analysis) of seven neuropsychological (NP) outcome metrics from 16 tests followed by k-means clustering, the four profiles consisted of a generally unimpaired group; and weaknesses or impairments in only psychomotor speed, only memory and learning, and most domains. A recent cross-sectional study identified three profiles using five cognitive domain T-scores in a latent profile analysis in almost 3,000 predominately White (69%), educated (mean years of education = 15) men with HIV (MWH; 53%) and without HIV from the Multicenter AIDS Cohort Study (MACS; mean age = 40 years). The three profiles included an unimpaired profile, a profile below average on learning and memory, and a profile below average on all domains. Similarly, three profiles were identified using 10 NP outcome metrics in a latent profile analysis in 361 PWH who were predominately men (88%), actively receiving ART (94%) at the Southern Alberta Clinic. Again, an unimpaired profile was identified along with a profile with specific weaknesses in executive function and memory and one with more global NP impairment. Notably, each of these studies focused on all or predominately White, educated MWH and included mixed samples of virologically suppressed (VS) and non-suppressed (NVS) individuals. Findings in MWH cannot necessarily be generalized to women with HIV (WWH).
WWH may be at greater risk for cognitive impairment due, in part, to a disproportionate burden of poverty, low literacy levels, substance abuse, poor mental health, barriers to health care services, and environmental exposures prevalent in the predominantly minority urban communities in which they reside. Biological factors, such as sex steroid hormones and female-specific factors (e.g., pregnancy, menopause), may also contribute to the pattern and magnitude of cognitive impairment in PWH. Combining samples of NVS and VS individuals introduces heterogeneity in cognitive function, and findings from combined samples may not be generalizable to VS-PWH, a population that is expanding with the introduction of increasingly tolerable and available medication options. As the pattern and predictors of cognitive function are likely not the same in MWH and WWH, or in VS vs. NVS individuals, we examined heterogeneity in NP performance in the largest sample to date of VS-WWH and HIV-uninfected women. We accomplished this by applying novel machine learning methods to identify subgroups who demonstrated similar NP profiles. This approach may help guide our understanding of profiles that are associated with patterns of NP weakness. We also identified factors associated with each profile from a constellation of sociodemographic, behavioral, and clinical factors that have been found to be important distinguishing factors in prior studies, with the addition of female-specific factors (e.g., pregnancy, menopausal stage) that could not be examined in mixed-sex studies.

Participants

The Women's Interagency HIV Study (WIHS) is a multi-center, longitudinal study of WWH and HIV-uninfected women. The first three waves of study enrollment occurred between October 1994 and November 1995, October 2001 and September 2002, and January 2011 and January 2013 from six sites (Brooklyn, Bronx, Chicago, DC, Los Angeles, and San Francisco). A more recent wave of enrollment occurred at sites in the southern US (Chapel Hill, Atlanta, Miami, Birmingham, and Jackson) between October 2013 and September 2015. Study methodology including recruitment procedures and eligibility criteria, training, and quality assurance procedures was previously published. This analysis was restricted to all participants completing the first NP test battery. NP data for the initial six sites were collected between 2009 and 2011, while NP data from the southern sites were collected between 2013 and 2015.

Neuropsychological (NP) Test Battery and Outcomes

The NP test battery included the Hopkins Verbal Learning Test-Revised (HVLT-R; outcomes: trial 1 learning, total learning, delayed free recall, percent retention, recognition), Letter-Number Sequencing (LNS; outcomes: total correct on the working memory and attention conditions), Letter fluency (outcome: total correct words generated across three trials), Animal fluency (outcome: total correct animals generated), and Grooved Pegboard (GPEG; outcomes: time to completion, dominant, and non-dominant hand). Timed outcomes were log transformed to normalize distributions and reverse scored so higher equated to better performance. Demographically-adjusted T-scores were calculated for each outcome. T-scores are normalized to have an average of 50 and a standard deviation of 10. Mean T-scores >55 were considered high performing, between 45 and 55 were considered within the normal range, <45 were considered as weaknesses, and those <40 were considered impaired.
Factors Associated With NP Profiles

Factors of interest were based on prior NP WIHS studies and included: clinic site; enrollment wave; sociodemographic, mental health, behavioral, clinical, and female-specific factors; and common non-ART medications with known neurocognitive adverse effects (NCAEs). Sociodemographic factors included age, education, WRAT-III reading subscale score, race/ethnicity, employment status, average annual household income (≤$12,000), and health insurance status. Mental health factors included depressive symptoms (Center for Epidemiological Studies Depression scale score ≥16), perceived stress (Perceived Stress Scale-10, top tertile cutoff), and post-traumatic stress symptoms (PTSD Checklist-Civilian Scale). Behavioral factors included current smoking status, recent alcohol intake, marijuana use, and crack, cocaine, and heroin use. General clinical, metabolic, and cardiovascular factors included hepatitis C antibody positivity, body mass index (BMI), non-ART medication use, and history of stroke, hypertension, and diabetes mellitus. Female-specific factors included ever pregnant, history of hysterectomy and/or bilateral oophorectomy, hormonal contraceptive use, hormone therapy use, and menopausal stage [defined using the Study of Women's Health Across the Nation criteria, as in previous WIHS studies]. HIV-related clinical factors included HIV RNA, nadir and current CD4+ T lymphocyte count, ART use and adherence, duration of ART use, and previous AIDS diagnosis.

Statistical Analyses

All 17 NP measures were used to find groups of similar cognitive profiles within each participant subset (VS-WWH, HIV-uninfected) utilizing Kohonen self-organizing maps (SOM) followed by clustering with MClust. SOM is an unsupervised machine learning technique used to identify patterns in high-dimensional data by producing a two-dimensional grid representation consisting of multiple nodes, each of which has a fixed position in the SOM grid along with associated participants who are mapped to that node. The coordinates of a node represent its similarity to other nodes (i.e., nodes that are closer together in the grid have more similar patterns than nodes that are further apart), and one node can represent multiple participants. Following the identification of the nodes, the nodes were clustered using the MClust package. Once the clustering of the nodes was completed, cluster profiles were assigned to the participants associated with each node. Profiles where the mean T-score on all cognitive outcomes was ≥45 were combined into an "unimpaired" profile. By using SOM and MClust in sequence, we were able to achieve fine-tuned clustering based on patterns of NP performance.
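As an illustration of the SOM-plus-MClust step just described, a minimal R sketch using the kohonen and mclust packages follows. The input matrix name (tscores: one row per participant, 17 T-score columns), the grid size, and the training length are assumptions, not values taken from the study:

library(kohonen)   # self-organizing maps
library(mclust)    # model-based clustering via finite normal mixtures

set.seed(42)
grid <- somgrid(xdim = 8, ydim = 8, topo = "hexagonal")   # grid size assumed
fit  <- som(as.matrix(tscores), grid = grid, rlen = 500)  # train the SOM

# Cluster the node codebook vectors; Mclust selects among the
# parameterized covariance structures (e.g., VVE, EVE) by BIC.
mc <- Mclust(fit$codes[[1]])

# Assign each participant the cluster of the node she was mapped to.
profile <- mc$classification[fit$unit.classif]
table(profile)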
Factors associated with profile membership, comparing each impaired profile with the unimpaired profile within each group (VS-WWH, HIV-uninfected), were explored by creating Random Forest (RF) models and then extracting variable importance. The datasets were randomly separated into training (70%) and testing (30%) sets. RF models were created on the training sets using internal validation via a 10-fold resampling method repeated five times. Prior to model creation, the Synthetic Minority Oversampling Technique (SMOTE) was used to control for bias due to any imbalance in the number of cases. Variables were removed from the model if they had low variance or if they had >30% missing data. Any missing data in the remaining variables were imputed before model creation using RF imputation and ridge regression (ridge penalty of 0.0001, a compromise between stability and lack of bias). For comparison with previous studies, we also created RF models for each group comparing the unimpaired profile to all impaired profiles combined. Models were also validated on the testing set to confirm that they retained predictive power balanced between classes and that the success of the trained models was not due to overfitting. All variables were plotted by relative variable importance based on the training-set models, and attention was given to the top 10 variables for each profile.

All analyses were done using R packages. SOM was achieved using the Kohonen package in R, and clustering was done using the MClust package. MClust is an R package for model-based clustering using finite normal mixture modeling that provides functions for parameter estimation via the Expectation-Maximization algorithm with an assortment of covariance structures. The program fits 10 parameterized covariance structures and chooses the best one based on the lowest Bayesian Information Criterion (BIC). The covariance structures consist of varying distributions (spherical, diagonal, or ellipsoidal), volumes (equal or variable), shapes (equal or variable), and orientations (equal or variable, for the ellipsoidal distribution only). Random Forest model creation was achieved using the Caret package in R. SMOTE resampling was done using the DMwR package. Imputation of missing data was done using the Multivariate Imputation by Chained Equations (MICE) package in R. ROC confidence intervals were calculated using the pROC package in R with 2,000 stratified bootstrap replicates (95% CI).
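A schematic R version of the RF workflow described above (70/30 split, SMOTE, 10-fold cross-validation repeated five times, RF-based imputation, and bootstrapped ROC confidence intervals) might look as follows. Here df, the factor profile, and the level name "impaired" are placeholders rather than the study's code, and the ridge-regression step is omitted:

library(caret)   # data splitting, resampling, RF training
library(mice)    # multiple imputation (supports method = "rf")
library(pROC)    # ROC curves and bootstrap confidence intervals

set.seed(42)
idx   <- createDataPartition(df$profile, p = 0.7, list = FALSE)
train <- df[idx, ]
test  <- df[-idx, ]

# Impute missing predictors with random-forest imputation.
train <- complete(mice(train, m = 1, method = "rf", printFlag = FALSE))

ctrl <- trainControl(method = "repeatedcv", number = 10, repeats = 5,
                     sampling = "smote",             # rebalance classes
                     classProbs = TRUE,
                     summaryFunction = twoClassSummary)
rf <- train(profile ~ ., data = train, method = "rf",
            metric = "ROC", trControl = ctrl)

varImp(rf)   # rank predictors; inspect the top 10 as in the text

# Validate on the held-out set with stratified bootstrap CIs for the AUC.
probs   <- predict(rf, newdata = test, type = "prob")[, "impaired"]
roc_obj <- roc(test$profile, probs)
ci.auc(roc_obj, method = "bootstrap", boot.n = 2000, boot.stratified = TRUE)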
Participants

Participants included 929 VS-WWH and 717 HIV-uninfected women at their first study visit with complete NP testing (Supplementary Table 1). On average, participants were 45.1 ± 9.3 years of age with 12.7 years of education. Thirty percent were from the southern WIHS sites, 69% were non-Hispanic Black, and 15% identified as Hispanic. Only 41% were employed and 48% reported an average annual household income <$12,000/year, while 87% were currently insured. Thirty percent had depressive symptoms and 35% were identified as having higher perceived stress levels. Nineteen percent had recently used marijuana, 7% were currently using crack, cocaine, and/or heroin, and 40% were current smokers. Ninety percent reported ever having been pregnant and 41% were post-menopausal. The average T-score for all NP tests in both VS-WWH and HIV-uninfected women was in the normal range between 45 and 55 (Supplementary Table 2).

Cognitive Profiles in VS-WWH and HIV-Uninfected Women

For both VS-WWH and HIV-uninfected women, clusters of participants with similar patterns of relative performance on all 17 NP outcomes were profiled using a sequence of SOM and MClust. Both VS-WWH and HIV-uninfected women had good fits (entropy = 0.99), and the resulting profiles were assigned names based on their relative patterns of weaknesses after consultation with a clinical neuropsychologist. The profiles are visualized in Figure 1, and univariate differences between the test scores, as well as univariate differences in predictor variables, are given in Tables 1 and 2 (Supplementary Tables 3, 4).

Profile Results in VS-WWH

Profiling of the 929 VS-WWH resulted in nine total clusters using an ellipsoidal multivariate mixture model with equal orientation (VVE) and an entropy of 0.99. Of these clusters, four were combined into a large "unimpaired" cluster consisting of 311 women (Figure 1A; Table 1). Profiling of the 1,666 PWH resulted in three total groups using an ellipsoidal multivariate mixture model with equal orientation and an entropy of 0.982 (Figure 1A). Of the remaining clusters:

Profile Results in HIV-Uninfected Women

Profiling of the 717 HIV-uninfected women also resulted in nine total clusters (Figure 1B; Table 2), from an ellipsoidal multivariate model with equal volume and orientation (EVE) and an entropy of 0.99. Of these clusters, four did not have mean T-scores <45 on any test and were therefore combined into a large "unimpaired" cluster consisting of 400 women. Of the remaining clusters:

Predictors of Cognitive Profiles

For each group of women, an RF model was created to help identify variables contributing in a non-linear fashion to distinguishing between each impaired and the unimpaired profile. An additional model was created to distinguish between all combined impairment profiles and the unimpaired profile in order to compare the differences in variables. For each model, variable importance was calculated, and the variables ranking in the top 10 were identified.

Predictors of Cognitive Profiles in VS-WWH

In RF models (Figure 2)

DISCUSSION

We used machine learning models to identify distinct homogeneous subgroups (profiles) in the largest dataset to date of VS-WWH and HIV-uninfected women. Separate patterns of cognitive performance, as well as factors associated with those patterns in each subgroup of women, were identified. The identified factors allow for screening and intervention, including potentially changing non-ART medications as well as screening and intervening on mental health and substance use.

In the context of viral suppression, we identified several profiles with distinct patterns of performance across 17 NP outcomes. While these profiles are statistically derived, some of them parallel commonly identified patterns in other neurological conditions or processes. Among the virally suppressed group, Profile 1-VS revealed a unique pattern reflecting exclusive weaknesses in cognitive sequencing (LNS Attention and Working Memory) and motor set-shifting (TMT-Part B). While, to our knowledge, this combination of isolated deficits in cognitive sequencing and motor set-shifting has not been appreciated in other disease populations, specific deficits in cognitive sequencing/verbal working memory have been observed in individuals with schizophrenia and their first-degree relatives. Additionally, McDonald et al. identified specific problems with motor set-shifting (TMT-Part B) in individuals with frontal lobe epilepsy. In contrast to the very specific weaknesses identified in Profile 1-VS, Profile 2-VS reflects general slowing, which is most often associated with typical (i.e., "healthy") aging. Profile 3-VS, characterized by poor encoding and recognition with intact retention, is more typical of an HIV-associated profile than of mild cognitive impairment due to Alzheimer's disease (AD). Profile 4-VS showed a mostly amnestic profile with some evidence of cognitive slowing, as can be observed in AD or in AD with vascular contributions.
This profile is similar to Profile 4-UN, which reflected an amnestic profile that is often observed in typical AD. Profile 5-VS, showing intact memory storage and manual speed/dexterity but weak or impaired attention, processing speed, learning, and executive functioning, is similar to what is observed in individuals with diffuse frontal-subcortical small vessel disease. Interestingly, a profile reflecting the specific motor slowing that has been linked to HIV infection did not emerge among VS-WWH. This is consistent with prior cross-sectional WIHS analyses, in which verbal learning and memory, rather than motor slowing, were the prominent features among WWH.

Among the seronegative group, Profile 1-UN was more likely to have diabetes, raising the possibility that their specific visual and motor deficits could be related to physical complications of diabetes, including diabetic retinopathy and neuropathy. Profile 2-UN reflects unique impairments on the most verbally mediated tasks (i.e., verbal learning and recall, and verbal fluencies). While we are unaware of any specific disease process or syndrome that shows the same pattern, this group of individuals has a clear weakness in verbal skills, which could be due to many factors, including learning differences or damage to brain regions associated with verbally mediated tasks. Profile 3-UN, reflecting specific motor slowing, is commonly observed in individuals with basal ganglia dysfunction, such as Parkinson's disease. Profile 5-UN, revealing rather generalized cognitive weaknesses or impairments but relatively preserved attention and visual processing, does not reflect any specific disease process or syndrome to our knowledge.

Even though the HIV group was virally suppressed, the dominant profiles did not fully align with those of the HIV-uninfected women, suggesting that HIV affects cognitive function even in the era of effective ART. There is a wealth of literature postulating neuronal damage as a result of ART agents, a viral reservoir that persists possibly due to poor CNS penetration of ART, or even legacy effects of damage occurring earlier during infection. Indeed, in the VS-WWH RF model where all impaired groups were grouped together, nadir CD4 was a top predictor of group membership. This also points to how existing studies that treat impairment as a unidimensional construct may only be able to detect differences in such variables and miss those associated more strongly with some profiles than others.

Despite the different cognitive profiles among VS-WWH, the most discriminative factors between each impaired profile and the unimpaired profile were similar and included a number of well-established sociodemographic cognitive correlates, such as years of education, age, and race/ethnicity. Clinic site location also emerged, a factor that we have also seen using more standard statistical approaches in the WIHS. The factors underlying this rather robust association are unknown but may involve neighborhood factors, such as violence and food insecurity. Additionally, common behavioral correlates of cognition emerged, including illicit substance use; in the case of marijuana, use was more likely in the unimpaired profile than in the impaired profile demonstrating weaknesses in learning, processing speed, and executive function (Profile 5-VS). This finding is consistent with some studies demonstrating protective effects of marijuana use on cognition in PWH.
We also found common clinical correlates of cognition that distinguished cognitive profiles among VS-WWH, including BMI and PI use. Likely proxies of HIV disease burden, including nadir CD4 count and years of ART use, were also discriminators. In contrast, sociodemographic and medical variables were unable to distinguish cognitive profiles based on seven major cognitive domains. Mental health factors also emerged as important profile discriminators among VS-WWH, including depressive and stress-related symptoms. Depressive symptoms differentiated a number of impaired profiles (4 of 5 profiles) from the unimpaired profile, whereas stress-related symptoms only emerged for two profiles, Profile 1-VS (sequencing) and Profile 4-VS (learning and memory). These findings align with our WIHS studies demonstrating numerous cognitive correlates of depressive symptoms, whereas stress-related symptoms related most strongly to learning and memory in the context of HIV. Importantly, mental health factors are an unmet medical need and are modifiable targets to improve cognition in WWH.

INSTI use discriminated both Profile 3-VS (learning and recognition) and Profile 4-VS (learning and memory) from the unimpaired profile. This finding is consistent with a number of recently published studies indicating INSTI use as a contributor to NP function. One study demonstrated an association between INSTI use and poorer learning and memory but no other cognitive domains. A second study demonstrated that switching to or starting an INSTI was primarily associated with poorer learning among WWH. A third study demonstrated that long-term INSTI exposure distinguished two impaired profiles from an unimpaired profile.

Our study also allowed us to investigate female-specific factors that are often ignored, and it identified the importance of oophorectomy and/or hysterectomy (Profile 2-UN). Interestingly, these female-specific factors only emerged as important profile discriminators among HIV-uninfected women. As the proportion of menopause-inducing and non-inducing oophorectomy and/or hysterectomy was similar across VS-WWH and HIV-uninfected women, one possible explanation is that the virus itself and clinical factors, such as ART, may explain more of the variance in cognitive function in VS-WWH. However, in the absence of HIV, the negative effects of oophorectomy and/or hysterectomy on cognition may become more apparent. Overall, these female-specific factors are potential contributors that are missed in other, predominantly male, studies. Future studies of women should evaluate these variables in a similarly stratified form to identify potential mechanistic contributions.

The existence of distinctive patterns of cognitive performance, as well as distinct factors associated with those patterns, also adds to existing evidence of differing neuropathological mechanisms. The dominant profiles often contained patterns of weaknesses that were subclinical, yet still lower than the unimpaired profiles. In many cases, the associated factors are intervenable and should be followed up with mechanistic and longitudinal studies.

Differences between the profiles identified here and previous efforts to identify cognitive patterns can be attributed to both the methods used and the study population. To identify meaningful cognitive patterns, we used a combination of SOM and MClust, which is a slight deviation from traditional k-means clustering.
The nature of k-means is that it yields clusters where the most dramatic differences are shown, which may ignore subtle differences in patterns. Even Molsberry et al. and Amusan et al., who used latent profile analysis on domain T-scores, had their fits dominated by a high-performing and a low-performing group. Using SOM for dimension reduction on the T-scores for individual tests prevented us from following pre-conceived notions about the latent structures of cognitive domains, which have been shown to be different in HIV. Another reason that we may have found different profiles than prior studies is that we focused on a diverse sample of underserved, lower-income, African-American and Hispanic WWH in whom social correlates of health are common (e.g., low educational attainment, poverty, food insecurity), which may lead to more heterogeneous patterns of cognitive function. Importantly, this demographic is a more accurate reflection of the HIV epidemic than the predominantly White populations evaluated in other cohorts.

The addition of machine learning models to traditional univariate statistics to identify dominating predictor variables is another distinguishing aspect of the current study. It is important to point out that RF modeling is non-linear and that the variable importance measure does not take directionality into account. Therefore, it is possible for a top predictor variable from RF not to reach P < 0.05 in a t-test. RF models are also multivariate: the predictive capability of each variable is always assessed within the context of the other variables. This is important considering that none of these factors exists in isolation. This makes the model more powerful, but one limitation of this statistical approach is that it becomes more difficult to interpret and should be used as a springboard for more mechanistic studies and interventions, which is why machine learning models are often thought of as "hypothesis-generating" models.

In conclusion, in the largest sample of women to date in the United States, we have used a novel pipeline of machine learning methods to identify subgroups with distinct patterns of NP performance and created predictive models to identify the factors that distinguish each pattern from an overall "unimpaired" group. We identified distinct patterns of cognitive weaknesses in VS-WWH that differed from the distinct patterns in HIV-uninfected women. We also identified factors that may contribute to these specific profiles as a springboard for mechanistic or interventional studies. Future studies should also investigate the stability of these profiles over time and identify the ones, if any, that are prone to future decline.

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by the Institutional Review Board. The patients/participants provided their written informed consent to participate in this study.

AUTHOR CONTRIBUTIONS

LR has primary responsibility for the final content and conceived the study idea. RD conducted the statistical analyses. RD, AB, and LR wrote the paper. All authors contributed to manuscript editing and statistical review, and read and approved the final manuscript.
/*******************************************************************************
* Copyright 2013 SAP AG
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
******************************************************************************/
package com.sap.core.odata.core.ep.producer;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotNull;
import java.io.InputStream;
import java.net.URI;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.junit.Test;
import com.sap.core.odata.api.commons.HttpContentType;
import com.sap.core.odata.api.edm.EdmFunctionImport;
import com.sap.core.odata.api.ep.EntityProviderWriteProperties;
import com.sap.core.odata.api.processor.ODataResponse;
import com.sap.core.odata.core.ep.JsonEntityProvider;
import com.sap.core.odata.testutil.fit.BaseTest;
import com.sap.core.odata.testutil.helper.StringHelper;
import com.sap.core.odata.testutil.mock.MockFacade;
/**
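 * Tests writing the result of a function import as JSON: single simple and
 * complex types, collections of simple and complex types, and a single
 * entity type.
 *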
* @author SAP AG
*/
public class JsonFunctionImportTest extends BaseTest {
@Test
public void singleSimpleType() throws Exception {
final EdmFunctionImport functionImport = MockFacade.getMockEdm().getDefaultEntityContainer().getFunctionImport("MaximalAge");
final ODataResponse response = new JsonEntityProvider().writeFunctionImport(functionImport, 42, null);
assertNotNull(response);
assertNotNull(response.getEntity());
assertEquals(HttpContentType.APPLICATION_JSON, response.getContentHeader());
final String json = StringHelper.inputStreamToString((InputStream) response.getEntity());
assertNotNull(json);
assertEquals("{\"d\":{\"MaximalAge\":42}}", json);
}
@Test
public void singleComplexType() throws Exception {
final EdmFunctionImport functionImport = MockFacade.getMockEdm().getDefaultEntityContainer().getFunctionImport("MostCommonLocation");
Map<String, Object> cityData = new HashMap<String, Object>();
cityData.put("PostalCode", "8392");
cityData.put("CityName", "Å");
Map<String, Object> locationData = new HashMap<String, Object>();
locationData.put("City", cityData);
locationData.put("Country", "NO");
final ODataResponse response = new JsonEntityProvider().writeFunctionImport(functionImport, locationData, null);
assertNotNull(response);
assertNotNull(response.getEntity());
assertEquals(HttpContentType.APPLICATION_JSON, response.getContentHeader());
final String json = StringHelper.inputStreamToString((InputStream) response.getEntity());
assertNotNull(json);
assertEquals("{\"d\":{\"MostCommonLocation\":{"
+ "\"__metadata\":{\"type\":\"RefScenario.c_Location\"},"
+ "\"City\":{\"__metadata\":{\"type\":\"RefScenario.c_City\"},\"PostalCode\":\"8392\","
+ "\"CityName\":\"Å\"},\"Country\":\"NO\"}}}",
json);
}
@Test
public void collectionOfSimpleTypes() throws Exception {
final EdmFunctionImport functionImport = MockFacade.getMockEdm().getDefaultEntityContainer().getFunctionImport("AllUsedRoomIds");
final ODataResponse response = new JsonEntityProvider().writeFunctionImport(functionImport, Arrays.asList("1", "2", "3"), null);
assertNotNull(response);
assertNotNull(response.getEntity());
assertEquals(HttpContentType.APPLICATION_JSON, response.getContentHeader());
final String json = StringHelper.inputStreamToString((InputStream) response.getEntity());
assertNotNull(json);
assertEquals("{\"d\":{\"__metadata\":{\"type\":\"Collection(Edm.String)\"},"
+ "\"results\":[\"1\",\"2\",\"3\"]}}",
json);
}
@Test
public void collectionOfComplexTypes() throws Exception {
final EdmFunctionImport functionImport = MockFacade.getMockEdm().getDefaultEntityContainer().getFunctionImport("AllLocations");
Map<String, Object> locationData = new HashMap<String, Object>();
locationData.put("Country", "NO");
List<Map<String, Object>> locations = new ArrayList<Map<String, Object>>();
locations.add(locationData);
final ODataResponse response = new JsonEntityProvider().writeFunctionImport(functionImport, locations, null);
assertNotNull(response);
assertNotNull(response.getEntity());
assertEquals(HttpContentType.APPLICATION_JSON, response.getContentHeader());
final String json = StringHelper.inputStreamToString((InputStream) response.getEntity());
assertNotNull(json);
assertEquals("{\"d\":{\"__metadata\":{\"type\":\"Collection(RefScenario.c_Location)\"},"
+ "\"results\":[{\"__metadata\":{\"type\":\"RefScenario.c_Location\"},"
+ "\"City\":{\"__metadata\":{\"type\":\"RefScenario.c_City\"},"
+ "\"PostalCode\":null,\"CityName\":null},\"Country\":\"NO\"}]}}",
json);
}
@Test
public void singleEntityType() throws Exception {
final EdmFunctionImport functionImport = MockFacade.getMockEdm().getDefaultEntityContainer().getFunctionImport("OldestEmployee");
final String uri = "http://host:80/service/";
final EntityProviderWriteProperties properties =
EntityProviderWriteProperties.serviceRoot(URI.create(uri)).build();
Map<String, Object> employeeData = new HashMap<String, Object>();
employeeData.put("EmployeeId", "1");
employeeData.put("getImageType", "image/jpeg");
final ODataResponse response = new JsonEntityProvider().writeFunctionImport(functionImport, employeeData, properties);
assertNotNull(response);
assertNotNull(response.getEntity());
assertEquals(HttpContentType.APPLICATION_JSON, response.getContentHeader());
final String json = StringHelper.inputStreamToString((InputStream) response.getEntity());
assertNotNull(json);
assertEquals("{\"d\":{\"__metadata\":{"
+ "\"id\":\"" + uri + "Employees('1')\","
+ "\"uri\":\"" + uri + "Employees('1')\","
+ "\"type\":\"RefScenario.Employee\",\"content_type\":\"image/jpeg\","
+ "\"media_src\":\"Employees('1')/$value\","
+ "\"edit_media\":\"" + uri + "Employees('1')/$value\"},"
+ "\"EmployeeId\":\"1\",\"EmployeeName\":null,"
+ "\"ManagerId\":null,\"RoomId\":null,\"TeamId\":null,"
+ "\"Location\":{\"__metadata\":{\"type\":\"RefScenario.c_Location\"},"
+ "\"City\":{\"__metadata\":{\"type\":\"RefScenario.c_City\"},"
+ "\"PostalCode\":null,\"CityName\":null},\"Country\":null},\"Age\":null,"
+ "\"EntryDate\":null,\"ImageUrl\":null,"
+ "\"ne_Manager\":{\"__deferred\":{\"uri\":\"" + uri + "Employees('1')/ne_Manager\"}},"
+ "\"ne_Team\":{\"__deferred\":{\"uri\":\"" + uri + "Employees('1')/ne_Team\"}},"
+ "\"ne_Room\":{\"__deferred\":{\"uri\":\"" + uri + "Employees('1')/ne_Room\"}}}}",
json);
}
}
/*
CF3
Copyright (c) 2015 ishiura-lab.
Released under the MIT license.
https://github.com/ishiura-compiler/CF3/MIT-LICENSE.md
*/
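/*
 * Each test function f<N>() below evaluates an expression of the form
 * ((a <= (b | c)) + d) over mixed integer types and widths, and compares
 * the result against a precomputed constant; NG() (presumably defined in
 * test1.h) reports any mismatch.
 */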
#include<stdio.h>
#include<stdint.h>
#include<stdlib.h>
#include"test1.h"
volatile uint64_t x2 = 134317538217261LLU;
uint8_t x3 = 45U;
int32_t t0 = 3;
int32_t x8 = -1;
static volatile uint64_t x12 = UINT64_MAX;
uint16_t x24 = 34U;
volatile int32_t t5 = -11;
volatile int8_t x31 = -1;
static int32_t x39 = INT32_MIN;
volatile int32_t t8 = 1;
uint8_t x43 = 126U;
int16_t x50 = INT16_MAX;
volatile uint32_t x55 = 12U;
uint32_t x58 = 1U;
int16_t x60 = -11015;
uint16_t x61 = 4065U;
int16_t x66 = -2;
uint8_t x77 = 40U;
volatile int32_t t18 = 21346;
int8_t x82 = INT8_MAX;
static int32_t x89 = INT32_MIN;
int8_t x92 = 1;
static int32_t x113 = INT32_MIN;
volatile int8_t x116 = -1;
int16_t x120 = INT16_MIN;
uint16_t x127 = UINT16_MAX;
int16_t x128 = 5501;
static volatile int32_t x130 = -1;
static int32_t x132 = -1;
volatile uint32_t x133 = 822769U;
int16_t x137 = INT16_MIN;
volatile int32_t x148 = -2;
int32_t x151 = -1;
int16_t x154 = -1;
static int16_t x159 = 1522;
int64_t x163 = INT64_MIN;
volatile int32_t t36 = -6041021;
int16_t x165 = INT16_MIN;
uint16_t x167 = 289U;
int8_t x171 = INT8_MIN;
volatile int32_t x183 = INT32_MIN;
int64_t x184 = 438460571531617LL;
int64_t x187 = -1LL;
volatile uint64_t x192 = 26337LLU;
volatile uint64_t t43 = 3293907846955263LLU;
int8_t x195 = -4;
int64_t x197 = 3578077336878484LL;
static volatile int32_t x210 = 491;
static volatile int16_t x218 = INT16_MAX;
int64_t x221 = -1LL;
int32_t x230 = -63765325;
volatile int32_t t55 = 35489479;
uint8_t x243 = UINT8_MAX;
int32_t x261 = -1;
int8_t x266 = INT8_MAX;
volatile int32_t t62 = -30;
static volatile int32_t x270 = -497066485;
static int8_t x272 = -1;
volatile int64_t x275 = -1LL;
int32_t t66 = 833;
int8_t x286 = INT8_MIN;
int64_t x293 = 110080002217360LL;
volatile int32_t t69 = INT32_MAX;
static volatile uint32_t x300 = 3500U;
static int64_t x304 = INT64_MAX;
int32_t t72 = 6;
static int32_t x330 = INT32_MIN;
volatile uint64_t t78 = 5553673LLU;
int64_t x336 = 5747565594LL;
uint32_t x344 = 1U;
int32_t x345 = INT32_MIN;
int64_t t82 = -377714898910LL;
int8_t x358 = -2;
int64_t x362 = -265369482399929088LL;
int64_t x363 = -1743471560444LL;
volatile uint64_t t86 = 16LLU;
static uint32_t x375 = 881867701U;
int16_t x377 = INT16_MIN;
static int64_t x378 = -1LL;
volatile int16_t x389 = -2196;
int64_t x392 = INT64_MIN;
static volatile uint8_t x394 = 0U;
static volatile int32_t t92 = -2434970;
volatile int32_t x398 = 969536072;
uint8_t x399 = 27U;
int8_t x400 = -2;
volatile int32_t t93 = -26;
int16_t x406 = INT16_MIN;
volatile uint32_t t95 = UINT32_MAX;
int32_t x414 = -1877501;
int16_t x417 = INT16_MAX;
int16_t x418 = INT16_MIN;
static int16_t x421 = -1;
int64_t x431 = INT64_MIN;
uint16_t x432 = 345U;
uint32_t x436 = 300832U;
int32_t t103 = 1;
uint64_t x442 = 9943518LLU;
int8_t x448 = INT8_MAX;
volatile int32_t t105 = -714422;
int16_t x449 = 0;
uint32_t x451 = 4970436U;
uint16_t x458 = UINT16_MAX;
static volatile int32_t t108 = 4814;
static int32_t x465 = INT32_MIN;
int64_t x475 = -1LL;
uint16_t x476 = 0U;
static volatile int32_t t115 = 3177;
int64_t x498 = 922980996145402815LL;
int8_t x499 = INT8_MIN;
int32_t x505 = INT32_MIN;
uint8_t x515 = UINT8_MAX;
int64_t x518 = -28LL;
int16_t x523 = INT16_MAX;
static uint64_t x532 = 3930299175311919LLU;
static uint64_t x541 = 132253651592LLU;
volatile int64_t x544 = -1LL;
int32_t x546 = INT32_MAX;
volatile int8_t x547 = INT8_MAX;
static int32_t x548 = INT32_MIN;
int8_t x551 = -1;
uint8_t x552 = 2U;
int8_t x556 = -2;
int16_t x557 = -1;
static int8_t x561 = INT8_MAX;
uint16_t x562 = 5947U;
static uint64_t x564 = 780794211586916325LLU;
static int32_t x565 = 688968285;
int32_t x569 = INT32_MIN;
int16_t x571 = INT16_MIN;
volatile int32_t t140 = 1;
int8_t x597 = INT8_MAX;
int8_t x599 = -1;
int64_t x600 = -993818LL;
int32_t x607 = 345;
int8_t x617 = -46;
uint16_t x618 = 63U;
volatile int32_t t147 = -142;
static uint16_t x635 = 1U;
volatile int32_t t150 = 3;
uint8_t x642 = 8U;
volatile uint64_t x644 = UINT64_MAX;
uint64_t t151 = 3632LLU;
static uint8_t x646 = 13U;
uint8_t x650 = 3U;
static volatile uint8_t x651 = 7U;
uint32_t x654 = UINT32_MAX;
static int8_t x657 = -14;
uint64_t x659 = 857190107LLU;
static uint16_t x663 = 0U;
int8_t x664 = INT8_MIN;
static int16_t x678 = INT16_MIN;
static int16_t x686 = 1;
uint32_t x691 = UINT32_MAX;
static volatile int64_t t165 = 1067316119LL;
volatile int64_t x706 = INT64_MAX;
uint64_t x710 = UINT64_MAX;
static volatile int64_t x714 = INT64_MIN;
int32_t x716 = INT32_MIN;
static int64_t x721 = INT64_MAX;
volatile int8_t x731 = INT8_MAX;
volatile int32_t x732 = INT32_MIN;
volatile int32_t t172 = INT32_MIN;
static int64_t x733 = INT64_MAX;
uint16_t x734 = UINT16_MAX;
int8_t x736 = INT8_MAX;
volatile int32_t x737 = 105;
static volatile uint16_t x756 = UINT16_MAX;
volatile int32_t t178 = -801;
static uint32_t x757 = 133899538U;
volatile int8_t x767 = -1;
uint16_t x769 = UINT16_MAX;
int32_t t184 = -21812141;
int8_t x787 = -1;
uint16_t x795 = 0U;
volatile int32_t t190 = -2;
uint32_t x812 = 1U;
volatile int64_t x816 = INT64_MIN;
int32_t x819 = INT32_MAX;
static int32_t t194 = -75346526;
uint16_t x823 = UINT16_MAX;
int32_t x829 = -1;
static int8_t x830 = -1;
volatile int32_t t197 = 33598735;
int16_t x840 = 7245;
void f0(void) {
uint64_t x1 = 1307846LLU;
static int16_t x4 = INT16_MIN;
t0 = ((x1<=(x2|x3))+x4);
if (t0 != -32767) { NG(); } else { ; }
}
void f1(void) {
int32_t x5 = 223;
static int32_t x6 = 4;
int64_t x7 = 4101620784LL;
static int32_t t1 = -32419;
t1 = ((x5<=(x6|x7))+x8);
if (t1 != 0) { NG(); } else { ; }
}
void f2(void) {
static uint8_t x9 = UINT8_MAX;
int64_t x10 = -1114LL;
static int32_t x11 = INT32_MAX;
uint64_t t2 = UINT64_MAX;
t2 = ((x9<=(x10|x11))+x12);
if (t2 != UINT64_MAX) { NG(); } else { ; }
}
void f3(void) {
uint16_t x13 = UINT16_MAX;
int8_t x14 = INT8_MIN;
static uint8_t x15 = 12U;
int16_t x16 = -1;
static int32_t t3 = 206605096;
t3 = ((x13<=(x14|x15))+x16);
if (t3 != -1) { NG(); } else { ; }
}
void f4(void) {
uint32_t x17 = UINT32_MAX;
int32_t x18 = INT32_MIN;
int32_t x19 = INT32_MAX;
int8_t x20 = INT8_MIN;
static int32_t t4 = 1178;
t4 = ((x17<=(x18|x19))+x20);
if (t4 != -127) { NG(); } else { ; }
}
void f5(void) {
int32_t x21 = 343182;
uint32_t x22 = 2414U;
static uint64_t x23 = UINT64_MAX;
t5 = ((x21<=(x22|x23))+x24);
if (t5 != 35) { NG(); } else { ; }
}
void f6(void) {
uint16_t x29 = 2U;
static int16_t x30 = 14;
int16_t x32 = INT16_MIN;
volatile int32_t t6 = -1072;
t6 = ((x29<=(x30|x31))+x32);
if (t6 != -32768) { NG(); } else { ; }
}
void f7(void) {
uint64_t x33 = 7574LLU;
int8_t x34 = 20;
uint32_t x35 = UINT32_MAX;
int16_t x36 = INT16_MAX;
volatile int32_t t7 = 454317241;
t7 = ((x33<=(x34|x35))+x36);
if (t7 != 32768) { NG(); } else { ; }
}
void f8(void) {
int32_t x37 = INT32_MIN;
static volatile uint8_t x38 = 3U;
int8_t x40 = INT8_MIN;
t8 = ((x37<=(x38|x39))+x40);
if (t8 != -127) { NG(); } else { ; }
}
void f9(void) {
static int32_t x41 = INT32_MIN;
static uint8_t x42 = 2U;
int16_t x44 = -1;
volatile int32_t t9 = -103;
t9 = ((x41<=(x42|x43))+x44);
if (t9 != 0) { NG(); } else { ; }
}
void f10(void) {
int32_t x45 = INT32_MAX;
uint32_t x46 = 11278U;
int32_t x47 = -1;
volatile uint32_t x48 = UINT32_MAX;
volatile uint32_t t10 = 5420081U;
t10 = ((x45<=(x46|x47))+x48);
if (t10 != 0U) { NG(); } else { ; }
}
void f11(void) {
int32_t x49 = 103449;
uint64_t x51 = 26075615853LLU;
static int8_t x52 = 0;
int32_t t11 = -1;
t11 = ((x49<=(x50|x51))+x52);
if (t11 != 1) { NG(); } else { ; }
}
void f12(void) {
int32_t x53 = INT32_MIN;
uint64_t x54 = 35466318LLU;
volatile uint8_t x56 = UINT8_MAX;
volatile int32_t t12 = 967433144;
t12 = ((x53<=(x54|x55))+x56);
if (t12 != 255) { NG(); } else { ; }
}
void f13(void) {
static volatile int8_t x57 = INT8_MAX;
int64_t x59 = INT64_MIN;
volatile int32_t t13 = 102723;
t13 = ((x57<=(x58|x59))+x60);
if (t13 != -11015) { NG(); } else { ; }
}
void f14(void) {
uint8_t x62 = 2U;
static volatile uint8_t x63 = 40U;
uint64_t x64 = 503375943147LLU;
static volatile uint64_t t14 = 102533765LLU;
t14 = ((x61<=(x62|x63))+x64);
if (t14 != 503375943147LLU) { NG(); } else { ; }
}
void f15(void) {
volatile int64_t x65 = -677044LL;
static uint16_t x67 = 7U;
uint64_t x68 = 1188359555714163717LLU;
static uint64_t t15 = 107LLU;
t15 = ((x65<=(x66|x67))+x68);
if (t15 != 1188359555714163718LLU) { NG(); } else { ; }
}
void f16(void) {
int64_t x69 = -1LL;
volatile int32_t x70 = INT32_MIN;
uint16_t x71 = UINT16_MAX;
int16_t x72 = -1;
volatile int32_t t16 = 3284;
t16 = ((x69<=(x70|x71))+x72);
if (t16 != -1) { NG(); } else { ; }
}
void f17(void) {
volatile int32_t x73 = -840;
volatile int64_t x74 = INT64_MIN;
static int64_t x75 = INT64_MAX;
uint32_t x76 = UINT32_MAX;
uint32_t t17 = 0U;
t17 = ((x73<=(x74|x75))+x76);
if (t17 != 0U) { NG(); } else { ; }
}
void f18(void) {
static volatile int32_t x78 = INT32_MAX;
int16_t x79 = INT16_MAX;
static uint8_t x80 = 43U;
t18 = ((x77<=(x78|x79))+x80);
if (t18 != 44) { NG(); } else { ; }
}
void f19(void) {
int8_t x81 = 1;
static int32_t x83 = INT32_MIN;
int32_t x84 = INT32_MIN;
volatile int32_t t19 = INT32_MIN;
t19 = ((x81<=(x82|x83))+x84);
if (t19 != INT32_MIN) { NG(); } else { ; }
}
void f20(void) {
uint32_t x85 = 543271216U;
int32_t x86 = INT32_MIN;
int8_t x87 = INT8_MIN;
int8_t x88 = 6;
int32_t t20 = -695;
t20 = ((x85<=(x86|x87))+x88);
if (t20 != 7) { NG(); } else { ; }
}
void f21(void) {
uint32_t x90 = 28U;
static int64_t x91 = INT64_MIN;
volatile int32_t t21 = -9589838;
t21 = ((x89<=(x90|x91))+x92);
if (t21 != 1) { NG(); } else { ; }
}
void f22(void) {
int8_t x93 = INT8_MIN;
int64_t x94 = -1LL;
int8_t x95 = INT8_MAX;
int8_t x96 = INT8_MIN;
static int32_t t22 = 808467489;
t22 = ((x93<=(x94|x95))+x96);
if (t22 != -127) { NG(); } else { ; }
}
void f23(void) {
int32_t x97 = INT32_MAX;
uint32_t x98 = 799U;
static int32_t x99 = INT32_MAX;
int8_t x100 = -1;
int32_t t23 = 2157738;
t23 = ((x97<=(x98|x99))+x100);
if (t23 != 0) { NG(); } else { ; }
}
void f24(void) {
volatile int8_t x101 = INT8_MIN;
int16_t x102 = -1;
static int32_t x103 = 65915916;
int64_t x104 = INT64_MIN;
static volatile int64_t t24 = 254886172LL;
t24 = ((x101<=(x102|x103))+x104);
if (t24 != -9223372036854775807LL) { NG(); } else { ; }
}
void f25(void) {
int16_t x114 = -1711;
static int16_t x115 = INT16_MIN;
volatile int32_t t25 = -1;
t25 = ((x113<=(x114|x115))+x116);
if (t25 != 0) { NG(); } else { ; }
}
void f26(void) {
static int16_t x117 = -11;
int8_t x118 = INT8_MIN;
volatile int32_t x119 = 851;
int32_t t26 = -268570;
t26 = ((x117<=(x118|x119))+x120);
if (t26 != -32768) { NG(); } else { ; }
}
void f27(void) {
volatile int16_t x121 = INT16_MAX;
static int16_t x122 = -1;
static int16_t x123 = 23;
volatile int64_t x124 = INT64_MAX;
int64_t t27 = INT64_MAX;
t27 = ((x121<=(x122|x123))+x124);
if (t27 != INT64_MAX) { NG(); } else { ; }
}
void f28(void) {
int32_t x125 = INT32_MAX;
int8_t x126 = INT8_MAX;
int32_t t28 = 5152;
t28 = ((x125<=(x126|x127))+x128);
if (t28 != 5501) { NG(); } else { ; }
}
void f29(void) {
volatile int8_t x129 = INT8_MIN;
volatile uint32_t x131 = UINT32_MAX;
static int32_t t29 = 301;
t29 = ((x129<=(x130|x131))+x132);
if (t29 != 0) { NG(); } else { ; }
}
void f30(void) {
int8_t x134 = INT8_MIN;
int32_t x135 = INT32_MAX;
static int8_t x136 = 1;
int32_t t30 = 0;
t30 = ((x133<=(x134|x135))+x136);
if (t30 != 2) { NG(); } else { ; }
}
void f31(void) {
int32_t x138 = INT32_MAX;
int32_t x139 = INT32_MIN;
int64_t x140 = -16319505000LL;
static volatile int64_t t31 = 48LL;
t31 = ((x137<=(x138|x139))+x140);
if (t31 != -16319504999LL) { NG(); } else { ; }
}
void f32(void) {
static volatile uint32_t x145 = 242U;
static int64_t x146 = INT64_MAX;
static int16_t x147 = -1;
volatile int32_t t32 = 227818019;
t32 = ((x145<=(x146|x147))+x148);
if (t32 != -2) { NG(); } else { ; }
}
void f33(void) {
int32_t x149 = 106727605;
int64_t x150 = INT64_MAX;
int64_t x152 = INT64_MIN;
volatile int64_t t33 = INT64_MIN;
t33 = ((x149<=(x150|x151))+x152);
if (t33 != INT64_MIN) { NG(); } else { ; }
}
void f34(void) {
volatile uint32_t x153 = 1U;
uint16_t x155 = 199U;
int16_t x156 = INT16_MIN;
int32_t t34 = 114306;
t34 = ((x153<=(x154|x155))+x156);
if (t34 != -32767) { NG(); } else { ; }
}
void f35(void) {
static uint64_t x157 = 17524164626139LLU;
volatile int64_t x158 = -1LL;
uint64_t x160 = UINT64_MAX;
volatile uint64_t t35 = 58973968LLU;
t35 = ((x157<=(x158|x159))+x160);
if (t35 != 0LLU) { NG(); } else { ; }
}
void f36(void) {
static int32_t x161 = -1;
int8_t x162 = -1;
int32_t x164 = 169;
t36 = ((x161<=(x162|x163))+x164);
if (t36 != 170) { NG(); } else { ; }
}
void f37(void) {
int8_t x166 = INT8_MIN;
uint16_t x168 = 5854U;
volatile int32_t t37 = 119396;
t37 = ((x165<=(x166|x167))+x168);
if (t37 != 5855) { NG(); } else { ; }
}
void f38(void) {
uint32_t x169 = 6392U;
volatile int8_t x170 = -1;
static uint8_t x172 = 2U;
volatile int32_t t38 = 148143;
t38 = ((x169<=(x170|x171))+x172);
if (t38 != 3) { NG(); } else { ; }
}
void f39(void) {
static int32_t x173 = INT32_MIN;
int16_t x174 = INT16_MIN;
static uint32_t x175 = 15614U;
volatile int32_t x176 = -54981;
static int32_t t39 = -207;
t39 = ((x173<=(x174|x175))+x176);
if (t39 != -54980) { NG(); } else { ; }
}
void f40(void) {
int8_t x177 = 1;
uint16_t x178 = 66U;
volatile int32_t x179 = INT32_MIN;
static volatile int64_t x180 = INT64_MAX;
int64_t t40 = INT64_MAX;
t40 = ((x177<=(x178|x179))+x180);
if (t40 != INT64_MAX) { NG(); } else { ; }
}
void f41(void) {
int32_t x181 = INT32_MIN;
int16_t x182 = INT16_MIN;
volatile int64_t t41 = -1LL;
t41 = ((x181<=(x182|x183))+x184);
if (t41 != 438460571531618LL) { NG(); } else { ; }
}
void f42(void) {
static uint64_t x185 = 4174010489LLU;
int8_t x186 = INT8_MIN;
int32_t x188 = -107830;
int32_t t42 = 54385807;
t42 = ((x185<=(x186|x187))+x188);
if (t42 != -107829) { NG(); } else { ; }
}
void f43(void) {
volatile int16_t x189 = INT16_MAX;
int32_t x190 = -1;
uint16_t x191 = UINT16_MAX;
t43 = ((x189<=(x190|x191))+x192);
if (t43 != 26337LLU) { NG(); } else { ; }
}
void f44(void) {
uint8_t x193 = 70U;
int8_t x194 = INT8_MIN;
uint32_t x196 = UINT32_MAX;
volatile uint32_t t44 = UINT32_MAX;
t44 = ((x193<=(x194|x195))+x196);
if (t44 != UINT32_MAX) { NG(); } else { ; }
}
void f45(void) {
static uint16_t x198 = UINT16_MAX;
int64_t x199 = INT64_MIN;
uint16_t x200 = 73U;
volatile int32_t t45 = 25;
t45 = ((x197<=(x198|x199))+x200);
if (t45 != 73) { NG(); } else { ; }
}
void f46(void) {
int16_t x201 = 3;
static uint64_t x202 = UINT64_MAX;
int8_t x203 = -1;
static int64_t x204 = -1LL;
static volatile int64_t t46 = -527177854194LL;
t46 = ((x201<=(x202|x203))+x204);
if (t46 != 0LL) { NG(); } else { ; }
}
void f47(void) {
static int8_t x205 = INT8_MIN;
int8_t x206 = -1;
static uint16_t x207 = UINT16_MAX;
static volatile uint16_t x208 = 15U;
static int32_t t47 = 14949773;
t47 = ((x205<=(x206|x207))+x208);
if (t47 != 16) { NG(); } else { ; }
}
void f48(void) {
uint8_t x209 = 21U;
int64_t x211 = INT64_MIN;
static uint16_t x212 = UINT16_MAX;
static volatile int32_t t48 = 28928339;
t48 = ((x209<=(x210|x211))+x212);
if (t48 != 65535) { NG(); } else { ; }
}
void f49(void) {
static int16_t x213 = INT16_MIN;
volatile uint64_t x214 = UINT64_MAX;
static volatile uint32_t x215 = 407027691U;
uint16_t x216 = 3884U;
static volatile int32_t t49 = 0;
t49 = ((x213<=(x214|x215))+x216);
if (t49 != 3885) { NG(); } else { ; }
}
void f50(void) {
int8_t x217 = -1;
int64_t x219 = INT64_MIN;
volatile uint64_t x220 = 773898921003LLU;
volatile uint64_t t50 = 471374329606LLU;
t50 = ((x217<=(x218|x219))+x220);
if (t50 != 773898921003LLU) { NG(); } else { ; }
}
void f51(void) {
uint32_t x222 = 2946U;
volatile uint16_t x223 = UINT16_MAX;
volatile int16_t x224 = 0;
volatile int32_t t51 = -186065;
t51 = ((x221<=(x222|x223))+x224);
if (t51 != 1) { NG(); } else { ; }
}
void f52(void) {
static volatile uint64_t x225 = UINT64_MAX;
volatile uint32_t x226 = UINT32_MAX;
int32_t x227 = INT32_MAX;
int32_t x228 = INT32_MAX;
int32_t t52 = INT32_MAX;
t52 = ((x225<=(x226|x227))+x228);
if (t52 != INT32_MAX) { NG(); } else { ; }
}
void f53(void) {
uint64_t x229 = UINT64_MAX;
uint64_t x231 = 1602208673LLU;
int32_t x232 = 3;
volatile int32_t t53 = 7;
t53 = ((x229<=(x230|x231))+x232);
if (t53 != 3) { NG(); } else { ; }
}
void f54(void) {
int64_t x233 = INT64_MIN;
uint64_t x234 = 192486082733LLU;
volatile int64_t x235 = INT64_MIN;
uint8_t x236 = 31U;
int32_t t54 = -60167;
t54 = ((x233<=(x234|x235))+x236);
if (t54 != 32) { NG(); } else { ; }
}
void f55(void) {
static int16_t x237 = INT16_MAX;
volatile int16_t x238 = 1;
int8_t x239 = INT8_MIN;
static int8_t x240 = -1;
t55 = ((x237<=(x238|x239))+x240);
if (t55 != -1) { NG(); } else { ; }
}
void f56(void) {
int64_t x241 = -27839735LL;
uint32_t x242 = 14533U;
int16_t x244 = 7;
int32_t t56 = -427000;
t56 = ((x241<=(x242|x243))+x244);
if (t56 != 8) { NG(); } else { ; }
}
void f57(void) {
int32_t x245 = INT32_MIN;
static volatile uint64_t x246 = 209643095LLU;
static uint8_t x247 = 17U;
int32_t x248 = 10772;
volatile int32_t t57 = 1558453;
t57 = ((x245<=(x246|x247))+x248);
if (t57 != 10772) { NG(); } else { ; }
}
void f58(void) {
uint64_t x249 = 34994871LLU;
uint64_t x250 = 148564970243163992LLU;
int16_t x251 = INT16_MIN;
uint32_t x252 = 5188339U;
volatile uint32_t t58 = 19438991U;
t58 = ((x249<=(x250|x251))+x252);
if (t58 != 5188340U) { NG(); } else { ; }
}
void f59(void) {
int64_t x253 = INT64_MAX;
uint64_t x254 = 27510LLU;
volatile int8_t x255 = -1;
static uint8_t x256 = 7U;
volatile int32_t t59 = -5386;
t59 = ((x253<=(x254|x255))+x256);
if (t59 != 8) { NG(); } else { ; }
}
void f60(void) {
static int32_t x257 = INT32_MAX;
int32_t x258 = INT32_MAX;
volatile uint32_t x259 = 16039715U;
int16_t x260 = -27;
volatile int32_t t60 = 252;
t60 = ((x257<=(x258|x259))+x260);
if (t60 != -26) { NG(); } else { ; }
}
void f61(void) {
volatile int64_t x262 = 156547742654799LL;
static uint32_t x263 = 62059U;
int8_t x264 = -14;
static volatile int32_t t61 = 450;
t61 = ((x261<=(x262|x263))+x264);
if (t61 != -13) { NG(); } else { ; }
}
void f62(void) {
int8_t x265 = -1;
volatile int8_t x267 = -8;
uint8_t x268 = 1U;
t62 = ((x265<=(x266|x267))+x268);
if (t62 != 2) { NG(); } else { ; }
}
void f63(void) {
static uint16_t x269 = UINT16_MAX;
static volatile int64_t x271 = -86012LL;
int32_t t63 = -57265;
t63 = ((x269<=(x270|x271))+x272);
if (t63 != -1) { NG(); } else { ; }
}
void f64(void) {
static int32_t x273 = -3903;
uint32_t x274 = UINT32_MAX;
int8_t x276 = -1;
volatile int32_t t64 = 19751;
t64 = ((x273<=(x274|x275))+x276);
if (t64 != 0) { NG(); } else { ; }
}
void f65(void) {
volatile int8_t x277 = -1;
int16_t x278 = 1127;
int32_t x279 = -1;
volatile int64_t x280 = INT64_MIN;
static volatile int64_t t65 = 1396940091196071LL;
t65 = ((x277<=(x278|x279))+x280);
if (t65 != -9223372036854775807LL) { NG(); } else { ; }
}
void f66(void) {
int8_t x281 = INT8_MIN;
volatile int8_t x282 = INT8_MAX;
uint8_t x283 = UINT8_MAX;
int16_t x284 = INT16_MIN;
t66 = ((x281<=(x282|x283))+x284);
if (t66 != -32767) { NG(); } else { ; }
}
void f67(void) {
int64_t x285 = -1LL;
static volatile int8_t x287 = 3;
volatile int8_t x288 = -50;
int32_t t67 = 48;
t67 = ((x285<=(x286|x287))+x288);
if (t67 != -50) { NG(); } else { ; }
}
void f68(void) {
int32_t x289 = INT32_MAX;
int8_t x290 = INT8_MIN;
int64_t x291 = -18216LL;
int32_t x292 = INT32_MAX;
static int32_t t68 = INT32_MAX;
t68 = ((x289<=(x290|x291))+x292);
if (t68 != INT32_MAX) { NG(); } else { ; }
}
void f69(void) {
uint8_t x294 = 0U;
static int16_t x295 = INT16_MIN;
static int32_t x296 = INT32_MAX;
t69 = ((x293<=(x294|x295))+x296);
if (t69 != INT32_MAX) { NG(); } else { ; }
}
void f70(void) {
static int16_t x297 = 1394;
volatile uint8_t x298 = UINT8_MAX;
volatile int16_t x299 = INT16_MAX;
uint32_t t70 = 8U;
t70 = ((x297<=(x298|x299))+x300);
if (t70 != 3501U) { NG(); } else { ; }
}
void f71(void) {
uint32_t x301 = UINT32_MAX;
uint16_t x302 = UINT16_MAX;
static int64_t x303 = INT64_MIN;
static int64_t t71 = INT64_MAX;
t71 = ((x301<=(x302|x303))+x304);
if (t71 != INT64_MAX) { NG(); } else { ; }
}
void f72(void) {
int16_t x305 = -1;
static int64_t x306 = INT64_MIN;
int64_t x307 = -1LL;
int32_t x308 = INT32_MIN;
t72 = ((x305<=(x306|x307))+x308);
if (t72 != -2147483647) { NG(); } else { ; }
}
void f73(void) {
uint8_t x309 = 4U;
volatile uint8_t x310 = 105U;
static int16_t x311 = INT16_MAX;
uint16_t x312 = 3U;
int32_t t73 = 1280291;
t73 = ((x309<=(x310|x311))+x312);
if (t73 != 4) { NG(); } else { ; }
}
void f74(void) {
int8_t x313 = -1;
int64_t x314 = INT64_MIN;
int16_t x315 = INT16_MIN;
int8_t x316 = INT8_MIN;
int32_t t74 = 673301;
t74 = ((x313<=(x314|x315))+x316);
if (t74 != -128) { NG(); } else { ; }
}
void f75(void) {
static int64_t x317 = -484897810413LL;
volatile uint64_t x318 = 62286168LLU;
static int16_t x319 = -220;
int32_t x320 = -721;
volatile int32_t t75 = -11;
t75 = ((x317<=(x318|x319))+x320);
if (t75 != -720) { NG(); } else { ; }
}
void f76(void) {
uint64_t x321 = 346LLU;
int8_t x322 = 1;
int64_t x323 = -19438064LL;
volatile uint64_t x324 = 2489LLU;
volatile uint64_t t76 = 126764667775665881LLU;
t76 = ((x321<=(x322|x323))+x324);
if (t76 != 2490LLU) { NG(); } else { ; }
}
void f77(void) {
int16_t x325 = -1;
uint16_t x326 = UINT16_MAX;
uint64_t x327 = 1LLU;
static volatile int64_t x328 = INT64_MIN;
int64_t t77 = INT64_MIN;
t77 = ((x325<=(x326|x327))+x328);
if (t77 != INT64_MIN) { NG(); } else { ; }
}
void f78(void) {
int8_t x329 = INT8_MAX;
volatile int16_t x331 = INT16_MIN;
static uint64_t x332 = 0LLU;
t78 = ((x329<=(x330|x331))+x332);
if (t78 != 0LLU) { NG(); } else { ; }
}
void f79(void) {
volatile int8_t x333 = -1;
uint8_t x334 = 47U;
static uint16_t x335 = 0U;
volatile int64_t t79 = -214040LL;
t79 = ((x333<=(x334|x335))+x336);
if (t79 != 5747565595LL) { NG(); } else { ; }
}
void f80(void) {
int64_t x337 = -1LL;
static int64_t x338 = INT64_MIN;
int8_t x339 = 1;
static int64_t x340 = -1LL;
int64_t t80 = -56281125LL;
t80 = ((x337<=(x338|x339))+x340);
if (t80 != -1LL) { NG(); } else { ; }
}
void f81(void) {
volatile int16_t x341 = INT16_MIN;
int32_t x342 = 465880953;
int8_t x343 = -1;
volatile uint32_t t81 = 3060U;
t81 = ((x341<=(x342|x343))+x344);
if (t81 != 2U) { NG(); } else { ; }
}
void f82(void) {
volatile int8_t x346 = 1;
uint8_t x347 = UINT8_MAX;
static volatile int64_t x348 = 640LL;
t82 = ((x345<=(x346|x347))+x348);
if (t82 != 641LL) { NG(); } else { ; }
}
void f83(void) {
int64_t x349 = INT64_MIN;
int64_t x350 = -1LL;
uint32_t x351 = 5614U;
int64_t x352 = -42941099221299747LL;
int64_t t83 = -3173LL;
t83 = ((x349<=(x350|x351))+x352);
if (t83 != -42941099221299746LL) { NG(); } else { ; }
}
void f84(void) {
static uint64_t x353 = UINT64_MAX;
int32_t x354 = INT32_MAX;
uint64_t x355 = 1083878833226LLU;
static uint64_t x356 = 443932283600LLU;
uint64_t t84 = 957038826615LLU;
t84 = ((x353<=(x354|x355))+x356);
if (t84 != 443932283600LLU) { NG(); } else { ; }
}
void f85(void) {
static uint64_t x357 = 60356324LLU;
int64_t x359 = INT64_MIN;
volatile int64_t x360 = -146LL;
int64_t t85 = 2901402LL;
t85 = ((x357<=(x358|x359))+x360);
if (t85 != -145LL) { NG(); } else { ; }
}
void f86(void) {
int8_t x361 = INT8_MIN;
uint64_t x364 = 60272044937508999LLU;
t86 = ((x361<=(x362|x363))+x364);
if (t86 != 60272044937508999LLU) { NG(); } else { ; }
}
void f87(void) {
int64_t x373 = 1109254390145163003LL;
int8_t x374 = INT8_MIN;
volatile uint16_t x376 = UINT16_MAX;
volatile int32_t t87 = -92;
t87 = ((x373<=(x374|x375))+x376);
if (t87 != 65535) { NG(); } else { ; }
}
void f88(void) {
int8_t x379 = INT8_MAX;
volatile uint8_t x380 = 28U;
int32_t t88 = -6982;
t88 = ((x377<=(x378|x379))+x380);
if (t88 != 29) { NG(); } else { ; }
}
void f89(void) {
int32_t x381 = INT32_MAX;
int16_t x382 = INT16_MAX;
uint32_t x383 = 11643841U;
static int8_t x384 = INT8_MIN;
int32_t t89 = -439026;
t89 = ((x381<=(x382|x383))+x384);
if (t89 != -128) { NG(); } else { ; }
}
void f90(void) {
static int8_t x385 = INT8_MAX;
volatile int8_t x386 = INT8_MIN;
int32_t x387 = INT32_MIN;
int64_t x388 = INT64_MAX;
volatile int64_t t90 = INT64_MAX;
t90 = ((x385<=(x386|x387))+x388);
if (t90 != INT64_MAX) { NG(); } else { ; }
}
void f91(void) {
int32_t x390 = INT32_MIN;
static volatile uint16_t x391 = UINT16_MAX;
int64_t t91 = INT64_MIN;
t91 = ((x389<=(x390|x391))+x392);
if (t91 != INT64_MIN) { NG(); } else { ; }
}
void f92(void) {
volatile int64_t x393 = 6LL;
int64_t x395 = -1LL;
static int16_t x396 = 5;
t92 = ((x393<=(x394|x395))+x396);
if (t92 != 5) { NG(); } else { ; }
}
void f93(void) {
uint16_t x397 = UINT16_MAX;
t93 = ((x397<=(x398|x399))+x400);
if (t93 != -1) { NG(); } else { ; }
}
void f94(void) {
uint64_t x401 = 5818LLU;
uint64_t x402 = 46702LLU;
int16_t x403 = 6;
uint16_t x404 = 19U;
volatile int32_t t94 = 1;
t94 = ((x401<=(x402|x403))+x404);
if (t94 != 20) { NG(); } else { ; }
}
void f95(void) {
volatile int8_t x405 = INT8_MAX;
volatile int16_t x407 = INT16_MIN;
uint32_t x408 = UINT32_MAX;
t95 = ((x405<=(x406|x407))+x408);
if (t95 != UINT32_MAX) { NG(); } else { ; }
}
void f96(void) {
static volatile uint32_t x409 = UINT32_MAX;
static int32_t x410 = INT32_MAX;
uint64_t x411 = 872104877287518LLU;
int64_t x412 = -1LL;
static int64_t t96 = -75249751739943885LL;
t96 = ((x409<=(x410|x411))+x412);
if (t96 != 0LL) { NG(); } else { ; }
}
void f97(void) {
volatile uint32_t x413 = 69U;
int8_t x415 = INT8_MAX;
uint32_t x416 = UINT32_MAX;
uint32_t t97 = 871031U;
t97 = ((x413<=(x414|x415))+x416);
if (t97 != 0U) { NG(); } else { ; }
}
void f98(void) {
static int8_t x419 = INT8_MAX;
uint32_t x420 = 954597U;
uint32_t t98 = 3209U;
t98 = ((x417<=(x418|x419))+x420);
if (t98 != 954597U) { NG(); } else { ; }
}
void f99(void) {
volatile int64_t x422 = INT64_MAX;
volatile int32_t x423 = 845;
int32_t x424 = -565023830;
static volatile int32_t t99 = -7;
t99 = ((x421<=(x422|x423))+x424);
if (t99 != -565023829) { NG(); } else { ; }
}
void f100(void) {
volatile int32_t x425 = -2927;
uint16_t x426 = 10311U;
int64_t x427 = -1LL;
int8_t x428 = -25;
static volatile int32_t t100 = -702359849;
t100 = ((x425<=(x426|x427))+x428);
if (t100 != -24) { NG(); } else { ; }
}
void f101(void) {
static int64_t x429 = -1LL;
int16_t x430 = 0;
int32_t t101 = 3894475;
t101 = ((x429<=(x430|x431))+x432);
if (t101 != 345) { NG(); } else { ; }
}
void f102(void) {
int16_t x433 = 1;
int64_t x434 = -1LL;
volatile int16_t x435 = 237;
volatile uint32_t t102 = 19193915U;
t102 = ((x433<=(x434|x435))+x436);
if (t102 != 300832U) { NG(); } else { ; }
}
void f103(void) {
volatile uint64_t x437 = 36LLU;
int32_t x438 = INT32_MAX;
volatile int32_t x439 = -822495;
volatile uint16_t x440 = 1718U;
t103 = ((x437<=(x438|x439))+x440);
if (t103 != 1719) { NG(); } else { ; }
}
void f104(void) {
uint8_t x441 = 1U;
volatile int8_t x443 = 27;
int8_t x444 = -15;
static int32_t t104 = 15;
t104 = ((x441<=(x442|x443))+x444);
if (t104 != -14) { NG(); } else { ; }
}
void f105(void) {
int64_t x445 = INT64_MIN;
static volatile uint32_t x446 = 4766U;
static uint32_t x447 = UINT32_MAX;
t105 = ((x445<=(x446|x447))+x448);
if (t105 != 128) { NG(); } else { ; }
}
void f106(void) {
int32_t x450 = INT32_MIN;
static uint32_t x452 = 69U;
uint32_t t106 = 2269U;
t106 = ((x449<=(x450|x451))+x452);
if (t106 != 70U) { NG(); } else { ; }
}
void f107(void) {
int64_t x453 = -1LL;
int8_t x454 = 1;
int64_t x455 = INT64_MIN;
uint32_t x456 = 57U;
static volatile uint32_t t107 = 82U;
t107 = ((x453<=(x454|x455))+x456);
if (t107 != 57U) { NG(); } else { ; }
}
void f108(void) {
uint32_t x457 = 6U;
int8_t x459 = INT8_MIN;
int16_t x460 = -29;
t108 = ((x457<=(x458|x459))+x460);
if (t108 != -28) { NG(); } else { ; }
}
void f109(void) {
static volatile int16_t x461 = 1;
int16_t x462 = 5;
int32_t x463 = -1;
int32_t x464 = INT32_MAX;
static int32_t t109 = INT32_MAX;
t109 = ((x461<=(x462|x463))+x464);
if (t109 != INT32_MAX) { NG(); } else { ; }
}
void f110(void) {
volatile int64_t x466 = INT64_MIN;
static int64_t x467 = 218846333383731077LL;
int32_t x468 = INT32_MIN;
int32_t t110 = INT32_MIN;
t110 = ((x465<=(x466|x467))+x468);
if (t110 != INT32_MIN) { NG(); } else { ; }
}
void f111(void) {
int32_t x469 = 33717;
uint16_t x470 = 5890U;
uint8_t x471 = 1U;
int64_t x472 = INT64_MIN;
static volatile int64_t t111 = INT64_MIN;
t111 = ((x469<=(x470|x471))+x472);
if (t111 != INT64_MIN) { NG(); } else { ; }
}
void f112(void) {
uint16_t x473 = 1U;
int16_t x474 = 0;
volatile int32_t t112 = 185;
t112 = ((x473<=(x474|x475))+x476);
if (t112 != 0) { NG(); } else { ; }
}
void f113(void) {
int64_t x481 = INT64_MIN;
int32_t x482 = INT32_MIN;
uint16_t x483 = 666U;
int32_t x484 = INT32_MIN;
int32_t t113 = 757;
t113 = ((x481<=(x482|x483))+x484);
if (t113 != -2147483647) { NG(); } else { ; }
}
void f114(void) {
int64_t x485 = INT64_MIN;
int16_t x486 = INT16_MAX;
int8_t x487 = INT8_MIN;
uint16_t x488 = UINT16_MAX;
volatile int32_t t114 = 2;
t114 = ((x485<=(x486|x487))+x488);
if (t114 != 65536) { NG(); } else { ; }
}
void f115(void) {
int64_t x489 = INT64_MIN;
static int8_t x490 = INT8_MIN;
int64_t x491 = 1924587178707480425LL;
int16_t x492 = INT16_MAX;
t115 = ((x489<=(x490|x491))+x492);
if (t115 != 32768) { NG(); } else { ; }
}
void f116(void) {
int64_t x493 = -335797307271LL;
int16_t x494 = INT16_MAX;
int64_t x495 = INT64_MIN;
int32_t x496 = INT32_MIN;
int32_t t116 = INT32_MIN;
t116 = ((x493<=(x494|x495))+x496);
if (t116 != INT32_MIN) { NG(); } else { ; }
}
void f117(void) {
volatile int16_t x497 = -10047;
int16_t x500 = INT16_MIN;
int32_t t117 = 94198;
t117 = ((x497<=(x498|x499))+x500);
if (t117 != -32767) { NG(); } else { ; }
}
void f118(void) {
uint64_t x501 = 7737381293719947LLU;
int32_t x502 = INT32_MIN;
int64_t x503 = INT64_MIN;
int8_t x504 = -29;
volatile int32_t t118 = 20;
t118 = ((x501<=(x502|x503))+x504);
if (t118 != -28) { NG(); } else { ; }
}
void f119(void) {
int16_t x506 = INT16_MIN;
int32_t x507 = -1;
int32_t x508 = INT32_MIN;
volatile int32_t t119 = 6235121;
t119 = ((x505<=(x506|x507))+x508);
if (t119 != -2147483647) { NG(); } else { ; }
}
void f120(void) {
volatile int16_t x509 = INT16_MIN;
uint64_t x510 = 3615627LLU;
uint8_t x511 = 107U;
int32_t x512 = -1;
volatile int32_t t120 = 6540835;
t120 = ((x509<=(x510|x511))+x512);
if (t120 != -1) { NG(); } else { ; }
}
void f121(void) {
static int8_t x513 = INT8_MAX;
int32_t x514 = -1;
volatile uint64_t x516 = 4890199632515LLU;
uint64_t t121 = 601LLU;
t121 = ((x513<=(x514|x515))+x516);
if (t121 != 4890199632515LLU) { NG(); } else { ; }
}
void f122(void) {
uint16_t x517 = 62U;
int8_t x519 = -1;
volatile int32_t x520 = INT32_MIN;
static volatile int32_t t122 = INT32_MIN;
t122 = ((x517<=(x518|x519))+x520);
if (t122 != INT32_MIN) { NG(); } else { ; }
}
void f123(void) {
static uint8_t x521 = 60U;
static uint16_t x522 = 3U;
int64_t x524 = -11526LL;
int64_t t123 = -19625LL;
t123 = ((x521<=(x522|x523))+x524);
if (t123 != -11525LL) { NG(); } else { ; }
}
void f124(void) {
volatile int32_t x525 = -1;
uint32_t x526 = UINT32_MAX;
int16_t x527 = INT16_MAX;
uint64_t x528 = UINT64_MAX;
volatile uint64_t t124 = 187139LLU;
t124 = ((x525<=(x526|x527))+x528);
if (t124 != 0LLU) { NG(); } else { ; }
}
void f125(void) {
uint64_t x529 = 34086777645LLU;
static uint16_t x530 = UINT16_MAX;
int8_t x531 = INT8_MAX;
volatile uint64_t t125 = 55958277620167650LLU;
t125 = ((x529<=(x530|x531))+x532);
if (t125 != 3930299175311919LLU) { NG(); } else { ; }
}
void f126(void) {
int8_t x533 = -59;
int32_t x534 = 12;
volatile int8_t x535 = 1;
int8_t x536 = INT8_MIN;
static volatile int32_t t126 = -655397538;
t126 = ((x533<=(x534|x535))+x536);
if (t126 != -127) { NG(); } else { ; }
}
void f127(void) {
int64_t x542 = 531348819LL;
uint32_t x543 = 59685536U;
volatile int64_t t127 = 137724237478697LL;
t127 = ((x541<=(x542|x543))+x544);
if (t127 != -1LL) { NG(); } else { ; }
}
void f128(void) {
uint32_t x545 = 924399116U;
volatile int32_t t128 = 1;
t128 = ((x545<=(x546|x547))+x548);
if (t128 != -2147483647) { NG(); } else { ; }
}
void f129(void) {
volatile int64_t x549 = INT64_MIN;
static uint8_t x550 = UINT8_MAX;
volatile int32_t t129 = -7;
t129 = ((x549<=(x550|x551))+x552);
if (t129 != 3) { NG(); } else { ; }
}
void f130(void) {
int64_t x553 = INT64_MAX;
uint32_t x554 = 6467406U;
int16_t x555 = INT16_MAX;
volatile int32_t t130 = -62;
t130 = ((x553<=(x554|x555))+x556);
if (t130 != -2) { NG(); } else { ; }
}
void f131(void) {
volatile uint8_t x558 = 0U;
uint16_t x559 = 2U;
int32_t x560 = -1607736;
int32_t t131 = 3462797;
t131 = ((x557<=(x558|x559))+x560);
if (t131 != -1607735) { NG(); } else { ; }
}
void f132(void) {
int32_t x563 = INT32_MIN;
volatile uint64_t t132 = 7LLU;
t132 = ((x561<=(x562|x563))+x564);
if (t132 != 780794211586916325LLU) { NG(); } else { ; }
}
void f133(void) {
static int16_t x566 = INT16_MIN;
int16_t x567 = INT16_MIN;
int8_t x568 = INT8_MAX;
static int32_t t133 = 42;
t133 = ((x565<=(x566|x567))+x568);
if (t133 != 127) { NG(); } else { ; }
}
void f134(void) {
int32_t x570 = INT32_MIN;
int32_t x572 = INT32_MIN;
volatile int32_t t134 = 0;
t134 = ((x569<=(x570|x571))+x572);
if (t134 != -2147483647) { NG(); } else { ; }
}
void f135(void) {
uint16_t x573 = 0U;
int16_t x574 = 145;
uint16_t x575 = 1U;
volatile int32_t x576 = INT32_MIN;
int32_t t135 = -1025;
t135 = ((x573<=(x574|x575))+x576);
if (t135 != -2147483647) { NG(); } else { ; }
}
void f136(void) {
int8_t x577 = INT8_MIN;
int32_t x578 = -3041195;
uint16_t x579 = 556U;
int16_t x580 = -1;
volatile int32_t t136 = 2;
t136 = ((x577<=(x578|x579))+x580);
if (t136 != -1) { NG(); } else { ; }
}
void f137(void) {
int8_t x581 = INT8_MIN;
static uint64_t x582 = 183066LLU;
uint32_t x583 = 256455291U;
int32_t x584 = -1;
volatile int32_t t137 = -786799795;
t137 = ((x581<=(x582|x583))+x584);
if (t137 != -1) { NG(); } else { ; }
}
void f138(void) {
int64_t x585 = 1LL;
int32_t x586 = INT32_MAX;
uint32_t x587 = 1148296496U;
uint32_t x588 = 342543U;
static volatile uint32_t t138 = 149854U;
t138 = ((x585<=(x586|x587))+x588);
if (t138 != 342544U) { NG(); } else { ; }
}
void f139(void) {
volatile int8_t x589 = INT8_MAX;
static int8_t x590 = INT8_MAX;
int32_t x591 = -1;
int8_t x592 = INT8_MIN;
static volatile int32_t t139 = 0;
t139 = ((x589<=(x590|x591))+x592);
if (t139 != -128) { NG(); } else { ; }
}
void f140(void) {
int16_t x593 = -1;
int32_t x594 = 24;
static uint32_t x595 = UINT32_MAX;
volatile int32_t x596 = INT32_MIN;
t140 = ((x593<=(x594|x595))+x596);
if (t140 != -2147483647) { NG(); } else { ; }
}
void f141(void) {
uint8_t x598 = 77U;
volatile int64_t t141 = 2607775599080620LL;
t141 = ((x597<=(x598|x599))+x600);
if (t141 != -993818LL) { NG(); } else { ; }
}
void f142(void) {
static int32_t x601 = 2;
static uint8_t x602 = 3U;
int16_t x603 = 2;
uint8_t x604 = UINT8_MAX;
volatile int32_t t142 = -1269112;
t142 = ((x601<=(x602|x603))+x604);
if (t142 != 256) { NG(); } else { ; }
}
void f143(void) {
int64_t x605 = INT64_MIN;
int64_t x606 = -1LL;
int64_t x608 = -1LL;
volatile int64_t t143 = 37468090767235805LL;
t143 = ((x605<=(x606|x607))+x608);
if (t143 != 0LL) { NG(); } else { ; }
}
void f144(void) {
int16_t x609 = INT16_MIN;
uint64_t x610 = UINT64_MAX;
volatile int32_t x611 = -1244975;
volatile int16_t x612 = -3;
int32_t t144 = -60474678;
t144 = ((x609<=(x610|x611))+x612);
if (t144 != -2) { NG(); } else { ; }
}
void f145(void) {
static int32_t x613 = 15;
volatile int16_t x614 = INT16_MIN;
int8_t x615 = INT8_MAX;
uint32_t x616 = 23800886U;
volatile uint32_t t145 = 7140U;
t145 = ((x613<=(x614|x615))+x616);
if (t145 != 23800886U) { NG(); } else { ; }
}
void f146(void) {
static int8_t x619 = INT8_MIN;
static uint8_t x620 = 111U;
static volatile int32_t t146 = 29;
t146 = ((x617<=(x618|x619))+x620);
if (t146 != 111) { NG(); } else { ; }
}
void f147(void) {
int16_t x621 = 417;
int32_t x622 = 90;
int8_t x623 = -17;
uint16_t x624 = UINT16_MAX;
t147 = ((x621<=(x622|x623))+x624);
if (t147 != 65535) { NG(); } else { ; }
}
void f148(void) {
static volatile int8_t x625 = INT8_MIN;
uint64_t x626 = 1667005LLU;
uint64_t x627 = 301921836LLU;
int32_t x628 = INT32_MIN;
int32_t t148 = INT32_MIN;
t148 = ((x625<=(x626|x627))+x628);
if (t148 != INT32_MIN) { NG(); } else { ; }
}
void f149(void) {
static int32_t x633 = -1;
volatile int32_t x634 = -1;
int64_t x636 = INT64_MIN;
volatile int64_t t149 = -210656376902335564LL;
t149 = ((x633<=(x634|x635))+x636);
if (t149 != -9223372036854775807LL) { NG(); } else { ; }
}
void f150(void) {
int8_t x637 = 0;
int16_t x638 = 7;
int16_t x639 = INT16_MIN;
int8_t x640 = -1;
t150 = ((x637<=(x638|x639))+x640);
if (t150 != -1) { NG(); } else { ; }
}
void f151(void) {
int32_t x641 = 233;
uint64_t x643 = UINT64_MAX;
t151 = ((x641<=(x642|x643))+x644);
if (t151 != 0LLU) { NG(); } else { ; }
}
void f152(void) {
volatile int32_t x645 = -1;
volatile int8_t x647 = 16;
static volatile uint64_t x648 = 48786548042563LLU;
uint64_t t152 = 4073571201LLU;
t152 = ((x645<=(x646|x647))+x648);
if (t152 != 48786548042564LLU) { NG(); } else { ; }
}
void f153(void) {
int8_t x649 = -19;
int8_t x652 = INT8_MIN;
static int32_t t153 = -1783010;
t153 = ((x649<=(x650|x651))+x652);
if (t153 != -127) { NG(); } else { ; }
}
void f154(void) {
static volatile int64_t x653 = INT64_MIN;
int8_t x655 = INT8_MIN;
volatile int64_t x656 = -1LL;
int64_t t154 = 244948608297949541LL;
t154 = ((x653<=(x654|x655))+x656);
if (t154 != 0LL) { NG(); } else { ; }
}
void f155(void) {
int32_t x658 = INT32_MAX;
static int16_t x660 = 3;
int32_t t155 = 13677;
t155 = ((x657<=(x658|x659))+x660);
if (t155 != 3) { NG(); } else { ; }
}
void f156(void) {
volatile int32_t x661 = -1;
static uint16_t x662 = 7U;
static int32_t t156 = -45953241;
t156 = ((x661<=(x662|x663))+x664);
if (t156 != -127) { NG(); } else { ; }
}
void f157(void) {
static uint8_t x665 = UINT8_MAX;
int16_t x666 = -941;
static uint32_t x667 = UINT32_MAX;
uint8_t x668 = UINT8_MAX;
volatile int32_t t157 = 1182;
t157 = ((x665<=(x666|x667))+x668);
if (t157 != 256) { NG(); } else { ; }
}
void f158(void) {
volatile int8_t x669 = INT8_MIN;
uint8_t x670 = 3U;
volatile int32_t x671 = INT32_MIN;
int64_t x672 = -59LL;
int64_t t158 = 212063534LL;
t158 = ((x669<=(x670|x671))+x672);
if (t158 != -59LL) { NG(); } else { ; }
}
void f159(void) {
volatile int64_t x673 = -139611487LL;
uint16_t x674 = 7357U;
int16_t x675 = INT16_MIN;
int16_t x676 = INT16_MAX;
volatile int32_t t159 = -61;
t159 = ((x673<=(x674|x675))+x676);
if (t159 != 32768) { NG(); } else { ; }
}
void f160(void) {
static uint8_t x677 = UINT8_MAX;
int32_t x679 = 14750018;
int64_t x680 = 82200LL;
volatile int64_t t160 = 436664725473635306LL;
t160 = ((x677<=(x678|x679))+x680);
if (t160 != 82200LL) { NG(); } else { ; }
}
void f161(void) {
uint32_t x681 = 299146757U;
uint64_t x682 = 83791760888937992LLU;
int32_t x683 = -1;
uint8_t x684 = UINT8_MAX;
volatile int32_t t161 = 395;
t161 = ((x681<=(x682|x683))+x684);
if (t161 != 256) { NG(); } else { ; }
}
void f162(void) {
uint16_t x685 = 10365U;
static volatile int8_t x687 = INT8_MIN;
uint8_t x688 = UINT8_MAX;
static int32_t t162 = 20094729;
t162 = ((x685<=(x686|x687))+x688);
if (t162 != 255) { NG(); } else { ; }
}
void f163(void) {
uint8_t x689 = 2U;
volatile uint8_t x690 = UINT8_MAX;
int16_t x692 = INT16_MIN;
volatile int32_t t163 = 21998;
t163 = ((x689<=(x690|x691))+x692);
if (t163 != -32767) { NG(); } else { ; }
}
void f164(void) {
static int8_t x693 = -1;
int64_t x694 = 298742907LL;
uint16_t x695 = 1842U;
int32_t x696 = INT32_MIN;
int32_t t164 = -7830;
t164 = ((x693<=(x694|x695))+x696);
if (t164 != -2147483647) { NG(); } else { ; }
}
void f165(void) {
uint64_t x697 = 331344LLU;
volatile int64_t x698 = -1LL;
int16_t x699 = INT16_MIN;
int64_t x700 = -1LL;
t165 = ((x697<=(x698|x699))+x700);
if (t165 != 0LL) { NG(); } else { ; }
}
void f166(void) {
int16_t x705 = INT16_MIN;
int16_t x707 = INT16_MIN;
int8_t x708 = -1;
int32_t t166 = -125976875;
t166 = ((x705<=(x706|x707))+x708);
if (t166 != 0) { NG(); } else { ; }
}
void f167(void) {
static int32_t x709 = INT32_MIN;
uint16_t x711 = 1U;
int8_t x712 = INT8_MIN;
int32_t t167 = 49893;
t167 = ((x709<=(x710|x711))+x712);
if (t167 != -127) { NG(); } else { ; }
}
void f168(void) {
int64_t x713 = INT64_MIN;
int16_t x715 = 1;
int32_t t168 = 0;
t168 = ((x713<=(x714|x715))+x716);
if (t168 != -2147483647) { NG(); } else { ; }
}
void f169(void) {
int8_t x717 = INT8_MIN;
static uint16_t x718 = UINT16_MAX;
int16_t x719 = INT16_MAX;
int16_t x720 = -1;
static volatile int32_t t169 = 13;
t169 = ((x717<=(x718|x719))+x720);
if (t169 != 0) { NG(); } else { ; }
}
void f170(void) {
int8_t x722 = -1;
volatile int64_t x723 = -1LL;
int16_t x724 = INT16_MAX;
volatile int32_t t170 = 17321155;
t170 = ((x721<=(x722|x723))+x724);
if (t170 != 32767) { NG(); } else { ; }
}
void f171(void) {
int16_t x725 = -3524;
int32_t x726 = -1;
static int64_t x727 = -115979275939260LL;
int16_t x728 = -17;
volatile int32_t t171 = -4;
t171 = ((x725<=(x726|x727))+x728);
if (t171 != -16) { NG(); } else { ; }
}
void f172(void) {
int16_t x729 = -1;
int64_t x730 = INT64_MIN;
t172 = ((x729<=(x730|x731))+x732);
if (t172 != INT32_MIN) { NG(); } else { ; }
}
void f173(void) {
static int8_t x735 = INT8_MAX;
volatile int32_t t173 = -6856;
t173 = ((x733<=(x734|x735))+x736);
if (t173 != 127) { NG(); } else { ; }
}
void f174(void) {
int64_t x738 = -1013398226520910LL;
volatile uint64_t x739 = 174891413838593961LLU;
int16_t x740 = -721;
static volatile int32_t t174 = 19452509;
t174 = ((x737<=(x738|x739))+x740);
if (t174 != -720) { NG(); } else { ; }
}
void f175(void) {
uint64_t x741 = UINT64_MAX;
static uint64_t x742 = UINT64_MAX;
static int8_t x743 = -1;
int16_t x744 = INT16_MIN;
int32_t t175 = 3501546;
t175 = ((x741<=(x742|x743))+x744);
if (t175 != -32767) { NG(); } else { ; }
}
void f176(void) {
int32_t x745 = INT32_MIN;
int8_t x746 = -11;
static volatile int16_t x747 = INT16_MIN;
int8_t x748 = INT8_MIN;
volatile int32_t t176 = -7856;
t176 = ((x745<=(x746|x747))+x748);
if (t176 != -127) { NG(); } else { ; }
}
void f177(void) {
static uint64_t x749 = 34LLU;
int32_t x750 = 444974;
int8_t x751 = INT8_MIN;
int16_t x752 = INT16_MIN;
volatile int32_t t177 = 6;
t177 = ((x749<=(x750|x751))+x752);
if (t177 != -32767) { NG(); } else { ; }
}
void f178(void) {
uint16_t x753 = 25230U;
int64_t x754 = 80LL;
static int32_t x755 = INT32_MIN;
t178 = ((x753<=(x754|x755))+x756);
if (t178 != 65535) { NG(); } else { ; }
}
void f179(void) {
static volatile int16_t x758 = -10;
uint8_t x759 = 6U;
uint8_t x760 = UINT8_MAX;
volatile int32_t t179 = -2;
t179 = ((x757<=(x758|x759))+x760);
if (t179 != 256) { NG(); } else { ; }
}
void f180(void) {
int64_t x761 = -243LL;
int32_t x762 = INT32_MAX;
static int16_t x763 = INT16_MAX;
int8_t x764 = INT8_MAX;
int32_t t180 = -4550681;
t180 = ((x761<=(x762|x763))+x764);
if (t180 != 128) { NG(); } else { ; }
}
void f181(void) {
uint16_t x765 = 22019U;
uint32_t x766 = 1000062643U;
int8_t x768 = INT8_MIN;
static int32_t t181 = -7;
t181 = ((x765<=(x766|x767))+x768);
if (t181 != -127) { NG(); } else { ; }
}
void f182(void) {
volatile int8_t x770 = INT8_MIN;
static int32_t x771 = 17;
static uint16_t x772 = 15U;
int32_t t182 = 1273996;
t182 = ((x769<=(x770|x771))+x772);
if (t182 != 15) { NG(); } else { ; }
}
void f183(void) {
uint16_t x773 = UINT16_MAX;
int64_t x774 = -2527839LL;
uint8_t x775 = UINT8_MAX;
int32_t x776 = -1;
static int32_t t183 = -61;
t183 = ((x773<=(x774|x775))+x776);
if (t183 != -1) { NG(); } else { ; }
}
void f184(void) {
int64_t x777 = INT64_MIN;
volatile int16_t x778 = INT16_MAX;
static volatile uint8_t x779 = UINT8_MAX;
static uint8_t x780 = 25U;
t184 = ((x777<=(x778|x779))+x780);
if (t184 != 26) { NG(); } else { ; }
}
void f185(void) {
int8_t x781 = INT8_MIN;
int16_t x782 = -11;
int8_t x783 = -1;
static int16_t x784 = INT16_MIN;
int32_t t185 = -250;
t185 = ((x781<=(x782|x783))+x784);
if (t185 != -32767) { NG(); } else { ; }
}
void f186(void) {
uint32_t x785 = 13284U;
int64_t x786 = INT64_MAX;
int16_t x788 = INT16_MIN;
int32_t t186 = -542742;
t186 = ((x785<=(x786|x787))+x788);
if (t186 != -32768) { NG(); } else { ; }
}
void f187(void) {
int8_t x789 = INT8_MIN;
int64_t x790 = INT64_MIN;
volatile int16_t x791 = INT16_MAX;
static int64_t x792 = 505823036LL;
volatile int64_t t187 = 56895898132754418LL;
t187 = ((x789<=(x790|x791))+x792);
if (t187 != 505823036LL) { NG(); } else { ; }
}
void f188(void) {
uint8_t x793 = 1U;
static uint64_t x794 = UINT64_MAX;
volatile uint16_t x796 = UINT16_MAX;
int32_t t188 = -949;
t188 = ((x793<=(x794|x795))+x796);
if (t188 != 65536) { NG(); } else { ; }
}
void f189(void) {
int64_t x797 = INT64_MIN;
int8_t x798 = -1;
int8_t x799 = 3;
int8_t x800 = INT8_MIN;
static volatile int32_t t189 = -514558201;
t189 = ((x797<=(x798|x799))+x800);
if (t189 != -127) { NG(); } else { ; }
}
void f190(void) {
uint32_t x801 = UINT32_MAX;
int16_t x802 = INT16_MAX;
int32_t x803 = INT32_MIN;
uint16_t x804 = 14770U;
t190 = ((x801<=(x802|x803))+x804);
if (t190 != 14770) { NG(); } else { ; }
}
void f191(void) {
int8_t x805 = 0;
uint16_t x806 = UINT16_MAX;
volatile int16_t x807 = INT16_MIN;
int8_t x808 = 3;
volatile int32_t t191 = 4096072;
t191 = ((x805<=(x806|x807))+x808);
if (t191 != 3) { NG(); } else { ; }
}
void f192(void) {
static int32_t x809 = -2667515;
uint8_t x810 = 1U;
static uint32_t x811 = UINT32_MAX;
volatile uint32_t t192 = 77U;
t192 = ((x809<=(x810|x811))+x812);
if (t192 != 2U) { NG(); } else { ; }
}
void f193(void) {
int16_t x813 = INT16_MAX;
uint16_t x814 = 29U;
static uint32_t x815 = 657848637U;
int64_t t193 = -13LL;
t193 = ((x813<=(x814|x815))+x816);
if (t193 != -9223372036854775807LL) { NG(); } else { ; }
}
void f194(void) {
static volatile int64_t x817 = INT64_MAX;
uint32_t x818 = 7509U;
uint16_t x820 = 73U;
t194 = ((x817<=(x818|x819))+x820);
if (t194 != 73) { NG(); } else { ; }
}
void f195(void) {
int32_t x821 = INT32_MAX;
static int32_t x822 = -1081;
int16_t x824 = INT16_MAX;
int32_t t195 = 21315;
t195 = ((x821<=(x822|x823))+x824);
if (t195 != 32767) { NG(); } else { ; }
}
void f196(void) {
static int16_t x825 = -3064;
int8_t x826 = -1;
int64_t x827 = INT64_MIN;
uint32_t x828 = UINT32_MAX;
uint32_t t196 = 2038277557U;
t196 = ((x825<=(x826|x827))+x828);
if (t196 != 0U) { NG(); } else { ; }
}
void f197(void) {
int8_t x831 = 1;
int32_t x832 = 24576;
t197 = ((x829<=(x830|x831))+x832);
if (t197 != 24577) { NG(); } else { ; }
}
void f198(void) {
int16_t x833 = INT16_MAX;
int16_t x834 = -14693;
uint16_t x835 = 16102U;
int8_t x836 = 2;
static volatile int32_t t198 = 14288;
t198 = ((x833<=(x834|x835))+x836);
if (t198 != 2) { NG(); } else { ; }
}
void f199(void) {
int8_t x837 = -1;
int64_t x838 = 8101410LL;
volatile int8_t x839 = INT8_MAX;
int32_t t199 = -280152895;
t199 = ((x837<=(x838|x839))+x840);
if (t199 != 7246) { NG(); } else { ; }
}
int main(void) {
f0();
f1();
f2();
f3();
f4();
f5();
f6();
f7();
f8();
f9();
f10();
f11();
f12();
f13();
f14();
f15();
f16();
f17();
f18();
f19();
f20();
f21();
f22();
f23();
f24();
f25();
f26();
f27();
f28();
f29();
f30();
f31();
f32();
f33();
f34();
f35();
f36();
f37();
f38();
f39();
f40();
f41();
f42();
f43();
f44();
f45();
f46();
f47();
f48();
f49();
f50();
f51();
f52();
f53();
f54();
f55();
f56();
f57();
f58();
f59();
f60();
f61();
f62();
f63();
f64();
f65();
f66();
f67();
f68();
f69();
f70();
f71();
f72();
f73();
f74();
f75();
f76();
f77();
f78();
f79();
f80();
f81();
f82();
f83();
f84();
f85();
f86();
f87();
f88();
f89();
f90();
f91();
f92();
f93();
f94();
f95();
f96();
f97();
f98();
f99();
f100();
f101();
f102();
f103();
f104();
f105();
f106();
f107();
f108();
f109();
f110();
f111();
f112();
f113();
f114();
f115();
f116();
f117();
f118();
f119();
f120();
f121();
f122();
f123();
f124();
f125();
f126();
f127();
f128();
f129();
f130();
f131();
f132();
f133();
f134();
f135();
f136();
f137();
f138();
f139();
f140();
f141();
f142();
f143();
f144();
f145();
f146();
f147();
f148();
f149();
f150();
f151();
f152();
f153();
f154();
f155();
f156();
f157();
f158();
f159();
f160();
f161();
f162();
f163();
f164();
f165();
f166();
f167();
f168();
f169();
f170();
f171();
f172();
f173();
f174();
f175();
f176();
f177();
f178();
f179();
f180();
f181();
f182();
f183();
f184();
f185();
f186();
f187();
f188();
f189();
f190();
f191();
f192();
f193();
f194();
f195();
f196();
f197();
f198();
f199();
return 0;
}
import React from 'react';
import SocialLink from '@/components/socialLink';
import styled from 'styled-components';
export interface SocialLinkInterface {
/** The URL of the social link. */
href: string;
/** The relative path of the social link image. */
src: string;
/** The name of the social link. */
name: string;
}
const socialLinks: SocialLinkInterface[] = [
{
href: 'https://igassmann.me/',
src: '/images/social-links/profile-picture-100x100.png',
name: 'Website',
},
{
href: 'https://github.com/IGassmann/',
src: '/images/social-links/github.svg',
name: 'GitHub',
},
{
href: 'https://www.linkedin.com/in/igassmann/',
src: '/images/social-links/linkedin.svg',
name: 'LinkedIn',
},
{
href: 'https://twitter.com/i_gassmann/',
src: '/images/social-links/twitter.svg',
name: 'Twitter',
},
];
const SocialLinksContainer = styled.div`
background-color: ${(props) => props.theme.colors.secondaryBackground};
padding: 150px 0;
text-align: center;
ul {
max-width: 950px;
width: calc(100% - 40px);
padding: 0 20px;
margin: 0 auto;
display: flex;
flex-wrap: nowrap;
justify-content: space-between;
flex-flow: row wrap;
flex-grow: 0;
flex-shrink: 0;
list-style: none;
li {
padding: 0;
margin: 0 8px;
display: block;
align-content: center;
align-items: center;
justify-content: center;
line-height: 40px;
border-radius: 50%;
text-align: center;
}
}
@media screen and (max-width: ${(props) => props.theme.sizes.xLarge}) {
li {
width: 100px;
}
}
@media screen and (max-width: ${(props) => props.theme.sizes.small}) {
li {
width: 60px;
}
}
`;
const SocialLinks: React.FC = () => (
<SocialLinksContainer>
<ul>
{socialLinks.map((socialLink) => (
<li key={socialLink.name}>
<SocialLink {...socialLink} />
</li>
))}
</ul>
</SocialLinksContainer>
);
export default SocialLinks;
High-Throughput Development of SSR Markers from Pea (Pisum sativum L.) Based on Next Generation Sequencing of a Purified Chinese Commercial Variety

Pea (Pisum sativum L.) is an important food legume globally, and is the plant species that J.G. Mendel used to lay the foundation of modern genetics. However, genomics resources of pea are limited compared to other crop species, and the application of marker-assisted selection (MAS) in pea breeding has lagged behind many other crops. Development of a large number of novel and reliable SSR (simple sequence repeat) or microsatellite markers will help both basic and applied genomics research of this crop. The Illumina HiSeq 2500 system was used to uncover 8,899 putative SSR-containing sequences, and 3,275 non-redundant primers were designed to amplify these SSRs. Among the 1,644 SSRs that were randomly selected for primer validation, 841 yielded reliable amplification of detectable polymorphisms among 24 genotypes of cultivated pea (Pisum sativum L.) and wild relatives (P. fulvum Sm.) originating from diverse geographical locations. The dataset indicated that the allele number per locus ranged from 2 to 10, and that the polymorphism information content (PIC) ranged from 0.08 to 0.82 with an average of 0.38. These 1,644 novel SSR markers were also tested for polymorphism between genotypes G0003973 and G0005527. Finally, 33 polymorphic SSR markers were anchored on the genetic linkage map of the G0003973 × G0005527 F2 population.

Introduction

Pea (Pisum sativum L.) is one of the most popular food legumes in the world. The harvested area was approximately 6.4 million hectares and production was almost 11 million metric tons of dry peas in 2013. As one of the most important legumes, pea can be used as a vegetable, pulse, and feed. Moreover, pea plays a critical role in crop rotation and low-carbon agriculture for its capacity of biological fixation of atmospheric N2. Although significant advances have been made through traditional breeding practices, resulting in semi-leafless pea, snow pea, and snap pea, progress in developing SSR markers and marker-assisted selection in pea breeding is limited. This is due mainly to the large genome size of pea (4.45 Gb), which is approximately 9 times larger than that of barrel medic (Medicago truncatula Gaertn.) (http://www.jcvi.org/medicago/), and 4 times larger than that of soybean (Glycine max L. Merr.). A number of next-generation sequencing technologies such as the Roche 454, the Illumina HiSeq 2500 and the Pacific Biosciences PacBio RS II systems have been developed in recent years. These technologies are capable of generating tens of millions of short DNA sequence reads at a relatively low cost. De novo genome sequencing, genome re-sequencing and RNA-seq have become popular all over the world. However, only a few researchers have utilized Next Generation Sequencing (NGS) platforms for high-throughput development of SSR markers in plant genomes. The present study aims at obtaining more SSR sequences cheaply and efficiently by using the high-throughput Illumina HiSeq 2500 platform (Illumina, San Diego, CA, USA). We report here the results of identifying 8,899 putative SSR-containing sequences, characterizing and validating 1,644 of these newly identified SSRs experimentally using 22 P. sativum and two P. fulvum genotypes, and enhancing the density of a previous genetic linkage map with 33 of these newly identified markers.

Plant materials

The widely grown Chinese pea cultivar Zhongwan No. 6, numbered G0005527 in the National Genebank of China, was purified by single seed descent for three consecutive generations. DNA from the resulting plants was used for sequencing and SSR marker development. For validating the SSRs, a diverse panel of 24 accessions, consisting of 11 entries from China, 11 from other countries and two wild relatives as out-groups, was used in the amplification experiment (Fig 1 and Table 1). These germplasm resources are maintained by the National Genebank of China at the Institute of Crop Science (ICS), Chinese Academy of Agricultural Sciences (CAAS), Beijing, China. For SSR mapping, a segregating F2 population of 190 individuals derived from the cross of G0003973 × G0005527 was used. The dry seed color of G0003973 (winter hardy, from Qinghai) was olivine and that of G0005527 (cold sensitive, from Beijing) was green. This population was grown in a protected field at the Qingdao Academy of Agricultural Sciences (QdAAS), Qingdao, Shandong, China.

DNA extraction, library preparation and next-generation sequencing

Genomic DNA was extracted from 10-day-old, etiolated seedlings of each genotype, cleaned with sterile water, using the CTAB method. For the Illumina HiSeq 2500 run, a library was prepared with the commercial kit NEBNext Multiplex Oligos for Illumina with Index Primers Set 2 (New England Biolabs Inc., Ipswich, MA, USA) following the manufacturer's protocol (Paired-End Library Construction). The raw sequencing files were submitted to the National Center for Biotechnology Information (NCBI) Short Read Archive under the accession number SRX973821.

Initial characterization of reads

CLC Genomics Workbench 7.5 software (CLC Inc., Aarhus, Denmark) was used in the following analyses. The quality of the paired-end data was checked by the Create Sequencing QC Report module at default parameters. Subsequent quality trimming was performed with the Trim Sequences module using a quality score limit of 0.05 and a maximum number of ambiguities of 2. The Remove Duplicate Reads module was used to filter redundant reads at default parameters. Finally, the De Novo Assembly module was used for sequence assembly. These sequences were prepared for further SSR mining.

SSRs mining

MISA (MIcroSAtellite identification) software, an SSR motif scanning tool written in Perl (http://pgrc.ipk-gatersleben.de/misa/), was used for the identification and localization of SSRs or microsatellites. The identified motifs were mononucleotide to hexanucleotide, and the minimum repeat number was defined as 10 for mononucleotides, 6 for dinucleotides, and 5 for all higher-order motifs including trinucleotides, tetranucleotides, pentanucleotides and hexanucleotides. Furthermore, the maximal number of interrupting base pairs in a compound microsatellite was 20 bp. The characterization of the SSRs was obtained by statistical analysis of the MISA files. The SSR information was extracted and statistically analyzed by an in-house Perl script, and plotted with the R language.

PCR amplification

Polymerase chain reactions (PCR) were performed in 10 μl reaction volumes containing 5 μl 2x TaqPCR MasterMix (Hooseen, Beijing, China), 1 μl primer pair (10 μM), 1.5 μl of genomic DNA (30 ng) and 2.5 μl of ddH2O. Microsatellites were amplified on a K960 Thermal Cycler (Jingle, Hangzhou, China) with the following cycle: 5 min initial denaturation at 95°C; 35 cycles of 30 s at 95°C, 30 s at the optimized annealing temperature, and 45 s of elongation at 72°C; and a final extension at 72°C for 10 min. The PCR products were separated on 8% non-denaturing polyacrylamide gel, electrophoresed at 280 V and 50 W, and visualized by 0.1% silver nitrate staining.

Polymorphic validation and genetic diversity assessment

The number of alleles and the polymorphism information content (PIC) of the alleles revealed by each primer pair were calculated with PowerMarker v3.25 from the genotype data of the 24 accessions. A cluster analysis was conducted based on the unweighted pair group method with arithmetic averages (UPGMA) algorithm using PowerMarker v3.25, and a dendrogram was drawn by PowerMarker v3.25 and modified with MEGA4. STRUCTURE v2.3.3 was used to analyze population structure and differentiation. Simulations were run with a burn-in of 100,000 iterations and from K (the number of populations) = 1 to 10. Runs for each K were replicated 160 times and the true K was determined according to the method described by Evanno.

Linkage map construction and BLAST mapping of SSR markers to Medicago truncatula

The distorted segregation of the markers against the expected Mendelian segregation ratio was tested with Chi-squared analysis (P < 0.05) using QTL IciMapping v3.2 software. The SSR marker information was loaded into Map Manager QTXb20 software. For the F2 population, the male allele was recorded as "A" and the female allele as "B"; "H" was recorded when a locus was heterozygous, and "-" when there was a missing or null allele. The linkage map was constructed using Map Manager QTXb20 with the Kosambi mapping function (P < 0.0001) and marker distances in centiMorgans (cM). Finally, the linkage map was drawn with JoinMap 4.0 software. Putative locations of the flanking sequences of mapped SSRs on the chromosomes of Medicago truncatula, for synteny-based comparison, were obtained by BLAST (http://phytozome.jgi.doe.gov/pz/portal.html#!search?show=BLAST&method=Org_Mtruncatula).

Illumina paired-end sequencing

In this study, a total of 17.5 GB of paired-end raw sequencing data, comprising 173,245,234 reads from a 500 bp insert DNA library, was generated by the Illumina HiSeq 2500 system. After trimming of the adaptors and removal of possible contaminations, the remaining 170,865,238 high-quality read sequences were used for further analysis. Adenine was the most abundant nucleotide, accounting for 29.1% of total nucleotides, followed by thymine (28.9%), cytosine (21.0%) and guanine (21.0%). The GC content was about 42% and the average read length was 94.7 bp.

Duplicate read removal and de novo genome assembly

Mining for SSRs

MISA software was used for the SSR search based on contigs. The total number of SSR-containing sequences was 8,899, and these sequences contained 10,207 SSRs (Table 2). In this study, mono- and di-nucleotide motifs occurred at the highest rate (accounting for 40.86% and 32.68%, respectively). Trinucleotide motifs accounted for 25.29%, while tetra-, penta-, and hexa-nucleotide motifs together accounted for 1.17%. (A/T)n, (AC/GT)n and (AG/CT)n were the relatively more frequent motifs in our study.

Primer design

A total of 3,275 non-redundant primer pairs were designed by Primer 3.0 software and reduce_ssr.py (in-house developed programs) based on criteria of melting temperature, GC content, lack of secondary structure and length of the amplification bands. The expected length of the target bands was between 110 bp and 210 bp.

Validation of the SSR markers

A subset of 1,644 SSR markers was randomly selected for validation. Among them, 841 (51.16%) markers (S2 File) produced reliable polymorphic bands among 22 pea accessions (Pisum sativum) and two wild relatives (Pisum fulvum). Meanwhile, the monomorphic markers are listed in S3 File. The allele number per locus ranged from 2 to 10 with an average of 3.22. The polymorphism information content (PIC), with an average of 0.38, ranged from 0.08 to 0.82 (S2 File). The dendrogram clearly showed that the 24 pea and wild relative accessions fell into three distinct clusters based on the 841 polymorphic SSR markers (Fig 2). Cluster I consisted of overseas accessions except G0002305; Cluster II consisted of Chinese accessions; Cluster III consisted of the wild relatives. The population structure of this diverse panel of cultivated pea and its wild relative was inferred by using STRUCTURE v2.3.3 with the dataset of 841 SSR markers. Three sub-populations were identified based on ΔK values (Fig 3). The rationale for this ΔK is to make salient the break in slope of the distribution of L(K) at the true K. The entries from China, from other countries and the wild species were separated into 3 sub-populations (Fig 4), in good accord with the three clusters in the UPGMA dendrogram. The results were in accordance with those published earlier.

Using novel SSR markers to enhance the density of the genetic linkage map

A segregating F2 population derived from the cross between G0003973 and G0005527 was used for mapping the newly validated SSR markers. Among the 1,644 SSRs used in the genetic diversity analysis, 63 were polymorphic between the two parents. When amplified in the population, 22 of the 63 SSRs showed significant segregation distortion (P < 0.05; S4 File). These distorted markers were excluded from linkage map construction. Map Manager QTXb20 was used to add the newly developed SSR markers to the previously published genetic linkage map. Consequently, 41 polymorphic markers that segregated in appropriate Mendelian ratios were used to run the Map Manager QTXb20 software, of which 33 markers were mapped to the existing linkage groups. However, the remaining eight markers were not linked to any mapped markers on the linkage map. The new map contained 199 markers, including the 33 newly added markers (Table 3), in 13 linkage groups, with an average genetic distance of 9.5 cM between neighboring markers, and covered 1890.88 cM (Fig 5).

Discussion

SSR markers are excellent genetic markers because they are co-dominant, multi-allelic and reproducible. In genetics, SSRs have been widely used for diversity analysis, linkage map construction, QTL mapping and association mapping. Pea is important in genetics because of the work of J.G. Mendel. However, the pea genome is very large, which has seriously hindered pea genomic research. The nuclear genome size of pea was estimated to be 9.09 pg DNA/2C, which corresponds to a haploid genome size (1C) of 4.45 Gbp, about one and a half times the size of the human genome (3 Gb). The pea genome is also much larger than those of other legume crops such as soybean (Glycine max, 1.1 Gb) and barrel medic (Medicago truncatula, 0.47 Gb) (http://www.jcvi.org/medicago/). More efforts are therefore needed to develop molecular tools, especially SSR and SNP (single nucleotide polymorphism) markers, in order to build a solid foundation for pea genomic research.

Using NGS technology for the identification of SSR markers is effective

Consistent with previous reports, results from this study demonstrated that Illumina paired-end sequencing offers an opportunity for high-throughput identification of SSRs with diverse motifs from economically important crop plant species. Within a relatively short time period, our sequencing experiment generated a total of 17.5 GB of raw paired-end sequencing data. From this raw data, 343,849 contigs were effectively assembled and used for SSR marker development. A total of 3,275 non-redundant primers were designed, and nearly half of them (1,644) were experimentally tested.

More reliable validation of the NGS-based SSR markers was conducted

In the published studies of other plant species, only a small proportion of newly developed SSR markers was tested. In this study, more than half (1,644 markers) were carefully tested in two different ways. One way was genetic diversity analysis; the other was the mapping of the novel markers to a linkage map based on an existing mapping population. More than 51% of the tested SSRs in this study were polymorphic among the 24 accessions, which were clearly divided into 3 sub-groups. Meanwhile, 33 novel SSRs were anchored onto a previously published genetic linkage map.

Chinese pea germplasm differs from that of other countries

The comparison of the diversity of Chinese and foreign peas by using 841 polymorphic SSR markers in our study identified a significant degree of diversity (Figs 2 and 4). This result coincided with a previous study that used 21 informative SSRs to assess and compare the genetic diversity of 1,243 Chinese pea genotypes from 28 provinces with 774 pea genotypes that represented a globally diverse germplasm collection, in which the Chinese pea germplasm was found to be genetically distinct from the global gene pool sourced outside China. On the other hand, our genotype data did reveal an exception. In our experiment, G0002305 is an accession collected from Inner Mongolia. The cluster analysis grouped this accession into Cluster I with germplasm accessions collected outside China (Figs 2 and 4). Analysis of population structure also confirmed that this Chinese accession shared more than 80% of kinship with accessions collected from outside China, especially with G0006082 from Afghanistan and G0006170 from Pakistan (Fig 4). In addition, six Chinese accessions shared variable percentages (approximately 5 to 50%) of closeness with accessions collected outside China, and two accessions collected outside China shared a small percentage of closeness with the Chinese accessions (Fig 4). Both cluster and population structure analyses clearly separated the cultivated pea from its wild relative accessions (Figs 2 and 4). These results imply the usefulness of the newly developed SSRs.

More SSR markers were anchored on a genetic linkage map

Previously, there was no genetic linkage map based on Chinese accessions. In 2014, we constructed the first Chinese pea linkage map with 157 SSR markers. In this study, the existing linkage map has been further saturated. The new map contains 199 markers, including the 33 newly added markers. We anticipate that, with more effective SSR markers, QTL mapping and association studies as well as marker-assisted selection in pea will become available in the near future.
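To make the MISA screening settings concrete, the following is a minimal sketch, not the authors' pipeline: a regex scan for perfect SSRs using the repeat-number thresholds stated above (10 for mono-, 6 for di-, 5 for tri- to hexa-nucleotide motifs). Function and variable names are illustrative, and compound SSRs, which MISA also reports, are omitted.

import re

# Minimum repeat counts from the paper's MISA settings, keyed by motif length.
MIN_REPEATS = {1: 10, 2: 6, 3: 5, 4: 5, 5: 5, 6: 5}

def find_ssrs(seq):
    """Yield (start, motif, repeat_count) for perfect SSRs in a DNA string."""
    seq = seq.upper()
    for unit in range(1, 7):
        n_min = MIN_REPEATS[unit]
        # a motif of `unit` bases repeated at least n_min times in a row
        pattern = re.compile(r"(([ACGT]{%d})\2{%d,})" % (unit, n_min - 1))
        for m in pattern.finditer(seq):
            motif = m.group(2)
            # skip motifs that are themselves repeats of a shorter unit (e.g. ATAT)
            if any(motif == motif[:k] * (unit // k)
                   for k in range(1, unit) if unit % k == 0):
                continue
            yield m.start(), motif, len(m.group(1)) // unit

# Example: an (AG) repeat of 7 units embedded in flanking sequence.
print(list(find_ssrs("ccccc" + "AG" * 7 + "ttttt")))  # [(5, 'AG', 7)]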
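The PIC values quoted above (average 0.38, range 0.08 to 0.82) summarize how informative each marker is. PowerMarker computes PIC from allele frequencies; a minimal re-implementation, assuming the standard Botstein et al. (1980) formula PIC = 1 − Σ pi² − Σi Σj>i 2 pi² pj², might look as follows (the function name is illustrative):

from collections import Counter

def pic(allele_calls):
    """PIC of one locus from a list of observed allele calls, e.g. ['a', 'a', 'b']."""
    counts = Counter(allele_calls)
    n = sum(counts.values())
    p = [c / n for c in counts.values()]          # allele frequencies
    hom = sum(pi ** 2 for pi in p)                # expected homozygosity
    cross = sum(2 * p[i] ** 2 * p[j] ** 2
                for i in range(len(p)) for j in range(i + 1, len(p)))
    return 1 - hom - cross

# Two equally frequent alleles give the biallelic maximum:
print(round(pic(['a', 'b']), 4))  # 0.375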
Chronic lymphocytic leukemia with t(14;18) and trisomy 12: a case report. Chronic lymphocytic leukemia (CLL) is a B-cell neoplasm defined by the presence of at least 5 × 10⁹/L monoclonal B lymphocytes in the peripheral blood. It is the most common type of leukemia in adult patients from Western countries. CLL is characterized by a gradual accumulation of small, long-living, immunologically dysfunctional, morphologically mature-appearing B lymphocytes in blood, bone marrow and lymphoid tissues. It has also been reported that CLL cells have a proliferation rate higher than previously recognized, particularly in the lymphoid tissues. Flow cytometry analysis of typical CLL identifies a monotypic B-cell population expressing a low level of surface immunoglobulins, with the light chain being either kappa or lambda, CD5+, CD19+, CD23+, CD79b (dim), and negative for FMC7 and CD10. Clinical presentation, course and outcome are highly variable. Interphase fluorescence in situ hybridization (I-FISH) identifies chromosomal abnormalities in about 80% of cases, most commonly involving 13q14 (55%), 11q22-23 (18%), or 17p13 deletions (7%) and trisomy 12 (16%). Therefore, five prognostic categories have been defined with a statistical model, showing the shortest median survival and treatment-free intervals in patients harboring 17p and 11q deletions, followed by trisomy 12 and a normal karyotype, whereas 13q deletion as the sole abnormality is associated with the best prognosis. We report here a rare case of CLL in a 54-year-old man.
// Copyright 2000-2022 JetBrains s.r.o. and other contributors. Use of this source code is governed by the Apache 2.0 license that can be found in the COPYING file.
package com.friendly_machines.intellij.plugins.native2Debugger;
import com.friendly_machines.intellij.plugins.native2Debugger.impl.DebugProcess;
import com.friendly_machines.intellij.plugins.native2Debugger.impl.Evaluator;
import com.friendly_machines.intellij.plugins.native2Debugger.impl.GdbMiOperationException;
import com.intellij.openapi.vfs.VfsUtil;
import com.intellij.openapi.vfs.VirtualFile;
import com.intellij.ui.ColoredTextContainer;
import com.intellij.ui.SimpleTextAttributes;
import com.intellij.xdebugger.XDebuggerUtil;
import com.intellij.xdebugger.XSourcePosition;
import com.intellij.xdebugger.evaluation.XDebuggerEvaluator;
import com.intellij.xdebugger.frame.XCompositeNode;
import com.intellij.xdebugger.frame.XStackFrame;
import com.intellij.xdebugger.frame.XValueChildrenList;
import org.jetbrains.annotations.NotNull;
import org.jetbrains.annotations.Nullable;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.List;
public class StackFrame extends XStackFrame {
private final HashMap<String, Object> myFrame;
private final DebugProcess myDebuggerSession;
private final XSourcePosition myPosition;
private final String myThreadId;
@Nullable
public XSourcePosition createSourcePositionFromFrame(HashMap<String, Object> gdbFrame) {
VirtualFile p = null;
if (gdbFrame.containsKey("fullname")) {
String file = (String) gdbFrame.get("fullname"); // TODO: or "file"--but that's relative
p = VfsUtil.findFile(Path.of(file), false);
}
// if (p != null && gdbFrame.containsKey("file")) {
// //String file = (String) gdbFrame.get("file");
//
// final Project project = myDebuggerSession.getSession().getProject();
// final PsiManager psiManager = PsiManager.getInstance(project);
// final PsiDocumentManager documentManager = PsiDocumentManager.getInstance(project);
// final PsiFile psiFile = psiManager.findFile(p);
// if (psiFile != null) {
// Document psiDocument = documentManager.getDocument(psiFile);
// p = documentManager.getPsiFile(psiDocument).getVirtualFile();
// //p = psiFile.getVirtualFile();
// }
// }
if (!gdbFrame.containsKey("line")) { // guard: GDB/MI sometimes reports frames without line information
return null;
}
String line = (String) gdbFrame.get("line");
return XDebuggerUtil.getInstance().createPosition(p, Integer.parseInt(line) - 1);
}
public StackFrame(String threadId, HashMap<String, Object> gdbFrame, DebugProcess debuggerSession) {
myThreadId = threadId;
myFrame = gdbFrame;
myDebuggerSession = debuggerSession;
myPosition = createSourcePositionFromFrame(gdbFrame);
}
@Override
public Object getEqualityObject() {
return StackFrame.class;
}
@Override
public XDebuggerEvaluator getEvaluator() {
//return myFrame instanceof Debugger.StyleFrame ? new MyEvaluator((Debugger.StyleFrame)myFrame) : null;
return new Evaluator(myDebuggerSession, this);
}
@Override
public XSourcePosition getSourcePosition() {
return myPosition;
}
@Override
public void customizePresentation(@NotNull ColoredTextContainer component) {
try {
if (myFrame.containsKey("func")) {
String func = (String) myFrame.get("func");
component.append(func, SimpleTextAttributes.REGULAR_ATTRIBUTES);
}
component.append(" at ", SimpleTextAttributes.REGULAR_ATTRIBUTES);
if (myFrame.containsKey("file")) {
String file = (String) myFrame.get("file");
String line = myFrame.containsKey("line") ? (String) myFrame.get("line") : "?";
component.append(file + ":" + line, SimpleTextAttributes.LINK_ATTRIBUTES);
} else if (myFrame.containsKey("addr")) {
component.append((String) myFrame.get("addr"), SimpleTextAttributes.GRAY_ATTRIBUTES);
}
// component.setIcon ?
// TODO
} catch (ClassCastException e) {
component.append("failed to parse " + myFrame.toString(), SimpleTextAttributes.ERROR_ATTRIBUTES);
}
}
@Override
public void computeChildren(@NotNull XCompositeNode node) {
try {
String level = (String) myFrame.get("level");
List<HashMap<String, Object>> variables = myDebuggerSession.getVariables(myThreadId, level);
final XValueChildrenList list = new XValueChildrenList();
for (HashMap<String, Object> variable: variables) {
String name = (String) variable.get("name");
// TODO: optional
String value = variable.containsKey("value") ? (String) variable.get("value") : "?";
list.add(name, new Value(name, value, variable.containsKey("arg")));
}
node.addChildren(list, true);
} catch (ClassCastException | GdbMiOperationException e) {
e.printStackTrace();
}
}
public String getThreadId() {
return myThreadId;
}
public String getLevel() {
return (String) myFrame.get("level");
}
}
package com.shiro.dao;
import com.shiro.pojo.SysUser;
import com.shiro.vo.req.UserPageReqVo;
import com.shiro.vo.resp.UserTableRespVo;
import org.apache.ibatis.annotations.Param;
import java.util.List;
public interface SysUserDao {
int insert(SysUser record);
int insertSelective(SysUser record);
/**
* Query a user by username.
* @param username
* @return
*/
SysUser findByUsername(@Param("username") String username);
/**
* Query a user by primary key id.
* @param userId
* @return
*/
SysUser selectByPrimaryKey(String userId);
/**
* Paged query of users (including search criteria).
* @param userPageReqVo
* @return
*/
List<UserTableRespVo> selectAll(UserPageReqVo userPageReqVo);
/**
* Update user information.
* @param sysUser
* @return
*/
int updateSelective(SysUser sysUser);
/**
* Delete users singly or in batch.
* @param sysUser
* @param list
* @return
*/
int deletedUsers(@Param("sysUser") SysUser sysUser, @Param("list") List<String> list);
/**
* Count users by a collection of department ids.
* @param deptIds
* @return
*/
int selectUserInfoByDeptIds(List<String> deptIds);
}
Charles Stuart (abolitionist)
Biography
Charles Stuart was born in 1783 in Bermuda, as shown by Canadian census records (countering assertions that he was born in Jamaica). His father was presumably a British army officer posted to the Bermuda Garrison, possibly Lieutenant Hugh Stewart of the detachment of invalid regular soldiers belonging to the Royal Garrison Battalion. That battalion was disbanded in 1784 following the Treaty of Paris, probably resulting in the family's emigration from the colony. The surviving parish registries for the period, compiled by A.C. Hollis-Hallett as Early Bermuda Records, 1619-1826, list no birth of a Stuart, Stewart, or Steward in or about 1783 other than an unnamed child of Lieutenant Steward, baptised in St. George's on 8 December 1781.
Stuart was educated in Belfast and then pursued a military career as his first vocation.
He left the military in 1815 and, in 1817, emigrated to Upper Canada (Ontario) with a tidy pension. He settled in Amherstburg, Upper Canada, and began his pursuit of a cause both in Canada and England. By 1821, he was involved with the black refugees (fugitive slaves) who were beginning to arrive in the area from south of the border. He began a small black colony near Amherstburg, where he actively assisted the new arrivals to start new lives as farmers.
In 1822, Stuart took a position as the principal of Utica Academy in New York State. There he met the young Theodore Dwight Weld, who became one of the leaders of the American abolitionist movement during its formative years. By 1829, Stuart had returned to England for a time, where he wrote some of the most influential anti-slavery pamphlets of the period.
In 1840 he attended the World Anti-Slavery Convention in June. One hundred and thirty of the more notable delegates were included in a large commemorative painting by Benjamin Haydon. This picture is now in the National Portrait Gallery in London.
In 1850 he retired to a farm at Lora Bay on Georgian Bay, near Thornbury, Ontario. Any product made from the use of slave labour was forbidden in his home.
Jordanian Foreign Policy in Confrontation with Extremism and Terrorism: The International Alliance Is a Model The study aims at researching Jordanian foreign policy in the confrontation with extremism and terrorism, concentrating on the factors and determinants that pushed Jordan to enter the International Alliance for fighting terrorism, especially the fight against the Daish organization. Correspondingly, the significance of the study appears through discussing the role of foreign policy and its instruments in limiting the phenomenon of extremism and terrorism, and the most important strategies that states should follow, in light of internal and external environmental impacts, in confronting extremism and terrorism within preventive procedures and the management of terrorist crises. The study employed the decision-making method to achieve its objectives and answer its questions, the main question being: what are the most important determinants of Jordanian foreign policy in joining the International Alliance against terrorism, especially fighting the Islamic State organization (Daish)? The study reached a number of results, the most important of which is that Jordan was able to take a group of procedures and arrangements at the level of foreign policy throughout the period of confronting extremism and terrorism, within the activation of military, diplomatic and media instruments, whereas the economic instrument appeared to be among the weakest instruments of foreign policy and a negative factor in the decision-making process. The study showed that Jordan's joining of the International Alliance to fight the Daish organization was the result of the great impact of the geographic determinant and the organization's presence in geographic territories that formed a strategic danger to Jordan. The study also recommended the necessity of treating social and economic problems as an important and supporting element in enhancing the instruments of Jordanian foreign policy.
1. Field of the Invention
The present invention relates to a voltage regulator circuit applied to an IC for driving a liquid crystal panel used in a mobile telephone, a digital camera or the like.
2. Description of Related Art
A liquid crystal panel driving IC used in a mobile telephone, a digital camera or the like is increasingly made faster in transmission of data (as high-speed serial transmission) and smaller in size. Due to this, the liquid crystal panel driving IC is often designed by a fine and low voltage process (hereinafter, referred to as “the low voltage process”) capable of using higher-speed and smaller-sized elements. In such a low voltage process, a voltage with which an element is broken down (withstand voltage of the element) necessarily falls. It is, therefore, required to pay attention to the range of a voltage to be used.
Furthermore, a power supply voltage (battery voltage) supplied from a power supply (battery) to the liquid crystal panel driving IC is often higher than the voltage used in such a low voltage process. Due to this, it is required to use the power supply voltage after regulating the voltage to an appropriate voltage using a voltage regulator circuit included in the liquid crystal panel driving IC.
Moreover, in a normal case, the power supply voltage is stabilized by a device (such as a stabilization circuit) arranged between the power supply and the liquid crystal panel driving IC, and is supplied to the liquid crystal panel driving IC as a supply voltage. However, not only the average consumption current but also the instantaneous consumption current is desired to be as low as possible for the liquid crystal panel driving IC, since the stabilization circuit includes such a function as a function to prevent overcurrent.
FIG. 1 shows a configuration of a general voltage regulator circuit 110 (hereinafter, referred to as “the voltage regulator circuit 110”). The voltage regulator circuit 110 includes a differential amplifier circuit AMP1, a first resistor element R1 (hereinafter, “the resistor element R1”), and a second resistor element R2 (hereinafter, “the resistor element R2”).
The differential amplifier circuit AMP1 is connected to a high-voltage power supply [VDD] supplying a high voltage VDD and a low-voltage power supply [VSS] supplying a low voltage VSS (ground voltage GND) lower than the high-voltage VDD, and operates with the voltage between the high-voltage VDD and the low-voltage VSS. The differential amplifier circuit AMP1 includes a positive-side input terminal +IN that is a first input terminal, a negative-side input terminal −IN that is a second input terminal, and an output terminal. A reference voltage Vref is supplied to the positive-side input terminal +IN.
One end of the resistor element R1 is connected to the output terminal of the differential amplifier circuit AMP1. One end of the resistor element R2 is connected to the other end of the resistor element R1, and the other end of the resistor element R2 is connected to the low-voltage power supply [VSS]. One end of the resistor element R2 is also connected to the negative-side input terminal −IN via a signal line. One end of a smoothing capacitor C1 is connected to the output terminal of the differential amplifier circuit AMP1 and one end of the resistor element R1 via an output node, and the other end of the smoothing capacitor C1 is connected to the low-voltage power supply [VSS].
The resistor elements R1 and R2 divide an output voltage Vout100 output from the differential amplifier circuit AMP1 into voltages to generate a divided voltage Vmon100 on one end of the resistor element R2. The differential amplifier circuit AMP1 amplifies the difference between the reference voltage Vref supplied to the positive-side input terminal +IN and the divided voltage Vmon100 supplied to the negative-side input terminal −IN. The smoothing capacitor C1 smoothes the output voltage Vout100 output from the differential amplifier circuit AMP1.
FIG. 2 shows a configuration of the differential amplifier circuit AMP1. The differential amplifier circuit AMP1 includes first and second N channel MOS (Metal Oxide Semiconductor) transistors MN1 and MN2 (hereinafter, referred to as “the transistors MN1 and MN2”), first to third P channel MOS transistors MP1, MP2, and MP3 (hereinafter, referred to as “the transistors MP1, MP2, and MP3”), and first and second constant current sources.
Sources of the transistors MN1 and MN2 are connected to one node in common. Gates of the transistors MN1 and MN2 are used as the negative-side input terminal −IN and the positive-side input terminal +IN of the differential amplifier circuit AMP1, respectively.
A first constant current source is provided between the sources of the transistors MN1 and MN2 and the low-voltage power supply [VSS]. For example, the first constant current source is a third N channel MOS transistor MN3 (hereinafter, referred to as “the transistor MN3”). The sources of the transistors MN1 and MN2 are connected to the drain of the transistor MN3, and the low-voltage power supply [VSS] is connected to the source thereof. A bias voltage Vbias is supplied to the gate of the transistor MN3 for turning on the transistor MN3.
Sources of the transistors MP1 and MP2 are connected to the high-voltage power supply [VDD] in common, gates thereof are connected to one node in common, and drains thereof are connected to drains of the transistors MN1 and MN2, respectively. The gate of the transistor MP1 is connected to the drain of the transistor MN1.
The source of the transistor MP3 is connected to the high-voltage power supply [VDD], the gate thereof is connected to the drain of the transistor MN2, and the drain thereof is connected to one end of the resistor element R1.
A second constant current source is provided between the drain of the transistor MP3 and the low-voltage power supply [VSS]. For example, the second constant current source is a fourth N channel MOS transistor MN4 (hereinafter, referred to as “the transistor MN4”). The drain of the transistor MP3 is connected to the drain of the transistor MN4 and the low-voltage power supply [VSS] is connected to the source thereof. The bias voltage Vbias is supplied to the gate of the transistor MN4 for turning on the transistor MN4.
Next, operation performed by the voltage regulator circuit 110 will be described below.
The reference voltage Vref is supplied to the positive-side input terminal +IN of the differential amplifier circuit AMP1, and the divided voltage Vmon100 is supplied to the negative-side input terminal −IN of the differential amplifier circuit AMP1. Due to this, the differential amplifier circuit AMP1 operates so that the voltage supplied to the negative-side input terminal −IN is equal to that supplied to the positive-side input terminal +IN, that is, equal to the reference voltage Vref.
If Vref>Vmon100 (namely, if the output voltage Vout100 is lower than a voltage-of-interest), then the ON-resistance of the transistor MP3 falls, and a current I100 flows into the smoothing capacitor C1 from the high-voltage power supply [VDD] via the differential amplifier circuit AMP1. As a result, the output voltage Vout100 rises. If Vref<Vmon100 (if the output voltage Vout100 is higher than the voltage-of-interest), then the ON-resistance of the transistor MP3 rises, and a current Isink flows from the smoothing capacitor C1 into the transistor MN4 included in the differential amplifier circuit AMP1. As a result, the output voltage Vout100 falls. By repeating this operation, the output voltage Vout100 is held constant at the voltage-of-interest. In this case, the output voltage Vout100 (= the voltage-of-interest) is represented by the following equation:

Vout100 = Vref × (R1 + R2) / R2
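As a quick numeric check of this relation, a minimal sketch follows; the component values are hypothetical and not taken from the embodiment. With Vref = 1.2 V and R1 = R2, the divider feeds half of Vout100 back to the negative-side input, so the loop settles at twice the reference:

def regulated_output(vref, r1, r2):
    """Steady-state output: the feedback loop forces Vmon100 = Vref, and since
    Vmon100 = Vout100 * r2 / (r1 + r2), it follows that
    Vout100 = Vref * (r1 + r2) / r2."""
    return vref * (r1 + r2) / r2

# Hypothetical values: Vref = 1.2 V, R1 = R2 = 100 kOhm  ->  2.4 (volts)
print(regulated_output(1.2, 100e3, 100e3))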
def docs(**kwargs):
def wrapper(func):
kwargs['produces'] = ['application/json']
if not hasattr(func, '__apispec__'):
func.__apispec__ = {'parameters': [], 'requests': {}, 'responses': {}}
extra_parameters = kwargs.pop('parameters', [])
extra_responses = kwargs.pop('responses', {})
func.__apispec__['parameters'].extend(extra_parameters)
func.__apispec__['responses'].update(extra_responses)
func.__apispec__.update(kwargs)
return func
return wrapper
# @request_schema(IndexSchema(strict=True), location='body')
# @request_schema(IndexSchema(strict=True), location='form')
# @request_schema(IndexSchema(strict=True), location='query')
# @request_schema(IndexSchema(strict=True), location='headers')
# @request_schema(IndexSchema(strict=True), location='path')
def request_schema(schema, location, validate=True):
if callable(schema):
schema = schema()
if not location:
location = 'body'
apispec_options = {
'default_in': location
}
def wrapper(func):
if not hasattr(func, '__apispec__'):
func.__apispec__ = {'parameters': [], 'requests': {}, 'responses': {}}
func.__apispec__['requests'][location] = {
'schema': schema,
'options': apispec_options
}
if validate:
if not hasattr(func, '__request_schemas__'):
func.__request_schemas__ = {}
func.__request_schemas__[location] = schema
return func
return wrapper
use_kwargs = request_schema
# @response_schema(IndexSchema(), 200, description='Standard response', clean=True)
def response_schema(schema, code=200, description=None, validate=True, clean=False):
if callable(schema):
schema = schema()
def wrapper(func):
if not hasattr(func, '__apispec__'):
func.__apispec__ = {'parameters': [], 'requests': {}, 'responses': {}}
func.__apispec__['responses'][code] = {
'schema': schema,
'description': description,
}
if validate:
if not hasattr(func, '__response_schemas__'):
func.__response_schemas__ = {}
func.__response_schemas__[code] = {
'schema': schema,
'clean': clean
}
return func
return wrapper
marshal_with = response_schema
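# ---------------------------------------------------------------------------
# Hypothetical usage sketch of the decorators above. IndexSchema and the
# handler are illustrative only (not part of this module); a marshmallow-style
# Schema is assumed.
# ---------------------------------------------------------------------------
from marshmallow import Schema, fields

class IndexSchema(Schema):
    name = fields.Str(required=True)

@docs(tags=['index'], summary='Example endpoint')
@request_schema(IndexSchema(), 'query')
@response_schema(IndexSchema(), 200, description='Echo of the validated query')
async def index(request):
    # A real aiohttp-apispec-style middleware would validate the request
    # against __request_schemas__ before this handler runs.
    return {'name': 'example'}

# The decorators only attach metadata to the function object, e.g.:
# index.__apispec__['produces']       -> ['application/json']
# index.__apispec__['responses'][200] -> {'schema': ..., 'description': ...}
# index.__request_schemas__['query']  -> the IndexSchema instance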
|
import {all} from 'redux-saga/effects';
import {watchSignin} from './Auth/Signin';
// Root saga: run every watcher saga in parallel.
export default function* rootSaga() {
    yield all([
        watchSignin(),
    ])
}
|
24 Assessment of the effects of technique on pulmonary arterial pulse wave velocity measurement Aim The flow-area (QA) technique allows measurement of pulse wave velocity (PWV) from a single phase contrast slice. However, in the pulmonary circulation reflected waves arrive during systole and may cause erroneous measurements using this technique. The aim of the study was to compare three post-processing calculations, one of which avoids the reflected wave and another which corrects for it, on the measurement of pulmonary PWV and its reproducibility. Materials and methods 10 young healthy volunteers (YHV) (30% male, mean age 31.5 ± 7.6) and 20 older healthy volunteers (OHV) (45% male, mean age 60.2 ± 4.0) underwent MRI using phase contrast sequences through the main pulmonary artery (MPA), right pulmonary artery (RPA) and left pulmonary artery (LPA). Measurements were repeated at 6 months in the YHV cohort and on the same visit in the OHV cohort. QA PWV was calculated using three techniques: QATrad = ΔQ/ΔA; QA3 = ΔQ/ΔA (using only the first three data points in the reflectionless upstroke); and QAInv = √(∑ΔA²/∑ΔQ²). Results QATrad produced significantly higher results than QA3 (p < 0.001) and QAInv (p < 0.001), whilst there was no difference between QA3 and QAInv (p = 0.41). In scan-rescan reproducibility, QAInv yielded improved precision over QATrad and QA3: mean (SD) of PWV differences = −0.46 (0.98) ms⁻¹, 0.05 (0.88) ms⁻¹, and 0.01 (0.63) ms⁻¹ for the QATrad, QA3, and QAInv of the MPA respectively; 0.17 (0.43), 0.19 (0.83) and 0.06 (0.25) ms⁻¹ for the QATrad, QA3, and QAInv of the RPA respectively; and −0.29 (0.31), −0.01 (0.39) and −0.06 (0.32) ms⁻¹ for the QATrad, QA3, and QAInv of the LPA respectively (see Figure 1). Conclusion Calculations which account for wave reflections yield lower PWV than those that don't, suggesting significant confounding effects from these early reflected waves. Combining a phase contrast sequence acquisition through the right pulmonary artery with a post-processing technique to account for wave reflections yields the most reproducible measurements of pulmonary PWV. [Abstract 24, Figure 1: Bland-Altman plots comparing interscan PWV repeatability using the 3 techniques in the 3 pulmonary arterial locations.]
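For illustration only, the core QA calculation amounts to fitting the slope of flow against area over the early, reflection-free upstroke; restricting the fit to the first three samples mimics the QA3 variant. A minimal sketch on synthetic waveforms (the arrays, units and names below are assumed, not study data):

import numpy as np

Q = np.array([0.0, 40.0, 85.0, 130.0, 150.0, 140.0])  # flow in cm^3/s (synthetic)
A = np.array([4.00, 4.05, 4.11, 4.17, 4.25, 4.33])    # lumen area in cm^2 (synthetic)

def qa_pwv(flow, area, n_points=None):
    """PWV as the fitted slope dQ/dA; n_points limits the fit to the early upstroke."""
    if n_points is not None:
        flow, area = flow[:n_points], area[:n_points]
    slope, _ = np.polyfit(area, flow, 1)  # units here: cm/s
    return slope

pwv_trad = qa_pwv(Q, A)             # fit over the whole upstroke
pwv_qa3 = qa_pwv(Q, A, n_points=3)  # first three reflection-free points only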
|
/*============================================================
**
** Header: testharness.h
**
** Purpose: Primary header file for test harness.
**
**
** Copyright (c) 2006 Microsoft Corporation. All rights reserved.
**
** The use and distribution terms for this software are contained in the file
** named license.txt, which can be found in the root of this distribution.
** By using this software in any fashion, you are agreeing to be bound by the
** terms of this license.
**
** You must not remove this notice, or any other, from this software.
**
**
**=========================================================*/
#ifndef _TESTHARNESS_H
#define _TESTHARNESS_H
#ifdef WIN32
#include <windows.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <direct.h>
#include <time.h>
#include <math.h>
#define COMPILEANDLINKEXE "cl /nologo /DWIN32 /D_X86_=1 /Di386=1 /DPAL_PORTABLE_SEH=1 /Gz /Zi /Zl /Od /W3 rotor_pal.lib msvcrtd.lib /Fe"
#define COMPILEANDLINKDLL "cl /nologo /DWIN32 /D_X86_=1 /Di386=1 /DPAL_PORTABLE_SEH=1 /Gz /Zi /Zl /Od /W3 rotor_pal.lib msvcrtd.lib /LD /Fe"
#define EXEEXT ".exe"
#define DLLEXT ".dll"
#define EOLN "\n"
#else // WIN32
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <sys/time.h>
#include <sys/resource.h>
#include <stdlib.h>
#include <unistd.h>
#include <math.h>
#include <string.h>
#include "harness_comp_commands.h"
#define EOLN "\015\012"
#define EXEEXT ""
#endif // WIN32
#include <errno.h>
#define READ_BUF_SIZE 8192
#define LINE_BUF_SIZE 1024
#define MAX_OUTPUT 2048
#ifndef MAX_PATH
#define MAX_PATH 255
#endif
#define TSTTYPE_UNKNOWN "unknown"
#define TSTTYPE_DEFAULT "DEFAULT"
#define TSTTYPE_CLIENT "CLIENT"
#define TSTTYPE_SERVER "SERVER"
#define TSTTYPE_CLNTSRV "CLNTSRV"
typedef enum
{
UNKNOWN_TYPE = 0,
DEFAULT_TYPE,
CLIENT_TYPE,
SERVER_TYPE,
CLNTSRV_TYPE,
} TEST_TYPE;
#define TSTLANG_UNKNOWN "unknown"
#define TSTLANG_C "c"
#define TSTLANG_CPP "cpp"
typedef enum
{
TEST_LANG_UNKNOWN = 0,
TEST_LANG_C,
TEST_LANG_CPP
} TEST_LANG;
#define TEST_INFO_FILE "testinfo.dat"
#define TEST_AREA "palsuite"
#define TSTPHASE_BUILD "BUILD"
#define TSTPHASE_EXEC "EXEC"
#define TSTRESULT_PASS "PASS"
#define TSTRESULT_FAIL "FAIL"
#define TSTRESULT_DIRERR "DIR_ERROR"
#define TSTRESULT_NOINFO "NO_INFO"
#define TSTRESULT_DISABLED "DISABLED"
#define TSTRESULT_CFG_ERROR "CONFIG_ERROR"
#define SZ_BLANKDATA "-"
#define C_ESCAPE '\\'
#define C_COMMA ','
#define ENV_DIR "TH_DIR"
#define ENV_DIR_ALT "TH_TC_DIR"
#define ENV_CONFIG "TH_CONFIG"
#define ENV_XRUN "TH_XRUN"
#define ENV_RESULTS "TH_RESULTS"
#define ENV_SUBTEST "TH_SUBTEST"
#define ENV_SUMRES "TH_SUMRES"
#define PLATFORM getenv("PAL_PLATFORM")
#define BUILDTYPE getenv("PAL_BUILDTYPE")
#define BUILDNUMBER getenv("PAL_BUILDNUMBER")
#endif /* _TESTHARNESS_H */
|
from . import text_to_sequence
from . import _stop_words_en
from . import _stop_words_zh
|
The book industry in Germany The article analyzes book publishing and book distribution in Germany. Some features and tendencies in the development of the German book industry in the modern period are covered. The article analyzes the geography of the large publishing centers and profiles German publishing houses that have their own history, traditions and market segment, publish books in their respective fields of knowledge and, finally, have their own philosophy. Export markets for German books and the problems of distributing editions abroad are considered. Book market segments are highlighted. The role and place of electronic publishing, the audiobook sector and the e-book within the general system of the book industry are clarified. The features of book distribution, the sales volumes of the largest publishers in Germany, and the specifics of the activity of book trade networks are shown.
|
Quasi-Distributed Vibration Sensing System for Transformers Using a Phase-Sensitive OFDR A quasi-distributed vibration sensing system based on a phase-sensitive optical frequency-domain reflectometer is proposed for monitoring the vibration of power transformer oil tanks. Compared with traditional accelerometers, the proposed system has high electromagnetic interference immunity. In this article, the principles of localization and vibration sensing are introduced. Then, the system layout and the vibration signal demodulation process are explained in detail. Next, the vibration sensing performance of the system is calibrated experimentally. Finally, vibration monitoring tests are carried out in a laboratory environment and on-site to verify the vibration detection capability of the system. The proposed system can be used to measure the time domain, frequency domain, and spatial distribution of oil tank vibration, which can be used for the online monitoring of the mechanical state of power transformers.
|
Coping with the COVID-19 pandemic: the role of physical activity. An international position statement Since its appearance at the end of 2019 and the beginning of 2020 in Wuhan (China), the coronavirus disease 2019 (COVID-19) has spread rapidly worldwide. The outbreak was declared a pandemic in March 2020. Home confinement, travel restrictions, the closing of venues for exercise and recreation, and the cancellation of indoor and outdoor events including sport have been characteristic features of the public health responses around the world. The result has been a reduction in the levels of physical activity experienced by large numbers of the world population of all ages. This has caused considerable alarm for physical activity professionals around the world. In response, this position statement makes a case for the importance of continuing to embrace regular physical activity alongside the existing public health strategies that are being implemented in the management of the effects of the virus internationally. To be consistent with these policies this activity should always be away from others (application of social distancing) and preferably outdoors. Some potential benefits specific to the current situation, are suggested by reference to existing knowledge about the significance of exercise in the maintenance of a healthy immune system. However, these recommendations need to be viewed primarily within an unchanging context of the long-term value of healthy levels of physical activity for population well-being and quality of life. This has been made the more important on account of the potential harmful effects of the current reduced levels. Some recommendations for appropriate dosage and types of PA for those with different conditions are provided.
|
Orphan receptor GPR50 attenuates inflammation and insulin signaling in 3T3-L1 preadipocytes. Type 2 diabetes (T2DM) is characterized by insulin secretion deficiencies and systemic insulin resistance (IR) in adipose tissue, skeletal muscle, and the liver. Although the mechanism of T2DM is not yet fully known, inflammation and insulin resistance play a central role in the pathogenesis of T2DM. G protein-coupled receptors (GPCRs) are involved in endocrine and metabolic processes as well as many other physiological processes. GPR50 (G protein-coupled receptor 50) is an orphan GPCR that shares the highest sequence homology with melatonin receptors. The aim of this study was to investigate the effect of GPR50 on inflammation and insulin resistance in 3T3-L1 preadipocytes. GPR50 expression was observed to be significantly increased in the adipose tissue of obese T2DM mice, while GPR50 deficiency increased inflammation in 3T3-L1 cells and induced the phosphorylation of AKT and insulin receptor substrate (IRS) 1. Furthermore, GPR50 knockout in the 3T3-L1 cell line suppressed PPAR-γ expression. These data suggest that GPR50 can attenuate inflammatory levels and regulate insulin signaling in adipocytes. Furthermore, the effects are mediated through the regulation of the IRS1/AKT signaling pathway and PPAR-γ expression.
|
Negative frequency-dependent selection or alternative reproductive tactics: maintenance of female polymorphism in natural populations Background Sex-limited polymorphisms have long intrigued evolutionary biologists and have been the subject of long-standing debates. The coexistence of multiple male and/or female morphs is widely believed to be maintained through negative frequency-dependent selection imposed by social interactions. However, remarkably few empirical studies have evaluated how social interactions, morph frequencies and fitness parameters relate to one another under natural conditions. Here, we test two hypotheses proposed to explain the maintenance of a female polymorphism in a species with extreme geographical variation in morph frequencies. We first elucidate how fecundity traits of the morphs vary in relation to the frequencies and densities of males and female morphs in multiple sites over multiple years. Second, we evaluate whether the two female morphs differ in resource allocation among fecundity traits, indicating alternative tactics to maximize reproductive output. Results We present some of the first empirical evidence collected under natural conditions that egg number and clutch mass were higher in the rarer female morph. This morph-specific fecundity advantage gradually switched with the population morph frequency. Our results further indicate that all investigated fecundity traits are negatively affected by relative male density (i.e. operational sex ratio), which confirms male harassment as a selective agent. Finally, we show a clear trade-off between qualitative (egg mass) and quantitative (egg number) fecundity traits. This trade-off, however, is not morph-specific. Conclusion Our reported frequency- and density-dependent fecundity patterns are consistent with the hypothesis that the polymorphism is driven by a conflict between sexes over optimal mating rate, with costly male sexual harassment driving negative frequency-dependent selection on morph fecundity. Background Evolutionary biologists have long studied visible polymorphisms as they are excellent model systems to examine microevolutionary processes. Polymorphisms with morphs co-existing at relatively stable frequencies appear to be common, but this phenomenon can only persist under a limited range of conditions, one of these being negative frequency-dependent selection (NFDS). NFDS arises when individuals of a rare morph experience a higher fitness than those of a more common type. Over generations, and in the absence of other mechanisms, NFDS should lead to a balanced polymorphism, typically with limited fluctuations along an equilibrium frequency of the involved morphs. Classic examples of NFDS include coexistence of different colour morphs, to gain access to mates, to challenge predators and to lower sexual conflict intensity. Although the idea of NFDS has been appreciated for decades, relatively few empirical studies have tested the validity of this concept under natural, unmanipulated field conditions. Especially rare are studies which relate natural geographical variation in morph frequencies, a putative selective agent and fitness parameters of the involved morphs with one another [7,10]. Sex-limited polymorphisms represent excellent model systems to study the nature of diversifying selection and consequently they have been subject to a variety of experimental and theoretical studies.
Particularly popular are studies on male polymorphisms, whose maintenance tends to be explained by a fitness advantage to the rare morph relative to the common phenotypes in the competition over mates (e.g. sneakers do better when territorials predominate; reviewed by Oliveira et al.). Over the last few decades, however, it has become clear that polymorphisms restricted to the female sex are more common in nature than previously thought. Yet the underlying mechanisms that maintain phenotypic and genetic variation within females remain unresolved in many cases. Female polymorphisms are often considered to have evolved as a counter-adaptation to reduce costs of harassment imposed by mate-searching males; e.g. butterflies, diving beetles, African bat bugs, damselflies. The wider context of this proposed mechanism is sexual conflict over optimal mating rate. Evidently, females need males to fulfil their reproductive needs. However, obtrusive males may reduce female fitness by exceeding the females' optimal number of matings. Mate-searching males are considered to face fewer cognitive challenges when confronted with only one, rather than multiple female phenotypes coexisting within populations, some of which may appear like males (i.e. andromorphs). Therefore, on a phenotypic level, females may experience diversifying sexual selection to avoid sexual harassment. If the above arguments hold, then it is highly likely that female polymorphism is maintained by NFDS driven by social interactions between sexes. In much the same way that predators form a search image for the most common cryptic prey type, increased male sexual interest has been observed towards the most common female morph in a given population [19]. Recent studies showed an inverse relationship between morph-specific fecundity and morph-specific frequency in the population and suggested male harassment as the most likely selective agent. Although quantification of this selective agent has been understudied in past studies, support may come from density-dependent effects on fitness. Indeed, the overall intensity of male harassment may rise with male density, either absolute or relative to female density (i.e. operational sex ratio), because male-female interactions occur more frequently under these conditions. Thus a thorough investigation of NFDS in female polymorphic systems entails evaluating the role of female morph frequency together with male densities on fitness-related parameters, which forms the first aim of the current study. In addition, sex-limited polymorphisms are frequently considered as alternative reproductive tactics (ARTs), i.e. a discontinuous set of selected traits to maximize reproductive output in two or more alternative ways. Although repeatedly studied in males, recent observations in several species of owl, lizard and insect collectively indicate that female morphs may also represent ARTs. For example, in the polymorphic lizard Uta stansburiana, combined density-dependent and negative frequency-dependent interactions among conspecifics determine the relative success of orange (produce many small eggs, r-strategy) and yellow (fewer but larger eggs, K-strategy) throated females. Female morphs may therefore differently allocate resources towards fitness-related traits, potentially resulting in trade-offs among life-history and/or physiological traits.
Although the majority of the studies cited above provide a promising new research avenue, they should be treated with caution, since many of them are performed with limited spatial replicates, without temporal replicates and/or with small sample sizes within populations. This makes it difficult to reach firm conclusions as to whether female morphs represent ARTs, especially in spatially and temporally heterogeneous environments. In this study, we examine morph-specific variation in fecundity (i.e. egg number, egg mass, clutch mass and relative body mass) under natural conditions in multiple years and across six populations, which show extreme variation in morph frequencies. The aim of this study is to evaluate two hypotheses which try to explain maintenance of female polymorphism. Based on the first hypothesis we expect that relative male density, as a proxy for intensity of male harassment and meanwhile the suggested selective force, negatively affects overall female fecundity. In doing so, we simultaneously test whether a frequency-dependent fecundity advantage exists for the rare female morph due to a lower positive frequency-dependent male detection rate. Testing the second hypothesis, we explore whether female morphs exhibit ARTs in which resources are allocated differently into qualitative (egg mass) or quantitative (egg number) fecundity traits, and meanwhile account for potential spatial and temporal variation. Model system Female polymorphism has been observed in more than a hundred damselfly and dragonfly species. Phenotypic ratios in laboratory cross experiments are consistent with the hypothesis that this polymorphism is genetically controlled by a single autosomal locus, with a number of alleles equal to the number of female morphs; reviewed by. Large geographic variation in frequencies and densities of males and female morphs has been described in several species, which allows us to investigate the role of social interactions in maintaining sex-limited polymorphisms. We studied the common North American damselfly, Nehalennia irene, for which male harassment estimates towards female morphs have been shown to vary in a positive frequency-dependent manner. Nehalennia irene is neither an endangered nor a protected species (see COSEWIC, federal government Canada) and therefore our research complies with the Convention on Biological Diversity and the Convention on the Trade in Endangered Species of Wild Fauna and Flora. It is a small non-territorial species, which inhabits marshy or boggy waters and exhibits a discrete polymorphism restricted to the female sex. Female morphs are easily classified into andromorphs or gynomorphs based on their body coloration. Thus, while andromorph females resemble the conspecific male's body blue coloration and melanin pattern, gynomorph females have distinctive yellowish lateral thorax sides and a less conspicuous abdominal melanin pattern; for colour figures see, for pictures see. The species has one generation per year, with reproduction occurring between early June and mid-August. After locating a potential mate, a male will attempt to grasp the individual in the so-called tandem formation, i.e. when the male succeeds in attaching his anal appendages to the individual's prothorax. This tandem formation can last several hours (AI, personal observation). If receptive, the female cooperates by bending her abdomen towards the male's secondary genitalia (2nd abdominal segment) to form a 'copulation wheel'.
Very little additional information on reproductive biology is known for this species, but we expect that, similar to related damselfly species, ovarial follicles of adult females can develop within one or a few days into roughly two hundred mature eggs. In N. irene, females lay eggs in floating pieces of dead plant material while the female is tandem-guarded by her last successful male. Several clutches of eggs are laid throughout a female's lifetime. Study sites and sample procedures Our previous work with N. irene indicated large spatial, yet limited temporal variation in female morph frequencies among populations. Specifically, andromorph frequencies range from 0 to >90% throughout the species' distribution range over Canada. For our current aims we selected six study populations that differed significantly in social conditions (see Table 1). Frequency and density estimates were obtained in a manner similar to that described in Van Gossum et al. In short, individuals were randomly captured with an insect net while walking slowly through the reproductive area, sweeping figure-eight patterns and recording the time elapsed. All caught males, andromorphs and gynomorphs were counted and marked with a permanent marker prior to release to avoid multiple counts. Andromorph frequency (proportion andromorph females), operational sex ratio (OSR, proportion males relative to females in the reproductive zone) and male density (number of males caught per minute) were calculated and collectively quantify the social environment. Calculating these parameters including or excluding immature individuals gave very similar results, see also. Hence, we here use data based on mature, thus reproductively active, individuals given that the aim of the current study deals with sexually active individuals. OSR and male density can be used as a proxy for overall male harassment in a given population. Each of the six populations was monitored during the reproductive season over three consecutive years. Sample collection was carried out on several days throughout the reproductive season (mean ± SE: 4 ± 0.5 sample days; see Additional file 1), always between 9 am and 3 pm. Andromorphs and gynomorphs were collected in an alternating manner to maintain a balanced design and to control for potential diurnal variation in egg number. We aimed to collect 25 adult andromorphs and 25 gynomorphs in each population for each investigated year. Measurements for relative body mass (see below) were performed in all three years (N = 43 ± 3 females per population and year; total N = 772) and the three other fecundity estimates were investigated in two successive years (N = 44 ± 3 females per population and year; total N = 547). For more detailed sample sizes per population and per year, see Additional file 1. Mating status at the moment of capture was noted, i.e. being single or mating (i.e. involved in tandem or copulation). All individuals were stored for further measurements in 95% ethanol immediately after capture. Fecundity estimates First, an individual was placed on a sheet of absorbent paper for exactly two minutes to allow standardised evaporation and absorption of most of the ethanol. Then it was weighed on a Kern & Sohn GmbH 870 balance (accuracy 0.1 mg). Next, a digital picture was taken (Nikon D70/Tamron macro lens 90 mm 1:2.8) of the right hind wing.
Using ImageJ 1.38x, wing length was measured from the second antenodal cross vein to the stigma; see for more details. Residuals of body mass were calculated by regressing body mass against wing length and were used as a measure for relative body mass (RBM). Positive values indicate relatively heavy individuals for a given wing size, while negative values indicate relatively light individuals. RBM is not only considered an estimate for body condition, it is also suggested to increase with female fecundity in various insect taxa. Hence, RBM is here treated as a coarse measure of overall fecundity. Dissection of specimens was performed under a Leica MZ 12.5 stereomicroscope. Abdominal sternites were removed and fifty developed eggs were isolated on a pre-weighed aluminum foil. This high number of eggs was chosen to account for potential variation in weight among eggs of the same clutch. Eggs were then dried in an incubator (Binder APT.line™) at 60°C for 12 h and weighed on a Sartorius SE2F balance (accuracy 1 μg). Average dry weight of a single egg was calculated and further considered as a measure of egg quality since more nutrients are expected in heavier eggs. The total number of developed eggs was also counted for each specimen, and considered a quantitative measure of fecundity. Multiplying egg number with egg mass gave clutch mass as a comprehensive measure of fecundity (i.e. quality*quantity). By measuring twice a set of randomly chosen individuals among populations and years, the repeatability of our measures could be evaluated. Repeatability was calculated as the proportion of the variation between individuals to the total variation, i.e. between and within individuals. A limited measurement error was observed: body mass (R = 0.90, N = 106), wing length (R = 0.96, N = 136), egg mass (R = 0.78, N = 46), and egg number (R = 1.00, N = 66). Statistical analyses To evaluate our first hypothesis, we initially tested for statistical dependence in our estimates of the social environment. Using single regressions and adding study population as a repeated measure, andromorph frequency, OSR and male density appeared not significantly related to one another (all P ≥ 0.17 and Spearman R² ≤ 0.32). This allowed us to simultaneously test the predictive value of these parameters, along with female morph and their morph-specific interactions (see Table 1) in the same mixed ANCOVA models. Four such models were fitted to investigate variation in the fecundity measures, while controlling for spatial and temporal variation, as well as potential differences between mated and single females, by adding study population, year and mating status as random variables. With regard to our second hypothesis, we tested whether both morphs differentially invested in qualitative (egg mass) or quantitative (egg number) reproductive traits. In doing so, we fitted an ANCOVA model with egg number treated as response variable and female morph, egg mass and their interactions as explanatory variables. A significant effect of the interaction would suggest a morph-specific trade-off among both reproductive traits. Meanwhile, we controlled for annual and spatial variation, as well as potential effects of mating status, in this analysis by adding year, study population and mating status as random variables to the models. All analyses were performed in SAS 9.2 (SAS Institute Inc, Cary, NC, USA).
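The repeatability quoted above corresponds to the standard intraclass proportion, R = s²(among) / (s²(among) + s²(within)), where the two terms are the among- and within-individual variance components of the duplicate measurements. As for the ANCOVA models, a simplified fixed-effects analogue of the approach could look as follows (the authors fitted mixed models in SAS 9.2; the data frame and column names below are assumed, not the actual dataset):

import pandas as pd
import statsmodels.formula.api as smf

def fit_fecundity_model(df: pd.DataFrame):
    # Assumed columns: egg_number, morph ('A'/'G'), afreq, osr, mdens,
    # population, year, mating_status
    model = smf.ols(
        "egg_number ~ morph * afreq + morph * osr + morph * mdens"
        " + C(population) + C(year) + C(mating_status)",
        data=df,
    )
    return model.fit()

# A significant morph:afreq interaction would correspond to the negative
# frequency-dependent fecundity pattern reported in the Results below.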
Results With regard to our first hypothesis, both egg number (F(1,534) = 4.2; P = 0.04) and clutch mass (F(1,515) = 5.6; P = 0.02) varied with population morph frequency and in opposite directions for both female morphs (see Morph*Afreq, Table 2), which provides support for NFDS on female fecundity. Specifically, andromorph females store 8.7% more eggs and have 5.4% higher clutch mass when rare, compared with gynomorphs (Figure 1B,D). This fecundity advantage for andromorphs gradually switched with rising andromorph frequencies towards a reverse situation with higher fecundity for gynomorphs when rare (14.4% and 16.5% difference in egg number and egg mass, respectively). A morph-specific and frequency-dependent effect was not found for egg mass and relative body mass (Table 2; Figure 1A,C). However, all four investigated fecundity estimates significantly decreased with operational sex ratio (P ≤ 0.03; Figure 1E-H), which had similar effects on andromorphs and gynomorphs (see Morph*OSR, Table 2). Male density, on the other hand, had no effect on either overall fecundity measure, neither as a main effect (P ≥ 0.40), nor as an interaction with female morph (P ≥ 0.16), see Table 2. As body size may vary with latitude and therefore potentially may influence egg number, we performed separate analyses which also included wing length. However, none of these additional analyses altered the outcome of the analyses presented in Table 2. Finally, a negative correlation was observed between egg mass and egg number (F(1,518) = 10.2, P = 0.002). This reproductive trade-off was observed within all populations, except in Quebec City (see Additional file 2). Interestingly, our results also show a trade-off across populations, in which females from different populations seem to invest more into either egg number or egg mass (Figure 2). However, the resource allocation towards these traits did not differ between female morphs (egg mass*morph: F(1,516) = 0.43, P = 0.51; Figure 2). Finally, when controlling for spatial and temporal variation, female morphs did not differ in egg number (F(1,537) = 0.08, P = 0.78) or egg mass (F(1,518) = 0.99, P = 0.32). Discussion Our study provides some of the first empirical evidence for NFDS on female fecundity in natural conditions and as such provides an important key to understanding the maintenance of intra-sexual phenotypic and genetic variation in this species. Our conclusion was influenced by our observation of a significant inverse relationship between all four of our fecundity measures and relative male density (OSR) as an index of male harassment rate. Indeed, obtrusive males may reduce female fitness in a variety of ways ranging from physical damage to inhibiting foraging success, resulting in suboptimal fecundity; e.g. bees, damselflies. Thus our reported relationships are consistent with the idea that male sexual harassment comes with a fitness cost in females and therefore acts as a major selective force in this study system; see also. Second, we show that the rare female morph has a higher egg load relative to the common one. Clutch mass, as our comprehensive fecundity measure, shows a similar frequency-dependent relationship and is most likely driven by the pattern in egg load. Intriguingly, recent work with N. irene, involving exactly the same populations and the same years as the current study, clearly indicated that males prefer to mate with the most common female morph.
All of the above observations collectively support our first hypothesis that positive frequency-dependent male harassment translates into the currently presented negative frequency-dependent patterns in female morph fecundity. Together with the work on polymorphic lizards, diving beetles and other damselfly species, our work emphasises the importance of costly frequency-dependent social interactions as a balancing mechanism to explain intraspecific polymorphisms. Theoretical and empirical studies indicate that in the absence of other mechanisms, NFDS could over generations lead to limited temporal frequency fluctuations along an equilibrium frequency; e.g. fishes, damselflies. Morph frequencies vary little over the years in N. irene (0-25%) compared to the very large spatial variation (0 to ~100%, AI unpublished results). In fact, such large spatial variation has now been reported repeatedly in polymorphic damselflies and often resembles a geographical cline (Ischnura elegans, I. senegalensis, N. irene, Megalagrion calliphya). A single equilibrium frequency is thus clearly not reached. This may indicate that besides NFDS operating within populations, additional mechanisms may influence the currently observed population morph frequencies. An obvious suggestion is that divergent selection or gene-by-environment interactions affect the precise equilibrium, favouring certain morphs under a given set of ecological conditions such as particular densities of con- and heterospecific damselflies, differences in climate, predation rate or parasite load among populations. [Table 2 note: Explanatory variables include female morph frequencies (Afreq), male density (Mdens) and operational sex ratio (OSR). All four analyses are controlled for variation among populations, years and mating status (see methods). The asterisks (*) indicate tested interactions between two main effects. Numerator degrees of freedom is 1 in all cases; DF in the table refers to the denominator degrees of freedom. Note that egg mass, egg number and relative clutch mass were measured in two successive years and body mass in three subsequent years. Significant results are indicated in bold.] In addition, historical and present-day stochastic mechanisms have been proposed as well to explain the observed large geographic frequency variation, at least in some parts of a species' distribution range. Thus, although NFDS on female fecundity appears to be a key balancing mechanism operating within populations, the importance of additional mechanisms is still under debate and should deserve more attention in future research to explain this natural phenomenon thoroughly. Interestingly, the fecundity advantage for andromorphs is much smaller when rare compared to gynomorphs, indicating an asymmetry in this relationship. This is surprising since it can be expected that andromorphs experience lower harassment rates and associated costs due to their male-like appearance, especially in populations where they are the rare morph. Previous work further indicated that female phenotypic appearance varies with the population morph frequency in this system. Indeed, andromorphs differ overall in body size and shape from gynomorphs. However, when common, andromorphs resemble the smaller conspecific male more closely than gynomorphs, which is consistent with theory explaining imperfect mimicry.
In addition to direct effects of male harassment (discussed above), it is likely that female fecundity is also shaped by morphological constraints, with smaller or slender females being limited in the number of eggs they can store; see also Ischnura elegans. Thus, although the male-like appearance of andromorphs may be beneficial in terms of a lower detection rate by harassing males, mimicry may on the other hand come along with a fecundity cost due to allometric associations. The interplay between costs and benefits of mimicry may perhaps explain the asymmetry in our fecundity relationship. Taken altogether, present-day patterns in female morph fecundity may relate to direct effects of costly male harassment, perhaps combined with indirect consequences of long-term selection that altered female morph morphology. Our work also highlights a resource allocation trade-off, in which females invest in either quality (egg mass) or quantity (egg number). This trade-off not only holds within populations, but also differs among populations and may perhaps relate to the inhospitable aquatic environment of the larvae in terms of predation. Odonate larvae are generalist predators that interact aggressively towards conspecifics or heterospecifics, leading to high rates of cannibalism. Hatching from heavier eggs, resulting in larger larvae, may thus be more advantageous in this (geographically heterogeneous) competitive environment. Furthermore, we observed increasing egg mass and decreasing egg number towards Northern latitudes. It has been suggested before that body size may increase with latitude due to temperature-related physical constraints of growth and development (Bergmann's rule), a general pattern which is also confirmed in damselflies. Taken together, investment in heavier eggs may depend on several ecological variables, including abiotic conditions and the degree of the competitive environment, but will most likely come at the expense of producing numerous eggs. [Figure 1 (on previous page): Variation in fecundity traits of the damselfly N. irene. Panels A-D: Graphical interpretation of the female morph by andromorph frequency interaction. Relative fecundity for each measurement is calculated in each population and year as RF_A = ln(F_A/F_G) for andromorphs and RF_G = ln(F_G/F_A) for gynomorphs. Values above and below 0 indicate, respectively, higher fecundity for andromorph (black symbols, solid line) relative to gynomorph (white symbols, dashed line) females. Panels E-H: Decrease in fecundity estimates with operational sex ratio (OSR). Mean (± 1 SE) values are given for each population and each year. Black circles, white circles and gray triangles represent fecundity in the respective successive years (Y1, Y2 and Y3). Regression curves are based on the parameter estimates of the ANCOVA models, thus including the geographical and temporal dependency of the data points.] This reproductive trade-off, however, did not differ between morphs, which contrasts with hypothesis two and earlier observations in polymorphic birds and lizards. These former studies indicated that female morphs may differently allocate resources among life-history traits. The current result also contrasts with recently reported alternative physiological optima in N. irene. Specifically, this latter study showed that andromorphs, compared to gynomorphs, invest more in traits related to immune function and less in flight muscles.
In fact, numerous studies with female polymorphic damselflies investigated different life-history components, physiological traits and morph-specific behavioural strategies to cope with male harassment. Mixed results have been reported previously, but often in favour of the hypothesis that female damselfly morphs represent ARTs in order to escape from excessive male harassment. The combination of traits that differ between the female morphs may be context-dependent and/or species-specific. Therefore, we suggest that future studies on ARTs focus on an integrated set of behavioural and fitness-related traits in different social contexts, and perhaps quantify life-time reproductive success by means of molecular tools. Conclusion In conclusion, we provide some of the first empirical evidence collected in natural populations with extreme variation in morph frequencies demonstrating NFDS on female fecundity. As fecundity is a key component of fitness, our results explain an important part of the mechanism maintaining intra-sexual polymorphisms. Frequency-dependent male sexual harassment may well be the driving force of this pattern in our study system, either directly or indirectly affecting egg number and clutch mass of female morphs in a frequency-dependent way. We also show that female morphs do not differently allocate resources into quantitative or qualitative fecundity traits, although this does not exclude the potential for alternative reproductive tactics in this system. As Oliveira et al. argue, female ARTs are largely understudied and should deserve more attention in future research. Additional files Additional file 1: Detailed overview of sample dates (Nsw), key estimates of the social environment and female morph fecundity. Additional file 2: Quantity-quality trade-off within each study population.
|
As is known in the art, there is a desire for wideband, high-power (>25 dBm) silicon-based amplifiers in microwave systems. However, high-speed silicon-based technologies typically incorporate CMOS or HBT devices with modest breakdown voltage levels (BVCEO <4 V), which results in high operating currents in the amplifier core. These high dc and ac currents used in the amplifier core require large-width metal routing to satisfy electromigration concerns and result in lossy passive structures. The ability to distribute the dc current amongst various amplifier stages and de-couple the dc amplifier current from the ac output current enables wideband, high output power silicon-based amplifier designs.
As is also known in the art, there are numerous circuit topologies used in silicon-based power amplifiers. Most designs consist of a cascode structure, using bipolar, silicon-germanium heterojunction bipolar, or CMOS devices for the active transistors, with standard L-C matching structures to deliver the maximum output power for a given source and load impedance as shown in FIG. 1. Depending on the application, packaging, and system requirements, the design can be single-ended or differential, with various biasing schemes (e.g. class A, AB, B, E, F, etc.) for the desired linearity, efficiency, and gain. In all of these design topologies the circuit is relatively narrowband, due to the resonant behavior of the matching structures.
As is also known in the art, distributed amplifiers have been shown in silicon technologies, but are generally designed in a single-ended fashion. Distributed amplifiers increase the gain-bandwidth product of a given amplifier stage by connecting several amplifier stages in series via transmission line elements. The design can also be realized by using discrete inductors and capacitors to act as artificial transmission line elements, as seen in FIG. 2. The signal propagates down the input of the distributed amplifier, being amplified by each discrete transistor, until it reaches the termination resistor. The output of each discrete transistor will then be combined in the collector output network (assuming the phase velocities of the input and output L-C networks are identical) to create the final broadband output.
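To make the artificial transmission line concrete, the image impedance and Bragg cutoff of a single L-C section can be estimated as follows (a back-of-envelope sketch; the element values are illustrative, not taken from any cited design):

import math

L = 1.0e-9   # series inductance per section, in henries (assumed)
C = 0.4e-12  # shunt capacitance per section, in farads (assumed)

Z0 = math.sqrt(L / C)                    # image impedance of the L-C ladder
fc = 1.0 / (math.pi * math.sqrt(L * C))  # Bragg cutoff of the artificial line

print(round(Z0))        # ~50 ohms
print(round(fc / 1e9))  # ~16 GHz; the usable bandwidth lies below this cutoff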
These designs achieve a wideband of operation, but at modest output power levels (˜20 dBm) over the band of operation. This output power limitation is typically due to the implementation of these amplifiers, where all of the output current (or collector current) must flow through each matching inductor. This current includes the dc current for each device, resulting in significantly high dc current flowing through the inductors located near the output. In order to accommodate this large dc current, the inductors near the output must be very wide to avoid electromigration concerns. Obviously a dc blocking capacitor could be inserted between the stages, but this would then require an additional biasing inductor at the collector of each stage, which would also degrade the circuit's performance.
As is also known in the art, the use of transformer-coupled silicon-based power amplifiers has also been demonstrated in numerous works, where the output match of several amplifiers is combined via transformer elements on the silicon die. Transformer-coupled amplifiers, or amplifiers having spatially distributed transformers, use monolithic transformer structures (typically intertwined inductors) to combine the output of several discrete amplifiers, as shown in FIG. 3. In this case, the input signal is split evenly amongst the amplifying transistors, with each transistor receiving the same phase and amplitude. The output of each transistor will also have the same magnitude and phase, allowing them to be summed coherently. This summation of signals will result in a higher output power for the entire amplifier than what could be achieved with a single amplifying element.
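The power arithmetic behind such combining is straightforward: N identical, in-phase stages ideally deliver N times the per-stage power, i.e. a 10·log10(N) dB increase (losses in the transformers are ignored in this sketch):

import math

def combined_dbm(p_unit_dbm, n):
    # Ideal coherent combining of n in-phase stages, each at p_unit_dbm.
    return p_unit_dbm + 10.0 * math.log10(n)

print(combined_dbm(20.0, 4))  # four 20 dBm stages -> about 26 dBm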
Although this topology does enable higher output power, it still maintains a narrow band frequency response. Since all of the inputs and output of the circuit are in-phase and have identical matching structures, the narrow-band shape of the transfer function will also be identical resulting in an overall narrow band response.
Further, the concept of transformer-coupled silicon-based power amplifiers has also been demonstrated in numerous works: P. Haldi, D. Chowdhury, P. Reynaert, L. Gang, and A. Niknejad, “A 5.8 GHz 1 V Linear Power Amplifier Using a Novel On-Chip Transformer Power Combiner in Standard 90 nm CMOS,” IEEE Journal of Solid State Circuits, vol. 43, no. 5, pp. 1054-1063, May 2008; I. Aoki, S. D. Kee, D. B Rutledge, and A. Hajimiri, “Distributed active transformer-a new power-combining and impedance-transformation technique,” IEEE Transactions on Microwave Theory and Techniques, Vol. 50, pp. 316-331, January 2002, where the output match of several amplifiers is combined via transformer elements on the silicon die. Although this topology does enable higher output power, it still maintains a narrow band frequency response. Since all of the inputs and output of the circuit are in-phase and have identical matching structures, the narrow-band shape of the transfer function will also be identical resulting in an overall narrow band response.
Distributed amplifiers have also been shown in silicon technologies, but are generally designed in a single-ended fashion, as demonstrated in B. Sewiolo, D. Kissinger, G. Fischer, and R. Weigel, “A High-Gain High-Linearity Distributed Amplifier for Ultra-Wideband-Applications Using a Low Cost SiGe BiCMOS Technology,” IEEE 10th Annual Wireless and Microwave Technology Conference, 2009, pp. 1-4, 2009. These designs achieve a wideband of operation, but at modest output power levels (˜20 dBm) over the band of operation. This output power limitation is typically due to the implementation of these amplifiers. As seen in FIG. 2, all of the output current (or collector current) must flow through each matching inductor. This current includes the dc current for each device, resulting in significantly high dc current flowing through the inductors located near the output. In order to accommodate this large dc current, the inductors near the output must be very wide to avoid electromigration concerns. Obviously a dc blocking capacitor could be inserted between the stages, but this would then require an additional biasing inductor at the collector of each stage, which would also degrade the circuit's performance.
|
The representation of numbers is assumed to interact with two visuo-motor functions. On the one hand, following the observation that children often use their fingers to learn the counting sequence and basic arithmetic operations, numbers were assumed to interact with finger movements. On the other hand, following the recurrent observation that small numbers are preferentially associated with the left side of space while large numbers are preferentially associated with the right side of space, numbers were assumed to interact with space. In this paper, we will examine the role vision plays in shaping these interactions between fingers and numbers and between numbers and space. To this aim, different experiments with blind and sighted people will be detailed. The role of developmental vision on the development of the finger-numeral representation will first be examined. Then we will investigate the influence of developmental vision on the visuo-spatial representation of numbers.
|
Meningitis of the newborn is often accompanied by ventriculitis. This may be one of the reasons for the still unfavourable prognosis of neonatal meningitis. In a few cases we achieved sterile ventricular fluid with additional intraventricular application of antibiotics. An examination of ventricular fluids should be performed when there is the slightest suspicion of ventriculitis. Early institution of additional intraventricular antibiotic therapy, as well as some of the newer antibiotics (e.g. cefotaxime), seems to produce better results. We observed the following complications of meningoventriculitis: hydrocephalus, porencephaly, and multicystic encephalopathy.
|
from fractal import Julia

# Render a 500x500 Julia set for c = -0.12 + 0.76i.
ju = Julia([500, 500])
ju.setC(-0.12 + 0.76j)   # complex constant c in the iteration z -> z^2 + c
ju.doJulia(500)          # run up to 500 iterations per pixel
ju.wait()                # presumably blocks until the window is closed
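# The 'fractal' module used above is not shown. Purely as an illustration
# (grid bounds and names are assumed), the escape-time iteration that
# doJulia(max_iter) presumably performs looks like this:
import numpy as np

def julia_counts(size=(500, 500), c=-0.12 + 0.76j, max_iter=500, bound=2.0):
    ys, xs = size
    x = np.linspace(-1.5, 1.5, xs)
    y = np.linspace(-1.5, 1.5, ys)
    z = x[None, :] + 1j * y[:, None]   # grid of starting points z0
    counts = np.zeros(z.shape, dtype=int)
    for _ in range(max_iter):
        mask = np.abs(z) <= bound      # points that have not escaped yet
        z[mask] = z[mask] ** 2 + c     # iterate z <- z^2 + c on live points
        counts += mask                 # escape-time counter per pixel
    return counts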
|
package gospec
import "testing"
// func (s *Spec) Identical(v, t string) bool
// 1. type A1 = B: A1 and B are identical types (alias declaration)
func TestIdentical01(t *testing.T) {
s := NewSpec(`type B int; type A1 = B`)
if !s.Identical("A1", "B") {
t.Error(`test rule failed`)
}
}
// 2. type A2 B: A2 and B are different types (defined type declaration)
func TestIdentical02(t *testing.T) {
s := NewSpec(`type B int; type A2 B`)
if s.Identical("A2", "B") {
t.Error(`test rule failed`)
}
}
// 3.1 Arrays:
// Two array types are identical if they have identical element types and the same array length.
func TestIdentical03(t *testing.T) {
s := NewSpec(`var a311, a312 [2]int; var a313 [2]int64`)
if !s.Identical("a311", "a312") {
t.Error(`test rule failed`)
}
if s.Identical("a311", "a313") {
t.Error(`test rule failed`)
}
}
// 3.2 Slices:
// Two slice types are identical if they have identical element types.
func TestIdentical04(t *testing.T) {
s := NewSpec(`var a321, a322 []int; var a323 []int64`)
if !s.Identical("a321", "a322") {
t.Error(`test rule failed`)
}
if s.Identical("a321", "a323") {
t.Error(`test rule failed`)
}
}
// 3.3 Structs: identical if they have the same sequence of fields and corresponding fields have the same names, types and tags.
// Note: non-exported fields of structs from different packages are always different.
// In short, two such structs from different packages are identical only if all of their fields are exported; any non-exported field makes the two types different.
// Two struct types are identical if they have the same sequence of fields, and if corresponding fields have the same names, and identical types, and identical tags. Non-exported field names from different packages are always different.
func TestIdentical05(t *testing.T) {
s := NewSpec(`
type B int
type A1 = B
var a331 struct {
x int "one"
Y string
c []A1 ` + "`B slice`" + `
}
var a332 struct {
x int "one"
Y string
c []B ` + "`B slice`" + `
}
`)
if !s.Identical("a331", "a332") {
t.Error(`test rule failed`)
}
specOfStruct1 := NewSpec(`
package PA
var a333 struct {
X int "one"
Y string
C []int ` + "`int slice`" + `
}
var a335 struct {
X int "one"
y string
C []int ` + "`int slice`" + `
}
`)
specOfStruct2 := NewSpec(`
package QA
var a334 struct {
X int "one"
Y string
C []int ` + "`int slice`" + `
}
var a336 struct {
X int "one"
y string
C []int ` + "`int slice`" + `
}
`)
// Identical(v, t types.Object) bool
if !Identical(specOfStruct1.MustGetValidTypeObject("a333"), specOfStruct2.MustGetValidTypeObject("a334")) {
t.Error(`test rule failed`)
}
if Identical(specOfStruct1.MustGetValidTypeObject("a335"), specOfStruct2.MustGetValidTypeObject("a336")) {
t.Error(`test rule failed`)
}
}
// 3.4 Pointers:
// Two pointer types are identical if they have identical base types.
func TestIdentical06(t *testing.T) {
s := NewSpec(`
type B int
type A1 = B
var a341 *B
var a342 *A1
`)
if !s.Identical("a341", "a342") {
t.Error(`test rule failed`)
}
}
// 3.5 Functions:
// Two function types are identical if they have the same number of parameters and result values, corresponding parameter and result types are identical, and either both functions are variadic or neither is. Parameter and result names are not required to match.
func TestIdentical07(t *testing.T) {
s := NewSpec(`
var a351 func(a, b int, z float64, opt ...interface{}) (success bool)
var a352 func(x int, y int, z float64, too ...interface{}) (ok bool)
`)
if !s.Identical("a351", "a352") {
t.Error(`test rule failed`)
}
}
// 3.6 Interfaces:
// Two interface types are identical if they have the same set of methods with the same names and identical function types. Non-exported method names from different packages are always different. The order of the methods is irrelevant.
func TestIdentical08(t *testing.T) {
s := NewSpec(`
type B int
type A1 = B
var a361 interface {
X() int
y(string)
c() []A1
}
var a362 interface {
c() []B
X() int
y(string)
}
`)
if !s.Identical("a361", "a362") {
t.Error(`test rule failed`)
}
specOfInterface1 := NewSpec(`
package PA
var a363 interface {
X() int
Y(string)
C() []int
}
var a365 interface {
X() int
Y(string)
c() []int
}
`)
specOfInterface2 := NewSpec(`
package QA
var a364 interface {
C() []int
X() int
Y(string)
}
var a366 interface {
X() int
c() []int
Y(string)
}
`)
// Identical(v, t types.Object) bool
if !Identical(specOfInterface1.MustGetValidTypeObject("a363"), specOfInterface2.MustGetValidTypeObject("a364")) {
t.Error(`test rule failed`)
}
// Identical(v, t types.Object) bool
if Identical(specOfInterface1.MustGetValidTypeObject("a365"), specOfInterface2.MustGetValidTypeObject("a366")) {
t.Error(`test rule failed`)
}
}
// 3.7 Maps:
// Two map types are identical if they have identical key and element types.
func TestIdentical09(t *testing.T) {
s := NewSpec(`
type B int
type A1 = B
var a371 map[A1]string
var a372 map[B]string
`)
if !s.Identical("a371", "a372") {
t.Error(`test rule failed`)
}
}
// 3.8 Channels:
// Two channel types are identical if they have identical element types and the same direction.
func TestIdentical10(t *testing.T) {
s := NewSpec(`
type B int
type A1 = B
var a381 chan<- A1
var a382 chan<- B
`)
if !s.Identical("a381", "a382") {
t.Error(`test rule failed`)
}
}
// 3.9 Basic types: simple types such as int and string are identical if their type literals are the same
func TestIdentical11(t *testing.T) {
s := NewSpec(`var a391 byte; var a392 byte`)
if !s.Identical("a391", "a392") {
t.Error(`test rule failed`)
}
}
// Identical(v, t types.Type) bool
// or
// Identical(v, t types.Object) bool
// or
// Identical((code, v, t string) bool
func TestIdentical12(t *testing.T) {
s := NewSpec(`type B int; type A1 = B`)
if !Identical(s.MustGetValidType("A1"), s.MustGetValidType("B")) {
t.Error(`test rule failed`)
}
if !Identical(s.MustGetValidTypeObject("A1"), s.MustGetValidTypeObject("B")) {
t.Error(`test rule failed`)
}
if !Identical(`type B int; type A1 = B`, "A1", "B") {
t.Error(`test rule failed`)
}
}
// func (s *Spec) IdenticalIgnoreTags(v, t string) bool
// IdenticalIgnoreTags(v, t types.Type) bool
// or
// IdenticalIgnoreTags(v, t types.Object) bool
// or
// IdenticalIgnoreTags((code, v, t string) bool
// Ignore struct tags, then check whether the struct types are identical
func TestIdentical13(t *testing.T) {
code := `
type U = struct {
Name string
Address *struct {
Street string
City string
}
}
type V = struct {
Name string ` + "`" + `json:"name"` + "`" + `
Address *struct {
Street string ` + "`" + `json:"street"` + "`" + `
City string ` + "`" + `json:"city"` + "`" + `
} ` + "`" + `json:"address"` + "`" + `
}
`
s := NewSpec(code)
if !s.IdenticalIgnoreTags("U", "V") {
t.Error(`test rule failed`)
}
if !IdenticalIgnoreTags(s.MustGetValidType("U"), s.MustGetValidType("V")) {
t.Error(`test rule failed`)
}
if !IdenticalIgnoreTags(s.MustGetValidTypeObject("U"), s.MustGetValidTypeObject("V")) {
t.Error(`test rule failed`)
}
if !IdenticalIgnoreTags(code, "U", "V") {
t.Error(`test rule failed`)
}
}
|
/**
* Majority element
* <a href="https://leetcode-cn.com/problems/majority-element/">Majority element</a>
*
* @author <a href="mailto:[email protected]">xhh</a>
* @date 2020/11/29
*/
public class Solution {
    /**
     * Boyer-Moore majority vote: keep a candidate and a counter; the counter
     * is incremented on matches and decremented on mismatches, and the
     * candidate is replaced whenever the counter drops to zero. The majority
     * element (appearing more than n/2 times) always survives this process.
     */
    public int majorityElement(int[] nums) {
        int num = nums[0], vote = 0;
        for (int i = 0; i < nums.length; i++) {
            if (vote == 0) {
                num = nums[i];  // adopt a new candidate
            }
            vote = nums[i] == num ? vote + 1 : vote - 1;
        }
        return num;
    }
}
|
Hepcidin Levels and Their Determinants in Different Types of Myelodysplastic Syndromes Iron overload may represent an additional clinical problem in patients with Myelodysplastic Syndromes (MDS), with recent data suggesting prognostic implications. Beyond red blood cell transfusions, dysregulation of hepcidin, the key iron hormone, may play a role, but studies until now have been hampered by technical problems. Using a recently validated assay, we measured serum hepcidin in 113 patients with different MDS subtypes. Mean hepcidin levels were consistently heterogeneous across different MDS subtypes, with the lowest levels in refractory anemia with ringed sideroblasts (RARS, 1.43 nM) and the highest in refractory anemia with excess blasts (RAEB, 11.3 nM) or in chronic myelomonocytic leukemia (CMML, 10.04 nM) (P=0.003 by ANOVA). MDS subtypes remained significant predictors of hepcidin in multivariate analyses adjusted for ferritin and transfusion history. Consistent with current knowledge on hepcidin action/regulation, RARS patients had the highest levels of toxic non-transferrin-bound iron, while RAEB and CMML patients had substantial elevation of C-Reactive Protein as compared to other MDS subtypes, and showed loss of homeostatic regulation by iron. Growth differentiation factor 15 did not appear as a primary hepcidin regulator in this series. If confirmed, these results may help to calibrate future treatments with chelating agents and/or hepcidin modulators in MDS patients. Introduction Myelodysplastic syndromes (MDS) are a heterogeneous group of clonal stem cell disorders characterized by dysplastic and ineffective hematopoiesis, peripheral cytopenias often including severe anemia, and a variable risk of progression to acute myelogenous leukemia (AML). Iron overload frequently occurs in MDS patients, with recent data suggesting an impact on both overall and leukemia-free survival. Though prolonged red blood cell (RBC) transfusion therapy appears the main contributor, many patients appear to develop iron overload at an early stage of the disease, before the onset of transfusions. It has been postulated that an altered production of hepcidin, the recently discovered key hormone regulating iron homeostasis, may play a role in this regard. Hepcidin is a small peptide that acts by binding to ferroportin, its receptor highly expressed on the membrane of cells involved in iron handling, like iron-absorbing duodenal enterocytes and macrophages recycling senescent erythrocytes. Ferroportin, the only known cellular iron exporter in vertebrates, is internalized and degraded after hepcidin binding, which results in blocking both dietary iron absorption and the release of iron from macrophages. The regulation of hepcidin is complex and mediated by different stimuli with opposing effects. Increased hepatic and plasma iron homeostatically induce hepcidin synthesis, as does inflammation, while erythropoietic activity suppresses the hormone production. The latter serves to increase iron supply for erythropoiesis through enhanced iron absorption and release from macrophages. Such an effect becomes particularly important in diseases with ineffective erythropoiesis, where erythrocyte precursors massively expand but undergo apoptosis rather than maturing. Growth differentiation factor 15 (GDF-15), a protein produced by erythroid precursors, has been proposed to be a major hepcidin suppressor in β-thalassemia, but data in other conditions with ineffective erythropoiesis are less conclusive.
Until recently, clinical studies on hepcidin in humans have been hampered by problems in the development of reliable assays. Regarding MDS, only scanty and conflicting data based on first-generation semi-quantitative measurement of urinary hepcidin have been reported. We used a recently validated and improved mass-spectrometry-based method to analyze serum hepcidin levels in MDS patients, also aiming to elucidate its determinants.

Patients
Patients and controls were enrolled at Internal Medicine and Hematology Units in Verona (Azienda Integrata Ospedaliera-Universitaria), Florence (Ospedale Careggi) and Milan (Policlinico), all in Italy. One hundred and thirteen MDS patients (mean age 72.8±9.2 years; 68.1% males) were included. To be enrolled in this study, patients had to be previously untreated or treated only with transfusions. Patients treated at any time with iron chelating agents were excluded. After careful evaluation of transfusion history, patients were defined as transfusion-dependent or transfusion-independent according to International Working Group (IWG) criteria. Reliable data in this sense were available for 107/113 patients. MDS subtypes were classified according to the World Health Organization (WHO) classification, and stratified for prognosis according to the International Prognostic Scoring System (IPSS). For comparisons with respect to serum hepcidin levels and the hepcidin/ferritin ratio (see below), a group of fifty-four healthy individuals (61.1% males) with a rigorous definition of normal iron status, as previously described in detail, was used as controls. The protocol of this observational study was approved by the Ethical Committee of the Azienda Integrata Ospedaliera Universitaria of Verona, and all subjects gave written informed consent.

Biochemical Assays
Blood samples were obtained early in the morning after overnight fasting and immediately centrifuged, and serum was stored at −80°C in aliquots to avoid multiple freeze-thaw cycles. Serum iron, transferrin, ferritin, and C-Reactive Protein (CRP) were measured using routine standard laboratory assays. Serum hepcidin was measured by Surface-Enhanced Laser Desorption/Ionization Time-Of-Flight Mass Spectrometry (SELDI-TOF MS), using a synthetic hepcidin analogue (Hepcidin-24, Peptides International, Louisville, KY) as an internal standard, as previously described, with recent technical advances. The ratio between hepcidin and ferritin, which reflects the homeostatic ability of hepcidin to increase in response to increased body iron, was calculated as previously described. Serum non-transferrin-bound iron (NTBI) was evaluated by a chromatographic method as previously described. Briefly, 450 μl of serum was added to 50 μl of nitrilotriacetic acid 800 mM (pH 7.0) and allowed to stand for 15 min. The solution was ultrafiltered using an Amicon Centricon 30 microconcentrator, and the ultrafiltrate (20 μl) was injected directly into the high performance liquid chromatography (HPLC) system with a titanium pump module (Perkin Elmer S200, Boston, MA, USA). The HPLC column used for the determination of NTBI had the following characteristics: Nova-Pak C18, 4 μm, 3.9×150 mm, reversed-phase column produced by Waters (Wexford, Ireland). The chromatographic conditions were the following: flow rate 1.5 ml/min; isocratic mobile phase containing 20% acetonitrile and 80% sodium phosphate buffer, 5 mM (pH 7.0), containing 3 mM CP22; visible detection, 450 nm.
A standard curve was generated by injecting different concentrations of iron prepared in a 100-fold excess of NTA. The standards were routinely run at 0 to 10 μM, although absorbance was linear up to 40 μM. Under these conditions, the 0 μM standard corresponds to 80 μM of NTA. The addition of 80 μM of NTA to the serum of normal individuals always results in negative NTBI values. Normal individuals always have negative NTBI values because the blank is formed by water and nitrilotriacetic acid; water per se contains small amounts of iron that is not bound by transferrin, whereas in samples, transferrin, which is not completely saturated, captures some iron from the iron-nitrilotriacetic acid complex.

Statistical analyses
All calculations were performed using SPSS 17.0 software (SPSS Inc., Chicago, IL, USA). As many of the continuous variables of interest, including serum hepcidin, ferritin, GDF-15, and EPO, showed a non-Gaussian distribution, their values were log-transformed and expressed as geometric means with 95% confidence intervals (CIs). Quantitative data were analyzed using Student's t test or by analysis of variance (ANOVA) with polynomial contrast for linear trend, when appropriate. Qualitative data were analyzed with the χ² test and with χ² analysis for linear trend, when appropriate. Correlations between quantitative variables were assessed using Pearson's coefficient. Independent determinants of serum hepcidin levels were assessed at first in a linear regression model estimating β-coefficients, including all the variables significantly correlated with hepcidin at univariate analysis (e.g. ferritin, CRP), as well as age, gender, and presence/absence of the diagnosis of MDS. Thereafter, aiming to evaluate the potential heterogeneity in iron/hepcidin homeostasis among the different types of MDS, the latter were codified as dummy variables in linear regression models, adjusted at first for sex, age, history of blood transfusion, and ferritin levels (model 1), then adding CRP levels (model 2). To evaluate the different degree of correlation between hepcidin and ferritin among the different types of MDS, data were analyzed in a general linear model by means of the F test for slopes. Two-sided p values <0.05 were considered statistically significant.

Table 1 shows the main characteristics and iron biochemical parameters, including serum hepcidin, of the whole MDS population as compared with the control group. Controls were matched for gender (predominantly males), but were significantly younger than MDS patients. Recent data from our laboratory in a large population study have shown that serum hepcidin levels are consistently stable in males over a wide age range (18-90 years). Nevertheless, all multivariate analyses on serum hepcidin levels were adjusted for gender. As shown in Table 1, biochemical markers of iron overload (namely serum ferritin and transferrin saturation), as well as CRP levels, were significantly higher in MDS patients as compared to controls. In the whole MDS population serum hepcidin levels were slightly higher than in controls, but this difference did not reach statistical significance. Nevertheless, the hepcidin/ferritin ratio was significantly lower in the whole MDS population as compared to controls. As for serum GDF-15, we could not directly measure this protein in controls, but levels in MDS patients (4,422 pg/ml, 95% CIs 3,591-5,445) were markedly higher than the manufacturer's reference range (641 pg/ml, 95% CIs 401-881).
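As a concrete illustration of the regression strategy described under Statistical analyses, here is a minimal, hypothetical Python sketch of "model 1" and "model 2": log-transformed hepcidin regressed on covariates, with WHO subtypes dummy-coded against an RA reference. The data file and column names are assumptions for illustration, not the study's actual data.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("mds_cohort.csv")              # hypothetical cohort file
df["log_hepcidin"] = np.log(df["hepcidin_nM"])  # log-transform skewed variables
df["log_ferritin"] = np.log(df["ferritin_ug_L"])
df["log_crp"] = np.log(df["crp_mg_L"])

# Model 1: sex, age, transfusion history, ferritin, and dummy-coded WHO subtypes;
# C(..., Treatment('RA')) codes each subtype against the RA reference.
model1 = smf.ols(
    "log_hepcidin ~ sex + age + transfused + log_ferritin"
    " + C(subtype, Treatment('RA'))",
    data=df,
).fit()

# Model 2 simply adds CRP as a further covariate.
model2 = smf.ols(
    "log_hepcidin ~ sex + age + transfused + log_ferritin + log_crp"
    " + C(subtype, Treatment('RA'))",
    data=df,
).fit()
print(model1.summary())
print(model2.summary())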
The same was true also for serum NTBI and EPO levels (shown in Table S1).

Transfusion-dependent versus transfusion-independent MDS patients
Biochemical parameters of MDS patients stratified according to the presence or absence of transfusion dependence are shown in Table S1. As expected, transfusion-dependent (TD) MDS patients had significantly higher levels of serum ferritin and transferrin saturation as compared to transfusion-independent patients. Notably, serum hepcidin levels were significantly higher in TD MDS patients as compared to either controls or non-TD patients, but the hepcidin/ferritin ratio was similar in TD and non-TD MDS patients. TD MDS patients also had significantly higher levels of serum NTBI, GDF-15, and EPO as compared to non-TD patients.

Table 2 shows the clinical and biochemical characteristics of the MDS patients stratified according to the WHO classification. Serum hepcidin levels showed a significant variability across the different MDS subtypes (P = 0.003 by ANOVA), with the lowest values in patients with refractory anemia with ringed sideroblasts (RARS) and the highest values in subjects with refractory anemia with excess blasts (RAEB) and in patients with chronic myelomonocytic leukemia (CMML). The hepcidin/ferritin ratio (also shown in Figure S1) was also markedly heterogeneous across the different MDS subtypes. It was remarkably lower not only in RARS but also in patients with the 5q- syndrome, while CMML patients showed the highest values (P = 0.003 by ANOVA). Of note, patients with RARS and with the 5q- syndrome also appeared to be the most iron overloaded, as suggested by the trend toward higher levels not only of serum ferritin, but also of serum transferrin saturation (P = 0.048 by ANOVA) and serum NTBI. As for serum GDF-15 levels, they were consistently homogeneous across the different MDS subtypes (P = 0.976 by ANOVA). GDF-15 did not correlate at all with hepcidin levels (r = −0.07; P = 0.48). On the contrary, CRP levels were significantly heterogeneous across different MDS subtypes (P = 0.008 by ANOVA), with the highest values in patients with RAEB, CMML, and in those unclassified. Table S2 shows the biochemical parameters in MDS patients stratified according to the IPSS.

Homeostatic control of hepcidin by iron
To explore the degree of preservation of the homeostatic control of hepcidin by iron, we performed a set of general linear models. As shown in Figure 1A, when considering the MDS population as a whole, the positive correlation between hepcidin and ferritin was relatively conserved, though the hepcidin/ferritin ratio was lower than in controls, suggesting a relatively blunted response. However, when MDS patients were stratified according to the different WHO subtypes, a marked heterogeneity of slopes was evident (Figure 1B). This suggested the relative preservation of the homeostatic control by iron in certain MDS subtypes like RA (Figure 2A), RARS and the 5q- syndrome (Figure 2B), as well as the near-complete loss of this mechanism in other MDS subtypes like RAEB and CMML (Figure 2C). Noteworthy, comparing the extremes of such models, i.e. RAEB and CMML versus RARS and 5q-, a significant difference in hepcidin/ferritin slopes was found (F = 8.684; P = 0.005).

Hepcidin determinants in MDS patients
To evaluate the independent determinants of serum hepcidin levels in MDS patients, we performed multivariate linear regression models including possible confounders like age, gender, and transfusion history (Table 3).
Of note, serum ferritin levels remained significant predictors of serum hepcidin levels, as did MDS subtypes. More precisely, when compared with RA (considered as the reference subtype), RARS (with a negative β-coefficient), RAEB, and CMML (with a positive β-coefficient) were independent predictors of serum hepcidin levels (Table 3, model 1). When CRP levels were added to the model, they also were significant independent predictors of hepcidin levels along with ferritin, while the MDS subtypes with high CRP levels (RAEB and CMML) were no longer significant predictors, and the RARS subtype remained significantly associated with hepcidin (Table 3, model 2).

Discussion
Although the percentage of marrow blasts, cytogenetic abnormalities, and cytopenias remain the prognostic cornerstones in MDS, recent data point to iron overload as an important contributing factor. Transfusion dependency has been introduced in the WHO-based prognostic score, and serum ferritin levels have been associated with both overall and leukemia-free survival. Beyond the classic detrimental effect of cardiac siderosis, other iron overload-related mechanisms have been proposed, including an increased risk of infections, adverse effects on hematopoietic stem cell transplantation, and a pro-oxidative state promoting genomic instability and leukemic transformation. Thus, elucidating the pathophysiology of iron overload beyond the obvious role of RBC transfusions represents a relevant issue in MDS patients, with possible therapeutic implications in selecting those patients that may benefit the most from iron chelation therapy. As a general rule, an important factor in determining iron toxicity is the route by which the element enters the body, which in turn is unable to excrete excess iron. The parenteral route, i.e. through RBC transfusions, leads to prominent macrophage iron overload that tends to be better tolerated than the intestinal route, which leads to prominent overload in periportal hepatocytes and thereafter in other parenchymal cells. Given its pivotal role in orchestrating both iron absorption and recycling from macrophages, hepcidin has been an attractive candidate for studying perturbed iron homeostasis in MDS, but few and contradictory data have been available until now. Winder and colleagues studied 16 MDS patients (4 RA, 3 RARS, 3 RCMD, and 6 RAEB), 13 of them chronically transfused, and found undetectable or inappropriately low urinary hepcidin in most of them. These authors suggested that hepcidin suppression through increased erythropoietic drive, and the ensuing increased iron absorption, may be generalized phenomena in MDS. Murphy and colleagues were unable to confirm these data in 17 low-grade MDS patients (8 transfusion dependent and 7 treated with EPO), most of them showing normal, if not increased, urinary hepcidin levels. Besides the very limited patient series, both these studies suffered from methodological drawbacks, since they employed first-generation semi-quantitative assays of urinary hepcidin that have been abandoned because of insufficient precision. To the best of our knowledge, this is the largest study on hepcidin levels in MDS conducted so far. Moreover, it takes advantage of a validated quantitative MS-based assay, recently further improved.
Contrary to the prior hypothesis of a generalized hepcidin suppression, the main message from our data is that hepcidin production in MDS is consistently heterogeneous, a condition that appears to parallel the clinical and pathological heterogeneity of MDS itself. This is also in agreement with in vitro experiments showing a marked variability in the ability of sera from MDS patients to suppress hepcidin in a hepatocyte cell line. The spectrum of hepcidin levels varied broadly, from conditions with mean levels less than half of those in controls, like RARS, to others with mean levels more than twice those of controls, like RAEB and CMML (Table 2). As for the homeostatic control of hepcidin by iron, a similar heterogeneity was evident. Although the hepcidin/ferritin ratio showed a generalized trend toward a relatively inappropriate response, the homeostatic control by iron appeared relatively conserved in MDS subtypes generally considered at low risk (like RA, RARS and the 5q- syndrome), while it appeared almost completely lost in conditions with prominent dysmyelopoiesis like RAEB and CMML. Since multivariate analyses showed that CRP was also an independent determinant of hepcidin levels in MDS along with ferritin and MDS subtypes, we could hypothesize that the observed hepcidin heterogeneity is the result of the relative strength of opposing stimuli in different clinical and pathological conditions (Figure 3). The main actors in this sense may be the suppressing effect of ineffective erythropoiesis, variably counterbalanced by the stimulating effects of either increased iron stores or cytokines, of which CRP is a surrogate measure. RARS may represent the prototype of MDS where the inhibition from the erythropoietic drive tends to prevail, only partially balanced by the RBC transfusions, either directly or indirectly through increased iron stores. This condition, characterized by the lowest hepcidin/ferritin ratio, indeed also showed the highest values of biochemical iron parameters indicating both an expansion of the plasma iron pool through increased absorption/recycling and parenchymal iron toxicity, like transferrin saturation and NTBI. Of note, studies from Mariani and colleagues on hepatic hepcidin mRNA in two RARS patients showed low levels consistent with this view. At the other end of the spectrum lie RAEB and CMML, where the highest levels of both the hepcidin/ferritin ratio and CRP may mirror hepcidin stimulation through blast-derived cytokines that overcomes control by iron. Consistent with this hypothesis, both RAEB and CMML lost their significant predictivity of hepcidin levels in a multivariate model adjusted for CRP levels. In this condition, the relative excess of hepcidin could favour iron entrapment within macrophages, limiting toxicity due to uncontrolled release of the element into the plasma and redirection to parenchymal cells. As for hepcidin suppression by the erythropoietic drive, our results argue against a role of GDF-15 as the putative mediator of this biological effect in MDS, at variance with what is observed in thalassemic syndromes. GDF-15, also known as bone morphogenetic protein (BMP) 14, is a secreted morphogen of the transforming growth factor-beta superfamily, conferring signaling by activation of Smad 1/5/8 or mitogen-activated protein kinase (p38-MAPK). It is highly expressed by erythroid precursors in conditions of ineffective erythropoiesis, but barely detectable in normal bone marrow.
In our series, which again is the largest so far evaluating GDF-15 in MDS, mean serum levels of this protein were nearly six- to ten-fold higher than reference values, with relative homogeneity across different MDS subtypes. Ramirez et al. measured serum GDF-15 in a specific study limited to twenty RARS patients, finding levels (3254±1400 pg/ml) similar to those in our RARS series. Of note, this fascinating and pleiotropic biomarker has also been consistently associated with cardiovascular diseases in recent studies, an issue that might merit further consideration in the future within the specific context of MDS. Nevertheless, GDF-15 was not correlated at all with hepcidin levels in our series. The apparent discrepancy of our results with those of Tanno and coworkers in thalassemia may be explained in terms of absolute levels. Indeed, the GDF-15 levels reported in thalassemic patients are consistently higher (up to more than 100,000 pg/ml) than those found in our MDS series (mean levels near 4,500 pg/ml), and in vitro studies have shown that significant hepcidin suppression requires very high levels, i.e. no less than 5,000 pg/ml, being still incomplete at the highest dose of 100,000 pg/ml. Recent expression studies in erythroblasts have shown that erythroid regulation of hepcidin may be a heterogeneous phenomenon mediated by other molecules, i.e. TWSG1 (Twisted Gastrulation), for which a serum assay is not yet available. Further studies are needed to clarify which mediators may play a role in hepcidin suppression at least in certain MDS subtypes, particularly in RARS. The observation that iron biochemical parameters are significantly higher than in controls also in our subset of non-transfused patients (Table S1), also reported by others, is a further argument in favour of a certain degree of iron hyperabsorption in MDS. Our study suffers from several limitations that need to be acknowledged. First, our considerations on hepcidin regulation by iron rely on ferritin levels, which are known to be an imperfect marker of iron stores. Other measures of body iron stores, such as liver iron content (LIC) through Magnetic Resonance (MR), may be more accurate, considering that the ''gold standard'' represented by liver biopsy is clearly unfeasible in thrombocytopenic and generally elderly patients with several comorbidities like those with MDS.
Table 3. Predictors of hepcidin levels in different linear regression models in MDS patients.
Nevertheless, recent data by Armand and colleagues indicate that serum ferritin is still an acceptable marker of iron stores in MDS, since it showed a strong and significant correlation (r = 0.75, P<0.001) with LIC estimated by MR. Similarly, although our hepcidin assay is specific for the 25-mer bioactive isoform and has been clinically validated in other settings, we have to recognize that we still lack a gold standard for measuring this hormone in biological fluids. Finally, the effect of inflammatory cytokines, which may play a prominent role in certain MDS subtypes with excess myeloblast activation, could be studied only indirectly, through a surrogate like CRP. Notwithstanding these limitations, our results, if confirmed, may be relevant for a better understanding of iron pathophysiology in MDS. They point toward a heterogeneity that, as in any physiological situation, is determined by the relative strengths of competing stimuli in different MDS subtypes (Figure 3), with possible implications also at the individual level.
This may help to calibrate possible future therapeutic approaches in MDS patients with either iron chelators or hepcidin modulators.

Supporting Information
Figure S1 Mean levels of serum hepcidin, serum ferritin, and hepcidin/ferritin ratio across different MDS subtypes. *: for hepcidin/ferritin ratio, P<0.001 by ANOVA with polynomial contrasts for linear trend. (TIF)
Table S1 Biochemical parameters of the MDS patients (either as whole population or stratified into transfused or non-transfused groups) as compared to sex-matched healthy controls. (DOC)
|
Yahoo has been beaten up in the press for so long that it’s hard to remember how untouchable the company once appeared.
A fawning profile in Fortune magazine from 1998 outlined Yahoo’s commanding position. “Yahoo won the search engine wars and was poised for greater things,” the article concluded, wisely prefacing the remark by warning, “Let’s leave aside, for now, questions of whether Yahoo will be around in 10 years.” At the time, Yahoo was drawing a then-impressive 40 million users a month. By 2000, the figure would jump to 185 million.
We all know what happened next. Recent press is awash in retrospectives, takeaways and lessons learned — many of which focus on Yahoo’s failure to buy Facebook and Google, or sell to Microsoft. We’d like to think that Yahoo’s failure has made us wiser and more cautious, less likely to repeat the same mistakes. We’d also like to think that having witnessed Yahoo’s demise we would be better able to spot a company that was at peak valuation and about to begin a long-term unraveling, a company that was on the wrong side of major trends.
Would we, though? What if that company is Google, one of today’s untouchables? Yahoo went from a $125 billion valuation in 2000 to Verizon’s $4.83 billion acquisition in just 16 years. Could the same thing happen to Google, ahem, Alphabet, in 2032?
Definitely.
Google in 2016 = Yahoo in 2000? It’s possible
Of course, Google’s numbers look great now. Fresh off reporting earnings on July 28, it once again beat expectations, sending its stock price surging. Industry observers walked away impressed by the strong growth in mobile advertising revenue — seemingly a sign that Google had effectively pivoted into the next big market for digital ads. What the latest numbers conceal is that the company is approaching the height of monetizing its existing assets with advertising, and that’s exactly the time to start worrying seriously about the future.
Beneath the clean upward trajectory of Google’s success, the digital advertising industry that it has long ruled over has fallen into turmoil and rapid change. It’s not clear if Google’s advertising business will sustain its dominance.
Can Google catch up and avoid Yahoo’s fate?
Google is on the wrong side of major trends in the digital advertising industry: it captures direct-response dollars as digital ad spend shifts up the funnel; its focus is still on browsers and websites as engagement moves into apps and feeds; it is deeply dependent on search during a shift to serendipitous discovery; and its ads designed to interrupt the user’s attention are being replaced by advertising designed to engage them. Its competitor, Facebook, is on the right side of all these trends. Can Google catch up and avoid Yahoo’s fate?
Digital ad dollars are moving further up the funnel
Mary Meeker’s 2016 internet trends report had one slide that got everyone in the ad business talking. Slide No. 45 detailed the discrepancy between the time consumers spend on each channel — print, TV, desktop, mobile — and the ad dollars devoted to reaching them in those channels. TV and print were oversubscribed, while mobile received less than half the dollars it seemingly deserves.
Future growth in mobile — and in digital overall — depends on attracting ad dollars away from traditional channels like print and TV. The challenge with doing so is that ads in TV and print are generally “upper funnel” ad dollars — designed to drive awareness and intent instead of precipitate a direct response. Digital — particularly Google’s core search offering — has proved itself an excellent vehicle for direct response, but hasn’t proved its worth as a medium for the kind of upper funnel dollars that will fuel future growth.
Google’s search advertising model is built on direct response in that it charges for search ads that people click on. In theory, this is an entirely transparent model: After all, advertisers only pay when the advertising works. What it conceals is that Google takes more credit (and charges more) for value its ads didn’t deliver. By charging you for the click that follows a search, Google effectively takes credit for the entire funnel of purchase consideration that led you to type in the search and click on the link in the first place.
AdWords can demand a high cost per click (CPC) for a competitive keyword like “Vacuum” because people who search for it and click on it already want vacuums; they developed purchase intent and are very likely to convert. But the ad itself didn’t create their purchase intent — it just takes credit for it. Google’s lower funnel ads are getting credit for upper-funnel effectiveness, in no small part because the latter is just too hard to measure.
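To make the attribution argument concrete, here is a small, hypothetical Python sketch contrasting last-click attribution (which hands the search ad all of the credit) with a simple linear multi-touch model; the touchpoint names are invented for illustration.

touchpoints = ["display_awareness", "sponsored_content", "search_click"]

def last_click(touches):
    # All conversion credit goes to the final touch - the search ad.
    return {t: 1.0 if i == len(touches) - 1 else 0.0 for i, t in enumerate(touches)}

def linear_multi_touch(touches):
    # Credit is split evenly across every touch on the path to purchase.
    return {t: 1.0 / len(touches) for t in touches}

print(last_click(touchpoints))          # search gets 100 percent of the credit
print(linear_multi_touch(touchpoints))  # each touch gets about a third

Under last-click accounting, the search ad "earns" the entire conversion; under even a crude multi-touch model, two-thirds of the credit moves up the funnel.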
The user experience put at risk by interruptive ads can actually form the basis of a more lucrative and sustainable offering.
The remarkable success of that model has allowed Google to soak up a ton of the money in digital advertising, and the way that advertising was measured became the rest of the ad industry’s self-imposed standard.
This isn’t Google’s fault. CPC search advertising has thus far been the most transparent model in a non-transparent system. The real “holy grail” of advertising — connecting attention to action, measuring (and giving proper credit to) every touch point along the way — is still yet to happen. But that day is coming. And once advertisers can close the loop on a real attribution model, the credit and a chunk of dollars will shift away from search and into ads that drive awareness and influence — effectively driving digital dollars into the upper and mid portions of the customer funnel where Facebook-style ads are more effective.
One of the early signs of this shift has already manifested itself in the declining aggregate CPC across Google, which fell 8 percent YoY last year.
Interruption is giving way to engagement
For everyone that’s not Google, it’s impossible to guarantee that those seeing the ad have already developed purchase intent. In fact, on average, only half of one-tenth of a percent of people who see traditional display ads actually click on them (the average click-through rate on display ads is 0.06 percent), and only 50-60 percent of those ads are even viewable.
As a result, publishers can’t charge much for low-performing, interruptive display ad units. And so they have to sell more and more of these low-performing ads, clogging their pages with takeovers, wrappers and pop-ups that weigh on load times and ruin the user experience. The result of these interruptive tactics is that consumers have rejected digital advertising outright, leading to an unprecedented rise in ad blocking, which grew 41 percent last year.
Publishers and consumers have reached the tipping point, finally realizing that display advertising presents them with a no-win proposition. If they show enough display ads to turn a profit, they risk alienating their readers and driving them to block ads. And brands, for their part, are looking for a more effective medium than display ads for the upper-funnel advertising traditionally reserved for TV — the kind designed to drive brand awareness and purchase intent.
Smart publishers have realized that the user experience put at risk by interruptive ads can actually form the basis of a more lucrative and sustainable offering in the form of sponsored content. The success of sponsored content depends on publishers delivering some value and relevance to their audience. And advertisers are realizing that sponsored content is a more effective vehicle for driving the kind of deep engagement and influence that their upper-funnel dollars are designed to achieve.
Facebook has surpassed search; social has replaced SEO.
Facebook is built for this shift to sponsored content. From Pages to Sponsored Posts, Facebook has staked its monetization strategy on feed-based non-interruptive native units that brands and publishers are turning to in greater and greater numbers, precisely at the same time as they turn away from the interruptive rail-based ads that Google supports.
Google is aware of this threat and has enabled more content advertising on its platform by bringing such advertising to its DoubleClick display business. These new ads, however, are banners masquerading as content. This ignores the true appeal of content advertising, which is that the message can deliver some value to the consumer — some message that isn’t just “sign up now” or “buy now.” Lacking that, content advertising can look like just another ad unit and just a shift of Google’s direct response business into the feed.
Consumers shift from search to discovery
Google has a search monopoly. But search no longer has a monopoly on the way that people find and engage with content. Just as Facebook has surpassed Google in generating traffic to publishers, we are seeing the search bar slowly give way to a more organic process of discovery. As Seth Godin has explained, search is “the action of knowing what you want until you find it,” while discovery is “when the universe (or an organization or a friend) helps you encounter something you didn’t know that you were looking for.” Google is working from behind in this rapid evolution from basic search to organic discovery.
The shift to mobile has already changed the dynamics of search. Google’s average number of searches per month — 100 billion — has been the same since 2012. That nets out to about one search on mobile per day, on average. By contrast, the average user spends 50 minutes a day on Facebook. Facebook has surpassed search to become the No. 1 source of traffic for digital publishers or content providers; social has replaced SEO.
With discovery leveraging all the data that Facebook knows about the user, there is less need to search.
Advertising is moving from rails to feeds
The reason that sponsored content works so well on Facebook is that as users continue to spend more time on mobile than desktop, they are expecting to see feeds rather than websites.
This is compounded by the fact that users spend just 15 percent of their time on mobile in web browsers. The rest is spent within a select handful of apps, and Facebook’s Instagram, WhatsApp, Messenger and Facebook Mobile dominate the rankings for both downloads and time spent.
Google’s answer to this is Accelerated Mobile Pages, or AMP, a Google-hosted environment similar to Facebook Instant Articles in which pages load faster. AMP is a great improvement (try it out on g.co/amp — on your mobile device) but it’s not really a feed in the truest sense. It’s just a faster web browser. There’s none of the individual curation that you get on Facebook — or that you might have gotten if Google Plus had taken off.
The business of digital content monetization is moving to a human-centric focus on quality and engagement.
Google’s strong performance in mobile from this last quarter has convinced Wall Street that it is poised to succeed in the mobile business writ large. But despite the scale of Android adoption, Google’s mobile success is just the continued success of its existing search business on a different screen. It relies on users behaving the same way on mobile as they do on desktops; namely, searching through browsers for things they already want. But as the browser gets replaced by the feed, and search replaced by discovery, Google is at as much of a disadvantage on mobile as anywhere else.
Google: Long may she reign
Unlike Yahoo, Google has been good about identifying trends and using its remarkable war chest to get ahead of them. The company has brilliantly bought its way into leadership in strategic emerging categories: just look at YouTube, Android, DoubleClick and AdMob. As much credit as Google deserves for these maneuvers, it’s fair to say that the expertise that kept them ahead of the trends was not home-grown.
The business of digital content monetization is moving to a human-centric focus on quality and engagement. It is a shift that is not only technological in nature, and Google will need more than just new technology to get ahead of it. It needs a less Google-y approach to content and creativity — a cultural shift and departure from the company’s left-brain comfort zone.
|
package ch.shaktipat.saraswati.test.rmi;
import java.io.Serializable;
import ch.shaktipat.saraswati.common.PersistentClass;
@PersistentClass
public class TestRunnable implements Runnable, Serializable
{
private static final long serialVersionUID = 1L;
@Override
public void run() {}
}
|
// Created 2021-11-07T05:15:41
/**
* @author KJP12
* @since ${version}
**/
module net.kjp12.database {
requires java.sql;
requires org.objectweb.asm;
exports net.kjp12.hachimitsu.database.api;
exports net.kjp12.hachimitsu.database.api.annotation;
}
|
//
// Windows1252Encoding.h
//
// Library: Foundation
// Package: Text
// Module: Windows1252Encoding
//
// Definition of the Windows1252Encoding class.
//
// Copyright (c) 2005-2007, Applied Informatics Software Engineering GmbH.
// and Contributors.
//
// SPDX-License-Identifier: BSL-1.0
//
#ifndef Foundation_Windows1252Encoding_INCLUDED
#define Foundation_Windows1252Encoding_INCLUDED
#include "Poco/Foundation.h"
#include "Poco/TextEncoding.h"
namespace Poco {
class Foundation_API Windows1252Encoding: public TextEncoding
/// Windows Codepage 1252 text encoding.
{
public:
Windows1252Encoding();
~Windows1252Encoding();
const char* canonicalName() const;
bool isA(const std::string& encodingName) const;
const CharacterMap& characterMap() const;
int convert(const unsigned char* bytes) const;
int convert(int ch, unsigned char* bytes, int length) const;
int queryConvert(const unsigned char* bytes, int length) const;
int sequenceLength(const unsigned char* bytes, int length) const;
private:
static const char* _names[];
static const CharacterMap _charMap;
};
} // namespace Poco
#endif // Foundation_Windows1252Encoding_INCLUDED
|
Bilateral Motor Cortex Plasticity in Individuals With Chronic Stroke, Induced by Paired Associative Stimulation Background: In the chronic phase after stroke, cortical excitability differs between the cerebral hemispheres; the magnitude of this asymmetry depends on degree of motor impairment. It is unclear whether these asymmetries also affect capacity for plasticity in corticospinal tract excitability or whether hemispheric differences in plasticity are related to chronic sensorimotor impairment. Methods: Response to paired associative stimulation (PAS) was assessed bilaterally in 22 individuals with chronic hemiparesis. Corticospinal excitability was measured as the area under the motor-evoked potential (MEP) recruitment curve (AUC) at baseline, 5 minutes, and 30 minutes post-PAS. Percentage change in contralesional AUC was calculated and correlated with paretic motor and somatosensory impairment scores. Results: PAS induced a significant increase in AUC in the contralesional hemisphere (P =.041); in the ipsilesional hemisphere, there was no significant effect of PAS (P =.073). Contralesional AUC showed significantly greater change in individuals without an ipsilesional MEP (P =.029). Percentage change in contralesional AUC between baseline and 5 minutes post-PAS correlated significantly with FM score (r = −0.443; P =.039) and monofilament thresholds (r = 0.444, P =.044). Discussion: There are differential responses to PAS within each cerebral hemisphere. Contralesional plasticity was increased in individuals with more severe hemiparesis, indicated by both the absence of an ipsilesional MEP and a greater degree of motor and somatosensory impairment. These data support a body of research showing compensatory changes in the contralesional hemisphere after stroke; new therapies for individuals with chronic stroke could exploit contralesional plasticity to help restore function.
|
Improving Human Motion Prediction Through Continual Learning
Human motion prediction is an essential component for enabling closer human-robot collaboration. The task of accurately predicting human motion is non-trivial. It is compounded by the variability of human motion, both at a skeletal level due to the varying size of humans and at a motion level due to individual movement's idiosyncrasies. These variables make it challenging for learning algorithms to obtain a general representation that is robust to the diverse spatio-temporal patterns of human motion. In this work, we propose a modular sequence learning approach that allows end-to-end training while also having the flexibility of being fine-tuned. Our approach relies on the diversity of training samples to first learn a robust representation, which can then be fine-tuned in a continual learning setup to predict the motion of new subjects. We evaluated the proposed approach by comparing its performance against state-of-the-art baselines. The results suggest that our approach outperforms other methods over all the evaluated temporal horizons, using a small amount of data for fine-tuning. The improved performance of our approach opens up the possibility of using continual learning for personalized and reliable motion prediction.

INTRODUCTION
Human motion prediction involves forecasting future human poses given past motion. For enabling efficient Human-Robot Collaboration, a crucial aspect of robot perception is real-time anticipatory modeling of human motion. Fluid tasks such as collaborative assembly, handovers, and navigating through moving crowds require combining aspects of perception, representation, and motion analysis to accurately and timely predict probable human motion. This would enable the robot to anticipate the human pose and intent and plan accordingly around the human partner without disturbing the natural flow of the human's motion. However, accurate and timely prediction of human motion remains a non-trivial problem due to the complex and interpersonal nature of human behavior. To address the aperiodic and stochastic nature of human motion, prior work has framed the problem of predicting future poses as one of sequence learning, modeling the spatio-temporal aspect of human motion using Recurrent Neural Networks. These approaches aim to learn a unified representation from training samples that is expected to generalize to test data. However, generalization comes at the cost of learning individual subtleties of motion, which is crucial for human-robot collaboration. When training these networks, the core assumption is that the given data points are realizations of independent and identically distributed (i.i.d) random variables. However, this assumption is often violated, e.g., when training and test data come from different distributions (dataset bias or domain shift) or the data points are highly interdependent (e.g., when the data exhibits temporal or spatial correlations). Both these cases are observed in human motion prediction, making it challenging to deploy models trained on benchmark datasets to the real world. While generalization at the cost of learning individual preferences is sub-optimal, there is also a need to learn a robust representation over a diverse range of training samples. As such, training and generalizing over a benchmark dataset cannot be discarded and is, in fact, necessary as the first step to accurate motion prediction.
Prior work on language modeling has demonstrated the benefit of learning a rich representation on large training data followed by fine-tuning on a target task. For human motion prediction, this can be posed as a continual learning problem whereby a motion prediction model acquires prior knowledge by observing a large range of human activities. This is followed by fine-tuning its parameters to accurately capture the subtleties of motion prediction for a particular individual. Such a learning setup, however, brings additional challenges to an already non-trivial problem, with prior work on continual learning demonstrating the risk of catastrophic forgetting. To address the challenges mentioned above, we propose a continual learning scheme that can improve human motion prediction accuracy while reducing the risk of catastrophic forgetting. Our framework is modular and is developed to acquire new knowledge and refine existing knowledge based on the new input. In line with prior work in computational neuroscience, which states that the brain must carry out two complementary tasks, generalizing across experiences and retaining specific episodic-like events, we utilize a two-phase learning scheme. Our framework aims to learn a robust representation of past observations by training on a benchmark dataset in the first phase. This is achieved by using a modular encoder-decoder architecture with adversarial regularization that has state-of-the-art performance on benchmark datasets. In the second phase, we use the representation learning aspect of the framework to condition future poses and fine-tune only the decoder module on new samples in a curriculum learning setup. This mitigates the problem of training from scratch while also providing performance gains, both quantitatively over short, mid, and long-term horizons and qualitatively in terms of generating motion that is perceptibly similar to the ground truth.

PROBLEM FORMULATION
Formally defined, human motion prediction is the problem of predicting the future human pose over a horizon, given the past pose and any additional contextual information. In this paper, we assume that there is only one agent in the scene. For any particular scenario, the input to our model is the past or observed trajectory frames, spanning time $t = 1$ to $T$: $\mathbf{X} = \{x_1, \ldots, x_T\}$. Each frame $x_t \in \mathbb{R}^D$ denotes the $D$-dimensional body pose. $D$ depends on the number of joints $K$ in the skeleton and the dimension $d$ of each joint, where $D = K \times d$. The expected output of the model is the future trajectory frames over a horizon $H$, i.e. the ground-truth pose over the horizon $t = T+1$ to $T+H$: $\mathbf{Y} = \{y_{T+1}, \ldots, y_{T+H}\}$. Our first objective is to learn the underlying representation which would allow the model to generate feasible and accurate human poses $\hat{\mathbf{Y}} = \{\hat{y}_{T+1}, \ldots, \hat{y}_{T+H}\}$. We assume that the future human pose is conditioned on the past observed or generated poses and predict each frame in an auto-regressive manner, as formulated below:

$p(\mathbf{Y} \mid \mathbf{X}; \theta) = \prod_{t=T+1}^{T+H} p(y_t \mid \mathbf{X}, \hat{y}_{T+1:t-1}; \theta)$

where the joint distribution is parameterized by $\theta$. Next, we use these learned parameters to fine-tune for a specific agent who was not observed during the training phase, using a continual learning setup. Instead of updating all the model parameters, we update a specific module, say the decoder module, with corresponding parameters $\theta^*$. We formulate this as follows, similar to prior work in continual learning:

$\log p(\theta^* \mid \mathcal{D}) = \log p(\mathcal{D}_2 \mid \theta^*) + \log p(\theta^* \mid \mathcal{D}_1) - \log p(\mathcal{D}_2)$

where $\mathcal{D}_1$ represents the first phase's training data, which involves learning a representation from the large data distribution.
$\mathcal{D}_2$ represents the second phase's training data, whereby we aim to learn the parameters for a specific human. $p(\theta \mid \mathcal{D}_1)$ embeds all the prior information learned during the training phase.

CONTINUAL LEARNING FOR HUMAN MOTION PREDICTION
The collective goal of our approach is to accurately predict human motion while being flexible to parameter or architectural updates, given new data. Our overall framework is comprised of an encoder and decoder, trained end-to-end with adversarial regularization on the latent variables, building on top of our prior work (Figure 2: Motion prediction architecture). The encoder aims to learn a rich representation over past trajectories, which the decoder can use to condition its prediction. To improve model stability and robustness of the latent space, we use adversarial regularization through discriminators. This acts as a regularizer during training and can improve the network's stability during parameter updates over new data. We will first describe the overall model for motion prediction and then discuss its flexibility for fine-tuning on a particular agent.

Overall model for motion prediction
Motion Encoder: The encoder learns a representation over the high-dimensional observed trajectory, projecting the input to a low-dimensional latent space. To obtain a rich and more robust representation over the past trajectories, we extract the past velocity and acceleration features along with the provided positional values, in line with prior work on motion prediction. The velocity and acceleration features are first and second-order derivatives of the position values for each skeleton joint. For encoding the spatio-temporal representation from the position, velocity, and acceleration data, we employ Recurrent Neural Networks, in particular Gated Recurrent Units (GRU). We use unidirectional GRUs, as we wish to predict human motion in real-time. For each stream, the stream-specific GRU aims to extract the spatio-temporal representation that summarizes the input sequence, with the operation formulated as follows:

$h_{s,t} = \mathrm{GRU}(x_{s,t}, h_{s,t-1}; W_s)$

where $s$ represents the specific stream (position, velocity or acceleration), $x_{s,t}$ denotes the input to the GRU at time $t$, $h_{s,t-1}$ corresponds to the past hidden state, and $W_s$ represents the parameters of the GRU. The outputs from the GRUs represent disparate information corresponding to the past trajectory and need to be fused adaptively. As such, we use a multi-head self-attention mechanism which is tasked to disentangle and extract the relevant stream-specific representation:

$h_{a,t} = \mathrm{Attention}(h_{p,t}, h_{v,t}, h_{acc,t}; W_a)$

where $h_{a,t}$ is the output of the attention mechanism and $W_a$ represents its parameters. The output $h_{a,t}$ is used to obtain the latent representation, which is tasked to characterize the observed trajectory.

Latent Representation: The latent representation aims to capture relevant spatial and temporal semantics from the observed data, which can then be used to condition motion prediction. The latent representation is comprised of a continuous random variable $z$ and a categorical random variable $c$. The motivation behind using both continuous and categorical variables is to jointly model the continuous aspect of human motion, such as the spatial semantics of a particular activity, and the discrete characteristics of human motion, such as the class activity or segment. To obtain the continuous latent variable $z$, the output from the self-attention module is passed through a linear layer. In the case of the categorical latent variable $c$, the output from the self-attention module is passed through a linear layer followed by a softmax layer.
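To make the encoder concrete, the following is a minimal PyTorch-style sketch (not the authors' code) of the three-stream design just described: per-stream GRUs over position, velocity, and acceleration, fused by multi-head self-attention and projected to the continuous and categorical latents. Layer sizes and the class name are assumptions.

import torch
import torch.nn as nn

class MotionEncoder(nn.Module):
    def __init__(self, d_pose, d_hidden=128, n_heads=4, n_classes=27):
        super().__init__()
        # One unidirectional GRU per stream, matching the per-stream equation above.
        self.grus = nn.ModuleDict({
            s: nn.GRU(d_pose, d_hidden, batch_first=True)
            for s in ("pos", "vel", "acc")
        })
        self.attn = nn.MultiheadAttention(d_hidden, n_heads, batch_first=True)
        self.to_z = nn.Linear(d_hidden, d_hidden)                    # continuous latent z
        self.to_c = nn.Sequential(nn.Linear(d_hidden, n_classes),
                                  nn.Softmax(dim=-1))                # categorical latent c

    def forward(self, pos):                                          # pos: (B, T, d_pose)
        vel = torch.diff(pos, dim=1, prepend=pos[:, :1])             # 1st-order derivative
        acc = torch.diff(vel, dim=1, prepend=vel[:, :1])             # 2nd-order derivative
        summaries = [self.grus[s](x)[1][-1]                          # final hidden state per stream
                     for s, x in zip(("pos", "vel", "acc"), (pos, vel, acc))]
        tokens = torch.stack(summaries, dim=1)                       # (B, 3, d_hidden)
        fused, _ = self.attn(tokens, tokens, tokens)                 # adaptive stream fusion
        h = fused.mean(dim=1)
        return self.to_z(h), self.to_c(h)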
Adversarial Regularization: To enforce a prior on the latent space, we use adversarial learning, similar to the Adversarial Autoencoders (AAE) framework. This serves the purpose of a regularizer, as there is a modification to the overall objective function: the objective function now consists of a reconstruction loss and an adversarial loss. We reason that this helps improve the stability of the overall framework for continual learning, as the parameters are updated based on two competing objectives: the reconstruction loss and the discriminator loss. Similar to the GAN and AAE setups, the encoder aims to confuse the discriminators by trying to ensure that its output is similar to the aggregated prior. The discriminators are trained to distinguish the true samples, generated using a given prior, from the latent space output of the encoder, thus establishing a min-max adversarial game between the networks. We use two discriminators, one for the continuous latent variable and the other for the categorical latent variable. The discriminators compute the probability that a point $z$ or $c$ is a sample from the prior distribution that we are trying to model (positive samples) or from the latent space (negative samples). We use a Gaussian prior for the continuous latent variable and a uniform distribution prior for the categorical latent variable.

Decoder: The decoder uses the latent representation and the past generated pose to predict the future pose for each time step. It is auto-regressive, i.e., it uses the output of the previous timestep to predict the current pose, and has only one stream, position, as the expected output is the future joint positions of the human. The input to the decoder is the latent representation, $z$ and $c$, and the past generated pose $\hat{y}_{t-1}$, or the seed pose $x_T$ at time $T$ if it is predicting the first time-step $T+1$. This is then passed to an attention mechanism that allows the decoder to adaptively condition its output on the latent variables, which provide long-term information over the observed frames, and on the immediately generated frame. The output from this attention mechanism is next passed to a GRU cell, similar to the one at the encoder. This is followed by a Structured Prediction Layer (SPL), which predicts each joint hierarchically following a skeleton tree, thus allowing the decoder to enforce a structural prior on its final output. The operations at the decoder are formulated as follows:

$h_{d,t} = \mathrm{GRU}(\mathrm{Attention}(z, c, \hat{y}_{t-1}; W_{a'}), h_{d,t-1}; W_d), \quad \hat{y}_t = \mathrm{SPL}(h_{d,t})$

Curriculum learning for the decoder: The encoder-decoder architecture with adversarial regularization is trained to convergence on the training set. This training is followed by providing the overall architecture with unseen but small samples of motion data. This aims to relax the i.i.d assumption of the training procedure, as our framework now has access to limited motion samples of the agent that it is trying to model. Our choice of continual learning scheme is the curriculum learning setup, whereby we first train the network on a comparatively simpler task of representation learning, followed by the relatively difficult task of fine-tuning its parameters for a specific human subject. Our implementation is based on findings in connectionist models, in particular self-organizing maps, which reduce the levels of functional plasticity (i.e., the ability to acquire knowledge in neural networks) through a two-phase training of the topographic neural map.
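The adversarial regularization step can be sketched as follows. This is a hedged, illustrative PyTorch fragment, not the paper's implementation; it assumes both discriminators end in a sigmoid and that the encoder follows the interface sketched earlier.

import torch
import torch.nn.functional as F

def regularization_step(encoder, d_cont, d_cat, x, n_classes=27):
    """One adversarial step; d_cont/d_cat are assumed to output probabilities."""
    z, c = encoder(x)
    z_prior = torch.randn_like(z)                      # Gaussian prior on z
    c_prior = F.one_hot(torch.randint(n_classes, (x.size(0),)),
                        n_classes).float()             # uniform categorical prior on c

    bce = F.binary_cross_entropy
    # Discriminator loss: prior samples scored as real (1), encoder outputs as fake (0).
    p_z, p_c = d_cont(z_prior), d_cat(c_prior)
    q_z, q_c = d_cont(z.detach()), d_cat(c.detach())
    d_loss = (bce(p_z, torch.ones_like(p_z)) + bce(q_z, torch.zeros_like(q_z))
              + bce(p_c, torch.ones_like(p_c)) + bce(q_c, torch.zeros_like(q_c)))

    # Encoder ("generator") loss: push its latents to look like prior samples.
    g_z, g_c = d_cont(z), d_cat(c)
    g_loss = bce(g_z, torch.ones_like(g_z)) + bce(g_c, torch.ones_like(g_c))
    return d_loss, g_loss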
The first phase is the organization phase, where the neural network is trained with a high learning rate and a large spatial neighborhood size, allowing the network to reach an initial rough topological organization. The second phase is referred to as the tuning phase, where the learning rate and the neighborhood size are iteratively reduced for fine-tuning. We aim to adapt these findings to a sequence learning framework. Following prior work on developmental and curriculum learning, we fine-tuned the architecture on the new data. We adopt techniques that allow us to retain previous knowledge and avoid catastrophic forgetting during fine-tuning. In particular, we rely on discriminative fine-tuning, whereby we fine-tune only the decoder network, at a different learning rate, while freezing the encoder and the discriminator networks.

Fine-tuning the decoder: In line with equation 5, the input to the model is the sequence of observed poses $\mathbf{X} = \{x_1, \ldots, x_T\}$, with the output of the encoder being $z$ and $c$. However, instead of imposing a prior on the latent space and training the encoder-decoder end-to-end, we only update the decoder's parameters. We also use a lower learning rate and rely on a small number of training samples to improve model stability and reduce the likelihood of catastrophic forgetting. We leverage the representation learning capability that the framework attained when training on a large and diverse dataset. The encoder network is tasked to provide a representation summarizing the past observation that is used by the decoder to condition its prediction, similar to equation 6. The pre-trained weights from the training set are used to initialize the overall architecture and act as prior knowledge. The decoder weights are updated based on the reconstruction loss on the new data. A minimal sketch of this phase follows at the end of this section.

EXPERIMENTAL SETUP
4.1 Dataset
We evaluated the performance of our approach on the widely used human-activity dataset UTD-MHAD. The dataset comprises 27 action classes covering activities from hand gestures to training exercises and daily activities, thus providing relevant activities for human-robot collaboration. Each activity was performed by 8 different subjects, with each subject repeating the activity 4 times. In our experiments, we use only skeleton data for predicting human motion, following previous work in this domain.

Generalized Representation Learning
We used the cross-subject evaluation scheme, training and validating on odd-numbered subjects for the first phase, thus providing the framework with a large training sample and maximizing the likelihood of encountering diverse demonstrations. To evaluate the performance, we hold out a section of the data for the validation set and early stopping. This reduces the likelihood of overfitting on the training data while also providing a mechanism for stopping the training procedure.

Curriculum Learning for a specific subject
Having learned a generalized representation, the second phase involved training the framework in a curriculum learning setup. Here, the experiments are conducted on a particular held-out even-numbered subject. We fine-tuned only the decoder using a reduced learning rate, with the encoder weights initialized from the first phase. As each subject has 4 trials, we trained on one trial and tested on the other 3 trials.

State-of-the-art method and baseline
For evaluating the efficacy of our curriculum learning setup, we compared against a non-curriculum-learning framework and the zero-velocity baseline.
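As referenced above, here is a minimal sketch of the decoder fine-tuning phase, assuming the encoder and decoder interfaces from the earlier sketch; hyperparameter values and the data loader are illustrative, not the paper's exact ones.

import torch

def finetune_decoder(encoder, decoder, loader, lr=1e-4, epochs=5):
    # Freeze the prior knowledge: the encoder (and discriminators) stay fixed.
    encoder.eval()
    for p in encoder.parameters():
        p.requires_grad_(False)
    opt = torch.optim.Adam(decoder.parameters(), lr=lr)   # reduced learning rate
    mse = torch.nn.MSELoss()
    for _ in range(epochs):
        for x, y in loader:                               # small per-subject sample
            z, c = encoder(x)
            # Auto-regressive rollout, seeded with the last observed pose x_T.
            y_prev, h, preds = x[:, -1], None, []
            for _ in range(y.size(1)):
                y_prev, h = decoder(z, c, y_prev, h)
                preds.append(y_prev)
            loss = mse(torch.stack(preds, dim=1), y)      # reconstruction loss only
            opt.zero_grad()
            loss.backward()
            opt.step()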
The first benchmark is comprised of an encoder-decoder framework with adversarial regularization, but with no provision for curriculum learning. The zero-velocity baseline assumes that all the future predictions are identical to the last observed pose and is challenging to outperform for short-term prediction. It also allows us to gauge the movement dynamics, with a lower MSE for zero-velocity suggesting less movement and, vice versa, a higher MSE suggesting more.

Evaluation Metric
We evaluated the performance of all models using the Mean Squared Error (MSE), which is the $\ell_2$ distance between the ground-truth and the predicted poses at each timestep, averaged over the number of joints and the sequence length, in line with prior work. The MSE is calculated as:

$\mathrm{MSE} = \frac{1}{T \cdot K} \sum_{t=1}^{T} \sum_{k=1}^{K} \lVert y_{t,k} - \hat{y}_{t,k} \rVert_2^2$

where $T$ and $K$ are the total number of frames and joints, respectively.

RESULTS AND DISCUSSION
Results: We present the results of all approaches on the UTD-MHAD dataset in Table 1. We report the performance of all approaches at distinct frame intervals to circumvent the problem of frame drops during data collection and subsequent evaluation. Our frame intervals aim to evaluate all models on short (2 & 4), mid (8 & 10), and long-term motion prediction (13 & 15). Table 1 depicts the performance of all approaches with respect to the test subjects. The results in Table 1 suggest that fine-tuning the framework allows it to outperform all other methods and the zero-velocity baseline for short, mid, and long-term prediction.

Discussion: Our proposed approach outperformed the prior state-of-the-art approach and the baseline, both quantitatively by having lower MSE and qualitatively in terms of generating motion closer to the ground-truth pose (see Fig. 1). This shows the benefit of the curriculum learning approach while also suggesting that our overall framework is robust to catastrophic forgetting. The performance gain is especially significant over the mid and long term, as the decoder is trained to learn the spatio-temporal movement pattern of a specific subject and can generate the future pose with higher accuracy. Using a curriculum learning setup, albeit on a small training sample, allows the framework to capture individual human motion subtleties, as seen by the lower MSE, particularly over the mid and long-term horizons. The approach is particularly useful when there is significant movement over the given horizon, as seen for Subjects 4 and 6 (Table 1), who have a higher MSE loss on the zero-velocity baseline. For Subject 2, there is overall less movement, as seen by the zero-velocity MSE loss, and hence the performance gain is not significant over the mid and long term and is even negative over the short term. Overall, the results are particularly promising, as we did not fine-tune the encoder, instead focusing only on the decoder. Further improvement can be attained by fine-tuning the encoder.

CONCLUSION
In this work, we present a curriculum learning approach that opens up the possibility of continual learning for human motion prediction, especially if the model is deployed in the wild. Our framework first learns a general representation over diverse training samples before fine-tuning on a target human subject. Our experiments suggest the feasibility of curriculum learning, with performance gains over non-curriculum-learning approaches. Future work will focus on fine-tuning to activities and domains that were not observed in training, in a zero-shot learning setup.

ACKNOWLEDGEMENT
This work is supported by the CCAM Innovation Award.
RESULTS AND DISCUSSION

Results: We present the results of all approaches on UTD-MHAD in Table 1. We report the performance of all approaches at distinct frame intervals to circumvent the problem of frame drops during data collection and subsequent evaluation. Our frame intervals aim to evaluate all models on short-term (2 & 4), mid-term (8 & 10), and long-term motion prediction (13 & 15). Table 1 depicts the performance of all approaches with respect to the test subjects. The results in Table 1 suggest that fine-tuning the framework allows it to outperform all other methods and the zero-velocity baseline for short-, mid-, and long-term prediction.

Discussion: Our proposed approach outperformed the prior state-of-the-art approach and the baseline both quantitatively, by having lower MSE, and qualitatively, in terms of generating motion closer to the ground-truth pose (see Fig. 1). This shows the benefit of the curriculum learning approach while also suggesting that our overall framework is robust to catastrophic forgetting. The performance gain is especially significant over the mid and long term, as the decoder is trained to learn the spatio-temporal movement pattern of a specific subject and can generate the future pose with higher accuracy. Using a curriculum learning setup, albeit on a small training sample, allows the framework to capture individual human motion subtleties, as seen by the lower MSE, particularly over the mid- and long-term horizons. The approach is particularly useful when there is significant movement over the given horizon, as seen for Subjects 4 and 6 (Table 1), who have higher MSE loss on the zero-velocity baseline. For Subject 2, there is overall less movement, as seen by the zero-velocity MSE loss, and hence the performance gain is not significant over the mid and long term and is even worse over the short term. Overall, the results are particularly promising as we did not fine-tune the encoder, instead only focusing on the decoder. Further improvement could be attained by also fine-tuning the encoder.

CONCLUSION

In this work, we presented a curriculum learning approach that opens the possibility of continual learning for human motion prediction, especially if the model is deployed in the wild. Our framework first learns a general representation over diverse training samples before fine-tuning on a target human subject. Our experiments suggest the feasibility of curriculum learning, with performance gains over non-curriculum-learning approaches. Future work will focus on fine-tuning to activities and domains that were not observed in training, in a zero-shot learning setup.

ACKNOWLEDGEMENT

This work is supported by the CCAM Innovation Award. The authors thank Tim Bakker, Tomonari Furukawa, and Qing Chang for their support.
|
def replace(self, collection):
    """Replace the stored collection with a new dict; persist it when autosave is enabled."""
    self.child_key = None
    if isinstance(collection, dict):
        self.collection = collection
        # With autosave on, write the new collection straight back to disk
        if self.autosave and self.save_to_file:
            update_file(self.filepath, self.collection, self.indent, self.sort, self.reverse, self.encoding)
    return self
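A hypothetical usage sketch; the JsonStore class name and constructor arguments are assumptions based on this fragment alone:

# Assuming the surrounding class wraps a JSON-backed dict store:
db = JsonStore("data.json", autosave=True, save_to_file=True)

# replace() swaps the whole collection and, because autosave is on,
# immediately persists it to data.json via update_file()
db.replace({"users": [], "version": 2})

# Non-dict arguments are ignored; the old collection is kept
db.replace(["not", "a", "dict"])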
|
As Tesla Inc. discusses potential ramifications for Chief Executive Elon Musk’s actions with the Securities and Exchange Commission, it disclosed a dramatic change to its board of directors late on Good Friday.
Maxwell House, which has handed out Haggadahs at Passover for decades, teamed up with the Amazon show ‘The Marvelous Mrs. Maisel’ this year.
The latest trends and developments across the 11 sectors of the S&P 500.
In focus: Aurora Cannabis and Canopy Growth.
Oil prices this month touched the highest levels of the year, but the market now faces a number of key tests.
Oil futures climb Thursday, with U.S. prices posting a seventh weekly climb in a row—the longest streak of weekly gains in about five years.
Gold ends modestly lower Thursday to post a fourth weekly loss in a row as U.S. retail figures jumped, providing support for the dollar.
Oil futures end lower on Wednesday, pressured by uncertainty surrounding global crude production, despite data from a U.S. government report that revealed the first weekly decline in U.S. crude stocks in a month.
Gold falls Wednesday for a third straight session, with prices settling at another low for the year.
In a revival of sorts, digital assets are back on the rise. By late 2018, even the staunchest bitcoin aficionado wondered if the end was ever in sight for a brutal bear market that wiped more than $500 billion off the total value of all cryptocurrencies.
Check bitcoin and cryptocurrency prices, performance, and market capitalization, in one dashboard.
Perhaps a pullback toward support would refresh it.
|
// src/dialog/styles/_common.ts
export default {
titleFontSize: '18px',
padding: '16px 28px 20px 28px',
iconSize: '28px',
actionSpace: '12px',
contentMargin: '8px 0 16px 0',
iconMargin: '0 4px 0 0',
iconMarginIconTop: '4px 0 8px 0',
closeSize: '18px',
closeMargin: '22px 28px 0 0',
closeMarginIconTop: '12px 18px 0 0'
}
|
import java.io.*;
import java.net.*;
import java.util.Scanner;
public class JavaClient{
    public static void client(String multiCastAddr, int multiCastPort, String address, int port){
        // Listen for multicast text messages on a background thread
        ClientReceiveMulticast recv = new ClientReceiveMulticast(multiCastAddr, multiCastPort);
        Thread thread = new Thread(recv);
        thread.start();
        // Listen for multicast image data on a second background thread
        ClientReceiveImagesMulticast imRecv = new ClientReceiveImagesMulticast("172.16.31.10", 5002);
        Thread imThread = new Thread(imRecv);
        imThread.start();
        try{
            // Connect to the server and wire up text I/O over the socket
            Socket socket = new Socket(address, port);
            BufferedReader reader = new BufferedReader(new InputStreamReader(socket.getInputStream()));
            PrintWriter writer = new PrintWriter(new OutputStreamWriter(socket.getOutputStream()));
            // Read one line from stdin, frame it and send it to the server
            Scanner scanner = new Scanner(System.in);
            String mess = scanner.nextLine();
            byte[] messB = Message.createMsg(mess);
            writer.print(new String(messB));
            writer.flush();
            // Echo every line the server sends back until the stream closes
            String msg;
            while((msg = reader.readLine()) != null) System.out.println(msg);
}
catch(Exception e){
e.printStackTrace();
}
}
public static void main(String[] args){
if(args.length == 0){
client("172.16.31.10", 5001, "localhost", 4242);
}
else if(args.length == 4){
client(args[0], Integer.parseInt(args[1]), args[2], Integer.parseInt(args[3]));
}
else System.out.println("Wrong options");
}
}
|
Changes in Psychological Determinants of Behavior Change after Individual versus Group-Based Lifestyle-integrated Fall Prevention: Results from the LiFE-is-LiFE Trial. OBJECTIVE The Lifestyle-integrated Functional Exercise (LiFE) intervention has been shown to promote physical activity in fall-prone older adults. However, the underlying mechanisms of how LiFE functions remain unclear. This study compares the effects of the individual and group-based LiFE formats on psychological determinants of behavior change derived from the health action process approach, habit formation theory, and self-determination theory. METHODS Secondary analyses of data from the randomized, non-inferiority LiFE-is-LiFE trial were performed. Questionnaire data on psychological determinants were obtained from older adults (M = 78.8 years, range 70-95) who took part in either the individual (n = 156) or the group-based (n = 153) LiFE intervention. The number of measurement points varied from three to six, ranging from baseline (T1) up to a 12-month follow-up (T6). A generalized linear mixed model was specified for each determinant. RESULTS Both LiFE and gLiFE participants reported lower levels of motivational determinants at T6. LiFE participants showed significantly higher values of action planning and coping planning at T6. Participants in both formats showed increased levels of action control at T6, whereas participants' habit strength decreased post-intervention but then stabilized over time. LiFE participants showed higher levels of autonomy, competence, and relatedness throughout the study, but levels of intrinsic motivation did not differ between formats or from T1 to T6. CONCLUSION In both formats, but especially in the individual LiFE format, the behavior change techniques used affected volitional rather than motivational or general determinants of behavior change. Habit strength, an important indicator of the sustainability of the LiFE exercises, stabilized over time, indicating that participants, at least partly, sustained their formed habits long-term.
|
/*
* Copyright 2018 Google
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#include "Firestore/core/src/firebase/firestore/local/reference_set.h"
#include "Firestore/core/src/firebase/firestore/model/document_key.h"
#include "Firestore/core/test/firebase/firestore/testutil/testutil.h"
#include "gtest/gtest.h"
namespace firebase {
namespace firestore {
namespace local {
using model::DocumentKey;
TEST(ReferenceSetTest, AddOrRemoveReferences) {
DocumentKey key = testutil::Key("foo/bar");
ReferenceSet referenceSet{};
EXPECT_TRUE(referenceSet.empty());
EXPECT_FALSE(referenceSet.ContainsKey(key));
referenceSet.AddReference(key, 1);
EXPECT_TRUE(referenceSet.ContainsKey(key));
EXPECT_FALSE(referenceSet.empty());
referenceSet.AddReference(key, 2);
EXPECT_TRUE(referenceSet.ContainsKey(key));
referenceSet.RemoveReference(key, 1);
EXPECT_TRUE(referenceSet.ContainsKey(key));
referenceSet.RemoveReference(key, 3);
EXPECT_TRUE(referenceSet.ContainsKey(key));
referenceSet.RemoveReference(key, 2);
EXPECT_FALSE(referenceSet.ContainsKey(key));
EXPECT_TRUE(referenceSet.empty());
}
TEST(ReferenceSetTest, RemoveAllReferencesForTargetId) {
DocumentKey key1 = testutil::Key("foo/bar");
DocumentKey key2 = testutil::Key("foo/baz");
DocumentKey key3 = testutil::Key("foo/blah");
ReferenceSet referenceSet{};
referenceSet.AddReference(key1, 1);
referenceSet.AddReference(key2, 1);
referenceSet.AddReference(key3, 2);
EXPECT_FALSE(referenceSet.empty());
EXPECT_TRUE(referenceSet.ContainsKey(key1));
EXPECT_TRUE(referenceSet.ContainsKey(key2));
EXPECT_TRUE(referenceSet.ContainsKey(key3));
referenceSet.RemoveReferences(1);
EXPECT_FALSE(referenceSet.empty());
EXPECT_FALSE(referenceSet.ContainsKey(key1));
EXPECT_FALSE(referenceSet.ContainsKey(key2));
EXPECT_TRUE(referenceSet.ContainsKey(key3));
referenceSet.RemoveReferences(2);
EXPECT_TRUE(referenceSet.empty());
EXPECT_FALSE(referenceSet.ContainsKey(key1));
EXPECT_FALSE(referenceSet.ContainsKey(key2));
EXPECT_FALSE(referenceSet.ContainsKey(key3));
}
} // namespace local
} // namespace firestore
} // namespace firebase
|
// libraries/distanceField/trilinearInterpolation.h
/*************************************************************************
* *
* Vega FEM Simulation Library Version 4.0 *
* *
* "distance field" library , Copyright (C) 2007 CMU, 2018 USC *
* All rights reserved. *
* *
* Code author: <NAME> *
* http://www.jernejbarbic.com/vega *
* *
* Research: <NAME>, <NAME>, <NAME>, *
* <NAME>, <NAME>, *
* <NAME>, <NAME>, *
* <NAME>, <NAME> *
* *
* Funding: National Science Foundation, Link Foundation, *
* Singapore-MIT GAMBIT Game Lab, *
* Zumberge Research and Innovation Fund at USC, *
* Sloan Foundation, Okawa Foundation, *
* USC Annenberg Foundation *
* *
* This library is free software; you can redistribute it and/or *
* modify it under the terms of the BSD-style license that is *
* included with this library in the file LICENSE.txt *
* *
* This library is distributed in the hope that it will be useful, *
* but WITHOUT ANY WARRANTY; without even the implied warranty of *
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the file *
* LICENSE.TXT for more details. *
* *
*************************************************************************/
/*
Trilinear interpolation.
*/
#define TRILINEAR_INTERPOLATION(wx,wy,wz,v000,v100,v110,v010,v001,v101,v111,v011) \
( (wx) * (wy) * (wz) * (v111) + \
(wx) * (wy) * (1-(wz)) * (v110) + \
(wx) * (1-(wy)) * (wz) * (v101) + \
(wx) * (1-(wy)) * (1-(wz)) * (v100) + \
(1-(wx)) * (wy) * (wz) * (v011) + \
(1-(wx)) * (wy) * (1-(wz)) * (v010) + \
(1-(wx)) * (1-(wy)) * (wz) * (v001) + \
(1-(wx)) * (1-(wy)) * (1-(wz)) * (v000))
#define GRADIENT_COMPONENT_X(wx,wy,wz,v000,v100,v110,v010,v001,v101,v111,v011) \
(((wy) * (wz) * (v111) + \
(wy) * (1-(wz)) * (v110) + \
(1-(wy)) * (wz) * (v101) + \
(1-(wy)) * (1-(wz)) * (v100) + \
(-1) * (wy) * (wz) * (v011) + \
(-1) * (wy) * (1-(wz)) * (v010) + \
(-1) * (1-(wy)) * (wz) * (v001) + \
(-1) * (1-(wy)) * (1-(wz)) * (v000) ) / gridX)
#define GRADIENT_COMPONENT_Y(wx,wy,wz,v000,v100,v110,v010,v001,v101,v111,v011) \
(((wx) * (wz) * (v111) + \
(wx) * (1-(wz)) * (v110) + \
(wx) * (-1) * (wz) * (v101) + \
(wx) * (-1) * (1-(wz)) * (v100) + \
(1-(wx)) * (wz) * (v011) + \
(1-(wx)) * (1-(wz)) * (v010) + \
(1-(wx)) * (-1) * (wz) * (v001) + \
(1-(wx)) * (-1) * (1-(wz)) * (v000)) / gridY)
#define GRADIENT_COMPONENT_Z(wx,wy,wz,v000,v100,v110,v010,v001,v101,v111,v011) \
(((wx) * (wy) * (v111) + \
(wx) * (wy) * (-1) * (v110) + \
(wx) * (1-(wy)) * (v101) + \
(wx) * (1-(wy)) * (-1) * (v100) + \
(1-(wx)) * (wy) * (v011) + \
(1-(wx)) * (wy) * (-1) * (v010) + \
(1-(wx)) * (1-(wy)) * (v001) + \
(1-(wx)) * (1-(wy)) * (-1) * (v000)) / gridZ)
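For illustration, the same trilinear interpolation formula expressed as a small Python function (independent of this C header); interpolating at a cell's center, for example, returns the average of the eight corner values:

def trilinear(wx, wy, wz, v000, v100, v110, v010, v001, v101, v111, v011):
    # wx, wy, wz are the fractional offsets of the query point
    # inside its grid cell, each in [0, 1]
    return (wx * wy * wz * v111 +
            wx * wy * (1 - wz) * v110 +
            wx * (1 - wy) * wz * v101 +
            wx * (1 - wy) * (1 - wz) * v100 +
            (1 - wx) * wy * wz * v011 +
            (1 - wx) * wy * (1 - wz) * v010 +
            (1 - wx) * (1 - wy) * wz * v001 +
            (1 - wx) * (1 - wy) * (1 - wz) * v000)

# At the cell center every corner carries weight 1/8:
assert trilinear(0.5, 0.5, 0.5, *([8.0] * 8)) == 8.0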
|
The closed duodenal loop technique. The closed duodenal loop (CDL) technique, one of the first experimental models of acute pancreatitis, is described in this article. Since this model was first published by Pfeffer in 1957, it has undergone several modifications. The CDL method is an easily practicable and reproducible model for investigating acute hemorrhagic pancreatitis. In view of other available experimental models, the CDL technique has declined in popularity.
|
/*
* Tencent is pleased to support the open source community by making wechat-matrix available.
* Copyright (C) 2019 THL A29 Limited, a Tencent company. All rights reserved.
* Licensed under the BSD 3-Clause License (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://opensource.org/licenses/BSD-3-Clause
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#ifndef MatrixFramework_h
#define MatrixFramework_h
#import "Matrix.h"
#import "MatrixIssue.h"
#import "MatrixPlugin.h"
#import "MatrixPluginConfig.h"
#import "MatrixAdapter.h"
#import "MatrixTester.h"
#import "MatrixAppRebootType.h"
#import "WCCrashReportInfoUtil.h"
#import "WCCrashReportInterpreter.h"
#import "WCCrashBlockMonitorConfig.h"
#import "WCCrashBlockMonitorPlugin.h"
#import "WCCrashBlockMonitorPlugin+Upload.h"
#import "WCBlockMonitorConfiguration.h"
#import "WCCrashBlockFileHandler.h"
#import "WCCrashBlockMonitorDelegate.h"
#import "WCBlockTypeDef.h"
#import "WCMemoryStatPlugin.h"
#import "WCMemoryStatConfig.h"
#import "WCMemoryStatModel.h"
#import "MatrixBaseModel.h"
#import "memory_stat_err_code.h"
#import "KSCrashReportWriter.h"
#import "KSThread.h"
#import "KSMachineContext.h"
#import "KSStackCursor.h"
#endif /* MatrixFramework_h */
|
Improving Anonymization Clustering. Microaggregation is a technique to preserve privacy when confidential information about individuals shall be used by third parties. A basic property to be established is called k-anonymity. It requires that identifying information about individuals should not be unique; instead, there has to be a group of size at least k that looks identical. This is achieved by clustering individuals into appropriate groups and then averaging the identifying information. The question arises how to select these groups such that the information loss by averaging is minimal. This problem has been shown to be NP-hard. Thus, several heuristics called MDAV, V-MDAV, ... have been proposed for finding at least a suboptimal clustering. This paper proposes a more sophisticated, but still efficient, strategy called MDAV* to construct a good clustering. The question whether to extend a group locally by individuals close by or to start a new group with such individuals is investigated in more depth. This way, a noticeably lower information loss can be achieved, which is shown by applying MDAV* to several established benchmarks of real data and also to specifically designed random data.
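To illustrate the clustering step the abstract describes, here is a minimal Python sketch of naive fixed-size microaggregation; it conveys the general idea only and is not the MDAV* strategy proposed in the paper:

import numpy as np

def microaggregate(X, k):
    """Naive k-anonymous microaggregation: greedily group each remaining
    record with its k-1 nearest unassigned neighbors, then replace every
    group by its centroid (the averaged identifying attributes)."""
    X = np.asarray(X, dtype=float)
    unassigned = list(range(len(X)))
    out = np.empty_like(X)
    while len(unassigned) >= k:
        seed = unassigned[0]
        # Distances from the seed record to all still-unassigned records
        d = np.linalg.norm(X[unassigned] - X[seed], axis=1)
        group = [unassigned[j] for j in np.argsort(d)[:k]]
        out[group] = X[group].mean(axis=0)
        unassigned = [j for j in unassigned if j not in group]
    if unassigned:
        # Real heuristics (eg, MDAV) merge the fewer-than-k leftovers into
        # the nearest existing group so that k-anonymity still holds.
        out[unassigned] = X[unassigned].mean(axis=0)
    return out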
|
Genetically modified (GM) crop use in Colombia: farm level economic and environmental contributions

ABSTRACT This study assesses the economic and environmental impacts that have arisen from the adoption and use of genetically modified (GM) cotton and maize in Colombia in the fifteen years since GM cotton was first planted in Colombia in 2003. A total of 1.07 million hectares have been planted to cotton and maize containing GM traits since 2003, with farmers benefiting from an increase in income of US $301.7 million. For every extra US $1 spent on this seed relative to conventional seed, farmers have gained an additional US $3.09 in extra income from growing GM cotton and an extra US $5.25 in extra income from growing GM maize. These income gains have mostly arisen from higher yields (+30.2% from using stacked (herbicide tolerant and insect resistant) cotton and +17.4% from using stacked maize). The cotton and maize seed technology have reduced insecticide and herbicide spraying by 779,400 kg of active ingredient (-19%) and, as a result, decreased the environmental impact associated with herbicide and insecticide use on these crops (as measured by the indicator, the Environmental Impact Quotient (EIQ)) by 26%. The technology has also facilitated cuts in fuel use, resulting in a reduction in the release of greenhouse gas emissions from the GM cotton and maize cropping area, and contributed to saving scarce land resources.

Introduction

GM crop technology has been widely used in cotton and maize in many parts of the world for more than 20 years; GM technology in these crops was first used in the USA in 1996. Since then, its use has been extended to 55.5 million ha of maize planted in thirteen countries and 23.8 million ha of cotton also planted in thirteen countries. In Colombia, GM cotton was first grown commercially in 2003 on a restricted basis, with unrestricted planting from 2004. In the first years of commercial growing, varieties containing the insect resistance trait Mon 531 ('Bollgard I') were planted; these were resistant to the following pests: budworms (Heliothis virescens), earworms (Helicoverpa zeae), pink bollworm (Pectinophora gossypiella), false pink bollworm (Sacadodes pyralis), cotton worm (Alabama argillacea) and cotton leafworm (Spodoptera sp). 'Stacked' seed containing both this IR trait and the herbicide tolerance trait Mon 1445 (tolerance to glyphosate) became available from 2006. These were then followed by second generation GM traits such as Mon 15985 (IR: 'Bollgard II', which extended control to include the Fall Armyworm (Spodoptera)) and Mon 88913 (HT: tolerance to glyphosate, 'RoundupFlex', which allowed 'over the top' spraying of glyphosate for weed control later in the growing season) from 2009/10. Liberty Link cotton (tolerant to the herbicide glufosinate) became available in 2011, and other 'second-generation' traits such as 'Twinlink' (IR) and 'Glytol' (HT: tolerant to glyphosate and glufosinate) became available to farmers from 2014. Cotton seed varieties containing the stacking of these two latter traits (Twinlink and Glytol) have been rapidly adopted and accounted for 75% of GM cotton plantings in 2018 (data source: Instituto Colombiano Agropecuario (ICA)). In 2018, GM cotton was planted on 12,103 ha (of which 98% contained both IR and HT traits: Table 1). GM maize was first grown commercially in 2006, initially on a restricted basis and, post 2007, on an unrestricted basis.
The first traits available (eg, 'Yieldgard' varieties containing the trait Mon 810) conveyed resistance to common maize pests like Corn borer (Diatraea) and corn earworm (Helicoverpa), with 'Herculex I' varieties containing the DAS 1507 trait conveying resistance to these two pests plus the Fall Armyworm (Spodoptera) pest. Varieties conveying HT traits (tolerance to glyphosate and glufosinate) were also approved in 2007/08, with 'stacked' seeds containing both IR and HT traits available from 2009. In subsequent years, second generation traits have become available from several companies, offering farmers more effective control of pests and a reduced chance of pest resistance developing to the technology via the inclusion of more traits with additional modes of control action. In 2018, GM maize was planted on 76,014 ha, of which 92.5% contained both IR and HT traits (Table 1). This paper presents an assessment of some of the key economic and environmental impacts associated with the adoption of GM cotton and maize from 2003 and 2007 respectively in Colombia. The analysis focuses on:

- gross farm income effects on costs of production, yield/production and farm income;
- changes in the amount of insecticides and herbicides applied to the GM crops relative to conventionally grown alternatives; and
- the contribution of the technology toward reducing global greenhouse gas (GHG) emissions.

Methodology

The approach used to estimate the impacts of the GM maize and cotton draws on the farm level and aggregate impacts identified in the global impact studies of Brookes and Barfoot. 1,2 These examined farm level economic impacts on crop yield and production gains and environmental impacts associated with changes in insecticide use and carbon emission savings associated with better pest and weed control with the GM HT and IR traits in the two crops. The material presented in this paper combines data presented in the Brookes and Barfoot global impact studies with country-specific farm level data for Colombia. The methodology used for assessing the environmental impact associated with pesticide use changes with GM crops in Colombia examines changes in the volume (quantity) of pesticide applied and the use of the Environmental Impact Quotient (EIQ) indicator. 3 The EIQ indicator provides an improved assessment of the impact of GM crops on the environment when compared to only examining changes in the volume of active ingredient applied, because it draws on some of the key toxicity and environmental exposure data related to individual products, as applicable to impacts on farm workers, consumers and ecology. The author acknowledges that the EIQ is only a hazard indicator and has important weaknesses (see for example, Peterson R and Schleier J 4 and Kniss A and Coburn C 5 ). Nevertheless, since assessing the full environmental impact of pesticide use changes with different production systems is complex and requires substantial collection of (site-specific) data (eg, on ground water levels, soil structure), it is not surprising that no such depth of data is available to provide a full impact assessment associated with pesticide use change with GM crops in Colombia. Therefore, despite the acknowledged weaknesses of the EIQ, it has been used in this paper because it is a superior indicator to only using the amount of pesticide active ingredient applied. Readers requiring further details relating to the methodology should refer to the two Brookes and Barfoot 1,2 references cited above.
Table 1. Areas planted to GM crops in Colombia (hectares), 2013-2018 (a, b)

            2013      2014      2015      2016     2017    2018
Corn      75,094    89,048    85,251   100,109   86,030  76,014
Cotton    26,913    29,838    15,868     9,814    9,075  12,103
Total    102,007   118,886   101,119   109,923   95,105  88,117

Data source: ICA - Colombian Agricultural Institute
(a) The GM crop areas in Colombia in 2018 were equivalent to about 90% and 18% respectively of the total cotton and maize crops.
(b) The recent decrease in the areas planted to GM crops (in particular cotton) reflects the decrease in the total area planted to these crops. Overall planting areas are largely influenced by the price received and the profitability of the crops relative to alternative crops and farming activities. This has fallen, especially for cotton, because of decreasing international market prices for cotton and a reduction in the level of domestic support for growers. In terms of the share of total crop plantings accounted for by GM-traited seed, these have remained at over 80% of the total cotton crop since 2012 and between 40% and 45% of the total 'non-subsistence' maize crop (or about 20%-22% of the total maize crop) since 2013.

The Baseline - Nature of Production, Pests and Conventional Methods of Control

Cotton

Cotton is grown in two distinct regions. The coastal (Caribbean) region accounts for 55%-60% of total plantings, of which the department of Córdoba accounts for the majority of production. Here the cotton is predominately rainfed. The other main growing region is the interior region, where the department of Tolima dominates production. The majority of cotton production in this region is irrigated. At the time of the introduction of the technology, the average area planted to cotton was 7-9 ha per producer, with average crop size being higher in the interior growing region. The total number of farms growing cotton in the early years of adoption was between 6,000 and 7,000. In 2018, the average size of cotton crop was about 30 hectares per grower, with a total of about 500-600 growers (source: Conalgodon). There are many cotton pests. The main pests targeted by the technology are budworms (Heliothis virescens), earworms (Helicoverpa zeae), pink bollworm (Pectinophora gossypiella), false pink bollworm (Sacadodes pyralis), cotton worm (Alabama argillacea) and cotton leafworm (Spodoptera sp). Other pests not controlled by the IR technology are boll weevil (Picudo, Anthonomus grandis) and white fly (Bemisia tabaci). It should also be noted that the original Bollgard I technology did not control cotton leafworm. Traditionally, in conventional cotton, the primary form of pest control was through the use of insecticides, with an average of about 11 applications being made during a growing season (sources: AgroBio personal communications, Cleres, 6 Brookes and Barfoot, 1,2 Zambrano et al, 7 Kleffmann (various years)). Within this, six of the applications were typically made against the pests controlled by GM IR technology. The remaining 4-6 insecticide applications were/are mostly for the control of the boll weevil pest, which has been, and remains, the main problem pest for cotton. Quarantine measures such as requiring crops to be planted in different seasons by region are also important for the control of boll weevil.
In the interior region - Cundinamarca, Huila, Tolima, Vichada and Valle del Cauca (which accounted for about 48% of total production in 2018) - cotton planting is restricted to the first season (planted in February or March and harvested July-September), and in the Caribbean/Costa region - Cesar, Guajira, Sucre, Córdoba, Bolivar and Antioquia (which accounted for 52% of total production in 2018) - it is restricted to the second season (planted in July-October and harvested January-March). Other quarantine measures include mandatory destruction of harvest residues and the use of pheromone traps both before and during the crop growing season (sources: as above plus Salazar J et al 8 ). In relation to weed control in conventional cotton, this has traditionally been a combination of herbicide use (commonly a pre-emergent application of glyphosate plus two applications of diuron) and two manual/mechanical weeding cycles (source: AgroBio members personal communications and Kleffmann).

Maize

In 2018, the total maize crop in Colombia was about 400,000 ha, of which 65% was yellow maize, mostly used for animal feed, and 36% was white maize, for human consumption (source: Federación Nacional de Cultivadores de Cereales, Leguminosas y Soya - Fenalce). Fenalce statistics classify production into two distinct types, with 'tecnificado' production, where farmers use hybrid seed and the crop is sold commercially, accounting for 54% of the area planted, and the balance of 46% being 'tradicional' production, where subsistence farming for own-household/domestic consumption is practiced and farmers typically do not use hybrid seed. The crop is grown in most regions of Colombia, although the main departments where commercial maize is grown are Meta Altillanura, Córdoba, Tolima and Valle, which accounted for 18%, 16%, 13% and 7% respectively of total plantings in 2018. The GM maize is grown by 'tecnificado' (commercial) growers only, and hence the approximate share of this crop using GM technology in recent years has been within the range of 36% to 48% of the total (commercial) crop (36% in 2018). The main pests of maize in Colombia are Fall Armyworm (Spodoptera), Corn borer (Diatraea), corn earworm (Helicoverpa) and sucking pests (Dalbulus maidis). GM IR technology in maize targets the first three of these pests. Corn borers have traditionally been the main insect pest with the widest incidence, with lower levels of incidence of Fall Armyworm and cutworms (source: AgroBio members personal communications). As indicated in Brookes G, 9 with all pests, the pest pressure incidence and levels of infestation typically vary by region and year, being influenced by local climatic conditions, the extent to which conventional forms of control (notably the application of insecticides) are used and planting times (early planted crops are usually better able to withstand attacks compared to crops planted later in the year). This means that the negative impact on crop yields can vary widely, from zero in years or seasons of no pest pressure to in excess of 50% when pest pressures are high and insecticides are not used (see for example, Brookes 9 and Brookes G and Barfoot P 1 ).
Given the widespread and regular incidence of pest pressure across all growing regions, almost all (commercial) growers traditionally used insecticides for control of the main maize pests, with crops typically subject to 1-2 applications for the control of corn boring, armyworm and cutworm pests, and 1-2 applications for the control of sucking pests (plus seed treatments) (sources: AgroBio member personal communications and Kleffmann pesticide usage statistics). Since GM IR maize technology became available to farmers, the highest concentrations of adopters have, not surprisingly, been in Tolima, Valle del Cauca, Córdoba and Meta (Fig. 1), which are also the main maize-growing regions (see above). Weed control in conventional maize has been mostly based on the use of herbicides: the use of active ingredients like pendimethalin, acetochlor, atrazine and glyphosate/glufosinate pre-emergence, possibly followed by hand weeding (source: AgroBio member personal communications).

Yield Impacts

In assessing the performance of the GM technology in the two crops of cotton and maize in Colombia, it is important to recognize that there are a number of factors that have impacted, and continue to impact, on its performance. Pest pressure: the level of crop and yield damage caused by pests (both those that the GM IR technology targets and other pests) varies by location, year, climatic factors, timing of planting and whether insecticides are used. This means that performance identified in the early years of adoption may not necessarily be representative of performance in later years. For example, the second generation of GM insect resistance genes in ('Bollgard II') cotton provided control of more pests than the first generation of GM ('Bollgard I') cotton. Also, the underlying performance of seed varieties containing GM traits is subject to change as new, better-performing seed varieties are developed. The influence of these factors can be seen in the findings of some of the early studies into the impact of using GM technology in Colombia (summarized in Table 2). Zambrano et al 7 examined the early adoption of IR cotton. The study was undertaken in 2007-08 and interviewed 364 farmers, mostly in the two most important cotton producing departments of Córdoba and Tolima, plus Sucre, which has a relatively small cotton growing area but a significant number of small-scale producers. The survey found that farmers using the IR cotton had higher yields than those using conventional varieties but higher costs of production per hectare. In terms of costs per tonne of cotton fiber, these were, however, lower for the IR cotton growers. The main benefit came from higher yields via enhanced protection against pest attack rather than (expected) reductions in the use of insecticides. The study found that IR cotton growers in two out of the three departments surveyed spent more on insecticides than farmers using conventional varieties. The continued significant expenditure on insecticides reflected the need to control pests that the IR cotton technology did (does) not control (notably boll weevil) and the fact that most IR cotton adopters at that time were larger farms with more resources and access to inputs and machinery than their conventional counterparts (eg, Tolima was the most economically advanced cotton growing region, where the vast majority of farmers had access to irrigation and machinery). The highest levels of adoption were also found in Tolima, which was the region that had experienced the highest incidence of pest pressure for the pests controlled by the IR technology.
This contrasted with the coastal region of Córdoba and Sucre, where production was mostly rainfed, farmers had less access to machinery, and pests not controlled by the IR technology were the primary pests (especially boll weevil and white fly but also, at that time, armyworm, a pest that was latterly controlled by the second generation of IR cotton available in varieties in later years). The yield differences between farmers using IR and conventional cotton varied considerably (higher yields for IR cotton of +9.2% in Córdoba, +17.6% in Sucre and +75% in Tolima). It is important to recognize that only some of these yield differences were attributable to the IR technology alone; other important factors were access to adequate resources and inputs, quality of land, access to irrigation, the incidence of pests not controlled by the IR technology and the efficacy of conventional control methods of these pests, and the underlying performance of the seed variety used. When the authors adjusted their yield analysis to take account of some of these factors, essentially by comparing the performance of IR cotton and conventional cotton grown on the same farm (in Tolima), where farmers were essentially using varieties of similar underlying yield performance, the difference in yield performance in favor of GM IR cotton was +35%. Weather also influenced the results in the coastal region, with, for example, drought during the growing season followed by unusually heavy rains in Sucre affecting yields. At the time, the authors concluded that overall adoption of IR cotton was showing clear yield and income benefits in Tolima but was less economically advantageous to farmers in the coastal region (Córdoba and Sucre) because of a combination of lower levels of pest pressure for the pests controlled by the IR technology and factors unrelated to the technology, such as less access to inputs, credit and machinery and weather extremes during the season the study was undertaken.

Fonseca L and Zambrano P 10 extended some of the earlier impact analysis of IR cotton by examining the yield impact specific to some of the new (in 2008-09) varieties containing both IR and HT technology, based on data from the national cotton association Conalgodon. At this time, two of the varieties containing both IR and HT (tolerance to glyphosate), DP455BRR and Deltaopal RR, were available. A separate small-scale, localized study of GM maize interviewed 20 farmers (10 growing GM maize and 10 conventional growers) and found the yield difference in favor of the GM maize to be +22%, with overall costs of production also being lower by 14% for GM maize growers (the higher cost of the GM seed being more than offset by reduced expenditure on insecticides and herbicides). It also found that the GM maize production system had a lower (beneficial) impact on the environment, as measured by the Environmental Impact Quotient (EIQ), than the conventional maize production system, mainly because of the elimination of insecticide use and a change in the profile of herbicides used (the use of five herbicides being replaced by one, glufosinate, for weed control). As the authors acknowledged, this study related to one small growing region in the first growing season of 2009. It also related to GM seed technology that was tolerant to one herbicide, glufosinate, whereas most of the latterly adopted stacked GM maize was tolerant to glyphosate only, or to both glyphosate and glufosinate. Ávila Méndez K et al 12 and Reyes G et al 13 examined the environmental impact of using both GM cotton and maize.
The analysis relating to GM maize essentially summarized the findings of the 2009 analysis referred to above, whilst the cotton analysis was based on interviewing 20 cotton farmers (15 growing some of the then first varieties of stacked GM cotton (the stacked traits of Bollgard I and glyphosate tolerance) and 5 growing conventional varieties) in the municipality of El Espinal, in the department of Tolima, in the first half of 2009. The paper concluded that the GM varieties delivered higher yields of about +14%. In relation to the environmental impact of insecticide use, as measured by the EIQ indicator, this was worse for GM cotton than the environmental impact associated with insecticide use on conventional cotton. The environmental impact of herbicide use on GM cotton, as measured by the EIQ indicator, was however better than the environmental impact associated with herbicide use on conventional cotton. The study was, however, very small-scale and localized, and hence not representative of cotton production across all regions. It is also likely that differences in the nature of the farming practices used by the early GM technology adopters compared to conventional growers had an important influence on the amount of insecticides used. In addition, the early stacked varieties of GM cotton introduced in the first season of 2009 experienced poor performance (eg, poor boll formation), resulting in inferior yields (see for example, Fonseca L and Zambrano P 10 ). The 2008 season in this region was also very wet, with little sunshine, and this also affected the performance of these new varieties.

Zambrano P et al 14 undertook analysis of experience in using GM cotton in 2010 through interviews with 34 farmers in El Espinal (Tolima) and 45 farmers in Cereté (Córdoba). Whilst the study focused on the role of women in cotton production, it collected some data relating to the relative performance of the two types of cotton production. The analysis found a range of yield impacts from -23% to +21% across the two localities for the performance of the GM cotton growers compared to conventional growers. Later studies are limited to a 2017 study (based on data collected in 2013-2015) by Cleres for AgroBio and an update in 2019 (based on data collected in 2018). 6,15 Both studies were based on interviews with a combination of farmers growing conventional and GM crops, plus interviews with extension advisors, industry (seed company) advisors, representatives of farmer associations and public sector researchers. The 2017 Cleres data identified average yield gains for stacked-traited maize and cotton of +16% and +24.7% respectively. 6 The 2019 update found the average yield gains to be +8% for stacked-traited maize and +72% for stacked-traited cotton. The 2019 study was, however, based on very small samples of farms and was much less representative of production systems across the crops in different regions of the country than the earlier study. 15 It should be noted that these latter studies were made against a background of different (second generation) GM crop technology availability and significantly higher adoption levels than at the time of the early studies. The performance of second-generation GM IR traits in both cotton and maize has been better and more consistent than that of the first generation of GM IR-traited seed.
Thus, the levels of pest control of 'Bollgard II' cotton technology, which has two or more modes of action for pest control relative to the single mode of action in the early 'Bollgard I' technology, were better (eg, improved control of pests later in the growing season and control of the Fall Armyworm). Pests such as pink bollworm and false pink bollworm (commonly known as the Colombian and Indian pink bollworms) and Trichoplusia sp have an almost zero incidence in crops with GM IR traits, while Spodoptera and Heliothis occur at levels that usually either do not require insecticide applications or require only one or (possibly) two applications for control throughout the production cycle. In contrast, in conventional cotton crops between four and six insecticide applications are commonly required for the control of these pests. On the other hand, due to the decrease in the number of insecticide applications, some secondary pests have assumed greater relative importance, especially sucking pests. For example, the white fly pest is now considered to be the second most significant pest after boll weevil. In relation to maize, all of the second generation of GM IR maize provided better levels of control of three of the main pests of the crop (Fall Armyworm, Corn Borer and Corn Earworm) compared to some of the first-generation GM IR seed that targeted control only of Corn Borer pests. As a result, the average number of insecticide applications has fallen from 4-5 with conventional varieties to 1-2 for varieties containing second-generation IR traits. In both crops, weed control systems have changed from a combination of mostly pre-emergent herbicides and hand/mechanical weeding (typically 3-4 applications/weeding cycles) to the use of a single pre-emergent application of herbicide followed by a post-emergent 'over the top' application of glyphosate or glufosinate. In relation to adoption levels, in 2008-09 GM IR cotton adoption was 40%-50% of the total crop, with GM HT cotton in its first year of adoption. In 2018, GM (stacked) cotton seed accounted for about 90% of the total crop. GM maize was also in its early years of availability in 2008-09 (about 20,000 ha using the technology). By 2018, the area of maize planted to seed containing GM technology had increased to between 70,000 ha and 90,000 ha annually, equal to about 35%-40% of the commercial 'tecnificado' crop. The analysis presented in the section below on farm income and production draws on the various research referred to above and summarized in Table 2. Additional information is provided in Appendix 1. In terms of average yield gains over the respective periods of adoption for GM cotton and maize, these were +30.2% for (IR/stacked) cotton and +17.4% for (IR/stacked) maize.

Impacts on Farm Income and Crop Production

At the farm level, GM cotton and maize seed technology has provided Colombian farmers with higher yields, mostly from better pest control (relative to the pest control obtained from conventional insecticide technology). In some cases, the technology has also provided higher yields via improved weed control. The technology has also provided savings in expenditure on insecticides and weed control for many farmers. In cotton, the farm level studies identified average reductions in annual expenditure on insecticides of between US $41/ha and US $63/ha (annual average saving of about US $55/ha), and in maize insecticide expenditure decreased by between US $42/ha and US $55/ha (annual average saving US $45/ha; see Appendix 1; sources as Table 2).
For weed control, the studies identified average reductions in annual cotton weed control costs of between US $34/ha and US $105/ha (annual average saving US $92/ha), and in maize annual weed control costs fell by between US $32/ha and US $44/ha (annual average saving of US $37/ha; sources as Table 2). The combination of these impacts has increased the incomes of farmers using the technology by US $301.7 million over the fifteen-year period 2003-2018. This is the equivalent of an average farm income gain of US $294/ha per year for stacked maize and US $358/ha for stacked cotton. In 2018, the income gain was US $19 million (Table 3). The largest share of the farm income benefits has been in maize, at US $188.1 million (62%), with US $113.6 million in cotton. Examining the cost farmers pay for accessing GM seed technology, the average additional cost of seed (seed premium) relative to conventional seed over the period of adoption was US $79/ha for maize (US $65/ha in 2018 for stacked maize) and US $171/ha for cotton (US $107/ha in 2018 for stacked cotton). These cost of technology values are equal to 19% (maize) and 32% (cotton) of the total (gross) technology gains (before deduction of the additional cost of the technology payable to the seed supply chain; the cost of the technology accrues to the seed supply chain, including sellers of seed to farmers, seed multipliers, plant breeders, distributors and the GM technology providers). In terms of investment, over the 15 years of adoption this means that for each extra dollar invested in GM cotton crop seeds in Colombia, farmers gained an average of US $3.09, and over the 12 years of adoption of GM maize, for each extra dollar invested in GM maize crop seeds in Colombia, farmers gained an average of US $5.25. Based on the yield gains referred to in Table 2, the GM IR technology has added 0.63 million tonnes of maize and cotton lint to production since 2002 (Table 4). This extra production contributes to reducing pressure on farmers to use additional land for crop production. To illustrate, if GM maize technology had not been available to farmers in 2018, maintaining production levels for this year using conventional technology would have required the planting of an additional 11,240 hectares of agricultural land to maize. This equates to about 5.2% of the total commercial area planted to maize in 2018.

Impacts on the Environment Associated with Insecticide and Herbicide Use and Greenhouse Gas Emissions

GM IR maize and cotton traits have contributed to a reduction in the environmental impact associated with insecticide use on a significant proportion of the areas devoted to these crops. Since 2003, the use of insecticides on the GM IR cotton area was reduced by 176,500 kg of active ingredient (a 25% reduction), and the environmental impact associated with insecticide use on this crop, as measured by the EIQ indicator, fell by 27% (Table 5). The use of herbicides on cotton has fallen by about 45,000 kg (-5%), with the associated environmental impact, as measured by the EIQ indicator, also falling by 5% since this technology was first used in 2007. The use of insecticides on the GM IR maize area has decreased by 279,400 kg of active ingredient (a 66% reduction), and the environmental impact associated with insecticide use on this crop, as measured by the EIQ indicator, also fell. The scope for impacts on greenhouse gas emissions associated with GM crops in Colombia has come from one principal source: fuel savings associated with less frequent insecticide and herbicide applications.
The use of GM IR cotton and maize has resulted in total savings equal to about 8.76 million kg of carbon dioxide not released into the atmosphere, arising from reduced fuel use of 3.28 million liters. This is equivalent to taking 5,410 cars off the road for a year. To provide context, this represents a very small, positive contribution to greenhouse gas reduction when compared to the 5.4 million cars registered in Colombia (2017; statistical source: Ministry of Transportation).

Other Impacts

The various pests targeted by the IR traits in maize damage crops, making them susceptible to fungal damage and the development/build-up of fumonisins (a group of cancer-causing mycotoxins produced by a number of fusarium mold species) in the grain. This increases the possibility of grain failing to meet the maximum permitted thresholds for the presence of these toxins set by buyers in the food and animal feed sectors. A number of studies have identified that the use of GM IR maize has, through a significant reduction in pest damage and the levels of fumonisins found in grains, led to an improvement in grain quality (eg, Folcher L et al, 16 Bakan et al 17 ). This is then likely to result in less maize being rejected by users in both the food and feed sectors in any country where this technology is used. The author is not aware of any publicly available data that has examined this issue in Colombia (or elsewhere). The adoption of GM IR maize has also provided a number of other benefits, identified in analyses such as Brookes. 18 These include improved production risk management, with the seed technology being seen by many farmers as a form of insurance against corn-boring pest damage. Farmers have also been able to reduce the amount of time spent monitoring levels of pest pressure, and the technology has made harvesting easier because of fewer problems with fallen crops. Whilst there is no data available on the time savings derived from these changes, the gains are likely to be limited (eg, savings associated with reduced insecticide application, where applicable, have typically been only 2-4 treatments). The evidence presented above in this paper has identified largely positive impacts associated with the use of GM technology in both crops over the cumulative periods of adoption for GM cotton and maize. However, it is important to recognize that in the early years of adoption of the IR technology, and when stacked-traited seed became available, especially in cotton, difficulties and negative impacts arose for some farmers. These were due to a combination of factors, such as the technology not being available in leading varieties suited to all local growing conditions, which resulted in poor performance relative to conventional varieties for some farmers. In addition, the knowledge transfer (advice provided to farmers) about management of the new varieties (eg, about the most appropriate pest and weed control practices) was considered to be poor/inadequate. The poor performance of some of the first stacked cotton varieties also resulted in legal cases being brought against the main technology provider at that time, and this poor performance may have contributed to the adoption level of GM-traited seed in the cotton sector subsequently falling as a proportion of the total crop in the next year.
The subsequent increase in adoption levels, especially in cotton, to over 80% of total plantings suggests these early difficulties were largely overcome, although the total area planted to cotton has fallen in recent years. This decline in planting is likely to reflect the poor profitability from growing cotton for some farmers (even when using GM seed technology) relative to alternative agricultural enterprises (eg, maize, rice and livestock enterprises) and difficulties in competing with imported cotton.

Concluding Comments

GM cotton and maize technology has now been used by many farmers in Colombia for up to 15 years and, in 2018, about 88,000 hectares were planted to seeds containing this technology (equal to 90% and 36% respectively of the total cotton and (commercial) maize area in Colombia). The seed technology has helped farmers grow more food and feed (567,000 tonnes of additional maize 2007-2018 and 68,000 tonnes of cotton lint 2003-2018) using fewer resources, and has therefore contributed to reducing the pressure on scarce resources such as land. The extra production and reduced cost of pest and weed control have provided maize farmers with higher incomes equal to an average of US $294/ha and an average return on investment equal to +US $5.25 for each extra US $1 spent on GM maize seed relative to conventional seed. For cotton farmers, the average increase in income has been +US $358/ha, with an average return on investment equal to +US $3.09 for each extra US $1 spent on GM seed relative to conventional seed. This additional farm income from growing GM cotton and maize will have boosted farm household incomes and, assuming some of this additional income has been spent by the households, this additional expenditure will have provided a wider economic boost to the local (rural) and possibly national economy. The technology has also contributed to reducing the environmental impact associated with insecticide and herbicide use and made a small contribution to lowering fossil fuel use for crop spraying. Overall, the impact evidence from the fifteen years of adoption of GM cotton and twelve years of GM maize points to a net positive contribution toward addressing the crop production and environmental challenges facing agriculture in Colombia.

Funding

This work was supported by AgroBio Colombia.

Appendix 1 (notes)

The cost of the technology represents the value paid by farmers to the seed supply chain, including sellers of seed to farmers, seed multipliers, plant breeders, distributors and the GM technology providers. It does not represent the value accruing to the technology providers but to the whole seed supply chain. The cost of the most-used form of the technology (seed containing stacked genes for IR and HT traits) was US $70.76/ha for maize and US $107.30/ha for cotton. Yield gains derive from a reduction of pest damage (IR trait) in maize and a combination of improved pest and weed control in cotton.

Insecticide and herbicide use change: reduction in fuel and water use from less frequent insecticide applications. For insecticide and herbicide applications, the quantity of energy required to apply the pesticide is based on use of a 50-foot boom sprayer, which consumes approximately 0.84 liters/ha. 19 In terms of carbon emissions, each liter of tractor diesel consumed contributes an estimated 2.67 kg of carbon dioxide to the atmosphere (so one less application reduces carbon dioxide emissions by 2.24 kg/ha).
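A quick Python arithmetic check of the fuel and emission figures quoted above (all constants taken from the text):

liters_per_ha = 0.84          # fuel for one boom-sprayer application
kg_co2_per_liter = 2.67       # CO2 emitted per liter of tractor diesel

# One avoided application saves ~2.24 kg CO2 per hectare
print(liters_per_ha * kg_co2_per_liter)   # 2.2428

# Total reported fuel saving of 3.28 million liters:
print(3.28e6 * kg_co2_per_liter)          # ~8.76 million kg CO2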
Base yields used where GM technology delivers a positive yield gain: in order to avoid overstating the positive yield effect of GM technology (where studies have identified such an impact) when applied at a national level, the average (national level) yields used have been adjusted downwards (see example below). Production levels based on these adjusted yields were then cross-checked against total production values based on reported average yields across the total crop.

Sources: insecticide and herbicide use changes based on Brookes and Barfoot, 2 Cleres 6,15 and personal communications with industry staff about more recent/current insecticides and herbicides that are/would need to be used to control pests or for weed control if GM maize and cotton technologies were not used.
|
import sys
import time
import collections
import lua_bind.lua_bind as lua_bind
def task_run_lua_bind(argi):
lua_bind.run(sys.argv[argi:])
def print_help():
print("usage: parser [option] [task] [params]")
print("cmd arguments:")
print("options:")
print(" -help : display help")
print(" -<task> -help: display task help")
print("task:")
print(" -lua_bind : lua bind (use -lua_bind -help to get more infos)")
def main():
start_time = time.time()
if len(sys.argv) <= 1:
print_help()
exit(1)
#######################################
# option
option_index = 0
if sys.argv[1] == "-help":
print_help()
option_index += 1
if (len(sys.argv) <= option_index + 1):
exit(1)
#######################################
# task
tasks = collections.OrderedDict()
tasks["-lua_bind"] = { "run":task_run_lua_bind }
    for i in range(1, len(sys.argv)):
        key = sys.argv[i]
        if key not in tasks:
            continue
        # Dispatch the matching task, passing the index of its argv entry
        tasks[key]["run"](i)
if __name__ == '__main__':
main()
|
Baicalein Abrogates Reactive Oxygen Species (ROS)-mediated Mitochondrial Dysfunction during Experimental Pulmonary Carcinogenesis In Vivo. Our current study aimed to evaluate the chemotherapeutic efficacy of baicalein (BE) in Swiss albino mice exposed to benzo(a)pyrene, for its ability to alleviate mitochondrial dysfunction and systolic failure. Here, we report that oral administration of B(a)P (50 mg/kg body weight)-induced pulmonary genotoxicity in mice was assessed in terms of elevated reactive oxygen species (ROS) generation and DNA damage in lung mitochondria. MDA-DNA adducts were formed in immunohistochemical analysis, which confirmed nuclear DNA damage. mRNA expression levels of voltage-dependent anion channel (VDAC) and adenine nucleotide translocase (ANT), studied by RT-PCR analysis, were found to be significantly decreased, with a marked increase in membrane permeability transition pore (MPTP) opening. This was accompanied by upregulated Bcl-xL and downregulated Bid, Bim and Cyt-c proteins, studied by immunoblot, in B(a)P-induced lung cancer-bearing animals. Administration of BE (12 mg/kg body weight) significantly reversed all the above deleterious changes. Moreover, assessment of the mitochondrial enzyme system revealed that BE treatment effectively counteracts B(a)P-induced downregulation of the levels/activities of isocitrate dehydrogenase, alpha-ketoglutarate dehydrogenase, succinate dehydrogenase, malate dehydrogenase, NADH dehydrogenase and cytochrome c oxidase, and of ATP levels. Restoration of mitochondria from oxidative damage was further confirmed by transmission electron microscopic examination. Further analysis of lipid peroxidation, superoxide dismutase, catalase, glutathione peroxidase, glutathione-S-transferase, glutathione reductase, reduced glutathione, vitamin E and vitamin C in lung mitochondria was carried out to substantiate the antioxidant effect of BE. The overall data conclude that the chemotherapeutic efficacy of BE might reflect a strong mitochondria-protective and restorative capacity at the subcellular level against lung carcinogenesis in Swiss albino mice.
|
package com.omnifix.demo;
import static org.assertj.core.api.Assertions.assertThat;
import static org.assertj.core.api.Assertions.assertThatThrownBy;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;
import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.TestInfo;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.Arguments;
import org.junit.jupiter.params.provider.MethodSource;
import reactor.core.publisher.Flux;
import reactor.test.StepVerifier;
import reactor.test.StepVerifierOptions;
class ReactiveTenPinBowlingTests {
private ReactiveBowling reactiveBowling;
@BeforeAll
static void setUpBeforeClass() throws Exception {}
@AfterAll
static void tearDownAfterClass() throws Exception {}
@BeforeEach
void setUp() throws Exception {
reactiveBowling = new ReactiveBowling();
}
@AfterEach
void tearDown() throws Exception {}
@Test
void canRecordOneRound(TestInfo testInfo) {
int firstRoll = 5;
int secondRoll = 3;
Flux<Integer> pinsStream = Flux.fromIterable(List.of(firstRoll, secondRoll));
StepVerifier.create(
pinsStream, StepVerifierOptions.create().scenarioName(testInfo.getDisplayName()))
.expectNext(firstRoll)
.as("Check first roll")
.as("Check score after first roll")
.expectNext(secondRoll)
.as("Check second roll")
.expectComplete()
.verify();
assertThat(reactiveBowling.play(pinsStream).collect(Collectors.toList()).block())
.containsExactly(5, 8); // Expected score sequence
}
@Test
void cannotExceedTotalGameFrames() {
Flux<Integer> pinsStream =
Flux.fromIterable(List.of(10, 10, 10, 10, 10, 10, 10, 10, 10, 1, 2, 3));
assertThatThrownBy(
() -> {
reactiveBowling.play(pinsStream).blockLast();
})
.isInstanceOf(IllegalStateException.class)
.hasMessageContaining("End of game!");
}
@ParameterizedTest(name = "{0}")
@MethodSource("provideParameters")
void multipleScenariosTests(String scenarioName, List<Integer> pins, Integer score) {
Flux<Integer> pinsStream = Flux.fromIterable(pins);
assertThat(reactiveBowling.play(pinsStream).blockLast())
.isEqualTo(score); // Expected score sequence
}
private static Stream<Arguments> provideParameters() {
return Stream.of(
Arguments.of("One frame", List.of(5, 3), 8),
Arguments.of("Three STRIKES followed by a frame", List.of(10, 10, 10, 2, 3), 72),
Arguments.of(
"9 STRIKES followed by a frame",
List.of(10, 10, 10, 10, 10, 10, 10, 10, 10, 2, 6),
258),
Arguments.of(
"Max score - ALL STRIKES",
List.of(10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10),
300), // All STRIKEs
Arguments.of(
"Altenate STRIKES start with STRIKE",
List.of(10, 1, 2, 10, 1, 2, 10, 1, 2, 10, 1, 2, 10, 1, 2),
80), // Alternate STRIKEs
Arguments.of(
"Alternate STRIKES start with frame",
List.of(1, 2, 10, 1, 2, 10, 1, 2, 10, 1, 2, 10, 1, 2, 10, 1, 2),
80), // Alternate STRIKEs, start without a STRIKE
Arguments.of(
"All STRIKEs, finish with SPARE from single roll",
List.of(10, 10, 10, 10, 10, 10, 10, 10, 10, 0, 10, 7),
267),
Arguments.of(
"All STRIKEs, finish with SPARE from two rolls",
List.of(10, 10, 10, 10, 10, 10, 10, 10, 10, 2, 8, 4),
266),
Arguments.of("SPARE from two rolls", List.of(4, 6, 7), 24),
Arguments.of("SPARE from second roll only", List.of(0, 10, 5), 20),
Arguments.of("STRIKE followed by a frame", List.of(10, 3, 2), 20));
}
}
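For reference, the expected totals in provideParameters() follow standard ten-pin scoring: a strike scores 10 plus the next two rolls, a spare scores 10 plus the next roll, and an open frame scores the sum of its two rolls (e.g. 10, 10, 10, 2, 3 gives 30 + 22 + 15 + 5 = 72). The sketch below is a minimal, hypothetical plain (non-reactive) scorer used only to sanity-check those totals; PlainBowlingScore is not the ReactiveBowling class under test, and unlike it, this sketch silently ignores rolls beyond the tenth frame instead of raising "End of game!".

// Hedged sketch: a plain ten-pin scorer for sanity-checking the expected totals.
// The class name and lenient extra-roll handling are assumptions for illustration.
import java.util.List;

class PlainBowlingScore {

    static int score(List<Integer> rolls) {
        int total = 0;
        int i = 0;
        for (int frame = 0; frame < 10 && i < rolls.size(); frame++) {
            if (rolls.get(i) == 10) {
                // Strike: 10 plus the next two rolls
                total += 10 + rollAt(rolls, i + 1) + rollAt(rolls, i + 2);
                i += 1;
            } else if (rollAt(rolls, i) + rollAt(rolls, i + 1) == 10) {
                // Spare: 10 plus the next roll
                total += 10 + rollAt(rolls, i + 2);
                i += 2;
            } else {
                // Open frame: sum of both rolls
                total += rollAt(rolls, i) + rollAt(rolls, i + 1);
                i += 2;
            }
        }
        return total;
    }

    private static int rollAt(List<Integer> rolls, int index) {
        return index < rolls.size() ? rolls.get(index) : 0;
    }

    public static void main(String[] args) {
        // Three strikes followed by a frame: 30 + 22 + 15 + 5 = 72
        System.out.println(score(List.of(10, 10, 10, 2, 3)));
        // Perfect game: ten strike frames of 30 each = 300
        System.out.println(score(List.of(10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10)));
    }
}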
|
Histochemical study of superoxide dismutase in the ovary of the rat during the oestrous cycle. Superoxide dismutase, an enzyme which causes the dismutation of superoxide free radical anions to generate hydrogen peroxide, has been localized in the rat ovary. The negative-staining method was used to provide photo-induced reduction of nitro-blue tetrazolium in cryostat sections of rat ovaries for histochemical localization of superoxide dismutase. Superoxide dismutase was found in growing follicles, the membrana granulosa of Graafian follicles, ovulated follicles and blood vessels. It is suggested that superoxide dismutase may play a role in regulating follicular development, ovulation and luteal functions.
|
import Lyric from 'lyric-parser';
const lyricStr =
'<KEY>';
const lyric = new Lyric(lyricStr, handler);
function handler(params: { lineNum: number; txt: string }) {
  // this handler is called whenever the current line number changes
}
lyric.play(0);
lyric.stop();
lyric.seek(0);
lyric.togglePlay();
|
def _get_last_line(self, node):
    """Return the last source line number spanned by the given AST node."""
    v = _LineNumberVisitor(self._ast)
    v.visit(node)
    return v.line
|
/** Callback that computes a new trajectory when the dragger is rotated */
class MyDraggerCallback : public osgManipulator::DraggerCallback
{
public:
MyDraggerCallback(const TrajectoryFollower* tf, Trajectory* trajOut, const FrameTransform* modelXform)
: _tf(tf), _trajOut(trajOut), _modelXform(modelXform)
{}
MyDraggerCallback(const MyDraggerCallback& org, const osg::CopyOp& copyop = osg::CopyOp::SHALLOW_COPY)
: osg::Object(org, copyop) {}
virtual bool receive(const osgManipulator::Rotate3DCommand& cmd)
{
if(!_tf.valid()) return false;
double lastTime = _tf->getLastTime();
Trajectory* lastTraj = _tf->getLastTrajectory();
if(lastTraj == nullptr) return false;
lastTraj->lockData();
const Trajectory::DataSource* posDataSource = _tf->getDataSource();
unsigned int numPoints = lastTraj->getNumPoints(posDataSource);
int index;
int val = lastTraj->getTimeIndex(lastTime, index);
if(val < 0)
{
OSG_NOTICE << "MyDraggerCallback ERROR: Time out of range" << std::endl;
return false;
}
else if(index == numPoints)
{
OSG_NOTICE << "MyDraggerCallback ERROR: Time at end of Trajectory" << std::endl;
return false;
}
// Linearly interpolate the position at lastTime between the bracketing points
osg::Vec3d pos, pos1, pos2;
lastTraj->getPosition(index , pos1[0], pos1[1], pos1[2]);
lastTraj->getPosition(index+1, pos2[0], pos2[1], pos2[2]);
const Trajectory::DataArray& times = lastTraj->getTimeList();
double frac = (lastTime - times[index])/(times[index+1] - times[index]);
pos = pos1 + (pos2 - pos1)*frac;
// Interpolate the velocity, stored in optional data slot 0
osg::Vec3d vel, vel1, vel2;
lastTraj->getOptional(index , 0, vel1[0], vel1[1], vel1[2]);
lastTraj->getOptional(index+1, 0, vel2[0], vel2[1], vel2[2]);
vel = vel1 + (vel2 - vel1)*frac;
// Slerp the attitude to get the spacecraft-to-world rotation at lastTime
osg::Quat quatSCToWorld, att1, att2;
lastTraj->getAttitude(index , att1[0], att1[1], att1[2], att1[3]);
lastTraj->getAttitude(index+1, att2[0], att2[1], att2[2], att2[3]);
quatSCToWorld.slerp(frac, att1, att2);
lastTraj->unlockData();
// Rotate the spacecraft-frame velocity by the dragger's attitude, then
// express the resulting delta-v in the inertial frame
osg::Quat quatDragger;
_modelXform->getAttitude(quatDragger);
osg::Vec3d velSC = quatSCToWorld.inverse() * vel;
osg::Vec3d velDragger = quatDragger * velSC;
osg::Vec3d dvSC = velDragger - velSC;
osg::Vec3d dvInertial = quatSCToWorld * dvSC;
vel += dvInertial;
// Convert the perturbed Cartesian state to Keplerian elements and rebuild the trajectory
double ta, a, e, i, w, RAAN;
CartToKep(pos, vel, ta, a, e, i, w, RAAN);
fillTrajectory(a, e, i, w, RAAN, _trajOut.get());
showTrajData(a, e, i, w, RAAN);
return false;
}
private:
osg::observer_ptr<const TrajectoryFollower> _tf;
osg::observer_ptr<Trajectory> _trajOut;
osg::observer_ptr<const FrameTransform> _modelXform;
};
|
Isolated one-sided cerebellar agenesis following an attempted medical termination of pregnancy A 35-year-old para 2+0 presented at the outpatient clinic requesting termination of an 8-week pregnancy. A pelvic ultrasound scan confirmed a single viable intrauterine gestation of 8 weeks and 5 days' duration. The medical history included investigation for secondary infertility, cervical cautery for a chronic vaginal discharge, a request for sterilisation 9 years earlier and an amitriptyline overdose 2 years earlier. Following a full discussion and counselling about options for termination of pregnancy, she chose to undergo a medical termination of pregnancy. The medical termination was carried out at 10 weeks' gestation with 200 mg mifepristone and 1 mg cervagem. Pain relief was provided with 10 mg morphine and two tablets of co-codamol. She had bleeding but failed to pass products of conception. She was discharged home to be reviewed in the clinic a few days later with an ultrasound scan, but defaulted her follow-up appointment. She was subsequently seen at her GP surgery 5 weeks later, when she was already 15 weeks and 4 days pregnant. A pregnancy test was positive and she indicated that she was keen to continue with the pregnancy, though she had concerns about fetal abnormality secondary to the drugs that had been administered for the attempted termination a few weeks earlier. A repeat ultrasound scan at 17 weeks' gestation showed right-sided cerebellar agenesis, and this was confirmed on a detailed anomaly scan. Following a full discussion about the diagnosis and management options, she consented to undergo a medical termination of pregnancy at 23 weeks and 3 days. The post mortem confirmed agenesis of the right cerebellar hemisphere.
|
// sixpack-java/src/main/java/com/seatgeek/sixpack/log/HttpLoggingInterceptorLoggerAdapter.java
package com.seatgeek.sixpack.log;
import com.seatgeek.sixpack.Sixpack;
import okhttp3.logging.HttpLoggingInterceptor;
public class HttpLoggingInterceptorLoggerAdapter implements HttpLoggingInterceptor.Logger {
private final Logger logger;
public HttpLoggingInterceptorLoggerAdapter(Logger logger) {
this.logger = logger;
}
@Override
public void log(String message) {
logger.log(Sixpack.SIXPACK_LOG_TAG, message);
}
}
|
// Copyright 2020 ZUP IT SERVICOS EM TECNOLOGIA E INOVACAO SA
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package repository
import (
"github.com/ZupIT/horusec/development-kit/pkg/databases/relational"
tokenRepository "github.com/ZupIT/horusec/development-kit/pkg/databases/relational/repository/token"
"github.com/ZupIT/horusec/development-kit/pkg/entities/api"
tokenUseCases "github.com/ZupIT/horusec/development-kit/pkg/usecases/tokens"
"github.com/google/uuid"
)
type IController interface {
CreateTokenRepository(*api.Token) (string, error)
DeleteTokenRepository(tokenID uuid.UUID) error
GetAllTokenRepository(repositoryID uuid.UUID) (*[]api.Token, error)
}
type Controller struct {
tokenRepository tokenRepository.IRepository
tokenUseCases tokenUseCases.ITokenUseCases
}
func NewController(postgresRead relational.InterfaceRead, postgresWrite relational.InterfaceWrite) IController {
return &Controller{
tokenRepository: tokenRepository.NewTokenRepository(postgresRead, postgresWrite),
tokenUseCases: tokenUseCases.NewTokenUseCases(),
}
}
func (c *Controller) CreateTokenRepository(token *api.Token) (key string, err error) {
token.SetKey(<KEY>())
_, err = c.tokenRepository.Create(token)
if err != nil {
return "", err
}
return token.GetKey().String(), nil
}
func (c *Controller) DeleteTokenRepository(tokenID uuid.UUID) error {
return c.tokenRepository.Delete(tokenID)
}
func (c *Controller) GetAllTokenRepository(repositoryID uuid.UUID) (*[]api.Token, error) {
return c.tokenRepository.GetAllOfRepository(repositoryID)
}
|
Antibacterial activity of epidural infusions. The incidence of epidural abscess following epidural catheterisation appears to be increasing, being recently reported as one in 1000 among surgical patients. This study was designed to investigate the antibacterial activity of various local anaesthetics and additives, used in epidural infusions, against a range of micro-organisms associated with epidural abscess. The aim was to determine which, if any, epidural infusion solution has the greatest antibacterial activity. Bupivacaine, ropivacaine and levobupivacaine crystals were dissolved and added to Mueller-Hinton Agar in concentrations of 0.06%, 0.125%, 0.2%, 0.25%, 0.5% and 1%. Fentanyl, adrenaline and clonidine were also mixed with agar in isolation and in combination with the local anaesthetics. Using a reference agar dilution method, the minimum inhibitory concentrations were determined for a range of bacteria. Bupivacaine showed antibacterial activity against Staphylococcus aureus, Enterococcus faecalis and Escherichia coli with minimum inhibitory concentrations between 0.125% and 0.25%. It did not inhibit the growth of Pseudomonas aeruginosa at any of the concentrations tested. Levobupivacaine and ropivacaine showed no activity against Staphylococcus aureus, Enterococcus faecalis and Pseudomonas aeruginosa, even at the highest concentrations tested, and minimal activity against Escherichia coli (minimum inhibitory concentrations 0.5% and 1% respectively). The presence of fentanyl, adrenaline and clonidine had no additional effect on the antibacterial activity of any of the local anaesthetic agents. The low concentrations of local anaesthetic usually used in epidural infusions have minimal antibacterial activity. While the clinical implications of this in vitro study are not known, consideration should be given to increasing the concentration of bupivacaine in an epidural infusion or to administering a daily bolus of 0.25% bupivacaine to reduce the risk of epidural bacterial growth.
|
def start_manager(self):
raise NotImplementedError('start_manager() method not implemented in '
'TaskManager for %s' % self._rts)
|
/*
* @brief This is the third method of the class. It trains the SVM classifier.
*
* @param The first parameter is a boolean which commands the method to either save or not save the classifier.
* @param The second parameter is the name of the classifier if it is to be saved.
*/
void Train::trainSVM(const bool saveClassifier = false,
const cv::String classifierName = "") {
const int rows = static_cast<int>(gradientList.size());
const int cols = static_cast<int>(std::max(gradientList[0].cols,
gradientList[0].rows));
cv::Mat temp(1, cols, CV_32FC1), trainData(rows, cols, CV_32FC1);
for (std::size_t index = 0; index < gradientList.size(); index++) {
if (gradientList[index].cols == 1) {
transpose(gradientList[index], temp);
temp.copyTo(trainData.row(static_cast<int>(index)));
} else if (gradientList[index].rows == 1) {
gradientList[index].copyTo(trainData.row(static_cast<int>(index)));
}
}
std::cout << "Training SVM Classifier" << std::endl;
classifier->train(trainData, cv::ml::ROW_SAMPLE, labels);
std::cout << "Training Finshed" << std::endl;
if (saveClassifier)
classifier->save(classifierName);
}
|
A hierarchically structured anatase-titania/indium-tin-oxide nanocomposite as an anodic material for lithium-ion batteries Anatase phase titanium dioxide (titania) is a potential anodic material for lithium-ion batteries (LIBs), although a serious drawback is its low electrical conductivity. As an attempt to address this issue, a new hierarchically structured nanotubular anatase-titania/indium-tin-oxide (titania/ITO) composite was fabricated. Nanotubular structured anatase titania with a tube wall thickness of ca. 10 nm was firstly synthesized by employing a natural cellulose substance (ordinary laboratory filter paper) as the structural template, and a thin layer of ITO with a thickness of ca. 15 nm composed of fine ITO crystallites (particle sizes ca. 5 nm) was deposited by means of a solvothermal process. When this composite was utilized as an anodic material for LIBs, it delivered a high initial discharge capacity of 1773.4 mA h g−1 and a stable discharge capacity of 298.6 mA h g−1 after 120 charge/discharge processes cycled at 1 C. The unique hierarchical structure of the composite reduced the diffusion length for both electronic and ionic transport, improved the electrode–electrolyte contact area, and enhanced the accommodation of volume expansion and contraction during the lithium-ion insertion/extraction process. Hence, improved electrochemical performances such as high reversible capacity, good rate performance and high cycling stability were achieved.
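As a back-of-the-envelope check on the capacities quoted above (a standard textbook calculation, not one taken from the paper), the theoretical specific capacity of anatase titania follows from Faraday's law, assuming one electron (one Li+) transferred per TiO2 formula unit (M = 79.87 g mol−1):

\[
  Q_{\mathrm{theo}} = \frac{nF}{3.6\,M}
  = \frac{1 \times 96485\ \mathrm{C\,mol^{-1}}}{3.6 \times 79.87\ \mathrm{g\,mol^{-1}}}
  \approx 335\ \mathrm{mA\,h\,g^{-1}}.
\]

On this assumption, the stable 298.6 mA h g−1 at 1 C approaches the one-electron limit, while the much larger first-cycle discharge capacity is of the kind typically attributed in part to irreversible processes such as electrolyte decomposition and SEI formation.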
|
// steps/elastic/base/elastic.go
package base
import (
"crypto/tls"
"io/ioutil"
"net/http"
elastic "github.com/elastic/go-elasticsearch/v8"
)
const (
	CertFile    = "/tmp/cert-file"
	CertKeyFile = "/tmp/cert-key-file"
)
func createCertificate(certFile, certKey string) (*tls.Certificate, error) {
// Write content to files
err := ioutil.WriteFile(CertFile, []byte(certFile), 0644)
if err != nil {
return nil, err
}
err = ioutil.WriteFile(CertKeyFile, []byte(certKey), 0644)
if err != nil {
return nil, err
}
cert, err := tls.LoadX509KeyPair(CertFile, CertKeyFile)
if err != nil {
return nil, err
}
return &cert, nil
}
type Args struct {
Host string `env:"HOST,required"`
Port string `env:"PORT,required"`
Username string `env:"USERNAME"`
Password string `env:"PASSWORD"`
CertContent string `env:"CERT_CONTENT"`
CertKeyContent string `env:"CERT_KEY_CONTENT"`
UnsafeSSL bool `env:"UNSAFE_SSL"`
}
func CreateClient(args *Args) (*elastic.Client, error) {
address := args.Host + ":" + args.Port
cfg := &elastic.Config{
Addresses: []string{
address,
},
Username: args.Username,
Password: args.Password,
}
httpTransport := &http.Transport{
TLSClientConfig: &tls.Config{
InsecureSkipVerify: args.UnsafeSSL,
},
}
var cert *tls.Certificate
var err error
if !args.UnsafeSSL && args.CertContent != "" && args.CertKeyContent != "" {
cert, err = createCertificate(args.CertContent, args.CertKeyContent)
if err != nil {
return nil, err
}
httpTransport.TLSClientConfig.Certificates = []tls.Certificate{*cert}
}
cfg.Transport = httpTransport
client, err := elastic.NewClient(*cfg)
if err != nil {
return nil, err
}
return client, nil
}
|
def _layout_graph_down(graph):
    """Auto-layout the selected nodes (or all nodes, if none are selected) downstream."""
    nodes = graph.selected_nodes() or graph.all_nodes()
    graph.auto_layout_nodes(nodes=nodes, down_stream=True)
|
import "jest-extended";
import { Block, BlockFactory } from "../../../../packages/crypto/src/blocks";
import { IBlockData } from "../../../../packages/crypto/src/interfaces";
import { configManager } from "../../../../packages/crypto/src/managers";
import { dummyBlock } from "../fixtures/block";
const expectBlock = ({ data }: { data: IBlockData }) => {
delete data.idHex;
const blockWithoutTransactions: IBlockData = { ...dummyBlock };
blockWithoutTransactions.reward = blockWithoutTransactions.reward;
blockWithoutTransactions.totalAmount = blockWithoutTransactions.totalAmount;
blockWithoutTransactions.topReward = blockWithoutTransactions.topReward;
blockWithoutTransactions.totalFee = blockWithoutTransactions.totalFee;
blockWithoutTransactions.removedFee = blockWithoutTransactions.removedFee;
delete blockWithoutTransactions.transactions;
expect(data).toEqual(blockWithoutTransactions);
};
beforeEach(() => configManager.setFromPreset("devnet"));
describe("BlockFactory", () => {
describe(".fromHex", () => {
it("should create a block instance from hex", () => {
expectBlock(BlockFactory.fromHex(Block.serializeWithTransactions(dummyBlock).toString("hex")));
});
});
describe(".fromBytes", () => {
it("should create a block instance from a buffer", () => {
expectBlock(BlockFactory.fromBytes(Block.serializeWithTransactions(dummyBlock)));
});
});
describe(".fromData", () => {
it("should create a block instance from an object", () => {
expectBlock(BlockFactory.fromData(dummyBlock));
});
});
describe(".fromJson", () => {
it("should create a block instance from JSON", () => {
expectBlock(BlockFactory.fromJson(BlockFactory.fromData(dummyBlock).toJson()));
});
});
});
|
/**
* Opens a new HQ Socket connection and attempts to perform the "Data"
* handshake.
*
* @return The opened socket on success. <code>null</code> on failure.
* @throws SecurityException
* @throws IOException
*/
private Connection openAndHandshake() throws SecurityException, IOException
{
Socket s = connector.connect();
Connection c = new SocketConnection(s, false, true);
boolean success = false;
try
{
success = handshaker.performHandshake(runId, c);
}
finally
{
if (!success)
c.close();
}
if (success)
return c;
else
return null;
}
|
// Ensure the multi-cursor can correctly iterate across multiple overlapping subcursors.
func TestMultiCursor_Multiple_Overlapping_Reverse(t *testing.T) {
mc := tsdb.MultiCursor(
NewCursor([]CursorItem{
{Key: 0, Value: 0},
{Key: 3, Value: 3},
{Key: 4, Value: 4},
}, false),
NewCursor([]CursorItem{
{Key: 0, Value: 0xF0},
{Key: 2, Value: 0xF2},
{Key: 4, Value: 0xF4},
}, false),
)
if k, v := mc.SeekTo(4); k != 4 || v.(int) != 4 {
t.Fatalf("unexpected key/value: %x / %x", k, v)
} else if k, v = mc.Next(); k != 3 || v.(int) != 3 {
t.Fatalf("unexpected key/value: %x / %x", k, v)
} else if k, v = mc.Next(); k != 2 || v.(int) != 0xF2 {
t.Fatalf("unexpected key/value: %x / %x", k, v)
} else if k, v = mc.Next(); k != 0 || v.(int) != 0 {
t.Fatalf("unexpected key/value: %x / %x", k, v)
} else if k, v = mc.Next(); k != tsdb.EOF {
t.Fatalf("expected eof, got: %x / %x", k, v)
}
}
|
Comment on the paper "The energy conservation law for electromagnetic field in application to problems of radiation of moving particles" In the paper, the energy conservation law was applied to the problem of radiation from a charged particle in an external electromagnetic field. The authors solved the problem consistently and with mathematical rigour, but obtained an incorrect result. They derived an expression that includes the change in the energies of the electromagnetic fields accompanying the moving particle, corresponding to its initial and final velocities. The energy of the field accompanying the particle is the particle's energy of electromagnetic origin, and it should not enter the solution of the problem. The origin of the authors' mistake is discussed in this comment.
|
import face_alignment
import numpy as np
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
from skimage import io
# Ignore warnings
import warnings
warnings.filterwarnings("ignore")
# Run the 3D face alignment on a test image.
fa = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D,
enable_cuda=True, flip_input=False)
# fa.make_rct_files("Databases/lfpw/testset")
# fa.load_STN_from_caffe_weights("Models/From_TwoStage/network_300W_parts", "Models/network_300W_theta.pth")
# result_list = []
# fa.use_STN("Models/12-07_Theta.pth")
# fa.train_STN("Databases/lfpw/trainset_normals_only", 1, "Models/12-07_Theta.pth")
# fa.use_STN("Models/12-07_Theta2.pth")
# fa.train_STN("Databases/lfpw/trainset", 1, "Models/12-13_STN.pth")
# fa.train_STN("Databases/10W", 4, "Models/test.pth")
fa.use_STN("Models/12-13_STN.pth")
# fa.use_STN("Models/network_300W_theta.pth")
# fa.use_STN_from_caffe()
# result_list = fa.process_folder("Databases/lfpw/testset", 1)
result_list = fa.process_folder("Databases/10W", 4)
for [image_name, preds_all] in result_list:
landmarks, gt_landmarks, proposal_img, frontal_img, _, _ = preds_all
preds = landmarks[-1] # [-1]: Use only the last face detected when there are multiple faces in one picture (all_faces=True)
gts = gt_landmarks[-1]
img = io.imread(image_name)
fig = plt.figure(figsize=(10, 10), tight_layout=True)
ax = fig.add_subplot(2, 2, 1)
ax.imshow(img)
# Landmark segments: jaw, brows, nose bridge, nostrils, eyes, outer and inner lips
segments = [(0, 17), (17, 22), (22, 27), (27, 31), (31, 36), (36, 42), (42, 48), (48, 60), (60, 68)]
for a, b in segments:
    ax.plot(gts[a:b, 0], gts[a:b, 1], marker='o', markersize=4, linestyle='-', color='b', lw=1)
for a, b in segments:
    ax.plot(preds[a:b, 0], preds[a:b, 1], marker='o', markersize=6, linestyle='-', color='w', lw=2)
ax.axis('off')
if fa.landmarks_type == face_alignment.LandmarksType._3D:
ax = fig.add_subplot(2, 2, 2, projection='3d')
surf = ax.scatter(preds[:,0]*1.2,preds[:,1],preds[:,2],c="cyan", alpha=1.0, edgecolor='b')
for a, b in [(0, 17), (17, 22), (22, 27), (27, 31), (31, 36), (36, 42), (42, 48), (48, 68)]:
    ax.plot3D(preds[a:b, 0] * 1.2, preds[a:b, 1], preds[a:b, 2], color='blue')
ax.view_init(elev=90., azim=90.)
ax.set_xlim(ax.get_xlim()[::-1])
if proposal_img:
ax = fig.add_subplot(2, 2, 3)
ax.imshow(proposal_img[-1])
ax.axis('off')
if frontal_img:
ax = fig.add_subplot(2, 2, 4)
ax.imshow(frontal_img[-1])
ax.axis('off')
plt.show()
|
Decodability of group homomorphisms beyond the Johnson bound Given a pair of finite groups G and H, the set of homomorphisms from G to H form an error-correcting code where codewords differ in at least 1/2 the coordinates. We show that for every pair of abelian groups G and H, the resulting code is (locally) list-decodable from a fraction of errors arbitrarily close to its distance. At the heart of this result is the following combinatorial result: There is a fixed polynomial p(·) such that for every pair of abelian groups G and H, if the maximum fraction of agreement between two distinct homomorphisms from G to H is Λ, then for every ε > 0 and every function f : G -> H, the number of homomorphisms that have agreement Λ + ε with f is at most p(1/ε). We thus give a broad class of codes whose list-decoding radius exceeds the "Johnson bound". Examples of such codes are rare in the literature, and for the ones that do exist, "combinatorial" techniques to analyze their list-decodability are limited. Our work is an attempt to add to the body of such techniques. We use the fact that abelian groups decompose into simpler ones and thus codes derived from homomorphisms over abelian groups may be viewed as certain "compositions" of simpler codes. We give techniques to lift list-decoding bounds for the component codes to bounds for the composed code. We believe these techniques may be of general interest.
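Since the extracted text had dropped the symbols from the combinatorial statement above, a display-form restatement may help; the names Λ and ε are assumed placeholders for the elided quantities:

\[
  \Lambda = \max_{\substack{\phi,\psi \in \mathrm{Hom}(G,H) \\ \phi \neq \psi}}
  \Pr_{x \in G}\bigl[\phi(x) = \psi(x)\bigr],
  \qquad
  \bigl|\{\phi \in \mathrm{Hom}(G,H) : \Pr_{x \in G}[\phi(x) = f(x)] \ge \Lambda + \varepsilon\}\bigr|
  \le p(1/\varepsilon)
  \quad \text{for every } \varepsilon > 0 \text{ and } f : G \to H.
\]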
|
import React, { useState, useEffect } from "react"
import styled from "styled-components"
import Icon from "./Icon"
const Card = styled.div<{
shouldShow: boolean
}>`
display: ${(props) => (props.shouldShow ? `block` : `none`)};
position: relative;
background: ${(props) => props.theme.colors.warning};
padding: 1.5rem;
border-radius: 4px;
color: ${(props) => props.theme.colors.black300};
@media (max-width: ${(props) => props.theme.breakpoints.l}) {
margin-bottom: 2rem;
}
`
const CloseIconContainer = styled.span`
position: absolute;
cursor: pointer;
top: 1.5rem;
right: 1.5rem;
& > svg {
fill: ${(props) => props.theme.colors.black300};
}
`
export interface IProps {
storageKey: string
}
const DismissibleCard: React.FC<IProps> = ({ children, storageKey }) => {
  const [shouldShow, setShouldShow] = useState(false)
  useEffect(() => {
    if (localStorage && localStorage.getItem(storageKey) !== null) {
      setShouldShow(false)
    } else {
      setShouldShow(true)
    }
  }, [storageKey])
  const handleClose = () => {
    if (localStorage) {
      localStorage.setItem(storageKey, "true")
    }
    setShouldShow(false)
  }
return (
<Card shouldShow={shouldShow}>
<CloseIconContainer onClick={handleClose}>
<Icon name="close" />
</CloseIconContainer>
{children}
</Card>
)
}
export default DismissibleCard
|
/*
* Copyright © <NAME> 2019-2021. All rights reserved
*/
package com.chillibits.particulatematterapi.shared;
import java.util.Random;
public class SharedUtils {
private SharedUtils() {}
public static double round(double value, int places) {
if (places < 0) throw new IllegalArgumentException();
long factor = (long) Math.pow(10, places);
double newValue = value * factor;
long tmp = Math.round(newValue);
return (double) tmp / factor;
}
public static String generateRandomString(int length) {
int leftLimit = 48; // numeral '0'
int rightLimit = 122; // letter 'z'
return new Random().ints(leftLimit, rightLimit + 1)
        // keep only the alphanumeric code points: '0'-'9', 'A'-'Z', 'a'-'z'
        .filter(i -> (i <= 57 || i >= 65) && (i <= 90 || i >= 97))
.limit(length)
.collect(StringBuilder::new, StringBuilder::appendCodePoint, StringBuilder::append)
.toString();
}
}
|