A Combined Experimental and Theoretical Approach to Study Temperature and Moisture Dynamic Characteristics of Intermittent Paddy Rice Drying

Intermittent drying of paddy rice is investigated both theoretically and experimentally. A model is developed to describe simultaneous heat and mass transfer for the drying stages and mass transfer for the tempering stages. The model is considered for both cylindrical and spherical geometries. The model accounts for non-constant paddy rice and air physical properties as well as surface vaporization and convection. The resulting equations are solved numerically with the finite-difference method of lines using an implicit Runge-Kutta scheme. Furthermore, a set of experiments is conducted in a laboratory-scale fluidized bed dryer to estimate the moisture diffusivity of rice and evaluate the effects of different parameters. Two correlations for moisture diffusivity are derived, one for each geometry, based on the experimental results. Notably, the choice of geometry leads to significantly different moisture diffusivities: the diffusivity obtained for the spherical representation is 2.64 times greater than that of the cylinder. Moreover, the cylindrical model fits the experimental results more precisely, especially for the tempering stage (AARDcyl = 1.03%; AARDsph = 1.53%). Model results reveal that thermal equilibrium is reached quickly, within the first 2 min. Air velocity shows no influential effect on drying once the fluidized condition is established. In addition, the drying rate is drastically improved after applying the tempering stage. A definition for tempering stage efficiency is also proposed, which shows that 3 h of tempering is 80% efficient for the studied case. Raising the temperature significantly improves the drying rate, while it does not contribute much to the tempering efficiency.
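The drying-stage mass transfer can be sketched numerically. The following Python fragment is an illustrative sketch, not the paper's fitted model or code: the values of the diffusivity D, equivalent sphere radius R, and moisture contents M0/Me are assumptions for demonstration. It solves Fickian diffusion in a sphere by the finite-difference method of lines, integrated with SciPy's Radau solver, which is an implicit Runge-Kutta method of the kind the abstract mentions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative values only -- D, R, M0, Me are assumptions, not the paper's parameters.
D = 1e-10            # moisture diffusivity, m^2/s
R = 1.25e-3          # equivalent sphere radius, m
M0, Me = 0.25, 0.05  # initial and equilibrium moisture content (dry basis)
N = 51
r = np.linspace(0.0, R, N)
dr = r[1] - r[0]

def rhs(t, M):
    """Method of lines: semi-discrete form of dM/dt = D*(M'' + (2/r)*M')."""
    dM = np.empty_like(M)
    # Centre node: symmetry gives the limit 3*D*M'', with M'' ~ 2*(M[1]-M[0])/dr^2.
    dM[0] = 6.0 * D * (M[1] - M[0]) / dr**2
    dM[1:-1] = D * ((M[2:] - 2.0 * M[1:-1] + M[:-2]) / dr**2
                    + (2.0 / r[1:-1]) * (M[2:] - M[:-2]) / (2.0 * dr))
    dM[-1] = 0.0  # surface held at the equilibrium moisture content
    return dM

M_init = np.full(N, M0)
M_init[-1] = Me
sol = solve_ivp(rhs, (0.0, 600.0), M_init, method="Radau")  # implicit Runge-Kutta

# Volume-averaged moisture ratio MR = (Mbar - Me) / (M0 - Me) after 10 min of drying.
w = r**2
Mbar = float(np.sum(sol.y[:, -1] * w) / np.sum(w))
MR = (Mbar - Me) / (M0 - Me)
print(f"moisture ratio after 10 min: {MR:.2f}")
```

Swapping the spherical Laplacian for its cylindrical counterpart (`(1/r)*M'` instead of `(2/r)*M'`, with `dM[0] = 4*D*(M[1]-M[0])/dr**2` at the axis) reproduces the paper's other geometry.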
|
/* Copyright Airship and Contributors */
#import <CoreLocation/CoreLocation.h>
#import "UALocation.h"

@class UAPreferenceDataStore;
@class UAAnalytics;
@class UASystemVersion;

NS_ASSUME_NONNULL_BEGIN

/*
 * SDK-private extensions to UALocation
 */
@interface UALocation() <CLLocationManagerDelegate>
///---------------------------------------------------------------------------------------
/// @name Location Internal Properties
///---------------------------------------------------------------------------------------
/**
* The location manager.
*/
@property (nonatomic, strong) CLLocationManager *locationManager;
/**
* The data store.
*/
@property (nonatomic, strong) UAPreferenceDataStore *dataStore;
/**
* The system version.
*/
@property (nonatomic, strong) UASystemVersion *systemVersion;
/**
* Flag indicating if location updates have been started or not.
*/
@property (nonatomic, assign, getter=isLocationUpdatesStarted) BOOL locationUpdatesStarted;
NS_ASSUME_NONNULL_END
@end
|
// Source repository: nikochan2k/excel-template
import { ExcelTemplator } from "./ExcelTemplator";
// This entry point targets environments without file-system access, so
// stub out ExcelTemplator.readFile to fail fast if it is ever called.
ExcelTemplator.readFile = (_path: string): Promise<Buffer> => {
  throw new Error("file protocol is not supported");
};
export * from "./ExcelTemplator";
|
use npm_rs::Npm;
fn main() {
    // Re-run this build script whenever the JS entry point or manifest changes.
    println!("cargo:rerun-if-changed=index.js");
    println!("cargo:rerun-if-changed=package.json");
    // `npm install` followed by `npm run build`; unwrap() aborts the build on failure.
    Npm::default().install(None).run("build").exec().unwrap();
}
|
More than 100 girls were missing on Wednesday, police said, two days after a Boko Haram attack on their school in northeast Nigeria that has raised fears of a repeat of the 2014 Chibok kidnapping that shocked the world.
Islamist militants stormed the Government Girls Science Secondary School in Dapchi, Yobe state, on Monday evening. Locals initially said the girls and their teachers fled.
But fears have been growing about the whereabouts of the students.
Around 50 parents and guardians converged on the school on Wednesday to demand answers, as police said 111 were still missing.
The police commissioner of Yobe state, Abdulmaliki Sumonu, told reporters in the state capital, Damaturu, that "815 students returned to the school and were visibly seen, out of 926 in the school".
"The rest are missing. No case of abduction has so far been established," he added.
The length of time since the attack and Boko Haram's use of kidnapping as a weapon during its nearly nine-year insurgency have increased fears of another mass abduction.
The jihadists gained worldwide notoriety in April 2014 when they abducted 276 girls from their school in Chibok, in neighbouring Borno state.
Fifty-seven escaped in the immediate aftermath and since May last year, 107 have either escaped or been released as part of a government-brokered deal.
A total of 112 are still being held.
Abubakar Shehu, whose niece is among those missing from Dapchi, told AFP: "Our girls have been missing for two days and we don't know their whereabouts.
"Although we were told they had run to some villages, we have been to all these villages mentioned without any luck. We are beginning to harbour fears the worst might have happened.
"We have the fear that we are dealing with another Chibok scenario."
The state-run boarding school in Dapchi caters for girls aged 11 and above from across Yobe state, which is one of three worst affected by the insurgency.
Inuwa Mohammed, whose 16-year-old daughter, Falmata, is also missing, said it was a confused picture and that parents had been frantically searching surrounding villages.
"Nobody is telling us anything officially," he said. "We still don't know how many of our daughters were recovered and how many are still missing.
"We have been hearing many numbers, between 67 and 94."
Yobe's education commissioner, Mohammed Lamin, said the school had been shut and a rollcall of all the girls who have returned was being conducted.
"It is only after the head-count that we will be able to say whether any girls were taken," he said.
Some of the girls had fled to villages up to 30 kilometres (nearly 20 miles) away through the remote bushland, he added.
Nigeria's information minister said he would visit Dapchi on Thursday with the defence and foreign ministers.
Boko Haram has seized thousands of women and young girls, as well as men and boys of fighting age during the conflict, which has left at least 20,000 dead since 2009.
Some 300 children were among 500 people abducted from the town of Damasak in November 2014.
Getting accurate information from the remote northeast remains difficult. The army still largely controls access and infrastructure has been devastated by nine years of conflict.
In Chibok, the military initially claimed the students had all been found but was forced to backtrack when parents and the school principal said otherwise.
As the issue gained world attention, spawning the hashtag #BringBackOurGirls, the then president Goodluck Jonathan was increasingly criticised for his lacklustre response.
The mass abduction and Jonathan's handling of it was seen as contributing to his 2015 election defeat to Muhammadu Buhari, who promised to bring the Boko Haram insurgency to an end.
But despite Buhari's repeated claims the group is weakened to the point of defeat, civilians remain vulnerable to suicide attacks and hit-and-run raids in the remote northeast.
Security analysts told AFP on Tuesday that government ransom payments to secure the release of the Chibok girls could have given the under-pressure group ideas for financing.
"They need money for arms, ammunitions, vehicles, to keep their army of fighters moving across the borders," said Amaechi Nwokolo, from the Roman Institute of International Studies.
"They're spending a lot of money on arms and logistics."
|
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

/**
 * Generic parser that reads line by line; each call to parseNext returns the next line.
 *
 * @author deepak
 */
public class GenericLineByLineParser implements Parser<String>
{
private BufferedReader br;
/**
 * Construct a line-by-line parser with the given input source
 * @param reader input source
 */
public GenericLineByLineParser(InputStreamReader reader) {
this.br = new BufferedReader(reader);
}
/**
 * Reads the next line from the input source, returning {@code null} at end of input.
 * @return parsing result
 * @throws ParsingException if parsing failed
 */
@Override
public String parseNext() throws ParsingException {
try {
return br.readLine();
} catch (IOException e) {
throw new ParsingException(e.getMessage());
}
}
@Override
public void close() throws IOException {
if (br == null) {
return;
}
try {
br.close();
} finally {
br = null;
}
}
}
|
Undernutrition among children under 5 years of age in Yemen: Role of adequate childcare provided by adults under conditions of food insecurity Objective: This study examined the associations between the adequacy of childcare provided by adult caretakers and childhood undernutrition in rural Yemen, independent of household wealth and food consumption. Methods: We analyzed data of 3,549 children under the age of 5 years living in rural areas of Yemen based on the 2013 Yemen Baseline Survey of Mother and Child Health. Nutritional status was evaluated by the presence of underweight, stunting, and wasting according to the World Health Organization child growth standards. The impact of childcare including leaving children alone, putting older children into labor force, and the use of antenatal care while pregnant on child undernutrition was assessed and adjusted for food consumption by children, household composition, demographic and educational background of caretakers, and household wealth. Results: The prevalence of underweight, stunting, and wasting was 46.2%, 62.6%, and 11.1%, respectively. Not leaving children alone, keeping children out of the labor force, and use of antenatal care were associated with a lower risk of underweight (odds ratio = 0.84, P = 0.016; OR = 0.84, P = 0.036; and OR = 0.85, P = 0.042) and stunting (OR = 0.80, P = 0.004; OR = 0.82, P = 0.024; and OR = 0.78, P = 0.003). After further adjustment for food consumption, the associations between adequate childcare indicators and lower odds of stunting remained significant (OR = 0.73, P = 0.025; OR = 0.72, P = 0.046; and OR = 0.76, P = 0.038). Conclusions: A marked prevalence of stunting among rural children in Yemen was observed. Adequate childcare by adult caretakers in families is associated with a lower incidence of underweight and stunting among children under 5 years of age. 
Promoting adequate childcare by adult household members is a feasible option for reducing undernutrition among children in rural Yemen.

Introduction
Child nutrition indicators in Yemen are some of the worst among low- to middle-income countries. The reported national prevalence of stunting in Yemen was the second highest in the world following Afghanistan in 2010 1). The national average prevalence of underweight, stunting, and wasting was 39.0%, 46.5%, and 16.3%, respectively, among children under 5 years of age in 2013; furthermore, the prevalence of underweight and wasting did not change significantly between 2003 and 2013 2). In 2015, it was projected that 1 million children under 5 years of age would experience moderate acute undernutrition and that 320,000 would be at risk of severe acute undernutrition 3,4). With the escalating political crisis in Yemen, since late 2014, food insecurity has increased because of the sporadic availability of essential food commodities, insufficient fuel, lack of income or employment opportunities, and the disruption of markets and trade. Conflict-related damage to infrastructure, shortages, and a lack of staff are among the causes of the collapse of basic health services in Yemen 4). In addition, agricultural production is failing because of inadequate rainfall and the high cost and uneven availability of agricultural inputs (such as seed, fertilizer, farm tools, animal feed, and fuel for irrigation pumps) 3). Despite these critical situations and uncertainties, clues to relieve undernutrition should be sought. The availability of adequate care for preschool children has recently been discussed as an important part of the home environment for early childhood development 5). One in five children under 5 years of age in low- and middle-income countries is without adult care for at least an hour in a given week 5).
Parental unavailability and poor working conditions, limited support networks, and the inability to afford childcare were suggested as factors associated with children being left alone at home 6). Children who receive longer and more intensive childcare reportedly eat a larger variety of foods and have higher height-for-age and weight-for-age values compared with children who receive poorer childcare 7). Child labor in low- to middle-income countries reflects poverty and a lack of care within families. Widespread child labor in low- and middle-income countries also indicates the disadvantages of children in poor households. Worldwide, 12% of children between the ages of 5 and 14 years must work 8). Lower levels of education and poorer physical and mental health are more prevalent among children who work than among those who do not work 9). The attitudes of caregivers toward maternal and child health and knowledge acquired during antenatal care visits can lower the risk of child undernutrition. The formal education level of mothers and delivery at a health facility are associated with better child nutritional status 10). Health education provided during antenatal care is also associated with better nutritional status among children living in low- and middle-income countries 11). The objectives of this study were to examine the associations between the adequacy of childcare provided by adult caretakers and childhood undernutrition in rural Yemen, independent of household wealth and food consumption.

Study design
We conducted a study using the 2013 UNICEF Yemen Baseline Survey of Mother and Child Health data, collected using the Arabic language version of the Multiple Indicator Cluster Survey (MICS) 4 household questionnaire forms 12,13). The survey protocol was reviewed and approved by the Central Statistics Office, Ministry of Planning and Cooperation-Yemen.

Study setting
Yemen is one of the poorest countries in the Middle East and North Africa region.
It is the most densely populated country in the Arabian Peninsula with a population of 26 million 14), most living in rural areas (71%) 15). Poverty, which has been significantly aggravated as a result of the political crisis, has risen from 42% of the population in 2009 to 54.5% in 2012; the gross domestic product (GDP) per capita was estimated to be US$ 1,408 in 2013 (Middle East and North Africa: US$ 4,677) 15,16). Political instability, widespread poverty, low educational attainment, and low agricultural productivity from a limited availability of land suitable for the cultivation of food grains, and water scarcity have subjected 41.1% of the population of Yemen to food insecurity 17). In Yemen, UNICEF has identified the 106 most vulnerable districts (of a total of 333 districts) as candidates for its development program interventions for the years 2012-2015 18). These districts have a total estimated population of 7.3 million, including 1.2 million children under the age of 5 years. The large majority (92%) of households in these UNICEF targeted districts live in rural areas 13). Subjects A two-stage cluster sampling of subjects from 106 districts was performed based on a set of all available data at the district level. The 106 districts were split into five strata constituting 318 clusters. In the first stage, clusters were selected randomly within the strata with a probability proportional to the size of the population. In the second stage, households were selected from each selected cluster using simple random sampling. The household sampling frame was developed by counting or relisting. We conducted an analysis of 3,549 eligible children under 5 years of age (n = 2,300 mothers) nested in 2,115 households. The study was restricted to singleton birth children to control for confounders in the study design. 
Of 3,781 children, 232 cases were excluded from the analysis: 46 for incomplete questionnaires, 131 for anthropometric flags and error tracking, and 55 for multiple births. Criteria for the inclusion of subjects in the analysis were age under 5 years, singleton birth, and anthropometric measurements within specified ranges. Finally, 3,549 subjects were included in the analysis (Figure 1).

Variables
Nutritional status indicators: The main outcome variables in this study were nutritional status indicators, measured as underweight, stunting, and wasting. According to the measured weight and height, age in months, and sex, the Z-score values of weight-for-age, height-for-age, and weight-for-height were computed using World Health Organization (WHO) Anthro software 19), which applies the WHO Child Growth Standards 20). Subjects with Z-scores falling outside of the following ranges were excluded: weight-for-age Z-score of -6 to +5, height-for-age Z-score of -6 to +5, and weight-for-height Z-score of -5 to +5 19). Underweight, stunted, and wasted were defined by WHO as Z-scores less than -2 SDs of weight-for-age, height-for-age, and weight-for-height, respectively 20,21).
Wealth index: Household wealth was measured using the wealth index calculated according to the Filmer and Pritchett method, which was developed using a principal component analysis based on household asset data (i.e., ownership of consumer items such as a TV, radio, car, refrigerator, mobile phone, computer, and generator) as well as housing characteristics (i.e., source of drinking water, sanitation facilities, type of floor materials, number of rooms, and access to electricity) 22). Quintiles of the wealth index scores among the subjects were calculated, and the individual categories were named as follows: poorest, poor, middle, rich, and richest.
Food consumption: Food items consumed by children were reported for children aged 6-23 months.
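The exclusion ranges and -2 SD cutoffs described above can be expressed as a small sketch. The function name and return shape below are our own illustrative choices, not part of WHO Anthro, which computes the underlying Z-scores:

```python
def classify(waz, haz, whz):
    """Classify one child's nutritional status from Z-scores, or return None
    if any Z-score falls outside the plausibility ranges used to exclude
    flagged records (-6..+5 for WAZ and HAZ, -5..+5 for WHZ)."""
    if not (-6 <= waz <= 5 and -6 <= haz <= 5 and -5 <= whz <= 5):
        return None  # anthropometric flag -> record excluded from analysis
    return {
        "underweight": waz < -2,  # weight-for-age below -2 SD
        "stunted": haz < -2,      # height-for-age below -2 SD
        "wasted": whz < -2,       # weight-for-height below -2 SD
    }

print(classify(-2.5, -1.0, 0.3))  # a child who is underweight only
```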
The consumption of any foods from each of the following six categories during the previous day was counted as the consumption of one item: (a) porridge/paste, bread, rice, pasta, or any foods made from grains or potatoes; (b) foods made from broad beans, pinto beans, peas, lentils, peanuts, or any other legumes; (c) milk, cheese, yogurt, buttermilk, ice cream, or other milk product; (d) fresh, dried, or packaged liver, kidney, heart, meat or other soup, beef, lamb or goat's meat, chicken, or fish; (e) eggs; and (f) carrots, squash, sweet potato, leafy vegetables, or ripe mango or papaya. The sum of the number of items was used as a food consumption variable. Caretaker characteristics: The age of the caretaker of the children who were examined as subjects and their school attendance (never or ever) were used as caretaker variables. If the caretaker was a woman who had delivered one or more children during the 2 years preceding the survey, the use of antenatal care at least once during that pregnancy (no or yes) was also used as a caretaker variable. Number of family members: According to the ages of individual family members in the respective households, the following variables were evaluated: number of children 0-4 years old, including the subject child; number of children 5-14 years old; number of adults 15-39 years old; and number of adults 40-64 years old. Adequate childcare environment of families: Whether or not the adult family members had left the subject children unattended for more than 1 hour in the 1 week preceding the survey (no or yes) was used as a variable of the childcare environment. This parameter included adults who left the subject child attended by another child under 10 years of age. The number of children in the family aged 5-14 years old who had joined the labor force was counted and used as another variable of the childcare environment. 
Statistical methods
The prevalence of underweight, stunting, and wasting according to the characteristics of the children, caretakers, and households was calculated. A multivariate logistic regression analysis was performed to investigate the relationships between nutritional indicators and the characteristics of children and caretakers and the number of family members. We applied multiple imputation methods to analyze the dataset with some missing values. Multiple imputation is a statistical method used to improve the validity of epidemiological research results and to reduce the waste of resources caused by missing data 23). Logistic regression models for binary and ordinal variables were used to handle missing dichotomous and ordinal data, respectively. The average proportion of missing data for all variables in this study was less than 2%. Predictive mean matching was used to handle missing continuous data. An additional multivariate logistic regression analysis was performed to evaluate independent associations between nutritional indicators and variables of adequate childcare by adult caretakers (adult family members did not leave children alone, no children in the household who were 5-14 years old were engaged in labor, and the use of antenatal care by caretakers) after adjustment for the influence of household wealth and food consumption. Interaction effects between child age and adequate childcare were investigated. The data were identified and analyzed using StataMP version 13.0 (StataCorp LP, College Station, TX, USA).

Table 1 shows the frequency distribution of children according to the characteristics of children, caretakers, and households. The average age of the children was 28.7 months (standard error [SE], 16.7 months), and the average age of the caretakers was 29.9 years (SE, 7.0 years). The average household size was 7.7 members (SE, 3.7 members) among 2,115 households.
Results
The Spearman rank correlation between the wealth index quintile and the number of food items consumed by children 6-23 months old was 0.20 (P < 0.001). Among 2,115 households, the Spearman rank correlation between household size and the wealth index quintile was 0.10 (P < 0.001). Table 2 shows the prevalence of underweight, stunting, and wasting according to the characteristics of children, caretakers, and households. The overall prevalence of underweight, stunting, and wasting was 46.2%, 62.6%, and 11.1%, respectively. The consumption of fewer food items was associated with the prevalence of underweight, and the non-use of antenatal care by caretakers was associated with the prevalence of underweight, stunting, and wasting. The prevalence of underweight, stunting, and wasting among children from poor families was significantly higher than that among children from richer families. There were variations in the prevalence of undernutrition according to the number of adult members in the households. Children from families where adult family members left the subject children unattended for more than 1 hour in a week were more likely to exhibit an underweight, stunting, or wasting status. Children from families where older children in the family had entered the labor force were more likely to be underweight or stunted. Table 3 shows the results of a multivariable logistic regression analysis of the associations between undernutrition and the characteristics of children and caretakers and the number of household members according to age groups. A higher number of men and women 15-39 years old was significantly associated with a lower prevalence of underweight (OR, 0.94; 95% confidence interval [CI], 0.90-0.98; P = 0.007) and stunting (OR, 0.95; 95% CI, 0.91-0.99; P = 0.023).
Table 4 shows the adjusted ORs of undernutrition for the variables of adequate childcare (adult family members did not leave children alone, no children in the household aged 5-14 years engaged in labor, and use of antenatal care by caretakers) after adjustment for the characteristics of children and caretakers, the number of family members, and household wealth. Not leaving children alone was significantly associated with a lower prevalence of underweight (OR, 0.84; 95% CI, 0.72-0.97; P = 0.016) and stunting (OR, 0.80; 95% CI, 0.69-0.93; P = 0.004). Older children not working was significantly associated with a lower prevalence of underweight (OR, 0.84; 95% CI, 0.71-0.99; P = 0.036) and stunting (OR, 0.82; 95% CI, 0.69-0.97; P = 0.024). Use of antenatal care was significantly associated with a lower prevalence of underweight (OR, 0.85; 95% CI, 0.72-0.99; P = 0.042) and stunting (OR, 0.78; 95% CI, 0.66-0.92; P = 0.003). Statistically significant interactions were observed between child age and both not leaving children alone (P = 0.003) and use of antenatal care (P = 0.029). The adjusted OR for not leaving children alone on underweight was 1.23 (95% CI, 0.92-1.65; P = 0.165) for a child age of 0 months and 0.57 (95% CI, 0.42-0.76; P < 0.001) for a child age of 59 months. The adjusted OR for use of antenatal care on underweight was 1.10 (95% CI, 0.83-1.46; P = 0.505) for a child age of 0 months and 0.59 (95% CI, 0.41-0.85; P = 0.004) for a child age of 59 months. Table 5 shows the adjusted ORs of undernutrition for not leaving children alone, older children not working, and use of antenatal care after adjustments for food consumption by children and other variables shown in Table 4. A higher number of food items consumed was significantly associated with a lower prevalence of underweight and stunting.
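The age-dependent odds ratios reported above follow from a logistic model with a child-age interaction term, so that OR(age) = exp(b0 + b1*age). As an illustration, b0 and b1 below are back-calculated from the reported ORs for not leaving children alone (1.23 at 0 months, 0.57 at 59 months); they are not coefficients taken from the paper's fitted model:

```python
import math

# OR(age) = exp(b0 + b1 * age_in_months), with b0 and b1 back-calculated
# from the two reported endpoints purely for illustration.
b0 = math.log(1.23)
b1 = (math.log(0.57) - math.log(1.23)) / 59.0

def odds_ratio(age_months):
    """Adjusted OR of underweight for 'not leaving children alone' at a given age."""
    return math.exp(b0 + b1 * age_months)

for age in (0, 24, 59):
    print(f"age {age:2d} months: OR = {odds_ratio(age):.2f}")
```

The protective association (OR below 1) emerges only as children grow older, which is the interaction effect the text describes.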
Variables of adequate childcare by adults (i.e., not leaving children alone, older children not working, and use of antenatal care) were significantly associated with a lower prevalence of stunting. A statistically significant interaction was observed between child age and use of antenatal care (P = 0.049). The adjusted OR for use of antenatal care on stunting was 5.03 (95% CI, 0.75-33.54; P = 0.095) for a child age of 0 months and decreased to 0.44 (95% CI, 0.24-0.80; P = 0.008) for a child age of 59 months.

Discussion
This study presented the prevalence and determinants of underweight, stunting, and wasting among children under 5 years of age in rural areas of Yemen, which were influenced by food insecurity. Children living with a small number of adult members aged 15-39 years old were significantly more likely to be underweight and stunted. Multivariate analyses revealed that stunting was negatively associated with adult family members who did not leave children alone, families with no older children engaged in labor, and the use of antenatal care by caretakers, independent of household wealth and food consumption by children. The present study used a standardized questionnaire developed by UNICEF. The subjects were randomly selected according to a standard protocol, the household response rate was 93%, and 94% of eligible children were included in the analysis. The magnitude of the selection bias was considered relatively low. Child growth was assessed and analyzed according to the standard methods developed by the WHO. Household wealth was evaluated using a strategy similar to that used in the UNICEF Multiple Indicator Cluster Surveys as well as the Demographic and Health Surveys. Therefore, the results of this study can be interpreted with reference to previous surveys using UNICEF standardized protocols. The prevalence of underweight and stunting among children in rural areas of Yemen estimated in this study were higher than the national average.
Stunting is an indicator of previous growth failure resulting from chronic undernutrition, whereas wasting indicates current or acute undernutrition resulting from failure to gain weight or weight loss; underweight is a composite measure of stunting and wasting 21). The prevalence of stunting is reportedly higher in less wealthy households 24), and the magnitude of socioeconomic inequalities in underweight and stunting was larger in countries with a higher prevalence of underweight and stunting, respectively 25). Children whose families were educated and consisted of fewer than five family members had significantly lower odds of undernutrition compared with peers in illiterate families and family sizes of more than five members, respectively 26). Independent of household wealth, a lower paternal and maternal education level was also associated with child stunting 27). Household wealth and the educational status of caretakers consistently affected the nutritional status of children in this study, in agreement with earlier studies. The average number of household members in Yemen is 6.7, which is higher than the averages of other Arabic countries such as Egypt (4.1, as of 2014) and Jordan (5.1) 28,29). There are some unique characteristics of family composition in Yemen. An extended family, where relatives live together, is common in Yemen, and more than 25% of households consist of nine or more family members 2). There are also polygamous families, and 6.1% of women live with co-wives 2). The results of the present study showed that the number of household members between the ages of 15 and 39 years old was associated with the alleviation of undernutrition in children. Other studies counting the number of children and adults inclusive showed that the number of household members of all ages was associated with undernutrition.
Having a larger number of siblings, which often suggests more competition for food, was associated with child undernutrition among children under 5 years old 30,33). Within-household cooperation by adult members could improve the quality of childcare in the household 34). Children in households where children received adequate care from caretakers had a lower risk of child undernutrition; the prevalence of undernutrition was lower in households where adult members of the family did not leave the children alone and in which children had not entered the labor force. These associations remained significant after adjustments for the number of family members, household wealth, and food consumption by subject children. A large number of adult members in Yemeni households appears to have a protective function for the health of children living within the household. The present study showed an association between the use of antenatal care by caretakers and the nutritional status of children among poor populations in a food-insecure country, even after adjustments for within-population variations in socioeconomic status and educational level. These findings suggest that the potential health benefits of antenatal care services extend beyond the conditions at the time of the visit to influence a wider range of health issues in the future. The potential mechanism that could explain the impact of inadequate childcare by adult family members on nutritional status may be exerted through a lack of feeding and hygiene practices. Feeding practices do not refer just to the foods recommended for a child, but also to the broad range of dietary, behavioral, and physiological processes involved 39). Our findings have shown that food consumption is a mediator of the association between childcare and stunting. Children left alone at home could be exposed to poor feeding practices including breastfeeding and complementary feeding 35,36).
Poor sanitation and high-risk hygiene behaviors expose children to infectious disease, especially diarrheal diseases, which would lead to undernutrition 38,40). The number of food items consumed was associated with undernutrition in children. This study confirmed evidence from a previous study that a higher dietary diversity improves the nutritional condition of children 41). An analysis of the Demographic and Health Surveys showed that a high dietary diversity was associated with a higher height-for-age Z-score in nine countries 42). In addition to promoting appropriate childcare and improving the socioeconomic status of the household, efforts should be continued to secure the availability of food and to improve the nutritional status of children in poorer areas. The present study has some limitations. Food consumption patterns were assessed only among children aged 6-23 months old. This study had a cross-sectional design, and temporal relations are not provided. The cross-sectional nature of this study also leaves open the potential influence of a selection bias on the population; for example, subjects who were severely malnourished might have died at a younger age and therefore would not be included in the present analysis. The inclusion of polygamous families in this study should be considered when generalizing the results of this study to other populations. The occupations of caretakers and their families could have been a confounder of the association between adequate childcare and child nutrition; however, the potential magnitude of the confounding bias was considered to be limited because the variation in occupations in rural areas is relatively small. Interactions by child age were observed. Associations between adequate childcare by adult caretakers and underweight among older-age children were strong, whereas those associations among infants were small or missing; however, these results do not contradict our conclusions.
Inequities in the availability of adequate childcare increase as children grow older, although the differences at very young ages are small. This is the first study to focus on the role of adequate childcare provided by adults in households in rural Yemen, where food insufficiency exists and undernutrition of children is prevalent. Caregiving practices, parenting, feeding, and caregiving resources are now partly considered when analyzing issues related to child nutrition and development in low- and middle-income countries 43). The present study suggests potential alternative strategies for promoting child nutrition by improving the childcare environment under sustained limited access to food. Child undernutrition is an urgent and prioritized public health issue in rural Yemen, where the majority of women are illiterate and economically poor. Food security in Yemen worsened between 2009 and 2011 17). The ongoing political crisis since 2014 has further worsened the situation 44). Because food insecurity persists, the caretaking environment for children can act as a control gate to improve their nutritional status. Although the recovery of food security and the alleviation of poverty might be the direct and root-cause solutions for normalizing the nutritional status of children, the provision of adequate childcare by adult family members should also be sought in Yemen. Attention to appropriate childcare should be a focus of interest and should be integrated into health programs. Inter-sectoral approaches should also be enhanced to promote adequate caretaking by adult family members. In conclusion, severe undernutrition in children still exists in rural areas of Yemen. Gradients in the prevalence of undernutrition according to wealth and food consumption exist.
Independent of household wealth and food consumption by children, factors such as continuous attention to childcare by adult caretakers who do not leave children alone, having no older children engaged in labor, and the participation of caretakers in childcare education by attending antenatal care were associated with a better nutritional status of children. In addition to international cooperation efforts to improve food security in rural areas affected by conflicts, integrated efforts to advance childcare by adult caretakers in families should be emphasized to prevent child undernutrition in rural Yemen.
|
// src/extension.ts
// The module 'vscode' contains the VS Code extensibility API
// Import the module and reference it with the alias vscode in your code below
import * as vscode from 'vscode';
import net = require('net');
import {
    LanguageClient,
    LanguageClientOptions,
    StreamInfo,
    ExecuteCommandRequest,
    ExecuteCommandParams
} from 'vscode-languageclient/node';

let client: LanguageClient;

class Parameter {
    constructor(readonly label: string, readonly placeholder: string) {}
}

async function executeCommand(command: string, parameters: Parameter[]) {
    let userArgs: string[] = [];
    for (let p of parameters) {
        let options: vscode.InputBoxOptions = {
            prompt: p.label,
            placeHolder: p.placeholder
        };
        let userArg = await vscode.window.showInputBox(options);
        if (userArg === undefined) return; // silently cancel operation if user cancels input
        userArgs.push(userArg);
    }
    let exec: ExecuteCommandParams = {
        command: command,
        arguments: userArgs
    };
    let execPromise = client.sendRequest(ExecuteCommandRequest.type, exec);
    execPromise.then((response) => {
        vscode.window.showInformationMessage(`${command}(${userArgs.join(", ")}) result:\n${response}`);
    });
}

// this method is called when your extension is activated
// your extension is activated the very first time the command is executed
export function activate(context: vscode.ExtensionContext) {
    // Use the console to output diagnostic information (console.log) and errors (console.error)
    // This line of code will only be executed once when your extension is activated
    console.log('Mo|E client activated');

    // The command has been defined in the package.json file
    // Now provide the implementation of the command with registerCommand
    // The commandId parameter must match the command field in package.json
    let disposable = vscode.commands.registerCommand('mope-client.connect', () => {
        // The code you place here will be executed every time your command is executed
        // Display a message box to the user
        let options: vscode.InputBoxOptions = {
            prompt: "Server port:",
            value: "6667"
        };
        let userPort = vscode.window.showInputBox(options);
        userPort.then((x) => {
            startLanguageClient(parseInt(x ?? "6667"));
        }, (reason) => {
            vscode.window.showInformationMessage("User rejected input for reason " + reason);
        });
    });
    context.subscriptions.push(disposable);

    disposable = vscode.commands.registerCommand('mope-client.disconnect', () => {
        deactivate();
    });
    context.subscriptions.push(disposable);

    // sendExpression
    disposable = vscode.commands.registerCommand(
        'mope-client.sendExpression',
        () => executeCommand("ExecuteCommand", [
            new Parameter("OM scripting command", "simulate(Modelica.Electrical.Analog.Examples.Rectifier)")
        ])
    );
    context.subscriptions.push(disposable);

    // loadFile
    disposable = vscode.commands.registerCommand(
        'mope-client.loadFile',
        () => executeCommand("LoadFile", [
            new Parameter("Filename", "/home/mote/Downloads/example.mo")
        ])
    );
    context.subscriptions.push(disposable);

    // checkModel
    disposable = vscode.commands.registerCommand(
        'mope-client.checkModel',
        () => executeCommand("CheckModel", [
            new Parameter("Model name", "Modelica.Electrical.Analog.Examples.Rectifier")
        ])
    );
    context.subscriptions.push(disposable);

    // AddPath
    disposable = vscode.commands.registerCommand(
        'mope-client.addPath',
        () => executeCommand("AddPath", [
            new Parameter("Path", "/home/mote/Documents/modelica-libraries")
        ])
    );
    context.subscriptions.push(disposable);

    // GetPath
    disposable = vscode.commands.registerCommand(
        'mope-client.getModelicaPath',
        () => executeCommand("GetPath", [])
    );
    context.subscriptions.push(disposable);

    // loadModel
    disposable = vscode.commands.registerCommand(
        'mope-client.loadModel',
        () => executeCommand("LoadModel", [
            new Parameter("Model name", "Modelica.Electrical.Analog.Examples.Rectifier")
        ])
    );
    context.subscriptions.push(disposable);

    // getVersion
    disposable = vscode.commands.registerCommand(
        'mope-client.getVersion',
        () => executeCommand("Version", [])
    );
    context.subscriptions.push(disposable);
}

function startLanguageClient(port: number) {
    let connectionInfo = {
        port: port,
        host: "127.0.0.1"
    };
    let serverOptions = () => {
        // Connect to the language server via socket
        let socket = net.connect(connectionInfo);
        let result: StreamInfo = {
            writer: socket,
            reader: socket
        };
        return Promise.resolve(result);
    };
    let clientOptions: LanguageClientOptions = {
        // Register the server for Modelica code
        documentSelector: [{ scheme: 'file', language: 'modelica' }],
        synchronize: {
            // Notify the server about file changes to Modelica files contained in the workspace
            fileEvents: vscode.workspace.createFileSystemWatcher('**/*.mo')
        }
    };
    client = new LanguageClient(
        'mopeClient',
        'Mo|E client',
        serverOptions,
        clientOptions
    );
    client.start();
    client.onReady().then(addWorkspaceFoldersToModelicaPath);
}

function addWorkspaceFoldersToModelicaPath() {
    for (let wsf of vscode.workspace.workspaceFolders ?? []) {
        let added = ensurePathIsInModelicaPath(wsf.uri.fsPath);
        added.then((x) => {
            if (x) {
                vscode.window.showInformationMessage(`Added workspace path ${wsf.uri.fsPath} to MODELICAPATH.`);
            }
        });
    }
}

// this method is called when your extension is deactivated
export function deactivate(): Thenable<void> | undefined {
    if (!client) {
        return undefined;
    }
    // FIXME this information message is currently not shown (maybe because it is cleaned up too early?)
    console.log("Mo|E is being deactivated");
    let msg = vscode.window.showInformationMessage("Mo|E client was deactivated, please reconnect using the command 'Mo|E connect'");
    let res = msg.then(disconnect, disconnect);
    return res;
}

export function disconnect(): Thenable<void> | undefined {
    console.log("Mo|E connection was closed");
    return client.stop();
}

async function ensurePathIsInModelicaPath(fsPath: string) {
    let currentPath: string = await client.sendRequest(ExecuteCommandRequest.type, {
        command: "GetPath",
        arguments: []
    });
    if (currentPath.indexOf(fsPath) >= 0) { return false; }
    await client.sendRequest(ExecuteCommandRequest.type, {
        command: "AddPath",
        arguments: [fsPath]
    });
    return true;
}
|
/* ====================================================================
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
==================================================================== */
package org.apache.poi.hmef;

import java.io.ByteArrayInputStream;

import junit.framework.TestCase;

import org.apache.poi.POIDataSamples;
import org.apache.poi.hmef.attribute.MAPIAttribute;
import org.apache.poi.hmef.attribute.MAPIRtfAttribute;
import org.apache.poi.hsmf.datatypes.MAPIProperty;
import org.apache.poi.util.IOUtils;
import org.apache.poi.util.LittleEndian;
import org.apache.poi.util.StringUtil;

public final class TestCompressedRTF extends TestCase {
    private static final POIDataSamples _samples = POIDataSamples.getHMEFInstance();

    private static final String block1 = "{\\rtf1\\adeflang102";
    private static final String block2 = block1 + "5\\ansi\\ansicpg1252";

    /**
     * Check that things are as we expected. If this fails,
     * then decoding has no hope...
     */
    public void testQuickBasics() throws Exception {
        HMEFMessage msg = new HMEFMessage(
            _samples.openResourceAsStream("quick-winmail.dat")
        );

        MAPIAttribute rtfAttr = msg.getMessageMAPIAttribute(MAPIProperty.RTF_COMPRESSED);
        assertNotNull(rtfAttr);
        assertTrue(rtfAttr instanceof MAPIRtfAttribute);

        // Check the start of the compressed version
        byte[] data = ((MAPIRtfAttribute) rtfAttr).getRawData();
        assertEquals(5907, data.length);

        // First 16 bytes is header stuff
        // Check it has the length + compressed marker
        assertEquals(5907 - 4, LittleEndian.getShort(data));
        assertEquals(
            "LZFu",
            StringUtil.getFromCompressedUnicode(data, 8, 4)
        );

        // Now look at the code
        assertEquals((byte) 0x07, data[16 + 0]);  // Flag: cccUUUUU
        assertEquals((byte) 0x00, data[16 + 1]);  // c1a: offset 0 / 0x000
        assertEquals((byte) 0x06, data[16 + 2]);  // c1b: length 6+2 -> {\rtf1\a
        assertEquals((byte) 0x01, data[16 + 3]);  // c2a: offset 16 / 0x010
        assertEquals((byte) 0x01, data[16 + 4]);  // c2b: length 1+2 -> def
        assertEquals((byte) 0x0b, data[16 + 5]);  // c3a: offset 182 / 0xb6
        assertEquals((byte) 0x60, data[16 + 6]);  // c3b: length 0+2 -> la
        assertEquals((byte) 0x6e, data[16 + 7]);  // n
        assertEquals((byte) 0x67, data[16 + 8]);  // g
        assertEquals((byte) 0x31, data[16 + 9]);  // 1
        assertEquals((byte) 0x30, data[16 + 10]); // 0
        assertEquals((byte) 0x32, data[16 + 11]); // 2
        assertEquals((byte) 0x66, data[16 + 12]); // Flag: UccUUccU
        assertEquals((byte) 0x35, data[16 + 13]); // 5
        assertEquals((byte) 0x00, data[16 + 14]); // c2a: offset 6 / 0x006
        assertEquals((byte) 0x64, data[16 + 15]); // c2b: length 4+2 -> \ansi\a
        assertEquals((byte) 0x00, data[16 + 16]); // c3a: offset 7 / 0x007
        assertEquals((byte) 0x72, data[16 + 17]); // c3b: length 2+2 -> nsi
        assertEquals((byte) 0x63, data[16 + 18]); // c
        assertEquals((byte) 0x70, data[16 + 19]); // p
        assertEquals((byte) 0x0d, data[16 + 20]); // c6a: offset 221 / 0x0dd
        assertEquals((byte) 0xd0, data[16 + 21]); // c6b: length 0+2 -> g1
        assertEquals((byte) 0x0e, data[16 + 22]); // c7a: offset 224 / 0x0e0
        assertEquals((byte) 0x00, data[16 + 23]); // c7b: length 0+2 -> 25
        assertEquals((byte) 0x32, data[16 + 24]); // 2
    }

    /**
     * Check that we can decode the first 8 codes
     * (1 flag byte + 8 codes)
     */
    public void testFirstBlock() throws Exception {
        HMEFMessage msg = new HMEFMessage(
            _samples.openResourceAsStream("quick-winmail.dat")
        );

        MAPIAttribute attr = msg.getMessageMAPIAttribute(MAPIProperty.RTF_COMPRESSED);
        assertNotNull(attr);
        MAPIRtfAttribute rtfAttr = (MAPIRtfAttribute) attr;

        // Truncate to header + flag + data for flag
        byte[] data = new byte[16 + 12];
        System.arraycopy(rtfAttr.getRawData(), 0, data, 0, data.length);

        // Decompress it
        CompressedRTF comp = new CompressedRTF();
        byte[] decomp = comp.decompress(new ByteArrayInputStream(data));
        String decompStr = new String(decomp, "ASCII");

        // Test
        assertEquals(block1.length(), decomp.length);
        assertEquals(block1, decompStr);
    }

    /**
     * Check that we can decode the first 16 codes
     * (flag + 8 codes, flag + 8 codes)
     */
    public void testFirstTwoBlocks() throws Exception {
        HMEFMessage msg = new HMEFMessage(
            _samples.openResourceAsStream("quick-winmail.dat")
        );

        MAPIAttribute attr = msg.getMessageMAPIAttribute(MAPIProperty.RTF_COMPRESSED);
        assertNotNull(attr);
        MAPIRtfAttribute rtfAttr = (MAPIRtfAttribute) attr;

        // Truncate to header + flag + data for flag + flag + data
        byte[] data = new byte[16 + 12 + 13];
        System.arraycopy(rtfAttr.getRawData(), 0, data, 0, data.length);

        // Decompress it
        CompressedRTF comp = new CompressedRTF();
        byte[] decomp = comp.decompress(new ByteArrayInputStream(data));
        String decompStr = new String(decomp, "ASCII");

        // Test
        assertEquals(block2.length(), decomp.length);
        assertEquals(block2, decompStr);
    }

    /**
     * Check that we can correctly decode the whole file
     * TODO Fix what looks like a padding issue
     */
    public void testFull() throws Exception {
        HMEFMessage msg = new HMEFMessage(
            _samples.openResourceAsStream("quick-winmail.dat")
        );

        MAPIAttribute attr = msg.getMessageMAPIAttribute(MAPIProperty.RTF_COMPRESSED);
        assertNotNull(attr);
        MAPIRtfAttribute rtfAttr = (MAPIRtfAttribute) attr;

        byte[] expected = IOUtils.toByteArray(
            _samples.openResourceAsStream("quick-contents/message.rtf")
        );

        CompressedRTF comp = new CompressedRTF();
        byte[] data = rtfAttr.getRawData();
        byte[] decomp = comp.decompress(new ByteArrayInputStream(data));

        // Check the length was as expected
        assertEquals(data.length, comp.getCompressedSize() + 16);
        assertEquals(expected.length, comp.getDeCompressedSize());

        // Will have been padded though
        assertEquals(expected.length + 2, decomp.length);
        byte[] tmp = new byte[expected.length];
        System.arraycopy(decomp, 0, tmp, 0, tmp.length);
        decomp = tmp;

        // By byte
        assertEquals(expected.length, decomp.length);
        for (int i = 0; i < expected.length; i++) {
            assertEquals(expected[i], decomp[i]);
        }

        // By String
        String expString = new String(expected, "ASCII");
        String decompStr = rtfAttr.getDataString();
        assertEquals(expString.length(), decompStr.length());
        assertEquals(expString, decompStr);
    }
}
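The flag and code bytes the first test walks through follow a small LZ77-style scheme. As a rough, self-contained illustration (assuming the usual compressed-RTF packing: a 12-bit dictionary offset in the high bits of each two-byte reference, a 4-bit stored length to which 2 is added, and flag bits read least-significant-bit first with 1 marking a back-reference), decoding a single code can be sketched in Python; the sample values are taken directly from the assertions above:

```python
def parse_lzfu_code(hi, lo):
    """Split a 2-byte LZFu back-reference into (offset, length).

    The 16-bit value packs a 12-bit dictionary offset in the high bits
    and a 4-bit length (stored length + 2) in the low bits.
    """
    value = (hi << 8) | lo
    offset = value >> 4
    length = (value & 0x0F) + 2
    return offset, length

def flag_bits(flag):
    """LSB-first flag bits: 1 = back-reference (c), 0 = literal byte (U)."""
    return [(flag >> i) & 1 for i in range(8)]

# Values from the assertions above:
print(parse_lzfu_code(0x00, 0x06))  # (0, 8)   -> "{\rtf1\a"
print(parse_lzfu_code(0x01, 0x01))  # (16, 3)  -> "def"
print(parse_lzfu_code(0x0B, 0x60))  # (182, 2) -> "la"
print(flag_bits(0x07))              # [1, 1, 1, 0, 0, 0, 0, 0] -> cccUUUUU
```

This matches the offsets and lengths annotated in the test comments (e.g. "c3a: offset 182 / 0xb6", "c3b: length 0+2").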
|
#include <stdio.h>

int main(void)
{
    int i, j, k;

    printf("Enter a three-digit number: ");
    /* Read one digit at a time; scanf_s is Microsoft-specific,
       use scanf on other platforms */
    scanf_s("%1d%1d%1d", &i, &j, &k);
    printf("The reversal is: %d%d%d\n", k, j, i);
    return 0;
}
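The same reversal can also be done arithmetically instead of reading one digit at a time; a small illustrative sketch (the function name is ours):

```python
def reverse_three_digit(n):
    """Reverse a three-digit number arithmetically (e.g. 123 -> 321)."""
    hundreds, rest = divmod(n, 100)
    tens, ones = divmod(rest, 10)
    return ones * 100 + tens * 10 + hundreds

print(reverse_three_digit(123))  # 321
print(reverse_three_digit(905))  # 509
```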
|
// frontend/src/API/devicesAPI.ts
import { default as axios } from '../core/axios';
import Notification from '../components/Notification';
import errorHandler from './utils/errorHandler';

export const devicesAPI = {
    getDevices: async () => {
        try {
            return await axios.get('/api/devices');
        } catch (err) {
            errorHandler(err);
        }
    },
    createDevice: async (device: any) => {
        try {
            let response = await axios.post('/api/devices', device);
            Notification({
                text: 'DeviceProfile was created!',
                type: 'success',
                title: "Success!"
            });
            return response;
        } catch (err) {
            errorHandler(err);
        }
    },
};

export default devicesAPI;
|
def findMaxGuests(arrl, exit, n):
    # Sort arrival and exit arrays
    arrl.sort()
    exit.sort()

    # guests_in indicates the number of guests present at a time
    guests_in = 1
    max_guests = 1
    time = arrl[0]
    i = 1
    j = 0

    # Similar to the merge step of merge sort:
    # process all events in sorted order
    while i < n and j < n:
        # If the next event in sorted order is an
        # arrival, increment the count of guests
        if arrl[i] <= exit[j]:
            guests_in = guests_in + 1
            # Update max_guests if needed
            if guests_in > max_guests:
                max_guests = guests_in
                time = arrl[i]
            # Increment the index of the arrival array
            i = i + 1
        # Otherwise the next event is an exit
        else:
            guests_in = guests_in - 1
            j = j + 1

    return max_guests


for _ in range(int(input())):
    n, k = map(int, input().split())
    ar = list(map(int, input().split()))
    start = []
    end = []
    for i in range(n // 2):
        start.append(min(ar[i], ar[-i - 1]) + 1)
        end.append(max(ar[i], ar[-i - 1]) + k)
        start.append(ar[i] + ar[-i - 1])
        end.append(ar[i] + ar[-i - 1])
    print(n - findMaxGuests(start, end, len(start)))
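The merge-style event sweep in findMaxGuests can be sanity-checked in isolation. Below is a minimal, self-contained restatement (renamed, with made-up guest times purely for illustration):

```python
def find_max_guests(arrivals, exits):
    """Sweep sorted arrival/exit events, tracking the running guest count."""
    arrivals = sorted(arrivals)
    exits = sorted(exits)
    guests = max_guests = 0
    i = j = 0
    while i < len(arrivals):
        if arrivals[i] <= exits[j]:
            guests += 1  # an arrival comes first: one more guest present
            max_guests = max(max_guests, guests)
            i += 1
        else:
            guests -= 1  # a departure comes first: one fewer guest present
            j += 1
    return max_guests

print(find_max_guests([1, 2, 9, 5, 5], [4, 5, 12, 9, 12]))  # 3
```

At most three guests (arriving at 1, 2, and 5 or at 5, 5, and 9) overlap in this example, which is what the sweep reports.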
|
// src/main/java/com/hust/edu/cn/greedy/_122.java
package com.hust.edu.cn.greedy;

class _122 {
    public int maxProfit(int[] prices) {
        if (prices.length == 0) {
            return 0;
        }
        int min = prices[0], max = prices[0];
        int res = 0;
        for (int i = 1; i < prices.length; i++) {
            if (prices[i] >= max) {
                max = prices[i];
            } else {
                res += max - min;
                max = prices[i];
                min = prices[i];
            }
        }
        if (max != min) {
            res += max - min;
        }
        return res;
    }
}
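The valley/peak accumulation above is equivalent to summing every positive day-over-day price difference, so the class can be cross-checked with a one-line restatement (the prices are made-up examples):

```python
def max_profit(prices):
    """Sum every upward price step; equivalent to the valley/peak greedy above."""
    return sum(max(prices[i] - prices[i - 1], 0) for i in range(1, len(prices)))

print(max_profit([7, 1, 5, 3, 6, 4]))  # 7  (buy at 1 sell at 5, buy at 3 sell at 6)
print(max_profit([1, 2, 3, 4, 5]))     # 4
print(max_profit([]))                  # 0
```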
|
Wildcat formation
Wildcat formation describes a formation for the offense in football in which the ball is snapped not to the quarterback but directly to a player of another position lined up at the quarterback position. (In most systems, this is a running back, but some playbooks have the wide receiver, fullback, or tight end taking the snap.) The Wildcat features an unbalanced offensive line and looks to the defense like a sweep behind zone blocking. A player moves across the formation prior to the snap. However, once this player crosses the position of the running back who will receive the snap, the play develops unlike the sweep.
The Wildcat is a gambit rather than an overall offensive philosophy. It can be a part of many offenses. For example, a spread-option offense might use the Wildcat formation to keep the defense guessing, or a West Coast offense may use the power-I formation to threaten a powerful run attack.
The Wildcat scheme is a derivation of Pop Warner's Single Wing offense dating back to the 1920s. The Wildcat was invented by Billy Ford and Ryan Wilson, and was originally called the "Dual" formation. The offensive coaching staff of the Kansas State Wildcats, namely Bill Snyder and Del Miller, made significant contributions to the formation's development throughout the 1990s and 2000s and is often cited as being the formation's namesake. It has been used since the late 1990s at every level of the game, including the CFL, NFL, NCAA, NAIA, and high schools across North America. Coaching staffs have used it with variations and have given their versions a variety of names. The Wildcat was popularized in the first decade of the 2000s by South Carolina Gamecocks coach Steve Spurrier to utilize Syvelle Newton in all offensive positions on the field. It was also used in that decade by the Arkansas Razorbacks to utilize the unique skill sets of their three running backs, Darren McFadden, Felix Jones, and Peyton Hillis. Though its popularity as a regular offensive weapon has waned in recent years as defenses have adapted to it, some teams will still use it occasionally to run a trick play.
History
One possible precursor to the wildcat formation was the "wing-T", widely credited as first implemented by Coach Tubby Raymond and the Delaware Fightin' Blue Hens football team. Raymond later wrote a book on the innovative formation. The wildcat's similarity to the wing-T is the focus on series football, where the initial movements of every play look similar. For example, the wing-T also makes use of motion across the formation in order to draw a reaction from the defense, but runs several different plays from the same look.
Another possible precursor to the wildcat is the offense of Six-Man Football, a form of high school football, played mostly in rural West Texas and Montana, that was developed in 1934. In six-man, the person who receives the snap may not run the ball past the line of scrimmage. To bypass this limitation, teams often snap the ball to a receiver, who then tosses the ball to the potential passer. The passer may then throw the ball to a receiver or run with the ball himself.
The virtue of having a running back take the snap in the wildcat formation is that the rushing play is 11-on-11, although different variations have the running back hand off or throw the football. In a standard football formation, when the quarterback stands watching, the offense operates on a 10-on-11 basis. The motion also presents the defense with an immediate threat to the outside that it must respect no matter what the offense decides to do with the football.
High school
The Wall Street Journal credited Hugh Wyatt, a longtime coach in the Pacific Northwest, with naming the offense. Wyatt, coaching the La Center High School Wildcats, published an article in Scholastic Coach and Athletic Director magazine in 1998, where he explained his version of the offense, which relied on two wing backs as the two backfield players directly behind the center, alternating to receive the snap. Other high school football programs across the United States adopted Wyatt's Wildcat offense.
College
Alabama's David Palmer was one of the first "wildcat" quarterbacks on the national scene running the formation in 1993.
The wildcat was popularized on the college level by Bill Snyder, head coach of the Kansas State University Wildcats with Michael Bishop as quarterback in 1997 and 1998 when they made a run at the top of the national rankings. Bishop rushed for 1304 career yards in two seasons, including 748 yards on 177 carries during the '98 season. This type of offense was the catalyst for Urban Meyer's offense during the start of his career. It was Meyer's success with quarterback Josh Harris at Bowling Green that helped the formation come to the forefront.
The wildcat has been continued by current Auburn head coach Gus Malzahn, and former Ole Miss Rebels offensive coordinator David Lee when they were offensive coordinators for the Arkansas Razorbacks after seeing the success of Bill Snyder and Urban Meyer. In 2006, Malzahn was the offensive coordinator for the Razorbacks. Malzahn introduced the wildcat into the Arkansas offense. When Malzahn left for Tulsa in 2007, Lee became the offensive coordinator for the Razorbacks. Both Malzahn and Lee ran a variation of the wildcat formation which prominently featured running backs Darren McFadden and Felix Jones. The wildcat formation was sometimes called the "wildhog" (in honor of the Razorback mascot at the University of Arkansas) and subsequently rebranded as the "Wild Rebel" when Arkansas head coach Houston Nutt went to Ole Miss as head coach (Ole Miss' mascot being the Rebels), and a variation involving a direct snap to a tight end has also been called the "Wild Turkey" popularized by the Virginia Tech Hokies.
Several other college teams have used the wildcat formation regularly, including the Wildcats of Kansas State, Kentucky, and Villanova, as well as the Pitt Panthers. Pitt had great success with the formation, having star running back LeSean McCoy or running back LaRod Stephens-Howling take the snap. The Panthers scored numerous times from this formation during those years. Villanova won the 2009 FCS championship with a multiple offense that included the wildcat, with wide receiver Matt Szczur taking the snap. Szczur scored a key touchdown in the Wildcats' semifinal against William & Mary out of the formation, and made a number of big plays out of the wildcat against Montana in the final.
UCF also uses a wildcat formation they call the "Wild Knight". It was originally intended to be run by Rob Calabrese, even after he lost the starting job in 2010 to Jeff Godfrey, but he tore his ACL using the play to score a rushing touchdown against Marshall on October 13, 2010. At the time, most agreed that Calabrese was effective at running the Wild Knight formation.
National Football League
The wildcat formation made an appearance in 1998, when Minnesota Vikings offensive coordinator Brian Billick began employing formations where QB Randall Cunningham lined up as a wide receiver and third-down specialist David Palmer took the direct snap from the center with the option to pass or run.
In the 1998 NFC Championship, with 7:58 to go in the third quarter, on a second and 5 play, the Atlanta Falcons deployed quarterback Chris Chandler wide left as a receiver while receiver Tim Dwight took a direct snap and ran 20 yards for a first down.
In a December 24, 2006 game between the Carolina Panthers and Atlanta Falcons, the Panthers deployed a formation without a quarterback and snapped the ball directly to running back DeAngelo Williams for much of the game. The Panthers, under head coach John Fox and offensive coordinator Dan Henning, elected to run the ball (mostly in this formation) for the first twelve plays of the opening drive, and ran the ball 52 times, with only 7 passing plays. The coaching staff named the package "Tiger" when running back DeAngelo Williams was on the field and "Wildcat" when backup quarterback Brett Basanez was under center, both after their respective alma maters, the University of Memphis and Northwestern University. Coordinator Henning later developed this concept into the "Wildcat" as the offensive coordinator for the Miami Dolphins.
Relying on the experience of quarterbacks coach David Lee who had run the scheme at Arkansas, the 2008 Miami Dolphins under Henning implemented the wildcat offense beginning in the third game of the 2008 season with great success, instigating a wider trend throughout the NFL. The Dolphins started the wildcat trend in the NFL lining up either running back Ronnie Brown (in most cases) or Ricky Williams to take a shotgun snap with the option of handing off, running, or throwing. Through eleven games, the wildcat averaged over seven yards per play for the Dolphins. "It could be the single wing, it could be the Delaware split buck business that they used to do," Dolphins offensive coordinator Dan Henning said. "It comes from all of that." On September 21, 2008, the Miami Dolphins used the wildcat offense against the New England Patriots on six plays, which produced 5 touchdowns (four rushing and one passing—from Ronnie Brown himself) in a 38–13 upset victory.
As the popularity of the wildcat spread during the 2008 NFL season, several teams began instituting it as a part of their playbook.
Defending plays from the wildcat requires linemen and linebackers to know and execute their own assignments without over-pursuing what may turn into a fake or a reverse. The formation's initial success in 2008 can be attributed in part to surprise: defenses had not practiced their countermeasures against such an unusual offensive strategy. Since then, most teams have been well prepared to stop the wildcat; an example came in November 2008, when the Patriots traveled to Miami nine weeks after the Dolphins' win in Foxborough. Bill Belichick's defense limited the wildcat to just 27 yards and forced the Dolphins to try a conventional passing attack; the lead changed six times, but the Patriots wore out the Dolphins with a 48–28 win.
Though defenses now understand how to stop the wildcat, it does not mean the formation is no longer useful. A defense's practice time is finite. Opponents who prepare to stop the wildcat have less time available to prepare for other offensive approaches. Many teams admit to spending an inordinate amount of time having to prepare for this scheme. The Philly Special, an iconic play during Super Bowl LII, was run out of the wildcat.
Other teams that use the wildcat formation in the NFL may use different names for their versions. For example, the Carolina Panthers call their version the 'Mountaineer formation', named after the Appalachian State Mountaineers, the alma mater of their wildcat quarterback Armanti Edwards, who played quarterback for the Mountaineers. The Denver Broncos utilize 'Wild Horses', developed in 2009. The New York Jets referred to their version as the Tigercat formation in reference to Brad Smith having attended the University of Missouri when Smith played for New York from 2009–2010. The 2011 Minnesota Vikings referred to their formation as the "Blazer package" which employed former UAB Blazers quarterback Joe Webb.
Canadian Football League
Until the 2009 season, a technicality in the league rules made the wildcat offense illegal; essentially, the rule stated that a designated quarterback must be in position to take all snaps. This has since been changed.
|
OBJECTIVE To investigate the expression of lymphatic vessel endothelial hyaluronan receptor-1 (LYVE-1) and the homeobox gene Prox-1 in patients with non-small cell lung cancer (NSCLC), and their relationship with microlymphatic vessel density, lymph node metastasis, and clinicopathological value. METHODS Forty NSCLC specimens (experimental group) and eleven specimens of benign pulmonary disease (control group) were studied. The expression of LYVE-1, Prox-1, and CD31 protein in specimens of NSCLC and normal pulmonary tissue was studied with the immunohistochemical (IHC) technique. Microlymphatic vessel density (MLVD) and microvessel density (MVD) were counted. Meanwhile, all specimens were also examined by conventional pathological methods. Clinicopathological data of each patient were recorded and analyzed. RESULTS Among the 40 cases, the MLVDs in the center of the NSCLC cancerous tissues marked by LYVE-1 and Prox-1 were 4.22 +/- 1.25 and 1.99 +/- 1.49 respectively, which were significantly lower than those of the benign pulmonary disease tissues (P = 0.00). The MLVDs marked by LYVE-1 and Prox-1 in the NSCLC cancerous invasive edge were 10.89 +/- 2.06 and 6.63 +/- 1.99 respectively, which were significantly higher than those in the center of the cancerous tissues and those of the benign pulmonary disease tissues (P = 0.000). The MLVDs marked by LYVE-1 and Prox-1 in the cancerous invasive edge were not correlated with age, gender, site and dimension of the lesion, histological type, or degree of differentiation, but correlated significantly with lymph node metastasis (P = 0.000) and PTNM stage (P = 0.000). Meanwhile, with lymph node metastasis and increasing PTNM stage, the expression of LYVE-1 and Prox-1 protein and the MLVDs increased significantly, whereas the microvessel density marked by CD31 in the cancerous invasive edge was not correlated significantly with lymph node metastasis (P = 0.450) or PTNM stage (P = 0.377).
A significant correlation between LYVE-1 and Prox-1 expression (r = 0.529, P = 0.000) was observed in NSCLC; moreover, no correlations between LYVE-1 and CD31 or between Prox-1 and CD31 (r = 0.034, P = 0.837; r = -0.075, P = 0.647) were observed. CONCLUSION The functional microlymphatic vessels correlated with lymphatic metastasis are mainly located in the cancerous invasive edge rather than the center of cancerous tissues. LYVE-1 and Prox-1 might act as molecular phenotypes of lymphangiogenesis in NSCLC and as important markers for evaluating lymphatic metastasis and prognosis in patients with NSCLC.
|
The St. Louis Circuit Attorney’s Office has charged 29-year-old Dujuan Williams with murder, assault, and armed criminal action in a homicide that occurred on July 4, 2017. Police say Williams was the shooter in a vehicle that fired upon a car in the 1600 block of Cole in north St. Louis.
Two victims were shot; one of them, Bobby Slack, 24, died at the scene. The second victim, a 26-year-old man, suffered minor injuries.
Williams’ bond has been set at $750,000, cash only.
|
package com.example.breeze;

import androidx.appcompat.app.AppCompatActivity;
import android.os.Bundle;

public class welcome_user extends AppCompatActivity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_welcome_user);
    }
}
|
Training and detraining effects of a combined strength and aerobic exercise program on blood lipids in patients with coronary artery disease. PURPOSE The aim of this study was to investigate training and detraining effects on blood lipids and apolipoproteins induced by a specific program that combined strength and aerobic exercise in patients with coronary artery disease (CAD). METHODS For this study, 14 patients participated in a supervised 8-month training program composed of two strength sessions (60% of 1 repetition maximum) and two aerobic training sessions (60%-85% of maximum heart rate), and 13 patients served as a control group. Blood samples for total cholesterol (TC), triglycerides (TG), high-density lipoprotein cholesterol (HDL-C), apolipoproteins A1 (apo-A1) and B (apo-B), and lipoprotein(a) [Lp(a)] were obtained, along with muscular strength measurements, at the beginning of the study, after 4 and 8 months of training, and after 3 months of detraining. RESULTS The patients in the intervention group showed favorable alterations after 8 months of training (TC, -9.4%; TG, -18.6%; HDL-C, +5.2%; apo-A1, +11.2%; P < .05), but these were reversed after 3 months of detraining (TC, +3.7%; TG, +16.1%; HDL-C, -3.6%; apo-A1, -5.5%). In addition, body strength also improved after training (+27.8%) but was reversed (-12.9%) after detraining (P < .05). The patients in the control group did not experience any significant alterations. CONCLUSIONS The results indicate that an 8-month training program combining strength and aerobic exercise induces favorable muscular and biochemical adaptations in TC, TG, HDL-C, and apo-A1 levels, protecting patients with CAD. After 3 months of detraining, however, the favorable adaptations were reversed, underscoring the need for uninterrupted exercise throughout life.
|
import java.util.Date;
import net.runelite.mapping.Export;
import net.runelite.mapping.Implements;
import net.runelite.mapping.ObfuscatedName;
import net.runelite.mapping.ObfuscatedSignature;
import net.runelite.rs.ScriptOpcodes;
@ObfuscatedName("w")
@Implements("WorldMapData_0")
public class WorldMapData_0 extends AbstractWorldMapData {
@ObfuscatedName("sz")
@ObfuscatedSignature(
signature = "Lkg;"
)
@Export("masterDisk")
static ArchiveDisk masterDisk;
@ObfuscatedName("ex")
@ObfuscatedSignature(
signature = "Lkw;"
)
@Export("spriteIds")
static GraphicsDefaults spriteIds;
@ObfuscatedName("gz")
@ObfuscatedSignature(
signature = "[Llp;"
)
@Export("modIconSprites")
static IndexedSprite[] modIconSprites;
WorldMapData_0() {
}
@ObfuscatedName("z")
@ObfuscatedSignature(
signature = "(Lkl;I)V",
garbageValue = "-443857335"
)
@Export("init")
void init(Buffer var1) {
int var2 = var1.readUnsignedByte();
if (var2 != WorldMapID.field256.value) {
throw new IllegalStateException("");
} else {
super.minPlane = var1.readUnsignedByte();
super.planes = var1.readUnsignedByte();
super.regionXLow = var1.readUnsignedShort() * 4096;
super.regionYLow = var1.readUnsignedShort() * 4096;
super.regionX = var1.readUnsignedShort();
super.regionY = var1.readUnsignedShort();
super.groupId = var1.method5453();
super.fileId = var1.method5453();
}
}
@ObfuscatedName("n")
@ObfuscatedSignature(
signature = "(Lkl;I)V",
garbageValue = "1549979331"
)
@Export("readGeography")
void readGeography(Buffer var1) {
super.planes = Math.min(super.planes, 4);
super.floorUnderlayIds = new short[1][64][64];
super.floorOverlayIds = new short[super.planes][64][64];
super.field164 = new byte[super.planes][64][64];
super.field152 = new byte[super.planes][64][64];
super.decorations = new WorldMapDecoration[super.planes][64][64][];
int var2 = var1.readUnsignedByte();
if (var2 != class30.field253.value) {
throw new IllegalStateException("");
} else {
int var3 = var1.readUnsignedByte();
int var4 = var1.readUnsignedByte();
if (var3 == super.regionX && var4 == super.regionY) {
for (int var5 = 0; var5 < 64; ++var5) {
for (int var6 = 0; var6 < 64; ++var6) {
this.readTile(var5, var6, var1);
}
}
} else {
throw new IllegalStateException("");
}
}
}
public boolean equals(Object var1) {
if (!(var1 instanceof WorldMapData_0)) {
return false;
} else {
WorldMapData_0 var2 = (WorldMapData_0)var1;
return var2.regionX == super.regionX && var2.regionY == super.regionY;
}
}
public int hashCode() {
return super.regionX | super.regionY << 8;
}
@ObfuscatedName("v")
@ObfuscatedSignature(
signature = "([Lbo;II[I[IB)V",
garbageValue = "59"
)
@Export("sortWorlds")
static void sortWorlds(World[] var0, int var1, int var2, int[] var3, int[] var4) {
if (var1 < var2) {
int var5 = var1 - 1;
int var6 = var2 + 1;
int var7 = (var2 + var1) / 2;
World var8 = var0[var7];
var0[var7] = var0[var1];
var0[var1] = var8;
while (var5 < var6) {
boolean var9 = true;
int var10;
int var11;
int var12;
do {
--var6;
for (var10 = 0; var10 < 4; ++var10) {
if (var3[var10] == 2) {
var11 = var0[var6].index;
var12 = var8.index;
} else if (var3[var10] == 1) {
var11 = var0[var6].population;
var12 = var8.population;
if (var11 == -1 && var4[var10] == 1) {
var11 = 2001;
}
if (var12 == -1 && var4[var10] == 1) {
var12 = 2001;
}
} else if (var3[var10] == 3) {
var11 = var0[var6].isMembersOnly() ? 1 : 0;
var12 = var8.isMembersOnly() ? 1 : 0;
} else {
var11 = var0[var6].id;
var12 = var8.id;
}
if (var11 != var12) {
if ((var4[var10] != 1 || var11 <= var12) && (var4[var10] != 0 || var11 >= var12)) {
var9 = false;
}
break;
}
if (var10 == 3) {
var9 = false;
}
}
} while(var9);
var9 = true;
do {
++var5;
for (var10 = 0; var10 < 4; ++var10) {
if (var3[var10] == 2) {
var11 = var0[var5].index;
var12 = var8.index;
} else if (var3[var10] == 1) {
var11 = var0[var5].population;
var12 = var8.population;
if (var11 == -1 && var4[var10] == 1) {
var11 = 2001;
}
if (var12 == -1 && var4[var10] == 1) {
var12 = 2001;
}
} else if (var3[var10] == 3) {
var11 = var0[var5].isMembersOnly() ? 1 : 0;
var12 = var8.isMembersOnly() ? 1 : 0;
} else {
var11 = var0[var5].id;
var12 = var8.id;
}
if (var12 != var11) {
if ((var4[var10] != 1 || var11 >= var12) && (var4[var10] != 0 || var11 <= var12)) {
var9 = false;
}
break;
}
if (var10 == 3) {
var9 = false;
}
}
} while(var9);
if (var5 < var6) {
World var13 = var0[var5];
var0[var5] = var0[var6];
var0[var6] = var13;
}
}
sortWorlds(var0, var1, var6, var3, var4);
sortWorlds(var0, var6 + 1, var2, var3, var4);
}
}
@ObfuscatedName("u")
@ObfuscatedSignature(
signature = "(II)Z",
garbageValue = "387088123"
)
@Export("loadInterface")
public static boolean loadInterface(int var0) {
if (ViewportMouse.Widget_loadedInterfaces[var0]) {
return true;
} else if (!Widget.Widget_archive.tryLoadGroup(var0)) {
return false;
} else {
int var1 = Widget.Widget_archive.getGroupFileCount(var0);
if (var1 == 0) {
ViewportMouse.Widget_loadedInterfaces[var0] = true;
return true;
} else {
if (UserComparator5.Widget_interfaceComponents[var0] == null) {
UserComparator5.Widget_interfaceComponents[var0] = new Widget[var1];
}
for (int var2 = 0; var2 < var1; ++var2) {
if (UserComparator5.Widget_interfaceComponents[var0][var2] == null) {
byte[] var3 = Widget.Widget_archive.takeFile(var0, var2);
if (var3 != null) {
UserComparator5.Widget_interfaceComponents[var0][var2] = new Widget();
UserComparator5.Widget_interfaceComponents[var0][var2].id = var2 + (var0 << 16);
if (var3[0] == -1) {
UserComparator5.Widget_interfaceComponents[var0][var2].decode(new Buffer(var3));
} else {
UserComparator5.Widget_interfaceComponents[var0][var2].decodeLegacy(new Buffer(var3));
}
}
}
}
ViewportMouse.Widget_loadedInterfaces[var0] = true;
return true;
}
}
}
@ObfuscatedName("u")
@ObfuscatedSignature(
signature = "(ILhp;Ljava/lang/String;Ljava/lang/String;IZS)V",
garbageValue = "6590"
)
public static void method194(int var0, AbstractArchive var1, String var2, String var3, int var4, boolean var5) {
int var6 = var1.getGroupId(var2);
int var7 = var1.getFileId(var6, var3);
class197.field2386 = 1;
class197.musicTrackArchive = var1;
class188.musicTrackGroupId = var6;
class49.musicTrackFileId = var7;
TileItem.field1223 = var4;
WorldMapSectionType.musicTrackBoolean = var5;
MusicPatchNode2.field2382 = var0;
}
@ObfuscatedName("m")
@ObfuscatedSignature(
signature = "(Ljava/lang/CharSequence;B)I",
garbageValue = "71"
)
@Export("hashString")
public static int hashString(CharSequence var0) {
int var1 = var0.length();
int var2 = 0;
for (int var3 = 0; var3 < var1; ++var3) {
var2 = (var2 << 5) - var2 + Entity.charToByteCp1252(var0.charAt(var3));
}
return var2;
}
@ObfuscatedName("ax")
@ObfuscatedSignature(
signature = "(ILcu;ZI)I",
garbageValue = "-1153827827"
)
static int method177(int var0, Script var1, boolean var2) {
String var3;
int var4;
if (var0 == ScriptOpcodes.APPEND_NUM) {
var3 = Interpreter.Interpreter_stringStack[--Interpreter.Interpreter_stringStackSize];
var4 = Interpreter.Interpreter_intStack[--Interpreter.Interpreter_intStackSize];
Interpreter.Interpreter_stringStack[++Interpreter.Interpreter_stringStackSize - 1] = var3 + var4;
return 1;
} else {
String var9;
if (var0 == ScriptOpcodes.APPEND) {
Interpreter.Interpreter_stringStackSize -= 2;
var3 = Interpreter.Interpreter_stringStack[Interpreter.Interpreter_stringStackSize];
var9 = Interpreter.Interpreter_stringStack[Interpreter.Interpreter_stringStackSize + 1];
Interpreter.Interpreter_stringStack[++Interpreter.Interpreter_stringStackSize - 1] = var3 + var9;
return 1;
} else if (var0 == ScriptOpcodes.APPEND_SIGNNUM) {
var3 = Interpreter.Interpreter_stringStack[--Interpreter.Interpreter_stringStackSize];
var4 = Interpreter.Interpreter_intStack[--Interpreter.Interpreter_intStackSize];
Interpreter.Interpreter_stringStack[++Interpreter.Interpreter_stringStackSize - 1] = var3 + HealthBar.intToString(var4, true);
return 1;
} else if (var0 == ScriptOpcodes.LOWERCASE) {
var3 = Interpreter.Interpreter_stringStack[--Interpreter.Interpreter_stringStackSize];
Interpreter.Interpreter_stringStack[++Interpreter.Interpreter_stringStackSize - 1] = var3.toLowerCase();
return 1;
} else {
int var6;
int var10;
if (var0 == ScriptOpcodes.FROMDATE) {
var10 = Interpreter.Interpreter_intStack[--Interpreter.Interpreter_intStackSize];
long var11 = 86400000L * (11745L + (long)var10);
Interpreter.Interpreter_calendar.setTime(new Date(var11));
var6 = Interpreter.Interpreter_calendar.get(5);
int var16 = Interpreter.Interpreter_calendar.get(2);
int var8 = Interpreter.Interpreter_calendar.get(1);
Interpreter.Interpreter_stringStack[++Interpreter.Interpreter_stringStackSize - 1] = var6 + "-" + Interpreter.Interpreter_MONTHS[var16] + "-" + var8;
return 1;
} else if (var0 != ScriptOpcodes.TEXT_GENDER) {
if (var0 == ScriptOpcodes.TOSTRING) {
var10 = Interpreter.Interpreter_intStack[--Interpreter.Interpreter_intStackSize];
Interpreter.Interpreter_stringStack[++Interpreter.Interpreter_stringStackSize - 1] = Integer.toString(var10);
return 1;
} else if (var0 == ScriptOpcodes.COMPARE) {
Interpreter.Interpreter_stringStackSize -= 2;
Interpreter.Interpreter_intStack[++Interpreter.Interpreter_intStackSize - 1] = class189.method3615(Interpreter.compareStrings(Interpreter.Interpreter_stringStack[Interpreter.Interpreter_stringStackSize], Interpreter.Interpreter_stringStack[Interpreter.Interpreter_stringStackSize + 1], WorldMapLabelSize.clientLanguage));
return 1;
} else {
int var5;
byte[] var13;
Font var14;
if (var0 == ScriptOpcodes.PARAHEIGHT) {
var3 = Interpreter.Interpreter_stringStack[--Interpreter.Interpreter_stringStackSize];
Interpreter.Interpreter_intStackSize -= 2;
var4 = Interpreter.Interpreter_intStack[Interpreter.Interpreter_intStackSize];
var5 = Interpreter.Interpreter_intStack[Interpreter.Interpreter_intStackSize + 1];
var13 = Tile.archive13.takeFile(var5, 0);
var14 = new Font(var13);
Interpreter.Interpreter_intStack[++Interpreter.Interpreter_intStackSize - 1] = var14.lineCount(var3, var4);
return 1;
} else if (var0 == ScriptOpcodes.PARAWIDTH) {
var3 = Interpreter.Interpreter_stringStack[--Interpreter.Interpreter_stringStackSize];
Interpreter.Interpreter_intStackSize -= 2;
var4 = Interpreter.Interpreter_intStack[Interpreter.Interpreter_intStackSize];
var5 = Interpreter.Interpreter_intStack[Interpreter.Interpreter_intStackSize + 1];
var13 = Tile.archive13.takeFile(var5, 0);
var14 = new Font(var13);
Interpreter.Interpreter_intStack[++Interpreter.Interpreter_intStackSize - 1] = var14.lineWidth(var3, var4);
return 1;
} else if (var0 == ScriptOpcodes.TEXT_SWITCH) {
Interpreter.Interpreter_stringStackSize -= 2;
var3 = Interpreter.Interpreter_stringStack[Interpreter.Interpreter_stringStackSize];
var9 = Interpreter.Interpreter_stringStack[Interpreter.Interpreter_stringStackSize + 1];
if (Interpreter.Interpreter_intStack[--Interpreter.Interpreter_intStackSize] == 1) {
Interpreter.Interpreter_stringStack[++Interpreter.Interpreter_stringStackSize - 1] = var3;
} else {
Interpreter.Interpreter_stringStack[++Interpreter.Interpreter_stringStackSize - 1] = var9;
}
return 1;
} else if (var0 == ScriptOpcodes.ESCAPE) {
var3 = Interpreter.Interpreter_stringStack[--Interpreter.Interpreter_stringStackSize];
Interpreter.Interpreter_stringStack[++Interpreter.Interpreter_stringStackSize - 1] = AbstractFont.escapeBrackets(var3);
return 1;
} else if (var0 == ScriptOpcodes.APPEND_CHAR) {
var3 = Interpreter.Interpreter_stringStack[--Interpreter.Interpreter_stringStackSize];
var4 = Interpreter.Interpreter_intStack[--Interpreter.Interpreter_intStackSize];
Interpreter.Interpreter_stringStack[++Interpreter.Interpreter_stringStackSize - 1] = var3 + (char)var4;
return 1;
} else if (var0 == ScriptOpcodes.CHAR_ISPRINTABLE) {
var10 = Interpreter.Interpreter_intStack[--Interpreter.Interpreter_intStackSize];
Interpreter.Interpreter_intStack[++Interpreter.Interpreter_intStackSize - 1] = TileItem.isCharPrintable((char)var10) ? 1 : 0;
return 1;
} else if (var0 == ScriptOpcodes.CHAR_ISALPHANUMERIC) {
var10 = Interpreter.Interpreter_intStack[--Interpreter.Interpreter_intStackSize];
Interpreter.Interpreter_intStack[++Interpreter.Interpreter_intStackSize - 1] = AbstractArchive.isAlphaNumeric((char)var10) ? 1 : 0;
return 1;
} else if (var0 == ScriptOpcodes.CHAR_ISALPHA) {
var10 = Interpreter.Interpreter_intStack[--Interpreter.Interpreter_intStackSize];
Interpreter.Interpreter_intStack[++Interpreter.Interpreter_intStackSize - 1] = UserComparator7.isCharAlphabetic((char)var10) ? 1 : 0;
return 1;
} else if (var0 == ScriptOpcodes.CHAR_ISNUMERIC) {
var10 = Interpreter.Interpreter_intStack[--Interpreter.Interpreter_intStackSize];
Interpreter.Interpreter_intStack[++Interpreter.Interpreter_intStackSize - 1] = AbstractWorldMapIcon.isDigit((char)var10) ? 1 : 0;
return 1;
} else if (var0 == ScriptOpcodes.STRING_LENGTH) {
var3 = Interpreter.Interpreter_stringStack[--Interpreter.Interpreter_stringStackSize];
if (var3 != null) {
Interpreter.Interpreter_intStack[++Interpreter.Interpreter_intStackSize - 1] = var3.length();
} else {
Interpreter.Interpreter_intStack[++Interpreter.Interpreter_intStackSize - 1] = 0;
}
return 1;
} else if (var0 == ScriptOpcodes.SUBSTRING) {
var3 = Interpreter.Interpreter_stringStack[--Interpreter.Interpreter_stringStackSize];
Interpreter.Interpreter_intStackSize -= 2;
var4 = Interpreter.Interpreter_intStack[Interpreter.Interpreter_intStackSize];
var5 = Interpreter.Interpreter_intStack[Interpreter.Interpreter_intStackSize + 1];
Interpreter.Interpreter_stringStack[++Interpreter.Interpreter_stringStackSize - 1] = var3.substring(var4, var5);
return 1;
} else if (var0 == ScriptOpcodes.REMOVETAGS) {
var3 = Interpreter.Interpreter_stringStack[--Interpreter.Interpreter_stringStackSize];
StringBuilder var17 = new StringBuilder(var3.length());
boolean var15 = false;
for (var6 = 0; var6 < var3.length(); ++var6) {
char var7 = var3.charAt(var6);
if (var7 == '<') {
var15 = true;
} else if (var7 == '>') {
var15 = false;
} else if (!var15) {
var17.append(var7);
}
}
Interpreter.Interpreter_stringStack[++Interpreter.Interpreter_stringStackSize - 1] = var17.toString();
return 1;
} else if (var0 == ScriptOpcodes.STRING_INDEXOF_CHAR) {
var3 = Interpreter.Interpreter_stringStack[--Interpreter.Interpreter_stringStackSize];
var4 = Interpreter.Interpreter_intStack[--Interpreter.Interpreter_intStackSize];
Interpreter.Interpreter_intStack[++Interpreter.Interpreter_intStackSize - 1] = var3.indexOf(var4);
return 1;
} else if (var0 == ScriptOpcodes.STRING_INDEXOF_STRING) {
Interpreter.Interpreter_stringStackSize -= 2;
var3 = Interpreter.Interpreter_stringStack[Interpreter.Interpreter_stringStackSize];
var9 = Interpreter.Interpreter_stringStack[Interpreter.Interpreter_stringStackSize + 1];
var5 = Interpreter.Interpreter_intStack[--Interpreter.Interpreter_intStackSize];
Interpreter.Interpreter_intStack[++Interpreter.Interpreter_intStackSize - 1] = var3.indexOf(var9, var5);
return 1;
} else if (var0 == ScriptOpcodes.UPPERCASE) {
var3 = Interpreter.Interpreter_stringStack[--Interpreter.Interpreter_stringStackSize];
Interpreter.Interpreter_stringStack[++Interpreter.Interpreter_stringStackSize - 1] = var3.toUpperCase();
return 1;
} else {
return 2;
}
}
} else {
Interpreter.Interpreter_stringStackSize -= 2;
var3 = Interpreter.Interpreter_stringStack[Interpreter.Interpreter_stringStackSize];
var9 = Interpreter.Interpreter_stringStack[Interpreter.Interpreter_stringStackSize + 1];
if (class223.localPlayer.appearance != null && class223.localPlayer.appearance.isFemale) {
Interpreter.Interpreter_stringStack[++Interpreter.Interpreter_stringStackSize - 1] = var9;
} else {
Interpreter.Interpreter_stringStack[++Interpreter.Interpreter_stringStackSize - 1] = var3;
}
return 1;
}
}
}
}
}
|
/// Recursively searches the catalog for a matching authority
pub fn find(&self, name: &LowerName) -> Option<&RwLock<Authority>> {
self.authorities.get(name).or_else(|| {
let name = name.base_name();
if !name.is_root() {
self.find(&name)
} else {
None
}
})
}
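The fallback recursion above can be sketched with a plain `HashMap` standing in for the catalog. Everything below is illustrative rather than the real trust-dns types: zone names are plain strings ending in `.`, `"."` plays the role of the root, and `base_name` mimics dropping the leftmost label the way `LowerName::base_name` does.

```rust
use std::collections::HashMap;

// Drop the leftmost label: "api.example.com." -> "example.com.",
// "com." -> "." (the root). Illustrative helper, not the trust-dns API.
fn base_name(name: &str) -> String {
    match name.splitn(2, '.').nth(1) {
        Some(rest) if !rest.is_empty() => rest.to_string(),
        _ => ".".to_string(),
    }
}

// Mirrors the recursion in `Catalog::find`: try an exact match first,
// then walk toward the root until a zone is found or the root is reached.
fn find<'a>(zones: &'a HashMap<String, String>, name: &str) -> Option<&'a String> {
    zones.get(name).or_else(|| {
        if name == "." {
            None
        } else {
            find(zones, &base_name(name))
        }
    })
}

fn main() {
    let mut zones = HashMap::new();
    zones.insert("example.com.".to_string(), "example.com. zone".to_string());
    // A query under the zone resolves to the closest enclosing authority.
    assert!(find(&zones, "api.example.com.").is_some());
    // A name with no enclosing zone walks all the way to the root and fails.
    assert!(find(&zones, "other.org.").is_none());
    println!("lookup ok");
}
```

The same shape explains why the real method stops at `name.is_root()`: without that guard, a miss at the root would recurse on the root forever.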
|
// License: BSD 2 Clause
// Copyright (C) 2015+, The LabSound Authors. All rights reserved.
#include "LabSound/core/AudioContext.h"
#include "LabSound/core/AnalyserNode.h"
#include "LabSound/core/AudioHardwareDeviceNode.h"
#include "LabSound/core/AudioHardwareInputNode.h"
#include "LabSound/core/AudioListener.h"
#include "LabSound/core/AudioNodeInput.h"
#include "LabSound/core/AudioNodeOutput.h"
#include "LabSound/core/NullDeviceNode.h"
#include "LabSound/core/OscillatorNode.h"
#include "LabSound/extended/AudioContextLock.h"
#include "internal/Assertions.h"
#include "concurrentqueue/concurrentqueue.h"
#include "libnyquist/Encoders.h"
#include <assert.h>
#include <queue>
#include <stdio.h>
namespace lab
{
enum class ConnectionOperationKind : int
{
Disconnect = 0,
Connect,
FinishDisconnect
};
struct PendingNodeConnection
{
ConnectionOperationKind type;
std::shared_ptr<AudioNode> destination;
std::shared_ptr<AudioNode> source;
int destIndex = 0;
int srcIndex = 0;
float duration = 0.1f;
PendingNodeConnection() = default;
~PendingNodeConnection() = default;
};
struct PendingParamConnection
{
ConnectionOperationKind type;
std::shared_ptr<AudioParam> destination;
std::shared_ptr<AudioNode> source;
int destIndex = 0;
PendingParamConnection() = default;
~PendingParamConnection() = default;
};
struct AudioContext::Internals
{
Internals(bool a)
: autoDispatchEvents(a)
{
}
~Internals() = default;
bool autoDispatchEvents;
moodycamel::ConcurrentQueue<std::function<void()>> enqueuedEvents;
moodycamel::ConcurrentQueue<PendingNodeConnection> pendingNodeConnections;
moodycamel::ConcurrentQueue<PendingParamConnection> pendingParamConnections;
std::vector<float> debugBuffer;
const int debugBufferCapacity = 1024 * 1024;
int debugBufferIndex = 0;
void appendDebugBuffer(AudioBus* bus, int channel, int count)
{
if (!bus || channel >= bus->numberOfChannels() || !count)
return;
if (!debugBuffer.size())
{
debugBuffer.resize(debugBufferCapacity);
memset(debugBuffer.data(), 0, debugBufferCapacity * sizeof(float));
}
if (debugBufferIndex + count > debugBufferCapacity)
debugBufferIndex = 0;
memcpy(debugBuffer.data() + debugBufferIndex, bus->channel(channel)->data(), sizeof(float) * count);
debugBufferIndex += count;
}
void flushDebugBuffer(char const* const wavFilePath)
{
if (!debugBufferIndex || !wavFilePath)
return;
nqr::AudioData fileData;
fileData.samples.resize(debugBufferIndex + 32);
fileData.channelCount = 1;
float* dst = fileData.samples.data();
memcpy(dst, debugBuffer.data(), sizeof(float) * debugBufferIndex);
fileData.sampleRate = static_cast<int>(44100);
fileData.sourceFormat = nqr::PCM_FLT;
nqr::EncoderParams params = { 1, nqr::PCM_FLT, nqr::DITHER_NONE };
int err = nqr::encode_wav_to_disk(params, &fileData, wavFilePath);
debugBufferIndex = 0;
}
};
void AudioContext::appendDebugBuffer(AudioBus* bus, int channel, int count)
{
m_internal->appendDebugBuffer(bus, channel, count);
}
void AudioContext::flushDebugBuffer(char const* const wavFilePath)
{
m_internal->flushDebugBuffer(wavFilePath);
}
AudioContext::AudioContext(bool isOffline, bool autoDispatchEvents)
: m_isOfflineContext(isOffline)
{
static std::atomic<int> id {1};
m_internal.reset(new AudioContext::Internals(autoDispatchEvents));
m_listener.reset(new AudioListener());
m_audioContextInterface = std::make_shared<AudioContextInterface>(this, id);
++id;
if (isOffline)
{
updateThreadShouldRun = 1;
graphKeepAlive = 0;
}
}
AudioContext::~AudioContext()
{
LOG_TRACE("Begin AudioContext::~AudioContext()");
m_audioContextInterface.reset();
if (!isOfflineContext())
graphKeepAlive = 0.25f;
updateThreadShouldRun = 0;
cv.notify_all();
if (graphUpdateThread.joinable())
graphUpdateThread.join();
m_listener.reset();
uninitialize();
#if USE_ACCELERATE_FFT
FFTFrame::cleanup();
#endif
ASSERT(!m_isInitialized);
ASSERT(!m_automaticPullNodes.size());
ASSERT(!m_renderingAutomaticPullNodes.size());
LOG_INFO("Finish AudioContext::~AudioContext()");
}
void AudioContext::lazyInitialize()
{
if (m_isInitialized)
return;
// Don't allow the context to initialize a second time after it's already been explicitly uninitialized.
ASSERT(!m_isAudioThreadFinished);
if (m_isAudioThreadFinished)
return;
if (m_device.get())
{
if (!isOfflineContext())
{
// This starts the audio thread and all audio rendering.
// The destination node's provideInput() method will now be called repeatedly to render audio.
// Each time provideInput() is called, a portion of the audio stream is rendered.
graphKeepAlive = 0.25f; // pump the graph for the first 0.25 seconds
graphUpdateThread = std::thread(&AudioContext::update, this);
device_callback->start();
}
cv.notify_all();
m_isInitialized = true;
}
else
{
LOG_ERROR("m_device not specified");
ASSERT(m_device);
}
}
void AudioContext::uninitialize()
{
LOG_TRACE("AudioContext::uninitialize()");
if (!m_isInitialized)
return;
// for the case where an OfflineAudioDestinationNode needs to update the graph:
updateAutomaticPullNodes();
// This stops the audio thread and all audio rendering.
device_callback->stop();
// Don't allow the context to initialize a second time after it's already been explicitly uninitialized.
m_isAudioThreadFinished = true;
updateAutomaticPullNodes(); // added for the case where an NullDeviceNode needs to update the graph
m_isInitialized = false;
}
bool AudioContext::isInitialized() const
{
return m_isInitialized;
}
void AudioContext::handlePreRenderTasks(ContextRenderLock & r)
{
ASSERT(r.context());
// At the beginning of every render quantum, update the graph.
m_audioContextInterface->_currentTime = currentTime();
// check for pending connections
if (m_internal->pendingParamConnections.size_approx() > 0 ||
m_internal->pendingNodeConnections.size_approx() > 0)
{
// take a graph lock until the queues are cleared
ContextGraphLock gLock(this, "AudioContext::handlePreRenderTasks()");
// resolve parameter connections
PendingParamConnection param_connection;
while (m_internal->pendingParamConnections.try_dequeue(param_connection))
{
if (param_connection.type == ConnectionOperationKind::Connect)
{
AudioParam::connect(gLock,
param_connection.destination,
param_connection.source->output(param_connection.destIndex));
// if unscheduled, the source should start to play as soon as possible
if (!param_connection.source->isScheduledNode())
param_connection.source->_scheduler.start(0);
}
else
AudioParam::disconnect(gLock,
param_connection.destination,
param_connection.source->output(param_connection.destIndex));
}
// resolve node connections
PendingNodeConnection node_connection;
std::vector<PendingNodeConnection> requeued_connections;
while (m_internal->pendingNodeConnections.try_dequeue(node_connection))
{
switch (node_connection.type)
{
case ConnectionOperationKind::Connect:
{
AudioNodeInput::connect(gLock,
node_connection.destination->input(node_connection.destIndex),
node_connection.source->output(node_connection.srcIndex));
if (!node_connection.source->isScheduledNode())
node_connection.source->_scheduler.start(0);
}
break;
case ConnectionOperationKind::Disconnect:
{
node_connection.type = ConnectionOperationKind::FinishDisconnect;
requeued_connections.push_back(node_connection); // save for later
if (node_connection.source)
{
// if source and destination are specified, then don't ramp out the destination
// source will be completely disconnected
node_connection.source->scheduleDisconnect();
}
else if (node_connection.destination)
{
// destination will be completely disconnected
node_connection.destination->scheduleDisconnect();
}
}
break;
case ConnectionOperationKind::FinishDisconnect:
{
if (node_connection.duration > 0)
{
node_connection.duration -= AudioNode::ProcessingSizeInFrames / sampleRate();
requeued_connections.push_back(node_connection);
continue;
}
if (node_connection.source && node_connection.destination)
{
//if (!node_connection.destination->disconnectionReady() || !node_connection.source->disconnectionReady())
// requeued_connections.push_back(node_connection);
//else
AudioNodeInput::disconnect(gLock, node_connection.destination->input(node_connection.destIndex), node_connection.source->output(node_connection.srcIndex));
}
else if (node_connection.destination)
{
//if (!node_connection.destination->disconnectionReady())
// requeued_connections.push_back(node_connection);
//else
for (int in = 0; in < node_connection.destination->numberOfInputs(); ++in)
{
auto input = node_connection.destination->input(in);
if (input)
AudioNodeInput::disconnectAll(gLock, input);
}
}
else if (node_connection.source)
{
//if (!node_connection.destination->disconnectionReady())
// requeued_connections.push_back(node_connection);
//else
for (int out = 0; out < node_connection.source->numberOfOutputs(); ++out)
{
auto output = node_connection.source->output(out);
if (output)
AudioNodeOutput::disconnectAll(gLock, output);
}
}
}
break;
}
}
// We have incompletely connected nodes, so next time the thread ticks we can re-check them
for (auto & sc : requeued_connections)
m_internal->pendingNodeConnections.enqueue(sc);
}
AudioSummingJunction::handleDirtyAudioSummingJunctions(r);
updateAutomaticPullNodes();
}
void AudioContext::handlePostRenderTasks(ContextRenderLock & r)
{
ASSERT(r.context());
AudioSummingJunction::handleDirtyAudioSummingJunctions(r);
updateAutomaticPullNodes();
}
void AudioContext::synchronizeConnections(int timeOut_ms)
{
cv.notify_all();
// don't synch if the context is suspended as that will simply max out the timeout
if (!device_callback->isRunning())
return;
while (m_internal->pendingNodeConnections.size_approx() > 0 && timeOut_ms > 0)
{
std::this_thread::sleep_for(std::chrono::milliseconds(5));
timeOut_ms -= 5;
}
}
void AudioContext::connect(std::shared_ptr<AudioNode> destination, std::shared_ptr<AudioNode> source, int destIdx, int srcIdx)
{
if (!destination)
throw std::runtime_error("Cannot connect to null destination");
if (!source)
throw std::runtime_error("Cannot connect from null source");
if (srcIdx > source->numberOfOutputs())
throw std::out_of_range("Output index greater than available outputs");
if (destIdx > destination->numberOfInputs())
throw std::out_of_range("Input index greater than available inputs");
m_internal->pendingNodeConnections.enqueue({ConnectionOperationKind::Connect, destination, source, destIdx, srcIdx});
}
void AudioContext::disconnect(std::shared_ptr<AudioNode> destination, std::shared_ptr<AudioNode> source, int destIdx, int srcIdx)
{
if (!destination && !source)
return;
if (source && srcIdx > source->numberOfOutputs())
throw std::out_of_range("Output index greater than available outputs");
if (destination && destIdx > destination->numberOfInputs())
throw std::out_of_range("Input index greater than available inputs");
m_internal->pendingNodeConnections.enqueue({ConnectionOperationKind::Disconnect, destination, source, destIdx, srcIdx});
}
void AudioContext::disconnect(std::shared_ptr<AudioNode> node, int index)
{
if (!node)
return;
m_internal->pendingNodeConnections.enqueue({ConnectionOperationKind::Disconnect, node, std::shared_ptr<AudioNode>(), index, 0});
}
bool AudioContext::isConnected(std::shared_ptr<AudioNode> destination, std::shared_ptr<AudioNode> source)
{
if (!destination || !source)
return false;
AudioNode* n = source.get();
for (int i = 0; i < destination->numberOfInputs(); ++i)
{
auto c = destination->input(i);
if (c->destinationNode() == n)
return true;
}
return false;
}
void AudioContext::connectParam(std::shared_ptr<AudioParam> param, std::shared_ptr<AudioNode> driver, int index)
{
if (!param)
throw std::invalid_argument("No parameter specified");
if (!driver)
throw std::invalid_argument("No driving node supplied");
if (index >= driver->numberOfOutputs())
throw std::out_of_range("Output index greater than available outputs on the driver");
m_internal->pendingParamConnections.enqueue({ConnectionOperationKind::Connect, param, driver, index});
}
// connect a named parameter on a node to receive the indexed output of a node
void AudioContext::connectParam(std::shared_ptr<AudioNode> destinationNode, char const*const parameterName,
std::shared_ptr<AudioNode> driver, int index)
{
if (!parameterName)
throw std::invalid_argument("No parameter specified");
std::shared_ptr<AudioParam> param = destinationNode->param(parameterName);
if (!param)
throw std::invalid_argument("Parameter not found on node");
if (!driver)
throw std::invalid_argument("No driving node supplied");
if (index >= driver->numberOfOutputs())
throw std::out_of_range("Output index greater than available outputs on the driver");
m_internal->pendingParamConnections.enqueue({ConnectionOperationKind::Connect, param, driver, index});
}
void AudioContext::disconnectParam(std::shared_ptr<AudioParam> param, std::shared_ptr<AudioNode> driver, int index)
{
if (!param)
throw std::invalid_argument("No parameter specified");
if (index >= driver->numberOfOutputs())
throw std::out_of_range("Output index greater than available outputs on the driver");
m_internal->pendingParamConnections.enqueue({ConnectionOperationKind::Disconnect, param, driver, index});
}
void AudioContext::update()
{
if (!m_isOfflineContext) { LOG_TRACE("Begin UpdateGraphThread"); }
const float frameLengthInMilliseconds = (sampleRate() / (float) AudioNode::ProcessingSizeInFrames) / 1000.f; // = ~0.345ms @ 44.1k/128
const float graphTickDurationMs = frameLengthInMilliseconds * 16; // = ~5.5ms
const uint32_t graphTickDurationUs = static_cast<uint32_t>(graphTickDurationMs * 1000.f); // = ~5550us
ASSERT(frameLengthInMilliseconds);
ASSERT(graphTickDurationMs);
ASSERT(graphTickDurationUs);
// graphKeepAlive keeps the thread alive momentarily (letting tail tasks
// finish) even after updateThreadShouldRun has been signaled.
while (updateThreadShouldRun != 0 || graphKeepAlive > 0)
{
if (updateThreadShouldRun > 0)
--updateThreadShouldRun;
// A `unique_lock` automatically acquires a lock on construction. The purpose of
// this mutex is to synchronize updates to the graph from the main thread,
// primarily through `connect(...)` and `disconnect(...)`.
std::unique_lock<std::mutex> lk;
if (!m_isOfflineContext)
{
lk = std::unique_lock<std::mutex>(m_updateMutex);
cv.wait_until(lk, std::chrono::steady_clock::now() + std::chrono::microseconds(5000)); // awake every five milliseconds
}
if (m_internal->autoDispatchEvents)
dispatchEvents();
{
const double now = currentTime();
const float delta = static_cast<float>(now - lastGraphUpdateTime);
lastGraphUpdateTime = static_cast<float>(now);
graphKeepAlive -= delta;
}
if (lk.owns_lock())
lk.unlock();
if (!updateThreadShouldRun)
break;
}
if (!m_isOfflineContext) { LOG_TRACE("End UpdateGraphThread"); }
}
void AudioContext::addAutomaticPullNode(std::shared_ptr<AudioNode> node)
{
std::lock_guard<std::mutex> lock(m_updateMutex);
if (m_automaticPullNodes.find(node) == m_automaticPullNodes.end())
{
m_automaticPullNodes.insert(node);
m_automaticPullNodesNeedUpdating = true;
if (!node->isScheduledNode())
{
node->_scheduler.start(0);
}
}
}
void AudioContext::removeAutomaticPullNode(std::shared_ptr<AudioNode> node)
{
std::lock_guard<std::mutex> lock(m_updateMutex);
auto it = m_automaticPullNodes.find(node);
if (it != m_automaticPullNodes.end())
{
m_automaticPullNodes.erase(it);
m_automaticPullNodesNeedUpdating = true;
}
}
void AudioContext::updateAutomaticPullNodes()
{
/// @TODO this seems like work for the update thread.
/// m_automaticPullNodesNeedUpdating can go away in favor of
/// add and remove doing a cv.notify.
/// m_automaticPullNodes should be an add/remove vector
/// m_renderingAutomaticPullNodes should be the actual live vector
if (m_automaticPullNodesNeedUpdating)
{
std::lock_guard<std::mutex> lock(m_updateMutex);
// Copy from m_automaticPullNodes to m_renderingAutomaticPullNodes.
m_renderingAutomaticPullNodes.resize(m_automaticPullNodes.size());
unsigned j = 0;
for (auto i = m_automaticPullNodes.begin(); i != m_automaticPullNodes.end(); ++i, ++j)
{
m_renderingAutomaticPullNodes[j] = *i;
}
m_automaticPullNodesNeedUpdating = false;
}
}
void AudioContext::processAutomaticPullNodes(ContextRenderLock & r, int framesToProcess)
{
for (unsigned i = 0; i < m_renderingAutomaticPullNodes.size(); ++i)
{
m_renderingAutomaticPullNodes[i]->processIfNecessary(r, framesToProcess);
}
}
void AudioContext::enqueueEvent(std::function<void()> & fn)
{
m_internal->enqueuedEvents.enqueue(fn);
cv.notify_all(); // processing thread must dispatch events
}
void AudioContext::dispatchEvents()
{
std::function<void()> event_fn;
while (m_internal->enqueuedEvents.try_dequeue(event_fn))
{
if (event_fn) event_fn();
}
}
void AudioContext::setDeviceNode(std::shared_ptr<AudioNode> device)
{
m_device = device;
if (auto * callback = dynamic_cast<AudioDeviceRenderCallback *>(device.get()))
{
device_callback = callback;
}
}
std::shared_ptr<AudioNode> AudioContext::device()
{
return m_device;
}
bool AudioContext::isOfflineContext() const
{
return m_isOfflineContext;
}
std::shared_ptr<AudioListener> AudioContext::listener()
{
return m_listener;
}
double AudioContext::currentTime() const
{
return device_callback->getSamplingInfo().current_time;
}
uint64_t AudioContext::currentSampleFrame() const
{
return device_callback->getSamplingInfo().current_sample_frame;
}
double AudioContext::predictedCurrentTime() const
{
auto info = device_callback->getSamplingInfo();
uint64_t t = info.current_sample_frame;
double val = t / info.sampling_rate;
auto t2 = std::chrono::high_resolution_clock::now();
int index = t & 1;
if (!info.epoch[index].time_since_epoch().count())
return val;
std::chrono::duration<double> elapsed = t2 - info.epoch[index];
return val + elapsed.count();
}
float AudioContext::sampleRate() const
{
// sampleRate is called during AudioNode construction to initialize the
// scheduler, but DeviceNodes are not scheduled.
// during construction of DeviceNodes, the device_callback will not yet be
// ready, so bail out.
if (!device_callback)
return 0;
return device_callback->getSamplingInfo().sampling_rate;
}
void AudioContext::startOfflineRendering()
{
if (!m_isOfflineContext)
throw std::runtime_error("context was not constructed for offline rendering");
m_isInitialized = true;
device_callback->start();
}
void AudioContext::suspend()
{
device_callback->stop();
}
// if the context was suspended, resume the progression of time and processing in the audio context
void AudioContext::resume()
{
device_callback->start();
}
} // End namespace lab
|
Concurrent changes in regional cholinergic parameters and nest odor preference in the early postnatal rat after lead exposure. The effect of pre- and postnatal lead ingestion on choline acetyltransferase (ChAT) activity, muscarinic cholinergic receptors (mACHR) and on nest odor preference, was investigated in the early postnatal male rat. Long Evans dams were given either 0.15% or 0.25% lead acetate (controls, 0.125% or 0.075% sodium acetate) in their drinking solution from the first day of pregnancy and during lactation. Mean blood lead levels were 55 micrograms/100ml at postnatal day 6 in ML-treated offspring and 35 micrograms/100ml at PN 9 in LL-treated pups. General health of pups and dams was not affected. Lead-treated offspring showed a reduced preference for or ability to identify the smell of home bedding, when tested at PN 9. A decrease in binding (Bmax) of N-methylscopolamine (NMS) was detected in olfactory bulb and in visual cortex of LL-treated rats at PN 9; the affinity (KD) was unchanged. On the other hand, ChAT activity of ML-treated offspring was significantly increased in olfactory bulb at PN 6. These results suggest that stage-specific behaviors depending on sensory functions and cholinergic projection systems in related brain areas are sensitive to pre- and postnatal lead exposure.
|
Self-Adjuvanting Lipoprotein Conjugate αGalCer-RBD Induces Potent Immunity against SARS-CoV-2 and its Variants of Concern

Safe and effective vaccines against severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) and its variants are the best approach to successfully combat the COVID-19 pandemic. The receptor-binding domain (RBD) of the viral spike protein is a major target for developing candidate vaccines. α-Galactosylceramide (αGalCer), a potent invariant natural killer T (iNKT) cell agonist, was site-specifically conjugated to the N-terminus of the RBD to form an adjuvant−protein conjugate, which was anchored on the liposome surface. This is the first time that an iNKT cell agonist has been conjugated to a protein antigen. Compared to the unconjugated RBD/αGalCer mixture, the αGalCer-RBD conjugate induced significantly stronger humoral and cellular responses. The conjugate vaccine also showed effective cross-neutralization of all variants of concern (B.1.1.7/alpha, B.1.351/beta, P.1/gamma, B.1.617.2/delta, and B.1.1.529/omicron). These results suggest that the self-adjuvanting αGalCer-RBD has great potential as an effective COVID-19 vaccine candidate and that this strategy might be useful for designing various subunit vaccines.

■ INTRODUCTION

The coronavirus disease 2019 (COVID-19) pandemic, caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), has severely affected public health and economic stability and has led to social issues worldwide. As emerging variants drive a mounting death toll, safe and effective vaccines are still the best way to combat this global pandemic. Since the COVID-19 outbreak, vaccine candidates against SARS-CoV-2 have emerged at an unprecedented scale and speed worldwide. In general, candidate vaccines are divided into six categories: inactivated virus, live attenuated virus, recombinant viral vectors, protein subunits, virus-like particles, and nucleic acid-based candidates.
1 Although some advanced candidates have moved into clinical trial stages and have been granted licences, 2 uncertainties still remain due to the rapid spread of mutated SARS-CoV-2. Therefore, it is necessary to develop diverse platforms and strategies for preparing successful COVID-19 vaccines. The spike (S) protein plays a pivotal role in binding to the angiotensin-converting enzyme 2 (ACE2) receptor on host cells via its receptor-binding domain (RBD), 3,4 which is the major target for protein subunit vaccines against COVID-19. 5−8 These vaccines are designed to induce immune responses toward specific epitopes (B and T cell epitopes) on the S protein, particularly on the RBD, thereby averting eosinophilic immunopathology or antibody-dependent enhancement (ADE) of the disease. 9 In addition, subunit vaccines are relatively safe and easily manufactured compared with traditional vaccines based on the whole virus. Some chemicals commonly used to inactivate viruses, such as β-propiolactone, are potentially carcinogenic; they inactivate viruses at both the protein and nucleic acid levels and thus may destroy crucial antigenic protein structures. 10 However, the major drawback of protein subunit vaccines is their weak immunogenicity; thus, an adjuvant serving as a "danger signal" is often utilized in subunit vaccines to elicit robust humoral and cellular immunity. 11−13 Among the subunit vaccine candidates against COVID-19, several adjuvants, including aluminum, 8,14 the STING agonist cyclic di-GMP (CDG), 15 toll-like receptor (TLR) agonists such as TLR7/8 agonists, 16 CpG oligodeoxynucleotides (CpG), and monophosphoryl lipid A (MPLA), 17−19 have been utilized to successfully improve immunity in mice. However, substantially improving the immunogenicity of the antigen for subunit vaccines against COVID-19 remains a challenging task.
As a potent immune activator of invariant natural killer T (iNKT) cells, α-galactosylceramide (αGalCer, also known as KRN7000) has been applied in many vaccine constructs. 20,21 iNKT cells are a unique subset of T lymphocytes with phenotypic markers of both T and natural killer (NK) cells. Straddling the innate and adaptive arms of the immune system, these cells modulate a wide range of immune effector properties. Once activated by αGalCer, iNKT cells release copious cytokines (IL-4 and IFN-γ) and license dendritic cells (DCs) to enhance their capacity to induce specific humoral and cellular responses. 22−24 In addition, iNKT cells may also directly help B cells to proliferate and undergo antibody class switching and affinity maturation. 25 Therefore, αGalCer and its derivatives are important adjuvants, distinct from those directly regulating conventional T cell-dependent immunogenic responses, and are often used as admixed adjuvants in antitumor 26,27 and antiviral vaccines. 28 Besides, several studies have conjugated αGalCer with antigens such as small molecules, 29,30 carbohydrates, 31−33 and peptides 34−36 to develop potent self-adjuvanting vaccines. However, to date, αGalCer has not been reported to be conjugated with a protein antigen as a built-in stimulator. Conventional protein-based subunit vaccines are often simply admixed with external adjuvants. 11−13 In this work, we developed a covalently conjugated adjuvant−protein vaccine that is easily prepared and highly effective (Figure 1A). For the first time, an iNKT cell agonist was conjugated to a protein antigen as a built-in stimulator. As the RBD is an immunodominant antigen of SARS-CoV-2 and accounts for 90% of the immune serum-neutralizing activity, 37 the RBD (S protein residues 319−541) was chosen as the protein antigen to develop the conjugate vaccine.
Using a pyridoxal 5′-phosphate (PLP)-mediated transamination reaction, the N-terminus of the RBD protein was converted to a ketone and site-specifically conjugated with the adjuvant molecule via an oxime reaction (Figure 1B). 38,39 Herein, a single αGalCer molecule was covalently linked to the N-terminal amino acid (Arg) of the RBD, and the conjugate was further prepared as liposomes for use as a COVID-19 vaccine candidate. The co-delivery of the adjuvant and antigen was guaranteed by stable covalent conjugation; hence, the antigen-specific immune response could be boosted not only by indirect CD4+ and CD8+ T cell stimulation but also by direct help toward B cells from iNKT cells. 40 Meanwhile, the conjugation approach would also affect the physical properties of the adjuvant. There are three major advantages of this conjugate vaccine: a simple and well-defined composition with site-specific conjugation, no interference with immunogenic epitopes on the antigen protein, and potent efficiency at low adjuvant doses in a liposomal formulation. In this study, we explored the effect of conjugation on humoral and cellular immune responses in mice and assessed its potential for developing an effective vaccine candidate against SARS-CoV-2 and variants of concern (VOCs).

■ RESULTS AND DISCUSSION

Preparation of Site-specifically Conjugated αGalCer-RBD. Our previous study indicated that the immunogenicity of an antigen is significantly enhanced when the adjuvant molecule of a TLR7 agonist is covalently conjugated to the antigen-loaded protein. 41

Figure 1. (A) Difference between the traditional protein vaccine design and that used in this work. (B) First, the RBD protein is incubated with pyridoxal 5′-phosphate (PLP), which transaminates the N-terminal arginine to form a ketone. Second, the keto-protein reacts with the alkoxyamine (αGalCer-linker) to form an oxime-linked protein−lipid conjugate. (C) Liposomal formulation of αGalCer-RBD was prepared before mouse vaccination. 6−8 week old female BALB/c mice (n = 5 per group) were immunized subcutaneously on days 1, 15, and 29. Mouse sera were collected on day 0 before the initial immunization and on days 14, 28, and 35 after immunizations, and splenocytes were isolated from vaccinated mice on day 35.

In this study, the iNKT agonist αGalCer was covalently linked to the N-terminus of the protein for the first time. This site-specifically conjugated vaccine has minimal interference with immunogenic epitopes and a well-defined composition. Although several studies have conjugated lipids to proteins using peptide ligation strategies to fuse lipopeptides onto protein fragments 42,43 or chemical reactions to randomly attach adjuvants to protein residues, such as functionalization of lysine side chains, 44−48 these protein modification approaches can be difficult to control, need additional purification, and unavoidably interfere with the protein antigen epitopes to a certain extent. Therefore, to prepare the αGalCer-RBD vaccine using a relatively simple transamination approach, the 6-position of αGalCer was modified with a linker bearing an alkoxyamine group (Scheme 1). First, to synthesize an alkoxyamine-modified linker, compound 3 was prepared by reacting 2- ethanol with sodium azide. The subsequent reaction of 3 with ethyl bromoacetate gave intermediate 4. The azide was then reduced by the Staudinger reaction, followed by a reaction with Boc-aminooxy acetic acid to afford 5. Then, acid 6 was obtained by the hydrolysis of 5. For the 6-position modification of αGalCer, compound 7 was first synthesized according to the method given in a previous report. 32 Then, the 6-OH group was effectively converted to an azide group by TsCl and NaN3 to give azide-modified αGalCer 8. After the removal of the Bn groups and the reduction of the azide group of 8 with Pd(OH)2/C, acylation with linker 6 led to the αGalCer analogue 9.
The Boc group of compound 9 was removed to give αGalCer-ONH2 (1) with a high purity of 95% (Figure S11). Next, the N-terminal residue of the RBD was site-specifically oxidized to a ketone group by pyridoxal 5′-phosphate (PLP)-mediated transamination. The imine intermediate formed by PLP and the N-terminus has an α-proton with a much lower pKa value, which allows the ketone to form uniquely at this site after imine tautomerization and hydrolysis. 38,39 Then, the resulting ketone (the N-terminal residue is Arg in this case) was conjugated with one αGalCer-linker molecule through a stable oxime linkage (Figure 1B). The amino acid present at the N-terminus of the RBD is arginine, which is suitable for conversion to a ketone group by PLP in good yield. 38,39

Scheme 1B conditions: (a) TsCl (1.5 equiv), Et3N (2.0 equiv), CH2Cl2, 2 h; (b) NaN3 (3.0 equiv), DMF, 80 °C, 5 h, 81% over two steps; (c) Pd(OH)2/C, CH2Cl2/MeOH, rt, 8 h; (d) 6 (2.0 equiv), HBTU (1.5 equiv), NMM (10 equiv), CH2Cl2/MeOH, rt, 2 h, 53% over two steps; (e) CH2Cl2/TFA, rt, 0.5 h, quantitative yield.

Analysis of the final αGalCer-RBD conjugate by MALDI-TOF mass spectrometry indicated that one αGalCer molecule covalently links to the RBD (Figure S3). To evaluate the yield of the conjugation reaction, an RBD protein 319/321 fragment containing an Arg residue at its N-terminus was synthesized as a model tripeptide and conjugated with O-ethylhydroxylamine hydrochloride (Scheme S1). 49 HPLC and ESI-MS analyses indicated a >95% overall yield for the conjugation reaction (Figure S1). Besides, the RBD protein was modified with a fluorescent biolabeling molecule, rhodamine B (RhB), at the N-terminus using PLP-mediated transamination (Scheme S2). RP-HPLC and SDS-PAGE analyses demonstrated a >85% overall yield for the RhB-RBD conjugation reaction (Figure S2). Therefore, the adjuvant−protein compound could be obtained in high yield through the transamination reaction.
Finally, the vaccine formulation was prepared as liposomes to further improve the aqueous solubility of the αGalCer-RBD conjugate or the unconjugated RBD/αGalCer mixture (for characterization details, see Figure S4). As an ideal delivery system for vaccines, liposomes protect the antigen from degradation and can be efficiently taken up by DCs. 50,51 We have also investigated liposomes encapsulating different protein subunits of the spike protein as effective COVID-19 vaccine candidates. 52 In the conjugate vaccine, the lipid tails of the αGalCer-RBD conjugate facilitate the anchoring of protein antigens on the liposome surface, mimicking lipoproteins anchored on the cell membrane, such as glycosylphosphatidylinositol (GPI) membrane anchors. 53 Meanwhile, the resulting liposomes biomimic the virus capsid structure, providing a multivalent display of the antigen protein and thereby facilitating recognition and uptake by DCs with more effective activation.

Vaccination. The vaccines with the same RBD and/or αGalCer doses (10 μg RBD, 0.28 μg αGalCer for αGalCer-RBD or RBD/αGalCer) were immunologically evaluated in 6 to 8 week old female BALB/c mice. RBD alone (10 μg) and RBD (10 μg) plus alum adjuvant (100 μL) were used as negative and positive controls, respectively. αGalCer-RBD and RBD/αGalCer were further prepared in the form of liposomes with 1,2-distearoyl-sn-glycero-3-phosphocholine (DSPC) and cholesterol (50/40/1 molar ratio of DSPC/cholesterol/αGalCer-RBD or RBD/αGalCer). Five mice in each group were immunized subcutaneously (S.C.) on days 1, 15, and 29 (Figure 1C). Mouse sera were collected on day 0 before the initial immunization and on days 14, 28, and 35 after immunizations, and splenocytes were isolated from vaccinated and untreated mice on day 35. In addition, another ten groups of mice were immunized S.C. or intraperitoneally (I.P.) with αGalCer-RBD, RBD/αGalCer, RBD/Al, RBD, and PBS for the evaluation of cytokine secretion and DC activation in vivo.
For these groups, the doses of the antigen or adjuvant were set as 2 nmol (60 μg of RBD, 1.68 μg of αGalCer) for the S.C. route and 1 nmol (30 μg of RBD, 0.84 μg of αGalCer) for the I.P. route. Mouse sera collected at 2 and 24 h after injection were evaluated for secretion of IL-4 and IFN-γ, respectively, and splenocytes were isolated 24 h after S.C. administration.

Conjugation Significantly Promoted Innate Immunity and RBD-Specific Antibody Responses. To investigate the impact of the conjugated adjuvant on immune responses, we first evaluated cytokine secretion and DC activation in vivo. A high level of IFN-γ was observed in the sera of the αGalCer-RBD group 24 h after S.C. or I.P. injection (Figure S5A,B). In the intraperitoneally administered groups, the αGalCer-RBD-injected mice produced a ∼5-fold higher level of IFN-γ but slightly reduced IL-4 compared with αGalCer/RBD (Figure S5B,C), suggesting an enhanced Th1-biased immune response. Next, as DC maturation leads to the upregulation of co-stimulatory molecules (e.g., CD80, CD86, MHC I, and MHC II), the expression of CD86 was analyzed to evaluate iNKT cell-mediated DC activation in the spleen 24 h after S.C. administration. The flow cytometry assay showed an increased expression of CD86 on CD11c+ DCs (Figure 2), suggesting that αGalCer-RBD effectively activated DCs via iNKT cells. These results indicate that the conjugation approach altered the physical properties of αGalCer and thus improved its pharmacokinetic profile, leading to greatly enhanced activation of innate immunity. RBD antigen-specific antibody titers after each immunization were determined by ELISA. The IgM antibody titer of αGalCer-RBD-immunized mice was approximately equal to that of the control groups (Figure S6). However, the IgG antibody titer of αGalCer-RBD-immunized mice was 14.5-, 8.7-, and 5.9-fold higher than that of RBD-, RBD/Al-, and RBD/αGalCer-immunized mice on day 35, respectively (Figure 3A).
The remarkable IgG antibody response initiated by αGalCer-RBD suggests that the conjugation of αGalCer with the RBD significantly improves the immunogenicity of the protein antigen, which might be due to the co-delivery of the glycolipid adjuvant with the full protein to prime both B cells and T cells through cognate and noncognate help of iNKT cells. 54 High IgG titers in the αGalCer-RBD-administered mice after the first immunization indicate that adaptive immune responses were rapidly activated. Meanwhile, the αGalCer-RBD-immunized mice elicited exceptionally higher IgG antibody titers than those of RBD- (33-fold), RBD/Al- (37-fold), and RBD/αGalCer- (10-fold) immunized mice on day 28 (Figure 3A). The rapid and strong humoral immune responses provoked by αGalCer-RBD after the second immunization suggest that a two-dose vaccination is enough to induce highly effective humoral immunity. αGalCer-RBD also showed high efficacy in inducing antibody class switching from IgM to RBD-specific IgG. These results indicate that covalently conjugating αGalCer and the antigen protein induces rapid and potent immune responses. In addition, the efficacy of αGalCer-RBD was still well promoted by iNKT cells after restimulation, as the IgG titer increased 7.4-fold between the second and third immunizations, which is distinct from the blunted antibody response to boosting immunizations with other αGalCer-containing vaccines. 32,33 The IgG subclass distribution of each group was primarily IgG1 (Figure 3B). Meanwhile, αGalCer-RBD-administered mice showed a more balanced enhancement of the Th1/Th2 response, with the elicited IgG2a and IgG2b responses being approximately 12- and 16-fold higher than those in the RBD/Al (Th2-biased)-immunized group, respectively. Hence, it is beneficial to use the αGalCer-RBD conjugate as a vaccine candidate against viral infections because the immune responses induced by such conjugate vaccines feature a broad IgG subclass distribution for effective protection.
αGalCer-RBD Conjugate Induced RBD-Specific, Cytokine-Producing T Cell Development. An effective vaccine against COVID-19 should induce not only humoral specific antibodies but also cellular immune responses to provide a full range of protective immunity. 55 To evaluate the RBD-specific cellular immune responses, splenocytes were collected from vaccinated mice one week after the last immunization, and the antigen-specific responses were measured by IFN-γ ELISPOT assay. The splenocytes were stimulated with 50 μg/mL overlapping peptide pool (spanning the SARS-CoV-2-S RBD) for 18 h before forming IFN-γ spots. As shown in Figure 4A, αGalCer-RBD vaccination significantly increased the number of IFN-γ spots, with ∼4- and ∼26-fold increases compared to immunization with RBD/αGalCer and RBD/Al, respectively. Therefore, the conjugate vaccine enhanced cellular responses with increased numbers of antigen-specific cytokine-producing cells. To further characterize the contribution of these vaccine candidates to RBD-specific cellular immunity, cytokine-producing CD4+ (Figure 4F−H) and CD8+ (Figure 4C−E) T cells from the splenocytes of immunized mice were evaluated by flow cytometry on day 35. As indicated in Figure 4C, 2.57, 1.32, and <1% of CD8+ T cells derived from αGalCer-RBD-, RBD/αGalCer-, and RBD/Al-immunized mice, respectively, produced both IFN-γ and TNF-α following stimulation with the overlapping peptide pool (spanning the SARS-CoV-2-S RBD). A similar trend was observed for CD4+ T cells (Figure 4F) and for IFN-γ+ or TNF-α+ cytokine-secreting cells (Figures 4D,E,G,H and S7). The specific cellular responses toward the RBD collectively confirmed that αGalCer-RBD provoked potent cellular and humoral immune responses, implying that the αGalCer-RBD conjugate is a promising and effective COVID-19 vaccine candidate.

Pseudovirus and Live Virus Neutralizing Activity and Cross-Neutralization of Variants.
Neutralizing antibody responses in mice were assessed against both wild-type (WT) pseudotyped and live SARS-CoV-2 virus. Furthermore, cross-neutralization of the WT virus and all VOCs (B.1.1.7/alpha, B.1.351/beta, P.1/gamma, B.1.617.2/delta, and B.1.1.529/omicron) was evaluated using different pseudovirus assay approaches. The WT pseudovirus neutralization ID50 (pVNT50) of mouse sera on day 35 is shown in Figure 5A. As expected, αGalCer-RBD vaccination generated the highest neutralizing antibody activity (mean pVNT50 = 11,549), followed by that after vaccination with RBD/αGalCer (mean pVNT50
Journal of Medicinal Chemistry pubs.acs.org/jmc Article
these variants indicates that αGalCer-RBD has the potential for protection against SARS-CoV-2 and mutated viruses as a candidate vaccine.

Anti-RBD Antibodies from αGalCer-RBD-Immunized Mice Effectively Blocked the Binding of the RBD to ACE2. Sera collected on day 35 and pooled per group were also tested for their ability to inhibit the binding of the RBD protein to ACE2-overexpressing HEK293 cells using flow cytometry (Figure 6). No mouse serum was added to the positive control, and unstained cells were used as the negative control. The mean fluorescence intensity (MFI) of cells incubated with the RBD-His protein without sera was defined as 100% binding. Inhibition of binding was calculated as the percentage of reduced binding to the ACE2 receptor in the presence of diluted sera from mice immunized with the different vaccine candidates. The results showed that sera (1/20 dilution) from αGalCer-RBD-, RBD/αGalCer-, and RBD/Al-immunized mice blocked the binding of the RBD to the ACE2 receptor at inhibition rates of 82, 41, and 32%, respectively.
Thus, the anti-RBD antibodies induced in αGalCer-RBD-immunized mice showed the strongest inhibition of RBD−ACE2 binding.

■ CONCLUSIONS

In summary, this study for the first time conjugates an iNKT cell agonist to a protein antigen, and the resulting αGalCer-RBD conjugate, anchored on liposomes biomimicking the virus capsid structure, remarkably enhances the protective immune response against SARS-CoV-2. Compared to the unconjugated RBD/αGalCer mixture, the αGalCer-RBD conjugate enhanced the immune efficacy of the adjuvant and produced significantly stronger humoral and cellular responses in mice. The rapid antibody response with a two-dose immunization also makes the conjugate an applicable vaccine candidate for mass vaccination. Moreover, the antisera from αGalCer-RBD-immunized mice induced potent neutralizing responses with high pseudotyped neutralizing titers and live virus neutralization activity, indicating efficient protective immunity by this vaccine candidate. Meanwhile, the cross-neutralization of different SARS-CoV-2 VOCs (B.

■ EXPERIMENTAL SECTION

Chemical Synthesis. General Information. All reactions were carried out under a dry argon atmosphere using oven-dried glassware and magnetic stirring. The solvents were dried prior to use as follows: THF was heated at reflux over sodium benzophenone ketyl; CH2Cl2 was dried over CaH2. Aluminum thin-layer chromatography (TLC) sheets (silica gel 60 F254) of 0.2 mm thickness were used to monitor the reactions. The spots were visualized with short-wavelength UV light or by charring after spraying with one of the following solutions: phosphomolybdic acid (5.0 g) in 95% EtOH (100 mL); p-anisaldehyde solution (2.5 mL of p-anisaldehyde, 2 mL of AcOH, and 3.5 mL of conc. H2SO4 in 100 mL of 95% EtOH); or ninhydrin solution (0.3 g of ninhydrin in 100 mL of n-butanol with 3 mL of AcOH). Flash chromatography was carried out with silica gel 60 (230−400 ASTM mesh).
αGalCer was prepared according to our reported procedure. 32 NMR spectra were obtained on a 400 or 600 MHz spectrometer. Chemical shifts are reported in parts per million (ppm). Electrospray ionization mass spectrometry (ESI-MS) was performed on a TSQ Quantum Access MAX (Thermo Fisher Scientific). High-resolution mass spectrometry (HRMS) was recorded on a Bruker micrOTOF II ESI-TOF using positive electrospray ionization (ESI+). Protein MALDI data were collected on a Bruker MALDI-TOF/TOF UltrafleXtreme spectrometer. The matrix used for MALDI-TOF was 3-(4-hydroxy-3,5-dimethoxyphenyl)prop-2-enoic acid. The HPLC data were obtained on an Agilent 1260 fitted with an evaporative light-scattering detector (ELSD). The purities of compounds 9 and 1 are >95%, as determined by HPLC-ELSD (see the Supporting Information).

Figure 6. Strong inhibition of the binding of the RBD protein to ACE2-HEK293 cells by sera from αGalCer-RBD-immunized mice. Pooled serum samples (1/20 diluted) collected on day 35 were assayed for inhibition of the binding of the recombinant RBD-His protein to HEK293 cells overexpressing ACE2 by flow cytometry. In the positive control, no mouse serum was added, and cells without any staining were used as a negative control. MFI values of the APC-A channel from cells incubated with the RBD-His protein without sera were defined as 100% binding. Inhibition of binding was calculated as the percentage of reduced binding to the ACE2 receptor in the presence of diluted sera from the different immunization groups. Data are shown as the mean ± SEM of three independent experiments. Statistical significance was determined using one-way ANOVA with Dunn's multiple comparison test. p < 0.0001: ****, p < 0.001: ***, p < 0.01: **, and p < 0.05: *.

Immunological Test. Materials and Reagents. Reagents used were RPMI-1640, DMEM (Gibco), and FBS (fetal bovine serum) (Gibco). The SARS-CoV-2-S-RBD was purchased from SinoBiological (40592-VNAH, 1.02 mg/mL in PBS, no His-tag).
Bovine serum albumin (BSA) and the alum adjuvant (Alum) were purchased from Thermo Fisher Scientific. 1,2-Distearoyl-sn-glycero-3-phosphocholine (DSPC) was purchased from TCI. Cholesterol was purchased from Energy Chemical. Peroxidase-conjugated AffiniPure goat anti-mouse kappa, IgG1, IgG2a, IgG2b, and IgG3 antibodies were purchased from Southern Biotechnology, and peroxidase-conjugated AffiniPure goat anti-mouse kappa IgG and IgM antibodies were purchased from Jackson ImmunoResearch. The stable ACE2-HEK293 cell line was generated from HEK293 cells, which were transfected with an empty pCMV6-AC-GFP plasmid and a pCMV6-AC-GFP plasmid carrying the human ACE2 gene. The cells were selected with G418 (500 μg/mL). Monoclonal cell lines were derived by limited dilution. All animal experiments were performed at the Laboratory Animal Centre of Huazhong Agriculture University (Wuhan, China). Animal experiments were conducted according to the animal ethics guidelines and following the recommendations concerning laboratory animal welfare.

Preparation and Characterization of Liposomes for Candidate Vaccines. Liposomal formulations of αGalCer-RBD and RBD/αGalCer were prepared by following previously reported protocols. 32,50 To prepare the liposomes for one dose, a mixture of DSPC (13.64 μg, 16.40 nmol) and cholesterol (5.34 μg, 13.12 nmol) (plus αGalCer (0.28 μg, 0.33 nmol) for RBD/αGalCer liposomes) was dissolved in 2 mL of CH2Cl2/MeOH (1:1, v/v); the solvents were removed by evaporation under reduced pressure, generating a thin lipid film on the flask wall. Then, αGalCer-RBD (62.16 μg, 1.97 nmol) (or RBD, 60 μg, 1.97 nmol) was added to the flask, followed by overnight freeze drying. Next, 1.2 mL of PBS (pH 7.4) was added to hydrate the film, which was finally sonicated for 10 min and injected into mice (200 μL per mouse) immediately. The molar ratio of DSPC/cholesterol/antigen protein was 50:40:1.
The average particle diameter and zeta potential were characterized using a Zetasizer Nano ZS instrument (Malvern) at rt. In Vivo Cytokine Assay by ELISA. The cytokine levels in mice sera were assayed using ELISA kits (IFN-γ and IL-4, BD Pharmingen) according to the manufacturer's protocol. Briefly, 96-well plates (Costar type 3590, Corning Inc.) were coated with capture antibodies dissolved in the coating buffer per well and incubated at 4°C overnight. The wells were then blocked with FBS for 1 h at rt. After blocking, 100 μL/well of standard, sera, or control was added and incubated for 2 h at rt. After washing, the working detector (detection antibody and Sav-HRP reagent) was added to each well. The plates were incubated for 1 h at rt. Then, the plates were washed, and the tetramethylbenzidine (TMB) substrate solution was added for 30 min in the dark. The reactions were stopped with 2 N H2SO4 at rt. The absorbance was measured at 450 nm using a plate reader (BioTek, Synergy H1). Analysis of Antibody Titers and Subtypes by ELISA. The RBD protein was dissolved in the prepared NaHCO3/Na2CO3 buffer (50 mM, pH 9.5) at a final concentration of 1 μg/mL. Next, 96-well plates were coated with the RBD protein at 4°C overnight. Then, the coated plates were washed three times with PBST (PBS + 0.1% Tween) and blocked with 2% BSA in PBS (100 μL/well) at 37°C for 1 h. After washing three times, the plates were incubated with the serially diluted sera samples in PBS containing 0.1% BSA (100 μL/well) at 37°C for 1 h. After further washing steps, the plates were incubated with one of the HRP-linked goat anti-mouse antibodies IgG, IgM, IgG1, IgG2a, IgG2b, or IgG3 (1:5000 dilution in PBST, 100 μL/well) at 37°C for 1 h. After the final washing steps, TMB (500 μL, 0.2 mg/mL) in 9.5 mL of 0.05 M phosphate-citrate buffer at pH 5.0 with 32 μL of 3% (w/v) urea hydrogen peroxide was added and allowed to react for 5 min in the dark.
Next, the colorimetric reactions were terminated with 2.0 M H2SO4. Absorbance was recorded at 450 nm with a microplate reader (BioTek, Synergy H1). Analysis of IFN-γ-Secreting Cells of Splenocytes by ELISPOT. IFN-γ-secreting cells among splenocytes from each immunized group after the last boost were detected using ELISPOT kits (DAKEWE) according to the manufacturer's instructions. The 96-well plates were precoated with rat anti-mouse IFN-γ. A total of 200 μL of RPMI-1640 without FBS was added to each well to activate the monoclonal antibodies. Splenocytes harvested from vaccinated mice were seeded into the wells (1 × 10⁶ cells/well) in RPMI 1640 with 10% (v/v) FBS, 100 U/mL penicillin, and 100 μg/mL streptomycin containing 50 μg/mL peptide pool (GenScript, RP30020) (Spike, 1Met-643Phe, 158 peptides (15-mers with 11 aa overlap) spanning the SARS-CoV-2-S RBD) in duplicate. The cells were first cultured for 18 h at 37°C and 5% CO2 and then lysed with distilled H2O for 10 min at 4°C. After washing the plates six times, biotinylated anti-mouse IFN-γ antibodies (1:100) were added and incubated for 1 h at 37°C. After further washing steps, the plates were incubated with streptavidin-HRP (1:100) for an additional 1 h. After the final washing steps, AEC was added at 100 μL per well to develop spots in the dark for 30 min at rt; then, the reaction was quenched with distilled H2O, and the plates were air-dried before counting. Intracellular Cytokine Staining and Flow Cytometry. Cytokine-producing CD4+ and CD8+ T cells were evaluated in vitro by flow cytometry. Splenocytes of immunized mice after the last boost were cultured in RPMI medium 1640 with 10% (v/v) FBS, 100 U/mL penicillin, and 100 μg/mL streptomycin containing 50 μg/mL peptide pool for 18 h. Brefeldin A (BD Biosciences) was administered 12 h before staining to block intracellular cytokine secretion.
Cells were then washed in stain buffer (1% BSA, 1% FBS, and 0.1% (m/v) NaN3 in PBS) and stained for 30 min at 4°C with anti-CD3, anti-CD8, and anti-CD4 (all from BioLegend). Afterward, cells were fixed and permeabilized to facilitate intracellular staining with anti-IFN-γ and anti-IL-4 (BioLegend). For the NKT-mediated DC activation assay, cells were stained with anti-CD11c and anti-CD86 (BioLegend). All labeled lymphocytes were gated on a FACSAriaIII flow cytometer (BD Biosciences). Pseudovirus Neutralization Assay. The pseudovirus neutralization assay was performed using lentivirus-based SARS-CoV-2 pseudoviruses bearing WT (Genomeditech, GM-0220PV07) and B. Briefly, mouse sera were preheated at 56°C for 30 min and serially diluted before incubation with 2 × 10⁴ TCID50 pseudoviruses for 1 h at rt in duplicate. The mixture was added to 2 × 10⁴ HEK293T-ACE2 cells per well and incubated for 48 h in a 5% CO2 environment at 37°C. The luminescence was measured using a Bio-lite luciferase assay system (Genomeditech, G0483M001 and G0483M002), and relative light units (RLUs) were detected using a microplate reader (BioTek, Synergy H1). The neutralizing antibody titer (pVNT50) was defined as the reciprocal serum dilution at which the RLUs were reduced by 50% compared to those in the virus control wells (virus + cells) after subtraction of the background RLUs in the cells-only control groups. Live Virus Neutralization Assay. A plaque reduction neutralization test (PRNT) for live SARS-CoV-2 virus was performed as previously described.61 Briefly, Vero E6 cells were seeded at 1.5 × 10⁵ per well in a 24-well culture plate and grown overnight before use. Serial twofold dilutions of heat-inactivated (30 min at 56°C) serum samples were prepared in DMEM medium. An equal volume of SARS-CoV-2 working stock containing 200 TCID50 was added, and the serum–virus mixture was incubated at 37°C for 1 h.
The antibody–virus mixture was then added to the 24-well culture plate, with the cell supernatant removed, and incubated for 1 h at 37°C. The serum–virus mixture was then removed from the Vero E6 cells, and DMEM with 0.9% carboxymethyl cellulose was overlaid. At 3 days after infection, cells were fixed and stained and then rinsed with water. Cells infected with SARS-CoV-2 served as the positive control. Neutralization (%) was calculated as the percentage of reduced plaques in the presence of 1/400 diluted sera from the different vaccination groups. The neutralization titer (NT50) was expressed as the reciprocal of the serum dilution that prevented the viral cytopathic effect in 50% of the wells. All work with live SARS-CoV-2 virus was performed in a biosafety level 3 facility at the Wuhan Institute of Virology. Analysis of Inhibition of RBD-His Binding to HEK293-ACE2 Cells. A FACS-based method was used to evaluate the inhibition of binding between RBD-His and HEK293-ACE2 cells. Briefly, freshly trypsinized ACE2-HEK293 cells in stain buffer (1% BSA, 1% FBS, and 0.1% (m/v) NaN3 in PBS) were added to 1.5 mL microcentrifuge tubes (1 × 10⁶/tube) and then incubated with the recombinant spike RBD-His protein (0.5 μg/mL) and pooled sera (1/20 diluted) from immunized mice of each group for 1 h at 4°C. Cells were then washed three times with PBS and stained with His-tag antibody iFluor 647 (GenScript) for 30 min. After another washing step, cells were analyzed on a FACSAriaIII flow cytometer (BD Biosciences). Statistical Analyses. Comparison of multiple groups for statistical significance was carried out via one-way ANOVA with Tukey post hoc tests. Statistically significant responses are indicated by asterisks; data were analyzed using GraphPad Prism (GraphPad Software, San Diego, CA). Flow cytometry data were analyzed in CytExpert 2.3 software.
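The pVNT50/NT50 endpoint defined above — the reciprocal serum dilution at which the signal falls to 50% of the virus-only control — can be estimated from a dilution series by interpolation. The sketch below is illustrative only; the function name and the synthetic numbers are ours and are not part of the study's analysis pipeline:

```python
import math

def pvnt50(reciprocal_dilutions, rlus, virus_control_rlu, background_rlu=0.0):
    """Estimate the 50% neutralization titer (pVNT50).

    reciprocal_dilutions: increasing reciprocal serum dilutions, e.g. [20, 40, 80]
    rlus: relative light units measured at each dilution
    virus_control_rlu: RLU of virus + cells with no serum (100% infection)
    background_rlu: RLU of cells only, subtracted from every reading
    """
    # Percent infection relative to the background-corrected virus control
    pct = [100.0 * (r - background_rlu) / (virus_control_rlu - background_rlu)
           for r in rlus]
    for i in range(len(pct) - 1):
        lo, hi = pct[i], pct[i + 1]
        if min(lo, hi) <= 50.0 <= max(lo, hi):
            # Interpolate in log10(dilution) space between the bracketing wells
            f = (50.0 - lo) / (hi - lo)
            log_d = (math.log10(reciprocal_dilutions[i])
                     + f * (math.log10(reciprocal_dilutions[i + 1])
                            - math.log10(reciprocal_dilutions[i])))
            return 10.0 ** log_d
    return None  # the 50% cut-off was not crossed in the tested range
```

For example, with readings of [100, 300, 500, 900] RLU against a 1000 RLU virus control, infection crosses 50% at a reciprocal dilution of 80, so the titer would be reported as 80.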
Supporting Information. The Supporting Information is available free of charge at https://pubs.acs.org/doi/10.1021/acs.jmedchem.1c02000. Transamination of model tripeptide; preparation of RhB-RBD; MALDI-TOF-MS analysis of αGalCer-RBD; characterization of vaccine liposomes; cytokine secretions in mice sera measured by ELISA; anti-RBD IgM antibody; flow cytometry assay; neutralization of pseudovirus and live SARS-CoV-2; compound NMR data; and purity assessment of the final compounds (PDF)
|
// repository: Superlokkus/code
/*! @file registry.cpp
*
*/
#include <registry.hpp>
void mkdt::registry::register_service(mkdt::service_identifier service_id,
std::shared_ptr<mkdt::registry::receiver> service_object,
std::function<void(error)> completion_handler) {
boost::asio::dispatch(this->io_context_, boost::asio::bind_executor(this->registry_strand_,
[=, completion_handler = std::move(completion_handler)]() {
const auto new_object_id = this->uuid_gen_();
this->services_.emplace(service_id, new_object_id);
this->objects_.emplace(new_object_id, service_object);
this->router_.register_service(service_id, std::move(completion_handler),
[=] (auto callback) {
boost::asio::dispatch(this->io_context_, boost::asio::bind_executor(this->registry_strand_,
[=]() {
callback(new_object_id);
}));
});
}));
}
void mkdt::registry::send_message_to_object(const mkdt::object_identifier &receiver, const std::string &message,
std::function<void(error)> handler) {
}
void mkdt::registry::use_service_interface(mkdt::service_identifier service_id,
std::function<void(error, object_identifier)> handler) {
this->router_.use_service_interface(service_id, std::move(handler));
}
|
Janez Jansa speaks during a session in parliament in Ljubljana February 27, 2013. REUTERS/Srdjan Zivulovic
LJUBLJANA (Reuters) - Slovenia’s chief opposition leader was sentenced to two years in jail on Wednesday for bribery in a 2006 deal with Finnish defense group Patria, one of a number of corruption scandals that have fuelled public anger over the country’s financial crisis.
Janez Jansa had denied taking money in the aborted purchase of 135 Patria armored vehicles while he was prime minister and is expected to appeal. Two co-defendants were also found guilty and jailed for 22 months.
High-level corruption allegations have stirred public anger over a financial crisis that has exposed a culture of cronyism in the ex-Yugoslav republic, and could see it become the latest euro zone country to seek an international bailout.
Six people in Finland are being prosecuted over the same deal and an Austrian court has already convicted an Austrian citizen for corruption. The 278-million-euro ($363 million) contract was scrapped in 2012 after the allegations surfaced.
The Finnish government owns around 73 percent of Patria while European Aeronautic Defence and Space Company (EADS) holds some 27 percent.
Jansa championed Slovenia’s drive to secede from Yugoslavia in 1991 and was prime minister from 2004 to 2008 and again for a year until March 2013. His center-right government fell after an anti-corruption commission said Jansa was unable to explain the origins of a significant part of his income over the past several years. ($1 = 0.7650 euros)
|
def driver(self):
    """Return the notification driver for the current sender, lazily
    importing and caching every registered plugin on first access."""
    if self.sender in self.__driver:
        return self.__driver[self.sender]
    for entry in plugins.notifications():
        try:
            self.__driver[entry.module_name] = entry.load()()
        except ImportError:
            logger.warning('Error importing %s', entry.module_name)
    return self.__driver[self.sender]
|
// src/test/rules/NoDirectImportsTest.ts
import { RuleTester } from 'eslint';
import { noDirectImports } from '../../main/ts/rules/NoDirectImports';
const ruleTester = new RuleTester({
parser: require.resolve('@typescript-eslint/parser'),
parserOptions: { sourceType: 'module' }
});
ruleTester.run('no-direct-imports', noDirectImports, {
valid: [
{
code: 'import { Fun } from \'@ephox/katamari\';'
}
],
invalid: [
{
code: 'import { Arr } from \'@ephox/katamari/lib/main/api/Arr\';',
errors: [{ message: 'Direct import to @ephox/katamari/lib/main/api/Arr is forbidden.' }]
},
{
code: 'import { Unicode } from \'@ephox/katamari/src/main/ts/ephox/katamari/api/Unicode\';',
errors: [{ message: 'Direct import to @ephox/katamari/src/main/ts/ephox/katamari/api/Unicode is forbidden.' }]
}
]
});
|
package client
import (
// load the packages
_ "github.com/thecodeteam/rexray/libstorage/drivers/os/darwin"
)
|
Windows 7 comes with a new feature that allows computers that run it to be converted more easily into a virtual WiFi hotspot, a hub to which other devices including smartphones and internet appliances, can connect seamlessly.
So why would you convert your computer into a wireless router? For a start, it removes the need to carry yet another peripheral, which is especially useful whether you roam alone or travel in a group, and it saves power, especially if you intend to run it around the clock.
Then there's the fact that a computer that's used as a wireless router is more easily upgradable, just use another WiFi adaptor; this means that an old P4 computer can become a 802.11n WiFi router just by adding a £20 card.
Indeed, even a laptop can be transformed into a wireless router, and most recent ones already have an 802.11n adaptor. Finally, an ad hoc wireless router reduces security risks because it is created on demand rather than left always on, as dedicated routers normally are.
Arguably, you don't get the four LAN ports you usually get on a router but if wireless is your preferred mode of connection, then why not.
Windows 7 was supposed to offer a Virtual WiFi option as an integral part of its feature list but that never happened. A nifty little application called Connectify enables the miracle to happen. Connect any internet line to your laptop for example and Connectify will transform it into a generic provider of bandwidth.
This works as well for mobile broadband dongles, cable, ADSL and even tethered phones. We have yet to see whether it can actually be used as a repeater though (albeit an expensive one).
Connectify uses features that are present in all versions of Windows 7 (except Starter edition) and Windows Server 2008 R2 onwards. It won't work with other current Windows OSes with or without service packs.
Setting it up is a matter of minutes. Download Connectify and install it. Start the application, fill out the appropriate details, including login and password.
Connectify essentially creates a virtual router access point that resides within the computer and run simultaneously with an existing AP connection.
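Under the hood, this virtual access point builds on the Wireless Hosted Network capability that ships with Windows 7, which can also be driven by hand with netsh. The sketch below is illustrative only — the SSID, key, and helper names are placeholders of ours, and the commands must run in an elevated prompt on Windows 7 or later:

```python
import subprocess

def hostednetwork_commands(ssid, key):
    """Build the netsh command lines that configure and start a
    Windows 7 hosted network (the virtual AP Connectify drives)."""
    return [
        ["netsh", "wlan", "set", "hostednetwork",
         "mode=allow", f"ssid={ssid}", f"key={key}"],
        ["netsh", "wlan", "start", "hostednetwork"],
    ]

def start_hotspot(ssid, key):
    # Requires administrator rights and a hosted-network-capable adaptor.
    for cmd in hostednetwork_commands(ssid, key):
        subprocess.run(cmd, check=True)
```

Connectify wraps this plumbing in a friendlier interface, but the same two commands are all Windows needs to broadcast a software access point.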
Other computers and devices can connect to it as they would to any other wireless router. You will have to choose between Access Point mode, where you share a WiFi connection using the same card that you are actually using to access the resource, and Ad hoc mode, where the internet connection comes from a device separate from the transmitting one.
The new Connectify v1.2, launched on the 30th of March, introduces improved Ad Hoc functionality with a useful but risky Open/no-encryption mode.
There's also a new Easy Set Up wizard, essential for novices, and an improved user interface with more statistics especially if you want to know more about clients that are connecting to your computer; an easy way to identify rogue ones.
We've suggested to Connectify that they investigate the possibility of bonding resources virtually, either combining two or more WiFi devices into a single one or combining two or more internet connections into one.
The developers have also promised that they will be working on improving the number of network devices that currently support the application, a list of which you can find here. You can download Connectify here, learn more about it here and follow them on Twitter here.
|
June 6, 2007 -- Neil Simon's endearing tale of a midlife crisis gone terribly awry launches the University of Wyoming 2007 Snowy Range Summer Theatre and Dance Festival, June 12-16.
"The Last of the Red Hot Lovers," shows nightly at 7:30 in the Fine Arts Center studio theatre. Tickets cost $5 for students, $8 for seniors (60 and older) and $10 for others. Tickets are available at the Fine Arts Center box office, by calling (307) 766-6666, or by visiting www.uwyo.edu/finearts.
Jay Edelnant, professor of theatre at the University of Northern Iowa, directs the opening play that features middle-aged restaurateur Barney Cashman, who yearns for one big romantic fling to spice up his predictable life. Cashman is a gentle soul who has been married to his high school sweetheart for 23 years and is inexperienced with adultery.
Cashman makes hilarious attempts to become "a spoiler of women" after discovering his mother's apartment will be empty one day a week. As he haplessly attempts to seduce wildly unsuitable women, he finds his mother's empty apartment is not the love nest he had once imagined.
This production of "The Last of the Red Hot Lovers" features Devin Sanchez, a UW Department of Theatre and Dance alumna who is a working actress in New York City.
Since graduating in 2004, Sanchez has performed in both Off-Broadway and Off-Off Broadway productions. She also has appeared in TV's “Law and Order SVU” and Walt Disney's “Enchanted,” and starred in the TV pilot of “Temps.” Sanchez recently was accepted into the prestigious Atlantic Theater founded by David Mamet and William H. Macy.
Edelnant, a Roy Carver Fellow and Sasakawa Fellow, served as the national chair of the prestigious Kennedy Center American College Theatre Festival and on the governing board for the Association for Theatre in Higher Education.
|
The Research of Eutrophic Wastewater Treatment Process Design This paper addresses the treatment of eutrophic organic wastewater, focusing on a process that combines low energy consumption, high efficiency, and low construction and operation costs. Because eutrophic wastewater contains not only large amounts of organic matter but also abundant nitrogen and phosphorus, conventional anaerobic treatment alone performs poorly. Using a staged anaerobic reaction followed by a staged aerobic reaction, experiments demonstrate that this method not only effectively removes organic pollutants from the water but also produces methane for use.
|
/**
*/
package org.robot.model.robot.impl;
import org.eclipse.emf.common.notify.Notification;
import org.eclipse.emf.ecore.EClass;
import org.eclipse.emf.ecore.InternalEObject;
import org.eclipse.emf.ecore.impl.ENotificationImpl;
import org.robot.model.robot.ExecuteStatement;
import org.robot.model.robot.RobotPackage;
import org.robot.model.robot.Scenario;
/**
* <!-- begin-user-doc -->
* An implementation of the model object '<em><b>Execute Statement</b></em>'.
* <!-- end-user-doc -->
* <p>
* The following features are implemented:
* </p>
* <ul>
* <li>{@link org.robot.model.robot.impl.ExecuteStatementImpl#getDestination <em>Destination</em>}</li>
* </ul>
*
* @generated
*/
public class ExecuteStatementImpl extends StatementImpl implements ExecuteStatement {
/**
* The cached value of the '{@link #getDestination() <em>Destination</em>}' reference.
* <!-- begin-user-doc -->
* <!-- end-user-doc -->
* @see #getDestination()
* @generated
* @ordered
*/
protected Scenario destination;
/**
* <!-- begin-user-doc -->
* <!-- end-user-doc -->
* @generated
*/
protected ExecuteStatementImpl() {
super();
}
/**
* <!-- begin-user-doc -->
* <!-- end-user-doc -->
* @generated
*/
@Override
protected EClass eStaticClass() {
return RobotPackage.Literals.EXECUTE_STATEMENT;
}
/**
* <!-- begin-user-doc -->
* <!-- end-user-doc -->
* @generated
*/
public Scenario getDestination() {
if (destination != null && destination.eIsProxy()) {
InternalEObject oldDestination = (InternalEObject) destination;
destination = (Scenario) eResolveProxy(oldDestination);
if (destination != oldDestination) {
if (eNotificationRequired())
eNotify(new ENotificationImpl(this, Notification.RESOLVE,
RobotPackage.EXECUTE_STATEMENT__DESTINATION, oldDestination, destination));
}
}
return destination;
}
/**
* <!-- begin-user-doc -->
* <!-- end-user-doc -->
* @generated
*/
public Scenario basicGetDestination() {
return destination;
}
/**
* <!-- begin-user-doc -->
* <!-- end-user-doc -->
* @generated
*/
public void setDestination(Scenario newDestination) {
Scenario oldDestination = destination;
destination = newDestination;
if (eNotificationRequired())
eNotify(new ENotificationImpl(this, Notification.SET, RobotPackage.EXECUTE_STATEMENT__DESTINATION,
oldDestination, destination));
}
/**
* <!-- begin-user-doc -->
* <!-- end-user-doc -->
* @generated
*/
@Override
public Object eGet(int featureID, boolean resolve, boolean coreType) {
switch (featureID) {
case RobotPackage.EXECUTE_STATEMENT__DESTINATION:
if (resolve)
return getDestination();
return basicGetDestination();
}
return super.eGet(featureID, resolve, coreType);
}
/**
* <!-- begin-user-doc -->
* <!-- end-user-doc -->
* @generated
*/
@Override
public void eSet(int featureID, Object newValue) {
switch (featureID) {
case RobotPackage.EXECUTE_STATEMENT__DESTINATION:
setDestination((Scenario) newValue);
return;
}
super.eSet(featureID, newValue);
}
/**
* <!-- begin-user-doc -->
* <!-- end-user-doc -->
* @generated
*/
@Override
public void eUnset(int featureID) {
switch (featureID) {
case RobotPackage.EXECUTE_STATEMENT__DESTINATION:
setDestination((Scenario) null);
return;
}
super.eUnset(featureID);
}
/**
* <!-- begin-user-doc -->
* <!-- end-user-doc -->
* @generated
*/
@Override
public boolean eIsSet(int featureID) {
switch (featureID) {
case RobotPackage.EXECUTE_STATEMENT__DESTINATION:
return destination != null;
}
return super.eIsSet(featureID);
}
} //ExecuteStatementImpl
|
Mannose 6-phosphate-independent endocytosis of beta-glucuronidase. II. Purification of a cation-dependent receptor from bovine liver. A new binding protein, which recognizes a specific peptide sequence from pronase-digested bovine beta-glucuronidase, has been isolated from bovine liver membranes. Prior work has shown that this peptide (IIIb2) contains a Ser-X-Ser sequence, where X might be a posttranslationally modified Trp. This receptor was detergent-extracted from total bovine liver membranes and purified by affinity chromatography on a bovine beta-glucuronidase-Sepharose column and a IIIb2 peptide-Sepharose column. Binding of bovine beta-glucuronidase to the isolated receptor requires divalent cations, and their presence is necessary to maintain the receptor-ligand complex. Only the peptide sequence contained in fraction IIIb2 was able to impair the binding of the bovine enzyme to the receptor; no other peptide from bovine beta-glucuronidase had an effect on binding. When analyzed by SDS-PAGE under reducing conditions, two bands were observed: a major band of 78 kDa and a faint band of 72 kDa. Rabbit antibodies against this binding protein revealed the presence of the 78 kDa protein in membranes from bovine liver and from human and bovine fibroblasts. These antibodies impaired endocytosis by human fibroblasts of the bovine but not of the human beta-glucuronidase, which is taken up by a 300 kDa receptor that recognizes phosphomannosyl moieties on the enzyme.
|
import random
import datetime
import mistune
import json
from operator import itemgetter
from django.shortcuts import render
from django.views.generic.base import View
from django.conf import settings
from django.http import HttpResponse
from django.core import serializers
from pure_pagination import Paginator, EmptyPage, PageNotAnInteger
from .models import Links, Article, Category, Tag
def global_setting(request):
"""
    Register variables from settings as global template context.
"""
active_categories = Category.objects.filter(active=True).order_by('index')
return {
'SITE_NAME': settings.SITE_NAME,
'SITE_DESC': settings.SITE_DESCRIPTION,
'SITE_KEY': settings.SECRET_KEY,
'SITE_MAIL': settings.SITE_MAIL,
'SITE_ICP': settings.SITE_ICP,
'SITE_ICP_URL': settings.SITE_ICP_URL,
'SITE_TITLE': settings.SITE_TITLE,
'SITE_TYPE_CHINESE': settings.SITE_TYPE_CHINESE,
'SITE_TYPE_ENGLISH': settings.SITE_TYPE_ENGLISH,
'active_categories': active_categories
}
class Index(View):
"""
    Home page view.
"""
def get(self, request):
all_articles = Article.objects.all().defer('content').order_by('-add_time')
top_articles = Article.objects.filter(is_recommend=1).defer('content')
        # Pagination for the home page
try:
page = request.GET.get('page', 1)
except PageNotAnInteger:
page = 1
p = Paginator(all_articles, 9, request=request)
articles = p.page(page)
return render(request, 'index.html', {
'all_articles': articles,
'top_articles': top_articles,
})
class Friends(View):
"""
    Friend links page.
"""
def get(self, request):
links = Links.objects.all()
card_num = random.randint(1, 10)
return render(request, 'friends.html', {
'links': links,
'card_num': card_num,
})
class Detail(View):
"""
    Article detail page.
"""
def get(self, request, pk):
article = Article.objects.get(id=int(pk))
article.viewed()
mk = mistune.Markdown()
output = mk(article.content)
        # Find the previous article in the same category
previous_article = Article.objects.filter(category=article.category, id__lt=pk).defer('content').order_by('-id')[:1]
previous_article = previous_article[0] if len(previous_article) else None
        # Find the next article in the same category
next_article = Article.objects.filter(category=article.category, id__gt=pk).defer('content').order_by('id')[:1]
next_article = next_article[0] if len(next_article) else None
return render(request, 'detail.html', {
'article': article,
'previous_article': previous_article,
'next_article': next_article,
'detail_html': output,
})
class Archive(View):
"""
    Article archive.
"""
def get(self, request):
all_articles = Article.objects.all().defer('content').order_by('-add_time')
all_date = all_articles.values('add_time')
latest_date = all_date[0]['add_time']
all_date_list = []
for i in all_date:
all_date_list.append(i['add_time'].strftime("%Y-%m-%d"))
        # Walk through one year of dates
end = datetime.date(latest_date.year, latest_date.month, latest_date.day)
begin = datetime.date(latest_date.year-1, latest_date.month, latest_date.day)
d = begin
date_list = []
temp_list = []
delta = datetime.timedelta(days=1)
while d <= end:
day = d.strftime("%Y-%m-%d")
if day in all_date_list:
temp_list.append(day)
temp_list.append(all_date_list.count(day))
else:
temp_list.append(day)
temp_list.append(0)
d += delta
date_list.append(temp_list)
temp_list = []
        # Pagination for the archive
try:
page = request.GET.get('page', 1)
except PageNotAnInteger:
page = 1
p = Paginator(all_articles, 10, request=request)
articles = p.page(page)
return render(request, 'archive.html', {
'all_articles': articles,
'date_list': date_list,
'end': str(end),
'begin': str(begin),
})
class CategoryList(View):
def get(self, request):
categories = Category.objects.all()
return render(request, 'category.html', {
'categories': categories,
})
class CategoryView(View):
def get(self, request, pk):
categories = Category.objects.all()
articles = Category.objects.get(id=int(pk)).article_set.all().defer('content')
try:
page = request.GET.get('page', 1)
except PageNotAnInteger:
page = 1
p = Paginator(articles, 9, request=request)
articles = p.page(page)
return render(request, 'article_category.html', {
'categories': categories,
'pk': int(pk),
'articles': articles
})
class TagList(View):
def get(self, request):
tags = Tag.objects.all()
return render(request, 'tag.html', {
'tags': tags,
})
class TagView(View):
def get(self, request, pk):
tags = Tag.objects.all()
articles = Tag.objects.get(id=int(pk)).article_set.all().defer('content')
try:
page = request.GET.get('page', 1)
except PageNotAnInteger:
page = 1
p = Paginator(articles, 9, request=request)
articles = p.page(page)
return render(request, 'article_tag.html', {
'tags': tags,
'pk': int(pk),
'articles': articles,
})
class About(View):
def get(self, request):
articles = Article.objects.all().defer('content').order_by('-add_time')
categories = Category.objects.all()
tags = Tag.objects.all()
all_date = articles.values('add_time')
latest_date = all_date[0]['add_time']
end_year = latest_date.strftime("%Y")
end_month = latest_date.strftime("%m")
date_list = []
for i in range(int(end_month), 13):
date = str(int(end_year)-1)+'-'+str(i).zfill(2)
date_list.append(date)
for j in range(1, int(end_month)+1):
date = end_year + '-' + str(j).zfill(2)
date_list.append(date)
value_list = []
all_date_list = []
for i in all_date:
all_date_list.append(i['add_time'].strftime("%Y-%m"))
for i in date_list:
value_list.append(all_date_list.count(i))
        temp_list = []  # temporary holder
        tags_list = []  # article count for each tag
tags = Tag.objects.all()
for tag in tags:
temp_list.append(tag.name)
temp_list.append(len(tag.article_set.all()))
tags_list.append(temp_list)
temp_list = []
        tags_list.sort(key=lambda x: x[1], reverse=True)  # sort by article count
top10_tags = []
top10_tags_values = []
for i in tags_list[:10]:
top10_tags.append(i[0])
top10_tags_values.append(i[1])
return render(request, 'about.html', {
'articles': articles,
'categories': categories,
'tags': tags,
'date_list': date_list,
'value_list': value_list,
'top10_tags': top10_tags,
'top10_tags_values': top10_tags_values
})
class AllArticle(View):
def get(self, request):
articles = Article.objects.order_by('-add_time').values('id', 'title', 'desc')
rst = [{'id': d['id'], 'title': d['title'], 'content': d['desc']} for d in articles]
return HttpResponse(json.dumps(rst, ensure_ascii=False))
|
A blade damper is essentially a valve or plate that is positioned over an orifice and is used to regulate the flow of air, gas or liquids ("fluids") through the orifice. The blade damper is generally mounted on an axle for rotation within the orifice, with the axle mounted relative to the orifice. One such use of a blade damper occurs within a heat exchanger having a combustor containing a fluidized bed wherein one or more dampers are placed within the heat exchanger to control the flow of air to the fluidized bed.
In general, the flow of fluid through an orifice is determined by the pressure differential across the orifice and by the surface area of the orifice. Ideally, when a damper is closed it should not allow any fluid to flow through the orifice; when it is fully open it should not restrict the flow of fluid through the orifice; and, when it is partially open, it should allow fluid to flow through the orifice in proportion to the percentage opening of the damper. With proper control of pressure, the flow of fluid through the orifice is approximately proportional to the opening of the damper. However, this linear relationship between the flow of fluid through the orifice and the damper opening is often impossible to obtain due to leakage of fluid through the damper boundary planes perpendicular to the axle.
Several techniques are known in the prior art for preventing leakage of fluid through an orifice sealed by a blade damper when the damper is closed. For example, the blades of the damper are overlapped with the edge of the orifice, or sealing strips are provided at the edge of the blade. Other techniques are employed to allow for maximum fluid flow through an orifice when the damper blade is fully open. However, none of these techniques have addressed the problem of controlling fluid leakage through boundary planes perpendicular to the axle when the blade damper is partially open.
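A toy model makes the point concrete: the ideal flow tracks the opening linearly, while edge leakage, worst near closure, distorts the low end of the curve. The coefficients below are hypothetical, chosen only to illustrate the effect described above, not taken from the patent:

```python
def damper_flow(opening, max_flow, leakage_fraction=0.05):
    """Toy model of flow through a blade damper at a fixed pressure differential.

    opening: fraction open, 0.0 (fully closed) to 1.0 (fully open)
    max_flow: flow through the unobstructed orifice
    leakage_fraction: fraction of max_flow escaping past the blade
                      boundary planes when the damper is fully closed
    """
    if not 0.0 <= opening <= 1.0:
        raise ValueError("opening must lie in [0, 1]")
    ideal = opening * max_flow                            # leak-free, linear law
    leak = leakage_fraction * max_flow * (1.0 - opening)  # vanishes when open
    return min(ideal + leak, max_flow)
```

Even fully closed, this damper passes a leakage fraction of the maximum flow, which, together with the uncontrolled leakage at partial openings noted above, is exactly what breaks the ideal linear relationship.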
|
# repository: asharakeh/bayes-od-rc
import argparse
import os
import time
import sys
import json
import yaml
import tensorflow as tf
import numpy as np
import src.core as core
import src.retina_net.experiments.validation_utils as val_utils
from src.retina_net import config_utils
from src.retina_net.builders import dataset_handler_builder
from src.retina_net.models.retinanet_model import RetinaNetModel
from src.retina_net.anchor_generator import box_utils
from src.retina_net.experiments import inference_utils
keras = tf.keras
def test_model(config):
# Get testing config
test_config = config['testing_config']
ckpt_idx = test_config['ckpt_idx']
uncertainty_method = test_config['uncertainty_method']
use_full_covar = test_config['use_full_covar']
nms_config = test_config['nms_config']
# Create dataset class
dataset_config = config['dataset_config']
training_dataset = dataset_config['dataset']
dataset_config['dataset'] = test_config['test_dataset']
dataset_handler = dataset_handler_builder.build_dataset(
dataset_config, 'test')
# Set keras training phase
keras.backend.set_learning_phase(0)
print("Keras Learning Phase Set to: " +
str(keras.backend.learning_phase()))
# Create Model
with tf.name_scope("retinanet_model"):
model = RetinaNetModel(config['model_config'])
# Initialize the model from a saved checkpoint
checkpoint_dir = os.path.join(
core.data_dir(), 'outputs',
config['checkpoint_name'], 'checkpoints', config['checkpoint_name'])
predictions_dir = os.path.join(
core.data_dir(), 'outputs',
config['checkpoint_name'], 'predictions')
os.makedirs(predictions_dir, exist_ok=True)
if not os.path.exists(checkpoint_dir):
raise ValueError('{} must have at least one checkpoint entry.'
.format(checkpoint_dir))
# Instantiate mini-batch and epoch size
epoch_size = int(dataset_handler.epoch_size)
# Create Dataset
# Main function to create dataset
dataset = dataset_handler.create_dataset()
    # Use a batch size of 1 and iterate over the dataset once.
batched_dataset = dataset.repeat(1).batch(1)
    # `prefetch` lets the dataset fetch batches in the background while the model is validating.
batched_dataset = batched_dataset.prefetch(
buffer_size=tf.data.experimental.AUTOTUNE)
print('Starting inference at ' +
time.strftime('%Y-%m-%d-%H:%M:%S', time.gmtime()))
# Initialize the model checkpoint manager
ckpt = tf.train.Checkpoint(step=tf.Variable(0), net=model)
# Begin inference loop
all_checkpoint_states = tf.train.get_checkpoint_state(
checkpoint_dir).all_model_checkpoint_paths
start = time.time()
checkpoint_to_restore = all_checkpoint_states[ckpt_idx - 1]
ckpt_id = val_utils.strip_checkpoint_id(checkpoint_to_restore)
    # Make directories if they don't exist
predictions_dir_ckpt = os.path.join(predictions_dir,
'testing',
dataset_config['dataset'],
str(ckpt_id),
uncertainty_method)
if dataset_config['dataset'] == 'rvc':
predictions_dir_ckpt = os.path.join(
predictions_dir_ckpt,
dataset_config['rvc']['paths_config']['sequence_dir'])
if uncertainty_method == 'bayes_od':
predictions_dir_ckpt += '_' + \
test_config['bayes_od_config']['fusion_method']
os.makedirs(os.path.join(predictions_dir_ckpt, 'data'), exist_ok=True)
loc_mean_dir = os.path.join(predictions_dir_ckpt, 'mean')
loc_cov_dir = os.path.join(predictions_dir_ckpt, 'cov')
cat_param_dir = os.path.join(predictions_dir_ckpt, 'cat_param')
cat_count_dir = os.path.join(predictions_dir_ckpt, 'cat_count')
os.makedirs(loc_mean_dir, exist_ok=True)
os.makedirs(loc_cov_dir, exist_ok=True)
os.makedirs(cat_param_dir, exist_ok=True)
os.makedirs(cat_count_dir, exist_ok=True)
print('\nRunning checkpoint ' + str(ckpt_id) + '\n')
# Restore checkpoint. expect_partial is needed to get rid of
# optimizer/loss graph elements.
ckpt.restore(checkpoint_to_restore).expect_partial()
# Perform dataset-specific setup of result output
if dataset_config['dataset'] == 'kitti':
pass
    elif dataset_config['dataset'] in ('bdd', 'coco', 'pascal'):
final_results_list = []
# Single json file for bdd dataset
prediction_json_file_name = os.path.join(
predictions_dir_ckpt, 'data', 'predictions.json')
elif dataset_config['dataset'] == 'rvc':
final_results_list = []
        # Single json file for rvc dataset
prediction_json_file_name = os.path.join(
predictions_dir_ckpt, 'data', 'predictions.json')
# Inference loop starts here. Iterate over samples once.
for counter, sample_dict in enumerate(batched_dataset):
output_class_counts, output_boxes_vuhw, output_covs, nms_indices, predicted_boxes_iou_mat = inference_utils.bayes_od_inference(
model, sample_dict, test_config['bayes_od_config'], nms_config, dataset_name=dataset_config['dataset'], use_full_covar=use_full_covar)
output_class_counts = output_class_counts.numpy()
output_boxes_vuhw = output_boxes_vuhw.numpy()
output_covs = output_covs.numpy()
nms_indices = nms_indices.numpy()
predicted_boxes_iou_mat = predicted_boxes_iou_mat.numpy()
if output_boxes_vuhw.size > 0:
output_classes, output_boxes_vuhw, output_covs, output_counts = inference_utils.bayes_od_clustering(
output_class_counts, output_boxes_vuhw, output_covs, nms_indices, predicted_boxes_iou_mat, affinity_threshold=nms_config['iou_threshold'])
if output_boxes_vuhw.size > 0:
output_boxes_vuhw = np.squeeze(output_boxes_vuhw, axis=2)
output_boxes = box_utils.vuhw_to_vuvu_np(output_boxes_vuhw)
else:
output_boxes_vuhw = output_boxes_vuhw
output_boxes = output_boxes_vuhw
output_counts = output_boxes_vuhw
else:
output_classes = output_boxes_vuhw
output_boxes = output_boxes_vuhw
output_covs = output_boxes_vuhw
output_counts = output_boxes_vuhw
# Perform index mapping in case training and testing datasets are not
# the same
if training_dataset != dataset_config['dataset'] and output_boxes.size > 0:
if dataset_config['dataset'] == 'kitti_tracking':
dataset_config['dataset'] = 'kitti'
output_classes_mapped = inference_utils.map_dataset_classes(
training_dataset, dataset_config['dataset'], output_classes)
else:
output_classes_mapped = output_classes
# Perform dataset-specific saving of outputs
if dataset_config['dataset'] == 'kitti':
predictions_kitti_format = val_utils.predictions_to_kitti_format(
output_boxes, output_classes_mapped)
prediction_file_name = os.path.join(
predictions_dir_ckpt,
'data',
dataset_handler.sample_ids[counter] + '.txt')
mean_file_name = os.path.join(
loc_mean_dir,
dataset_handler.sample_ids[counter] + '.npy')
covar_file_name = os.path.join(
loc_cov_dir,
dataset_handler.sample_ids[counter] + '.npy')
cat_param_file_name = os.path.join(
cat_param_dir,
dataset_handler.sample_ids[counter] + '.npy')
cat_count_file_name = os.path.join(
cat_count_dir, dataset_handler.sample_ids[counter] + '.npy')
if predictions_kitti_format.size == 0:
np.savetxt(prediction_file_name, [])
else:
np.savetxt(
prediction_file_name,
predictions_kitti_format,
newline='\r\n',
fmt='%s')
elif dataset_config['dataset'] == 'bdd':
predictions_bdd_format = val_utils.predictions_to_bdd_format(
output_boxes,
output_classes_mapped,
dataset_handler.sample_ids[counter],
category_list=dataset_handler.training_data_config['categories'])
final_results_list.extend(predictions_bdd_format)
mean_file_name = os.path.join(
loc_mean_dir,
dataset_handler.sample_ids[counter] + '.npy')
covar_file_name = os.path.join(
loc_cov_dir,
dataset_handler.sample_ids[counter] + '.npy')
cat_param_file_name = os.path.join(
cat_param_dir,
dataset_handler.sample_ids[counter] + '.npy')
cat_count_file_name = os.path.join(
cat_count_dir, dataset_handler.sample_ids[counter] + '.npy')
sys.stdout.write(
'\r{}'.format(counter + 1) + ' /' + str(epoch_size))
np.save(mean_file_name, output_boxes_vuhw)
np.save(covar_file_name, output_covs)
np.save(cat_param_file_name, output_classes)
np.save(cat_count_file_name, output_counts)
elapsed_time = time.time() - start
time_per_sample = elapsed_time / dataset_handler.epoch_size
frame_rate = 1.0 / time_per_sample
print("\nMean frame rate: " + str(frame_rate))
# Final dataset-specific wrap up work for checkpoint
# results
if dataset_config['dataset'] == 'kitti':
pass
else:
with open(prediction_json_file_name, 'w') as fp:
json.dump(final_results_list, fp, indent=4,
separators=(',', ': '))
def main():
"""Object Detection Model Validator
"""
# Defaults
default_gpu_device = '0'
default_config_path = core.model_dir(
'retina_net') + '/configs/retinanet_bdd.yaml'
# Allowed data splits are 'train','train_mini', 'val', 'val_half',
# 'val_mini'
default_data_split = 'val'
# Parse input
parser = argparse.ArgumentParser() # Define argparser object
parser.add_argument('--gpu_device',
type=str,
dest='gpu_device',
default=default_gpu_device)
parser.add_argument('--yaml_path',
type=str,
dest='yaml_path',
default=default_config_path)
parser.add_argument('--data_split',
type=str,
dest='data_split',
default=default_data_split)
args = parser.parse_args()
# Set CUDA device id
os.environ['CUDA_VISIBLE_DEVICES'] = args.gpu_device
physical_devices = tf.config.experimental.list_physical_devices('GPU')
tf.config.experimental.set_memory_growth(physical_devices[0], True)
# Load in configuration file as python dictionary
with open(args.yaml_path, 'r') as yaml_file:
config = yaml.load(yaml_file, Loader=yaml.FullLoader)
# Make necessary directories, update config with checkpoint path and data
# split
config = config_utils.setup(config, args)
# Go to inference function
test_model(config)
if __name__ == '__main__':
main()
|
Reported Order of Importance Does not Predict Fixation Order when Viewing Driving Scenes Distracted driving and its negative effects on driving performance are well documented. Eye movement patterns of distracted drivers have also been studied, though insight into what the driver specifically looks at is not as well understood. Researchers have studied eye movement metrics such as eyes-off-road glance times and time-to-first-fixation over an entire drive, but not what the driver is looking at at a specific moment in time. The current study used eye tracking to investigate what objects and areas people looked at in driving scenes and what they reported they would look at later in the same scenes. The results suggest that people look where they say they would look, but not in the order they reported. This finding demonstrates that participants may scrutinize scenes differently at various times but attend to the same objects or areas, indicating an associated importance, semantic constraints, and relevance for driving.
|
Responsibility and punishment: whose mind? A response. Cognitive neuroscience is challenging the Anglo-American approach to criminal responsibility. Critiques, in this issue and elsewhere, are pointing out the deeply flawed psychological assumptions underlying the legal tests for mental incapacity. The critiques themselves, however, may be flawed in looking, as the tests do, at the psychology of the offender. Introducing the strategic structure of punishment into the analysis leads us to consider the psychology of the punisher as the critical locus of cognition informing the responsibility rules. Such an approach both helps to make sense of the counterfactual assumptions about offender psychology embodied in the law and provides a possible explanation for the human conviction of the existence of free will, at least in others.
|
describe('Login', () => {
  it('should check validation on login page', () => {
    cy.visit('auth/login');
    const validateLogin = ['email', 'password'];
    validateLogin.forEach((key) => {
      cy.datacy(key).find('input').focus().blur();
      cy.datacy(key).should('have.class', 'mat-form-field-invalid');
    });
    cy.login('newUser', '<EMAIL>', '<PASSWORD>');
    cy.contains('Sorry :( Credentials Dont Match!');
  });
  it('should login using admin credential', () => {
    cy.login('admin');
  });
});
|
//
// MCBaseWorkSpaceManager.h
// NPushMail
//
// Created by wuwenyu on 2017/2/9.
// Copyright © 2017 sprite. All rights reserved.
//
#import <Foundation/Foundation.h>
@interface MCBaseWorkSpaceManager : NSObject
+ (void)refreshWorkSpaceData;
@end
|
/**
* Store the version of the code using Semantic Versioning.
*
* <p>The major/minor version number will be updated when significant functionality has changed.
* Otherwise the patch version will be incremented.
*
* <p>Note that this is the version of the uk.ac.sussex.gdsc.analytics package. It may be different
* from the Maven version for the gdsc-analytics artifact.
*
* @see "http://semver.org/"
*/
public final class VersionUtils {
/** The major version. */
public static final int MAJOR = 2;
/** The minor version. */
public static final int MINOR = 0;
/** The patch version. */
public static final int PATCH = 0;
/** The major version string. */
public static final String VERSION_X;
/**
* The major.minor version string.
*/
public static final String VERSION_X_X;
/**
* The major.minor.patch version string.
*/
public static final String VERSION_X_X_X;
/** Define level 1. */
private static final int LEVEL_ONE = 1;
/** Define level 2. */
private static final int LEVEL_TWO = 2;
static {
VERSION_X = getVersion(1);
VERSION_X_X = getVersion(2);
VERSION_X_X_X = getVersion(3);
}
/**
* Do not allow public construction.
*/
private VersionUtils() {
// Do nothing
}
/**
* Get the version as a string. The string is built as major.minor.patch using the specified
* number of levels.
*
* @param levels The number of levels (1-3).
* @return The version
*/
public static String getVersion(int levels) {
final StringBuilder version = new StringBuilder().append(MAJOR);
if (levels > LEVEL_ONE) {
version.append('.').append(MINOR);
}
if (levels > LEVEL_TWO) {
version.append('.').append(PATCH);
}
return version.toString();
}
}
|
After waiting nearly four years Marco Antonio Barrera is about to step back into the ring with Manny Pacquiao, the man who shocked the western world. The last time around Barrera had to be stopped by his corner after suffering through nearly 11 brutal rounds against the powerful Filipino.
This time it's Pac Man who's coming into the ring as a big favorite and he's carrying some additional baggage. Pacquiao began training with Freddie Roach a month late and in a different country than everyone expected. However the biggest questions being raised do not concern his conditioning. Instead, it's the life the champion is leading outside the ring that has had critics doubting his focus in the past year. Pacquiao is far and away the biggest star in his native country and he's been roundly accused of falling into the typical pitfalls of celebrity. After a rather complacent victory (as complacent as a seventh round KO can be) over Jorge Solis some people—myself included—have speculated that Barrera could be a very live underdog tonight.
If you're looking to offset the cost of the pay-per-view you might want to throw down a few bucks on Barrera to win by decision at 4/1. It's far from a sure thing, but an all-time great Mexican champion like Barrera always has a fighting chance.
|
A Study on Factors Affecting Job Seekers' Perception and Behavioural Intention towards E-Recruitment Technology has played a vital role in education, not only in enhancing students' academic excellence and improving teachers' professional quality, but also in the recruitment of students once they become job seekers. Where education makes them ready to be recruited by industry, technology eases the process of recruitment through E-Recruitment 1. With technological development, the modern way of recruitment (E-Recruitment) is used in the majority of corporates, and the present generation, with its strong inclination towards technology, is dominating the work sphere and increasing efficiency and effectiveness at the workplace. E-Recruitment is the latest trend and has been adopted by many large and small corporations. Augmented use of e-recruitment methods and systems facilitates this trend by eliminating much of the routine administrative work involved in recruiting and allowing human resource managers to more easily monitor and track recruitment-related activities (Holm, 201).2 The purpose of this study is to examine the impact of factors on the perception of job seekers, who are millennial students in this study, and their behavioural intention (BI) towards E-Recruitment. The type of research employed in this study is exploratory cum descriptive. Factor analysis and regression are used as tools of analysis. In conclusion, this study will enable one to understand that there is a significant effect of job seekers' perception on intention towards e-recruitment.
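The analysis pipeline the abstract describes — reducing survey items to latent factors and then regressing behavioural intention on the factor scores — can be sketched as follows. This is a minimal NumPy sketch that uses PCA as a simple stand-in for the exploratory factor analysis reported in the study; the respondent count, item count, and data are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented survey data: 200 respondents x 6 perception items (Likert 1-5).
items = rng.integers(1, 6, size=(200, 6)).astype(float)
# Invented outcome: behavioural intention towards e-recruitment.
intention = items.mean(axis=1) + rng.normal(0, 0.3, size=200)

# Step 1: extract 2 latent factors (PCA via SVD, used here as a
# stand-in for the factor analysis reported in the abstract).
centered = items - items.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ vt[:2].T          # factor scores (200 x 2)

# Step 2: regress intention on the factor scores (OLS via lstsq).
X = np.column_stack([np.ones(len(scores)), scores])
beta, *_ = np.linalg.lstsq(X, intention, rcond=None)
pred = X @ beta
r2 = 1 - ((intention - pred) ** 2).sum() / ((intention - intention.mean()) ** 2).sum()
```

The coefficient vector `beta` and the fit statistic `r2` then support the kind of significance claim the abstract makes about perception predicting intention.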
|
The simple practice of slow breathing may help people deal with the physical and emotional reactions to moderate pain, a small study suggests.
Researchers say the findings, published in the journal Pain, offer support for the idea that yoga-style breathing exercises and meditation can help ease chronic pain.
The study gauged pain responses among 27 women with the chronic pain condition fibromyalgia and 25 healthy women the same age.
Researchers found that when they had the women perform slow breathing, it dampened their reactions to a moderately painful stimulus — brief pulses of heat from a probe placed on the palm. Overall, the women rated the pain intensity as lower and reported less emotional discomfort when they slowed their normal breathing rate down by half.
The benefit was greater and more consistent among the healthy study participants than those with fibromyalgia.
However, the findings suggest that breathing techniques could offer an additional way to deal with fibromyalgia or other types of chronic pain, according to lead researcher Dr. Alex J. Zautra, a psychology professor at Arizona State University in Tempe.
"What's really valuable is that we were able to put this yoga-like, meditation approach under the microscope," he told Reuters Health in an interview.
The study did not assess any formal yoga or meditation technique, but did look at the effects of becoming more aware of your breathing, which is at the foundation of those practices. The findings, according to Zautra, appear to be the first to show that "how we breathe" does alter perceptions of and responses to pain.
He and his colleagues are currently studying the effects of mindfulness meditation as part of fibromyalgia treatment.
Fibromyalgia is a syndrome marked by widespread aches and pains — on both sides of the body and above and below the waist — along with other symptoms such as fatigue, sleep problems and depression. Its cause is unclear — there are no physical signs, such as inflammation — but researchers believe that fibromyalgia involves problems in how the brain processes pain signals.
"It is not 'all in your head,'" Zautra noted, "but it may be in your brain."
Slow breathing, he explained, may help by bringing a better balance to the activities of the sympathetic and parasympathetic nervous systems.
The sympathetic nervous system activates what is often dubbed the "fight-or-flight" response during times of stress — increasing heart rate, blood pressure and perspiration, for example. If the sympathetic nervous system is seen as an accelerator, then the parasympathetic nervous system is akin to a brake.
Learning breathing techniques might be particularly useful for painful conditions like fibromyalgia, but Zautra said there is also potential for helping people deal with other types of chronic pain, like osteoarthritis and lower back pain.
People are "remarkably resilient" in their capacity to recover from pain, Zautra explained. "Sometimes they just need a little help."
|
Best Answer: How long have you owned the game? if you have just bought it there was a MASSIVE patch released a few months ago which does take one helluva long time to download. if you have had the game a while and it has just recently stopped working then your best bet is to contact EA support for a resolution, although good luck with that because in my experience EA support is run by a bunch of condescending sphincter lickers who have no right to walk on gods good earth and they need to be dragged testicles first over a 2 mile dragstrip covered with broken glass and battery acid
Source(s): 272+ hours of gameplay
|
"""Constants for the bot."""
import os
from pathlib import Path
TOKEN = os.environ.get("FRIENDO_TOKEN")
MEME_USERNAME = os.environ.get("MEME_USERNAME")
MEME_PASSWORD = os.environ.get("MEME_PASSWORD")
# event api key
EVENT_API_KEY = os.environ.get("EVENT_API_KEY")
WEATHER_TOKEN = os.environ.get("WEATHER_TOKEN")
COMMAND_PREFIX = "."
VERSION = "1.2."
NAME = "Friendo"
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
IMG_CACHE = Path(BASE_DIR, "image_cache")
BASE_GITHUB_REPO = "https://github.com/fisher60/Friendo_Bot"
LOG_FILE_NAME = "friendo.log"
LOG_FILE_PATH = Path(BASE_DIR, "logs")
API_COGS = ["events", "memes"]
|
"""
Name: Bedgraph2VariableStepWiggle.py
Created by: <NAME>
Date: 4/11/16
Update: 4/13/17 to work with python3 and make all chromosome file
Update: 4/18/17 to only make all chromosome file if more than one chromosome is included
Update: 4/23/17 to work with ChIPseq-pipeline_v3.sbatch
Note: currently only works with span=1
Key detail 1: Bedgraphs use a 0-based coordinate system, while wiggle files use a 1-based coordinate system.
Key detail 2: The end position of a bedgraph is not inclusive.
"""
##################################################################################
# Modules
from collections import defaultdict
import os
import sys
import optparse
from datetime import datetime
##################################################################################
# Functions
def read_bedgraph(bedgraph):
# Purpose: to read in a bedgraph file and create a sorted dict composed of lists of lists
# step 1: read in bedgraph
f = open(bedgraph, 'r')
bedG = f.readlines()
f.close()
# step 2: organize bedgraph into a dict (chromosomes)
# of lists (each row of bedgraph) of lists (start, end, score)
for i in range( len(bedG) ):
bedG[i] = bedG[i].strip().split('\t')
bedD = defaultdict(list)
for i in bedG:
if len(i) != 4:
print( "Some rows in this bedgraph are not complete. Cannot create a wiggle file.\n" )
exit()
else:
bedD[i[0]].append( list(map(float, i[1:])) )
# step 3: for each chromosome do numeric sort by start
for key in bedD.keys():
bedD[key].sort()
return bedD
def create_variable_wiggle(bedgraph, bedD):
# Purpose: to convert a bedgraph to a variable-step wiggle file
# step 1: get information from file name
filename = bedgraph.split(".")[0]
location = os.getcwd()
if os.path.isdir(filename):
exit()
else:
if ( len(bedD) > 1 ):
os.mkdir(filename)
os.chdir(filename)
# if bedgraph file includes SPMR, it means it is from MACS2 with SPMR analysis
if filename.find("W3") != -1:
dataSource = "Extended tag pileup (200bp) from MACS2 with FE and SPMR normalization for every 1 bp"
elif filename.find("SPMR") != -1:
dataSource = "Extended tag pileup from MACS2 with SPMR normalization for every 1 bp"
else:
dataSource = filename
# step 2: determine chromosome naming system
# chromosome list for SK1 and SacCer3 in chromosome number order
SK1K = ( "chr01", "chr02", "chr03", "chr04", "chr05", "chr06", "chr07", "chr08",
"chr09", "chr10", "chr11", "chr12", "chr13", "chr14", "chr15", "chr16" )
SacCer3 = ( "chrI", "chrII", "chrIII", "chrIV", "chrV", "chrVI", "chrVII", "chrVIII",
"chrIX", "chrX", "chrXI", "chrXII", "chrXIII", "chrXIV", "chrXV", "chrXVI" )
# use first key as method to determine which genome data was mapped to
if list(bedD.keys())[0] in SK1K:
chromSet = SK1K
elif list(bedD.keys())[0] in SacCer3:
chromSet = SacCer3
else:
print( "Do not recognize chromosome names.\n" )
sys.exit()
# step 3: for each chromosome make header, do calculations, and write to file
for chrName in chromSet:
print( chrName + ": " + datetime.now().ctime() )
# create header
header = ( "track type=wiggle_0 name=" + filename + "_" + chrName + " description=" + dataSource +
"\nvariableStep chrom=" + chrName + " span=1\n" )
out = [ ]
# for each row in bedgraph for individual chromosome
if chrName in bedD.keys():
for row in range( len( bedD[chrName] ) ):
# ensure that end of previous row is not bigger than start of current row or trim start position of row
if row != 0:
if bedD[chrName][row][0] < bedD[chrName][row-1][1]:
print ( "Warning: Overlaps begin at " + chrName + ":" + str( int( bedD[chrName][row][0] ) ) +
"-" + str( int( bedD[chrName][row][1] ) ) + " ... Trimming row\n" )
bedD[chrName][row][0] = bedD[chrName][row-1][1]
# skip rows with zero as score
if bedD[chrName][row][2] != 0:
# expand positions and convert to wiggle numbering system and add score for each position
positions = range( int( bedD[chrName][row][0] ) + 1, int( bedD[chrName][row][1] ) + 1 )
tmp = "\n".join( [ str(position) + "\t" + str( bedD[chrName][row][2] ) for position in positions ] )
out.append(tmp)
# to only print out files for chromosomes with information
if ( len(out) != 0 ):
# write wig file for individual chromosome
f = open( filename + "_" + chrName + ".wig", 'w')
f.write( header )
f.write( '\n'.join(out) )
f.close()
if ( len(bedD) > 1 ):
# write to wig file for all chromosomes
g = open( filename + "_all.wig", 'a')
g.write( header )
g.write( '\n'.join(out) )
g.write( '\n' )
g.close()
os.chdir(location)
################################################################################
# Main
desc="""
A script to convert from a bedgraph into a variableStep wiggle file.
It is NOT designed to work with bedgraphs with overlapping fragments.
Creates wiggle files for each individual chromosome with information.
Uses input file name to determine output file names.
Note: this function can currently only handle span=1.
Note: this function is designed to work with python3.
"""
# parse object for managing input options
#parser = optparse.OptionParser()
parser = optparse.OptionParser(description=desc)
# essential data, defines commandline options
parser.add_option('-b', dest= "bedgraph", default= '', help= "This is the name \
of the input bedgraph file.")
# load the inputs
(options,args) = parser.parse_args()
# reads the inputs from commandline
bedgraph = options.bedgraph
bedgraphDict = read_bedgraph( bedgraph )
a = create_variable_wiggle( bedgraph, bedgraphDict )
|
/**
* Allows model to add more commands to execute
* @param lst - list of Commands to add to myCommands
*/
public void addCommands(Collection<Command> lst) {
for (Command c : lst) {
myCommands.add(c);
}
}
|
/*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package fr.insee.sugoi.commons.services.configuration;
import com.fasterxml.jackson.dataformat.xml.XmlMapper;
import com.fasterxml.jackson.dataformat.xml.ser.ToXmlGenerator;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.http.converter.json.Jackson2ObjectMapperBuilder;
import org.springframework.http.converter.xml.MappingJackson2XmlHttpMessageConverter;
@Configuration
public class XmlConfig {
@Bean
public MappingJackson2XmlHttpMessageConverter mappingJackson2XmlHttpMessageConverter(
Jackson2ObjectMapperBuilder builder) {
XmlMapper xmlMapper = builder.createXmlMapper(true).build();
xmlMapper.enable(ToXmlGenerator.Feature.WRITE_XML_DECLARATION);
return new MappingJackson2XmlHttpMessageConverter(xmlMapper);
}
}
|
Minerals explained 40: The spinels It has been established that Mg-rich olivine (Mg,Fe)2SiO4 is one of the principal minerals in the upper levels of the Earth's mantle. With an induced increase in depth (~400 km) and an increase in pressure within the mantle, and because the co-ordination of cations in olivine is similar to that of spinel, a transition from olivine to spinel becomes a practical possibility. The outcome is the formation of a more compact structure that is denser by ~10 per cent. The phase changes have important consequences for the properties and behaviour of the mantle. Seismic studies show that the creation of these denser phases occurs at depths of ~400 km, with pressures of ~130 kbar and temperatures of 1500 °C. It has been further suggested that the same phase transformations may be one of the mechanisms which initiate deep-focus earthquakes, especially if the phase changes are relatively rapid and explosive. They may also be the principal mechanism in the sinking of oceanic crustal slabs in plate tectonics. While these mechanisms are fundamental to the study of spinel genesis and global processes, the genetic mechanisms of the Spinel Group are more complex and highly variable. A large proportion of the members of the group are of high-temperature origin. There are at least 30 oxide minerals which have the spinel structure, although the Spinel Supergroup involves many more. Some are rare and hardly relevant to this study. The principal members of the Group explained here have the formula AB2O4, where A is a divalent metal such as Mg, Fe or Mn and B is a trivalent metal such as Al, Fe or Cr. Fortunately most spinels form conveniently into three series determined by the B metal: a Spinel Series (sensu stricto) with Al; a Magnetite Series with Fe; and a Chromite Series with Cr (Table 1). There is extensive cationic exchange (solid solution) within each series but very little between series.
For practical reasons only the principal member of each series has been chosen for explanation, although other related members will be mentioned as necessary.
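The three-series classification above reduces to a simple lookup keyed on the trivalent B cation of the AB2O4 formula. The helper below is only an illustrative sketch — the function name and the fallback label are invented, and the mapping follows the series named in the text's Table 1:

```python
# Map the trivalent B cation of an AB2O4 spinel to its series, as per
# Table 1 of the text (A is a divalent metal such as Mg, Fe or Mn).
SERIES_BY_B_CATION = {
    "Al": "Spinel Series (sensu stricto)",
    "Fe": "Magnetite Series",
    "Cr": "Chromite Series",
}

def spinel_series(b_cation):
    # Fall back to a generic label for the rarer members of the
    # supergroup that the text mentions but does not classify.
    return SERIES_BY_B_CATION.get(b_cation, "other/rare spinel")
```

For example, chromite FeCr2O4 (B = Cr) falls in the Chromite Series, while spinel sensu stricto MgAl2O4 (B = Al) falls in the Spinel Series.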
|
import * as React from 'react';
import { withData as withOrbit, WithDataProps } from 'react-orbitjs';
import { ResourceObject } from 'jsonapi-typescript';
import { defaultOptions, REVIEWERS_TYPE } from '@data';
import { ReviewerAttributes } from '@data/models/reviewer';
export interface IProvidedProps {
createRecord: (attrs: ReviewerAttributes, relationships) => any;
removeRecord: () => any;
updateAttribute: (attribute: string, value: any) => any;
updateAttributes: (attrs: ReviewerAttributes) => any;
}
interface IOwnProps {
reviewer: ResourceObject<REVIEWERS_TYPE, ReviewerAttributes>;
}
type IProps = IOwnProps & WithDataProps;
export function withDataActions<T>(WrappedComponent) {
class ReviewerDataActionWrapper extends React.Component<IProps & T> {
createRecord = async (attributes: ReviewerAttributes, relationships) => {
const { dataStore } = this.props;
await dataStore.update(
(q) =>
q.addRecord({
type: 'reviewer',
attributes,
relationships,
}),
defaultOptions()
);
};
removeRecord = async () => {
const { reviewer, dataStore } = this.props;
await dataStore.update(
(q) =>
q.removeRecord({
type: 'reviewer',
id: reviewer.id,
}),
defaultOptions()
);
};
updateAttribute = async (attribute: string, value: any) => {
const { reviewer, dataStore } = this.props;
await dataStore.update(
(q) => q.replaceAttribute(reviewer, attribute, value),
defaultOptions()
);
this.forceUpdate();
};
updateAttributes = (attributes: ReviewerAttributes) => {
const { reviewer, updateStore } = this.props;
const { id, type } = reviewer;
return updateStore(
(q) =>
q.replaceRecord({
id,
type,
attributes,
}),
defaultOptions()
);
};
render() {
const actionProps = {
createRecord: this.createRecord,
removeRecord: this.removeRecord,
updateAttributes: this.updateAttributes,
updateAttribute: this.updateAttribute,
};
return <WrappedComponent {...this.props} {...actionProps} />;
}
}
return withOrbit({})(ReviewerDataActionWrapper);
}
|
/*
* Copyright 2009-2017, Acciente LLC
*
* Acciente LLC licenses this file to you under the
* Apache License, Version 2.0 (the "License"); you
* may not use this file except in compliance with the
* License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in
* writing, software distributed under the License is
* distributed on an "AS IS" BASIS, WITHOUT WARRANTIES
* OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.acciente.oacc.sql.internal.persister;
import java.sql.Connection;
import java.sql.SQLException;
public class SQLConnection {
private final Connection connection;
public SQLConnection(Connection connection) {
this.connection = connection;
}
public SQLStatement prepareStatement(String sql) throws SQLException {
return new SQLStatement(connection.prepareStatement(sql));
}
public SQLStatement prepareStatement(String sql, String[] generatedKeyColumns) throws SQLException {
return new SQLStatement(connection.prepareStatement(sql, generatedKeyColumns));
}
public void close() throws SQLException {
this.connection.close();
}
}
|
/**
* <p>
* Store the specified DeepWellPlate in the specified WritableVersion.
* </p>
*
* <p>
* TODO Currently unable to persist createDate.
* </p>
*
* @param dwp - the DeepWellPlate to store
* @param wv - the WritableVersion in which to store the DeepWellPlate
* @throws BusinessException if something goes wrong
*/
public void create(DeepWellPlate dwp, WritableVersion wv) throws BusinessException {
log.debug("In create");
ScreenDAO screenDAO = new ScreenDAO(wv);
RefHolder screenHolder = screenDAO.getPO(dwp.getScreen());
if (null == screenHolder) {
throw new BusinessException("no such screen: " + dwp.getScreen().getName());
}
HolderCategory dwpCategory = DAOUtils.findHolderCategory(wv, HOLDER_CATEGORY_NAME);
if (null == dwpCategory) {
throw new BusinessException("no \"" + HOLDER_CATEGORY_NAME
+ "\" category - xtalPiMS ref data not installed?");
}
SampleCategory screenCategory =
wv.findFirst(SampleCategory.class, SampleCategory.PROP_NAME, SCREEN_CATEGORY_NAME);
if (null == screenCategory) {
throw new BusinessException("no \"" + SCREEN_CATEGORY_NAME
+ "\" category - xtalPiMS ref data not installed?");
}
Set<SampleCategory> scs = new HashSet<SampleCategory>();
scs.add(screenCategory);
try {
Map<String, Object> hattr = new HashMap<String, Object>();
hattr.put(Holder.PROP_NAME, dwp.getBarcode());
hattr.put(Holder.PROP_STARTDATE, dwp.getActivationDate());
hattr.put(Holder.PROP_ENDDATE, dwp.getDestroyDate());
Set<HolderCategory> cats = new HashSet<HolderCategory>();
cats.add(dwpCategory);
hattr.put(Holder.PROP_HOLDERCATEGORIES, cats);
log.debug("creating Holder");
AbstractModelObject plateHolder = new Holder(wv, hattr);
log.debug("created Holder");
Map<String, Object> rhoattr = new HashMap<String, Object>();
rhoattr.put(RefHolderOffset.PROP_HOLDER, plateHolder);
rhoattr.put(RefHolderOffset.PROP_REFHOLDER, screenHolder);
rhoattr.put(RefHolderOffset.PROP_COLOFFSET, 0);
rhoattr.put(RefHolderOffset.PROP_ROWOFFSET, 0);
rhoattr.put(RefHolderOffset.PROP_SUBOFFSET, 0);
new RefHolderOffset(wv, rhoattr);
log.debug("created RefHolderOffset");
for (RefSamplePosition position : screenHolder.getRefSamplePositions()) {
log.debug("start of loop");
WellPosition wp =
new WellPosition(position.getRowPosition(), position.getColPosition(), position
.getSubPosition());
Float currentAmount = null;
String unit = "L";
ConditionQuantity cq = dwp.getConditions().get(wp);
if (null != cq) {
currentAmount = new Float(cq.getQuantity());
unit = cq.getUnit();
}
Map<String, Object> attr = new HashMap<String, Object>();
attr.put(Sample.PROP_COLPOSITION, position.getColPosition());
attr.put(Sample.PROP_CURRENTAMOUNT, currentAmount);
attr.put(Sample.PROP_AMOUNTDISPLAYUNIT, "ul");
attr.put(Sample.PROP_AMOUNTUNIT, unit);
attr.put(Sample.PROP_HOLDER, plateHolder);
attr.put(Sample.PROP_NAME, plateHolder.getName() + ":" + wp.toStringNoSubPosition());
attr.put(Sample.PROP_REFSAMPLE, position.getRefSample());
attr.put(Sample.PROP_ROWPOSITION, position.getRowPosition());
attr.put(Sample.PROP_SUBPOSITION, position.getSubPosition());
attr.put(Sample.PROP_SAMPLECATEGORIES, scs);
log.debug("creating Sample");
new Sample(wv, attr);
log.debug("created Sample");
log.debug("end of loop");
}
log.debug("done create");
}
catch (ConstraintException e) {
throw new BusinessException(e.getMessage(), e);
}
}
|
Hubble Space Telescope WFC3 Grism Spectroscopy and Imaging of a Growing Compact Galaxy at z=1.9

We present HST/WFC3 grism spectroscopy of the brightest galaxy at z>1.5 in the GOODS-South WFC3 Early Release Science grism pointing, covering the wavelength range 0.9-1.7 micron. The spectrum is of remarkable quality and shows the redshifted Balmer lines Hβ, Hγ, and Hδ in absorption at z=1.902, correcting previous erroneous redshift measurements from the rest-frame UV. The average rest-frame equivalent width of the Balmer lines is 8 ± 1 Å, which can be produced by a post-starburst stellar population with a luminosity-weighted age of ~0.5 Gyr. The M/L ratio inferred from the spectrum implies a stellar mass of ~4 × 10^11 M⊙. We determine the morphology of the galaxy from a deep WFC3 F160W image. Similar to other massive galaxies at z~2, the galaxy is compact, with an effective radius of 2.1 ± 0.3 kpc. Although most of the light is in a compact core, the galaxy has two red, smooth spiral arms that appear to be tidally induced. The spatially resolved spectroscopy demonstrates that the center of the galaxy is quiescent and the surrounding disk is forming stars, as it shows Hβ in emission. The galaxy is interacting with a companion at a projected distance of 18 kpc, which also shows prominent tidal features. The companion has a slightly redder spectrum than the primary galaxy but is a factor of ~10 fainter and may have a lower metallicity. It is tempting to interpret these observations as "smoking gun" evidence for the growth of compact, quiescent high-redshift galaxies through minor mergers, which has been proposed by several recent observational and theoretical studies. Interestingly, both objects host luminous AGNs, as indicated by their X-ray luminosities, which implies that these mergers can be accompanied by significant black hole growth.
This study illustrates the power of moderate-dispersion, low-background near-IR spectroscopy at HST resolution, which is now available with the WFC3 grism.

INTRODUCTION

The formation history of massive galaxies is not well understood. Present-day galaxies with stellar masses ≳ 3 × 10^11 M⊙ are typically giant elliptical galaxies in the centers of galaxy groups. These galaxies have old stellar populations and follow tight scaling relations between their velocity dispersions, sizes, surface brightnesses, line strengths, and other parameters (e.g., Djorgovski & Davis 1987). At redshifts z ∼ 2 massive galaxies form a more complex population. A fraction of the population is forming stars at a high rate, as determined from their brightness in the rest-frame UV or IR, emission lines such as Hα, and other indicators (e.g., many studies). However, others have no clear indications of ongoing star formation and have spectral energy distributions (SEDs) characterized by strong Balmer or 4000 Å breaks. The existence of these "quiescent" galaxies at this early epoch is in itself remarkable, and provides constraints on the accretion and thermodynamics of gas in massive halos at z > 2 (e.g., Dekel & Birnboim 2006). What is perhaps even more surprising is that these galaxies are structurally very different from early-type galaxies in the nearby Universe: their effective radii are typically 1-2 kpc, much smaller than those of nearby giant ellipticals. Several explanations have been offered for the dramatic size difference between local massive galaxies and quiescent galaxies at high redshift. The simplest is that observers underestimated the sizes and/or overestimated the masses. Although subtle errors are almost certainly present in the interpretation of the data, recent studies suggest that it is difficult to change the sizes and the masses by more than a factor of 1.5, unless the IMF is altered.
Other explanations include extreme mass loss due to a quasar-driven wind, strong radial age gradients leading to large differences between mass-weighted and luminosity-weighted ages (La Barbera & de Carvalho 2009), star formation due to gas accretion, and selection effects. Perhaps the most plausible mechanism for bringing the compact z ∼ 2 galaxies onto the local mass-size relation is (minor) merging (e.g., Naab, Johansson, & Ostriker 2009; Carrasco, Conselice, & Trujillo 2010). Numerical simulations predict that such mergers are frequent (Guo & White 2008); furthermore, they may lead to stronger size growth than mass growth. From an analysis of mass evolution at fixed number density, it has been inferred that massive galaxies have doubled their mass since z = 2, with ∼ 80% of this mass growth attributed to mergers. Although qualitatively consistent with observations and theory, the minor merger scenario currently has little direct evidence to support it. It is also not clear whether properties other than sizes and masses are easily explained in this context; one of the open questions is why present-day elliptical galaxies are so red and homogeneous if half of their mass was accreted from the general field at relatively recent times. Ideally we would identify and study the infalling population directly at high redshift, but so far this has been hampered by the limitations of ground-based spectroscopy and ground- and space-based near-IR imaging.

(Fig. 2: HST ACS and WFC3 images of FW-4871 and its companion FW-4887. The ACS color image was created from the B435, V606, and z850 bands, and the WFC3 image from the Y098, J125, and H160 bands. FW-4871 has a compact core and spiral arms, which may be the result of an interaction with FW-4887. Red circles are the locations of X-ray sources in the Luo et al. catalog, with the size of the circles indicating the uncertainties in the positions. Both galaxies host an AGN. The SEDs of the two galaxies (from Wuyts et al.) are shown in the right-most panel. The galaxies are both red and have broadly similar SEDs.)

In this Letter, we use the exquisite WFC3 grism on the Hubble Space Telescope (HST), in combination with WFC3 imaging, to study the environment of a quiescent compact galaxy at z = 1.9. As we show below, the observations presented here provide the first direct evidence for minor mergers as a mechanism for the growth of compact galaxies at high redshift. We use H0 = 70 km s⁻¹ Mpc⁻¹, Ωm = 0.3, and ΩΛ = 0.7. Magnitudes are on the AB system.

SELECTION AND BASIC DATA

The Early Release Science (ERS) WFC3 imaging observations of the GOODS-South field comprise a mosaic of eight HST pointings. All eight pointings were observed with a suite of imaging filters, but only one was observed with the G102 and G141 grisms. The grism data are important for measuring the redshifts, ages, and star formation rates of massive galaxies at high redshift and indispensable for measuring the redshifts of any faint companion galaxies. The G141 grism is particularly useful as it is very sensitive and its wavelength range of 1.1-1.7 µm covers the redshifted Balmer lines, 4000 Å break, and emission lines at z ∼ 2. Here we concentrate on the brightest galaxy at z > 1.5 in the 2′ × 2′ grism field, indicated with the arrow in Fig. 1. The galaxy has ID number 4871 in the K-selected FIREWORKS catalog of GOODS-South, and has a total K magnitude of 19.7. ACS and WFC3 color images of the galaxy are shown in Fig. 2, along with the SED from the Wuyts et al. catalog. The galaxy is faint and unremarkable in the ACS bands but very bright in the WFC3 images, owing to its red SED. It is composed of a compact core in addition to diffuse spiral arms, which appear to originate from a tidal interaction with a companion galaxy, object FW-4887 in the FIREWORKS catalog. The companion has a similar SED to FW-4871 but is a factor of ∼ 10 fainter at K = 22.0.
It has a 2″-long tidal tail, extending away from FW-4871. Interestingly, both FW-4871 and FW-4887 are X-ray sources (Luo et al., ID numbers 145 and 142, respectively). Their X-ray luminosities are 6.4 × 10^43 erg s⁻¹ and 3.5 × 10^43 erg s⁻¹, respectively, where we used the full-band fluxes from Luo et al. and the redshift derived below.

(Fig. 1: Filled circles are galaxies at z > 1.5, with the size of the circle indicating the brightness in the K band. The green box shows the location of the single HST/WFC3 G141 grism exposure that has been obtained as part of the WFC3 ERS. The arrow indicates the brightest galaxy at z > 1.5 in this pointing, object FW-4871 in the Wuyts et al. catalog.)

These luminosities would imply star formation rates ≫ 1000 M⊙ yr⁻¹ (Ranalli, Comastri, & Setti 2003), and we conclude that both galaxies almost certainly host an active galactic nucleus (AGN). The AGN in the companion galaxy is likely heavily obscured: FW-4887 has an 8 µm "upturn" (see Fig. 2) and is a very bright MIPS 24 µm source with a flux density of 0.4 mJy. FW-4871 has been targeted several times for optical spectroscopy. Three spectroscopic redshifts are available, all from the GOODS-VIMOS survey: z = 0.352, z = 2.494, and z = 2.609, with qualities C, C, and B, respectively. As we show below, all three redshifts are incorrect.

3. HST WFC3 GRISM SPECTROSCOPY

The field was observed with the G102 and G141 grisms, providing continuous wavelength coverage from 0.8-1.7 µm for all objects in the 2′ × 2′ WFC3/IR field of view. Each grism image has a total integration time of 4212 s, divided over four dithered exposures in two orbits. We reduced the grism observations and extracted spectra using a combination of standard pyraf tasks (e.g., multidrizzle), the aXe package, and custom scripts to improve background subtraction and optimize the extraction apertures.
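As a rough consistency check on the AGN argument above: if the quoted X-ray luminosities were powered by star formation alone, the Ranalli, Comastri, & Setti (2003) calibration, SFR ≈ 2.0 × 10⁻⁴⁰ L_X(2-10 keV) M⊙ yr⁻¹, would imply implausibly high rates. A minimal sketch, with the caveat that the hard-band calibration is applied here to the quoted full-band luminosities as an approximation:

```python
# If star formation powered the X-rays, the Ranalli et al. (2003)
# hard-band calibration would imply SFRs far above 1000 Msun/yr,
# supporting the AGN interpretation in the text.
RANALLI_COEFF = 2.0e-40  # (Msun/yr) per (erg/s), 2-10 keV calibration

l_x = {"FW-4871": 6.4e43, "FW-4887": 3.5e43}  # erg/s, from Luo et al.
for name, lum in l_x.items():
    sfr = RANALLI_COEFF * lum
    print(f"{name}: implied SFR ~ {sfr:.0f} Msun/yr")
```

Both values land well above the ≫ 1000 M⊙ yr⁻¹ threshold quoted in the text.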
The wavelength calibration and extraction apertures for G102 and G141 are based on undispersed images in Y098 and H140, respectively. These direct images were obtained at the same dither positions as the dispersed data. The grism spectrum of FW-4871 is shown in Fig. 3; it is of very high quality, with S/N ≈ 90 per 47 Å pixel at 1.2 µm. The galaxy has strong Hβ, Hγ, and Hδ absorption lines, and a pronounced Balmer break. The redshift is z = 1.902 ± 0.002. The [O III] lines are undetected; the upper limit on their rest-frame equivalent width is 2 Å. Note that these lines (and Hβ) are completely inaccessible from the ground, as they fall in between the J and H atmospheric windows. The average rest-frame equivalent width of Hβ, Hγ, and Hδ is 8 ± 1 Å, which implies that a post-starburst population dominates the rest-frame optical light. We fitted the spectrum with Bruzual & Charlot stellar population synthesis models. Good fits are obtained for populations with low star formation rates at the epoch of observation but relatively young luminosity-weighted ages (≈ 0.5 Gyr), combined with a moderate amount of dust (A_V ∼ 1). Adopting simple top-hat star formation histories, we find that the data can be fit with an extreme burst of ∼ 5000 M⊙ yr⁻¹ at z ∼ 2.2 (purple), or with a star formation rate of ∼ 500 M⊙ yr⁻¹ sustained over ∼ 1 Gyr (orange). In the latter model the star formation truncated only 150 Myr prior to the epoch of observation, comparable to the dynamical time at the distance of the companion galaxy. Models with less dust and higher luminosity-weighted ages do not fit the spectrum well, as they have stronger Ca H+K and weaker Balmer absorption than is observed; as an example, the red model in Fig. 3 has a luminosity-weighted age of 1 Gyr and A_V = 0.3 and is a poor fit to the spectrum. Scaling the models to the total magnitudes given in Wuyts et al., we find that the stellar mass of FW-4871 is (4 ± 1) × 10^11 M⊙ for a Chabrier IMF.
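The claim that the Balmer lines fall between the J and H atmospheric windows, yet comfortably inside the G141 bandpass, follows directly from λ_obs = λ_rest(1 + z). A minimal sketch using standard rest-frame wavelengths:

```python
# At z = 1.902 the Balmer absorption lines land inside the G141
# bandpass (roughly 1.1-1.7 micron), as stated in the text.
z = 1.902
balmer_rest = {"Hbeta": 4861.3, "Hgamma": 4340.5, "Hdelta": 4101.7}  # Angstrom

for name, lam_rest in balmer_rest.items():
    lam_obs_micron = lam_rest * (1.0 + z) / 1e4  # Angstrom -> micron
    in_g141 = 1.1 <= lam_obs_micron <= 1.7
    print(f"{name}: {lam_obs_micron:.3f} um (in G141: {in_g141})")
```

Observed Hβ lands near 1.41 µm, between the ground-based J and H windows, which is why these lines require space-based spectroscopy.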
We also extracted a spectrum of the companion galaxy from the grism data, even though it is quite faint at H = 22.4. As can be seen in Fig. 3, we clearly detect the continuum, thanks to the low background from space and the lack of sky lines. The galaxy has a continuum break in the same wavelength region as FW-4871 and shows oxygen lines and Hβ in emission. Its redshift of z = 1.898 ± 0.003 is consistent with that of FW-4871, demonstrating that the two are associated. Assuming that the Hβ emission is due to star formation, we derive a star formation rate of order 5-10 M⊙ yr⁻¹ (Kennicutt 1998, for A_V = 1-2 mag). Interestingly, the spectrum of the companion galaxy is redder than that of FW-4871, although this is difficult to quantify due to contamination of its spectrum from a nearby object. This may be caused by dust and/or the presence of an old stellar population.

4. STRUCTURE AND SPATIALLY-RESOLVED SPECTROSCOPY

As discussed in § 1, massive quiescent galaxies at z ∼ 2 typically have very small sizes. Despite its spiral arms this is also the case for FW-4871, as most of its light comes from a compact core. We quantified this by fitting Sersic models to the H160 image using galfit. Other objects in the field, including the companion galaxy, were masked in the fit. The fit and the residuals are shown in Fig. 4. The asymmetric spiral pattern is a striking feature in the residual image. The best-fit Sersic index is n = 3.7 ± 0.3 and the best-fit effective radius is r_e = 0.″25 ± 0.″03, corresponding to 2.1 ± 0.3 kpc. The formal errors are very small; the quoted uncertainties indicate the full range of solutions obtained when using different stars in the field as PSFs, but do not include other sources of systematic error.

(Fig. 4: Sersic fits to the H160 image of FW-4871, which was drizzled to a pixel scale of 0.″065. The galaxy image (a), the best-fitting model (b), and the residual (c) are shown. The 3D plots illustrate that most of the light is in a compact core. The residual image shows a regular two-armed spiral, which may have been induced by a tidal interaction.)

The S/N of the grism data is sufficiently high that we can compare the spectrum of the core to that at larger radii. As shown in Fig. 5, the average spectrum of the inner 4 pixels (r ≤ 0.″13) is similar to that at large radii (0.″13 < r < 0.″65), with the notable exception of Hβ: it is undetected away from the center, which implies that it is filled in by emission. We demonstrate this by subtracting the Bruzual & Charlot model shown in Fig. 3 from both the central spectrum and the outer spectrum. The spectrum of the inner parts shows no systematic residuals, but the spectrum away from the center shows a positive residual at the wavelength of Hβ. We infer that FW-4871 is not entirely "dead" but is forming stars in the spiral arms. The amount of star formation is difficult to quantify and depends on the assumed reddening; assuming E(B − V) ∼ 0.3 it is ∼ 20 M⊙ yr⁻¹.

(Fig. 5: Spatially-resolved Balmer lines. The red spectrum is for the central r < 0.″13 of FW-4871 (r < 1 kpc) and the blue spectrum is for radii 0.″13 < r < 0.″65. Residual spectra, obtained by subtracting the (light grey) model from the data, are also shown. At large radii Hβ is filled in by emission, possibly due to star formation associated with the spiral arms. The non-detection of the oxygen lines suggests a high metallicity for the gas in these regions.)

DISCUSSION

The WFC3 grism and imaging data of FW-4871 may provide "smoking gun" evidence for minor mergers as an important growth mechanism of massive galaxies: FW-4871 is a massive, compact galaxy at z ∼ 2 which is interacting with a ∼ 10× less massive companion. The quiescent spectrum of the primary galaxy is qualitatively consistent with the spectra of other compact high-redshift galaxies and with the old stellar ages of present-day early-type galaxies.
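The conversion of the best-fit effective radius (0.25 arcsec, quoted above as 2.1 kpc) can be reproduced from the adopted cosmology, H0 = 70 km s⁻¹ Mpc⁻¹, Ωm = 0.3, ΩΛ = 0.7. A stdlib-only sketch; the trapezoidal integration is an illustrative implementation choice:

```python
import math

# Reproduce the arcsec -> kpc conversion at z = 1.902 for the flat
# LCDM cosmology adopted in the text.
C_KMS, H0, OM, OL = 299792.458, 70.0, 0.3, 0.7

def angular_diameter_distance_mpc(z, steps=10000):
    """D_A = D_C / (1 + z) for a flat universe, via trapezoidal integration."""
    dz = z / steps
    integral = 0.0
    for i in range(steps + 1):
        e = math.sqrt(OM * (1 + i * dz) ** 3 + OL)  # E(z) = H(z)/H0
        weight = 0.5 if i in (0, steps) else 1.0
        integral += weight / e * dz
    d_c = C_KMS / H0 * integral  # comoving distance in Mpc
    return d_c / (1 + z)

ARCSEC_RAD = math.pi / (180 * 3600)
scale = angular_diameter_distance_mpc(1.902) * 1000 * ARCSEC_RAD  # kpc/arcsec
r_e_kpc = 0.25 * scale
print(f"scale = {scale:.2f} kpc/arcsec, r_e = {r_e_kpc:.1f} kpc")
```

The scale comes out near 8.4 kpc per arcsec at z = 1.902, so 0.25 arcsec corresponds to about 2.1 kpc, matching the quoted value.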
This mode of growth has been proposed by several recent studies to explain the size difference between massive galaxies at high redshift and low redshift. Nearby ellipticals have gradients in their color and metallicity, such that they are bluer and more metal-poor at larger radii (e.g., Franx, Illingworth, & Heckman 1989). Interestingly, we can begin to address the origin of these gradients with the kind of data that we are now getting from HST. The relatively strong oxygen lines and weak Hβ of the infalling galaxy imply log R23 ∼ 1, and a metallicity that is about 1/3 of the Solar value (Pilyugin & Thuan 2005). The spectrum extracted from the disk of FW-4871 has, by contrast, no detected oxygen lines and an unambiguous detection of Hβ. It has log R23 ≲ 0, which implies a Solar or super-Solar metallicity. Qualitatively these results are consistent with the idea that the metallicity gradients of elliptical galaxies reflect a gradual increase with radius in the fraction of stars that came from infalling low-mass satellites. The apparent absence of star formation in the central regions of FW-4871 might be related to its active nucleus. It has been suggested by many authors that AGN could prevent gas cooling and star formation, and in this context the observed properties of FW-4871, such as the lower limit on the ratio of its X-ray luminosity to [O III] and Hβ, may provide constraints on the mechanism(s) of AGN feedback (see also Kriek et al. 2009). In any case, the fact that both interacting galaxies host an AGN is remarkable, as it demonstrates that their black holes are undergoing a "growth spurt" prior to their merger. We note here that the only indication of the AGNs in the optical and near-IR is a faint emission line in the VIMOS spectra of FW-4871, which we now identify as C IV. There are several important caveats, uncertainties, and complications. First, FW-4871 is not only growing through the accretion of FW-4887, but also through star formation.
There is evidence for star formation in the companion (although its emission lines could be influenced by its active nucleus) and also in the spiral arms of FW-4871. In most models such "residual" star formation takes place in the center of the most massive galaxy, but that is in fact the only place where we do not see evidence for star formation. We note, however, that because of the large mass of FW-4871 the specific star formation rate of the entire system is low, at SFR/M_stellar ≲ 10⁻¹⁰ yr⁻¹. Second, although the spectrum of FW-4871 resembles those of the compact galaxies studied in Kriek et al. and van Dokkum et al., the galaxy formed its stars at significantly lower redshift. As shown in § 3, its star formation rate probably was ∼ 500 M⊙ yr⁻¹ as recently as 150 Myr prior to the epoch of observation, i.e., at z ≈ 2. It is therefore not a direct descendant of quiescent galaxies at z ∼ 2.3. Interestingly, star-forming galaxies at z > 2 are typically larger than FW-4871 in the rest-frame optical, which may imply that FW-4871 is unusual or that a significant fraction of the star formation in massive galaxies at z ∼ 2.5 takes place in heavily obscured, compact regions. Third, the fact that the time since the truncation of star formation is similar to the dynamical time calls into question whether we are witnessing a "two-stage" galaxy formation process, with steady accretion of satellite galaxies following an initial highly dissipational star formation phase. An alternative interpretation is that the companion galaxy is somehow related to the truncation, for example by triggering the AGN in FW-4871 ∼ 150 Myr ago. Numerical simulations that aim to reproduce both the 2D spectrum and the morphological features might shed some light on these issues. As illustrated in this Letter, the WFC3 camera on HST has opened up a new regime of detailed spectroscopic and imaging studies of high redshift galaxies.
The quality of the rest-frame optical continuum spectra shown in Fig. 3 greatly exceeds what can be achieved from the ground, and the grism provides simultaneous spectroscopy of all 200-300 objects with H ≲ 23 in the WFC3 field. Future WFC3 spectroscopic and imaging surveys over large areas have the potential to robustly measure the evolution of galaxies over the redshift range 1 < z < 3. We thank the WFC3 ERS team for their exciting program and Marijn Franx, Hans-Walter Rix, Mariska Kriek, Katherine Whitaker, and Anna Pasquali for comments.
|
Minocycline ameliorates cognitive impairment induced by whole-brain irradiation: an animal study.

BACKGROUND

It has long been recognized that cranial irradiation used for the treatment of primary and metastatic brain tumors often causes neurological side-effects such as intellectual impairment, memory loss and dementia, especially in pediatric patients. Our previous study demonstrated that whole-brain irradiation (WBI) can cause cognitive decline in rats. Minocycline is an antibiotic that has shown neuroprotective properties in a variety of experimental models of neurological diseases. However, whether minocycline can ameliorate cognitive impairment induced by ionizing radiation (IR) has not been tested. Thus this study aimed to demonstrate the potential implication of minocycline in the treatment of WBI-induced cognitive deficits by using a rat model.

METHODS

Sprague Dawley rats were cranially irradiated with electron beams delivered by a linear accelerator at a single dose of 20 Gy. Minocycline was administered via oral gavage directly into the stomach before and after irradiation. The open field test was used to assess the anxiety level of rats. The Morris water maze (MWM) was used to assess the spatial learning and memory of rats. The level of apoptosis in hippocampal neurons was measured using immunohistochemistry for caspase-3 together with markers for mature neurons (NeuN) or newborn neurons (doublecortin, DCX). Neurogenesis was determined by the BrdU incorporation method.

RESULTS

Neither WBI nor minocycline affected the locomotor activity or anxiety level of rats. However, compared with the sham-irradiated controls, WBI caused a significant loss of learning and memory, manifested as a longer latency to reach the hidden platform in the MWM task. Minocycline intervention significantly improved the memory retention of irradiated rats.
Although minocycline did not rescue the neurogenesis deficit caused by WBI 2 months post-IR, it did significantly decrease WBI-induced apoptosis in the DCX-positive neurons, thereby resulting in less newborn-neuron depletion 12 h after irradiation.

CONCLUSIONS

Minocycline significantly inhibits WBI-induced neuronal apoptosis, leading to less newborn-neuron loss shortly after irradiation. In the long run, minocycline improves the cognitive performance of rats post-WBI. The results indicate a potential clinical implication of minocycline as an effective adjunct in radiotherapy for brain tumor patients.

Background

As an important treatment modality for primary and metastatic brain tumors, cranial irradiation often causes neurological side-effects such as intellectual impairment, memory loss and dementia, especially in pediatric patients. The cognitive decline has been suggested to be due to radiation-induced deficits in the hippocampal-dependent functions of learning, memory and spatial information processing. Although the mechanisms underlying radiation-induced cognitive impairment remain to be elucidated, studies using radiation-induced learning and memory deficit animal models have shown that the decline of hippocampal-dependent functions is generally accompanied by hippocampal apoptosis, decreased hippocampal proliferation, reduced neurogenesis, and marked alteration in the neurogenic microenvironment. In an attempt to alleviate the neurotoxicity of radiotherapy for brain tumor patients and improve their quality of life after treatment, intense effort is being made to develop methods that can attenuate radiation-induced cognitive impairment. For example, exercise, transplantation of human fetal-derived neural stem cells, and some pharmacological agents such as lithium compounds have been shown to improve cognitive function post-irradiation.
Minocycline, a clinically available antibiotic, has been demonstrated to be neuroprotective in animal models of several acute central nervous system (CNS) injuries and neurodegenerative diseases. In the present study, we tested whether minocycline could inhibit radiation-induced cognitive decline. We found that minocycline intervention significantly attenuated the learning and memory loss caused by whole-brain irradiation (WBI) 2 months post-irradiation. Our short-term study showed that minocycline significantly prevented hippocampal neurons, especially DCX+ neurons, from WBI-induced apoptosis 6 hours post-irradiation, thus leading to less newborn-neuron loss. However, minocycline had no effect on the neurogenesis deficit 2 months after WBI. The results indicate a potential implication for minocycline in ameliorating radiation-induced cognitive dysfunction.

Animals and experimental groups

All animal procedures were carried out in accordance with Soochow University Medical Experimental Animal Care Guidelines based on the National Animal Ethical Policies. One-month-old male Sprague Dawley rats weighing 90-110 g (obtained from the Experimental Animal Center of Soochow University) were used as described previously. The animals were housed in cages at an ambient temperature of 22 ± 1°C with a 12-h light-dark cycle. Pelleted rat chow and tap water were available ad libitum. Rats were randomly allocated into six groups: untreated control (CN), minocycline (CM), sham control (SCN), sham minocycline (SCM), radiation (RN) and minocycline plus radiation (RM) (n = 18/group, except the SCN and SCM groups, which had n = 12/group). The CN group or the CM group received only saline or minocycline, respectively; the SCN group or the SCM group were subjected to the radiation procedure with 0 Gy in addition to receiving saline or minocycline. The CN, CM, SCN and SCM groups are referred to as the control groups in the text.
The RN group or the RM group were subjected to WBI in addition to treatment with either saline or minocycline.

Minocycline treatment

One day before radiation, rats received either a total dose of 90 mg/kg of clinical grade minocycline (100 mg/capsule, Huishi Pharmaceutical Ltd. Co., P. R. China) dissolved in saline in 2 divided doses, or the same volume of physiological saline alone (vehicle, 4.5 ml/kg), via oral gavage directly into the stomach. After irradiation, animals were administered either saline or minocycline twice daily (45 mg/kg/d) for 2 months.

Irradiation

Prior to irradiation, animals were anesthetized with 3.6% chloral hydrate (360 mg/kg, i.p.). Then WBI was performed using 4-MeV electron beams delivered by a linear accelerator (Philips SL-18, Philips, UK) at room temperature (RT), as described previously. Briefly, a 20 × 20 cm lead shielding block with 10 holes specifically cut for WBI was used. The size of each hole was ~3.5 × 2.0 cm (length × width). One hole was for one rat brain. The other parts of the rat's body were shielded with the lead block. For irradiated rats, a single WBI dose of 20 Gy was given at a dose rate of 210-220 cGy/min.

General observation and body weight gain

After irradiation, rats were monitored for their motor activity, feeding and drinking behavior, as well as side effects such as nausea, ataxia, and topical skin reactions. Their body weights were recorded biweekly. All these observations were recorded and used as parameters of general changes after treatment.

Open field test

Since the level of anxiety could affect performance in learning and memory tests, the open field test was used to assess the anxiety level of all groups as described previously. Briefly, three days after the 2-month minocycline intervention, the open field test was performed in a silent, dimly lit room.
The open field used in this study, a square soundproof chamber (410 × 410 × 505 mm), was made of Plexiglas so that rats were visible from outside the chamber. The floor of the chamber was divided into 25 8 × 8 cm squares, and the area containing the 9 squares in the middle was called the central region. A video camera was placed above the center of the open field. The rats were placed in the central region and allowed to move freely and to explore the environment for 10 min. The movements were recorded by a computer-aided video-tracking system (Jiliang, Shanghai, China). The total distance rats traveled around the open field area and the time they spent in the central region of the open field were analyzed. After each individual test, the apparatus was cleaned thoroughly with 10% ethanol to remove any olfactory cues.

Morris water maze (MWM)

The MWM is generally used to assess the spatial learning and memory of rats. The experimental apparatus, a circular water tank (160 cm diameter, Jiliang, Shanghai, China) filled with opaque water (22 ± 1°C) and containing a hidden submerged platform (9 cm diameter) in the center of the target quadrant, was used to assess the ability of a rat to locate the platform. The rats were placed into each of the 4 quadrants of the pool for 60 s, and were trained to locate the submerged hidden platform. They had to find the platform using only the distal spatial cues available in the testing room. When they failed, the rats were gently placed on the platform for 10 s. The time rats spent searching for and mounting the platform (i.e. latency) and their swim speeds were recorded by the video-tracking system (Jiliang, Shanghai, China). After 4 days of place navigation tests, a 30-s spatial probe test (with the submerged platform removed) was performed on day 5. The time rats spent crossing the target quadrant and all four quadrants was recorded for 30 s by the tracking system.
Assessment of neurogenesis by BrdU incorporation

Four weeks after sham or WBI treatment, the rats were injected intraperitoneally with a dose of BrdU (50 mg/kg, Sigma, St Louis, MO, USA) daily for 6 consecutive days. Three weeks after the last dose of BrdU, rats were sacrificed and the brain tissues were harvested and processed as described below for analysis of neurogenesis.

Tissue preparation and immunohistochemistry

Rats were sacrificed at 3, 6 or 12 h after irradiation for apoptosis measurement, or 2 months post-irradiation for neurogenesis studies. To remove the brains, anesthetized rats were transcardially perfused with PBS followed by decapitation, and the brains were placed in 10% paraformaldehyde solution for 24 h. A single 5-mm-thick section containing the hippocampus was dissected and paraffin-embedded; the brain levels were approximately 125-150 μm apart as previously described. At least ten non-overlapping coronal sections (4 μm) were cut from three different brain levels using an RM 2135 microtome (Leica, Germany) and mounted on poly-L-lysine-coated slides. The tissue sections were processed as previously described and double stained with mouse monoclonal anti-NeuN (specific to human, mouse, rat and chicken NeuN, 1:100, Millipore, USA) or goat anti-DCX antibodies (specific to mouse, rat and human DCX, 1:50, Santa Cruz, USA) and rabbit polyclonal anti-active caspase-3 antibodies (specific to mouse, rat, human and quail active caspase-3, 1:15, Abcam, UK). After incubation with primary antibodies, the sections were washed and incubated sequentially with their respective secondary antibodies. For analysis of neurogenesis in the SGZ, the sections were dual-immunostained with NeuN and BrdU (1:120, Abcam, UK) antibodies. The sections were then washed and sequentially incubated with the secondary antibody for NeuN and Alexa Fluor® 488 donkey anti-rat secondary antibody (1:200, Invitrogen, USA) for BrdU for 1.5 hours at RT.
After washing, the sections were also stained with DAPI. Cell counts were limited to the GCL and a 50-μm border along the hilar margin that included the SGZ. All samples were scored blindly. For each animal, five to six sections from three regions of the hippocampus were analyzed. The total number of positively labeled cells was determined by adding up the numbers of positive cells in both dentate gyri from all analyzed sections of the same rat. All immunofluorescent images were captured using a Nikon confocal fluorescence microscope (A1, Japan).

Statistics

The results were expressed as mean ± SEM. The cognitive study data were analyzed via one-way ANOVA followed by a Tukey post hoc test for multiple comparisons using OriginPro software (v8.0), and the immunohistochemical study data were analyzed using a two-sample t-test. P < 0.05 between groups was considered significantly different.

General observation and body weight gain

All rats receiving WBI survived for at least two months and showed normal motor activity and feeding and drinking behavior. A few irradiated rats showed mild local skin reactions and depilation at the irradiated spot from 2 to 6 weeks after WBI, but no irradiated rats had radiation sickness symptoms such as nausea, ataxia or edema. Moreover, all groups showed steady body weight gain within two months after radiation. There was no statistically significant difference in body weight among the six groups during the observation period (P > 0.05, Figure 1), suggesting that anesthesia, minocycline and radiation did not affect the growth of rats.

Minocycline ameliorated radiation-induced cognitive impairment

The open field test showed no significant difference in the distance rats traveled in the central region or the total time of activity among the six groups (P > 0.05) (Figure 2A, B), indicating that anesthesia, WBI and minocycline had no effect on the locomotor activity and anxiety level of rats.
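The group comparisons in this study rest on a one-way ANOVA (run in OriginPro by the authors). As an illustrative sketch, the F statistic that test computes can be reproduced in a few lines of plain Python; the latency values below are hypothetical example data, not the study's measurements:

```python
def one_way_anova_f(groups):
    """Return the one-way ANOVA F statistic for a list of samples."""
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total number of observations
    grand_mean = sum(sum(g) for g in groups) / float(n)
    # Between-group sum of squares: spread of group means around the grand mean
    ss_between = sum(len(g) * (sum(g) / float(len(g)) - grand_mean) ** 2
                     for g in groups)
    # Within-group sum of squares: spread of observations around their group mean
    ss_within = sum(sum((x - sum(g) / float(len(g))) ** 2 for x in g)
                    for g in groups)
    ms_between = ss_between / (k - 1)    # mean square between (df = k - 1)
    ms_within = ss_within / (n - k)      # mean square within (df = n - k)
    return ms_between / ms_within

# Hypothetical escape latencies (s) for three groups of rats
f_stat = one_way_anova_f([[30.0, 32.0, 31.0],
                          [45.0, 47.0, 46.0],
                          [31.0, 33.0, 32.0]])
```

A large F (judged against the F distribution with k-1 and n-k degrees of freedom) corresponds to a small P value; the Tukey post hoc step then localizes which pairs of groups differ.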
In the MWM test, no significant difference in swimming speed among the six groups was observed (P > 0.05) (Figure 2C). In the place navigation test, compared with the SCN group, rats in the RN group showed longer latency, i.e. the time needed to reach the hidden platform, after irradiation (P = 0.027). Minocycline alone did not have any effect on latency (P > 0.05). However, minocycline intervention significantly decreased the latency of the radiation group compared with WBI alone (P = 0.026) (Figure 2D). The spatial probe test showed no effect of radiation or minocycline treatment on the percentage of target-quadrant exploring time (P > 0.05) (Figure 2E). These results suggested that minocycline intervention significantly improved the loss of learning and memory ability caused by WBI, and that the protective effect was not due to factors such as motor activity or anxiety status.

Minocycline did not improve the hippocampal neurogenesis deficit 2 months post-irradiation

The total number of BrdU-positive cells in the SGZ was not statistically different among the four control groups, i.e. the CN, CM, SCN and SCM groups (P > 0.05) (Figure 3A), suggesting that anesthesia and minocycline did not affect hippocampal neurogenesis. However, two months after receiving a single WBI dose of 20 Gy, the RN group showed a 90% decline in the number of BrdU+ cells (P < 0.01), and minocycline intervention did not improve this decline (P > 0.05) (Figure 3A). We also found that irradiation decreased the number of BrdU+/NeuN+ mature neurons by 97% (P < 0.01); minocycline intervention slightly increased that number (4.7 ± 0.3 for the RN group vs 6.7 ± 0.7 for the RM group), but the difference did not reach statistical significance (P = 0.055) (Figure 3B, C). The results suggest that minocycline did not have any protective effect on the neurogenesis and neuronal differentiation deficits induced by WBI.
Minocycline decreased radiation-induced apoptosis in neurons shortly after WBI

We found that radiation increased the number of NeuN+ neurons with activated caspase-3, an established apoptosis marker, in the dentate GCL at 3 and 6 h post-irradiation compared with the control groups (the four control groups, i.e. the CN, CM, SCN and SCM groups, showed similar caspase-3 levels; data not shown), with statistical significance only at 3 h (P = 0.029) but not 6 h (P = 0.065) post WBI. By 12 h after WBI, the number of NeuN+ neurons with activated caspase-3 in the RN group was back to the level of the control groups (Figure 4A). Moreover, minocycline intervention decreased the numbers of NeuN+ neurons with activated caspase-3 at 3 and 6 h post-irradiation in the RM group, but did not achieve a significant difference compared with the RN group (P = 0.24, 0.33) (Figure 4A), suggesting that minocycline did not have a strong protective effect against radiation-induced apoptosis in NeuN+ mature neurons in the dentate GCL. In contrast to the fewer apoptotic neurons in the dentate GCL post WBI, radiation resulted in a significant increase in apoptosis in the dentate SGZ at 3 and 6 h post-irradiation in the RN group compared with the control groups (P < 0.001) (Figure 5A). Similar to the dentate GCL, the four control groups, i.e. the CN, CM, SCN and SCM groups, showed similar caspase-3 levels in the dentate SGZ (data not shown). Minocycline intervention reduced the apoptosis level in the dentate SGZ by 71% at 6 h post-irradiation in the RM group (P < 0.001), but did not significantly inhibit radiation-induced apoptosis at 3 h post-irradiation (P = 0.48, RM group vs RN group) (Figure 5A).

Figure 1: Body weight gain in all groups of rats within two months after WBI. There was no significant difference in body weights among the six groups (P > 0.05). The number of rats: n = 18/group, except the SCN and SCM groups, which had n = 12/group.
These results suggested that minocycline protected the neurons in the SGZ from radiation-induced apoptosis. To determine whether the observed protective effect of minocycline intervention on the neurons in the SGZ was ascribed to its protective effect on newborn neurons, double staining of both DCX (an immature neuron marker) and activated caspase-3 was performed. As shown in Figure 6, apoptotic DCX+ neurons in the SGZ occurred rarely in the control groups (Figure 6A, C), with no difference among the four control groups (data not shown). However, WBI induced a significant increase in apoptosis of DCX+ neurons in the SGZ; the apoptosis level appeared to peak (521 ± 51.1 caspase-3+ cells) at 3 h and returned to the control level at 12 h post-irradiation (Figure 6A). Minocycline appeared to slightly decrease the apoptosis level at 3 h after irradiation (P = 0.24), but significantly reduced the apoptosis level by 75% at 6 h post-irradiation compared with the RN group (P < 0.001) (Figure 6A). The effect of minocycline on radiation-induced apoptosis in DCX+ neurons in the SGZ showed a pattern similar to its effect on apoptosis in the SGZ overall, suggesting that the protection of DCX+ neurons from radiation-induced apoptosis contributed to its protective effect on the SGZ. DCX+ neurons were present in large numbers in the SGZ, averaging 1583 ± 63 DCX+ neurons in sham-irradiated animals. Irradiation significantly reduced the number of DCX+ neurons in the SGZ by 47%, 72% and 85% at 3, 6 and 12 h post-IR, respectively (P < 0.001), and minocycline treatment caused a recovery in the number of DCX-positive cells by 28.4% (P = 0.007) at 3 h after irradiation (Figure 6B). The recovery was not observed at 6 h post-irradiation.
By 12 h after radiation, however, the number of DCX+ neurons in the SGZ in the RM group was 91% greater than that in the RN group (P = 0.009) (Figure 6B), suggesting that minocycline intervention could facilitate the preservation of DCX+ neurons in the SGZ post-irradiation.

Discussion

Cranial radiation therapy often causes neurological side effects including cognitive impairment. Our study has demonstrated for the first time that minocycline, a clinically available antibiotic, can significantly improve the learning and memory loss in rats caused by WBI. Further studies show that minocycline intervention does not have any protective effect on the neurogenesis deficit 2 months post-irradiation. However, we found that minocycline can protect the newborn and immature neurons in the dentate SGZ from radiation-induced apoptosis, thus resulting in less newborn neuron depletion shortly after WBI. The MWM has been used to reveal a severe spatial navigation deficit in adult rats that received a single high dose of X-rays (8-9 Gy) shortly after birth. Using the same assay, our previous study also showed a significant cognition decline in rats exposed to X-irradiation at one month old. In the present study, using the MWM we found that minocycline intervention significantly attenuated cognitive decline in irradiated rats (Figure 2D). Since the learning and memory performance of rats could be affected by their anxiety and motor activity, we also measured their levels of anxiety and swimming speeds, and found that neither irradiation nor minocycline affected them. Thus we could rule out the contribution of anxiety and motor activity to WBI-induced cognitive decline and to the protective effect of minocycline in rats. The mechanisms underlying the decline of cognitive function are unclear, although accumulated evidence suggests that reduced hippocampal neurogenesis adversely affects memory formation.
Consistent with previous reports, our results demonstrated that radiation-induced learning and memory deficit was accompanied by a significant decline in neurogenesis. However, no protective effect of minocycline on neurogenesis was observed, despite its recovery effect on the learning and memory performance of irradiated rats. The effect of minocycline on neurogenesis seems somewhat controversial. The study from Kohman et al. showed that minocycline may recover some aspects of cognitive decline associated with aging, but the effect appears to be unrelated to adult hippocampal neurogenesis. Ng et al. found that despite the attenuation of activated microglia, minocycline does not support neurogenesis in the hippocampus. However, Mattei et al. recently reported that minocycline rescues decreased neurogenesis in an animal model of schizophrenia. Moreover, we found that WBI-induced cognitive impairment was accompanied by severe neuron apoptosis, especially apoptosis of the newborn and immature neurons in the dentate SGZ, which agrees with previous studies. Minocycline did not have strong protective effects against radiation-induced apoptosis in mature neurons in the dentate GCL (Figure 4A). In contrast, minocycline protected the newborn and immature neurons in the dentate SGZ from radiation-induced apoptosis (Figures 5A and 6A), thereby resulting in less newborn neuron depletion 12 h after radiation (Figure 6B).

Figure 4: Radiation-induced apoptosis in the dentate GCL. (A) The numbers of NeuN+/caspase-3+ cells in the dentate GCL in irradiated rats at different times after irradiation. * P < 0.05, compared with the control groups. (B) In situ immunohistochemistry images of the dentate GCL 3 h after WBI. Cell markers are: NeuN (a nuclear antigen in mature neurons, red), caspase-3 (marker for apoptotic cells, green) and DAPI (marker for nuclei, blue). The number of rats: n = 3-4/group.
This is similar to a previous report that pretreatment with minocycline mitigated isoflurane-induced cognitive deficits and suppressed the isoflurane-induced caspase-3 activation and apoptosis in the hippocampus 4 h after isoflurane exposure. The neuroprotective properties of minocycline have been suggested to be due to its direct antioxidant activity, which is as good as that of vitamin E. It has been shown that ionizing radiation can induce caspase-3-dependent apoptosis through generation of reactive oxygen species (ROS) in neural stem cells. Thus an antioxidant like minocycline could inhibit caspase-3-dependent apoptosis by scavenging ROS. Although Mizumatsu et al. showed that acute dose-related changes in SGZ precursor cells qualitatively correlate with later decreases in new neuron production after radiation, and suggested that the precursor cell radiation response may play a contributory if not causative role in radiation-induced cognitive impairment, linking radiation-induced acute damage in the hippocampus at early times to the progressive cognitive impairment at late times after radiation is still difficult. Therefore, despite our results, at this time we cannot conclude that the recovery effect of minocycline on the decline of cognitive function of rats 2 months post-irradiation was due to its protective effect on neurons from radiation-induced apoptosis shortly after WBI. It has been suggested that the alleviating effect of minocycline on long-term spatial memory impairment in aged mice was associated with the inhibition of astrocytic activation.

Figure 5: Radiation-induced apoptosis in the dentate SGZ. (A) The total numbers of caspase-3+ cells in the dentate SGZ in irradiated rats at different times after irradiation. * P < 0.05, compared with the RN group. (B) In situ immunohistochemistry images of the dentate SGZ 6 h after WBI. Cell markers are: NeuN (red), caspase-3 (green) and DAPI (blue). The number of rats: n = 3-4/group.
In addition, minocycline was found to reduce astrocytic reactivation and neuroinflammation in the hippocampus of a vascular cognitive impairment rat model. Therefore, whether the mechanisms underlying the protective effects of minocycline on radiation-induced cognitive impairment involve inhibition of astrocytic activation and neuroinflammation needs to be elucidated in future studies.

Conclusions

In summary, we have found that minocycline, a clinically available antibiotic, does not affect normal growth, significantly attenuates irradiation-induced cognitive impairment, and protects newborn neurons from radiation-induced apoptosis, leading to less new neuron loss. The results indicate a potential clinical implication of minocycline as an effective adjunct in radiotherapy for brain tumor patients.

Figure 6: Minocycline inhibited radiation-induced apoptosis in newborn neurons and decreased the depletion of the total number of newborn neurons. (A) The numbers of DCX+/caspase-3+ cells in the dentate SGZ in irradiated rats at different times after irradiation. * P < 0.05, RN vs RM. (B) Quantification of the total numbers of DCX+ cells in the SGZ in irradiated rats at different times after irradiation. * P < 0.05, RN vs RM. (C) In situ immunohistochemistry images of the dentate SGZ 6 h after WBI. Cell markers are: DCX (a nuclear antigen in new neurons, red), caspase-3 (green) and DAPI (blue). The number of rats: n = 3-4/group.
Book reviews: Parker, D.J. and Penning-Rowsell, E.C. 1980: Water planning in Britain. London: George Allen and Unwin. xx + 278 pp. £7.95

resolution of conflicts between various groups; those with more political leverage, economic strength and technical expertise gained; the rest lost. Is this inevitably the case? The failures of the Volta River Project can largely be attributed to the failures of politicians, and there are no good grounds for supposing that they learn from the mistakes of their predecessors. It is comforting to find that at least some engineers are concerned with the effects of their structures on environment and society, and that we can look forward to other analyses of major river projects by the School of Engineering and the Science Studies Unit of Edinburgh University. Engineering technology is well understood; the social and political management of river control works is, in comparison, at an elementary level. The need in the future is to consider first the people in the locality for whom development is intended and ensure that they benefit and
#-*- coding: utf-8 -*-
import io
import itertools
import json
import pandas as pd
import numpy as np
import quantipy as qp
import copy
import time
import sys
from link import Link
from chain import Chain
from view import View
from helpers import functions
from view_generators.view_mapper import ViewMapper
from view_generators.view_maps import QuantipyViews
from quantipy.core.tools.qp_decorators import modify
from quantipy.core.tools.dp.spss.reader import parse_sav_file
from quantipy.core.tools.dp.io import unicoder, write_quantipy
from quantipy.core.tools.dp.prep import frequency, verify_test_results
from cache import Cache
import itertools
from collections import defaultdict, OrderedDict
# Pickle modules
import cPickle
# Compression methods
import gzip
from quantipy.sandbox.sandbox import Chain as NewChain
class Stack(defaultdict):
"""
Container of quantipy.Link objects holding View objects.
A Stack is a nested dictionary that structures the data and variable
relationships, storing all View aggregations performed.
"""
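# Illustrative usage (a hypothetical sketch -- 'df' and 'meta' stand in for
# a real case-data pandas.DataFrame and its quantipy metadata dict):
#
#     stack = Stack(name='example', add_data={'survey': (df, meta)})
#     stack.add_link(x=['q1'], y=['gender'], views=['cbase', 'counts'])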
def __init__(self,
name="",
add_data=None):
super(Stack, self).__init__(Stack)
self.name = name
self.key = None
self.parent = None
# This is the root of the stack
# It is used by the get/set methods to determine
# WHERE in the stack those methods are.
self.stack_pos = "stack_root"
self.x_variables = None
self.y_variables = None
self.__view_keys = []
if add_data:
for key in add_data:
if isinstance(add_data[key], dict):
self.add_data(
data_key=key,
data=add_data[key].get('data', None),
meta=add_data[key].get('meta', None)
)
elif isinstance(add_data[key], tuple):
self.add_data(
data_key=key,
data=add_data[key][0],
meta=add_data[key][1]
)
else:
raise TypeError(
"All data_key values must be one of the following types: "
"<dict> or <tuple>. "
"Given: %s" % (type(add_data[key]))
)
def __setstate__(self, attr_dict):
self.__dict__.update(attr_dict)
def __reduce__(self):
arguments = (self.name, )
state = self.__dict__.copy()
if 'cache' in state:
state.pop('cache')
state['cache'] = Cache() # Empty the cache for storage
return self.__class__, arguments, state, None, self.iteritems()
def __setitem__(self, key, val):
""" The 'set' method for the Stack(dict)
It 'sets' the value in its correct place in the Stack
AND applies a 'stack_pos' value depending on WHERE in
the stack the value is being placed.
"""
super(Stack, self).__setitem__(key, val)
# The 'meta' portion of the stack is a standard dict (not Stack)
try:
if isinstance(val, Stack) and val.stack_pos == "stack_root":
val.parent = self
val.key = key
# This needs to be compacted and simplified.
if self.stack_pos == "stack_root":
val.stack_pos = "data_root"
elif self.stack_pos == "data_root":
val.stack_pos = "filter"
elif self.stack_pos == "filter":
val.stack_pos = "x"
except AttributeError:
pass
def __getitem__(self, key):
""" The 'get' method for the Stack(dict)
The method 'gets' a value from the stack. If 'stack_pos' is 'y'
AND the value isn't a Link instance THEN it tries to query the
stack again with the x/y variables swapped and IF that yields
a result that is a Link object THEN it sets a 'transpose' variable
as True in the result and the result is transposed.
"""
val = defaultdict.__getitem__(self, key)
return val
def add_data(self, data_key, data=None, meta=None, ):
"""
Sets the data_key into the stack, optionally mapping data sources to it.
It is possible to handle the mapping of data sources in different ways:
* no meta or data (for proxy links not connected to source data)
* meta only (for proxy links with supporting meta)
* data only (meta will be inferred if possible)
* data and meta
Parameters
----------
data_key : str
The reference name for a data source connected to the Stack.
data : pandas.DataFrame
The input (case) data source.
meta : dict or OrderedDict
A quantipy compatible metadata source that describes the case data.
Returns
-------
None
"""
self._verify_key_types(name='data', keys=data_key)
if data_key in self.keys():
warning_msg = "You have overwritten data/meta for key: ['%s']."
print warning_msg % (data_key)
if data is not None:
if isinstance(data, pd.DataFrame):
if meta is None:
# To do: infer meta from DataFrame
meta = {'info': None, 'lib': None, 'sets': None,
'columns': None, 'masks': None}
# Add a special column of 1s
data['@1'] = np.ones(len(data.index))
data.index = list(xrange(0, len(data.index)))
else:
raise TypeError(
"The 'data' given to Stack.add_data() must be one of the following types: "
"<pandas.DataFrame>"
)
if not meta is None:
if isinstance(meta, (dict, OrderedDict)):
# To do: verify incoming meta
pass
else:
raise TypeError(
"The 'meta' given to Stack.add_data() must be one of the following types: "
"<dict>, <collections.OrderedDict>."
)
# Add the data key to the stack
# self[data_key] = {}
# Add the meta and data to the data_key position in the stack
self[data_key].meta = meta
self[data_key].data = data
self[data_key].cache = Cache()
self[data_key]['no_filter'].data = self[data_key].data
def remove_data(self, data_keys):
"""
Deletes the data_key(s) and associated data specified in the Stack.
Parameters
----------
data_keys : str or list of str
The data keys to remove.
Returns
-------
None
"""
self._verify_key_types(name='data', keys=data_keys)
if isinstance(data_keys, (str, unicode)):
data_keys = [data_keys]
for data_key in data_keys:
del self[data_key]
def variable_types(self, data_key, only_type=None, verbose=True):
"""
Group variables by data types found in the meta.
Parameters
----------
data_key : str
The reference name of a case data source held by the Stack instance.
only_type : {'int', 'float', 'single', 'delimited set', 'string',
'date', time', 'array'}, optional
Will restrict the output to the given data type.
Returns
-------
types : dict or list of str
A summary of variable names mapped to their data types, in form of
{type_name: [variable names]} or a list of variable names
conforming to only_type.
"""
if self[data_key].meta['columns'] is None:
return 'No meta attached to data_key: %s' %(data_key)
else:
types = {
'int': [],
'float': [],
'single': [],
'delimited set': [],
'string': [],
'date': [],
'time': [],
'array': []
}
not_found = []
for col in self[data_key].data.columns:
if not col in ['@1', 'id_L1', 'id_L1.1']:
try:
types[
self[data_key].meta['columns'][col]['type']
].append(col)
except:
not_found.append(col)
for mask in self[data_key].meta['masks'].keys():
types[self[data_key].meta['masks'][mask]['type']].append(mask)
if not_found and verbose:
print '%s not found in meta file. Ignored.' %(not_found)
if only_type:
return types[only_type]
else:
return types
def get_chain(self, *args, **kwargs):
if qp.OPTIONS['new_chains']:
chain = NewChain(self, name=None)
chain = chain.get(*args, **kwargs)
return chain
else:
def _get_chain(name=None, data_keys=None, filters=None, x=None, y=None,
views=None, orient_on=None, select=None,
rules=False, rules_weight=None):
"""
Construct a "chain" shaped subset of Links and their Views from the Stack.
A chain is a one-to-one or one-to-many relation with an orientation that
defines from which axis (x or y) it is built.
Parameters
----------
name : str, optional
If not provided the name of the chain is generated automatically.
data_keys, filters, x, y, views : str or list of str
Views will be added reflecting the order in ``views`` parameter. If
both ``x`` and ``y`` have multiple items, you must specify the
``orient_on`` parameter.
orient_on : {'x', 'y'}, optional
Must be specified if both ``x`` and ``y`` are lists of multiple
items.
select : tbc.
:TODO: document this!
Returns
-------
chain : Chain object instance
"""
#Make sure all the given keys are in lists
data_keys = self._force_key_as_list(data_keys)
# filters = self._force_key_as_list(filters)
views = self._force_key_as_list(views)
#Make sure all the given keys are in lists
x = self._force_key_as_list(x)
y = self._force_key_as_list(y)
if orient_on is None:
if len(x)==1:
orientation = 'x'
elif len(y)==1:
orientation = 'y'
else:
orientation = 'x'
else:
orientation = orient_on
described = self.describe()
if isinstance(rules, bool):
if rules:
rules = ['x', 'y']
else:
rules = []
if orient_on:
if x is None:
x = described['x'].drop_duplicates().values.tolist()
if y is None:
y = described['y'].drop_duplicates().values.tolist()
if views is None:
views = self._Stack__view_keys
views = [v for v in views if '|default|' not in v]
chains = self.__get_chains(
name=name,
data_keys=data_keys,
filters=filters,
x=x,
y=y,
views=views,
orientation=orient_on,
select=select,
rules=rules,
rules_weight=rules_weight)
return chains
else:
chain = Chain(name)
found_views = []
#Make sure all the given keys are in lists
x = self._force_key_as_list(x)
y = self._force_key_as_list(y)
if data_keys is None:
# Apply lazy data_keys if none given
data_keys = self.keys()
the_filter = "no_filter" if filters is None else filters
if self.__has_list(data_keys):
for key in data_keys:
# Use describe method to get x keys if not supplied.
if x is None:
x_keys = described['x'].drop_duplicates().values.tolist()
else:
x_keys = x
# Use describe method to get y keys if not supplied.
if y is None:
y_keys = described['y'].drop_duplicates().values.tolist()
else:
y_keys = y
# Use describe method to get view keys if not supplied.
if views is None:
v_keys = described['view'].drop_duplicates().values.tolist()
v_keys = [v_key for v_key in v_keys if '|default|'
not in v_key]
else:
v_keys = views
chain._derive_attributes(
key, the_filter, x_keys, y_keys, views, orientation=orientation)
# Apply lazy name if none given
if name is None:
chain._lazy_name()
for x_key in x_keys:
self._verify_key_exists(
x_key,
stack_path=[key, the_filter]
)
for y_key in y_keys:
self._verify_key_exists(
y_key,
stack_path=[key, the_filter, x_key])
try:
base_text = self[key].meta['columns'][x_key]['properties']['base_text']
if isinstance(base_text, (str, unicode)):
if base_text.startswith(('Base:', 'Bas:')):
base_text = base_text.split(':')[-1].lstrip()
elif isinstance(base_text, dict):
for text_key in base_text.keys():
if base_text[text_key].startswith(('Base:', 'Bas:')):
base_text[text_key] = base_text[text_key].split(':')[-1].lstrip()
chain.base_text = base_text
except:
pass
if views is None:
chain[key][the_filter][x_key][y_key] = self[key][the_filter][x_key][y_key]
else:
stack_link = self[key][the_filter][x_key][y_key]
link_keys = stack_link.keys()
chain_link = {}
chain_view_keys = [k for k in views if k in link_keys]
for vk in chain_view_keys:
stack_view = stack_link[vk]
# Get view dataframe
rules_x_slicer = self.axis_slicer_from_vartype(
rules, 'x', key, the_filter, x_key, y_key, rules_weight)
rules_y_slicer = self.axis_slicer_from_vartype(
rules, 'y', key, the_filter, x_key, y_key, rules_weight)
if rules_x_slicer is None and rules_y_slicer is None:
# No rules to apply
view_df = stack_view.dataframe
else:
# Apply rules
viable_axes = functions.rule_viable_axes(self[key].meta, vk, x_key, y_key)
transposed_array_sum = x_key == '@' and y_key in self[key].meta['masks']
if not viable_axes:
# Axes are not viable for rules application
view_df = stack_view.dataframe
else:
view_df = stack_view.dataframe.copy()
if 'x' in viable_axes and not rules_x_slicer is None:
# Apply x-rules
rule_codes = set(rules_x_slicer)
view_codes = set(view_df.index.tolist())
if not rule_codes - view_codes:
view_df = view_df.loc[rules_x_slicer]
if 'x' in viable_axes and transposed_array_sum and rules_y_slicer:
view_df = view_df.loc[rules_y_slicer]
if 'y' in viable_axes and not rules_y_slicer is None:
# Apply y-rules
view_df = view_df[rules_y_slicer]
if vk.split('|')[1].startswith('t.'):
view_df = verify_test_results(view_df)
chain_view = View(
link=stack_link,
name = stack_view.name,
kwargs=stack_view._kwargs)
chain_view._notation = vk
chain_view.grp_text_map = stack_view.grp_text_map
chain_view.dataframe = view_df
chain_view._custom_txt = stack_view._custom_txt
chain_view.add_base_text = stack_view.add_base_text
chain_link[vk] = chain_view
if vk not in found_views:
found_views.append(vk)
chain[key][the_filter][x_key][y_key] = chain_link
else:
raise ValueError(
"One or more of your data_keys ({data_keys}) is not"
" in the stack ({stack_keys})".format(
data_keys=data_keys,
stack_keys=self.keys()
)
)
# Make sure chain.views only contains views that actually exist
# in the chain
if found_views:
chain.views = [
view
for view in chain.views
if view in found_views]
return chain
return _get_chain(*args, **kwargs)
def reduce(self, data_keys=None, filters=None, x=None, y=None, variables=None, views=None):
'''
Remove keys from the matching levels, erasing discrete Stack portions.
Parameters
----------
data_keys, filters, x, y, views : str or list of str
Returns
-------
None
'''
# Ensure given keys are all valid types
self._verify_multiple_key_types(
data_keys=data_keys,
filters=filters,
x=x,
y=y,
variables=variables,
views=views
)
# Make sure all the given keys are in lists
data_keys = self._force_key_as_list(data_keys)
filters = self._force_key_as_list(filters)
views = self._force_key_as_list(views)
if not variables is None:
variables = self._force_key_as_list(variables)
x = variables
y = variables
else:
x = self._force_key_as_list(x)
y = self._force_key_as_list(y)
# Make sure no keys that don't exist anywhere were passed
key_check = {
'data': data_keys,
'filter': filters,
'x': x,
'y': y,
'view': views
}
contents = self.describe()
for key_type, keys in key_check.iteritems():
if not keys is None:
uk = contents[key_type].unique()
if not any([tk in uk for tk in keys]):
raise ValueError(
"Some of the %s keys passed to stack.reduce() "
"weren't found. Found: %s. "
"Given: %s" % (key_type, uk, keys)
)
if not data_keys is None:
for dk in data_keys:
try:
del self[dk]
except:
pass
for dk in self.keys():
if not filters is None:
for fk in filters:
try:
del self[dk][fk]
except:
pass
for fk in self[dk].keys():
if not x is None:
for xk in x:
try:
del self[dk][fk][xk]
except:
pass
for xk in self[dk][fk].keys():
if not y is None:
for yk in y:
try:
del self[dk][fk][xk][yk]
except:
pass
for yk in self[dk][fk][xk].keys():
if not views is None:
for vk in views:
try:
del self[dk][fk][xk][yk][vk]
except:
pass
def add_link(self, data_keys=None, filters=['no_filter'], x=None, y=None,
views=None, weights=None, variables=None):
"""
        Add Link and View definitions to the Stack.
        The method can be used flexibly: it is possible to pass only Link
        definitions that might be composed of filter, x and y specifications,
        only views incl. weight variable selections or arbitrary combinations
        of the former.
:TODO: Remove ``variables`` from parameter list and method calls.
Parameters
----------
data_keys : str, optional
The data_key to be added to. If none is given, the method will try
to add to all data_keys found in the Stack.
        filters : list of str describing filter definitions, default ['no_filter']
The string must be a valid input for the
pandas.DataFrame.query() method.
x, y : str or list of str
            The x and y variables to construct Links from.
views : list of view method names.
Can be any of Quantipy's preset Views or the names of created
view method specifications.
weights : list, optional
The names of weight variables to consider in the data aggregation
process. Weight variables must be of type ``float``.
Returns
-------
None
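        Examples
        --------
        A hypothetical sketch; the data key, variables and weight name are
        illustrative only::

            stack.add_link(
                data_keys='survey',
                x=['q1', 'q2'],
                y=['gender'],
                views=['cbase', 'counts', 'c%'],
                weights=['weight_a']
            )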
"""
if data_keys is None:
data_keys = self.keys()
else:
self._verify_key_types(name='data', keys=data_keys)
data_keys = self._force_key_as_list(data_keys)
if not isinstance(views, ViewMapper):
# Use DefaultViews if no view were given
if views is None:
pass
# views = DefaultViews()
elif isinstance(views, (list, tuple)):
views = QuantipyViews(views=views)
else:
raise TypeError(
                "The views passed to stack.add_link() must be of type <quantipy.view_generators.ViewMapper>, "
                "or they must be a list of method names known to <quantipy.view_generators.QuantipyViews>."
)
qplogic_filter = False
if not isinstance(filters, dict):
self._verify_key_types(name='filter', keys=filters)
filters = self._force_key_as_list(filters)
filters = {f: f for f in filters}
# if filters.keys()[0] != 'no_filter':
# msg = ("Warning: pandas-based filtering will be deprecated in the "
# "future!\nPlease switch to quantipy-logic expressions.")
# print UserWarning(msg)
else:
qplogic_filter = True
if not variables is None:
if not x is None or not y is None:
raise ValueError(
"You cannot pass both 'variables' and 'x' and/or 'y' to stack.add_link() "
"at the same time."
)
x = self._force_key_as_list(x)
y = self._force_key_as_list(y)
        # Get the lazy y keys if none were given and there is only 1 x key
if not x is None:
if len(x)==1 and y is None:
y = self.describe(
index=['y'],
query="x=='%s'" % (x[0])
).index.tolist()
        # Get the lazy x keys if none were given and there is only 1 y key
if not y is None:
if len(y)==1 and x is None:
x = self.describe(
index=['x'],
query="y=='%s'" % (y[0])
).index.tolist()
for dk in data_keys:
self._verify_key_exists(dk)
for filter_def, logic in filters.items():
# if not filter_def in self[dk].keys():
if filter_def=='no_filter':
self[dk][filter_def].data = self[dk].data
self[dk][filter_def].meta = self[dk].meta
else:
if not qplogic_filter:
try:
self[dk][filter_def].data = self[dk].data.query(logic)
self[dk][filter_def].meta = self[dk].meta
                        except Exception, ex:
                            # Warn instead of raising so that the remaining
                            # filter definitions are still processed.
                            print UserWarning('A filter definition is invalid and will be skipped: {filter_def}'.format(filter_def=filter_def))
                            continue
else:
dataset = qp.DataSet('stack')
dataset.from_components(self[dk].data, self[dk].meta)
f_dataset = dataset.filter(filter_def, logic, inplace=False)
self[dk][filter_def].data = f_dataset._data
self[dk][filter_def].meta = f_dataset._meta
fdata = self[dk][filter_def].data
                if len(fdata) == 0:
                    # Warn instead of raising so that the remaining
                    # filter definitions are still processed.
                    print UserWarning('A filter definition resulted in no cases and will be skipped: {filter_def}'.format(filter_def=filter_def))
                    continue
self.__create_links(data=fdata, data_key=dk, the_filter=filter_def, x=x, y=y, views=views, weights=weights, variables=variables)
def describe(self, index=None, columns=None, query=None, split_view_names=False):
"""
Generates a structured overview of all Link defining Stack elements.
Parameters
----------
        index, columns : str or list of {'data', 'filter', 'x', 'y', 'view'},
optional
Controls the output representation by structuring a pivot-style
table according to the index and column values.
query : str
A query string that is valid for the pandas.DataFrame.query() method.
split_view_names : bool, default False
If True, will create an output of unique view name notations split
up into their components.
Returns
-------
description : pandas.DataFrame
            DataFrame summarizing the Stack's structure in terms of Links and Views.
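        Examples
        --------
        A hypothetical sketch; key names are illustrative only::

            # Pivot the overview to count views per x/y combination
            stack.describe(index=['x'], columns=['y'])
            # Restrict the overview via a pandas query string
            stack.describe(query="x=='q1'")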
"""
stack_tree = []
for dk in self.keys():
path_dk = [dk]
filters = self[dk]
for fk in filters.keys():
path_fk = path_dk + [fk]
xs = self[dk][fk]
for sk in xs.keys():
path_sk = path_fk + [sk]
ys = self[dk][fk][sk]
for tk in ys.keys():
path_tk = path_sk + [tk]
views = self[dk][fk][sk][tk]
if views.keys():
for vk in views.keys():
path_vk = path_tk + [vk, 1]
stack_tree.append(tuple(path_vk))
else:
path_vk = path_tk + ['|||||', 1]
stack_tree.append(tuple(path_vk))
column_names = ['data', 'filter', 'x', 'y', 'view', '#']
description = pd.DataFrame.from_records(stack_tree, columns=column_names)
if split_view_names:
views_as_series = pd.DataFrame(
description.pivot_table(values='#', columns='view', aggfunc='count')
).reset_index()['view']
parts = ['xpos', 'agg', 'condition', 'rel_to', 'weights',
'shortname']
description = pd.concat(
(views_as_series,
pd.DataFrame(views_as_series.str.split('|').tolist(),
columns=parts)), axis=1)
description.replace('|||||', np.NaN, inplace=True)
if query is not None:
description = description.query(query)
if not index is None or not columns is None:
description = description.pivot_table(values='#', index=index, columns=columns,
aggfunc='count')
return description
def refresh(self, data_key, new_data_key='', new_weight=None,
new_data=None, new_meta=None):
"""
Re-run all or a portion of Stack's aggregations for a given data key.
refresh() can be used to re-weight the data using a new case data
weight variable or to re-run all aggregations based on a changed source
data version (e.g. after cleaning the file/ dropping cases) or a
        combination of both.
.. note::
Currently this is only supported for the preset QuantipyViews(),
namely: ``'cbase'``, ``'rbase'``, ``'counts'``, ``'c%'``,
``'r%'``, ``'mean'``, ``'ebase'``.
Parameters
----------
data_key : str
The Links' data key to be modified.
new_data_key : str, default ''
            Controls whether the existing data key's files and aggregations
            will be overwritten or stored via a new data key.
new_weight : str
The name of a new weight variable used to re-aggregate the Links.
new_data : pandas.DataFrame
The case data source. If None is given, the
original case data found for the data key will be used.
new_meta : quantipy meta document
A meta data source associated with the case data. If None is given,
the original meta definition found for the data key will be used.
Returns
-------
None
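        Examples
        --------
        A hypothetical sketch; the data key and weight name are illustrative
        only::

            # Re-run all preset aggregations with an additional weight
            stack.refresh('survey', new_data_key='survey_v2',
                          new_weight='weight_b')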
"""
content = self.describe()[['data', 'filter', 'x', 'y', 'view']]
content = content[content['data'] == data_key]
put_meta = self[data_key].meta if new_meta is None else new_meta
put_data = self[data_key].data if new_data is None else new_data
dk = new_data_key if new_data_key else data_key
self.add_data(data_key=dk, data=put_data, meta=put_meta)
skipped_views = []
for _, f, x, y, view in content.values:
shortname = view.split('|')[-1]
if shortname not in ['default', 'cbase', 'cbase_gross',
'rbase', 'counts', 'c%',
'r%', 'ebase', 'mean',
'c%_sum', 'counts_sum']:
if view not in skipped_views:
skipped_views.append(view)
warning_msg = ('\nOnly preset QuantipyViews are supported.'
'Skipping: {}').format(view)
print warning_msg
else:
view_weight = view.split('|')[-2]
if not x in [view_weight, new_weight]:
if new_data is None and new_weight is not None:
if not view_weight == '':
if new_weight == '':
weight = [None, view_weight]
else:
weight = [view_weight, new_weight]
else:
if new_weight == '':
weight = None
else:
weight = [None, new_weight]
self.add_link(data_keys=dk, filters=f, x=x, y=y,
weights=weight, views=[shortname])
else:
if view_weight == '':
weight = None
elif new_weight is not None:
if not (view_weight == new_weight):
if new_weight == '':
weight = [None, view_weight]
else:
weight = [view_weight, new_weight]
else:
weight = view_weight
else:
weight = view_weight
try:
self.add_link(data_keys=dk, filters=f, x=x, y=y,
weights=weight, views=[shortname])
except ValueError, e:
print '\n', e
return None
def save(self, path_stack, compression="gzip", store_cache=True,
decode_str=False, dataset=False, describe=False):
"""
Save Stack instance to .stack file.
Parameters
----------
path_stack : str
The full path to the .stack file that should be created, including
the extension.
compression : {'gzip'}, default 'gzip'
The intended compression type.
store_cache : bool, default True
Stores the MatrixCache in a file in the same location.
        decode_str : bool, default False
If True the unicoder function will be used to decode all str
objects found anywhere in the meta document/s.
dataset : bool, default=False
If True a json/csv will be saved parallel to the saved stack
for each data key in the stack.
describe : bool, default=False
If True the result of stack.describe().to_excel() will be
saved parallel to the saved stack.
Returns
-------
None
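        Examples
        --------
        A hypothetical sketch; the path is illustrative only::

            stack.save(path_stack='./output/MyStack.stack')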
"""
protocol = cPickle.HIGHEST_PROTOCOL
if not path_stack.endswith('.stack'):
raise ValueError(
"To avoid ambiguity, when using Stack.save() you must provide the full path to "
"the stack file you want to create, including the file extension. For example: "
"stack.save(path_stack='./output/MyStack.stack'). Your call looks like this: "
"stack.save(path_stack='%s', ...)" % (path_stack)
)
# Make sure there are no str objects in any meta documents. If
# there are any non-ASCII characters will be encoded
# incorrectly and lead to UnicodeDecodeErrors in Jupyter.
if decode_str:
for dk in self.keys():
self[dk].meta = unicoder(self[dk].meta)
if compression is None:
f = open(path_stack, 'wb')
cPickle.dump(self, f, protocol)
else:
f = gzip.open(path_stack, 'wb')
cPickle.dump(self, f, protocol)
if store_cache:
caches = {}
for key in self.keys():
caches[key] = self[key].cache
path_cache = path_stack.replace('.stack', '.cache')
if compression is None:
f1 = open(path_cache, 'wb')
cPickle.dump(caches, f1, protocol)
else:
f1 = gzip.open(path_cache, 'wb')
cPickle.dump(caches, f1, protocol)
f1.close()
f.close()
if dataset:
for key in self.keys():
path_json = path_stack.replace(
'.stack',
' [{}].json'.format(key))
path_csv = path_stack.replace(
'.stack',
' [{}].csv'.format(key))
write_quantipy(
meta=self[key].meta,
data=self[key].data,
path_json=path_json,
path_csv=path_csv)
if describe:
path_describe = path_stack.replace('.stack', '.xlsx')
self.describe().to_excel(path_describe)
# def get_slice(data_key=None, x=None, y=None, filters=None, views=None):
# """ """
# pass
# STATIC METHODS
@staticmethod
def from_sav(data_key, filename, name=None, path=None, ioLocale="en_US.UTF-8", ioUtf8=True):
"""
Creates a new stack instance from a .sav file.
Parameters
----------
data_key : str
The data_key for the data and meta in the sav file.
filename : str
            The name of the sav file.
name : str
A name for the sav (stored in the meta).
path : str
The path to the sav file.
ioLocale : str
            The locale used during the sav processing.
        ioUtf8 : bool
            If True, text communicated to and from the I/O module is treated
            as UTF-8 encoded.
Returns
-------
stack : stack object instance
A stack instance that has a data_key with data and metadata
to run aggregations.
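        Examples
        --------
        A hypothetical sketch; file and key names are illustrative only::

            stack = Stack.from_sav(data_key='survey',
                                   filename='survey.sav',
                                   path='./data/')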
"""
if name is None:
name = data_key
meta, data = parse_sav_file(filename=filename, path=path, name=name, ioLocale=ioLocale, ioUtf8=ioUtf8)
return Stack(add_data={name: {'meta': meta, 'data':data}})
@staticmethod
def load(path_stack, compression="gzip", load_cache=False):
"""
Load Stack instance from .stack file.
Parameters
----------
path_stack : str
            The full path to the .stack file that should be loaded, including
the extension.
compression : {'gzip'}, default 'gzip'
The compression type that has been used saving the file.
load_cache : bool, default False
            Loads the MatrixCache into the Stack if a .cache file is found.
Returns
-------
None
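        Examples
        --------
        A hypothetical sketch; the path is illustrative only::

            stack = Stack.load(path_stack='./output/MyStack.stack',
                               load_cache=True)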
"""
if not path_stack.endswith('.stack'):
raise ValueError(
                "To avoid ambiguity, when using Stack.load() you must provide the full path to "
                "the stack file you want to load, including the file extension. For example: "
                "stack.load(path_stack='./output/MyStack.stack'). Your call looks like this: "
                "stack.load(path_stack='%s', ...)" % (path_stack)
)
if compression is None:
f = open(path_stack, 'rb')
else:
f = gzip.open(path_stack, 'rb')
new_stack = cPickle.load(f)
f.close()
if load_cache:
path_cache = path_stack.replace('.stack', '.cache')
if compression is None:
f = open(path_cache, 'rb')
else:
f = gzip.open(path_cache, 'rb')
caches = cPickle.load(f)
for key in caches.keys():
if key in new_stack.keys():
new_stack[key].cache = caches[key]
else:
raise ValueError(
                        "Tried to insert a loaded MatrixCache into a data_key that "
                        "is not in the stack. The data_key is '{}', available keys are {}"
                        .format(key, caches.keys())
)
f.close()
return new_stack
# PRIVATE METHODS
def __get_all_y_keys(self, data_key, the_filter="no_filter"):
if(self.stack_pos == 'stack_root'):
return self[data_key].y_variables
else:
raise KeyError("get_all_y_keys can only be called from a stack at root level. Current level is '{0}'".format(self.stack_pos))
def __get_all_x_keys(self, data_key, the_filter="no_filter"):
if(self.stack_pos == 'stack_root'):
return self[data_key].x_variables
else:
raise KeyError("get_all_x_keys can only be called from a stack at root level. Current level is '{0}'".format(self.stack_pos))
def __get_all_x_keys_except(self, data_key, exception):
keys = self.__get_all_x_keys(data_key)
return [i for i in keys if i != exception[0]]
def __get_all_y_keys_except(self, data_key, exception):
keys = self.__get_all_y_keys(data_key)
return [i for i in keys if i != exception[0]]
def __set_x_key(self, key):
if self.x_variables is None:
self.x_variables = set(key)
else:
self.x_variables.update(key)
def __set_y_key(self, key):
if self.y_variables is None:
self.y_variables = set(key)
else:
self.y_variables.update(key)
def _set_x_and_y_keys(self, data_key, x, y):
"""
        Sets the x_variables and y_variables in the data part of the stack
        for this data_key, e.g. stack['Jan']. This method can also be used to
        add to the current lists and it makes sure the lists stay unique.
"""
if self.stack_pos == 'stack_root':
self[data_key].__set_x_key(x)
self[data_key].__set_y_key(y)
else:
raise KeyError("set_x_keys can only be called from a stack at root level. Current level is '{0}'".format(self.stack_pos))
def __create_combinations(self, data, data_key, x=None, y=None, weight=None, variables=None):
if isinstance(y, str):
y = [y]
if isinstance(x, str):
x = [x]
has_metadata = self[data_key].meta is not None and not isinstance(self[data_key].meta, Stack)
# any(...) returns true if ANY of the vars are not None
if any([x, y]) and variables is not None:
# Raise an error if variables AND x/y are BOTH supplied
raise ValueError("Either use the 'variables' OR 'x', 'y' NOT both.")
if not any([x, y]):
if variables is None:
if not has_metadata:
# "fully-lazy" method. (variables, x and y are all None)
variables = data.columns.tolist()
if variables is not None:
x = variables
y = variables
variables = None
# Ensure that we actually have metadata
if has_metadata:
# THEN we try to create the combinations with metadata
combinations = self.__create_combinations_with_meta(data=data, data_key=data_key, x=x, y=y, weight=weight)
else:
# Either variables or both x AND y are supplied. Then create the combinations from that.
combinations = self.__create_combinations_no_meta(data=data, data_key=data_key, x=x, y=y, weight=weight)
unique_list = set([item for comb in combinations for item in comb])
return combinations, unique_list
def __create_combinations_with_meta(self, data, data_key, x=None, y=None, weight=None):
# TODO: These meta functions should possibly be in the helpers functions
metadata_columns = self[data_key].meta['columns'].keys()
for mask, mask_data in self[data_key].meta['masks'].iteritems():
# TODO :: Get the static list from somewhere. not hardcoded.
if mask_data['type'].lower() in ['array', 'dichotomous set',
"categorical set"]:
metadata_columns.append(mask)
for item in mask_data['items']:
if "source" in item:
column = item["source"].split('@')[1]
metadata_columns.remove(column)
elif mask_data['type'].lower() in ["overlay"]:
pass
# Use all from the metadata, if nothing is specified (fully-lazy)
if x is None and y is None:
x = metadata_columns
y = metadata_columns
if all([x, y]):
metadata_columns = list(set(metadata_columns + x + y))
elif x is not None:
metadata_columns = list(set(metadata_columns + x))
elif y is not None:
metadata_columns = list(set(metadata_columns + y))
combinations = functions.create_combinations_from_array(sorted(metadata_columns))
for var in [x, y]:
if var is not None:
if weight in var:
var.remove(weight)
if all([x, y]):
combinations = [(x_item, y_item) for x_item, y_item in combinations
if x_item in x and y_item in y]
elif x is not None:
combinations = [(x_item, y_item) for x_item, y_item in combinations
if x_item in x]
elif y is not None:
combinations = [(x_item, y_item) for x_item, y_item in combinations
if y_item in y]
return combinations
def __create_combinations_no_meta(self, data, data_key, x=None, y=None, weight=None):
if x is None:
x = data.columns.tolist()
if y is None:
y = data.columns.tolist()
for var in [x, y]:
if weight in var:
var.remove(weight)
combinations = [(x_item, y_item) for x_item in x for y_item
in y if x_item != y_item]
self._set_x_and_y_keys(data_key, x, y)
return combinations
def __create_links(self, data, data_key, views, variables=None, x=None, y=None,
the_filter=None, store_view_in_link=False, weights=None):
if views is not None:
has_links = True if self[data_key][the_filter].keys() else False
if has_links:
xs = self[data_key][the_filter].keys()
if x is not None:
valid_x = [xk for xk in xs if xk in x]
valid_x.extend(x)
x = set(valid_x)
else:
x = xs
ys = list(set(itertools.chain.from_iterable(
[self[data_key][the_filter][xk].keys()
for xk in xs])))
if y is not None:
valid_y = [yk for yk in ys if yk in y]
valid_y.extend(y)
y = set(valid_y)
else:
y = ys
if self._x_and_y_keys_in_file(data_key, data, x, y):
for x_key, y_key in itertools.product(x, y):
if x_key==y_key and x_key=='@':
continue
if y_key == '@':
if not isinstance(self[data_key][the_filter][x_key][y_key], Link):
link = Link(
the_filter=the_filter,
x=x_key,
y='@',
data_key=data_key,
stack=self,
store_view=store_view_in_link,
create_views=False
)
self[data_key][the_filter][x_key]['@'] = link
else:
link = self[data_key][the_filter][x_key]['@']
elif x_key == '@':
if not isinstance(self[data_key][the_filter][x_key][y_key], Link):
link = Link(
the_filter=the_filter,
x='@',
y=y_key,
data_key=data_key,
stack=self,
store_view=store_view_in_link,
create_views=False
)
self[data_key][the_filter]['@'][y_key] = link
else:
link = self[data_key][the_filter]['@'][y_key]
else:
if not isinstance(self[data_key][the_filter][x_key][y_key], Link):
link = Link(
the_filter=the_filter,
x=x_key,
y=y_key,
data_key=data_key,
stack=self,
store_view=store_view_in_link,
create_views=False
)
self[data_key][the_filter][x_key][y_key] = link
else:
link = self[data_key][the_filter][x_key][y_key]
if views is not None:
views._apply_to(link, weights)
def _x_and_y_keys_in_file(self, data_key, data, x, y):
data_columns = data.columns.tolist()
if '>' in ','.join(y): y = self._clean_from_nests(y)
if '>' in ','.join(x):
raise NotImplementedError('x-axis Nesting not supported.')
x_not_found = [var for var in x if not var in data_columns
and not var == '@']
y_not_found = [var for var in y if not var in data_columns
and not var == '@']
        if x_not_found:
masks_meta_lookup_x = [var for var in x_not_found
if var in self[data_key].meta['masks'].keys()]
for found_in_meta in masks_meta_lookup_x:
x_not_found.remove(found_in_meta)
        if y_not_found:
masks_meta_lookup_y = [var for var in y_not_found
if var in self[data_key].meta['masks'].keys()]
for found_in_meta in masks_meta_lookup_y:
y_not_found.remove(found_in_meta)
if not x_not_found and not y_not_found:
return True
elif x_not_found and y_not_found:
raise ValueError(
'data key {}: x: {} and y: {} not found.'.format(
data_key, x_not_found, y_not_found))
elif x_not_found:
raise ValueError(
'data key {}: x: {} not found.'.format(
data_key, x_not_found))
elif y_not_found:
raise ValueError(
'data key {}: y: {} not found.'.format(
data_key, y_not_found))
def _clean_from_nests(self, variables):
cleaned = []
nests = [var for var in variables if '>' in var]
non_nests = [var for var in variables if not '>' in var]
for nest in nests:
cleaned.extend([var.strip() for var in nest.split('>')])
non_nests += cleaned
non_nests = list(set(non_nests))
return non_nests
def __clean_column_names(self, columns):
"""
Remove extra doublequotes if there are any
"""
cols = []
for column in columns:
cols.append(column.replace('"', ''))
return cols
def __generate_key_from_list_of(self, list_of_keys):
"""
Generate keys from a list (or tuple).
"""
list_of_keys = list(list_of_keys)
list_of_keys.sort()
return ",".join(list_of_keys)
def __has_list(self, small):
        """
        Check if ``small`` occurs as a contiguous subsequence of self.keys().
        Returns the (start, end) index pair if found, otherwise False.
        """
keys = self.keys()
for i in xrange(len(keys)-len(small)+1):
for j in xrange(len(small)):
if keys[i+j] != small[j]:
break
else:
return i, i+len(small)
return False
def __get_all_combinations(self, list_of_items):
"""Generates all combinations of items from a list """
return [itertools.combinations(list_of_items, index+1)
for index in range(len(list_of_items))]
def __get_stack_pointer(self, stack_pos):
        """Takes a stack_pos and returns the stack at that location;
        raises an exception if the stack pointer is not found.
        """
if self.parent.stack_pos == stack_pos:
return self.parent
else:
return self.parent.__get_stack_pointer(stack_pos)
def __get_chains(self, name, data_keys, filters, x, y, views,
orientation, select, rules,
rules_weight):
"""
List comprehension wrapper around .get_chain().
"""
if orientation == 'y':
return [
self.get_chain(
name=name,
data_keys=data_keys,
filters=filters,
x=x,
y=y_var,
views=views,
select=select,
rules=rules,
rules_weight=rules_weight
)
for y_var in y
]
elif orientation == 'x':
return [
self.get_chain(
name=name,
data_keys=data_keys,
filters=filters,
x=x_var,
y=y,
views=views,
select=select,
rules=rules,
rules_weight=rules_weight
)
for x_var in x
]
else:
raise ValueError(
"Unknown orientation type. Please use 'x' or 'y'."
)
def _verify_multiple_key_types(self, data_keys=None, filters=None, x=None,
y=None, variables=None, views=None):
"""
        Verify that the given keys are str or unicode, or a list or tuple of those.
"""
if data_keys is not None:
self._verify_key_types(name='data', keys=data_keys)
if filters is not None:
self._verify_key_types(name='filter', keys=filters)
if x is not None:
self._verify_key_types(name='x', keys=x)
if y is not None:
self._verify_key_types(name='y', keys=y)
if variables is not None:
self._verify_key_types(name='variables', keys=variables)
if views is not None:
self._verify_key_types(name='view', keys=views)
def _verify_key_exists(self, key, stack_path=[]):
"""
Verify that the given key exists in the stack at the path targeted.
"""
error_msg = (
"Could not find the {key_type} key '{key}' in: {stack_path}. "
"Found {keys_found} instead."
)
try:
dk = stack_path[0]
fk = stack_path[1]
xk = stack_path[2]
yk = stack_path[3]
vk = stack_path[4]
except:
pass
try:
if len(stack_path) == 0:
if key not in self:
key_type, keys_found = 'data', self.keys()
stack_path = 'stack'
raise ValueError
elif len(stack_path) == 1:
if key not in self[dk]:
key_type, keys_found = 'filter', self[dk].keys()
stack_path = "stack['{dk}']".format(
dk=dk)
raise ValueError
elif len(stack_path) == 2:
if key not in self[dk][fk]:
key_type, keys_found = 'x', self[dk][fk].keys()
stack_path = "stack['{dk}']['{fk}']".format(
dk=dk, fk=fk)
raise ValueError
elif len(stack_path) == 3:
meta = self[dk].meta
if self._is_array_summary(meta, xk, None) and not key == '@':
pass
elif key not in self[dk][fk][xk]:
key_type, keys_found = 'y', self[dk][fk][xk].keys()
stack_path = "stack['{dk}']['{fk}']['{xk}']".format(
dk=dk, fk=fk, xk=xk)
raise ValueError
elif len(stack_path) == 4:
if key not in self[dk][fk][xk][yk]:
key_type, keys_found = 'view', self[dk][fk][xk][yk].keys()
stack_path = "stack['{dk}']['{fk}']['{xk}']['{yk}']".format(
dk=dk, fk=fk, xk=xk, yk=yk)
raise ValueError
except ValueError:
print error_msg.format(
key_type=key_type,
key=key,
stack_path=stack_path,
keys_found=keys_found
)
def _force_key_as_list(self, key):
"""Returns key as [key] if it is str or unicode"""
return [key] if isinstance(key, (str, unicode)) else key
def _verify_key_types(self, name, keys):
"""
        Verify that the given keys are str or unicode, or a list or tuple of those.
"""
if isinstance(keys, (list, tuple)):
for key in keys:
self._verify_key_types(name, key)
elif isinstance(keys, (str, unicode)):
pass
else:
raise TypeError(
"All %s keys must be one of the following types: "
"<str> or <unicode>, "
"<list> of <str> or <unicode>, "
"<tuple> of <str> or <unicode>. "
"Given: %s" % (name, keys)
)
def _find_groups(self, view):
groups = OrderedDict()
logic = view._kwargs.get('logic')
description = view.describe_block()
groups['codes'] = [c for c, d in description.items() if d == 'normal']
net_names = [v for v, d in description.items() if d == 'net']
for l in logic:
new_l = copy.deepcopy(l)
for k in l:
if k not in net_names:
del new_l[k]
groups[new_l.keys()[0]] = new_l.values()[0]
groups['codes'] = [c for c, d in description.items() if d == 'normal']
return groups
def sort_expanded_nets(self, view, within=True, between=True, ascending=False,
fix=None):
if not within and not between:
return view.dataframe
df = view.dataframe
name = df.index.levels[0][0]
if not fix:
fix_codes = []
else:
if not isinstance(fix, list):
fix_codes = [fix]
else:
fix_codes = fix
fix_codes = [c for c in fix_codes if c in
df.index.get_level_values(1).tolist()]
net_groups = self._find_groups(view)
sort_col = (df.columns.levels[0][0], '@')
sort = [(name, v) for v in df.index.get_level_values(1)
if (v in net_groups['codes'] or
v in net_groups.keys()) and not v in fix_codes]
if between:
if pd.__version__ == '0.19.2':
temp_df = df.loc[sort].sort_values(sort_col, 0, ascending=ascending)
else:
temp_df = df.loc[sort].sort_index(0, sort_col, ascending=ascending)
else:
temp_df = df.loc[sort]
between_order = temp_df.index.get_level_values(1).tolist()
code_group_list = []
for g in between_order:
if g in net_groups:
code_group_list.append([g] + net_groups[g])
elif g in net_groups['codes']:
code_group_list.append([g])
final_index = []
for g in code_group_list:
is_code = len(g) == 1
if not is_code:
fixed_net_name = g[0]
sort = [(name, v) for v in g[1:]]
if within:
if pd.__version__ == '0.19.2':
temp_df = df.loc[sort].sort_values(sort_col, 0, ascending=ascending)
else:
temp_df = df.loc[sort].sort_index(0, sort_col, ascending=ascending)
else:
temp_df = df.loc[sort]
new_idx = [fixed_net_name] + temp_df.index.get_level_values(1).tolist()
final_index.extend(new_idx)
else:
final_index.extend(g)
final_index = [(name, i) for i in final_index]
if fix_codes:
fix_codes = [(name, f) for f in fix_codes]
final_index.extend(fix_codes)
df = df.reindex(final_index)
return df
def get_frequency_via_stack(self, data_key, the_filter, col, weight=None):
weight_notation = '' if weight is None else weight
vk = 'x|f|:||{}|counts'.format(weight_notation)
try:
f = self[data_key][the_filter][col]['@'][vk].dataframe
except (KeyError, AttributeError) as e:
try:
f = self[data_key][the_filter]['@'][col][vk].dataframe.T
except (KeyError, AttributeError) as e:
f = frequency(self[data_key].meta, self[data_key].data, x=col, weight=weight)
return f
def get_descriptive_via_stack(self, data_key, the_filter, col, weight=None):
l = self[data_key][the_filter][col]['@']
w = '' if weight is None else weight
mean_key = [k for k in l.keys() if 'd.mean' in k.split('|')[1] and
k.split('|')[-2] == w]
if not mean_key:
msg = "No mean view to sort '{}' on found!"
raise RuntimeError(msg.format(col))
elif len(mean_key) > 1:
msg = "Multiple mean views found for '{}'. Unable to sort!"
raise RuntimeError(msg.format(col))
else:
mean_key = mean_key[0]
vk = mean_key
d = l[mean_key].dataframe
return d
def _is_array_summary(self, meta, x, y):
return x in meta['masks']
def _is_transposed_summary(self, meta, x, y):
return x == '@' and y in meta['masks']
def axis_slicer_from_vartype(self, all_rules_axes, rules_axis, dk, the_filter, x, y, rules_weight):
if rules_axis == 'x' and 'x' not in all_rules_axes:
return None
elif rules_axis == 'y' and 'y' not in all_rules_axes:
return None
meta = self[dk].meta
array_summary = self._is_array_summary(meta, x, y)
transposed_summary = self._is_transposed_summary(meta, x, y)
axis_slicer = None
if rules_axis == 'x':
if not array_summary and not transposed_summary:
axis_slicer = self.get_rules_slicer_via_stack(
dk, the_filter, x=x, weight=rules_weight)
elif array_summary:
axis_slicer = self.get_rules_slicer_via_stack(
dk, the_filter, x=x, y='@', weight=rules_weight,
slice_array_items=True)
elif transposed_summary:
axis_slicer = self.get_rules_slicer_via_stack(
dk, the_filter, x='@', y=y, weight=rules_weight)
elif rules_axis == 'y':
if not array_summary and not transposed_summary:
axis_slicer = self.get_rules_slicer_via_stack(
dk, the_filter, y=y, weight=rules_weight)
elif array_summary:
axis_slicer = self.get_rules_slicer_via_stack(
dk, the_filter, x=x, y='@', weight=rules_weight,
slice_array_items=False)
elif transposed_summary:
axis_slicer = self.get_rules_slicer_via_stack(
dk, the_filter, x='@', y=y, weight=rules_weight)
return axis_slicer
def get_rules_slicer_via_stack(self, data_key, the_filter,
x=None, y=None, weight=None,
slice_array_items=False):
m = self[data_key].meta
array_summary = self._is_array_summary(m, x, y)
transposed_summary = self._is_transposed_summary(m, x, y)
rules = None
if not array_summary and not transposed_summary:
if not x is None:
try:
rules = self[data_key].meta['columns'][x]['rules']['x']
col = x
except:
pass
elif not y is None:
try:
rules = self[data_key].meta['columns'][y]['rules']['y']
col = y
except:
pass
elif array_summary:
if slice_array_items:
try:
rules = self[data_key].meta['masks'][x]['rules']['x']
col = x
except:
pass
else:
try:
rules = self[data_key].meta['masks'][x]['rules']['y']
col = x
except:
pass
elif transposed_summary:
try:
rules = self[data_key].meta['masks'][y]['rules']['x']
col = y
except:
pass
if not rules: return None
views = self[data_key][the_filter][col]['@'].keys()
w = '' if weight is None else weight
expanded_net = [v for v in views if '}+]' in v
and v.split('|')[-2] == w
and v.split('|')[1] == 'f' and
not v.split('|')[3] == 'x']
if expanded_net:
if len(expanded_net) > 1:
if len(expanded_net) == 2:
if expanded_net[0].split('|')[2] == expanded_net[1].split('|')[2]:
expanded_net = expanded_net[0]
else:
msg = "Multiple 'expand' using views found for '{}'. Unable to sort!"
raise RuntimeError(msg.format(col))
else:
expanded_net = expanded_net[0]
if 'sortx' in rules:
on_mean = rules['sortx'].get('sort_on', '@') == 'mean'
else:
on_mean = False
if 'sortx' in rules and on_mean:
f = self.get_descriptive_via_stack(
data_key, the_filter, col, weight=weight)
elif 'sortx' in rules and expanded_net:
within = rules['sortx'].get('within', False)
between = rules['sortx'].get('between', False)
fix = rules['sortx'].get('fixed', False)
ascending = rules['sortx'].get('ascending', False)
view = self[data_key][the_filter][col]['@'][expanded_net]
f = self.sort_expanded_nets(view, between=between, within=within,
ascending=ascending, fix=fix)
else:
f = self.get_frequency_via_stack(
data_key, the_filter, col, weight=weight)
if transposed_summary or (not slice_array_items and array_summary):
rules_slicer = functions.get_rules_slicer(f.T, rules)
else:
if not expanded_net or ('sortx' in rules and on_mean):
rules_slicer = functions.get_rules_slicer(f, rules)
else:
rules_slicer = f.index.values.tolist()
try:
rules_slicer.remove((col, 'All'))
except:
pass
return rules_slicer
@modify(to_list='batches')
def _check_batches(self, dk, batches='all'):
"""
Returns a list of valid ``qp.Batch`` names.
Parameters
----------
batches: str/ list of str, default 'all'
Included names are checked against valid ``qp.Batch`` names. If
batches='all', all valid ``Batch`` names are returned.
Returns
-------
list of str
"""
if not batches:
return []
elif batches[0] == 'all':
return self[dk].meta['sets']['batches'].keys()
else:
valid = self[dk].meta['sets']['batches'].keys()
not_valid = [b for b in batches if not b in valid]
if not_valid:
msg = '``Batch`` name not found in ``Stack``: {}'
raise KeyError(msg.format(not_valid))
return batches
def _x_y_f_w_map(self, dk, batches='all'):
"""
Map each x-key to its filter names, weights and y-keys across the given batches.
"""
def _append_loop(mapping, x, fn, f, w, ys):
if not x in mapping:
mapping[x] = {fn: {'f': f, tuple(w): ys}}
elif not fn in mapping[x]:
mapping[x][fn] = {'f': f, tuple(w): ys}
elif not tuple(w) in mapping[x][fn]:
mapping[x][fn][tuple(w)] = ys
elif not all(y in mapping[x][fn][tuple(w)] for y in ys):
yks = set(mapping[x][fn][tuple(w)]).union(set(ys))
mapping[x][fn][tuple(w)] = list(yks)
return None
arrays = self.variable_types(dk, verbose=False)['array']
mapping = {}
y_on_y = {}
batches = self._check_batches(dk, batches)
for batch in batches:
b = self[dk].meta['sets']['batches'][batch]
xs = b['x_y_map'].keys()
ys = b['x_y_map']
f = b['x_filter_map']
w = b['weights']
fs = b['filter']
for x in xs:
if x == '@':
for y in ys[x]:
fn = f[y] if f[y] == 'no_filter' else f[y].keys()[0]
_append_loop(mapping, x, fn, f[y], w, ys[x])
else:
fn = f[x] if f[x] == 'no_filter' else f[x].keys()[0]
_append_loop(mapping, x, fn, f[x], w, ys[x])
if b['y_on_y']:
fn = fs if fs == 'no_filter' else fs.keys()[0]
for x in b['yks'][1:]:
_append_loop(mapping, x, fn, fs, w, b['yks'])
_append_loop(y_on_y, x, fn, fs, w, b['yks'])
return mapping, y_on_y
@modify(to_list=['views', 'categorize', 'xs', 'batches'])
def aggregate(self, views, unweighted_base=True, categorize=[],
batches='all', xs=None, verbose=True):
"""
Add views to all defined ``qp.Link`` in ``qp.Stack``.
Parameters
----------
views: str or list of str or qp.ViewMapper
``views`` that are added.
unweighted_base: bool, default True
If True, unweighted 'cbase' is added to all non-arrays.
categorize: str or list of str
Determines how numerical data is handled: If provided, the
variables will get counts and percentage aggregations
(``'counts'``, ``'c%'``) alongside the ``'cbase'`` view. If False,
only ``'cbase'`` views are generated for non-categorical types.
batches: str/ list of str, default 'all'
Name(s) of ``qp.Batch`` instance(s) that are used to aggregate the
``qp.Stack``.
xs: list of str
Names of variables for which views are added.
Returns
-------
None, modify ``qp.Stack`` inplace
"""
if not 'cbase' in views: unweighted_base = False
if isinstance(views[0], ViewMapper):
views = views[0]
complete = views[views.keys()[0]]['kwargs'].get('complete', False)
elif any('cumsum' in v for v in views):
complete = True
else:
complete = False
x_in_stack = self.describe('x').index.tolist()
for dk in self.keys():
batches = self._check_batches(dk, batches)
if not batches: return None
x_y_f_w_map, y_on_y = self._x_y_f_w_map(dk, batches)
if not xs:
xs = [x for x in x_y_f_w_map.keys() if x in x_in_stack]
else:
xs = [x for x in xs if x in x_in_stack]
v_typ = self.variable_types(dk, verbose=False)
numerics = v_typ['int'] + v_typ['float']
skipped = [x for x in xs if (x in numerics and not x in categorize)]
total_len = len(xs)
if total_len == 0:
msg = "Cannot aggregate, 'xs' contains no valid variables."
raise ValueError(msg)
for idx, x in enumerate(xs, start=1):
if not x in x_y_f_w_map.keys():
msg = "Cannot find {} in qp.Stack for ``qp.Batch`` '{}'"
raise KeyError(msg.format(x, batches))
v = ['cbase'] if x in skipped else views
for f_dict in x_y_f_w_map[x].values():
f = f_dict.pop('f')
for weight, y in f_dict.items():
w = list(weight) if weight else None
self.add_link(dk, f, x=x, y=y, views=v, weights=w)
if unweighted_base and not ((None in w and 'cbase' in v)
or x in v_typ['array'] or any(yks in v_typ['array'] for yks in y)):
self.add_link(dk, f, x=x, y=y, views=['cbase'], weights=None)
if complete:
if isinstance(f, dict):
f_key = f.keys()[0]
else:
f_key = f
for ys in y:
y_on_ys = y_on_y.get(x, {}).get(f_key, {}).get(tuple(w), [])
if ys in y_on_ys: continue
link = self[dk][f_key][x][ys]
for ws in w:
pct = 'x|f|:|y|{}|c%'.format('' if not ws else ws)
counts = 'x|f|:||{}|counts'.format('' if not ws else ws)
for view in [pct, counts]:
if view in link:
del link[view]
if verbose:
done = float(idx) / float(total_len) *100
print '\r',
time.sleep(0.01)
print 'Stack [{}]: {} %'.format(dk, round(done, 1)),
sys.stdout.flush()
if skipped and verbose:
msg = ("\n\nWarning: Found {} non-categorized numeric variable(s): {}.\n"
"Descriptive statistics must be added!")
print msg.format(len(skipped), skipped)
return None
@modify(to_list=['on_vars', '_batches'])
def cumulative_sum(self, on_vars, _batches='all', verbose=True):
"""
Add cumulative sum view to a specified collection of xks of the stack.
Parameters
----------
on_vars : list
The list of x variables to add the view to.
_batches: str or list of str
Views are only added to ``qp.Links`` that are defined in these
``qp.Batch`` instances.
Returns
-------
None
The stack instance is modified inplace.
"""
for dk in self.keys():
_batches = self._check_batches(dk, _batches)
if not _batches or not on_vars: return None
meta = self[dk].meta
data = self[dk].data
for v in on_vars:
if v in meta['sets']:
items = [i.split('@')[-1] for i in meta['sets'][v]['items']]
on_vars = list(set(on_vars + items))
self.aggregate(['counts_cumsum', 'c%_cumsum'], False, [], _batches, on_vars, verbose)
return None
def _add_checking_chain(self, dk, cluster, name, x, y, views):
key, view, c_view = views
c_stack = qp.Stack('checks')
c_stack.add_data('checks', data=self[dk].data, meta=self[dk].meta)
c_stack.add_link(x=x, y=y, views=view, weights=None)
c_stack.add_link(x=x, y=y, views=c_view, weights=None)
c_views = c_stack.describe('view').index.tolist()
len_v_keys = len(view)
view_keys = ['x|f|:|||cbase', 'x|f|:|||counts'][0:len_v_keys]
c_views = view_keys + [v for v in c_views
if v.endswith('{}_check'.format(key))]
if name == 'stat_check':
chain = c_stack.get_chain(x=x, y=y, views=c_views, orient_on='x')
name = [v for v in c_views if v.endswith('{}_check'.format(key))][0]
cluster[name] = chain
else:
chain = c_stack.get_chain(name=name, x=x, y=y, views=c_views)
cluster.add_chain(chain)
return cluster
@modify(to_list=['on_vars', '_batches'])
def add_nets(self, on_vars, net_map, expand=None, calc=None, text_prefix='Net:',
checking_cluster=None, _batches='all', verbose=True):
"""
Add a net-like view to a specified collection of x keys of the stack.
Parameters
----------
on_vars : list
The list of x variables to add the view to.
net_map : list of dicts
The listed dicts must map the net/band text label to lists of
categorical answer codes to group together, e.g.:
>>> [{'Top3': [1, 2, 3]},
... {'Bottom3': [4, 5, 6]}]
It is also possible to provide enumerated net definition dictionaries
that are explicitly setting ``text`` metadata per ``text_key`` entries:
>>> [{1: [1, 2], 'text': {'en-GB': 'UK NET TEXT',
... 'da-DK': 'DK NET TEXT',
... 'de-DE': 'DE NET TEXT'}}]
expand : {'before', 'after'}, default None
If provided, the view will list the net-defining codes after or before
the computed net groups (i.e. "overcode" nets).
calc : dict, default None
A dictionary attaching a text label to a calculation expression
using the net definitions. The nets are referenced as per
'net_1', 'net_2', 'net_3', ... .
Supported calculation expressions are add, sub, div, mul. Example:
>>> {'calc': ('net_1', add, 'net_2'), 'text': {'en-GB': 'UK CALC LAB',
... 'da-DK': 'DA CALC LAB',
... 'de-DE': 'DE CALC LAB'}}
text_prefix : str, default 'Net:'
By default each code grouping/net will have its ``text`` label prefixed
with 'Net: '. Toggle by passing None (or an empty str, '').
checking_cluster : quantipy.Cluster, default None
When provided, an automated checking aggregation will be added to the
``Cluster`` instance.
_batches: str or list of str
Views are only added to ``qp.Links`` that are defined in these
``qp.Batch`` instances.
Returns
-------
None
The stack instance is modified inplace.
"""
def _netdef_from_map(net_map, expand, prefix, text_key):
netdef = []
for no, net in enumerate(net_map, start=1):
if 'text' in net:
logic = net[no]
text = net['text']
else:
logic = net.values()[0]
text = {t: net.keys()[0] for t in text_key}
if not isinstance(logic, list) and isinstance(logic, int):
logic = [logic]
if prefix and not expand:
text = {k: '{} {}'.format(prefix, v) for k, v in text.items()}
if expand:
text = {k: '{} (NET)'.format(v) for k, v in text.items()}
netdef.append({'net_{}'.format(no): logic, 'text': text})
return netdef
def _check_and_update_calc(calc_expression, text_key):
if not isinstance(calc_expression, dict):
err_msg = ("'calc' must be a dict in form of\n"
"{'calculation label': (net # 1, operator, net # 2)}")
raise TypeError(err_msg)
for k, v in calc_expression.items():
if not k in ['text', 'calc_only']: exp = v
if not k == 'calc_only': text = v
if not 'text' in calc_expression:
text = {tk: text for tk in text_key}
calc_expression['text'] = text
if not isinstance(exp, (tuple, list)) or len(exp) != 3:
err_msg = ("Not properly formed expression found in 'calc':\n"
"{}\nMust be provided as (net # 1, operator, net # 2)")
raise TypeError(err_msg.format(exp))
return calc_expression
for dk in self.keys():
_batches = self._check_batches(dk, _batches)
if not _batches: return None
meta = self[dk].meta
data = self[dk].data
for v in on_vars:
if v in meta['sets']:
items = [i.split('@')[-1] for i in meta['sets'][v]['items']]
on_vars = list(set(on_vars + items))
all_batches = copy.deepcopy(meta['sets']['batches'])
for n, b in all_batches.items():
if not n in _batches: all_batches.pop(n)
languages = list(set(b['language'] for n, b in all_batches.items()))
netdef = _netdef_from_map(net_map, expand, text_prefix, languages)
if calc: calc = _check_and_update_calc(calc, languages)
view = qp.ViewMapper()
view.make_template('frequency', {'rel_to': [None, 'y']})
options = {'logic': netdef,
'axis': 'x',
'expand': expand if expand in ['after', 'before'] else None,
'complete': True if expand else False,
'calc': calc}
view.add_method('net', kwargs=options)
self.aggregate(view, False, [], _batches, on_vars, verbose)
if checking_cluster is not None:
c_vars = {v: '{}_net_check'.format(v) for v in on_vars
if not v in meta['sets'] and
not '{}_net_check'.format(v) in checking_cluster.keys()}
view['net_check'] = view.pop('net')
view['net_check']['kwargs']['iterators'].pop('rel_to')
for k, net in c_vars.items():
checking_cluster = self._add_checking_chain(dk, checking_cluster,
net, k, ['@', k], ('net', ['cbase'], view))
return None
@modify(to_list=['on_vars', 'stats', 'exclude', '_batches'])
def add_stats(self, on_vars, stats=['mean'], other_source=None, rescale=None,
drop=True, exclude=None, factor_labels=True, custom_text=None,
checking_cluster=None, _batches='all', verbose=True):
"""
Add a descriptives view to a specified collection of xks of the stack.
Valid descriptives views: {'mean', 'stddev', 'min', 'max', 'median', 'sem'}
Parameters
----------
on_vars : list
The list of x variables to add the view to.
stats : list of str, default ``['mean']``
The metrics to compute and add as a view.
other_source : str
If provided the Link's x-axis variable will be swapped with the
(numerical) variable provided. This can be used to attach statistics
of a different variable to a Link definition.
rescale : dict
A dict that maps old to new codes, e.g. {1: 5, 2: 4, 3: 3, 4: 2, 5: 1}
drop : bool, default True
If ``rescale`` is provided all codes that are not mapped will be
ignored in the computation.
exclude : list
Codes/values to ignore in the computation.
factor_labels : bool, default True
If True, will write the (rescaled) factor values next to the
category text label.
custom_text : str, default None
A custom string affix to put at the end of the requested statistics'
names.
checking_cluster : quantipy.Cluster, default None
When provided, an automated checking aggregation will be added to the
``Cluster`` instance.
_batches: str or list of str
Views are only added to ``qp.Links`` that are defined in these
``qp.Batch`` instances.
Returns
-------
None
The stack instance is modified inplace.
"""
def _factor_labs(values, rescale, drop, exclude, axis=['x']):
if not rescale: rescale = {}
ignore = [v['value'] for v in values if v['value'] in exclude or
(not v['value'] in rescale.keys() and drop)]
factors_mapped = {}
for v in values:
if v['value'] in ignore: continue
has_xedits = v['text'].get('x edits', {})
has_yedits = v['text'].get('y edits', {})
if not has_xedits: v['text']['x edits'] = {}
if not has_yedits: v['text']['y edits'] = {}
factor = rescale[v['value']] if rescale else v['value']
for tk, text in v['text'].items():
if tk in ['x edits', 'y edits']: continue
for ax in axis:
try:
t = v['text']['{} edits'.format(ax)][tk]
except:
t = text
new_lab = '{} [{}]'.format(t, factor)
v['text']['{} edits'.format(ax)][tk] = new_lab
return values
if other_source and not isinstance(other_source, str):
raise ValueError("'other_source' must be a str!")
if not rescale: drop = False
options = {'stats': '',
'source': other_source,
'rescale': rescale,
'drop': drop, 'exclude': exclude,
'axis': 'x',
'text': '' if not custom_text else custom_text}
for dk in self.keys():
_batches = self._check_batches(dk, _batches)
if not _batches: return None
meta = self[dk].meta
data = self[dk].data
check_on = []
for v in on_vars:
if v in meta['sets']:
items = [i.split('@')[-1] for i in meta['sets'][v]['items']]
on_vars = list(set(on_vars + items))
check_on = list(set(check_on + [items[0]]))
elif not meta['columns'][v].get('values'):
continue
elif not isinstance(meta['columns'][v]['values'], list):
parent = meta['columns'][v]['parent'].keys()[0].split('@')[-1]
items = [i.split('@')[-1] for i in meta['sets'][parent]['items']]
check_on = list(set(check_on + [items[0]]))
else:
check_on = list(set(check_on + [v]))
view = qp.ViewMapper()
view.make_template('descriptives')
for stat in stats:
options['stats'] = stat
view.add_method('stat', kwargs=options)
self.aggregate(view, False, on_vars, _batches, on_vars, verbose)
if checking_cluster and 'mean' in stats and check_on:
options['stats'] = 'mean'
c_view = qp.ViewMapper().make_template('descriptives')
c_view.add_method('stat_check', kwargs=options)
views = ('stat', ['cbase', 'counts'], c_view)
checking_cluster = self._add_checking_chain(dk, checking_cluster,
'stat_check', check_on, ['@'], views)
if not factor_labels or other_source: return None
all_batches = meta['sets']['batches'].keys()
if not _batches: _batches = all_batches
batches = [b for b in all_batches if b in _batches]
for v in check_on:
globally = False
for b in batches:
batch_me = meta['sets']['batches'][b]['meta_edits']
values = batch_me.get(v, {}).get('values', [])
if not values:
globally = True
elif not isinstance(values, list):
p = values.split('@')[-1]
values = batch_me['lib'][p]
batch_me['lib'][p] = _factor_labs(values, rescale, drop,
exclude, ['x', 'y'])
else:
batch_me[v]['values'] = _factor_labs(values, rescale, drop,
exclude, ['x'])
if globally:
values = meta['columns'][v]['values']
if not isinstance(values, list):
p = values.split('@')[-1]
values = meta['lib']['values'][p]
meta['lib']['values'][p] = _factor_labs(values, rescale, drop,
exclude, ['x', 'y'])
else:
meta['columns'][v]['values'] = _factor_labs(values, rescale,
drop, exclude, ['x'])
return None
@modify(to_list=['_batches'])
def add_tests(self, _batches='all', verbose=True):
"""
Apply coltests for selected batches.
Sig. Levels are taken from ``qp.Batch`` definitions.
Parameters
----------
_batches: str or list of str
Views are only added to ``qp.Links`` that are defined in these
``qp.Batch`` instances.
Returns
-------
None
"""
self._remove_coltests()
if verbose:
start = time.time()
for dk in self.keys():
_batches = self._check_batches(dk, _batches)
if not _batches: return None
for batch_name in _batches:
batch = self[dk].meta['sets']['batches'][batch_name]
levels = batch['siglevels']
weight = batch['weights']
x_y = batch['x_y_map']
x_f = batch['x_filter_map']
f = batch['filter']
yks = batch['yks']
if levels:
vm_tests = qp.ViewMapper().make_template(
method='coltests',
iterators={'metric': ['props', 'means'],
'mimic': ['Dim'],
'level': levels})
vm_tests.add_method('significance',
kwargs = {'flag_bases': [30, 100]})
if 'y_on_y' in batch:
self.add_link(filters=f, x=yks[1:], y=yks,
views=vm_tests, weights=weight)
total_len = len(x_y.keys())
for idx, x in enumerate(x_y.keys(), 1):
if x == '@': continue
self.add_link(filters=x_f[x], x=x, y=x_y[x],
views=vm_tests, weights=weight)
if verbose:
done = float(idx) / float(total_len) *100
print '\r',
time.sleep(0.01)
print 'Batch [{}]: {} %'.format(batch_name, round(done, 1)),
sys.stdout.flush()
if verbose: print '\n'
if verbose: print 'Sig-Tests:', time.time()-start
return None
def _remove_coltests(self, props=True, means=True):
"""
Remove coltests from stack.
Parameters
----------
props : bool, default=True
If True, column proportion test view will be removed from stack.
means : bool, default=True
If True, column mean test view will be removed from stack.
"""
for dk in self.keys():
for fk in self[dk].keys():
for xk in self[dk][fk].keys():
for yk in self[dk][fk][xk].keys():
for vk in self[dk][fk][xk][yk].keys():
del_prop = props and 't.props' in vk
del_mean = means and 't.means' in vk
if del_prop or del_mean:
del self[dk][fk][xk][yk][vk]
return None
|
Designing and Managing Client/Server DBMSs Client/server systems have evolved from simple graphical front ends for relational databases to object-oriented and distributed databases. This article discusses the merits of the emerging technologies of the new generation of client/server systems (three-tier architectures, object-oriented databases, object request broker systems, and replication) and the design and management issues in developing applications for such systems.
|
Theatre Scholarship and Technology: A Look at the Future of the Discipline In 1956 a small group of scholars, all of them engaged in some way or other in the study of theatre, past and present, met and founded a certain remarkable organization. I hardly need to explain that that organization was the American Society for Theatre Research, then and still the only organization of American scholars devoted entirely to promoting the cause, the activity, and the results of research in theatre. Nor is it necessary here for me to recapitulate, even briefly, the course of events that now sees us celebrating the twenty-fifth anniversary of the Society. Fortunately, one of those founding members, Thomas Marshall, has generously accepted the charge offered him a year ago by the ASTR Executive Committee to write a concise history of the Society from the time of its founding. That review of the first twenty-five years has just come off the press, nicely timed for distribution to all members in attendance at this meeting. Subsequently it is to appear also in the twenty-fifth anniversary issue of the Society's journal, Theatre Survey.
|
// kittylyst/ocelotvm
package ocelot;
import org.junit.BeforeClass;
import org.junit.Test;
import static ocelot.Opcode.*;
import static org.junit.Assert.*;
import org.junit.Ignore;
/**
*
* @author ben
*/
public class TestIntArithmetic {
private static InterpMain im;
@BeforeClass
public static void setup() {
im = new InterpMain();
}
// General form of a simple test case should be:
//
// 1. Set up a byte array of the opcodes to test
// 1.a Ensure that this ends with an opcode from the RETURN family
// 2. Pass to an InterpMain instance
// 3. Look at the return value
@Test
public void int_divide_works() {
byte[] buf = {ICONST_5.B(), ICONST_3.B(), IDIV.B(), IRETURN.B()};
JVMValue res = im.execMethod("", "main:()V", buf, new InterpLocalVars());
assertEquals("Return type is int", JVMType.I, res.type);
assertEquals("Return value should be 1", 1, (int) res.value);
byte[] buf1 = {ICONST_2.B(), ICONST_2.B(), IDIV.B(), IRETURN.B()};
res = im.execMethod("", "main:()V", buf1, new InterpLocalVars());
assertEquals("Return type is int", JVMType.I, res.type);
assertEquals("Return value should be 1", 1, (int) res.value);
byte[] buf2 = {BIPUSH.B(), (byte)17, BIPUSH.B(), (byte)5, IREM.B(), IRETURN.B()};
res = im.execMethod("", "main:()V", buf2, new InterpLocalVars());
assertEquals("Return type is int", JVMType.I, res.type);
assertEquals("Return value should be 2", 2, (int) res.value);
}
@Test
public void int_arithmetic_works() {
byte[] buf = {ICONST_1.B(), ICONST_1.B(), IADD.B(), IRETURN.B()};
JVMValue res = im.execMethod("", "main:()V", buf, new InterpLocalVars());
assertEquals("Return type should be int", JVMType.I, res.type);
assertEquals("Return value should be 2", 2, (int) res.value);
byte[] buf2 = {ICONST_1.B(), ICONST_M1.B(), IADD.B(), IRETURN.B()};
res = im.execMethod("", "main:()V", buf2, new InterpLocalVars());
assertEquals("Return type should be int", JVMType.I, res.type);
assertEquals("Return value should be 0", 0, (int) res.value);
byte[] buf3 = {ICONST_2.B(), ICONST_M1.B(), IMUL.B(), IRETURN.B()};
res = im.execMethod("", "main:()V", buf3, new InterpLocalVars());
assertEquals("Return type should be int", JVMType.I, res.type);
assertEquals("Return value should be -2", -2, (int) res.value);
}
@Test
public void iconst_store_iinc_load() {
byte[] buf = {ICONST_1.B(), ISTORE.B(), (byte) 1, IINC.B(), (byte) 1, (byte) 1, ILOAD.B(), (byte) 1, IRETURN.B()};
JVMValue res = im.execMethod("", "main:()V", buf, new InterpLocalVars());
assertEquals("Return type should be int", JVMType.I, res.type);
assertEquals("Return value should be 2", 2, (int) res.value);
}
@Test
public void iconst_dup() {
byte[] buf = {ICONST_1.B(), DUP.B(), IADD.B(), IRETURN.B()};
JVMValue res = im.execMethod("", "main:()V", buf, new InterpLocalVars());
assertEquals("Return type should be int", JVMType.I, res.type);
assertEquals("Return value should be 2", 2, (int) res.value);
byte[] buf2 = {ICONST_1.B(), DUP.B(), IADD.B(), DUP.B(), IADD.B(), IRETURN.B()};
res = im.execMethod("", "main:()V", buf2, new InterpLocalVars());
assertEquals("Return type should be int", JVMType.I, res.type);
assertEquals("Return value should be 4", 4, (int) res.value);
}
@Test
public void iconst_dup_nop_pop() {
byte[] buf = {ICONST_1.B(), DUP.B(), NOP.B(), POP.B(), IRETURN.B()};
JVMValue res = im.execMethod("", "main:()V", buf, new InterpLocalVars());
assertEquals("Return type should be int", JVMType.I, res.type);
assertEquals("Return value should be 1", 1, (int) res.value);
byte[] buf2 = {ICONST_1.B(), DUP.B(), NOP.B(), POP.B(), POP.B(), RETURN.B()};
res = im.execMethod("", "main:()V", buf2, new InterpLocalVars());
assertNull("Return should be null", res);
}
@Test
public void iconst_dup_x1() {
byte[] buf = {ICONST_1.B(), ICONST_2.B(), DUP_X1.B(), IADD.B(), IADD.B(), IRETURN.B()};
JVMValue res = im.execMethod("", "main:()V", buf, new InterpLocalVars());
assertEquals("Return type should be int", JVMType.I, res.type);
assertEquals("Return value should be 5", 5, (int) res.value);
byte[] buf2 = {ICONST_1.B(), ICONST_2.B(), DUP_X1.B(), IADD.B(), DUP_X1.B(), IADD.B(), IADD.B(), IRETURN.B()};
res = im.execMethod("", "main:()V", buf2, new InterpLocalVars());
assertEquals("Return type should be int", JVMType.I, res.type);
assertEquals("Return value should be 8", 8, (int) res.value);
}
@Test
@Ignore
public void TestIntIfEqPrim() {
byte[] buffy = {ICONST_1.B(), ICONST_1.B(), IADD.B(), ICONST_2.B(), IF_ICMPEQ.B(), (byte) 0, (byte) 11, ICONST_4.B(), GOTO.B(), (byte) 0, (byte) 12, ICONST_3.B(), IRETURN.B()};
JVMValue res = im.execMethod("", "main:()V", buffy, new InterpLocalVars());
assertEquals(JVMType.I, res.type);
assertEquals(2, ((int) res.value));
byte[] buffy2 = {ICONST_1.B(), ICONST_1.B(), IADD.B(), ICONST_3.B(), IF_ICMPEQ.B(), (byte) 0, (byte) 11, ICONST_4.B(), GOTO.B(), (byte) 0, (byte) 12, ICONST_3.B(), IRETURN.B()};
res = im.execMethod("", "main:()V", buffy2, new InterpLocalVars());
assertEquals(JVMType.I, res.type);
assertEquals(2, ((int) res.value));
}
}
|
Visual exploratory behaviour in infancy and novelty seeking in adolescence: two developmentally specific phenotypes of DRD4? BACKGROUND The present study was designed to investigate the association between visual exploratory behaviour in early infancy, novelty seeking in adolescence, and the dopamine D4 receptor (DRD4) genotype. METHODS Visual attention was measured in 232 three-month-old infants (114 males, 118 females) from a prospective longitudinal study using a habituation-dishabituation paradigm. At age 15 years, the Junior Temperament and Character Inventory (JTCI/12-18) was administered to assess adolescent novelty seeking. DNA was genotyped for the DRD4 exon III polymorphism. RESULTS Boys with a higher decrement of visual attention during repeated stimulation in infancy displayed significantly higher JTCI novelty seeking at age 15 years. Furthermore, boys carrying the 7r allele of DRD4 exhibited both greater rates of attention decrement in infancy and higher scores on novelty seeking in adolescence. In contrast, no association between DRD4, visual attention and novelty seeking was observed in girls. CONCLUSIONS The present investigation provides further evidence supporting a role of DRD4 in novelty seeking during the course of development.
|
Russian Industry in Global Value-Added Chains

The paper analyzes Russia's participation in global value-added chains in the context of global trends in their development and the challenges facing the Russian economy in modernizing its industry. The analysis is based on the OECD's TiVA indicators. The results indicate that the extent of Russia's involvement in GVACs is very significant, but that this participation is almost purely raw-material based. The specificity of the Russian Federation's participation in the international fragmentation of production within GVACs is that most of the links it occupies lie at the bottom of value chains. Russia makes only very limited use of imported flows to create export products with high added value. The study confirms that in the international division of labor Russia retains its historically established specialization, with a predominance of mineral and agricultural raw materials in exports, which determines the current profile of Russia's participation in global value-added chains.

Introduction

At present, an industrial policy aimed at modernizing national industries and producing competitive products is being implemented in Russia. It involves the technological re-equipment of the sectors of the economy based on a high rate of renewal of fixed assets, growth in the innovative activity of enterprises, the introduction of new technologies and advanced methods of management and work, large-scale investment, and the development of human capital. The organization of the production process has undergone significant changes in the last two to three decades. Today, production goes beyond national boundaries, being split into specialized operations and distributed among the links of global value-added chains (GVACs). Involvement in such GVACs has long since become the modern way for countries to participate in the international division of labor, and it introduces fundamental changes into national economic strategies.
The value-added chain approach allows for a deeper exploration of the interaction between different sectors of the economies of different countries, identifying trends and opportunities for modernization, identifying barriers to economic development and potential risks, and developing public-policy recommendations to eliminate them. Despite the huge number of publications on the development of GVACs in the world and in individual countries, the issues of Russia's participation in global value chains remain largely blind spots. To date, there are very few publications on the problems of Russia's participation in GVACs and on the possibility of modernizing the economy from the perspective of the value-added approach, while the ongoing modernization policy requires a deeper understanding of Russia's role in global value-added chains in order to identify opportunities for modernizing the Russian economy and its branches and to develop, on that basis, recommendations for improving public-policy measures. In this connection, an analysis of the degree and quality of the Russian economy's involvement in global value-added chains is highly relevant and timely, which determined the purpose of this study.

Literature review

The analysis of countries' participation in global value-added chains significantly changes the understanding of competitiveness. In the context of value-added chains, competitiveness includes not only the competitiveness of a firm, industry or economy as such, but also the "competitiveness" of their place in the chain. Institutional context significantly impacts the international behavior of firms by facilitating or restricting internationalization processes.
In the context of country participation in GVACs, modernization is defined as improving the ability of a firm, industry or economy as a whole to move to more complex and profitable economic niches based on the use of higher-skilled labor, or as a transition from economic activity with low added value to economic activity with higher added value based on internal innovation resources and capabilities and on continuous improvement of processes, products and functions. The concept of modernization within the value-chain approach rests on the idea that what matters for a country's development is moving to higher value-added chains, or to higher value-added links within the same chain (Gereffi and Kaplinsky, 2001). Within this framework, the opportunities for modernizing economies and their sectors are considered both in terms of creating high added value and in terms of its redistribution within GVACs. Humphrey and Schmitz point to several types of modernization, representing different "niches". First, modernization within a link of the chain: process upgrading (increasing the efficiency of production processes through the reorganization of the production system and the use of advanced technologies) or product upgrading (shifting to more complex products or products with higher added value). Second, functional upgrading, implying a transition to a more profitable link in the chain. Third, chain upgrading: a transition to more profitable value chains. The first two types concern upgrading within a link of the value chain, the next involves moving along the value chain towards more profitable links, and the last involves moving to a new value chain.
Factors contributing to the emergence and spread of GVACs are analyzed in the works of Kee and Tang and of Mose and Sorescu; factors reducing the use of offshoring in developing economies and the return of production to developed economies are shown in a study by De Backer et al.; the impact of revolutionary advances in information and communication technologies (ICTs) on the development of production fragmentation and GVACs is highlighted in the works of Baldwin, Strange and Zucchella, and Fyodorov and Kuzmin; the involvement of the service sector in international fragmentation is shown in Miroudot; the impact of countries' participation in GVACs on economic growth in Baldwin and Yan, Keller and Yeaple, Ataseven and Nair, Chang et al., and Smorodinskaya and Katukov; the negative consequences of offshoring for the economies of developed countries, in the form of job cuts and lower wages for low-skilled workers, in Geishecker; and the unforeseen consequences of sanctions policy, in the form of trade losses for countries not participating in the sanctions war (Israel and Switzerland), are disclosed in the work of Sanandaji and Avorin, for the Pacific Region in the work of Shakhovskaya et al., and for a Russian region in the work of Chernova et al. Nevertheless, the processes of industrial modernization in countries with transitional economies, such as the Russian Federation, have not been sufficiently studied from the standpoint of participation in global value-added chains. Volchkova and Turdyeva believe that Russia is weakly involved in the international division of labor and in global value-added chains, while deep integration into the world economy would help it avoid sanctions, because the interests of business always outstrip politics. Smorodinskaya and Katukov hold a different opinion on Russia's involvement in GVACs.
Their research shows that Russia's economy is characterized by a degree of participation in GVACs that exceeds the world average, but emphasizes the primitive nature of Russia's participation in the international division of labor and the mineral-wealth background of this participation. Materials and Methods The analysis of Russia's participation in GVACs is based on the TiVA (Trade in Value Added) indicators published by the OECD (OECD, 2017a; 2017b). Because official statistics of the countries of the world are usually available with a delay of two to three years after the reporting period, TiVA indicators are fully represented by the OECD for the period from 1995 to 2011 inclusive. TiVA indicators for 2012-2014 are represented by a shortened nomenclature and are partly predictive, which does not exclude a certain inaccuracy but in no way reduces their significance for analysis (OECD, 2017c). The study was based on the following indicators. Trade in value added is a statistical approach used to evaluate the sources of added value in the production of goods and services for exports and imports by country and industry. Unlike traditional methods of measuring international trade, which register gross flows of goods and services each time they cross borders, this approach tracks the value added by each industry and country in the production chain. In the period from 2000 to 2011, the rate of growth of value added in the Russian Federation was the highest in the world (7.13%), outpacing the growth of gross exports of final products (OECD, 2016a). 
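The distinction between gross exports and trade in value added can be made concrete with a toy decomposition. All numbers below are invented for illustration and are not taken from the TiVA database:

```python
# Toy decomposition of gross exports into domestic and foreign value added,
# illustrating the TiVA idea: gross trade flows double-count intermediates,
# while value-added accounting attributes each dollar to the economy where
# it was created. Figures are illustrative only.

def decompose_exports(gross_exports, foreign_inputs):
    """Split gross exports into domestic and foreign value-added content."""
    domestic_va = gross_exports - foreign_inputs
    return {
        "gross_exports": gross_exports,
        "domestic_va": domestic_va,
        "foreign_va": foreign_inputs,
        "foreign_va_share": foreign_inputs / gross_exports,
    }

# A country exports $100bn of goods, of which $14bn is imported intermediates
# (roughly the 13.5-14% foreign share cited for Russia later in the text).
result = decompose_exports(100.0, 14.0)
print(result["domestic_va"])       # 86.0
print(result["foreign_va_share"])  # 0.14
```

Gross trade statistics would record the full $100bn as the country's export, while the value-added view credits only $86bn to the exporting economy.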
The share of foreign VA (value added) in gross exports grew most significantly in some developing countries (India +12.7%, Vietnam +9.1%), in the Asia-Pacific (South Korea +11.9%, Taiwan (PRC) +11.3%, Japan +7.3%), and in some EU countries (Germany +5.5%, the UK +4.9%); in Russia, as in the other BRICS countries with the exception of India and South Africa, it significantly decreased: in Russia by 4.5%, in China by 3.8%, in Brazil by 0.7% (OECD, 2016a). The re-importation of national VA in Russia's gross exports, which in 2000 was well below that of most developed countries, grew more than 5 times by 2011 but remains low ($1,678 billion). Meanwhile, in the BRICS countries this indicator has grown significantly: in India by more than 45 times (to $469.7 billion in 2011), in China by almost 25 times ($18,912.2 billion in 2011), in Brazil by 12.39 times ($192.0 billion in 2011), which largely reflects a "low base" effect. In the EU countries, re-importation of national VA, previously already at a high level, almost quadrupled in Germany. In 2011, the largest re-importers of their own VA were China ($18,912.2 billion), Germany ($14,524.7 billion) and the United States ($13,705.1 billion) (OECD, 2016a). In the re-export of intermediate imports, China leads, followed by Germany, the United States, and the Asia-Pacific countries (South Korea, Japan, Taiwan). In 2011, the Russian Federation was among the ten largest re-exporters of intermediate imports ($80,515.8 billion), which indicates its rather high involvement in the international fragmentation of production in this period (OECD, 2016a). The analysis of trade in value added indicators by the reduced nomenclature (TiVA) for the period from 2012 to 2014 shows a decline in the Russian VA in gross exports (Fig. 
1) against the background of growth of this indicator in most developed countries (with the exception of Japan, where it declined by 11%) and in some BRICS countries (in Brazil and South Africa it decreased by 7% and 8%, respectively). The most significant growth of VA in gross exports was recorded in Vietnam (+26%), China (+13%), the developed EU countries (Germany and the UK, +10%) and the USA (+10%) (Fig. 1). In 2014, compared to 2012, Russia's added value in foreign final demand declined (by 6%) due to a drop in the export of Russian products to foreign markets, against the backdrop of noticeable growth in several foreign countries: by 9-10% in the developed EU countries (the UK, Germany, France) and the USA. VA in foreign final demand increased significantly in Vietnam (+26%) and China (+13%). The reduction in foreign value added in Russia's domestic final demand in 2012-2014 amounted to 11.47%, while in the rapidly growing economies of Southeast Asia and Vietnam this indicator grew significantly (+28% and +11%, respectively). The leading economies of Western Europe and North America also demonstrated growing involvement in GVACs: the UK +9%, Germany +8%, France +6%, the USA +4%. Due to lower imports, foreign value added in Russia's gross exports also fell (by 2.5%); its share remains relatively low, at 13.5-14%, whereas in developed European countries it exceeds 25%, in the USA it stays slightly above 15%, in China it is about 30%, and in Korea and Vietnam it exceeds 37% and 36%, respectively. The leader in growth of foreign value added in gross exports was Vietnam (+35%), followed by Japan (+14%), the large EU economies (Germany, France) and the USA. 
Re-exported intermediate imports, as a percentage of intermediate imports, remain stable in Russia at just above 30%, which is below the European countries (Germany 52.62%, France 42.92%, the UK 36.9%), Korea (52.69%), China (45.45%) and Vietnam (60.8%), whose exports are highly dependent on re-exported imports from foreign countries, but above the USA and Japan (about 23%). The generalizing indicator of countries' participation in GVACs is the participation index. It consists of two components, reflecting the top-down and bottom-up links in the chain. Individual economies participate in global value-creation chains by importing foreign materials to produce the goods and services that they export (reverse, or top-down, participation) and by exporting their intermediate goods and services to partners who export them onward (forward, or upward, participation). Forward participation in GVACs corresponds to the indicator "domestic value added directed to the third economy": the domestic value added contained in exports to third countries for further processing along the value-added chain; it characterizes the country's role in GVACs as a seller. Reverse participation in GVACs corresponds to the foreign value-added content of exports, when the economy imports intermediate products in order to export (the role of a buyer in GVACs). With a participation index of 51.8 in 2011, the Russian Federation exceeded the developed countries of Europe (Germany 49.6, France 47, the UK 47.6) and was significantly ahead of the USA (39.8) and even China (47.7), with almost a threefold excess of forward (bottom-up) over reverse participation, while China's and Korea's participation in GVACs is concentrated in the top-down links (Fig. 2). The share of Russian value added driven by global final demand, which has long remained below the 30% mark, differs significantly by sector (Figure 3). 
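The participation index described above can be sketched numerically. The decomposition below uses the standard two-component definition (forward plus backward shares of gross exports); the input numbers are invented for illustration, chosen only to be roughly comparable to the Russian figures cited in the text:

```python
# Sketch of the GVAC participation index: the sum of forward participation
# (domestic value added embodied in partners' exports, DVX) and reverse or
# backward participation (foreign value added in own exports, FVA), each
# expressed as a share of gross exports. Input numbers are illustrative.

def participation_index(dvx, fva, gross_exports):
    forward = 100.0 * dvx / gross_exports
    backward = 100.0 * fva / gross_exports
    return forward + backward, forward, backward

# An economy whose forward links dominate (the Russian pattern per the text):
total, fwd, back = participation_index(dvx=38.0, fva=14.0, gross_exports=100.0)
print(total)       # 52.0 -- comparable in scale to the 51.8 cited for Russia
print(fwd / back)  # ~2.7 -- roughly the "almost threefold excess" noted above
```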
The largest share of Russian value added due to world final demand was recorded in industrial metals (82.78%) and mining (79.34%), the smallest share in education, real estate, and healthcare. In agricultural products and in food production, the share of Russian value added is low: 11.35% and 9.99%, respectively, which again indicates the mineral-wealth orientation of Russian exports. The foreign content of Russian exports is most noticeable in machine building ("cars, trailers and semi-trailers", "electrical machines and equipment", "other transport equipment") and is almost absent in education, healthcare, and real estate operations (Figure 4). Source: Compiled by the authors on the basis of OECD (2016b). By the contribution of partner countries to Russian gross exports by value added, the largest share belongs to China (8.7%), followed by Germany (6.46%), Japan (5.53%), Italy (5.16%) and the USA (5.13%). In the domestic final demand of the Russian Federation, a significant share of the added value belongs to China, the USA, and Germany, and for all countries it was to some extent reduced in 2014 against the backdrop of anti-Russian sanctions and Russian response measures (Table 1). On the basis of gross indicators, the three most important export markets of Russia were China (8.2%), Germany (7.7%) and the USA (7.5%); an analysis of export directions by value added adjusts the distribution of trading partners and swaps the USA and China: the USA (10.7%), Germany (8.1%) and China (7.6%). Similarly, China, Germany and the USA are the top three importing partners on the basis of gross indicators, although, on a value-added basis, the difference between China and the USA is significantly smaller than in the gross measurement, which partly reflects the higher domestic value-added content of US exports. 
In the trade in intermediate goods, the exports of the Russian Federation are directed to China, Germany, Italy, and the USA, with a significant increase in China's share (OECD-WTO, 2015). Discussion The participation of countries in global value-added chains is a modern form of the international division of labor and of the emergence of national producers on global markets. In this connection, the degree and quality of countries' participation in global value-added chains becomes a major challenge both for developing and transition economies and for the economies of developed countries (Smorodinskaya and Katukov, 2017). As the analysis shows, the opinion of Volchkova on the weak involvement of the Russian economy in the global economy (Volchkova and Turdyeva, 2016) is doubly mistaken: first, Russia is deeply involved in GVACs, although the raw-materials nature of this involvement leaves much to be desired; second, there is no doubt that the "too visible hand of politics" has in recent years begun to exert a decisive influence on economic policy, ignoring the interests of business. The analysis of Russia's participation in GVACs on the basis of the value-added approach on the whole confirms the results obtained in the study by Smorodinskaya and Katukov. Currently, despite the significant scale of Russia's participation in GVACs, the effectiveness of this participation does not correspond to the potential of the Russian economy or to the national tasks of industrial modernization. And although the analysis showed that the extent of Russia's involvement in GVACs is very significant, the nature of this participation is tied almost entirely to mineral wealth. 
The specificity of the Russian Federation's participation in the international fragmentation of production within GVACs is that most of the links in the value-added chains in which Russia is involved, including metallurgy, mining and chemical industries, telecommunications, etc., are upward links, meaning that foreign countries use goods exported from Russia as raw materials or components in their own production. Similar results were obtained in (Smorodinskaya and Katukov, 2017) for the period up to 2011, where it was shown that the domestic value added introduced by Russia into global chains is formed, by more than 80%, by exports of mineral wealth and other intermediate goods that are widely used by other countries to make products with a high degree of processing. Russia's leading position in the exports of domestic value added is ensured by exports of primary commodities. As in 2011, the key positions in the structure of exports of domestic value added are taken by the mining industry (79.34%) and basic metals (82.78%). At the same time, Russia itself makes extremely limited use of imported flows to create export products with high value added. This specialization prevents Russia from moving up the links of value chains (Smorodinskaya and Katukov, 2017). The resources exported from Russia return to the Russian economy in the form of finished goods with an appropriate markup, which is further exacerbated by existing tariff and non-tariff trade restrictions (Meshkova and Moiseechev, 2015). In particular, Russia leads by the number of trade barriers (36 trade barriers), followed by China. The analysis shows that in the international division of labor Russia retains its historically developed specialization, with a predominance of mineral wealth, metals, fertilizers and agricultural raw materials in its exports, which determines the current profile of Russia's participation in GVACs. 
And although Russia's global competitiveness index reached a record level of 4.64 points in 2018, which allowed it to take 38th place out of 137 countries, it still lags far behind most developed countries and fast-growing economies. Russia's current position in GVACs does not provide it with the possible long-term benefits of such participation and is inconsistent with the medium- and long-term tasks of scientific, technological and innovation development fixed in key strategic and policy documents. The analysis of the TiVA database confirms that Russia faces a serious challenge in improving its economic modernization strategy in light of the new understanding of global trade processes. Conclusion The need to identify opportunities for modernizing the economy and its industries, and to develop related recommendations for improving the governmental industrial policy of the Russian Federation, requires a deeper understanding of the country's role in GVACs. The analysis based on the value-added approach showed that the mineral-wealth nature of Russia's participation in GVACs does not contribute to the successful solution of the tasks of modernizing Russian industry. The mineral-wealth specialization of the Russian economy in the global division of labor, with the bottom-up location of the Russian links of the chain, hinders the renovation and development of manufacturing industries. Russian minerals and materials exported for processing to other countries are returned to the Russian economy in the form of finished products. Russia's existing position in global chains prevents it from obtaining long-term economic benefits from such participation and does not correspond to the tasks of scientific, technological and innovation development facing Russian industry. 
The authors see the possibility of accelerating the growth of the Russian economy and modernizing its industry in export-oriented import substitution, intended to "support and expand the import of consumer goods in the real sector of the Russian economy".
|
// DeleteKey removes key and associated value from storage
func (d *shardedKvMap) DeleteKey(key string) {
shard := getShard(key)
d.mutexes[shard].Lock()
defer d.mutexes[shard].Unlock()
delete(d.values[shard], key)
}
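The Go fragment above omits the definitions of `shardedKvMap` and `getShard`. The lock-striping pattern it relies on can be sketched as follows (in Python for brevity; the shard count and hash choice are assumptions, not taken from the source):

```python
# Minimal sketch of a sharded (lock-striped) key/value map, mirroring the
# Go DeleteKey above: the key is hashed to pick a shard, and only that
# shard's lock is taken, so operations on different shards do not contend.
import threading
import zlib

N_SHARDS = 16  # assumed shard count

class ShardedKvMap:
    def __init__(self):
        self._locks = [threading.Lock() for _ in range(N_SHARDS)]
        self._values = [{} for _ in range(N_SHARDS)]

    def _shard(self, key: str) -> int:
        # Any stable hash works; crc32 is used here for determinism.
        return zlib.crc32(key.encode()) % N_SHARDS

    def set(self, key: str, value) -> None:
        i = self._shard(key)
        with self._locks[i]:
            self._values[i][key] = value

    def delete_key(self, key: str) -> None:
        # Counterpart of the Go DeleteKey: lock one shard, delete, unlock.
        i = self._shard(key)
        with self._locks[i]:
            self._values[i].pop(key, None)

    def get(self, key: str):
        i = self._shard(key)
        with self._locks[i]:
            return self._values[i].get(key)

m = ShardedKvMap()
m.set("a", 1)
m.delete_key("a")
print(m.get("a"))  # None
```

As in the Go version, deleting an absent key is a harmless no-op, and readers and writers of different shards never block each other.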
|
Re: Edwin S. Rubenstein's Column, "National Data: November's Job Numbers: Good for Immigrants; Bad for the Rest of Us"
I am a low-income African-American who is a frequent reader of VDARE.COM.
Regarding Mr. Rubenstein's 12/02/05 article, I just wish more of our so-called Black leaders in Washington would see the statistics the way he does.
Maybe then we could convince America that the problem of out-of-control immigration doesn't just concern "White People".
|
Feature Identification for Diagnosing Misalignment under the Influence of Parameter Variation Misalignment as a result of improper adjustment, heat expansion, and vibration can lead to damage and unexpected downtime of electric motors and their processes. In order to recognize misalignment during operation, or as a simple warning after maintenance, motor current signature analysis (MCSA) can be applied. However, previous studies have shown that MCSA fails under load variation and is unable to distinguish faults. At the same time, more sophisticated approaches like deep learning rely on opaque decision-making processes. Valid features for diagnosing misalignment under the influence of load and motor size variation are unknown. Machine learning algorithms are able to search for valid feature sets. The findings of this paper show that even under load and motor size variation, features for diagnosis can be found. In addition, redundant feature sets with similar results are available and deliver better results than the use of MCSA. The valid features identified in this study help to implement and improve technical diagnosis.
|
Protective effect of Paeonia anomala extracts and constituents against tert-butylhydroperoxide-induced oxidative stress in HepG2 cells. The fruit and root parts of Paeonia anomala L. are used for the treatment of many kinds of disorders in Mongolian traditional medicine. The protective effect of a fruit extract from P. anomala against tert-butylhydroperoxide-induced cell damage was evaluated in human hepatoma HepG2 cells and compared to that of a root extract from P. anomala on the basis of cell viability, generation of intracellular reactive oxygen species, cellular total glutathione concentration, and anti-genotoxicity. The fruit extract of P. anomala showed excellent protection against the oxidative stress when compared to the root extract, through free radical scavenging, enhancing cellular glutathione concentration, and inhibiting DNA damage. Chemical constituents in the fruit extract of P. anomala were investigated and two novel compounds, 2-hydroxy-6-methoxy-4-O-(6'-O-α-L-arabinofuranosyl-β-D-glucopyranosyl)acetophenone (1) and 3,3'-di-O-methyl-4-O-(3''-O-galloyl-β-D-glucopyranosyl)ellagic acid (2), along with 18 other known compounds were identified. Compound 2 showed better cytoprotection against tert-butylhydroperoxide than compound 1. Among other compounds isolated from the fruit extract, ellagic acid, methyl gallate, ethyl gallate, fischeroside B, and quercetin derivatives showed potent protective effects against tert-butylhydroperoxide-induced oxidative stress via inhibiting reactive oxygen species generation and increasing total glutathione levels in HepG2 cells.
|
Road network extraction by local context interpretation In this paper, we describe an interpretation tool designed to identify the arcs of a network generated by an automatic road network extraction system. This system is based on the variable use of various extraction methods: intensive for low-level processes, restricted for higher-level processes. Particular attention is paid to the efficiency evaluation of this high-level module and to the modeling of the different objects of a scene.
|
// Create an instance of a Pd 'map' object.
void *map_new(t_floatarg in_min, t_floatarg in_max, t_floatarg out_min, t_floatarg out_max) {
t_map *map = (t_map *)pd_new(map_class);
map->value = 0.0;
map->in_min = (float)in_min;
map->in_max = (float)in_max;
map->out_min = (float)out_min;
map->out_max = (float)out_max;
map->maped_outlet = outlet_new(&map->x_ob,NULL);
inlet_new(&map->x_ob,&map->x_ob.ob_pd, &s_float, gensym("in_min"));
inlet_new(&map->x_ob,&map->x_ob.ob_pd, &s_float, gensym("in_max"));
inlet_new(&map->x_ob,&map->x_ob.ob_pd, &s_float, gensym("out_min"));
inlet_new(&map->x_ob,&map->x_ob.ob_pd, &s_float, gensym("out_max"));
return (void *)map;
}
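The creation routine above only stores the input and output ranges; the mapping itself is not shown in this fragment. It is presumably the standard linear rescaling, sketched here outside of Pd (this is an assumption about the object's behavior, not its actual perform routine):

```python
# Linear range mapping as presumably performed by the Pd 'map' object above:
# rescale x from [in_min, in_max] to [out_min, out_max].
def map_range(x, in_min, in_max, out_min, out_max):
    if in_max == in_min:  # avoid division by zero on a degenerate input range
        return out_min
    return out_min + (x - in_min) * (out_max - out_min) / (in_max - in_min)

print(map_range(0.5, 0.0, 1.0, 0.0, 127.0))  # 63.5
```

The extra inlets created in `map_new` would then simply update `in_min`, `in_max`, `out_min`, and `out_max` before the next value is mapped.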
|
package global
import "github.com/karldoenitz/Tigo/TigoWeb"
type JsonResponse struct {
TigoWeb.BaseResponse
Status int `json:"code"`
Message string `json:"msg"`
Data interface{} `json:"data,omitempty"`
}
|
import * as React from "react";
import { JSX } from "react-jsx";
import { IFluentIconsProps } from '../IFluentIconsProps.types';
const MailRead20Regular = (iconProps: IFluentIconsProps, props: React.HTMLAttributes<HTMLElement>): JSX.Element => {
const {
primaryFill,
className
} = iconProps;
return <svg {...props} width={20} height={20} viewBox="0 0 20 20" xmlns="http://www.w3.org/2000/svg" className={className}><path d="M9.74 3.07a.5.5 0 01.52 0l6.77 4.06A2 2 0 0118 8.85v5.65a2.5 2.5 0 01-2.5 2.5h-11A2.5 2.5 0 012 14.5V8.85a2 2 0 01.97-1.72l.21.36-.2-.36 6.76-4.06zM10 4.08L3.49 8 3.47 8 10 11.92 16.53 8h-.02L10 4.07zm7 4.8l-6.74 4.05a.5.5 0 01-.52 0L3 8.88v5.62c0 .83.67 1.5 1.5 1.5h11c.83 0 1.5-.67 1.5-1.5V8.88z" fill={primaryFill} /></svg>;
};
export default MailRead20Regular;
|
from typing import Any, Dict, List
import json

def _gather_balances(self, addresses: List[str], height: int) -> Dict[Any, str]:
    """Batch-fetch account balances for the given addresses at a block height.

    Returns a mapping from JSON-RPC request id to the balance, decoded
    from its hex representation into a decimal string.
    """
    requests = self._generate_web3_requests(addresses, height)
    response = self._batch_gatherer.make_request(json.dumps(requests))
    addr_dict: Dict[Any, str] = {}
    for item in response:
        # Skip error strings and entries whose request returned no result.
        if isinstance(item, str) or item.get('result') is None:
            continue
        # Results are hex-encoded quantities; store them as decimal strings.
        addr_dict[item.get('id')] = str(int(item.get('result'), 16))
    return addr_dict
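`_generate_web3_requests` and `_batch_gatherer` are not shown, but the method evidently parses a JSON-RPC batch response of balance queries. The shape of such a response, and the hex-to-decimal conversion applied to it, can be sketched as follows (the payload below is invented for illustration):

```python
# Sketch of the JSON-RPC batch response shape that _gather_balances parses:
# each entry carries the request id and a hex-encoded balance in 'result',
# which is converted to a decimal string. Payload values are illustrative.
response = [
    {"id": "0xabc", "jsonrpc": "2.0", "result": "0xde0b6b3a7640000"},  # 10**18
    {"id": "0xdef", "jsonrpc": "2.0", "result": None},                 # skipped
]

addr_dict = {}
for item in response:
    if isinstance(item, str) or item.get("result") is None:
        continue
    addr_dict[item.get("id")] = str(int(item.get("result"), 16))

print(addr_dict)  # {'0xabc': '1000000000000000000'}
```

Entries with a `None` result (or whole-string error items) are silently dropped, so the returned mapping only ever contains successfully resolved balances.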
|
from flask import Blueprint, render_template
route_member = Blueprint("member_page", __name__)
@route_member.route('/index')
def index():
return render_template("/member/index.html")
@route_member.route('/info')
def info():
return render_template("/member/info.html")
@route_member.route('/set')
def set():
return render_template('/member/set.html')
@route_member.route('/comment')
def comment():
return render_template('/member/comment.html')
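The blueprint module above only defines the routes; wiring it into an application would look roughly like the sketch below. The `url_prefix` is an assumption for illustration, and the plain-string view body stands in for the `render_template` calls so the example is self-contained:

```python
# Sketch of registering a blueprint like route_member on a Flask app.
# The url_prefix and the simplified view body are assumptions.
from flask import Blueprint, Flask

route_member = Blueprint("member_page", __name__)

@route_member.route("/index")
def index():
    return "member index"  # stands in for render_template("/member/index.html")

app = Flask(__name__)
app.register_blueprint(route_member, url_prefix="/member")

# With the prefix, the view is reachable at /member/index:
client = app.test_client()
print(client.get("/member/index").status_code)  # 200
```

Registering with a prefix keeps the blueprint's own route declarations short while grouping all member pages under one URL namespace.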
|
Effects of root addition and foliar application of nitric oxide and salicylic acid in alleviating iron deficiency induced chlorosis of peanut seedlings ABSTRACT Nitric oxide (NO) and salicylic acid (SA) are two important signaling molecules that can alleviate chlorosis of peanut under iron (Fe) deficiency. Here, we further investigated the mechanism by which different combinations of sodium nitroprusside (SNP, a nitric oxide donor) and SA application alleviate Fe deficiency symptoms, and determined which combination is best. Peanut was cultivated in hydroponic culture under iron-limiting conditions with different combinations of SNP and SA application. After 21 days, Fe deficiency significantly inhibited peanut growth, decreased soluble Fe concentration and chlorophyll contents, and disturbed ionic homeostasis. In addition, the content of reactive oxygen species (ROS) and the malondialdehyde (MDA) concentration increased, leading to lipid peroxidation. Application of SNP and SA significantly changed Fe trafficking in cells and organs: it increased Fe uptake from the nutrient solution and transport from root to shoot, enhanced the activity of ferric-chelate reductase (FCR), increased the available Fe in cell organelles, and raised the active Fe and chlorophyll contents in leaves. Furthermore, it ameliorated the inhibition of calcium (Ca), magnesium (Mg) and zinc (Zn) uptake and promoted plant growth under Fe deficiency. At the same time, it increased the activities of superoxide dismutase (SOD), peroxidase (POD) and catalase (CAT) to protect the plasmolemma from peroxidation. The results demonstrated that different combinations of SNP and SA application can alleviate the chlorosis of peanut under Fe deficiency through various mechanisms, such as increasing the available Fe and chlorophyll concentrations in leaves, improving the activities of antioxidant enzymes, and modulating the balance of mineral elements. 
Foliar application of SNP and SA best protects the leaves, while adding them directly to the nutrient solution best protects the roots. These results also indicated that supplying SNP and SA together, to the leaves or to the roots, is more effective than adding one to the roots and spraying the other on the leaves. The best combination is foliar application of SNP and SA.
|
Impact of once- versus twice-daily perphenazine dosing on clinical outcomes: an analysis of the CATIE data. OBJECTIVE The objective of this study was to evaluate the impact of once- versus twice-daily dosing of perphenazine, which has a plasma half-life of 8-12 hours, on clinical outcomes in patients with schizophrenia. METHOD Data from phase 1 of the Clinical Antipsychotic Trial of Intervention Effectiveness (CATIE) conducted between January 2001 and December 2004 were used in this post hoc analysis. Patients with schizophrenia (DSM-IV) randomly allocated to treatment with perphenazine were also randomly assigned to once-daily (N = 133) or twice-daily (N = 124) dosing and followed over 18 months. Discontinuation rate and time to discontinuation were used as primary outcomes to compare the 2 groups. The following clinical outcomes were analyzed as secondary measures: efficacy-Positive and Negative Syndrome Scale, Clinical Global Impressions-Severity scale, Calgary Depression Scale for Schizophrenia, and Drug Attitude Inventory and safety/tolerability-Abnormal Involuntary Movement Scale, Barnes Akathisia Rating Scale, Simpson-Angus Scale, and body weight. Data on treatment-emergent adverse events, concomitant psychotropic medications, and medication adherence (pill count and clinician rating scale) were also analyzed for each group. RESULTS No significant differences were found in any outcome measures between the once-daily and twice-daily dosing groups, which remained the same when using the mean dose of perphenazine as a covariate. CONCLUSIONS Perphenazine is routinely administered in a divided dosage regimen because of its relatively short plasma half-life. However, the present findings challenge such a strategy, suggesting that once-daily represents a viable treatment option. Results are discussed in the context of more recent evidence that challenges the need for high and continuous dopamine D2 receptor blockade to sustain antipsychotic response. 
TRIAL REGISTRATION ClinicalTrials.gov identifier: NCT00014001.
|
The reference frame of the motion aftereffect is retinotopic. Although eye-, head- and body-movements can produce large-scale translations of the visual input on the retina, perception is notable for its spatiotemporal continuity. The visual system might achieve this by the creation of a detailed map in world coordinates--a spatiotopic representation. We tested the coordinate system of the motion aftereffect by adapting observers to translational motion and then tested at the same retinal and spatial location (full aftereffect condition), at the same retinal location, but at a different spatial location (retinotopic condition), at the same spatial, but at a different retinal location (spatiotopic condition), or at a different spatial and retinal location (general transfer condition). We used large stimuli moving at high speed to maximize the likelihood of motion integration across space. In a second experiment, we added a contrast-decrement detection task to the motion stimulus to ensure attention was directed at the adapting location. Strong motion aftereffects were found when observers were tested in the full and retinotopic aftereffect conditions. We also found a smaller aftereffect at the spatiotopic location but it did not differ from that at the location that was neither spatiotopic nor retinotopic. This pattern of results did not change when attention was explicitly directed at the adapting stimulus. We conclude that motion adaptation took place at retinotopic levels of visual cortex and that no spatiotopic interaction of motion adaptation and test occurred across saccades.
|
#!/usr/bin/env python
from __future__ import absolute_import, division, print_function, unicode_literals
import os
import subprocess
# The location of the generate grammar kit script
DIR = os.path.dirname(__file__)
# The location of the plugin directory
PLUGIN_PATH = os.path.join(DIR, "..")
# The location of the grammar-kit directory
GRAMMAR_KIT = os.path.join(DIR, "../../../third-party/java/grammar-kit/")
OUT_DIR = os.path.join(PLUGIN_PATH, "gen")
FLEX_OUT_DIR = os.path.join(OUT_DIR, "com/facebook/buck/intellij/ideabuck/lang")
GRAMMAR_KIT_JAR = os.path.join(GRAMMAR_KIT, "grammar-kit.jar")
GRAMMAR_KIT_JFLEX_JAR = os.path.join(GRAMMAR_KIT, "JFlex.jar")
JFLEX_SKELETON = os.path.join(PLUGIN_PATH, "resources/idea-flex.skeleton")
FLEX_FILE = os.path.join(
PLUGIN_PATH, "src/com/facebook/buck/intellij/ideabuck/lang/Buck.flex"
)
BNF_FILE = os.path.join(
PLUGIN_PATH, "src/com/facebook/buck/intellij/ideabuck/lang/Buck.bnf"
)
print(FLEX_OUT_DIR)
subprocess.call(["java", "-jar", GRAMMAR_KIT_JAR, OUT_DIR, BNF_FILE])
subprocess.call(
[
"java",
"-jar",
GRAMMAR_KIT_JFLEX_JAR,
"-sliceandcharat",
"-skel",
JFLEX_SKELETON,
"-d",
FLEX_OUT_DIR,
FLEX_FILE,
]
)
|
package org.apache.el.parser;
import javax.el.ELContext;
import javax.el.ELManager;
import javax.el.ELProcessor;
import javax.el.ExpressionFactory;
import javax.el.ValueExpression;
import org.junit.Assert;
import org.junit.Test;
public class TestAstSemicolon {
@Test
public void testGetValue01() {
ELProcessor processor = new ELProcessor();
Object result = processor.getValue("1;2", String.class);
Assert.assertEquals("2", result);
}
@Test
public void testGetValue02() {
ELProcessor processor = new ELProcessor();
Object result = processor.getValue("1;2", Integer.class);
Assert.assertEquals(Integer.valueOf(2), result);
}
@Test
public void testGetValue03() {
ELProcessor processor = new ELProcessor();
Object result = processor.getValue("1;2 + 3", Integer.class);
Assert.assertEquals(Integer.valueOf(5), result);
}
@Test
public void testGetType() {
ELProcessor processor = new ELProcessor();
ELContext context = processor.getELManager().getELContext();
ExpressionFactory factory = ELManager.getExpressionFactory();
ValueExpression ve = factory.createValueExpression(
context, "${1+1;2+2}", Integer.class);
Assert.assertEquals(Number.class, ve.getType(context));
Assert.assertEquals(Integer.valueOf(4), ve.getValue(context));
}
}
|
// WARNING: This file is autogenerated. DO NOT EDIT!
// Generated 2021-06-03 02:37:30 +0000
package jnr.constants.platform.linux.aarch64;
public enum InterfaceInfo implements jnr.constants.Constant {
IFF_ALLMULTI(512L),
// IFF_802_1Q_VLAN not defined
// IFF_ALTPHYS not defined
IFF_AUTOMEDIA(16384L),
// IFF_BONDING not defined
// IFF_BRIDGE_PORT not defined
IFF_BROADCAST(2L),
// IFF_CANTCONFIG not defined
IFF_DEBUG(4L),
// IFF_DISABLE_NETPOLL not defined
// IFF_DONT_BRIDGE not defined
// IFF_DORMANT not defined
// IFF_DRV_OACTIVE not defined
// IFF_DRV_RUNNING not defined
// IFF_DYING not defined
IFF_DYNAMIC(32768L),
// IFF_EBRIDGE not defined
// IFF_ECHO not defined
// IFF_ISATAP not defined
// IFF_LINK0 not defined
// IFF_LINK1 not defined
// IFF_LINK2 not defined
// IFF_LIVE_ADDR_CHANGE not defined
IFF_LOOPBACK(8L),
// IFF_LOWER_UP not defined
// IFF_MACVLAN_PORT not defined
IFF_MASTER(1024L),
// IFF_MASTER_8023AD not defined
// IFF_MASTER_ALB not defined
// IFF_MASTER_ARPMON not defined
// IFF_MONITOR not defined
IFF_MULTICAST(4096L),
IFF_NOARP(128L),
IFF_NOTRAILERS(32L),
// IFF_OACTIVE not defined
// IFF_OVS_DATAPATH not defined
IFF_POINTOPOINT(16L),
IFF_PORTSEL(8192L),
// IFF_PPROMISC not defined
IFF_PROMISC(256L),
// IFF_RENAMING not defined
// IFF_ROUTE not defined
IFF_RUNNING(64L),
// IFF_SIMPLEX not defined
IFF_SLAVE(2048L),
// IFF_SLAVE_INACTIVE not defined
// IFF_SLAVE_NEEDARP not defined
// IFF_SMART not defined
// IFF_STATICARP not defined
// IFF_SUPP_NOFCS not defined
// IFF_TEAM_PORT not defined
// IFF_TX_SKB_SHARING not defined
// IFF_UNICAST_FLT not defined
IFF_UP(1L);
// IFF_WAN_HDLC not defined
// IFF_XMIT_DST_RELEASE not defined
// IFF_VOLATILE not defined
// IFF_CANTCHANGE not defined
private final long value;
private InterfaceInfo(long value) { this.value = value; }
public static final long MIN_VALUE = 1L;
public static final long MAX_VALUE = 32768L;
static final class StringTable {
public static final java.util.Map<InterfaceInfo, String> descriptions = generateTable();
public static final java.util.Map<InterfaceInfo, String> generateTable() {
java.util.Map<InterfaceInfo, String> map = new java.util.EnumMap<InterfaceInfo, String>(InterfaceInfo.class);
map.put(IFF_ALLMULTI, "IFF_ALLMULTI");
map.put(IFF_AUTOMEDIA, "IFF_AUTOMEDIA");
map.put(IFF_BROADCAST, "IFF_BROADCAST");
map.put(IFF_DEBUG, "IFF_DEBUG");
map.put(IFF_DYNAMIC, "IFF_DYNAMIC");
map.put(IFF_LOOPBACK, "IFF_LOOPBACK");
map.put(IFF_MASTER, "IFF_MASTER");
map.put(IFF_MULTICAST, "IFF_MULTICAST");
map.put(IFF_NOARP, "IFF_NOARP");
map.put(IFF_NOTRAILERS, "IFF_NOTRAILERS");
map.put(IFF_POINTOPOINT, "IFF_POINTOPOINT");
map.put(IFF_PORTSEL, "IFF_PORTSEL");
map.put(IFF_PROMISC, "IFF_PROMISC");
map.put(IFF_RUNNING, "IFF_RUNNING");
map.put(IFF_SLAVE, "IFF_SLAVE");
map.put(IFF_UP, "IFF_UP");
return map;
}
}
public final String toString() { return StringTable.descriptions.get(this); }
public final int value() { return (int) value; }
public final int intValue() { return (int) value; }
public final long longValue() { return value; }
public final boolean defined() { return true; }
}
|
package service
import (
"k8s.io/api/core/v1"
metaV1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes"
"github.com/Qihoo360/wayne/src/backend/client"
"github.com/Qihoo360/wayne/src/backend/resources/common"
"github.com/Qihoo360/wayne/src/backend/resources/endpoint"
"github.com/Qihoo360/wayne/src/backend/resources/event"
"github.com/Qihoo360/wayne/src/backend/resources/pod"
)
type ServiceDetail struct {
ObjectMeta common.ObjectMeta `json:"objectMeta"`
TypeMeta common.TypeMeta `json:"typeMeta"`
// InternalEndpoint of all Kubernetes services that have the same label selector as connected Replication
// Controller. Endpoints is DNS name merged with ports.
InternalEndpoint common.Endpoint `json:"internalEndpoint"`
// ExternalEndpoints of all Kubernetes services that have the same label selector as connected Replication
// Controller. Endpoints is external IP address name merged with ports.
ExternalEndpoints []common.Endpoint `json:"externalEndpoints"`
// List of Endpoint obj. that are endpoints of this Service.
EndpointList []endpoint.Endpoint `json:"endpointList"`
// Label selector of the service.
Selector map[string]string `json:"selector"`
// Type determines how the service will be exposed. Valid options: ClusterIP, NodePort, LoadBalancer
Type v1.ServiceType `json:"type"`
// ClusterIP is usually assigned by the master. Valid values are None, empty string (""), or
// a valid IP address. None can be specified for headless services when proxying is not required
ClusterIP string `json:"clusterIP"`
// List of events related to this Service
EventList []common.Event `json:"eventList"`
// PodInfos represents list of pods status targeted by same label selector as this service.
PodList []*v1.Pod `json:"podList"`
// Show the value of the SessionAffinity of the Service.
SessionAffinity v1.ServiceAffinity `json:"sessionAffinity"`
}
func GetServiceDetail(cli *kubernetes.Clientset, indexer *client.CacheFactory, namespace, name string) (*ServiceDetail, error) {
serviceData, err := cli.CoreV1().Services(namespace).Get(name, metaV1.GetOptions{})
if err != nil {
return nil, err
}
endpoints, err := endpoint.GetServiceEndpointsFromCache(indexer, namespace, name)
if err != nil {
return nil, err
}
podList, err := pod.ListKubePod(indexer, namespace, serviceData.Spec.Selector)
if err != nil {
return nil, err
}
eventList, err := event.GetPodsWarningEvents(indexer, podList)
if err != nil {
return nil, err
}
detail := toServiceDetail(serviceData, eventList, podList, endpoints)
return &detail, nil
}
func toServiceDetail(service *v1.Service, events []common.Event, pods []*v1.Pod, endpoints []endpoint.Endpoint) ServiceDetail {
return ServiceDetail{
ObjectMeta: common.NewObjectMeta(service.ObjectMeta),
TypeMeta: common.NewTypeMeta(common.ResourceKind("service")),
InternalEndpoint: common.GetInternalEndpoint(service.Name, service.Namespace, service.Spec.Ports),
ExternalEndpoints: common.GetExternalEndpoints(service),
EndpointList: endpoints,
Selector: service.Spec.Selector,
ClusterIP: service.Spec.ClusterIP,
Type: service.Spec.Type,
EventList: events,
PodList: pods,
SessionAffinity: service.Spec.SessionAffinity,
}
}
|
<filename>besu/src/main/java/org/hyperledger/besu/cli/options/OptionParser.java
/*
* Copyright ConsenSys AG.
*
* Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
* an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*
* SPDX-License-Identifier: Apache-2.0
*/
package org.hyperledger.besu.cli.options;
import static com.google.common.base.Preconditions.checkArgument;
import java.util.Iterator;
import com.google.common.base.Splitter;
import com.google.common.collect.Range;
import org.apache.tuweni.units.bigints.UInt256;
public class OptionParser {
public static Range<Long> parseLongRange(final String arg) {
checkArgument(arg.matches("-?\\d+\\.\\.-?\\d+"));
final Iterator<String> ends = Splitter.on("..").split(arg).iterator();
return Range.closed(parseLong(ends.next()), parseLong(ends.next()));
}
public static long parseLong(final String arg) {
return Long.parseLong(arg, 10);
}
public static String format(final Range<Long> range) {
return format(range.lowerEndpoint()) + ".." + format(range.upperEndpoint());
}
public static String format(final int value) {
return Integer.toString(value, 10);
}
public static String format(final long value) {
return Long.toString(value, 10);
}
public static String format(final float value) {
return Float.toString(value);
}
public static String format(final UInt256 value) {
return value.toBigInteger().toString(10);
}
}
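For illustration, the `M..N` parsing done by `parseLongRange` can be sketched without the Guava `Range` and `Splitter` dependencies; the class and method names below are illustrative, not part of Besu:

```java
// Dependency-free sketch of the "M..N" range-parsing idea; names are illustrative.
public class RangeParseDemo {

    /** Parses "M..N" into {M, N}; negative endpoints are accepted, as in the regex above. */
    static long[] parseLongRange(final String arg) {
        if (!arg.matches("-?\\d+\\.\\.-?\\d+")) {
            throw new IllegalArgumentException("expected M..N, got: " + arg);
        }
        // The regex guarantees the first ".." is the separator between the two numbers.
        final int sep = arg.indexOf("..");
        return new long[] {
            Long.parseLong(arg.substring(0, sep)),
            Long.parseLong(arg.substring(sep + 2))
        };
    }

    public static void main(final String[] args) {
        final long[] r = parseLongRange("-5..100");
        System.out.println(r[0] + ".." + r[1]);
    }
}
```

Because `matches` anchors the whole string, malformed input such as `"1..2..3"` or `"1..x"` is rejected before parsing.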
|
The use of testate amoebae in studies of sea-level change: a case study from the Taf Estuary, south Wales, UK Micropalaeontological techniques play an important role in high-resolution studies of sea-level change. Salt-marsh foraminifera are among the most valuable groups of sea-level indicators as their distribution shows a narrow vertical zonation which can be accurately related to former sea level. This paper focuses on testate amoebae (thecamoebians), a closely related group of protozoans which have also been widely reported in salt marshes, but only in low numbers and diversities and only in the size fraction used in foraminiferal analyses (>63 μm). A new preparation technique is described which is based on the analysis of the <63 μm fraction using high-power light microscopy. This technique is applied to surface sediment samples from a salt marsh in the Taf Estuary in south Wales. The results show that small testate amoebae (<63 μm) are much more abundant and diverse in salt-marsh sediments than the larger testate amoebae (>63 μm). Species diversity increases from 2 (>63 μm) to 36 (<63 μm) and the maximum abundance is 65,600 individuals per cm³. The assemblages display a distinct spatial zonation across the marsh surface which appears to be closely related to elevation and to tidal parameters. The surface distribution of the testate amoebae is compared with the distribution of foraminifera and diatoms at the same site. The implications of using <63 μm testate amoebae as a tool for sea-level reconstruction are discussed.
|
The State of Mississippi on Wednesday awarded a tax break worth up to $6 million for a hotel project involving the Trump family business, a public subsidy that could indirectly benefit President Trump.
The board of the Mississippi Development Authority approved the so-called tourism tax rebate, which had been requested by the development’s local owners, Dinesh and Suresh Chawla. The Trump Organization will brand and manage the hotel and collect fees from the Chawlas for doing so.
The subsidy, to be paid out over a period of many years, is expected to offset nearly a third of the Chawlas’ projected $20 million in costs for building the hotel, which is scheduled to open this fall in Cleveland, Miss. The property, called Scion West End, is to be the first in a new line of four-star Scion hotels that the Trump Organization announced late in the 2016 presidential campaign.
In December, the Chawlas formally applied for the tax rebate from the state development agency, which is led by Glenn McCullough Jr., an appointee of Gov. Phil Bryant; both men are supportive of President Trump.
In an email on Wednesday, Dinesh Chawla said he and his brother were pleased by the approval, though they said the development agency had not notified them of it.
The development agency declined to comment.
The decision to approve the request was not unusual for the agency, which evaluates applications for the tourism rebate program based on set criteria and has granted similar subsidies to other hotels.
The award renews legal questions about a Trump-affiliated property receiving benefits from a state or local government. Ethics watchdogs and the president’s critics say the Mississippi tax break would benefit the president, albeit indirectly, because he continues to own the Trump Organization through a trust.
Such benefits, they say, could violate the Constitution’s emoluments clauses, which essentially prohibit the president from accepting certain gifts from foreign or domestic governments. Other legal experts, however, contend that domestic emoluments are allowable so long as Mr. Trump does not earn them from his service as president.
Dinesh Chawla said in an email to The Times earlier this month that the Trump Organization had played no role in the rebate application and that the Trumps and the Chawlas had agreed that any rebate would not figure into fees paid to the Trumps.
A spokesman for the Mississippi development agency said the Trump name was not mentioned anywhere in the Chawlas’ application.
The development agency’s tourism rebate program is part of an effort to draw tourists to Mississippi and help the local economy. According to the development agency, the program allows a developer to recoup some of the sales taxes collected on a property to “reimburse the applicant for eligible costs incurred during the project’s construction.” The agency said earlier this month that 23 other tourism rebate applications had been approved under the program, including 10 for hotels.
The partnership between the Chawlas and the Trumps materialized after Mr. Bryant, a Republican, introduced members of the two families during the 2016 presidential campaign. The governor and the Chawlas have known each other for years.
Clay Chandler, a spokesman for Mr. Bryant, said earlier this month that the Chawlas had followed the same procedure as other applicants for the tax rebate. “State law guides the application process, and state law alone will determine if any application is approved,” he said.
Still, the Chawlas contacted state officials for more than two years, seeking to get the project on their radar, according to the emails obtained through a public records request.
“I would really appreciate your efforts in advocating this idea with the Governor’s office, MDA executive director Glenn McCullough, and MDA staff and others,” Suresh Chawla wrote in a July 2015 email to Robert Morgan, an aide to Mr. Bryant, using the acronym for the development agency.
Last summer, Suresh Chawla alerted Mr. Morgan to the partnership with the Trumps within minutes of its being announced, sending him and other officials a news release about the deal.
When Mr. Morgan received the message, he shared the news release with three other members of Mr. Bryant’s administration, including his chief of staff, along with an official at the development agency.
Dinesh Chawla said that he and his brother had emailed more than 1,000 people with the news of the deal. “It had nothing to do with the people in high offices,” he said.
|
/**
* The type Onvif executor.
*
* @author : Ajit Gaikwad
* @version : V1.0
* @email : [email protected]
* @project : SmartCam
* @package : com.things.smartlib
* @date : 10/24/2018
* @see <a href="https://smartron.com/things.html">TThings a Smartron Company</a>
*/
public class OnvifExecutor {
/**
* The constant TAG.
*/
//Constants
public static final String TAG = OnvifExecutor.class.getSimpleName();
//Attributes
private OkHttpClient client;
private MediaType reqBodyType;
private RequestBody reqBody;
private Credentials credentials;
private OnvifResponseListener onvifResponseListener;
private static final String FORMAT_HTTP = "http://%s";
//Constructors
/**
* Instantiates a new Onvif executor.
*
* @param onvifResponseListener the onvif response listener
*/
OnvifExecutor(OnvifResponseListener onvifResponseListener) {
this.onvifResponseListener = onvifResponseListener;
credentials = new Credentials("username", "password");
DigestAuthenticator authenticator = new DigestAuthenticator(credentials);
Map<String, CachingAuthenticator> authCache = new ConcurrentHashMap<>();
client = new OkHttpClient.Builder()
.connectTimeout(CONNECTION_TIMEOUT, TimeUnit.SECONDS)
.writeTimeout(WRITE_TIMEOUT, TimeUnit.SECONDS)
.readTimeout(READ_TIMEOUT, TimeUnit.SECONDS)
.authenticator(new CachingAuthenticatorDecorator(authenticator, authCache))
.addInterceptor(new AuthenticationCacheInterceptor(authCache))
.build();
reqBodyType = MediaType.parse(CONNECTION_MEDIATYPE);
}
//Methods
/**
* Sends a request to the Onvif-compatible device.
*
* @param device the device
* @param request the request
*/
void sendRequest(OnvifDevice device, OnvifRequest request) {
credentials.setUserName(device.getUsername());
credentials.setPassword(device.getPassword());
reqBody = RequestBody.create(reqBodyType, OnvifXMLBuilder.getSoapHeader() + request.getXml() + OnvifXMLBuilder.getEnvelopeEnd());
performXmlRequest(device, request, buildOnvifRequest(device, request));
}
/**
* Clears up the resources.
*/
void clear() {
onvifResponseListener = null;
}
//Properties
/**
* Sets onvif response listener.
*
* @param onvifResponseListener the onvif response listener
*/
public void setOnvifResponseListener(OnvifResponseListener onvifResponseListener) {
this.onvifResponseListener = onvifResponseListener;
}
private void performXmlRequest(OnvifDevice device, OnvifRequest request, Request xmlRequest) {
if (xmlRequest == null)
return;
client.newCall(xmlRequest)
.enqueue(new Callback() {
@Override
public void onResponse(Call call, Response xmlResponse) throws IOException {
OnvifResponse response = new OnvifResponse(request);
ResponseBody xmlBody = xmlResponse.body();
if (xmlResponse.code() == 200 && xmlBody != null) {
response.setSuccess(true);
response.setXml(xmlBody.string());
onvifResponseListener.onResponse(device, response);
parseResponse(device, response);
return;
}
String errorMessage = "";
if (xmlBody != null)
errorMessage = xmlBody.string();
onvifResponseListener.onError(device, xmlResponse.code(), errorMessage);
}
@Override
public void onFailure(Call call, IOException e) {
onvifResponseListener.onError(device, -1, e.getMessage());
}
});
}
private void parseResponse(OnvifDevice device, OnvifResponse response) {
switch (response.getOnvifRequest().getType()) {
case GET_CAPABILITIES:
((GetDeviceCapabilities) response.getOnvifRequest()).getDeviceCapabilitiesListener().onCapabilitiesReceived(device,
new DeviceCapabilitiesParser().parse(response));
break;
case GET_SERVICES:
OnvifServices path = new GetServicesParser().parse(response);
device.setPath(path);
((GetServicesRequest) response.getOnvifRequest()).getListener().onServicesReceived(device, path);
break;
case GET_DEVICE_INFORMATION:
((GetDeviceInformationRequest) response.getOnvifRequest()).getListener().onDeviceInformationReceived(device,
new GetDeviceInformationParser().parse(response));
break;
case GET_MEDIA_PROFILES:
((GetMediaProfilesRequest) response.getOnvifRequest()).getListener().onMediaProfileReceived(device,
new DeviceMediaProfileParser().parse(response));
break;
case GET_STREAM_URI:
GetMediaStreamRequest streamRequest = (GetMediaStreamRequest) response.getOnvifRequest();
streamRequest.getListener().onvifStreamUriReceived(device, streamRequest.getMediaProfile(),
new GetMediaStreamParser().parse(response));
break;
case PTZ:
PTZRequest ptzRequest = (PTZRequest) response.getOnvifRequest();
ptzRequest.getOnvifPTZListener().onPTZReceived(device, response.isSuccess());
break;
case DEVICE_DISCOVER_MODE:
GetDeviceDiscoveryMode deviceDiscoveryMode = (GetDeviceDiscoveryMode) response.getOnvifRequest();
deviceDiscoveryMode.getDeviceDiscoverModeListener().OnDeviceDiscoverModeReceived(device, new DeviceDiscoverModeParser().parse(response));
break;
case DEVICE_DNS:
GetDeviceDNS deviceDNS = (GetDeviceDNS) response.getOnvifRequest();
deviceDNS.getDeviceDNSListener().OnDNSReceived(device, new DeviceDNSParser().parse(response));
break;
case DEVICE_HOSTNAME:
GetDeviceHostname deviceHostname = (GetDeviceHostname) response.getOnvifRequest();
deviceHostname.getDeviceHostnameListener().OnHostnameReceived(device, new DeviceHostnameParser().parse(response));
break;
case DEVICE_NWGATEWAY:
GetDeviceNWGateway deviceNWGateway = (GetDeviceNWGateway) response.getOnvifRequest();
deviceNWGateway.getDeviceNWGatewayListener().OnNWGatewayReceived(device, new DeviceNWGatewayParser().parse(response));
break;
case DEVICE_NWINTERFACES:
GetDeviceNWInterfaces nwInterfaces = (GetDeviceNWInterfaces) response.getOnvifRequest();
nwInterfaces.getNwInterfacesListener().OnNWInterfacesReceived(device, new DeviceNWInterfacesParser().parse(response));
break;
case DEVICE_NWPROTOCOLS:
GetDeviceNWProtocols nwProtocols = (GetDeviceNWProtocols) response.getOnvifRequest();
nwProtocols.getNwProtoclosListener().OnNWProtocolsReceived(device, new DeviceNWProtocolsParser().parse(response));
break;
case DEVICE_SCOPES:
GetDeviceScopes deviceScopes = (GetDeviceScopes) response.getOnvifRequest();
deviceScopes.getScopesListener().onScopesReceived(device, new DeviceScopesParser().parse(response));
break;
case PTZ_CONFIGURATIONS:
GetPTZConfigurations ptzConfigurations = (GetPTZConfigurations) response.getOnvifRequest();
ptzConfigurations.getPtzConfigurationsListener().onPTZConfigurationsReceived(device, new PTZConfigurationsParser().parse(response));
break;
case PTZ_NODES:
GetPTZNodes ptzNodes = (GetPTZNodes) response.getOnvifRequest();
ptzNodes.getPtzConfigurationsListener().onPTZConfigurationsReceived(device, new PTZConfigurationsParser().parse(response));
break;
default:
onvifResponseListener.onResponse(device, response);
break;
}
}
private Request buildOnvifRequest(OnvifDevice device, OnvifRequest request) {
return new Request.Builder()
.url(getUrlForRequest(device, request))
.addHeader("Content-Type", "text/xml; charset=utf-8")
.post(reqBody)
.build();
}
private String getUrlForRequest(OnvifDevice device, OnvifRequest request) {
String requestUrl = device.getHost();
requestUrl = buildUrl(requestUrl);
return requestUrl + getPathForRequest(device, request);
}
private String getPathForRequest(OnvifDevice device, OnvifRequest request) {
switch (request.getType()) {
case GET_SERVICES:
return device.getPath().getServicepath();
case GET_DEVICE_INFORMATION:
return device.getPath().getDeviceinfomationpath();
case GET_MEDIA_PROFILES:
return device.getPath().getProfilespath();
case GET_STREAM_URI:
return device.getPath().getStreamURIpath();
}
return device.getPath().getServicepath();
}
private String bodyToString(Request request) {
try {
Request copy = request.newBuilder().build();
Buffer buffer = new Buffer();
if (copy.body() != null)
copy.body().writeTo(buffer);
return buffer.readUtf8();
} catch (IOException e) {
e.printStackTrace();
return "";
}
}
private String buildUrl(String url) {
if (url.startsWith("http://") || url.startsWith("https://"))
return url;
return String.format(Locale.getDefault(), FORMAT_HTTP, url);
}
}
|
/// Get the maximum possible solution for the `BV`: that is, the highest value
/// for which the current set of constraints is still satisfiable.
/// "Maximum" will be interpreted in an unsigned fashion.
///
/// Returns `Ok(None)` if there is no solution for the `BV`, that is, if the
/// current set of constraints is unsatisfiable. Only returns `Err` if a solver
/// query itself fails. Panics if the `BV` is wider than 64 bits.
pub fn max_possible_solution_for_bv_as_u64<V: BV>(
solver: V::SolverRef,
bv: &V,
) -> Result<Option<u64>> {
let width = bv.get_width();
if width > 64 {
panic!("max_possible_solution_for_bv_as_u64 on a BV with width > 64");
}
if !sat(&solver)? {
return Ok(None);
}
// Shortcut: if the BV is constant, just return its constant value
if let Some(u) = bv.as_u64() {
return Ok(Some(u));
}
// Shortcut: check all-ones first, and if it's a valid solution, just return that
if bvs_can_be_equal(&solver, bv, &V::ones(solver.clone(), width))? {
if width == 64 {
return Ok(Some(std::u64::MAX));
} else {
return Ok(Some((1 << width) - 1));
}
}
// min is inclusive, max is exclusive (we know all-ones doesn't work)
let mut min: u64 = 0;
let mut max: u64 = if width == 64 {
std::u64::MAX
} else {
(1 << width) - 1
};
let mut pushes = 0;
while (max - min) > 1 {
let mid = (min / 2) + (max / 2) + (min % 2 + max % 2) / 2; // overflow-safe (min + max) / 2
// As a further small optimization, bias towards the low end instead of doing a
// pure binary search (with min == 0 this effectively checks the 25th
// percentile): small positive numbers are assumed to be more common, so this
// tends to converge towards 0 in about half the number of solver queries.
let mid = if mid / 2 > min { mid / 2 } else { mid };
solver.push(1);
pushes += 1;
bv.ugte(&V::from_u64(solver.clone(), mid, width)).assert()?;
if sat(&solver)? {
min = mid;
} else {
max = mid;
solver.pop(1);
pushes -= 1;
}
}
solver.pop(pushes);
assert_eq!(max - min, 1);
// Recall that min is inclusive, max is exclusive. So `min` is actually the
// max possible solution here.
Ok(Some(min))
}
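The overflow-safe midpoint used in the search loop above can be checked in isolation; this standalone sketch reproduces just that arithmetic:

```rust
// Computes (min + max) / 2 without risking u64 overflow, as in the search loop above.
fn midpoint(min: u64, max: u64) -> u64 {
    (min / 2) + (max / 2) + (min % 2 + max % 2) / 2
}

fn main() {
    assert_eq!(midpoint(0, 10), 5);
    assert_eq!(midpoint(3, 5), 4);
    // The naive (min + max) / 2 would overflow here; this form does not.
    assert_eq!(midpoint(u64::MAX - 1, u64::MAX), u64::MAX - 1);
    println!("ok");
}
```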
|
North Captiva Island
Development
Development of the island began in the 1960s, but was slow due to the absence of electric service and the difficulty of transporting building materials to the island. Commercial electric and phone service was established in the mid-1980s. The island's Upper Captiva community has about 300 homes built and 300 vacant lots. About half of the island is owned by the State of Florida and is part of a state park. All other areas are privately owned, including the roads. Since the island can be accessed only by boat or small plane, a regular passenger ferry service runs from Pine Island Marina (North Captiva Island Club Ferry and Island Girl Charter) at two-hour intervals, serving both tourists and locals. There is also a barging service that transports materials to and garbage from the island.
Hurricanes
The island was damaged in August 2004 when the eastern eyewall of Hurricane Charley struck North Captiva, immediately before hitting Charlotte Harbor to the north-northeast. The southern part of the island was divided from the north. Although it took several years, the island has mostly recovered from the hurricane damage and new construction is ongoing.
|
<reponame>Leiloloaa/type-challenges
// Readonly type
type MyReadonly<T> = {
readonly [P in keyof T]: T[P]
}
// JS analogy
function readonly(obj) {
const result = {};
for (const key in obj) {
result["readonly" + key] = obj[key];
}
return result;
}
// 1. Return an object
// 2. Iterate over obj (a JS object / a TS interface): in -> mapped, keyof -> lookup
// 3. Add the readonly keyword (a new concept)
// 4. Get the values inside obj (the interface) by key: indexed access
// Concepts involved
// 1. Returning an object
// 2. Iterating (foreach) over an interface: mapped types
// - keyof lookup
// - https://www.typescriptlang.org/docs/handbook/release-notes/typescript-2-1.html#keyof-and-lookup-types
// - mapped
// - https://www.typescriptlang.org/docs/handbook/2/mapped-types.html
// 3. The readonly keyword (a new concept)
// - https://www.typescriptlang.org/docs/handbook/utility-types.html#readonlytype
// 4. todo[key] value access: indexed access types
// - https://www.typescriptlang.org/docs/handbook/2/indexed-access-types.html
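A small usage sketch of MyReadonly (repeating the definition so the snippet stands alone; the Todo interface is illustrative): the mapped type leaves values untouched at runtime and only blocks reassignment at compile time.

```typescript
type MyReadonly<T> = {
  readonly [P in keyof T]: T[P]
}

interface Todo {
  title: string
  completed: boolean
}

const todo: MyReadonly<Todo> = { title: "Learn mapped types", completed: false }

// todo.title = "..." // compile error: Cannot assign to 'title' because it is a read-only property.

console.log(todo.title) // the data itself is unchanged at runtime
```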
|
"""
Utilities for working with ledger types
"""
from colorama import Fore, Style
def safe_format_amount(commodity, amount):
"""
Formats an amount with a commodity, or without it if the commodity is None
"""
if commodity is None:
return str(amount)
return commodity.format_amount(amount)
def format_amount(commodity, amount):
"Formats the given amount for final display"
fmted = safe_format_amount(commodity, amount)
fmted = '{:>20}'.format(fmted)
if amount < 0:
fmted = Fore.RED + fmted + Style.RESET_ALL
return fmted
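For reference, the padding-and-colour behaviour of format_amount can be sketched without the colorama dependency; format_amount_plain is an illustrative name, and the escape sequences are the raw ANSI codes that Fore.RED and Style.RESET_ALL expand to:

```python
def format_amount_plain(amount):
    """Right-align to 20 columns and colour negatives red, mirroring format_amount."""
    fmted = '{:>20}'.format(str(amount))
    if amount < 0:
        # '\x1b[31m' is red foreground, '\x1b[0m' resets -- what colorama emits.
        fmted = '\x1b[31m' + fmted + '\x1b[0m'
    return fmted

print(repr(format_amount_plain(-3)))
```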
|
In Between the Raindrops, landscape photographer Peter Cox captures Ireland's dramatic vistas in all their glory. The moon fading into blue at the end, according to Discover Magazine, is a lunar eclipse.
For more work by Peter Cox, visit http://www.petercox.ie/.
Via It's Okay to Be Smart.
|
<reponame>julia53/100-days-of-python<filename>day5/password_generator.py
#Password Generator Project
import random
letters = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z']
numbers = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
symbols = ['!', '#', '$', '%', '&', '(', ')', '*', '+']
print("Welcome to the PyPassword Generator!")
nr_letters = int(input("How many letters would you like in your password?\n"))
nr_symbols = int(input("How many symbols would you like?\n"))
nr_numbers = int(input("How many numbers would you like?\n"))
#Eazy Level - Order not randomised:
#e.g. 4 letter, 2 symbol, 2 number = JduE&!91
letters_s = random.sample(letters, nr_letters)
symbols_ = random.sample(symbols, nr_symbols)
numbers_s = random.sample(numbers, nr_numbers)
print("".join(letters_s + symbols_ + numbers_s))
#Hard Level - Order of characters randomised:
#e.g. 4 letter, 2 symbol, 2 number = g^2jk8&P
password_list = letters_s + symbols_ + numbers_s
result = ""
curr_password_len = len(password_list) - 1
while curr_password_len >= 0:
random_index = random.randint(0, curr_password_len)
random_letter = password_list[random_index]
result = result + random_letter
password_list.pop(random_index)
curr_password_len = curr_password_len - 1
print(result)
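The hard-level shuffle loop above can also be written with random.shuffle, which performs the same Fisher-Yates shuffle in place; this standalone sketch wraps the whole flow in a function (the function name is illustrative, and the character pools mirror the script above):

```python
import random

def generate_password(nr_letters, nr_symbols, nr_numbers):
    letters = list('abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ')
    numbers = list('0123456789')
    symbols = list('!#$%&()*+')
    password_list = (random.sample(letters, nr_letters)
                     + random.sample(symbols, nr_symbols)
                     + random.sample(numbers, nr_numbers))
    random.shuffle(password_list)  # in-place shuffle, same effect as the while loop
    return ''.join(password_list)

print(generate_password(4, 2, 2))
```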
|
man = []
other = []
try:
data = open('sketch.txt')
for i in data:
try:
(role, line) = i.split(':', 1)
line = line.strip()
if role == "Man":
man.append(line)
elif role == "Other Man":
other.append(line)
except ValueError:
pass
data.close()
except IOError:
print("File does not exist")
print(man)
print(other)
with open('man_data.txt', 'w') as man_file:
print(man, file=man_file)
with open('other_man_data.txt', 'w') as other_man_file:
print(other, file=other_man_file)
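The role-splitting logic above can be factored into a pure function, which makes it easy to test independently of the files on disk; split_roles is an illustrative name:

```python
def split_roles(lines):
    """Split 'Role: line' strings into Man / Other Man lists, skipping malformed lines."""
    man, other = [], []
    for raw in lines:
        try:
            role, line = raw.split(':', 1)
        except ValueError:
            continue  # no ':' separator -- e.g. a stage direction
        line = line.strip()
        if role == 'Man':
            man.append(line)
        elif role == 'Other Man':
            other.append(line)
    return man, other

print(split_roles(['Man: Is this the right room?', 'stage direction']))
```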
|
import PkSwitch from "..";
import BaseAction from "./action";
import Log from "./log";
export {
PkSwitch as RootPlugin,
BaseAction,
Log
}
|
KANSAS CITY, Mo.—What can you say about a group of teens and preteens taking a 3,000 mile bicycle trek, called Ride2Freedom, to save five orphans? Remarkable, even heroic, perhaps?
Coming from around the world and representing 15 different nations, the 25 riders started cycling on June 1 from Los Angeles.
Their journey aims to raise awareness of the persecution of Falun Gong in the People’s Republic of China, which began in 1999. More specifically they aim to rescue five orphans, who have lost their parents due to the persecution.
All of the children practice Falun Gong, a spiritual path that incorporates meditative exercises and adheres to the principles of truthfulness, compassion, and tolerance.
What moves these youngsters to take their summer vacation and travel across the country? On day 27 of the trip, the Epoch Times caught up with Ride2Freedom in Kansas City to find out.
Aila Verheijke, 11, represents Hong Kong and is the youngest of the riders. Originally from Hong Kong, she now lives in San Francisco.
Aila considers this trek “a once in a lifetime chance” that she doesn’t want to miss. “It is very important that we raise awareness and save five orphans so they can have a happy life,” she said.
For Tanner Gao, 13, the ride is a bit more personal. His parents were persecuted in China.
Before he was born, his dad got taken away to a labor camp for two years. Then when he was 6, his mother was taken to a brainwashing camp for six months. His family moved to the U.S. to escape the persecution, he said.
“I missed her, and I don’t want anyone else to feel that way,” he recalled.
“I would feel really guilty if I miss this chance” to save the orphans, he explained.
Starting in Los Angeles, the riders have rallied across the U.S. and in 45 days will reach Washington D.C., and then push on to New York City to address the U.N., according to Ride2Freedom’s website.
I missed her, and I don’t want anyone else to feel that way.
Averaging about 80 to 130 miles a day, they each cycle in one- to two-hour shifts before taking a break and riding in vehicles with family and supporters.
It should be kept in mind that these riders have no professional biking experience, yet they are climbing mountains and crossing deserts, according to a press release. Thus, it’s not surprising that they have met some tough obstacles along the way.
The hardest part for Aila has been falling down. Some of the riders had wounds that looked pretty red and puffy. Tanner’s legs were covered in scabs from a fall.
For Borong Tsai, 15, from Singapore, the hardest part of the journey was going through the Rockies. “Because I come from sea level, we were in 8,000 feet of elevation and I almost fainted,” he said.
They’ve traveled through some terrifically hot and muggy days as well, with the highest temperature reading of 102 degrees in Kansas.
But kids will be kids. The children enjoy themselves and make friends wherever they go.
The most fun Tanner, who starts eighth grade this year, has had was when he was swimming. “We swam in the clearest lake in Kansas. It was a really hot day and you just rode and then you go swimming. It was a relief,” he said.
Aila has most enjoyed getting to know the other cyclists and camping out together.
Perhaps the most harrowing part of the journey is yet to come. The group will select a few riders to fly to China and bring back the orphans.
Everyone can follow the ride on the Ride2Freedom website, the R2F Facebook page, on NTD Television, which will be streaming live daily from the road, and via Epoch Times.
Ride2Freedom is a group of 30 youths who will bike 3,000 miles across America to rescue children orphaned by the persecution of Falun Gong in China.
|
<filename>tests/buffers/test_buffers.py<gh_stars>10-100
import numpy
from ai_traineree.buffers import Experience
def test_experience_init():
# Assign
state = numpy.random.random(10)
action = numpy.random.random(2)
reward = 3
next_state = numpy.random.random(10)
done = False
exp = Experience(state=state, action=action, reward=reward, done=done)
assert all(exp.state == state)
assert all(exp.action == action)
assert exp.reward == reward
assert exp.done == done
assert not hasattr(exp, "next_state")
exp = Experience(state=state, action=action, reward=reward, next_state=next_state, done=done)
assert all(exp.state == state)
assert all(exp.action == action)
assert exp.reward == reward
assert exp.done == done
assert all(exp.next_state == next_state)
def test_experience_comparison():
# Assign
e1a = Experience(state=[0, 2], action=2, done=False)
e1b = Experience(state=[0, 2], action=2, done=False)
e1c = Experience(action=2, done=False, state=[0, 2])
e2 = Experience(state=[0, 2, 3], action=[1, 1], done=False)
e3 = Experience(state=[0, 2], action=2, done=True)
# Assert
assert e1a == e1b
assert e1b == e1c
assert e2 != e3
|
<gh_stars>10-100
/*
* The MIT License (MIT)
*
* Copyright (c) 2017 <NAME> (https://alphaville.github.io),
* <NAME> (https://www.linkedin.com/in/krinamenounou),
* <NAME> (http://homes.esat.kuleuven.be/~ppatrino)
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in all
* copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*
*/
/*! \page page_benchmark_results Benchmarks
*
* \tableofcontents
*
* \section benchmarks-dolan-more Dolan-Moré performance profiles
*
* In order to compare different solvers, we employ the [Dolan-Moré performance
* profile plot](https://pdfs.semanticscholar.org/54a2/0dbd409436be4f188dfa9a78949a1cac230d.pdf).
*
* Let us briefly introduce the Dolan-Moré performance profile plot.
*
* Let \f$P\f$ be a finite set of problems used as benchmarks and \f$S\f$ be a
* set of solvers we want to compare to one another.
*
* Let \f$t_{p,s}\f$ be the cost (e.g., runtime or flops) to solve a problem
* \f$p\f$ using a solver \f$s\f$.
*
 * We define the ratio between \f$t_{p,s}\f$ and the lowest observed cost of
 * solving this problem with any solver \f$s\in S\f$:
*
* \f{eqnarray*}{
* r_{p,s} = \frac{t_{p,s}}{\min_{s \in S} t_{p,s}}.
* \f}
*
 * If a solver \f$s\f$ does not solve a problem \f$p\f$, then we assign to \f$r_{p,s}\f$
 * a very large value \f$r_M\f$, chosen to exceed \f$r_{p,s}\f$ for all other pairs \f$(p,s)\f$.
*
* The cumulative distribution of the performance ratio is the Dolan-Moré performance
* profile plot.
*
* In particular, define
*
* \f{eqnarray*}{
* \rho_s(\tau) = \frac{1}{n_p}\#\{p\in P: r_{p,s}\leq \tau\},
* \f}
*
* for \f$\tau\geq 1\f$ and where \f$n_p\f$ is the number of problems.
*
* The Dolan-Moré performance profile is the plot of \f$\rho_s\f$ vs \f$\tau\f$,
* typically on a logarithmic x-axis.
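 *
 * As a concrete illustration (not part of the SuperSCS codebase), \f$\rho_s(\tau)\f$
 * can be computed directly from a matrix of solver costs. The Python sketch below
 * assumes a hypothetical `costs` array of shape (problems x solvers), with failed
 * runs encoded as `inf` (playing the role of \f$r_M\f$):
 *
```python
import numpy as np

def performance_profile(costs, tau):
    """Fraction of problems each solver solves within a factor tau of the best."""
    costs = np.asarray(costs, dtype=float)
    best = costs.min(axis=1, keepdims=True)  # cheapest observed cost per problem
    ratios = costs / best                    # r_{p,s}; failures stay at inf
    return (ratios <= tau).mean(axis=0)      # rho_s(tau), one value per solver

# Two solvers on three problems; solver 1 fails on the last problem.
costs = [[1.0, 2.0],
         [4.0, 2.0],
         [3.0, np.inf]]
print(performance_profile(costs, tau=1.0))
print(performance_profile(costs, tau=2.0))
```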
*
* <img src="images/dolan-more.png" alt="The Dolan More plot" width="60%"/>
*
*
* \section benchmark-results Benchmark results
*
* \subsection benchmark-parameters Benchmarking parameters
*
* In all benchmark results presented below we set the tolerance to \f$10^{-4}\f$.
*
 * The \ref #scs_settings.max_iters "maximum number of iterations" was set to a
 * very high value (e.g., \f$10^6\f$), beyond which we can confidently say the
 * problem is unlikely to be solved.
*
 * Given that different algorithms (SCS, SuperSCS using Broyden directions and
 * SuperSCS using Anderson's acceleration) have a different per-iteration cost,
 * we allow every algorithm to run for a given maximum time (see
 * \ref #scs_settings.max_time_milliseconds "max_time_milliseconds").
*
* After that maximum time has passed,
* if the algorithm has not converged we consider that it has failed to solve the
* problem.
*
*
* In Broyden's method we deactivated the K0 steps.
*
* \subsection benchmarks-lasso LASSO problems
*
* [1152 lasso problems](https://github.com/kul-forbes/scs/blob/master/tests/profiling_matlab/profile_runners/profile_runner_lasso.m)
*
*
* <div>
* <table border="0">
* <tr>
* <td style="padding:1px">
* <img src="images/lasso/lasso-broyden-50.png" alt="lasso-broyden-50" width="95%"/>
* </td>
* <td style="padding:1px">
* <img src="images/lasso/lasso-broyden-100.png" alt="lasso-broyden-100" width="95%"/>
* </td>
* <td style="padding:1px">
* <img src="images/lasso/lasso-anderson-5.png" alt="lasso-anderson-5" width="95%"/>
* </td>
* </tr>
* <tr>
* <td style="padding:1px">
* <img src="images/lasso/lasso-anderson-10.png" alt="lasso-anderson-10" width="95%"/>
* </td>
* <td style="padding:1px">
* <img src="images/lasso/lasso-anderson-15.png" alt="lasso-anderson-15" width="95%"/>
* </td>
* <td style="padding:1px">
* <img src="images/lasso/lasso-anderson-20.png" alt="lasso-anderson-20" width="95%"/>
* </td>
* </tr>
* </table>
* </div>
*
*
* \subsection benchmarks-pca1 Regularized PCA
*
* [288 regularized PCA problems](https://github.com/kul-forbes/scs/blob/master/tests/profiling_matlab/profile_runners/profile_runner_pca.m)
*
* <div>
* <table border="0">
* <tr>
* <td style="padding:1px">
* <img src="images/pca/pca-broyden-100.png" alt="pca-broyden-100" width="93%"/>
* </td>
* <td style="padding:1px">
* <img src="images/pca/pca-anderson-15.png" alt="pca-anderson-15" width="100%"/>
* </td>
* <td style="padding:1px">
* <img src="images/pca/pca-anderson-20.png" alt="pca-anderson-20" width="93%"/>
* </td>
* </tr>
* </table>
* </div>
*
*
* \subsection benchmarks-logreg Logistic regression problems
*
* [288 logistic regression problems](https://github.com/kul-forbes/scs/blob/master/tests/profiling_matlab/profile_runners/profile_runner_logreg.m)
*
* <div>
* <table border="0">
* <tr>
* <td style="padding:1px">
* <img src="images/logreg/logreg-broyden-50.png" alt="logreg-broyden-50" width="95%"/>
* </td>
* <td style="padding:1px">
* <img src="images/logreg/logreg-broyden-100.png" alt="logreg-broyden-100" width="95%"/>
* </td>
* <td style="padding:1px">
* <img src="images/logreg/logreg-anderson-5.png" alt="logreg-anderson-5" width="95%"/>
* </td>
* </tr>
* <tr>
* <td style="padding:1px">
* <img src="images/logreg/logreg-anderson-10.png" alt="logreg-anderson-10" width="95%"/>
* </td>
* <td style="padding:1px">
* <img src="images/logreg/logreg-anderson-15.png" alt="logreg-anderson-15" width="95%"/>
* </td>
* <td style="padding:1px">
* <img src="images/logreg/logreg-anderson-cmp.png" alt="logreg-anderson-cmp" width="95%"/>
* </td>
* </tr>
* </table>
* </div>
*
* \subsection benchmarks-sdp2 Semidefinite programming
*
* [48 SDP problems](https://github.com/kul-forbes/scs/blob/master/tests/profiling_matlab/profile_runners/profile_runner_sdp2.m)
*
* <div>
* <table border="0">
* <tr>
* <td style="padding:1px">
* <img src="images/sdp2/sdp2-broyden-50.png" alt="sdp-broyden-50" width="95%"/>
* </td>
* <td style="padding:1px">
* <img src="images/sdp2/sdp2-broyden-100.png" alt="sdp2-broyden-100" width="95%"/>
* </td>
* <td style="padding:1px">
* <img src="images/sdp2/sdp2-anderson-3.png" alt="sdp2-anderson-3" width="90%"/>
* </td>
* </tr>
* <tr>
* <td style="padding:1px">
* <img src="images/sdp2/sdp2-anderson-5.png" alt="sdp2-anderson-5" width="95%"/>
* </td>
* <td style="padding:1px">
* <img src="images/sdp2/sdp2-anderson-10.png" alt="sdp2-anderson-10" width="95%"/>
* </td>
* <td style="padding:1px">
* <img src="images/sdp2/sdp2-anderson-15.png" alt="sdp2-anderson-15" width="90%"/>
* </td>
* </tr>
* </table>
* </div>
*
* \subsection benchmarks-sdp2b Ill-conditioned SDPs
*
* [48 ill-conditioned SDP problems](https://github.com/kul-forbes/scs/blob/master/tests/profiling_matlab/profile_runners/profile_runner_sdp2b.m)
*
* <div>
* <table border="0">
* <tr>
* <td style="padding:1px">
 * <img src="images/sdp2b/sdp2b-aa-3.png" alt="sdp2b-aa-3" width="95%"/>
* </td>
* <td style="padding:1px">
* <img src="images/sdp2b/sdp2b-aa-5.png" alt="sdp2b-aa-5" width="95%"/>
* </td>
* <td style="padding:1px">
* <img src="images/sdp2b/sdp2b-aa-10.png" alt="sdp2b-aa-10" width="90%"/>
* </td>
* </tr>
* <tr>
* <td style="padding:1px">
* <img src="images/sdp2b/sdp2b-aa-15.png" alt="sdp2b-aa-15" width="95%"/>
* </td>
* <td style="padding:1px">
* <img src="images/sdp2b/sdp2b-bro-50.png" alt="sdp2b-bro-50" width="95%"/>
* </td>
* <td style="padding:1px">
* <img src="images/sdp2b/sdp2b-bro-100.png" alt="sdp2b-bro-100" width="90%"/>
* </td>
* </tr>
* </table>
* </div>
*
* \subsection benchmarks-normcon Norm-constrained norm minimization
*
* [256 norm-constrained problems](https://github.com/kul-forbes/scs/blob/master/tests/profiling_matlab/profile_runners/profile_runner_normcon.m)
*
* <div>
* <table border="0">
 * <tr>
 * <td style="padding:1px">
 * <img src="images/normcon_hard/nch-aa-3.png" alt="normcon-anderson-3" width="95%"/>
 * </td>
 * <td style="padding:1px">
 * <img src="images/normcon_hard/nch-aa-5.png" alt="normcon-anderson-5" width="95%"/>
 * </td>
 * <td style="padding:1px">
 * <img src="images/normcon_hard/nch-aa-10.png" alt="normcon-anderson-10" width="90%"/>
 * </td>
 * </tr>
 * <tr>
 * <td style="padding:1px">
 * <img src="images/normcon_hard/nch-bro-50.png" alt="normcon-broyden-50" width="95%"/>
 * </td>
 * <td style="padding:1px">
 * <img src="images/normcon_hard/nch-bro-100.png" alt="normcon-broyden-100" width="95%"/>
 * </td>
 * <td style="padding:1px">
 * <img src="images/normcon_hard/nch-comp.png" alt="normcon-comparison" width="95%"/>
 * </td>
 * </tr>
* </table>
* </div>
*
* \section maros-meszaros Maros-Meszaros Problems
*
* We tested SuperSCS on the
* [Maros-Meszaros collection of QP problems](http://www.cuter.rl.ac.uk/Problems/marmes.html).
*
* <div>
* <table border="0">
* <tr>
* <td style="padding:1px">
* <img src="images/mm/mm-scs-vs-bro.png" alt="Maros-Meszaros: SCS vs SuperSCS/Broyden" width="95%"/>
* </td>
* <td style="padding:1px">
* <img src="images/mm/mm-scs-vs-aa.png" alt="Maros-Meszaros: SCS vs SuperSCS/AA" width="95%"/>
* </td>
* <td style="padding:1px">
* <img src="images/mm/mm-aa-vs-bro.png" alt="Maros-Meszaros: SuperSCS Broyden vs AA" width="95%"/>
* </td>
* </tr>
* </table>
* </div>
*
* Find details \ref page_maros_meszaros_results "here".
*/
|
A uniform XML-based approach to manage data acquisition hardware devices

A comprehensive model based on XML technologies to interface data acquisition hardware devices for configuration and control purposes is presented. The model builds upon the use of a unified syntax for describing hardware devices, configuration data, test results as well as control sequences. The integration of the model with the online software framework of the CMS experiment is under evaluation.
|
<filename>union/union-service/src/main/java/com/welab/wefe/union/service/service/sms/SmsService.java
/**
 * Copyright 2021 Tianmian Tech. All Rights Reserved.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package com.welab.wefe.union.service.service.sms;

import com.welab.wefe.common.StatusCode;
import com.welab.wefe.common.data.mongodb.constant.SmsBusinessType;
import com.welab.wefe.common.data.mongodb.constant.SmsSupplierEnum;
import com.welab.wefe.common.data.mongodb.entity.sms.SmsDetailInfo;
import com.welab.wefe.common.data.mongodb.entity.sms.SmsVerificationCode;
import com.welab.wefe.common.data.mongodb.repo.SmsDetailInfoReop;
import com.welab.wefe.common.data.mongodb.repo.SmsVerificationCodeReop;
import com.welab.wefe.common.exception.StatusCodeWithException;
import com.welab.wefe.common.util.JObject;
import com.welab.wefe.common.util.StringUtil;
import com.welab.wefe.union.service.config.ConfigProperties;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

import java.util.HashMap;
import java.util.Map;

/**
 * @author aaron.li
 * @Date 2021/10/19
 **/
@Service
public class SmsService {
    private static final Logger LOG = LoggerFactory.getLogger(SmsService.class);

    /**
     * Verification code valid duration, unit: minute
     */
    private final static long CODE_VALID_DURATION_MINUTE = 2;
    private final static long CODE_VALID_DURATION_MILLISECONDS = CODE_VALID_DURATION_MINUTE * 60 * 1000L;

    @Autowired
    private ConfigProperties configProperties;
    @Autowired
    private SmsVerificationCodeReop smsVerificationCodeReop;
    @Autowired
    private SmsDetailInfoReop smsDetailInfoReop;

    /**
     * Send a verification code
     *
     * @param mobile target mobile number
     * @throws StatusCodeWithException if the mobile number is invalid, a code was
     *                                 requested too recently, or sending fails
     */
    public void sendVerificationCode(String mobile, SmsBusinessType smsBusinessType) throws StatusCodeWithException {
        if (StringUtil.isEmpty(mobile)) {
            throw new StatusCodeWithException("Mobile number must not be empty", StatusCode.PARAMETER_CAN_NOT_BE_EMPTY);
        }
        if (!StringUtil.checkPhoneNumber(mobile)) {
            throw new StatusCodeWithException("Invalid mobile number", StatusCode.PARAMETER_VALUE_INVALID);
        }
        if (!checkCodeIsExpire(mobile, smsBusinessType)) {
            throw new StatusCodeWithException("A verification code can be requested only once every " + CODE_VALID_DURATION_MINUTE + " minutes", StatusCode.ILLEGAL_REQUEST);
        }
        // send sms to target mobile
        try {
            String code = generateCode();
            AbstractSendSmsClient sendSmsClient = AliyunSendSmsClient.createClient(configProperties.getAliyunAccessKeyId(), configProperties.getAliyunAccessKeySecret());
            Map<String, Object> smsRequest = new HashMap<>(16);
            smsRequest.put("SignName", configProperties.getSmsAliyunSignName());
            if (smsBusinessType.equals(SmsBusinessType.AccountForgetPasswordVerificationCode)) {
                smsRequest.put("templateCode", configProperties.getSmsAliyunAccountForgetPasswordVerificationCodeTemplateCode());
            } else if (smsBusinessType.equals(SmsBusinessType.MemberRegisterVerificationCode)) {
                smsRequest.put("templateCode", configProperties.getSmsAliyunMemberRegisterVerificationCodeTemplateCode());
            } else {
                throw new StatusCodeWithException("Invalid SMS business type", StatusCode.ILLEGAL_REQUEST);
            }
            smsRequest.put("code", code);
            AbstractSmsResponse smsResponse = sendSmsClient.sendVerificationCode(mobile, smsRequest);

            SmsDetailInfo smsDetailInfo = new SmsDetailInfo();
            smsDetailInfo.setMobile(mobile);
            smsDetailInfo.setReqId(smsResponse.getReqId());
            smsDetailInfo.setReqContent(JObject.create(smsRequest).toString());
            smsDetailInfo.setSupplier(SmsSupplierEnum.Aliyun);
            smsDetailInfo.setSuccess(smsResponse.success());
            smsDetailInfo.setRespContent(smsResponse.getRespBody());
            smsDetailInfo.setBusinessType(smsBusinessType);
            smsDetailInfoReop.save(smsDetailInfo);

            if (!smsResponse.success()) {
                throw new StatusCodeWithException("Failed to send verification code: " + smsResponse.getMessage(), StatusCode.SYSTEM_ERROR);
            }

            SmsVerificationCode smsVerificationCode = new SmsVerificationCode();
            smsVerificationCode.setMobile(mobile);
            smsVerificationCode.setCode(code);
            smsVerificationCode.setBusinessType(smsBusinessType);
            smsVerificationCodeReop.saveOrUpdate(smsVerificationCode);
        } catch (StatusCodeWithException e) {
            LOG.error("Failed to send SMS verification code: ", e);
            throw e;
        } catch (Exception e) {
            LOG.error("Failed to send SMS verification code: ", e);
            throw new StatusCodeWithException("Failed to send verification code: " + e.getMessage(), StatusCode.SYSTEM_ERROR);
        }
    }

    /**
     * Check that the given verification code is valid for the mobile number
     *
     * @param mobile target mobile number
     * @param code   verification code entered by the user
     * @throws StatusCodeWithException if the code is missing, expired or incorrect
     */
    public void checkVerificationCodeValid(String mobile, String code, SmsBusinessType smsBusinessType) throws StatusCodeWithException {
        SmsVerificationCode smsVerificationCode = smsVerificationCodeReop.find(mobile, smsBusinessType);
        if (null == smsVerificationCode) {
            throw new StatusCodeWithException("Verification code is invalid, please request a new one", StatusCode.PARAMETER_VALUE_INVALID);
        }
        long updateTime = smsVerificationCode.getUpdateTime();
        if (System.currentTimeMillis() - updateTime > CODE_VALID_DURATION_MILLISECONDS) {
            throw new StatusCodeWithException("Verification code has expired, please request a new one", StatusCode.PARAMETER_VALUE_INVALID);
        }
        if (!smsVerificationCode.getCode().equals(code)) {
            throw new StatusCodeWithException("Incorrect verification code", StatusCode.PARAMETER_VALUE_INVALID);
        }
    }

    /**
     * Check whether the previously issued verification code has expired
     *
     * @param mobile target mobile number
     * @return true: expired (a new code may be sent), false: still within the validity window
     */
    private boolean checkCodeIsExpire(String mobile, SmsBusinessType smsBusinessTypeEnum) {
        SmsVerificationCode smsVerificationCode = smsVerificationCodeReop.find(mobile, smsBusinessTypeEnum);
        if (null == smsVerificationCode) {
            return true;
        }
        long updateTime = smsVerificationCode.getUpdateTime();
        return System.currentTimeMillis() - updateTime > CODE_VALID_DURATION_MILLISECONDS;
    }

    /**
     * Generate a random 6-digit code in [100000, 999999]
     */
    private String generateCode() {
        return String.valueOf((int) ((Math.random() * 9 + 1) * 100000));
    }
}
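The expiry rule shared by `checkCodeIsExpire` and `checkVerificationCodeValid` reduces to a single timestamp comparison against the two-minute window. A minimal language-neutral sketch of that check in Python (an illustrative stand-in, not part of this service):

```python
import time

# Mirrors CODE_VALID_DURATION_MILLISECONDS in the service above (2 minutes).
CODE_VALID_DURATION_MS = 2 * 60 * 1000

def code_expired(issued_at_ms, now_ms=None):
    """True once the stored code is older than the validity window."""
    if now_ms is None:
        now_ms = int(time.time() * 1000)
    return now_ms - issued_at_ms > CODE_VALID_DURATION_MS

now = int(time.time() * 1000)
print(code_expired(now - 1000, now))            # issued one second ago -> still valid
print(code_expired(now - 3 * 60 * 1000, now))   # issued three minutes ago -> expired
```

Note that the same predicate serves both call sites: returning `True` lets a new code be sent, while in the validation path it rejects the stale code.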
|
// Copyright 2018 <NAME>. All rights reserved.
// Use of this source code is governed by a MIT
// license that can be found in the LICENSE file.
package tokenizer
import (
	"fmt"
	"os"
	"path/filepath"
	"testing"

	"github.com/stretchr/testify/assert"
)

func TestTokenizer(t *testing.T) {
	tokenizer := New()
	tokens := tokenizer.Tokenize("I+believe_life is an intelligent thing: that things aren't random.")
	assert.Equal(t, []string{"I", "believe", "life", "is", "an", "intelligent", "thing", "that", "things", "aren't", "random"}, tokens)
}

func TestTokenizerWithSeparator(t *testing.T) {
	tokenizer := NewWithSeparator(" ")
	tokens := tokenizer.Tokenize("I believe life is an intelligent thing: that things aren't random.")
	assert.Equal(t, []string{"I", "believe", "life", "is", "an", "intelligent", "thing:", "that", "things", "aren't", "random."}, tokens)
}

func TestTokenizerWithKeepingSeparator(t *testing.T) {
	tokenizer := New()
	tokenizer.KeepSeparator()
	tokens := tokenizer.Tokenize("I believe life is an intelligent thing: that things aren't random.")
	assert.Equal(t, []string{"I", " ", "believe", " ", "life", " ", "is", " ", "an", " ", "intelligent", " ", "thing", ":", " ", "that", " ", "things", " ", "aren't", " ", "random", "."}, tokens)
}

func TestConvertSeparator(t *testing.T) {
	assert.Equal(t, [256]uint8{'\t': 1, '\n': 1, ' ': 1}, convertSeparator("\t\n "))
}

func Test3D(t *testing.T) {
	tok := New()
	p := "/Volumes/m/3d"
	filepath.Walk(p, func(path string, info os.FileInfo, err error) error {
		fmt.Println(tok.Tokenize(path))
		return nil
	})
}

func BenchmarkTokenizer(b *testing.B) {
	tokenizer := New()
	for n := 0; n < b.N; n++ {
		tokenizer.Tokenize("I believe life is an intelligent thing: that things aren't random.")
	}
}
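`TestConvertSeparator` above shows that the tokenizer classifies separator bytes with a 256-entry lookup table. A minimal Python rendition of the same idea (a hypothetical sketch, independent of this Go package; the separator set here is chosen for illustration):

```python
def convert_separator(seps):
    """Build a 256-entry lookup table marking each separator byte with 1."""
    table = [0] * 256
    for ch in seps:
        table[ord(ch)] = 1
    return table

def tokenize(text, seps=" \t\n:.+_"):
    """Split text on any character flagged in the separator table."""
    table = convert_separator(seps)
    tokens, current = [], []
    for ch in text:
        if ord(ch) < 256 and table[ord(ch)]:
            if current:                      # close the token in progress
                tokens.append("".join(current))
                current = []
        else:
            current.append(ch)
    if current:
        tokens.append("".join(current))
    return tokens

print(tokenize("I+believe_life is intelligent: not random."))
```

The table lookup makes the per-character separator test a constant-time array index, which is the same trade the Go `[256]uint8` table makes.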
|
<gh_stars>1-10
def bar(foo=0, **kwargs):
    b = foo
    c = foo
|
import sys
sys.setrecursionlimit(10 ** 6)


class UnionFindTree:
    """Disjoint-Set Data Structure

    Union-Find Tree
    complexity:
        init: O(n)
        find, unite, same: O(alpha(n))
    used in SRM505 div.2 900, ATC001 A, DSL1A(AOJ)
    """

    def __init__(self, n):
        self.par = list(range(n))  # parent
        self.rank = [0] * n  # depth of tree

    def find(self, x):
        if self.par[x] == x:
            return x
        else:
            self.par[x] = self.find(self.par[x])
            return self.par[x]

    def unite(self, x, y):
        x, y = self.find(x), self.find(y)
        if x == y:
            return
        if self.rank[x] < self.rank[y]:
            self.par[x] = y
        else:
            self.par[y] = x
            if self.rank[x] == self.rank[y]:
                self.rank[x] += 1

    def same(self, x, y):
        return self.find(x) == self.find(y)


while True:
    N, Q = map(int, input().split())
    if N == Q == 0:
        break
    parents = [0] + [int(input()) - 1 for _ in range(N - 1)]
    queries = []
    marked = set()
    for _ in range(Q):
        k, v = input().split()
        v = int(v) - 1
        if k == "Q":
            queries.append((k, v))
        elif k == "M" and v not in marked:
            marked.add(v)
            queries.append((k, v))
    uf = UnionFindTree(N)
    # Merge every non-marked node into its parent's set up front; marked nodes
    # are merged later while replaying the queries in reverse order.
    for i in range(1, N):
        if i not in marked:
            p_root = uf.find(parents[i])
            uf.par[i] = p_root
            uf.rank[p_root] = max(uf.rank[p_root], uf.rank[i] + 1)
    ans = 0
    for k, v in reversed(queries):
        if k == "Q":
            ans += uf.find(v) + 1
        elif not uf.same(v, parents[v]):
            p_root = uf.find(parents[v])
            uf.par[v] = p_root
            uf.rank[p_root] = max(uf.rank[p_root], uf.rank[v] + 1)
    print(ans)
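For readers unfamiliar with the structure, here is a compact, self-contained demo of the `UnionFindTree` operations (the class is restated minimally so the snippet runs on its own):

```python
class UnionFindTree:
    def __init__(self, n):
        self.par = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        if self.par[x] != x:
            self.par[x] = self.find(self.par[x])  # path compression
        return self.par[x]

    def unite(self, x, y):
        x, y = self.find(x), self.find(y)
        if x == y:
            return
        if self.rank[x] < self.rank[y]:
            x, y = y, x                 # ensure x is the deeper root
        self.par[y] = x                 # union by rank: attach shallower tree
        if self.rank[x] == self.rank[y]:
            self.rank[x] += 1

    def same(self, x, y):
        return self.find(x) == self.find(y)

uf = UnionFindTree(5)
uf.unite(0, 1)
uf.unite(3, 4)
print(uf.same(0, 1), uf.same(1, 3))  # True False
uf.unite(1, 3)
print(uf.same(0, 4))                 # True
```

Path compression plus union by rank is what gives the near-constant amortized cost quoted in the docstring above.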
|
How does growth mindset inform interventions in primary schools? A systematic literature review

ABSTRACT Growth mindset interventions, initially based on evidence from experimental studies, are widely used in schools internationally. This systematic literature review focuses on the use of growth mindset in primary schools, whether as a bespoke intervention or as an embedded cultural practice, to examine how the approach is operationalised. Six databases were searched between August 2018 and February 2019, resulting in 131 papers, ten of which were included for methodological quality and appropriateness of focus. Findings indicate that research in this area is generally small scale and a mixture of process and outcome evaluations of whole-school and targeted interventions. This review found that growth mindset has been applied across a range of subject areas and as a whole-school and classroom intervention. More rigorous implementation and outcome studies are needed in this emerging field. Implications for educational psychology and school practice are discussed.
|