Analysis of Sluice Foundation Seepage Using Monitoring Data and Numerical Simulation For sluices built on soil foundations, seepage safety of the foundation is one of the primary concerns during operation. Monitoring data reflect the real seepage behavior in the foundation, but their shortcoming is that generally only local seepage states can be measured. The seepage field in the whole foundation can be analyzed by numerical simulation. The permeability coefficients of the foundation materials significantly affect the numerical simulation results; however, it is difficult to determine their values accurately. In this paper, an approach based on the response surface method (RSM) is proposed for calibrating the permeability coefficients; the efficiency of parameter calibration is improved by constructing a response surface equation in place of the time-consuming finite element calculation of foundation seepage. The seepage in a sluice foundation was analyzed using monitoring data and numerical simulation. The monitoring data showed that the seepage pressure in the foundation varies periodically, with high values in the flood season and low values in the dry season. After the permeability coefficients of the foundation materials were calibrated against the measured seepage pressure, the seepage fields in the foundation for different water levels were numerically simulated to investigate the cause of the periodic variation of the seepage pressure, and the seepage safety of the foundation was assessed with the calculated seepage gradients. The methods adopted in this study could be applied to seepage analysis for sluice foundations with similar geologic conditions and antiseepage measures.
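The RSM calibration loop described in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: the finite element seepage solver is replaced by a cheap analytic stand-in (`fe_seepage_head`), and the measured pressure head, design points, and parameter range are all hypothetical.

```python
import numpy as np

# Stand-in for the FE seepage model: maps log10(permeability) to a
# simulated seepage pressure head (m). In practice each call would be a
# full finite element run; here it is a cheap analytic placeholder.
def fe_seepage_head(log_k):
    return 12.0 + 3.5 * (-log_k) + 0.4 * log_k ** 2

measured_head = 32.4  # hypothetical piezometer reading (m)

# 1) Sample the expensive model at a few design points
design = np.linspace(-6.0, -3.0, 7)       # log10 k, k in m/s (assumed range)
responses = np.array([fe_seepage_head(k) for k in design])

# 2) Fit a quadratic response surface to replace the FE model
coeffs = np.polyfit(design, responses, 2)
surface = np.poly1d(coeffs)

# 3) Calibrate: pick the log10 k whose surface prediction best matches
#    the monitoring data (dense grid search on the cheap surrogate)
grid = np.linspace(-6.0, -3.0, 5001)
k_star = grid[np.argmin((surface(grid) - measured_head) ** 2)]
```

Because the response surface is a closed-form polynomial, the search in step 3 costs microseconds per candidate, whereas each true FE evaluation could take minutes; this is the efficiency gain the paper attributes to RSM.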
|
Overview of wireless implantable energy supply and communication technology With the development of integrated circuits and microelectronics, integrated and miniaturized implantable medical devices are increasingly used in modern medical technologies, e.g., cardiac pacemakers, vasodilators, and cochlear implants. However, the normal operation of these devices depends on a sufficient energy supply and on the bidirectional transmission of signals between the inside and outside of the body. Because of the constraints of the working environment, most implanted electronic devices must fit into a very small space, which is a challenge for existing technology. In this paper, current wireless implantable energy supply and communication technologies are reviewed to identify the best available options, thereby providing a reference for method selection when designing implantable medical systems.
|
Lionel Messi has become Barcelona’s most-decorated player of all time after the 2-1 victory over Sevilla in the Spanish Super Cup on Sunday evening.
The Argentine has now won a staggering 33 trophies for Barcelona – one more than Andres Iniesta, who won 32 during his time at Camp Nou before joining Japanese side Vissel Kobe in May.
The win against Sevilla in Morocco was Messi’s eighth victory in the Spanish Super Cup since he made his debut.
The 31-year-old has also won nine La Liga titles, the Champions League four times, the Club World Cup on three occasions and the Copa del Rey six times.
Barcelona fell behind against Sevilla after Pablo Sarabia opened the scoring in the ninth minute.
Pique pulled Barcelona level before the break after pouncing on the rebound from Messi’s free-kick, which hit the post.
Ousmane Dembele then scored a stunning winner with 12 minutes remaining as his strike from the edge of Sevilla’s area arrowed into the top corner.
|
Extreme warfarin hypersensitivity after oophorectomy. We report the case of a woman who developed unexplained warfarin hypersensitivity after undergoing surgery to remove her ovaries. Before surgery, the patient's international normalised ratio (INR) control was stable and uneventful, but 11 days after her operation she presented with extremely high (frequently ≥10) INRs. Warfarin was discontinued on day 24 postoperation, but 11 days later the plasma warfarin concentration was still high at 4.8 mg/L (therapeutic range 0.7-2.3 mg/L). After cessation of warfarin, she required frequent doses of oral and intravenous vitamin K1 (totalling 48 mg) as well as two doses of prothrombin complex concentrate to normalise the INR. The patient was switched from warfarin to heparin, then to dabigatran, with no further thrombosis or bleeding. While she was on heparin, the kinetics of warfarin elimination and vitamin K status were found to be normal, and the reason for the onset of the extreme sensitivity to warfarin remains unknown.
|
package cn.leancloud.im.v2;
/**
* Extend this class to handle events related to custom (typed) messages.
*/
public class LCIMTypedMessageHandler<T extends LCIMTypedMessage> extends MessageHandler<T> {
/**
* Override this method to handle received messages.
*
* @param message message instance.
* @param conversation conversation instance.
* @param client client instance.
*/
@Override
public void onMessage(T message, LCIMConversation conversation, LCIMClient client) {
// no-op by default; subclasses override to process incoming messages
}
/**
* Override this method to handle message receipts.
*
* @param message message instance.
* @param conversation conversation instance.
* @param client client instance.
*/
@Override
public void onMessageReceipt(T message, LCIMConversation conversation, LCIMClient client) {
// no-op by default; subclasses override to process message receipts
}
@Override
public void onMessageReceiptEx(T message, String operator, LCIMConversation conversation, LCIMClient client) {
onMessageReceipt(message, conversation, client);
}
}
|
Martinez also claimed her former husband had physically abused her in the early 2000s, according to an FBI affidavit filed in court.
She will be sentenced on Aug. 5. She could face a $250,000 fine and up to 10 years in prison, according to the release.
|
Recent Advances in Biometrics and Its Applications Introduction Biometric recognition has become a burgeoning research area due to industrial and governmental needs arising from security and privacy concerns. It has also become a center of focus for many authentication and identification applications in the civil and forensic fields. This Special Issue aims to provide original research papers, as well as review articles, focusing on recent advances in biometrics and its applications. It covers a wide range of topics in the field of biometrics, including biometrics-based authentication and identification; physiological and behavioral biometrics (e.g., finger, palm, face, eye, ear, iris, retina, gait, handwriting, voice, etc.); biometric feature extraction and matching; signal, image, and video processing in biometrics; advanced pattern recognition in biometrics; machine learning and deep learning in biometrics; fusion techniques in biometrics; soft biometrics; multimodal biometrics; security and privacy in biometrics; Big Data challenges in biometrics; online biometric systems; embedded biometric systems; emerging biometrics; and related applications. In This Special Issue The present issue consists of twelve articles on topics within the wide range covered by this Special Issue. Among the many submissions we received, these articles were accepted after a careful peer-review process. This section summarizes their main contributions. Wang et al.
present an efficient biometric identification method using electrocardiogram signals. The proposed method is based on a feature learning process in the wavelet domain using sparse temporal-frequency autoencoding. Iula and Micucci propose a palmprint recognition system based on ultrasound images. The system uses a gel pad to obtain acoustic coupling between the ultrasound probe and the user's hand; the collected volumetric image is then processed to extract 2D images of the palmprint at various under-skin depths, which can be used for palmprint recognition. Ammour et al. propose a multimodal biometric identification system based on face and iris traits. The system relies on an efficient feature extraction method, which applies 2D log-Gabor filters for iris feature extraction and singular spectrum analysis with wavelet transform for facial feature extraction, and on a fusion process that combines the relevant features from both modalities. Nakanishi and Maruoka study biometric recognition using electroencephalograms (EEGs) stimulated by personal ultrasound. They propose a method based on individual features extracted from the log power spectra of EEG signals using principal component analysis; verification is performed with a support vector machine applied to the extracted features. Heravi et al. study the impact of aging on the performance of three-dimensional facial verification. The authors propose an interesting method to simulate the possible future facial appearance of a young adult. The method, based on three-dimensional faces obtained from a 3D morphable face aging model, enhances the performance of the 3D verification process. Ilyas et al. present an anti-spoofing system for human age verification based on auditory perception, which is identified as vulnerable to spoofing attacks. Fang et al.
investigate the correlation between the left and right irises of an individual using a convolutional neural network (CNN) for iris recognition. They propose a method based on the VGG16 architecture to classify, with high accuracy, left and right irises as coming from the same or different individuals. Other related applications that can involve biometrics have also been presented, proposing detection and recognition approaches in the domains of watermarking, health, and nutrient analysis. The ideas in these contributions for data processing, feature extraction, learning, and embedded systems appear worth considering in the field of biometrics, and the authors highlight some perspectives on how their work could be carried over to biometric applications. Future Perspectives We solicited original research covering novel theories, innovative methods, and meaningful applications that can potentially lead to significant advances in biometrics. Based on the various contributions discussed in this Special Issue, research and development in this field will remain very active. Many different directions can be envisaged that, from our point of view, require more attention, for example, deep learning for biometrics, Big Data challenges in biometrics, and biometrics in Internet-of-Things (IoT) technology. Finally, we hope that readers will find useful information and interesting contributions in this Special Issue.
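As a concrete illustration of the EEG pipeline summarized above (log power spectra followed by principal component analysis), here is a minimal numpy sketch. It is not the authors' code: the "EEG" epochs are random stand-in data, and the epoch length and number of components are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in data: 20 hypothetical "EEG" epochs of 256 samples each
epochs = rng.standard_normal((20, 256))

# Feature representation: log power spectrum of each epoch
power = np.abs(np.fft.rfft(epochs, axis=1)) ** 2
log_power = np.log(power + 1e-12)

# PCA via SVD on the mean-centred feature matrix
centred = log_power - log_power.mean(axis=0)
U, S, Vt = np.linalg.svd(centred, full_matrices=False)
n_comp = 5
features = centred @ Vt[:n_comp].T  # low-dimensional per-epoch features
```

In the summarized paper the resulting features feed a support vector machine for verification; the sketch shows only the dimensionality-reduction step (a 256-sample epoch collapses to 5 numbers per epoch).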
|
__version_info__ = '0.8.2'
__version__ = '0.8.2'
version = '0.8.2'
|
Horizontal drilling machines are utilized to drill underground bore holes for utility lines and other underground pipes. Using this type of trenchless drilling minimizes disruption of surface soil. This decreases the cost of laying utility lines especially in developed areas, and also substantially decreases the possibility of damaging previously buried utility lines and other underground structures.
In most cases the drilling machine comprises a frame, a motorized drive system mounted on the frame, and a drill string connected on one end to the drive system and on the other end to a boring tool/boring head assembly. The motorized drive system provides thrust to advance the drill string through the ground according to a planned bore path.
The boring head is commonly steered using a "slant-face" drill bit or some other suitable mechanism. A radio transmitter or other tracking device such as a "beacon" encased in housing may be provided in or directly behind the boring head to permit the tracking of the boring head.
Underground boring operations involve thrusting and rotating the boring head in rocky and abrasive soil conditions, thereby increasing the probability of damage to the component parts of the boring head. To permit removal and/or replacement of the drill bit and other components, it is desirable for the boring head to be removably attached to the end of the drill string or beacon housing. To this end, many boring heads are attached by a threaded connection. However, because of the high rotational impact loading exerted on such joints, it is often difficult to disengage or "break out" these threaded joints.
In addition, it is difficult to use a threaded joint to connect a replaceable head. This is especially true when using a beacon housing in which the transmitter is inserted from the side for easy maintenance of the electronics, and where the orientation of the head should match the "clocking" of the beacon.
Therefore, while conventional removable boring heads have provided advantages in repair and maintenance of boring head assemblies, there remains a need for a more easily detachable boring head that is used in conjunction with a radio transmitter. The present invention is directed to an improvement to boring head assemblies with removable boring heads that specifically address this problem.
|
Estimation of sibilant groove formation and sound generation from early hominin jawbones. The speech production capability for sibilant fricatives of early hominins was assessed by interpolating the modern human vocal tract to an Australopithecine specimen based on jawbone landmarks and then simulating the airflow and sound generation. The landmark interpolation demonstrates the possibility of forming the sibilant groove in the anterior part of the oral tract, and the results of the aeroacoustic simulation indicate that early hominins had the potential to produce broadband fricative noise given a constant supply of airflow to the oral cavity, although the ancestral tongue's deformation ability is still uncertain and the results are highly speculative.
|
They had a meeting of the minds and they were two short.
Over at The Corner, the Party of Ideas took a holiday break from beating up on some guy in a stock photo, which went on for three glorious days, and everybody went home except K-Lo who, alas, simultaneously found the keys to the blog and to the cabinet where they stash the sacramental wine. She ran a little amuck with the Jesus when nobody was looking, but she also managed to favor us with a Christmas message from Princess Dumbass of the Northwoods, who has a book-like product to shill, and who suddenly realized that insulting the pope might not be the best sales strategy since Catholics read, too, and some of them are suckers, and they might be willing to spring for the 99 cents the book-like entity is going for as it falls off the edge of the remainders table at BJ's Wholesale Club. Looking deeply into K-Lo's eyes, and seeing a familiar vacancy staring back, the Princess pitched as hard as she could. This is not word salad. This is a word Jell-O mold.
PALIN: Why do you say that? Because I answered candidly one simple tweeted question about the pope in Jake Tapper's CNN interview? Let me clear this up again: I have great respect for Pope Francis. The answer I gave about Pope Francis in one interview was blown way out of proportion, so c'mon NRO, be professional about this. I even clarified on my Facebook page that I apparently wasn't as clear in my response as some wanted. In that particular interview I was trying to say that I don't trust the media to get it right when reporting on much, much less the Vatican, which is why I think it's important to do your own research when it comes to things the media report about the pope. I have many Catholic family members and friends, and many assure me they believe Pope Francis is just as sincere and faithful a shepherd of the Church as his two predecessors, whom I greatly admired. (Keep in mind that I come from a big Irish Catholic family on my mother's side. I heard from friends and relatives when my taken-out-of-context comment about the pope went viral, because these respected people in my life know who I am.) Of course, I love my Catholic family and friends, and I respect the work the Church does to help humanity, advance a culture of life, and lift up the poor from lives of deep hardship and dependence.
Don't trust the people who actually report on the Vatican. Trust your drunk Uncle Seamus who's passed out in the green beans. I am glad that she clarified that she wasn't clear. I am also glad that the "respected people" in her life know who she is. It gives them time to turn off all the lights in the house and get very, very quiet when they hear the roar of the snowmacheen out by the driveway. There is only so much degook that you can gobble, even on Christmas.
|
#pragma once
#include "BST.h"
#include "Set.h"
template <typename T>
class BSTSet : public Set<T> {
public:
BSTSet() {}
void add(T e) override { bst.add(e); }
void remove(T e) override { bst.remove(e); }
bool contains(T e) const override { return bst.contains(e); }
int getSize() const override { return bst.getSize(); }
bool isEmpty() const override { return bst.isEmpty(); }
private:
BST<T> bst;
};
|
Plasma transfusion volume and liver transplantation safety We have carefully evaluated the recently published original article from Bartelmaos and colleagues, in which three virally secured plasmas transfused in liver transplantation were compared. While recognizing the effort involved in performing this randomized controlled trial, we cannot agree with certain methodologic aspects and findings. We consider the selection of the total volume of plasma transfused as the primary outcome for this equivalence study to be inappropriate, given that conditioning volumes vary between the three plasma types and should not be compared without applying a correction factor; hence, the calculation of an equivalence margin of 20% based on transfused volume is not feasible. In addition, the expected volume to be transfused (760 ± 300 mL, i.e., between three and four times lower than in the actual trial) and the standard deviation seem to be empirical (hypothetical coefficient of variation of 40%) and far from reality (coefficient of variation between 64 and 79% in the trial). Despite randomization, the patients in the methylene blue (MB) fresh-frozen plasma (FFP) treatment group had the lowest baseline levels of fibrinogen, Factor (F)V, and prothrombin time. When compared by a t test, fibrinogen, prothrombin time, and FV were significantly lower (p < 0.05) in the MB-FFP patient group than in the solvent/detergent (S/D) FFP or quarantine FFP group. Consequently, the secondary outcome pertaining to the extent of correction of laboratory coagulation variables should be interpreted with caution. The study outcomes were compared based on medians, given the nonnormal distribution of transfused plasma volumes, but the goodness-of-fit test applied is not mentioned (i.e., Kolmogorov-Smirnov, Shapiro-Wilk, or Anderson-Darling test).
Nevertheless, the sample sizes in the three groups are large enough to apply the central limit theorem, which expresses the fact that, given certain conditions, the mean of a sufficiently large number of independent random variables with finite mean and variance will be approximately normally distributed. Besides, a log transformation is often enough to obtain a normal distribution. It is better to deduce something about the population from which the samples originated, and this is best done with estimates of variables and confidence intervals (CIs), whereas nonparametric tests typically make use of ordinal information only. Furthermore, it is difficult to perform flexible modeling with nonparametric tests, for example, allowing for confounding factors using multiple regression. Plasma volume and plasma units were log-transformed for the linear regression models but not to assess the primary outcomes. The use of MB-FFP led to a 24% higher median volume of plasma transfused when compared to the consumption of the two other (combined) FFP groups. This combination of the S/D-FFP and quarantine (Q) groups was not stated in the Materials and Methods section, no reason for it is given, and, again, it is based on the ratio of medians only, with no intervals provided. Therefore, inferiority cannot be evaluated. Instead, the primary objective of the trial was to determine whether the plasma volume transfused was equivalent when using treated plasma (MB-FFP and S/D-FFP) compared to untreated plasma (Q-FFP), and this comparison was not shown. When the number of plasma units transfused intraoperatively was analyzed, the medians were compared. When the means of the units are compared, statistical differences are not observed: the ratio of MB-FFP with respect to S/D-FFP and Q-FFP is +7.8 and +16.1%, respectively, and the ratio of S/D-FFP with respect to Q-FFP is +7.6%.
Again, CIs are not given, but all the ratios fall within the 20% interval of equivalence; accordingly, the results are at least not conclusive. On the other hand, the number of plasma units (medians) at the first transfusion episode was equivalent between the three groups. Nevertheless, when compared by a t test, the mean number of units used was significantly higher with S/D-FFP than with Q-FFP (p < 0.05). As stated by the authors, after adjustment for bleeding risk factors, the upper 95% CIs of the total volume and number of plasma units transfused with MB-FFP fall outside the 20% equivalence margin, but the lower 95% CIs are within it. The authors conclude that equivalence cannot be shown, but in fact inferiority cannot be proven either. Although no adverse-event differences between the three plasmas could be demonstrated, we noted in particular the apparent increased proportion of patients with vascular complications in the S/D-FFP group, as well as the difference in the percentage of patients with acute graft rejection. Although the sample size was too small to provide the power to detect differences, it is worth noting that the Mantel-Haenszel common odds ratio (OR) risk estimate (Table 1) of patients with adverse events for Q/MB was 1.97 (p = 0.04). The OR of acute graft rejection for Q/MB was 2.68 (p = 0.074), and the OR of hepatic and splanchnic vascular thrombosis for S/D/MB was 3.760 (p = 0.0497). That is, the risk of experiencing adverse events was double when patients were transfused with Q-FFP rather than MB-FFP, and the risk of vascular thrombosis was more than three times higher when patients were transfused with S/D-FFP rather than MB-FFP; these are interesting outcomes considering that the patients in the MB-FFP group were suffering from more severe illnesses. The comparison of the first transfusion episode, bleeding, red blood cells transfused, and hemostasis support, including the number of platelets transfused, did not show significant differences between the three groups.
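For readers checking the letter's arithmetic, an odds ratio and its Woolf (log-method) 95% CI can be computed from a 2x2 table in a few lines. The counts below are hypothetical, not the trial's data; the trial's own Mantel-Haenszel estimates would additionally stratify over confounders.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with Woolf 95% CI for a 2x2 table
    [[a, b], [c, d]] (group x event/no-event); all cells must be > 0."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log OR
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 12/38 events in one arm, 6/44 in the other
or_, lo, hi = odds_ratio_ci(12, 38, 6, 44)
```

With these made-up counts the point estimate is about 2.3 but the interval spans 1, which mirrors the letter's point: an estimate with p near 0.05 on a small sample should be read together with its confidence interval, not alone.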
|
Mutations in exons 10 and 11 of human glucokinase result in conformational variations in the active site of the structure contributing to poor substrate binding explains hyperglycemia in type 2 diabetic patients Mutations in the glucokinase (GK) gene play a critical role in the establishment of type 2 diabetes. In our earlier study, the R308K mutation in GK in a clinically proven type 2 diabetic patient showed structural and functional variations that contributed immensely to the hyperglycemic condition. As an extension of this work, a cohort of 30 patients with an established type 2 diabetic condition was chosen, and exons 10 and 11 of GK were PCR-amplified and sequenced. The sequence alignment showed A379S, D400Y, E300A, E395A, E395G, H380N, I348N, L301M, M298I, M381G, M402R, R308K, R394P, R397S, and S398R mutations in 12 different patients. The structural analysis of these mutated GKs showed a variable number of β-α-β units, β-hairpins, β-bulges, β-strands, α-helices, helix-helix interactions, β-turns, and γ-turns, along with RMSD variations, when compared to wild-type GK. Molecular modeling studies revealed that the substrate showed variable binding orientations and could not fit into the active site of these mutated structures; moreover, it was expelled from the conformations. Therefore, these structural variations in GK due to mutations could be one of the strongest reasons for the hyperglycemic levels in these type 2 diabetic patients.
|
package statusutils_test
import (
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
"github.com/solo-io/gloo/pkg/utils/statusutils"
"github.com/solo-io/gloo/projects/gloo/api/external/solo/ratelimit"
ratelimitpkg "github.com/solo-io/gloo/projects/gloo/pkg/api/external/solo/ratelimit"
v1 "github.com/solo-io/gloo/projects/gloo/pkg/api/v1"
"github.com/solo-io/solo-kit/pkg/api/v1/resources"
"github.com/solo-io/solo-kit/pkg/api/v1/resources/core"
)
var _ = Describe("Status", func() {
var (
statusClientRed, statusClientGreen resources.StatusClient
)
BeforeEach(func() {
statusClientRed = statusutils.GetStatusClientForNamespace("red")
statusClientGreen = statusutils.GetStatusClientForNamespace("green")
})
Context("single status api (deprecated)", func() {
It("works with RateLimitConfig (api)", func() {
rateLimitConfig := &ratelimit.RateLimitConfig{}
newStatus := &core.Status{
State: core.Status_Accepted,
}
// we should not panic
statusClientRed.SetStatus(rateLimitConfig, newStatus)
// we should not panic and we should get out what we put in
statusRed := statusClientRed.GetStatus(rateLimitConfig)
Expect(statusRed).To(Equal(newStatus))
// we should not panic and we should get out what we put in
statusGreen := statusClientGreen.GetStatus(rateLimitConfig)
Expect(statusGreen).To(Equal(newStatus))
})
It("works with RateLimitConfig (pkg)", func() {
rateLimitConfig := &ratelimitpkg.RateLimitConfig{}
newStatus := &core.Status{
State: core.Status_Accepted,
}
// we should not panic
statusClientRed.SetStatus(rateLimitConfig, newStatus)
// we should not panic and we should get out what we put in
statusRed := statusClientRed.GetStatus(rateLimitConfig)
Expect(statusRed).To(Equal(newStatus))
// we should not panic and we should get out what we put in
statusGreen := statusClientGreen.GetStatus(rateLimitConfig)
Expect(statusGreen).To(Equal(newStatus))
})
})
Context("namespaced statuses api", func() {
It("works with Upstream", func() {
upstream := &v1.Upstream{}
newStatus := &core.Status{
State: core.Status_Accepted,
}
statusClientRed.SetStatus(upstream, newStatus)
// we should get out what we put in
statusRed := statusClientRed.GetStatus(upstream)
Expect(statusRed).To(Equal(newStatus))
// we should get nil, since the status is stored under a different key
statusGreen := statusClientGreen.GetStatus(upstream)
Expect(statusGreen).To(BeNil())
})
})
})
|
Back in 2010, Apple ran its very first commercial for the iPad during the 81st annual Academy Awards. So it's only fitting that the company has returned again and again to remind people of what they can do with sleeker, more powerful tablets. And this year, Apple is making its strongest pitch yet for the iPad as a filmmaking tool.
If you can't wait until the Oscars, you can watch the spot now on Apple's website, and later on YouTube.
With Groenland's "Our Hearts Like Gold" playing softly in the background of the new ad, which launched Sunday, director Martin Scorsese in a voiceover talks about creativity and how "every step is a first step; every brush stroke is a test; every scene is a lesson; every shot is a school."
As is typical of most Apple commercials, Scorsese never mentions the Apple iPad Air, movie-making apps or the process of shooting a movie on a tablet.
Instead, the images tell the true tale of a collection of Los Angeles County High School for the Arts students who were assigned to make a movie with an iPad. The one-minute spot features clips of various teams using the tablet to shoot and edit a variety of scenes.
As a small text qualifier at the end of the ad notes, the students did use some additional hardware, including a boom mic, dolly and radio-controlled airplane. But the rest was all completed using the iPad and apps from the App Store.
Los Angeles County High School for the Arts students during the moment they learned that the Apple spot, which they had just seen for the first time, would air during the Oscars.
What's not seen, but was told to Mashable by Apple, is that the commercial was — like the one that aired during the Grammys earlier this month — filmed entirely on the iPad Air 2. Its iSight camera supports 1080p 30fps video.
One wonders if Scorsese, the director of such classics as The Departed and Taxi Driver, would ever consider shooting a movie on a tablet. The filmmaker is currently shooting on location outside the United States, according to Apple, and was unavailable for comment.
You can't do your work according to the people's values. I'm not talking about 'following your dream,' either. I never like the inspirational value of that phrase. Dreaming is a way of trivializing the process, the obsession that carries you through the failure as well as the successes which could be harder to get through.
If you're dreaming, you're sleeping. It's important and imperative to always be awake to your feelings, your possibilities, your ambitions. But you also know this, for your work, for your passions, every day is a rededication.
Painters, dancers, writers, filmmakers, it's the same for all of you, all of us. Every step is a first step, every brush stroke is a test, every scene is a lesson, every shot is a school. So, let the learning continue.
|
Numerical Simulation of Nonlinear Ultrasonic Waves Due to Bi-material Interface Contact Boundary integral equations are formulated to investigate nonlinear waves generated by a debonding interface of a bi-material subjected to an incident plane wave. For the numerical simulation, the IRK (implicit Runge-Kutta method) based CQ-BEM (convolution quadrature boundary element method) is developed. The interface conditions for a debonding area, consisting of the three phases of separation, stick, and slip, are developed for the simulation of nonlinear ultrasonic waves. Numerical results are obtained and discussed for normal incidence of a plane longitudinal wave onto the nonlinear interface under a static compressive stress. Introduction Nonlinear ultrasonic nondestructive testing has been developed over the last decade, since nonlinear waves are generated by the nonlinear elasticity of materials and by unbonded interface conditions, and are very sensitive to degradation of material properties at a very early stage. However, the mechanism of generation of nonlinear waves has not yet been well understood from a theoretical and/or numerical point of view. So far, two-dimensional simulations of nonlinear ultrasonic waves have been carried out, and the axisymmetric problem of a circular crack subjected to normal incidence of a longitudinal wave has been solved numerically. However, no full three-dimensional analysis has been done. It is therefore necessary to conduct numerical simulations with three-dimensional, realistic models. In this paper, three-dimensional boundary integral equations are formulated for a circular interface crack with unbonded conditions in bi-material half spaces. The integral equations are discretized using the IRK-based CQ-BEM and solved numerically to investigate the nonlinearity involved in ultrasonic waves scattered by the nonlinear interface crack.
Formulation of boundary integral equations The model of a bi-material, consisting of two semi-infinite domains, is shown in figure 1, together with the bonding and debonding areas of the bi-material interface. Assuming that an incident plane wave is given on one side, we may formulate time-domain boundary integral equations in which the reflected and transmitted waves are the unknowns. However, the time-domain boundary integral equations for the bi-material have the following disadvantages. For normal incidence of a plane wave, reflection and transmission occur at all points on the interface from the initial time step; because of this, truncation errors can be introduced at the edge elements where the infinite interface is truncated in the numerical analysis. For oblique incidence, the initial conditions, given by zero displacement and velocity for the unknown reflected and transmitted waves at all points in the domains, cannot be satisfied unless an infinite interface is taken into account in the numerical analysis. Therefore, in this paper, an integral formulation in which the unknown variables are only the waves scattered from the debonding area is proposed in order to overcome these difficulties. If the flat interface of infinite extent is perfectly bonded and is subjected to a plane wave incidence, it is easy to calculate analytically the "free field", defined as the summation of the incident wave and the reflected wave on the incident side, together with the transmitted wave field on the other side. If debonding exists in a local area on the interface, the free field is disturbed by the wave field scattered by the debonding area, and the total displacement can be expressed as the sum of the free field and the scattered field. Since the scattered wave satisfies the initial condition and the radiation condition, the boundary integral equations for the scattered field can be formulated.
In solving the boundary integral equations, the convolution integrals with time are evaluated by the IRK based CQM and the surface integrals over the bonding and debonding interfaces are discretized by constant elements. For acceleration, the fast multipole method is applied to the IRK based CQ-BEM. The boundary integral equations in the two domains are simultaneously solved using appropriate interface conditions. The interface condition on the bonding area is the continuity of displacement and stress. For the debonding area, three types of interface conditions, "separation", "stick", and "slip", are considered. "Separation" means that the two surfaces of the upper and lower materials are separated with no traction, while "stick" and "slip" are contact conditions under a compressive normal stress state. For the "stick" condition, the surfaces of the two materials move with no relative velocity. On the other hand, the "slip" condition allows a relative tangential movement with dynamic friction force. Numerical examples Numerical examples are presented for nonlinear ultrasonic wave problems of a bi-material interface subjected to a small static compressive stress of 0.74kPa normal to the interface and to normally incident longitudinal waves with 2 and 4MHz frequencies. The incident wave is a sinusoidal plane wave with three cycles and 10nm amplitude. The debonding area is a circular interface crack with radius 0.5mm. We assume that the material constants for the bi-material are given in figure 1. The coefficients of static and kinetic friction are given by 0.61 and 0.47, respectively. 
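The three-phase interface logic described above can be reduced to a scalar sketch: classifying a point on the debonding area from its normal gap and tractions. The function name, sign conventions, and threshold form are illustrative simplifications, not the actual BEM interface conditions:

```python
# Sketch of the separation/stick/slip decision described above, reduced to
# scalar normal/tangential components (illustrative, not the paper's scheme).

def contact_state(sigma_n, tau, gap, mu_s):
    """Classify a debonding-interface point as 'separation', 'stick' or 'slip'.

    sigma_n : normal traction (negative = compression)
    tau     : magnitude of tangential traction
    gap     : normal gap between the two surfaces
    mu_s    : static friction coefficient
    """
    if gap > 0.0 or sigma_n > 0.0:
        return "separation"   # surfaces apart, traction-free
    if tau <= mu_s * abs(sigma_n):
        return "stick"        # no relative tangential velocity
    return "slip"             # tangential traction limited by friction

# The paper's static friction coefficient (0.61) is used for illustration.
assert contact_state(0.1, 0.0, 0.0, 0.61) == "separation"
assert contact_state(-1.0, 0.3, 0.0, 0.61) == "stick"
assert contact_state(-1.0, 0.9, 0.0, 0.61) == "slip"
```

In the actual time-stepping scheme, the state is re-evaluated at each time step and each element, which is what produces the "clapping" nonlinearity in the scattered field.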
Vertical displacement at the center points on top and bottom debonding area In figures 2 (a) and (b), the vertical displacements at the center points on the top and bottom debonding surfaces, subjected to 2MHz and 4MHz sinusoidal incident waves, respectively, are shown. Figure 3 shows the vertical displacements of total and scattered wave fields ((a) and (b)) at the internal point located 2.0mm above the center of the debonding area, and the normalized frequency spectra of the scattered waves ((c) and (d)). Figures (a) and (c) are the results for the 2MHz sinusoidal incident wave, and figures (b) and (d) are for the 4MHz case. Note that the vertical axes for the vertical displacements of scattered waves are shown on the right side of the graphs. For comparison, the results for the displacements of the free fields in the case of no debonding are shown. Vertical displacement 2.0mm above the center of debonding area In both figures (a) and (b), the time variations of vertical displacements of total waves show periodic waveforms and small aftereffects. However, the aftereffects do not continue for a long time compared with the results of two dimensional simulations. In the case of 2MHz, the vertical displacement of the scattered wave generated by the clapping motion shows a shorter period than that of the fundamental frequency of 2MHz, and hence large second and higher harmonics as well as subharmonics are seen in the frequency spectrum (see figure (c)). In the case of 4MHz, on the other hand, the waveform of the scattered wave is only slightly distorted compared with the case of 2MHz, and the frequency spectrum shows relatively small amplitudes of the second and third harmonics. From these results, it can be said that the generation of nonlinear ultrasonic waves like higher harmonics and subharmonics due to contact conditions on the debonding interface depends largely on the frequency. 
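The appearance of higher harmonics can be illustrated with a toy model: one-sided "clapping" contact acts roughly like half-wave rectification of the incident displacement, which injects energy at multiples of the fundamental. This sketch is a deliberate caricature, not the paper's BEM simulation:

```python
# Toy model: clipping a sinusoid (crude one-sided contact nonlinearity)
# produces a strong second harmonic in its spectrum.
import numpy as np

fs = 64e6                                # sampling rate [Hz]
f0 = 2e6                                 # fundamental frequency [Hz]
t = np.arange(0, 3 / f0, 1 / fs)         # three cycles, as in the paper
incident = np.sin(2 * np.pi * f0 * t)
clapped = np.maximum(incident, 0.0)      # half-wave rectification

spec = np.abs(np.fft.rfft(clapped))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
idx_f0 = np.argmin(np.abs(freqs - f0))
idx_2f0 = np.argmin(np.abs(freqs - 2 * f0))

# The second harmonic carries non-negligible energy relative to the fundamental.
assert spec[idx_2f0] > 0.1 * spec[idx_f0]
```

For the half-wave-rectified sine the second-harmonic amplitude is 2/(3π) of the original, i.e. roughly 40% of the retained fundamental, which is qualitatively the behavior seen in the 2MHz spectrum.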
Conclusions In this paper, the boundary integral equations for a bi-material subjected to an incident plane wave were formulated, and the IRK based CQ-BEM was implemented in the numerical simulation. The interface conditions for debonding areas, consisting of the three phases of "separation", "stick", and "slip", were developed for the simulation of nonlinear ultrasonic waves. Numerical results showed that the generation of nonlinear ultrasonic waves is largely dependent on the frequency as well as the contact conditions on the interface.
|
package dev.mieser.tsa.web.formatter;
import static java.nio.charset.StandardCharsets.UTF_8;
import static org.assertj.core.api.Assertions.assertThat;
import static org.assertj.core.api.Assertions.assertThatIllegalArgumentException;
import java.util.Locale;
import org.junit.jupiter.api.Test;
class Base64FormatterTest {
private final Base64Formatter testSubject = new Base64Formatter();
@Test
void parseThrowsExceptionWhenStringCannotBeDecoded() {
// given
String illegalBase64 = "I'm Base64, trust me, bro!";
// when / then
assertThatIllegalArgumentException()
.isThrownBy(() -> testSubject.parse(illegalBase64, Locale.GERMAN))
.withMessage("Not a valid Base64 string.");
}
@Test
void parseReturnsDecodedBinaryData() {
// given
String encoded = "NzM1NTYwOA==";
// when
byte[] decodedBinaryData = testSubject.parse(encoded, Locale.ENGLISH);
// then
assertThat(decodedBinaryData).isEqualTo("7355608".getBytes(UTF_8));
}
@Test
void printReturnsEncodedBinaryData() {
// given
byte[] binaryData = "7355608".getBytes(UTF_8);
// when
String encodedData = testSubject.print(binaryData, Locale.CANADA);
// then
assertThat(encodedData).isEqualTo("NzM1NTYwOA==");
}
}
|
// Copyright 2019 <NAME> License MIT
// Adapted for MTRN2500 Assignment 3 2019
// <NAME> and <NAME>
#include "config_parser.hpp"
#include <algorithm> // std::remove_if
#include <cctype> // std::isspace
#include <iostream>
#include <sstream> // std::istringstream
#include <string>
#include <unordered_map>
namespace assignment3 {
ConfigReader::ConfigReader(std::istream& config_file) {
// Get line by line of config_file
std::string key;
std::string value;
int n_lines = 0;
while (std::getline(config_file, key, ':') &&
std::getline(config_file, value)) {
std::cout << "Line " << ++n_lines << ": " << key << ":" << value
<< std::endl;
// Purge whitespace from lines (unsigned char cast avoids UB in std::isspace)
key.erase(std::remove_if(key.begin(), key.end(),
[](unsigned char c) { return std::isspace(c) != 0; }),
key.end());
value.erase(std::remove_if(value.begin(), value.end(),
[](unsigned char c) { return std::isspace(c) != 0; }),
value.end());
// Add to unordered map config_
config_[key] = value;
}
std::cout << std::endl;
// Print out unordered map config_
for (const auto& [key, value] : config_)
std::cout << "key: \"" << key << "\", "
<< "value: \"" << value << "\"" << std::endl;
std::cout << std::endl;
}
auto ConfigReader::find_config(std::string const& key,
std::string const& default_value) const
-> std::string {
auto config_iterator = config_.find(key);
if (!(config_iterator == config_.end())) {
return config_iterator->second;
}
std::cout << "Configuration for " + key +
" not found. Default configuration used."
<< std::endl;
return default_value;
}
ConfigParser::ConfigParser(ConfigReader const& config)
: zid_{config.find_config("zid", std::string{"z0000000"})},
refresh_period_{std::chrono::duration<int64_t>(
stol(config.find_config("refresh_rate", std::string{"10"})))},
joy_config_{
stoul(config.find_config("x_axis", std::string{"0"})),
stoul(config.find_config("y_axis", std::string{"1"})),
stoul(config.find_config("z_plus_axis", std::string{"2"})),
stoul(config.find_config("z_minus_axis", std::string{"5"})),
stoul(config.find_config("steering_axis", std::string{"3"})),
stoul(config.find_config("drop_block_button", std::string{"2"})),
stoul(config.find_config("clear_blocks_button", std::string{"5"})),
stoul(config.find_config("gravity_button", std::string{"3"})),
stod(config.find_config("trigger_deadzone", std::string{"0.1"})),
stod(config.find_config("joystick_deadzone", std::string{"0.5"})),
stod(config.find_config("floor_bound", std::string{"5"})),
stod(config.find_config("ceiling_bound", std::string{"80"})),
stod(config.find_config("side_bound", std::string{"25"}))} {}
auto ConfigParser::get_zid() const -> std::string { return zid_; }
auto ConfigParser::get_refresh_period() const -> std::chrono::milliseconds {
return refresh_period_;
}
auto ConfigParser::get_joystick_config() const -> JoystickConfig {
return joy_config_;
}
} // namespace assignment3
|
On the Relative and Absolute Positioning Errors in Self-Localization Systems This paper considers the accuracy of sensor node location estimates from self-calibration in sensor networks. The total parameter space is shown to have a natural decomposition into relative and centroid transformation components. A linear representation of the transformation parameter space is shown to coincide with the nullspace of the unconstrained Fisher information matrix (FIM). The centroid transformation subspace-which includes representations of rotation, translation, and scaling-is characterized for a number of measurement models including distance, time-of-arrival (TOA), time-difference-of-arrival (TDOA), angle-of-arrival (AOA), and angle-difference-of-arrival (ADOA) measurements. The error decomposition may be applied to any localization algorithm in order to better understand its performance characteristics, and it may be applied to the Cramer-Rao bound (CRB) to determine performance limits in the relative and transformation domains. A geometric interpretation of the constrained CRB is provided based on the principal angles between the measurement subspace and the constraint subspace. Examples are presented to illustrate the utility of the proposed error decomposition into relative and transformation components.
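The rank deficiency described in the abstract can be checked numerically for the distance-measurement case: with anchor-free range measurements in 2-D, the unconstrained FIM has a three-dimensional nullspace (two translations plus one rotation; scale is fixed by the distances). The following is my own small sketch, not code from the paper:

```python
# Sketch: build the FIM for all-pairs range measurements among 4 random
# 2-D nodes (unit noise variance) and verify its nullspace dimension.
import numpy as np

rng = np.random.default_rng(0)
pos = rng.random((4, 2))                 # 4 generic (non-collinear) nodes
n = pos.shape[0]

fim = np.zeros((2 * n, 2 * n))
for i in range(n):
    for j in range(i + 1, n):
        diff = pos[i] - pos[j]
        u = diff / np.linalg.norm(diff)  # gradient of d_ij w.r.t. pos[i]
        g = np.zeros(2 * n)
        g[2 * i:2 * i + 2] = u
        g[2 * j:2 * j + 2] = -u
        fim += np.outer(g, g)

nullity = 2 * n - np.linalg.matrix_rank(fim)
assert nullity == 3  # 2 translations + 1 rotation; distances fix the scale
```

For TDOA or AOA measurement models the same construction yields a different nullspace dimension, which is exactly the transformation-subspace characterization the paper develops.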
|
// Repository: MrNANMU/EasyPermission
package com.dasong.easypermission.example;
import android.util.Log;
public class LogUtils {
public static final String TAG = "EP";
public static void e(String msg){
Log.e(TAG,msg);
}
}
|
// bitcoin/commands/commands.go
// Copyright 2020 Condensat Tech. All rights reserved.
// Use of this source code is governed by a MIT
// license that can be found in the LICENSE file.
package commands
type Command string
const (
CmdGetBlockCount = Command("getblockcount")
CmdGetNewAddress = Command("getnewaddress")
CmdListUnspent = Command("listunspent")
CmdLockUnspent = Command("lockunspent")
CmdListLockUnspent = Command("listlockunspent")
CmdGetTransaction = Command("gettransaction")
CmdGetRawTransaction = Command("getrawtransaction")
CmdGetAddressInfo = Command("getaddressinfo")
CmdImportAddress = Command("importaddress")
CmdImportPubKey = Command("importpubkey")
CmdImportBlindingKey = Command("importblindingkey")
CmdBlindRawTransaction = Command("blindrawtransaction")
CmdSendMany = Command("sendmany")
CmdDumpPrivkey = Command("dumpprivkey")
CmdCreateRawTransaction = Command("createrawtransaction")
CmdDecodeRawTransaction = Command("decoderawtransaction")
CmdFundRawTransaction = Command("fundrawtransaction")
CmdSignRawTransactionWithKey = Command("signrawtransactionwithkey")
CmdSignRawTransactionWithWallet = Command("signrawtransactionwithwallet")
CmdSendRawTransaction = Command("sendrawtransaction")
CmdTestMempoolAccept = Command("testmempoolaccept")
CmdRawIssueAsset = Command("rawissueasset")
CmdListIssuances = Command("listissuances")
CmdRawReissueAsset = Command("rawreissueasset")
)
|
# Import path assumed from the SMART-on-FHIR `fhirclient` models; adjust to
# whichever FHIR library is actually in use.
from fhirclient.models.operationdefinition import OperationDefinitionParameter


def create_parameter_with_resource_ref(name, resource_type, reference):
    """Build an 'in' OperationDefinition parameter referencing a contained resource."""
    op_param = OperationDefinitionParameter(
        {'name': name, 'use': 'in', 'min': 1, 'max': '1', 'type': resource_type,
         'profile': {'reference': f"#{reference}"}})
    return op_param
|
Anomalous swelling in phospholipid bilayers is not coupled to the formation of a ripple phase. Aligned stacks of monomethyl and dimethyl dimyristoyl phosphatidylethanolamine (DMPE) lipid bilayers, like the much studied dimyristoyl PC (DMPC) bilayers, swell anomalously in a critical fashion as the temperature is decreased within the fluid phase towards the main transition temperature, T(M). Unlike DMPC bilayers, both monomethyl and dimethyl DMPE undergo transitions into a gel phase rather than a rippled phase below T(M). Although it is not fully understood why there is anomalous swelling, our present results should facilitate theory by showing that the formation of the phase below T(M) is not related to critical phenomena above T(M).
|
The Climate Scare of this Week is apparently melting permafrost. The Met Office warning on April 10:
Increased climate change risk to permafrost. Global warming will thaw about 20% more permafrost than previously thought, scientists have warned – potentially releasing significant amounts of greenhouse gases into the Earth’s atmosphere.
The researchers, from Sweden and Norway as well as the UK, suggest that the huge permafrost losses could be averted if ambitious global climate targets are met.
Lead-author Dr Sarah Chadburn of the University of Leeds said: “A lower stabilisation target of 1.5ºC would save approximately two million square kilometres of permafrost.
“Achieving the ambitious Paris Agreement climate targets could limit permafrost loss. For the first time we have calculated how much could be saved.”
The permafrost bogeyman has been reported before, been debunked, but will likely return again like a zombie that never dies. I have likened the climate false alarm system to a Climate Whack-A-Mole game because the scary notions keep popping up no matter how often you beat them down with reason and facts. So once again into the breach, this time on the subject of Permafrost.
Permafrost basics
I Travelled to the Arctic to Plunge a Probe Into the Melting Permafrost is a Motherboard article that aims to alarm but also provides some useful information.
The ground above the permafrost that freezes and thaws on an annual cycle is called the active layer. The uppermost segment is organic soil, because it contains all the roots and decomposing vegetation from the surface. Beneath the organic layer is the moist, clay-like mineral soil, which sits directly on top of the permafrost. The types of vegetation will influence the contents of the soil—but in return, the soil determines what can grow there.
Kholodov inserted probes into the layers of soil and the permafrost to measure its temperature, moisture content, and thermal conductivity. The air-filled organic layer is a much better insulator than the waterlogged mineral soil. So an ecosystem with a thicker organic layer, where there’s more vegetation, should provide better protection for the permafrost below.
On a warm morning in the boreal forests around Fairbanks, Loranty squeezed between two black spruce trees and motioned to all the woody debris scattered on the ground. “Here, where we have more trees and denser forests, we have shallower permafrost thaw depths.”
He grabbed a T-shaped depth probe and shoved it into the ground. It only sank about a handspan before it struck permafrost. “When you have trees, they provide shade,” he said, “and that prevents the ground from getting too warm in the summer.” So here, the permafrost is shallow, right beneath the surface.
Other vegetation, like moss, can also protect permafrost. “It’s fluffy, with lots of airspace, like a down coat,” Loranty explained, “and heat can’t move through it well, so it’s a good insulator.”
But 800km north on the tundra, close to the Arctic Ocean, there are no trees. It’s a less productive ecosystem than the forest and provides little insulation to the frozen ground. Here, low-lying shrubs, grasses, and lichens dominate underfoot. When I grabbed the depth probe and pushed it in, it sunk down a meter before it bottomed out because the permafrost was much deeper.
Permafrost Nitty Gritty
To really understand permafrost, it helps to listen to people dealing with Arctic infrastructure like roads. A thorough discussion and analysis is presented in Impacts of permafrost degradation on a road embankment at Umiujaq in Nunavik (Quebec), Canada By Richard Fortier, Anne-Marie LeBlanc, and Wenbing Yu
Following the retreat of the Wisconsin Ice Sheet about 7600–7300 years B.P. on the east coast of Hudson Bay (Hillaire–Marcel 1976; Allard and Seguin 1985) and about 7500– 7000 years B.P. in Ungava (Gray et al. 1980; Allard et al. 1989), the sea flooded a large band of coastline in Nunavik (Fig. 1). Glaciomarine sediments were then deposited in deep water in the Tyrrell and D’Iberville Seas (Fig. 1). Due to the isostatic rebound, once exposed to the cold atmosphere, the raised marine deposits were subsequently eroded and colonized by vegetation, and permafrost aggraded from sporadic permafrost to continuous permafrost with increasing latitude (Fig. 1).
A case study is presented herein on recent thaw subsidence observed along the access road to the Umiujaq Airport in Nunavik (Quebec). In addition to the measurement of the subsidence, a geotechnical and geophysical investigation including a piezocone test, ground-penetrating radar (GPR) profiling, and electrical resistivity tomography (ERT) was carried out to characterize the underlying stratigraphy and permafrost conditions. In the absence of available ground temperature data for assessing the causes of permafrost degradation, numerical modeling of the thermal regime of the road embankment and subgrade was also undertaken to simulate the impacts of (i) an increase in air temperature observed recently in Nunavik and (ii) the thermal insulation effect of snow accumulating on the embankment shoulders and toes. The causes and effects of permafrost degradation on the road embankment are also discussed.
Values of thawing and freezing n-factors according to the surface conditions (Figs. 4 and 13) are given in Table 1. The gray road surface absorbs solar radiation in summer, inducing a higher surface temperature than air temperature and a higher thawing n-factor than the ones for the natural ground surface. The thawing n-factor is close to unity and the surface temperature is close to the air temperature in summer for the natural ground surface (ground surface boundaries Nos. 2, 3, and 4). Due to the absence of snow cover on the road surface, the freezing n-factor is close to unity. However, an increase in snow thickness leads to a decrease in the freezing n-factor (Fig. 13 and Table 1). We make the assumption that from one year to another there is no change in surface conditions due to climate variability and the thawing and freezing n-factors are constant.
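The n-factors quoted above boil down to one relation: the seasonal freezing or thawing index at the ground surface equals the corresponding air index scaled by the n-factor, so n = 1 means the surface simply tracks the air temperature. A minimal sketch with made-up numbers, not values from the study:

```python
# Sketch of the n-factor relation: surface degree-day index = n * air index.

def surface_index(air_index_degC_days, n_factor):
    """Surface freezing or thawing index (degree-days) from the air index."""
    return n_factor * air_index_degC_days

# Illustrative: a 1500 degC-day air thawing index under a gray road surface
# (thawing n-factor 1.3) versus natural ground (n-factor close to 1.0).
assert surface_index(1500.0, 1.3) == 1950.0
assert surface_index(1500.0, 1.0) == 1500.0
```

This is why the gray road surface, which absorbs solar radiation and has a thawing n-factor above unity, warms the permafrost more than adjacent natural ground does.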
Only the governing equation of heat transfer by conduction taking into account the phase change problem was considered to simulate the permafrost warming and thawing underneath the road embankment. However, complex processes of heat transfer, groundwater flow, and thaw consolidation can take place in degrading permafrost. The development of a two dimensional numerical model of these coupled processes is needed to accurately predict the thaw subsidence based on the thaw consolidation properties of permafrost and to compare this prediction with the performance of the access road to Umiujaq Airport.
As expected from the design of thick road embankments in cold regions, the permafrost table has moved upward 0.9 m underneath the road embankment, preventing permafrost degradation (Fig. 14a). However, the permafrost is slightly warmer by a few tenths of degree Celsius underneath the road embankment than away from the road (Fig. 15). This increase in permafrost temperature due to the thermal effect of the road embankment makes the permafrost more vulnerable to any potential climate warming. The permafrost base in the bedrock has also moved upward 3.9 m for a permafrost thinning of 3 m (Fig. 15). This thawing taking place at the permafrost base does not induce any thaw settlement because the bedrock is thaw stable.
The subsidence is due to thaw consolidation taking place in a layer of ice-rich silt underneath a superficial sand layer. While the seasonal freeze–thaw cycles were initially restricted to the sand layer, the thawing front has now reached the thaw-unstable ice-rich silt layer. According to our numerical modeling, the increase in air temperature recently observed in Nunavik cannot be the sole cause of the observed subsidence affecting this engineering structure. The thick embankment also acts as a snow fence favoring the accumulation of snow on the embankment shoulders. The permafrost degradation is also due to the thermal insulation of the snow cover reducing heat loss in the embankment shoulders and toes.
Permafrost in Russia
The Russians are seasoned permafrost scientists with Siberia as their preserve, and their observations are balanced by their long experience. The latest Russia report is from 2010.
We conclude the following based on initial analysis and interpretation of the data obtained in this project:
Most of the permafrost observatories in Russia show substantial warming of permafrost during the last 20 to 30 years. The magnitude of warming varied with location, but was typically from 0.5°C to 2°C at the depth of zero annual amplitude. This warming occurred predominantly between the 1970s and 1990s. There was no significant observed warming in permafrost temperatures in the 2000s in most of the research areas; some sites even show a slight cooling during the late 1990s and early 2000s.
Warming has resumed during the last two to three years at many locations predominantly near the coasts of the Arctic Ocean. Much less or no warming was observed during the 1980s and 1990s in the north of East Siberia. However, the last three years show significant permafrost warming in the eastern part of this region.
Permafrost is thawing in specific landscape settings within the southern part of the permafrost domain in the European North and in northwest Siberia. Formation of new closed taliks and an increase in the depth of preexisting taliks have been observed in this area during the last 20 to 30 years.
Methane Realism
An article in Scientific American raises several concerns about permafrost, but does add some realism:
First, while most of the methane is believed to be buried roughly 200 meters below the sea bed, only the top 25 meters or so of sea-bed are currently thawed, and thawing seems to have only progressed by about one meter in the last 25 years – a pace that suggests that the large bulk of the buried methane will stay in place for centuries to come.
Second, several thousand years ago, when orbital mechanics maximized Arctic warmth, the area around the North Pole is believed to have been roughly 4 degrees Celsius warmer than it is today and covered in less sea ice than today. Yet there’s no evidence of a massive amount of methane release in this time.
Third, the last time methane was released in vast quantities into the atmosphere – during the Paleocene-Eocene Thermal Maximum 56 million years ago – the process didn’t happen overnight. It took thousands of years.
Put those facts together, and we are probably not in danger of a methane time bomb going off any time soon.
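The first fact above implies a simple back-of-envelope timescale (my arithmetic, not the article's):

```python
# Figures quoted in the article: methane buried ~200 m below the sea bed,
# thaw front now at ~25 m and advancing ~1 m per 25 years.
depth_to_methane_m = 200.0
thawed_m = 25.0
rate_m_per_year = 1.0 / 25.0

years_remaining = (depth_to_methane_m - thawed_m) / rate_m_per_year
assert years_remaining == 4375.0  # several millennia at the current pace
```

At that pace the thaw front would need thousands of years to reach the bulk of the buried methane, consistent with the article's "centuries to come" framing.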
Summary
The active layer of permafrost does vary from time to time and place to place. There was warming and some permafrost melting end of last century, but lately not so much. Any specific permafrost layer is influenced by many factors, including air temperatures, snow cover and vegetation, as well as the structure of the land, combining fill, sand, silt, ice and salinity mixtures on top of bedrock.
And nature includes negative feedbacks to permafrost melt. Any vegetation, even moss, growing in unfrozen soil provides insulation limiting further melting, as well as absorbing additional CO2. Reduced snowcover aids freezing and constrains later melting.
Rather than a permafrost bogeyman, we need a more people-friendly mascot. Consider our traditional nature friends loved by children and adults.
For example, Smokey the Bear
Rudolph the Reindeer
And the ever-popular Cola Bear
Introducing Permafrosty
Permafrosty is here! Love him tender, and he’ll never let you down.
Additional Background on Permafrost in an earlier post The Permafrost Bogeyman
|
package com.translationexchange.swing.tokenizers;
import java.util.Arrays;
import junit.framework.Assert;
import com.translationexchange.core.Utils;
import com.translationexchange.swing.tokenizers.AttributedStringTokenizer;
public class AttributedStringTokenizerTest {
// @Test
public void testTokenization() {
AttributedStringTokenizer dt = new AttributedStringTokenizer("Hello [bold: World]");
Assert.assertEquals(
Arrays.asList("bold"),
dt.getTokenNames()
);
dt = new AttributedStringTokenizer("Hello [bold: How are [italic: you?]]");
Assert.assertEquals(
Arrays.asList("bold", "italic"),
dt.getTokenNames()
);
// broken
dt.tokenize("[bold: Hello World");
Assert.assertEquals(
Arrays.asList("bold"),
dt.getTokenNames()
);
dt.tokenize("[bold: Hello [strong: World]]");
Assert.assertEquals(
Arrays.asList("bold", "strong"),
dt.getTokenNames()
);
}
// @Test
public void testSubstitution() {
AttributedStringTokenizer dt = new AttributedStringTokenizer("Hello [bold: World]");
Assert.assertNotNull(
dt.substitute(Utils.buildMap())
);
}
}
|
Africans in East Anglia, 1467-1833 parallel to a bequest to fund apprenticeships there) and money to build there a public library to house his own collection of books, running to nearly 5000 volumes and manuscripts. A series of chapters reconstruct Plume's library and the vicissitudes through economic and architectural crises of the building, book and manuscript collection to its now important status as an architectural and bibliographical treasure house, on which ongoing work in cataloguing and digitising has extended and democratised access to the Library in accordance with Plume's founding principle of public availability. Here several chapters, notably those by Doe, David Pearson and Helen Kemp, successfully demonstrate the value of the volume's emphasis on understanding through contextualisation. Placing the library's foundation in the context of the contemporaneous establishment of parochial and other public libraries and the culture of book auctions, bequests and borrowings, this work underpins the argument that this was a library, as Pearson puts it, of a scholar-clergyman, not antiquarian, with two-thirds perhaps predictably devoted to theology and religion, but the other third to a wider variety of classical and other authors and subjects. Here as in other chapters, notably that by Thornton and Max Earnshaw, there is much fresh and important material on the local history of trusts, their membership and the important role they played (and play) in the establishment and maintenance of Plume's bequests, on the numbers drawn into their running from his death to the present and the sometimes considerable demands this placed on local groups of trustees. Plume was perhaps unexceptionable in much of his (recorded) life, but for his philanthropy. Future work, stimulated by this volume, will help to answer the question that runs quietly through the volume as to where the balance between typical and exceptional lies in the scale of his philanthropy. 
As various chapters make approvingly clear, Plume did not seek fame, not wishing to have his many bequests named after him. Nevertheless, given the quality of this volume, it is to be hoped that future comparative work on the local history of early modern philanthropy will draw on the excellent work here and make the example of Thomas Plume, called by an eighteenth-century historian of Essex "this munificent person", better known.
|
Great commercial office or retail space in the quaint Village of Montgomery. Renovated office space currently being used as a kitchen & bathroom showroom. Off-street parking, and all utilities are included in the rent. There are now 3 large rooms, which could be split up if needed.
|
Heterotrophy mitigates the response of the temperate coral Oculina arbuscula to temperature stress Abstract Anthropogenic increases in atmospheric carbon dioxide concentration have caused global average sea surface temperature (SST) to increase by approximately 0.11°C per decade between 1971 and 2010, a trend that is projected to continue through the 21st century. A multitude of research studies have demonstrated that increased SSTs compromise the coral holobiont (cnidarian host and its symbiotic algae) by reducing both host calcification and symbiont density, among other variables. However, we still do not fully understand the role of heterotrophy in the response of the coral holobiont to elevated temperature, particularly for temperate corals. Here, we conducted a pair of independent experiments to investigate the influence of heterotrophy on the response of the temperate scleractinian coral Oculina arbuscula to thermal stress. Colonies of O. arbuscula from Radio Island, North Carolina, were exposed to four feeding treatments (zero, low, moderate, and high concentrations of newly hatched Artemia sp. nauplii) across two independent temperature experiments (average annual SST (20°C) and average summer temperature (28°C) for the interval 2005-2012) to quantify the effects of heterotrophy on coral skeletal growth and symbiont density. Results suggest that heterotrophy mitigated both reduced skeletal growth and decreased symbiont density observed for unfed corals reared at 28°C. This study highlights the importance of heterotrophy in maintaining coral holobiont fitness under thermal stress and has important implications for the interpretation of coral response to climate change. Introduction Anthropogenic activities have increased global atmospheric carbon dioxide (pCO2) from approximately 280 ppm during the Industrial Revolution to present-day values exceeding 400 ppm (). 
This increase in atmospheric pCO2 has resulted in global sea surface temperature (SST) increases of 0.11°C per decade between 1971 and 2010, and these trends are projected to continue into the 21st century (). Notably, over the last several decades, warming has been more prominent in the North Atlantic Ocean relative to other ocean basins, with warming of up to 4°C predicted for temperate Atlantic waters by the end of this century (). This ocean warming trend has affected the health of marine ecosystems, including thermally sensitive coral reef habitats worldwide (Hoegh-Guldberg 1999;Hoegh-Guldberg and Bruno 2010). Tropical corals are stenothermal, that is, they can tolerate a small range of temperatures, and thus even small changes in seawater temperature can result in "coral bleaching", the loss of corals' photosynthetic endosymbionts (Symbiodinium spp.) and/or photosynthetic pigments, which in turn can negatively affect corals, including by reducing their growth and calcification (Jokiel and Coles 1990;D';;Donner 2009). Although the effects of rising SST on tropical corals are well investigated (Lesser 1997;Hoegh-Guldberg and Bruno 2010;;), uncertainty remains as to how temperate corals have responded to recent warming and how they will cope with predicted end-of-century ocean warming (;). In many temperate coastal environments, including the North Carolina (NC) coast, SSTs have increased concurrently with other anthropogenic stressors (). Human development along NC's coastal watersheds has increased nutrient loading, leading to eutrophication (Paerl et al. 2006). This eutrophication has triggered changes in primary and secondary production, altering the trophic structure of NC coastal ecosystems (). 
Changes in primary and secondary production have the potential to affect corals as they obtain carbon (C) and nutrients not only from the photosynthetic byproducts of their endosymbiotic algae (photoautotrophy), but also by feeding on plankton and dissolved/particulate matter from the water column (heterotrophy; Houlbreque and Ferrier-Pages 2009). Generally, photoautotrophic C is used for metabolic demands and calcification, while heterotrophic C is allocated for building tissue and growth (). Coral heterotrophy also supplements nutrients not provided by coral endosymbionts through the capture of dissolved organic matter, particulate organic matter, and zooplankton (Houlbreque and Ferrier-Pages 2009). Photoautotrophy can provide up to 100% of a coral's daily metabolic requirements; therefore, when corals bleach and lose symbiont-derived C, they must either reduce metabolic demand, rely on existing energy reserves, or increase heterotrophy (;). Previous studies have shown that heterotrophy allows some species of tropical corals to mitigate the negative effects of thermal stress and ocean acidification (OA), which include but are not limited to decreased photosynthetic activity, loss of pigmentation, and reduced calcification (;Cohen and Holcomb 2009;Edmunds 2011). For example, research has demonstrated the importance of heterotrophy in the tropical scleractinian coral Montipora capitata, which met up to 100% of its daily metabolic requirements by increasing feeding rates when recovering from a bleaching event (). More recently, it was shown that the negative effects of OA and temperature stress in Acropora cervicornis were mitigated by increased feeding rates (). Taken together, these studies suggest that heterotrophic feeding is one method by which tropical corals can cope with stressors associated with climate change; however, this effect is understudied in temperate coral species. 
Oculina arbuscula is a facultatively symbiotic coral, meaning that it can exist as a healthy colony both with (symbiotic) or without (aposymbiotic) its endosymbionts (Miller 1995). Aposymbiotic colonies of O. arbuscula rely almost exclusively on zooplankton of the pico- and/or nanoplankton size class (<63 µm) to obtain nutrition, while symbiotic colonies of O. arbuscula rely almost entirely on C translocated from their endosymbionts to meet metabolic demands (). Therefore, the trophic ecology of this temperate coral may be more complicated than that of its tropical counterparts, which primarily exhibit obligate symbiosis. To date, no studies have investigated the physiological response of C acquisition in O. arbuscula under thermal stress; however, previous research has investigated the impact of heterotrophy and temperature on growth and symbiont density in other temperate coral species. Recent studies on temperate corals have revealed a positive relationship between temperature and coral growth (i.e., coral growth increases with increasing temperature; Miller 1995) as well as positive relationships between feeding and growth (Kevin and Hudson 1979; Miller 1995) and photosynthesis of algal symbionts (Szmant-Froelich and Pilson 1984; Piniak 2002). Growth rates of the symbiotic temperate scleractinian coral Cladocora caespitosa have been shown to be driven primarily by increased temperature and heterotrophic food supply (). However, when the same species was exposed to temperatures 4°C above the normal maximum summer temperature (28°C), the result was tissue necrosis that eventually led to bare skeleton and coral fragment death (). Recently, these results were contrasted by a study which demonstrated that long-term exposure of C. caespitosa to thermal stress (29°C) had no impact on tissue necrosis or photosynthetic efficiency ().
However, exposure to temperature stress in addition to the presence of invasive algae negatively impacted photosynthetic efficiency and caused tissue necrosis in the coral (). Another study found that for two temperate coral species (C. caespitosa and Oculina patagonica), short-term thermal stress up to 5°C above the mean summer temperature resulted in no change in symbiont density (). Taken together, these studies demonstrate the variable manner in which temperate corals respond to thermal stress and illustrate the need for a more comprehensive understanding of how temperate corals are likely to respond to future climate change. The aim of this study was to determine the effect of heterotrophy on growth and symbiont density of the temperate coral O. arbuscula under thermal stress. The temperate scleractinian coral O. arbuscula was selected for this study because its environment is experiencing changes in primary and secondary production () as well as variations in SST (). Oculina arbuscula inhabits the southeastern and mid-Atlantic US to depths of up to 200 m (Miller 1995), where its colonies create hard-bottom structure that supports economically valuable fisheries species and a variety of other ecologically and economically important organisms (). Oculina arbuscula were fed four concentrations of freshly hatched Artemia sp. nauplii based on representative field concentrations of plankton quantified at the collection site (Fulton 1984). Corals were reared under feeding conditions for approximately 40 days at the average annual temperature (mild stress: 20°C) and average summer temperature (moderate stress: 28°C) of the collection site (Fig. 1), and the effects of heterotrophy on growth and symbiont density under thermal stress were investigated. Understanding how O.
arbuscula responds to temperature stress will allow for better predictions of how coral-dominated benthic hard-bottom ecosystems may shift in the face of a changing climate and will help to inform environmental management decisions in the Southeast US.

Collection and transportation

In February 2014, twelve 10- to 15-cm-diameter symbiotic colonies of O. arbuscula were collected at Radio Island Jetty near Beaufort, NC, using a hammer and chisel (Fig. 1). All O. arbuscula colonies were collected under NC Division of Marine Fisheries Permit #706481. During collection, seawater temperature (8.50 ± 0.01°C; mean ± SE) was measured every 10 min using a HOBO Water Temperature Pro V2 Logger (Onset, Bourne, MA). Oculina arbuscula colonies were collected at a depth of approximately 3 m across a linear distance of ~200 m. Collected colonies were separated by at least 5 m in an effort to avoid sampling identical genotypes. Colonies were transported to the Aquarium Research Center at the University of NC at Chapel Hill, where corals were maintained in four 500-L recirculating holding tanks at approximately ambient field temperature (9.4 ± 0.1°C; ±SE) and salinity (35.2 ± 0.04; ±SE). Temperature treatments were determined based on 2005-2012 data from a NOAA buoy (station BFTN7) located approximately 1.3 km from the collection site. The mild thermal stress experiment (20°C) represents the mean annual temperature measured at the NOAA buoy between January 2005 and December 2012 (20.0 ± 0.01°C, ±SE). The moderate thermal stress experiment (28°C) represents the approximate average summer temperature (June-August) measured over the same interval (27.7 ± 0.6°C, ±SE; NOAA National Data Buoy Center; Fig. 1). The moderate thermal stress experiment (28°C) reflects chronic exposure to elevated temperature conditions relative to ambient seawater temperature at the time of collection.
Due to the contrast between the temperature at collection (approximately 9°C) and the experimental temperatures (20°C or 28°C), we refer to the two thermal experiments as "mild stress" and "moderate stress." Due to tank limitations and logistical concerns, the two thermal experiments (20 and 28°C) were not run concurrently. Each thermal experiment was therefore considered statistically mutually exclusive in all analyses, because each was conducted on unique genotypes that experienced different pretreatment conditions.

Recovery and acclimation

Mild temperature stress experiment (20°C)

Half of the 12 O. arbuscula colonies collected (N = 6) were assigned to the mild temperature stress experiment (20°C). Each mild stress O. arbuscula colony was sectioned into 12 fragments using a diamond-embedded band saw (Inland, Madison Heights, MI) and mounted on sterile plastic petri dishes using cyanoacrylate. This sectioning yielded a total of 72 approximately equal-sized fragments (12 per colony), each weighing between 15 and 25 g wet weight. Fragments were given approximately 1 week for recovery, after which pre-acclimation buoyant weight measurements were conducted. Coral fragments were maintained at ambient field temperature for a total of 10 days after collection, at which point temperatures were slowly increased by approximately 0.5°C per day until the 20°C target temperature was reached. All colonies were fed equally at the moderate concentration of newly hatched Artemia sp. nauplii (250 Artemia sp. nauplii per L) and were acclimated at 20°C until corals were assigned to their respective feeding treatments for the duration of the 38-day feeding experiment.

Moderate temperature stress experiment (28°C)

The other half of the O. arbuscula colonies (N = 6) remained as full colonies at ambient field temperatures for 10 days after collection and were then brought to 20°C along with the mild stress experiment colonies.
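The gradual warm-up described above (approximately 0.5°C per day from the holding temperature toward the target) can be written out as a simple set-point schedule. This is an illustrative sketch only: the temperatures and step size come from the text, but the function itself is hypothetical, and it is in Python rather than the R used for the paper's analyses.

```python
def ramp_schedule(start_c, target_c, step_c=0.5):
    """Daily temperature set-points for a gradual warming ramp.

    Starts at start_c, rises by at most step_c per day, and holds
    the target on the final day (mirrors the ~0.5 C/day ramp used here).
    """
    temps = [start_c]
    t = start_c
    while target_c - t > step_c:
        t += step_c
        temps.append(round(t, 2))
    temps.append(target_c)
    return temps

# e.g. from the ~9.4 C holding temperature up to the 20 C mild-stress target
schedule = ramp_schedule(9.4, 20.0)  # 9.4, 9.9, ..., 19.9, 20.0 over ~3 weeks
```

A schedule like this makes the acclimation period explicit: reaching 20°C from ambient winter temperature takes roughly three weeks at this rate, which is consistent with the multi-week pretreatment intervals reported above.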
Moderate stress colonies were maintained at 20°C for 23 days, at which point colonies were sectioned using the methods described above, producing 72 approximately equal-sized fragments (12 per colony). Colonies used in the moderate stress experiment were maintained at 20°C until completion of the mild stress experiment (20 days), after which fragments were placed in experimental tanks and temperatures were slowly increased by approximately 0.5°C per day until the target 28°C was reached. All colonies were fed equally at the moderate concentration of Artemia sp. nauplii (250 Artemia sp. nauplii per L) and allowed to acclimate to 28°C until the corals were assigned to feeding treatments for the duration of the 37-day feeding thermal stress experiment.

Experimental design

Mild and moderate temperature stress experiments were conducted using the same experimental seawater system consisting of twelve 38-L experimental aquaria. Three aquaria were assigned to each of four feeding treatments (zero, low, moderate, and high). For each thermal experiment, O. arbuscula fragments from six colonies (6 colonies × 12 fragments = 72 fragments) were placed in aquaria such that each genotype was represented in all four of the heterotrophic feeding regimes (n = 6 fragments per aquarium). Each feeding treatment consisted of three 38-L aquaria connected to a 190-L sump. Feeding treatments shared a high-output T5 lighting system (Current-USA, Vista, CA) containing two 460-nm actinic bulbs and two 10,000-K daylight bulbs (156-W fixture). To simulate dawn and dusk, corals were exposed to only the actinic lights for the first and last hours of the 12-h light cycle. This lighting system maintained an average photosynthetically active radiation (PAR) of approximately 300 µmol photons m⁻² sec⁻¹ at the base of each aquarium.
Experimental PAR conditions were based on spot measurements made during collection (approximately mid-day on 8 February 2014) that ranged between 200 and 400 µmol photons m⁻² sec⁻¹, as well as on values used in previous tank experiments for the same species and collection site (Miller 1995). Each aquarium contained two powerheads (Hydor USA, Sacramento, CA) rated at 908.5 L h⁻¹. The sump filtration system consisted of a filter sock to remove particulates and a protein skimmer (Eshopps, City of Industry, CA) to remove organic materials, both of which were regularly cleaned following feeding. Each aquarium was covered with a transparent plexiglass sheet to limit evaporative water loss. Mild stress treatments were maintained at 20 ± 0.1°C (±SE) by circulating the seawater through a chiller (AquaEuroUSA, Los Angeles, CA). Moderate stress treatments were maintained at 27.9 ± 0.1°C (±SE) using 50-W heaters (Eheim, Deizisau, Germany). For both experiments, O. arbuscula were fed newly hatched Artemia sp. nauplii three times a week at their respective treatment concentrations. The Artemia sp. nauplii concentration for the moderate feeding treatment was determined from the average copepod abundance measured near Beaufort, NC (8414 copepods m⁻³, or ~250 Artemia sp. nauplii per L; Fulton 1984). Low and high feeding treatments were calculated as half and twice the moderate concentration, respectively. The four feeding treatment target concentrations were approximately zero: 0 Artemia sp. nauplii per L; low: 125 Artemia sp. nauplii per L; moderate: 250 Artemia sp. nauplii per L; and high: 500 Artemia sp. nauplii per L. The amount of Artemia sp. nauplii added to each feeding treatment was estimated by counting in triplicate the number of hatched nauplii in 1 mL of water and extrapolating to the concentrations listed above. For example, an average hatch contained approximately 100 Artemia sp. nauplii per mL, and in order to obtain 250 Artemia sp. nauplii per L (7650 Artemia sp. nauplii per aquarium) in the moderate feeding system, 153 mL of hatched Artemia sp. nauplii was added. Each recirculating sump system and protein skimmer was turned off before feeding commenced in order to isolate individual aquaria during feeding. Feeding began at least 30 min after aquarium lights were turned off (12-h day-night cycle) to simulate crepuscular feeding. After the aquaria were isolated, the respective amounts of newly hatched Artemia sp. nauplii were added to each aquarium. Powerheads were left on in every aquarium during feeding to ensure circulation of the Artemia sp. nauplii. Aquaria remained isolated and corals were allowed to feed for 1 h. To limit positional effects across O. arbuscula fragments within an aquarium, fragments were rotated prior to each feeding event. At the completion of the mild and moderate temperature stress experiments, each O. arbuscula fragment was photographed and its tissue removed by airbrushing. The tissue slurry was then frozen at −20°C, and the remaining coral skeletons were dried overnight at 50°C in a drying oven (Quincy Lab, Chicago, IL).

Aquarium conditions

Seawater salinity was formulated to 36.00 ± 0.07 (±SE) using Instant Ocean Sea Salt mixed with deionized water. Compared to other commercially available seawater mixes, Instant Ocean Sea Salt is most similar to natural seawater in its major and minor elemental composition as well as its carbonate chemistry (Atkinson and Bingman 1998). Deionized water was added between water changes to account for evaporation, and 50% water changes were performed weekly across all aquaria. Nitrate (NO₃⁻) concentrations were monitored weekly in all aquaria using an Aquarium Pharmaceuticals Nitrate Test Kit (API, Chalfont, PA) to ensure that there were no excess nutrients in the experimental aquaria. Each measurement found negligible concentrations of NO₃⁻ in all experimental aquaria.
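The dosing arithmetic described earlier (count a subsample of the hatch, then scale to the target concentration and tank volume) reduces to one line. The function and numbers below are illustrative only, not the exact values used in the experiment, and the sketch is in Python rather than the R used for the paper's analyses.

```python
def stock_volume_ml(target_per_l, tank_volume_l, hatch_density_per_ml):
    """mL of Artemia hatch stock needed to hit a target concentration.

    target_per_l:         desired nauplii per litre in the aquarium
    tank_volume_l:        water volume of the aquarium (L)
    hatch_density_per_ml: nauplii per mL counted (in triplicate) in the hatch
    """
    nauplii_needed = target_per_l * tank_volume_l
    return nauplii_needed / hatch_density_per_ml

# e.g. the moderate treatment target (250 nauplii per L) in a ~30-L aquarium,
# from a hatch counted at ~100 nauplii per mL:
volume = stock_volume_ml(250, 30, 100)  # 7500 nauplii -> 75.0 mL of stock
```

Because hatch density varies between batches, the triplicate count has to be redone before every feeding; the target concentrations stay fixed while the stock volume changes.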
Seawater chemistry parameters, including salinity, temperature, and pH, were monitored and recorded before each feeding and corrected as necessary. Salinity was measured using a YSI 3200 conductivity meter, temperature was measured using a NIST-calibrated partial-immersion organic-filled glass thermometer, and pH was measured using an Orion Star A211 pH meter with a ROSS Sure-Flow combination pH probe calibrated with certified NBS pH buffers of 4.01, 7.00, and 10.01. All data for tank parameters are available in Table S1.

Quantification of calcification rates and symbiont density

Oculina arbuscula calcification rates were estimated using the buoyant weight method with a bottom-loading balance (precision = 0.0001 g; Mettler-Toledo, Columbus, OH; Davies 1989). Buoyant weight measurements (N = 3 replicate measurements per fragment) were taken before acclimation and at the start and end of each temperature experiment. Because buoyant weight measurements occurred over several days at each time point, growth data were corrected for the number of days in each experimental interval. Symbiont counts were completed using the hemocytometer method (Rodrigues and Grottoli 2007). Airbrushed samples were thawed and homogenized for 5 min using a Tissue-Tearor (Dremel, Racine, WI). Samples were then centrifuged for 15 min at 3030 × g. Equal volumes of formalin and Lugol's iodine were added to the pellets, which were homogenized to re-suspend the pellet. Three replicate 10-µL subsamples of the stained symbiont suspensions were counted on a hemocytometer using a light microscope. Technical replicates for symbiont counts from the three subsamples were then averaged and normalized to the volume of the tissue slurry as well as the surface area of the corresponding nubbin. After drying overnight at 50°C, coral skeletons were weighed to determine dry weights. Final buoyant weights and dry weights for coral fragments were linearly correlated (Fig.
S1, R² = 0.997), with the following equation describing this relationship: Dry Weight (mg) = 1.591 × Buoyant Weight (mg) − 4105.6. Using this equation, initial dry weight was estimated from initial buoyant weight. This proxy allowed for the expression of coral growth as the change in dry weight, or net calcification (). Organic materials that attach to the coral tissue and have a density different from that of seawater may influence buoyant weight measurements. For this reason, we chose to use dry weight and express coral growth as net calcification instead of percent change in buoyant weight (). Surface area of O. arbuscula fragments was calculated using a 3D laser scanner (NextEngine, Santa Monica, CA) and was used to normalize buoyant weight and symbiont counts for each coral fragment to unit area (mg cm⁻² or cells cm⁻², respectively). Normalizing to surface area corrects for areas over which new skeleton was deposited over the experimental interval (Elahi and Edmunds 2007).

Statistical analyses

All statistical analyses were implemented using R software, version 3.0.1 (R Development Core Team 2015). Analyses of variance (ANOVA, function aov()) were used to determine the effects of feeding and genotype on the difference in dry weight and the change in symbiont density (normalized to surface area) across each thermal experiment. To meet ANOVA assumptions, symbiont densities were log-transformed and all models were tested for equal variance of the residuals and normality (Fig. S2). For both thermal experiments, two fixed factors were modeled: feeding treatment and genotype, with tank nested within feeding treatment. Genotype was included as a fixed effect to minimize type 1 errors; however, variation among genotypes was not the focus of the study. If factors were found to be significant (P < 0.05), post hoc Tukey's HSD tests were used to evaluate the significance of each pairwise comparison.
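At its core, the aov() analysis above compares between-group variance to within-group variance. The sketch below illustrates that logic for the feeding factor alone; it is a simplification (the actual model also included genotype and tank nested within feeding, and was fitted in R, not Python), and the growth values are made up for illustration.

```python
from statistics import mean

def one_way_anova(groups):
    """Return (F, df_between, df_within) for a one-way ANOVA.

    groups: list of lists of observations, one list per treatment level.
    """
    all_obs = [x for g in groups for x in g]
    grand = mean(all_obs)
    k = len(groups)   # number of treatment levels
    n = len(all_obs)  # total observations
    # Between-group sum of squares: spread of group means around the grand mean
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares: spread of observations around their group mean
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    df_between, df_within = k - 1, n - k
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, df_between, df_within

# Hypothetical net-calcification values (mg per cm^2 per day) for four feeding levels
zero     = [0.02, -0.05, 0.01, 0.00]
low      = [0.30, 0.25, 0.35, 0.28]
moderate = [0.45, 0.50, 0.42, 0.48]
high     = [0.55, 0.60, 0.52, 0.58]

f, dfb, dfw = one_way_anova([zero, low, moderate, high])
```

A large F means the feeding-level means differ far more than the scatter within each level would predict; the Tukey HSD step then asks which specific pairs of levels drive that difference.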
Data for plotting were summarized using the function summarySE() and plotted using the ggplot2 package. All R scripts and data collected in this experiment are provided as supporting information (Data S1 and S2).

Ethics statement

The NC Division of Marine Fisheries gave permission to collect all O. arbuscula colonies used in this experiment (permit #706481).

Effect of feeding on dry weight

Results revealed that feeding (P < 0.0001) significantly affected O. arbuscula calcification rates (expressed as dry weight; Table 1, Fig. 2) in both the mild and moderate stress experiments. Additionally, there was a main effect of genotype on calcification of O. arbuscula at both 20°C (P < 0.0001) and 28°C (P < 0.0001; Table 1). Oculina arbuscula fragments in the zero feeding treatment exhibited approximately zero net calcification at 28°C (moderate stress), while fragments reared at 20°C (mild stress) maintained positive net calcification. Calcification rates of moderate stress fragments in the zero feeding treatment were 0.50 ± 0.07 mg cm⁻² day⁻¹ (±SE) less than those of mild stress specimens in the zero feeding treatment (Fig. 2). At 28°C, calcification of O. arbuscula fragments reared in the zero feeding treatment was significantly less than that of fragments reared in the low, moderate, and high feeding treatments (P = 0.005, P < 0.001, P < 0.001, respectively; Table 2B, Fig. 2B). Coral fragments reared in the low feeding treatment at 28°C also exhibited significantly lower calcification rates than fragments reared at high Artemia sp. nauplii concentrations (P = 0.002; Table 2B, Fig. 2B). Calcification rates of O. arbuscula fragments reared in the zero and low feeding treatments in the 20°C thermal experiment were significantly lower than those of fragments reared in the moderate and high feeding treatments at the same temperature (Table 2A, Fig. 2A).

Effect of feeding on symbiont density

In the moderate stress experiment, O.
arbuscula fragments reared in the zero feeding treatment had significantly lower symbiont densities than fragments reared in the low, moderate, and high feeding treatments (P = 0.04, P = 0.001, P < 0.001, respectively; Table 2B, Fig. 3B). Feeding had no effect on symbiont density for O. arbuscula fragments reared in the mild thermal stress experiment (Table 1A, Fig. 3A). A main effect of genotype on symbiont density was also observed at 20°C (P = 0.03) and 28°C (P < 0.001; Table 1).

Table 1. Results of a two-factor ANOVA testing the effects of four feeding treatments and genotype on dry weight and symbiont density of Oculina arbuscula fragments. df, degrees of freedom; ss, sum of squares; F, F-value; P, P-value.

Discussion

Our experiments reveal that the temperate coral O. arbuscula can utilize heterotrophic carbon to minimize the growth reductions and loss of symbionts associated with exposure to moderate thermal stress. Because this species actively feeds, O. arbuscula may have an ecological advantage during thermal stress events, which are predicted to occur with increasing frequency in future warmer oceans (). The facultative symbiosis of O. arbuscula is another potential mechanism that could allow the organism to deal with future climate change, as facultative symbioses are generally considered to offer flexibility to an organism in dealing with periods of rapid change (). This finding builds on previous studies that have demonstrated how heterotrophy can alleviate the negative effects of climate change, including rising temperature, OA, and the interactions of these two global-scale stressors ().

Heterotrophy mitigates loss of net calcification under temperature stress

Our study demonstrates that, when fed, O. arbuscula exhibited significantly greater net calcification under moderate thermal stress than when unfed (Fig. 2), suggesting that the negative effects of thermal stress can be mediated through heterotrophy. Unfed O.
arbuscula fragments under moderate thermal stress were not able to maintain growth and exhibited approximately zero net calcification, an effect that was alleviated once corals were provided the opportunity for heterotrophy. A recent study found that C acquired via heterotrophy was used for tissue building and skeletal growth in healthy, but not in bleached, colonies of the tropical corals M. capitata and Porites compressa (). In the temperate coral O. arbuscula, Leal et al. found that symbiotic colonies of this species obtained nutrition primarily from their endosymbionts, while aposymbiotic colonies depended primarily on heterotrophy of sediment organic matter as well as pico- and nanoplankton (<10 µm). This suggests that aposymbiotic colonies of O. arbuscula in the field likely depend almost entirely on heterotrophy, specifically on plankton <10 µm, for maintaining growth. Here, we studied symbiotic colonies of O. arbuscula, and results show that skeletal growth was maintained both with and without heterotrophic C under mild thermal stress. Notably, O. arbuscula under mild thermal stress grew significantly more when provided either moderate or high concentrations of Artemia sp. nauplii as compared to colonies reared with zero and low concentrations of food (Table 2A, Fig. 2A). Therefore, it is possible that O. arbuscula used symbiont-derived C to maintain skeletal growth in the zero and low feeding treatments over the experimental interval at 20°C (Fig. 2). This agrees with Leal et al., who found that symbiotic colonies of O. arbuscula relied primarily on their endosymbionts for nutrition, regardless of the season.

Temperature-induced bleaching is mitigated by heterotrophy

Symbiont density was measured to determine the bleaching status of each O.
arbuscula fragment, as the symbionts of temperate (and tropical) corals provide photosynthetic carbon to the host, and the loss of those symbionts indicates holobiont stress and results in the loss of nutrients to the host (Miller 1995; Hoegh-Guldberg 1999). While we cannot directly compare across thermal experiments, our data suggest that O. arbuscula had lower symbiont densities under moderate thermal stress than corals reared under mild stress conditions (Fig. 3). Because O. arbuscula is facultatively symbiotic, it differs from most tropical corals in that prolonged bleaching does not necessarily indicate chronic stress or imminent death (Miller 1995); however, the loss of symbionts in this species could be indicative of stress, and any loss of organic C input could affect holobiont fitness. The results observed here contrast with the findings of Rodolfo-Metalpa et al., who found that symbionts of temperate symbiotic Mediterranean corals (Cladocora caespitosa and O. patagonica) were temperature tolerant, exhibiting no change in symbiont density or maximum quantum yield (Fv/Fm) under thermal stress up to 29°C. Other work has shown that algal symbiosis in another facultatively symbiotic coral, Astrangia poculata, is seasonally variable, with predicted chlorophyll density peaking in the late summer to early autumn and then decreasing to a relatively stable value throughout the winter (Dimond and Carrington 2007). Our findings suggest that thermal stress resulted in symbiont loss in the facultatively symbiotic temperate coral O. arbuscula. However, it is important to note that the observed decreases in symbiont density in the zero feeding treatment as compared to the other feeding treatments at 28°C could have resulted from the combination of temperature stress and a decline in food availability.
The response we observed here, taken in consideration with previous studies on temperate corals, highlights that the responses of temperate corals to thermal stress are variable and possibly species specific.

Figure 3. Change in symbiont density per unit area (10⁶ cells cm⁻²) across four feeding treatments in the 28°C thermal experiment. Error bars represent standard error. Letters (X, Y) represent statistical differences as tested with Tukey's HSD. (C) Images visualizing the effect of heterotrophy on O. arbuscula symbiont density: bleaching was observed for a fragment that received no heterotrophic opportunity, as compared to the same genotype that received high feeding in the thermal stress experiment.

Feeding did not affect symbiont density at 20°C; however, we observed that under moderate temperature stress, symbiont densities were greater in fed than in unfed corals, and as long as fragments had some heterotrophic opportunity, symbiont density did not change significantly. These results highlight that heterotrophic feeding (at all concentrations) enabled O. arbuscula to maintain both significantly greater growth rates and symbiont densities as compared to corals in the zero feeding treatment under moderate thermal stress. Our results confirm previous findings demonstrating that heterotrophic food sources can reduce photophysiological damage in other coral species experiencing a thermal stress event. These authors proposed that in the absence of external food sources, temperature tolerance of Stylophora pistillata decreased because additional stressors (i.e., increased temperature) incur metabolic costs that limit physiological processes of the coral. Ferrier-Pages et al. observed a similar result in three scleractinian coral species (S. pistillata, Turbinaria reniformis, and Galaxea fascicularis): heterotrophic feeding reduced damage to the photosynthetic apparatus of the symbionts, and no bleaching was observed in fed, temperature-stressed corals.
Starved corals under temperature stress specifically demonstrated decreased electron transport and photosynthetic rates due to bleaching and the resulting photoinhibition of photosystem II (). Therefore, it is possible that in our experiments, heterotrophic inputs prevented damage to O. arbuscula symbionts, possibly by bringing nitrogen and other essential nutrients to the coral holobiont, or by reducing the dependence of the coral host on the symbiont, thereby preventing bleaching.

Implications for future climate change and management

Climate change, along with increases in human pressure on coastal ecosystems including population growth, industrialization, and agribusiness, will likely result in future increases in eutrophication of estuarine and coastal waters around the globe (). Previous research and models suggest that eutrophication and climate change could increase primary production and phytoplankton standing stocks () and create a shift toward plankton communities of a smaller size class (). For example, between 1979 and 2011, the northern Baltic Sea experienced a general increase in total phytoplankton biomass along with a decrease in total zooplankton abundance, leading researchers to conclude that the plankton community in this region shifted to favor plankton of smaller size classes as a result of warming and eutrophication (). These results have relevant implications for the NC coast and the heterotrophic opportunity of O. arbuscula. Nutrient concentrations have increased in coastal NC waters since the end of World War II as a result of human activity (), and a recent study demonstrated preferential feeding of the coral on small size classes of phytoplankton (). Collectively, these studies suggest that future climate change, which will likely result in thermal stress for O.
arbuscula on coastal NC hard-bottom habitats, could also provide an additional food source (i.e., an increase in smaller plankton communities) to allow the coral to cope with that stress. These changes could particularly affect aposymbiotic colonies of O. arbuscula, which depend almost entirely on heterotrophy for inputs of organic carbon (). It should be noted that even if the size class of plankton decreases in the future and O. arbuscula is able to consume this readily available food source (), this does not automatically indicate that the corals will be able to utilize this heterotrophic opportunity to mitigate the negative effects of climate change. In fact, our results suggest that even a doubling of the available food source (represented by our high feeding treatment) would not result in a significant increase in O. arbuscula growth or symbiont density (Figs. 2, 3). Conversely, several studies have demonstrated that climate change could result in future decreases in plankton concentrations (Roemmich and McGowan 1995; Richardson and Schoeman 2004). However, even if plankton concentrations decrease to half of their current values (represented by the low feeding treatment), our results indicate that O. arbuscula would still be able to utilize this heterotrophic opportunity to maintain growth and symbiont density at levels statistically equivalent to current-day conditions (represented by the moderate feeding treatment) when exposed to thermal stress (Figs. 2, 3). Instead, only complete deprivation of O. arbuscula of heterotrophic opportunity under thermal stress is likely to result in significant decreases in fitness. Thus, under extreme circumstances in which zero plankton is available, these corals would essentially starve, making it difficult for O. arbuscula to maintain both growth rates and symbiont density under thermal stress.
Our conclusion is supported by the significant decreases in both growth rates and symbiont density observed in the zero feeding treatment as compared to all other levels of heterotrophic opportunity in the moderate stress experiment (Figs. 2, 3). It should be noted that feeding was not directly measured in this study; therefore, the impact of feeding is inferred based on the four concentrations of Artemia sp. nauplii provided in the experiment. It is possible that corals exposed to the different concentrations of Artemia sp. nauplii were not actually consuming significantly different amounts of food. This could potentially explain why there were no significant differences in coral growth or symbiont density observed between the moderate and high or moderate and low feeding treatments at 28°C (Figs. 2, 3). Additionally, the high feeding treatment in the 28°C experiment was 0.6°C warmer on average than the moderate feeding treatment at the same temperature (Table S1). This additional temperature stress could have prevented fragments in the high feeding treatment from exhibiting significantly greater growth rates and symbiont densities than fragments in the moderate feeding treatment at 28°C. Our findings also indicate that O. arbuscula colonies varied in the degree to which heterotrophy will affect their response to future temperature stress. Oculina arbuscula genotype had a significant effect on growth and symbiont density in both thermal experiments (20 and 28°C; Table 1), which demonstrates within-population variation in response to heterotrophy and temperature. This study investigated the responses of 12 O. arbuscula colonies from a single population; however, if these results are found to be consistent across populations and species, this suggests that significant genetic variation exists with respect to the influence of heterotrophy and temperature, perhaps providing fuel for natural selection. Although O. 
arbuscula hard-bottom ecosystems are less extensive in NC, the northern end of their range, their structural complexity provides arguably one of the most critical habitats in the region for an array of smaller organisms, which in turn supports a large diversity of recreationally and commercially relevant fish species. Specifically, the snapper-grouper fishery species that depend on hard-bottom habitat (formed primarily by O. arbuscula) during migration produced a $3.6 million annual market between 1992 and 2001. However, these hard-bottom ecosystems face threats from physical habitat loss/degradation (i.e., dredging and bottom-disturbing fishing gear) and water quality degradation (i.e., nutrient enrichment and toxic chemical contamination), which will only increase as NC human populations grow. Successful management of these habitats will aid in the maintenance of healthy fishery stocks for commercial and recreational use and conservation of the overall health of not only the NC coast, but O. arbuscula habitats across the entire southeast US. A better understanding of how heterotrophy and SST warming interact to affect temperate corals, an important member of hard-bottom ecosystems, will help to inform management decisions as SSTs continue to increase throughout the century. Conclusions Although many studies have been conducted on the effects of climate change on coral reefs, few consider how heterotrophy interacts with a coral's response to climatic stressors, and even fewer consider this interaction in temperate species. This study aimed to understand the potentially beneficial impacts of coral heterotrophy in the context of ocean warming in a temperate scleractinian coral species. Results showed that heterotrophy provided thermally stressed corals with the necessary metabolic requirements to maintain growth and symbiont health/density. 
While it remains to be seen whether this response is conserved in other temperate and/or facultatively symbiotic coral species, the response observed here raises the question of how potential changes in nutrients and therefore planktonic communities could mitigate the negative effects of climate change on temperate reef ecosystems. Supporting Information Additional Supporting Information may be found online in the supporting information tab for this article: Figure S1. Buoyant weight-dry weight correlation. Figure S2. Data normality plots. Table S1. Aquaria water quality. Data S1. R script for data analysis. Data S2. Growth and symbiont density data.
|
// Peek returns a reference to the buffered data between start and stop
// (stop excluded) without consuming it; it returns nil if the requested
// range is out of bounds.
func (this *BufferedConn) Peek(start, stop int) []byte {
	if start < 0 || start > stop || stop > this.pos {
		return nil
	}
	return this.buf[start:stop]
}
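For context, a minimal runnable sketch of how `Peek` might be used. The `BufferedConn` fields here (`buf` holding received bytes, `pos` marking how many of them are currently buffered) are assumptions inferred from the method body, not the actual type definition:

```go
package main

import "fmt"

// BufferedConn is a hypothetical sketch of the surrounding type: buf holds
// bytes received so far and pos marks how many of them are buffered.
type BufferedConn struct {
	buf []byte
	pos int
}

// Peek returns a reference to the buffered data in [start, stop) without
// consuming it; nil is returned for an out-of-range request.
func (this *BufferedConn) Peek(start, stop int) []byte {
	if start < 0 || start > stop || stop > this.pos {
		return nil
	}
	return this.buf[start:stop]
}

func main() {
	c := &BufferedConn{buf: []byte("hello world"), pos: 5} // only "hello" buffered
	fmt.Printf("%q\n", c.Peek(0, 5)) // "hello"
	fmt.Printf("%q\n", c.Peek(1, 4)) // "ell"
	fmt.Println(c.Peek(0, 6) == nil) // true: request exceeds buffered data
}
```

Note that the returned slice aliases the internal buffer, so callers should not retain it across subsequent reads that may overwrite or reslice `buf`.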
|
Even at the tender age of 17, Easah Suliman is already used to all the attention. The Aston Villa central defender has just returned from the Under-17 World Cup in South America having played every minute as England went home with only one point from their three group stage matches.
So far, so familiar. But Suliman knows he is being interviewed because he has been nominated for the Young Player accolade at the Asian Football Awards, which take place at Wembley on Thursday evening.
“It’s not really normal for someone my age to be doing these kind of articles,” Suliman acknowledges. “But I think it’s quite important to show that being Asian is not going to stop me, whereas in the past it might have done. I’m not too sure if that’s true or not. But if I can be a role model to get others involved in football then it’s a real achievement.”
Two years ago the teenager from Moseley in Birmingham became the first player with Pakistani heritage to captain an England team and he signed his first professional contract in January amid reported interest from Bayern Munich and Arsenal. Already well over 6ft, he was an unused substitute in August for Villa’s Capital One Cup match against Notts County and was regularly training with the first-team squad until Tim Sherwood’s recent departure.
Along with Liverpool’s Yan Dhanda, a 16-year-old who was signed for around £250,000 from West Bromwich Albion in 2013, Suliman is rated as the next great hope of Britain’s Asian community – estimated to make up more than 5% of England’s population. However, only nine out of 3,000 professional footballers in the top four divisions can claim south Asian heritage and the Football Association’s Asian inclusion in football programme, which launched in May, is hoping to address that imbalance.
“It’s been delayed a few times but the fact they have launched it is a good thing,” says Baljit Rihal, who set up the Asian Football Awards in 2012 but has decided to make an event he runs largely out of his own pocket a biennial affair. “There has been a noticeable increase in terms of young players in academies. Now I think we’re just waiting for that star player.
“It makes it difficult when you have the same faces every year,” he adds. “In an ideal world in order to measure our success we should not have to do the awards.”
Once again, the Swansea City and Wales defender Neil Taylor is expected to compete with the Wolverhampton Wanderers captain, Danny Batth, for the main prize, while West Brom’s Adil Nabi, who has been playing for Roberto Carlos on loan at the Indian Super League side Delhi Dynamos, is the other nominee. The night will also recognise the role Asians play at all levels of the game, including coaches, non-league players, grassroots and the Woman in Football award.
Neil Taylor of Swansea is expected to vie with Wolves’ Danny Batth for the main prize at the Asian Football Awards. Photograph: Huw Evans/Rex Shutterstock
But almost two decades on from the publication of the Asians Can’t Play Football study by Jas Bains, which documented the absence of Asian players in the game, it is the search for a poster boy that still dominates the debate. Recent statistics from the five-a-side football firm Goals Soccer Centres estimated that as much as 20% of its business comes from the community, evidence that the stereotypical view that Asians are simply not interested is far from the truth.
“The sport of choice for most Asians, especially from second and third generation families, is football rather than cricket,” says Rihal, who also set up the Asian Cricket Awards last year thanks to funding from the England and Wales Cricket Board.
“Most people just assume that it’s cricket because of their backgrounds. But if the FA are serious about this initiative then they need to be putting money behind it. I don’t think this plan is right up there in terms of their priorities but they have realised it’s an issue that needs to be looked at.”
The first of the FA’s Talent ID events took place in Luton and Birmingham last month, with several more planned over the next four years. It has also established weekly sessions in areas with dense Asian populations in the hope of finding the next Suliman.
“I’m not too sure why there are so few Asian players, but now we’re definitely getting a lot more opportunities,” says the Villa defender. “It doesn’t matter what colour you are or what culture you’re from or what background you’re from, you still get the same chances. As long as you work hard and are willing to sacrifice, your chance will come.
“It’s important that players like Yan and I try and get through and build a pathway for other Asians to think that if we’ve made it, then there’s no reason why they can’t as well.”
Suliman has also been on the receiving end of some advice from the England all-rounder Moeen Ali, who hails from the same neighbourhood and is a friend of his father.
“I’ve spoken to him a bit to see how it’s all going and just get some advice. It helps because there’s not really anyone at the highest level from an Asian background playing football but in cricket obviously there’s a lot more British Pakistanis making it to the top.”
Wisden has estimated that up to 40% of cricketers at grassroots level are from a south Asian background, although that number drops dramatically to just 6% in the first-class game. Should an Asian player follow in Moeen’s footsteps and play for England’s senior team the next hurdle for football will be to avoid that kind of decline.
For now, though, Suliman will have to get used to being asked what it was like to lead his country out in an international match. “I’ve been reminded of that a few times now,” he laughs. “It’s obviously a great achievement and one I will remember forever. But hopefully if I can keep working hard then one day I can try to get into the senior England team and maybe even captain them. That’s the dream.”
|
Infections After Large Joint or Bursa Injection Supplemental digital content is available in the text. Objective Despite the ubiquity of intra-articular and bursal injections for the treatment of joint pain and bursitis, relatively little literature is available on the prevalence of infection after these procedures. The aim of this study was to identify the number of infections recalled by sports medicine physicians who perform injections of large joints and bursae at least once per month. Design A survey of physician members of the American Medical Society for Sports Medicine identified the reported number of recalled infections for each large joint/bursal location. Results Of a total of 554 physicians, only 31 infections were recalled, by 27 physicians. Only 4.87% of all physicians were aware of an infection after an injection during their career. On average, one infection was recalled per 170 physician-years in practice. No differences in infection rates were observed when comparing primary specialties (P = 0.281). Conclusions This study, the largest to date, demonstrates that sports medicine physicians rarely encounter infections after large joint and bursa injections. Although such infections are rare, their potentially catastrophic nature means that risk mitigation strategies should be maintained.
|
Once you've paid your way through design school, good luck finding the money to rent out a studio to actually produce the ideas you came up with in class. Luckily, over the past two years more and more co-op design studios have cropped up, providing a valuable resource for cash strapped starter-uppers.
Core77 posts about Menlo Park's TechShop, a fully-equipped, open-access workshop that lets you drop in any time and work on your own projects. Techshop stocks milling machines, metal lathes, plastic printers and RP machines, CNC laser cutters, MIG- and TIG-welding equipment, and more. There are daily $30 passes, monthly $100 passes, and annual $1,200 passes.
In New York there's 3rd Ward. The 20,000 square foot workspace located in East Williamsburg, Brooklyn houses a wood and metal shop, digital media room, recording space, and dance studio. “You can’t have access to this kind of equipment unless you’re in an academic setting,” says co-founder Jason Goodman. “And the academic environment costs an arm and a leg,” adds his partner Jeremy Lovitt.
For the price of a gym membership ($60/month) members can work on G5 computers running all the latest art, music, video and design software, as well as high quality printers and scanners. Use of the wood and metal shop bumps membership prices up, but Goodman and Lovitt let bona fide starving artists donate time as shop monitors in exchange for equipment access. Since its opening in May of '06, 200 people have gotten onboard including Japanther who recorded their new album in the co-op’s studio and TV on the Radio who took over the third floor to shoot one of their videos.
Etsy Labs is yet another Brooklyn co-op. Opened this February, the 7,000 square foot warehouse in Brooklyn Heights was started by the online marketplace for the world of the handmade (Wired wrote about the company in its 14.07 issue).
The space houses a silkscreen press, a letterpress, film, music, and video production facilities, jewelry making stations, sewing machines, and sergers for members to use as needed. Membership costs $20/month. Etsy Labs, like 3rd Ward offers workshops for those interested in getting crafty.
|
Performance of Serology Assays for Diagnosing Celiac Disease in a Clinical Setting ABSTRACT Diagnosis of celiac disease frequently depends upon serology assays. We set out to prospectively assess the diagnostic value of five serology tests: an enzyme-linked immunosorbent assay (ELISA) for tissue transglutaminase (tTG)-immunoglobulin A (IgA) and tTG-IgG, a chemiluminescence assay for tTG-IgA, an ELISA for deamidated gliadin peptide (DGP) IgG and IgA screening, and detection of endomysial antibodies (Abs) by indirect immunofluorescence. One hundred sixteen children at high risk for developing celiac disease were evaluated clinically and underwent small bowel biopsies and blood serology tests. We examined differences between younger and older children in terms of clinical presentation, test performance, and the ability of high Ab levels to correctly predict diagnosis of celiac disease. Celiac disease was diagnosed for 85 (73%) children. No significant clinical differences were observed between the biopsy-positive and biopsy-negative groups. Children ≤3 years of age revealed higher concentrations of tTG-IgA and DGP Abs than children >3 years old (P = 0.017 and 0.007, respectively). High Ab concentrations were predictive of villous atrophies, with sensitivities ranging from 92.8% to 97.9%, depending on the assay and the cutoff points applied. Sensitivities, specificities, positive predictive values, and negative predictive values varied among assays and improved after correction for best cutoff points. Assay specificities obtained in the clinical setting were lower than expected. The new tTG-IgA chemiluminescence assay demonstrated high throughput but low specificity (74.2%). The tTG-IgA ELISA exhibited the highest test efficiency, and the tTG-IgA chemiluminescence assay was suitable for large-scale screening, with reduced specificity. High concentrations of celiac disease-specific Abs bring into question the need for performance of biopsies on children at high risk. 
Celiac disease (CD) is a common autoimmune enteropathy that occurs in genetically predisposed children and adults upon ingestion of gluten or related proteins. The diverse presentation of CD includes classical clinical symptoms, such as diarrhea, weight loss, failure to thrive, malabsorption, and anemia, and atypical manifestations, such as nonspecific abdominal pain, esophageal reflux, osteoporosis, hypertransaminasemia, and neurological symptoms. Population studies have shown that the incidences of CD in Europe and North America are 0.5 to 1%. Even though the rate of diagnosis has increased in recent years, according to the accepted iceberg concept, the majority of affected individuals are still undiagnosed. According to the latest consensus report on CD, small bowel biopsies are considered the gold standard and are mandatory for diagnosis. Obtaining a biopsy specimen is an invasive procedure and at times may miss patchy mucosal changes. Poor orientation of the removed tissue may lead to difficulties in interpretation. On the other hand, serology testing for CD-specific antibodies (Abs) is easy to perform, and a wide range of commercial kits are now available. The serology tests are sensitive and specific and are becoming the obligatory tool for correctly referring patients for biopsies. Immunoglobulin A (IgA) against the tissue transglutaminase (tTG) antigen is accepted as the best serology screening tool performed by the enzyme-linked immunosorbent assay (ELISA) method. Recently, a new human recombinant tTG-IgA chemiluminescence assay was developed for use with the Immulite 2000 analyzer. This platform enables large-scale testing at a high throughput, an advantage which should be taken into account given the increasing requests for serology testing. In many clinical laboratories, the fluorescence endomysial Ab (EMA) assay is used for confirming the presence of tTG-IgA. 
The EMA assay is known for its high sensitivity and specificity for diagnosing CD but requires much technologist labor and suffers from interobserver variability in interpretation. Abs to deamidated gliadin peptides (DGP) were shown to be of diagnostic value, and DGP Ab kits are being extensively evaluated. A DGP assay recognizing both IgA and IgG Abs, known as the DGP (IgA+IgG) screen, is intended for detecting both IgA-deficient and IgA-sufficient CD patients. Thus, the need for measuring total IgA for all tested subjects is eliminated. IgA deficiency affects approximately 1/500 of the general population and is a 10-fold-increased risk factor for CD. Performance of the DGP (IgA+IgG) screen could reduce test costs by eliminating the need for IgA screening. tTG-IgA Ab titer was shown to correlate well with severity of biopsy result in adult and pediatric populations. This positive correlation has raised the possibility of avoiding small bowel biopsies for diagnosing high-risk populations when tTG-IgA Ab concentrations are especially high. This concept has not been thoroughly studied with the various tTG-IgA commercial kits or other CD Ab specificities. The majority of studies regarding the diagnostic value of CD serology were conducted in research settings. A few publications have raised the possibility that serology assays may be less accurate when used in clinical settings. We therefore examined a group of children presenting clinical suspicion for developing CD in our community clinical setting. The high prevalence of biopsy-proven CD children in this population enabled us to examine the diagnostic value of several serology kits by comparing the results for two age groups. We also calculated the correlations between Ab titer and severity of biopsy result and assessed the possibility that high Ab titers have predictive value for biopsy results. MATERIALS AND METHODS Population study. 
One hundred sixteen children referred from December 2006 until March 2008 to the Pediatric Gastroenterology Unit at the Edmond and Lily Safra Children's Hospital, Sheba Medical Center, Ramat-Gan, Israel, participated in this study. The ethics committee at Sheba Medical Center approved this study protocol. All parents of participating children received an oral and written explanation and signed an informed consent according to the Declaration of Helsinki requirements. The selection criteria for the children participating in the study were the presence of clinical signs and symptoms of CD in the children and/or the existence of known CD patients among the children's relatives. These children underwent small bowel biopsies and blood sample collection. Histology evaluation was carried out in the hospital's pathology unit, and the results were considered the gold standard for our study. Serology tests were carried out at the Central Laboratory-Immunology Unit of Maccabi-Health Services, Rechovot, Israel. Histopathology. Biopsy specimens from the distal duodenum (with a minimum of five conventional forceps samples per patient) were obtained by upper duodenoscopy. Samples were fixed in buffered formalin and embedded in paraffin wax. Standard sections were obtained and stained with hematoxylin and eosin. The histopathology slides were examined without any knowledge of serology results at the hospital's pathology unit. The results of villous atrophy were categorized according to the modified Marsh criteria. Briefly, Marsh 0 represents normal mucosa, Marsh 1 represents normal mucosal architecture with increased intraepithelial lymphocytes (>30 intraepithelial lymphocytes/100 enterocytes), Marsh 2 represents additional crypt hyperplasia, Marsh 3a represents partial villous atrophy, Marsh 3b represents subtotal villous atrophy, and Marsh 3c represents total villous atrophy. Serum analysis. 
All serology tests were performed in a community-based laboratory among the hundreds of routine samples arriving daily. The laboratory personnel had no previous knowledge of clinical diagnosis or biopsy findings. Five serology tests were performed using the following methods via commercial kits in accordance with the manufacturers' instructions: (i) tTG-IgA Celikey ELISA (intra-assay coefficient of variation [CV], 4.9 to 8.7%; manufacturer-recommended cutoffs, <5 U/ml for negative results, 5 to 8 U/ml for borderline results, and >8 U/ml for positive results; Phadia, Freiburg, Germany), (ii) tTG-IgA Immulite 2000 (CV, 3.9 to 6.1%; manufacturer-recommended cutoffs, <4 U/ml for negative results and ≥4 U/ml for positive results; Siemens, Deerfield, IL), (iii) tTG-IgG Celikey ELISA (CV, 3.6 to 7.2%; manufacturer-recommended cutoffs, <7 U/ml for negative results, 7 to 10 U/ml for borderline results, and >10 U/ml for positive results; Phadia, Freiburg, Germany), (iv) DGP (IgG+IgA) screen ELISA (Quanta Lite; CV, 0.5 to 4.7%; manufacturer-recommended cutoffs, <20 U for negative results, 20 to 30 U for weak-positive results, and >30 U for moderate-to-strong-positive results; Inova Diagnostics, San Diego, CA), and (v) EMA immunofluorescence using primate smooth muscle slides and a dual anti-IgG/anti-IgA conjugate (manufacturer-recommended cutoffs, <1/5 for negative results and ≥1/5 for positive results; Immco Diagnostics, Buffalo, NY). The initial sample dilution was 1/5. Positive samples were titrated by serial dilutions up to 1/160. All slides were examined by two independent observers. Serum IgA levels were determined for all specimens to rule out IgA deficiency. IgA levels were measured by nephelometry (BN II, Siemens, Deerfield, IL), using Dade Behring IFCC calibrators and reagents. Age-dependent IgA reference ranges are as described by Bienvenu et al. 
When a discrepancy was found between serology results and biopsy findings, human leukocyte antigen type DQ2/DQ8 (HLA-DQB1) typing was performed on extracted DNA by PCR methods based on sequence-specific oligonucleotide probing/sequence-specific priming. Statistical analysis. The data were analyzed using BMDP statistical software (W. J. Dixon, University of California Press, Los Angeles). Discrete variables were compared, by group, using Fisher's exact test. Values for the continuous variable, age, were compared using Student's t test. When comparing the serological assays by age groups, we used the Mann-Whitney nonparametric U test and presented the results as median concentrations. Since the Marsh scores are not continuous variables, Spearman nonparametric correlations between the Marsh scores and the results for all the serology assays were calculated. Logistic regression analysis was applied for each assay separately in order to derive the best cutoff point for predicting Marsh scores of ≥1. Since the EMA assay has a logarithmic distribution, we applied the following transformation: log₅[(reciprocal of end point titer)/5] + 1. Using the cutoff values recommended by the manufacturer and the best cutoff points derived from the logistic regressions, we dichotomized the results of the assays and were thus able to produce two-by-two contingency tables which enabled us to calculate sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and test efficiency. RESULTS Demographic, clinical, and histological findings. The mean age of the participating children was 6.7 years (range, 1 to 17 years), with 52 (45%) male and 64 (55%) female subjects. All children were referred for small intestinal biopsy at the hospital's pediatric gastroenterology unit as a result of CD-suggestive clinical symptoms or being a relative of a CD patient. 
Final diagnosis of CD was determined according to histological criteria (early or mild mucosal changes or severe mucosal changes, as accepted). Table 1 shows the demographic data, histology findings, and clinical reasons for referral, sorted according to biopsy-positive and biopsy-negative children. No significant differences in gender, age, or clinical symptoms were observed between the two groups, besides anemia being slightly more common in the biopsy-positive group (P = 0.075). Histology analysis revealed that 90.5% of the children diagnosed with CD had partial or severe villous atrophy (Marsh 3a, 3b, and 3c) at the time of diagnosis. Surprisingly, failure to thrive was more pronounced among children in the biopsy-negative group, even though this difference was not significant (P = 0.35). Fourteen children were referred for biopsy testing due to other clinical symptoms, as listed in Table 1. IgA deficiency. Five patients (4.3%) in our study were IgA deficient (<6 mg/dl). Three of them were negative for CD by both criteria (a Marsh score of 0 and negative results for serology assays). The fourth patient scored Marsh 0 with a weak-positive DGP screen result (28.9 U). This 4-year-old girl suffered from diarrhea and anemia at the time of biopsy testing and continually suffered from a wide range of infections, such as acute tonsillitis, chronic otitis media, and urinary tract infections, thus reflecting classical immunodeficiency disease patterns. The fifth IgA-deficient patient was diagnosed with full-blown CD with total villous atrophy (Marsh 3c). As expected, the results for all IgA-based serology tests were negative for this patient, while those for both IgG-based tests (the DGP screen and the tTG-IgG assay) were strongly positive (153 U and 150 U/ml, respectively). CD serology-specific assays. All sera were tested using five different serology assays to identify those tests which associate best with biopsy findings, considered the gold standard for diagnosis of CD. 
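The IgA-deficiency handling described above can be sketched as a simple triage: when total IgA is deficient, IgA-based serology is uninformative and the IgG-based assays must carry the screening decision. This is an illustrative sketch, not the study's protocol; the thresholds are the manufacturer "positive" cutoffs quoted in the Methods, and the function name is hypothetical:

```go
package main

import "fmt"

// serologyPositive sketches a screening decision. tTGIgA is in U/ml
// (Celikey, positive >8), tTGIgG in U/ml (positive >10), and dgpScreen
// in U (DGP IgA+IgG screen, positive >20, counting weak positives).
func serologyPositive(igaDeficient bool, tTGIgA, tTGIgG, dgpScreen float64) bool {
	if igaDeficient {
		// IgA-based results are unreliable: rely on IgG-based assays only.
		return tTGIgG > 10 || dgpScreen > 20
	}
	return tTGIgA > 8 || tTGIgG > 10 || dgpScreen > 20
}

func main() {
	// Mirrors the fifth IgA-deficient patient: tTG-IgA negative, but
	// tTG-IgG (150 U/ml) and DGP screen (153 U) strongly positive.
	fmt.Println(serologyPositive(true, 0, 150, 153)) // true
	fmt.Println(serologyPositive(false, 2, 3, 10))   // false
}
```

The sketch also makes the cost argument from the introduction concrete: an assay that already combines IgA and IgG reactivity (the DGP IgA+IgG screen) would remove the `igaDeficient` branch entirely, and with it the need to measure total IgA for every subject.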
All assays detected CD Abs in the biopsy-negative group to a greater or lesser extent (Table 2). HLA-DQ2/DQ8 status in these discrepant cases was determined, and 4 of the 14 patients were DQ2/DQ8 negative, thus clearly identifying false-positive serology results in the biopsy-negative group. Three of these four patients tested positive by the DGP (IgA+IgG) screen. Furthermore, the DGP (IgA+IgG) screen stood out, with 11 (35%) positive cases out of 31 biopsy-negative children, with a mean concentration of 34.8 U/ml. The tTG-IgA Celikey assay detected four positive cases in the biopsy-negative group (three of them with borderline levels). The EMA assay resulted in eight positives (five of them with borderline titers), and the tTG-IgA Immulite 2000 assay resulted in eight positives, twice as many as the tTG-IgA Celikey assay, even though both assays use human recombinant antigen. The tTG-IgG Celikey assay resulted in the lowest number of positives in the biopsy-negative group (2/31) but, similarly, displayed the lowest number of true positives (55/85) in the biopsy-positive group. One of the biopsy-negative subjects (no. 14) tested positive in all CD IgA-serology assays and positive for the CD-associated *0302 allele. This child presented low body weight and short stature. Since clinical, serological, and genetic results were suggestive of CD, this result could imply a rare case of latent CD. The child was referred to undergo a repeated biopsy. Those children who were positive for CD were further divided into ≤3-year-olds and >3-year-olds. This age cutoff point was chosen to obtain a large enough group of younger children for statistical analysis. The median concentrations of Abs measured for these groups are shown in Table 3. Total IgA measurements were higher among the older children, as expected, but all the CD-specific Abs revealed higher concentrations for the younger age group. 
The younger children showed somewhat more gastrointestinal manifestations (24.2%) and failure to thrive (39.4%) than the older children (12% and 24%, respectively), but these differences were not significant (P = 0.15 and P = 0.11, respectively). The results of the logistic regression analyses are shown in Table 4. The areas under the receiver operating characteristic curve (AUC) did not differ notably between the assays (they were all between 0.95 and 0.96), with the exception of the tTG-IgG assay (AUC, 0.87). A statistical evaluation of the assays using the manufacturer's cutoff points and the calculated best cutoff points is shown in Table 5. For all calculations, borderline Ab concentrations were considered positive. According to the manufacturer's cutoff points, the EMA and DGP (IgG+IgA) screen assays presented the best sensitivities (95.3%) but the worst specificities (74.2 and 64.5%, respectively). The tTG-IgA Celikey assay was found to be the most efficient test (91.4%), followed by the EMA (89.6%) and tTG-IgA Immulite 2000 (87.9%) assays. The tTG-IgG Celikey assay exhibited poor sensitivity (67.7%), thus leading to an extremely low NPV (49.2%) and inferior test efficiency (72.4%). For all assays, the best-fit cutoff points for our high-risk population (Table 5) showed increased specificity, with the exception of the tTG-IgG Celikey assay, where the cutoff point was lowered, thus improving sensitivity. According to the suggested cutoff points, the EMA and the tTG-IgA Celikey assays exhibit the same predictive values and are equally efficient. We evaluated the association between Ab concentration and severity of biopsy findings, as depicted in Fig. 1. In general, higher Ab values were measured for children with Marsh scores of 3a, 3b, and 3c. This association is well presented by the median Ab levels increasing with severity of histology Marsh grading. 
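The performance figures above all derive from two-by-two contingency tables of dichotomized assay result versus biopsy outcome. A minimal sketch of the calculation; the counts in the example are illustrative only (chosen to match the cohort's 85 biopsy-positive and 31 biopsy-negative children, not taken from the study's tables):

```go
package main

import "fmt"

// metrics derives sensitivity, specificity, PPV, NPV, and overall test
// efficiency from a two-by-two table: tp/fn are biopsy-positive children
// with positive/negative serology, tn/fp the biopsy-negative counterparts.
func metrics(tp, fp, fn, tn float64) (sens, spec, ppv, npv, eff float64) {
	sens = tp / (tp + fn)                // fraction of diseased detected
	spec = tn / (tn + fp)                // fraction of non-diseased cleared
	ppv = tp / (tp + fp)                 // positive result is truly diseased
	npv = tn / (tn + fn)                 // negative result is truly healthy
	eff = (tp + tn) / (tp + fp + fn + tn) // overall agreement with biopsy
	return
}

func main() {
	// Hypothetical assay: 80 TP, 4 FP, 5 FN, 27 TN.
	sens, spec, ppv, npv, eff := metrics(80, 4, 5, 27)
	fmt.Printf("sensitivity=%.1f%% specificity=%.1f%% PPV=%.1f%% NPV=%.1f%% efficiency=%.1f%%\n",
		sens*100, spec*100, ppv*100, npv*100, eff*100)
}
```

This also shows why the tTG-IgG assay's low sensitivity drags its NPV down so sharply in a high-prevalence cohort: with 73% of children biopsy-positive, missed cases (FN) dominate the NPV denominator.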
The correlations (r) between Ab level and severity of histology findings were as follows: for the EMA assay, 0.73; for the DGP (IgG+IgA) screen, 0.71; for the tTG-IgA Celikey assay, 0.72; for the tTG-IgA Immulite 2000 assay, 0.68; and for the tTG-IgG Celikey assay, 0.56. All correlation coefficients were highly significant (P < 0.001), considering the size of our study. In light of these positive correlations, we tested the capability of high Ab concentrations for correctly pointing out positive biopsy results. High Ab cutoff points were assigned to each assay as shown in Table 6, and the resulting numbers of patients with partial, subtotal, and total villous atrophy levels (Marsh 3a, 3b, and 3c) were determined. Sensitivities were calculated. The high Ab cutoff points were 10 times the manufacturer-recommended cutoff values for the tTG-IgA Celikey assay and the tTG-IgA Immulite 2000 assay, were 3 times the manufacturer-recommended cutoff values for the DGP (IgG+IgA) screen assay, represented titers of ≥1/160 for the EMA test, and were equal to the manufacturer-recommended cutoff point (10 U/ml) for the tTG-IgG Celikey assay. As observed in Table 6, high Ab levels could be used as a predicting tool for partial, subtotal, and total villous atrophy (Marsh 3a, 3b, and 3c, accordingly). DISCUSSION The children enrolled in this study were at high risk for developing CD due to clinical symptoms or relationship to known CD patients and do not represent the general pediatric population. [Table 4 footnotes: (a) The odds ratio is a way of determining whether the probability of a certain event is the same for two groups and represents the change in the estimated odds of the outcome resulting from an increase in the continuous variable by 1 unit. (b) Using the formula log₅[(reciprocal of end point titer)/5] + 1.] The participating children were divided into CD positives and negatives according to biopsy results, which are considered the gold standard for diagnosis of CD. 
Several of the clinical indications listed were more noticeable in the biopsy-negative group, namely, failure to thrive and other high-risk characteristics that might imply CD, such as diabetes mellitus type 1, hypertransaminasemia, and IgA deficiency. All these clinical symptoms may indeed suggest CD, but their higher occurrence within the biopsy-negative group reinforces the concept that CD is mainly asymptomatic and the majority of cases remain undiagnosed. Lurz et al. similarly presented failure to thrive as a more pronounced characteristic in the disease control group of high-risk children. They suggested that these children are often more aggressively investigated than those with milder symptoms. Comparison of the CD-positive children in two age groups revealed that the children ≤3 years old demonstrated clinical symptoms such as gastrointestinal manifestations and failure to thrive more frequently than the >3-year-old children. These findings correlate well with the more-severe villous atrophy and higher titers of CD-specific Abs in this group, as measured by almost all assays. Similar differences in histological features, clinical findings, and Ab titers between younger (≤2-year-old) and older (>2-year-old) children were recently reported by Vivas et al. Our findings are in agreement with those of that group and others. Furthermore, it has been suggested in the past that pediatric CD patients (<2 years old) may have normal IgA-tTG and EMA levels. We, as others, did not observe any lack of Ab sensitivity in this age group. Five CD-specific serology assays were assessed in this study to inspect their value in CD diagnosis. One of the strengths of our study is the prospective testing of all serology assays with the same test tube on the same day, concurrently with the biopsy procedure. In our clinical setting, the EMA assay and the DGP (IgA+IgG) screen were the most sensitive assays and the DGP (IgA+IgG) screen was the least specific. 
With both the manufacturer-recommended cutoff and the best-fit cutoff based on logistic regression analysis, the DGP (IgA+IgG) screen assay clearly had more false positives than the other assays. The tTG-IgA Celikey and tTG-IgA Immulite 2000 assays revealed lower specificities (87% and 74%, respectively) than those reported in the literature (reviewed by Rostom et al.). The reduced Ab specificity that we have found may have several explanations. On the one hand, the presence of DGP-positive children in the biopsy-negative group may actually indicate that these children will develop CD later in life. There are reports documenting DGP Abs preceding tTG-IgA. Our study does not include follow-up. Improvement of clinical symptoms and decline of Ab concentrations during a gluten-free diet would assist in final diagnosis for the serology-positive/biopsy-negative group. Nevertheless, these children are negative for CD on the basis of the current diagnostic criteria, regarding the biopsy findings as the gold standard. Only three biopsy-negative patients exhibited positive serology results with four of the five assays, with tTG-IgA and EMA detectable at borderline concentrations. A single patient, who probably had latent CD, revealed positive serology results by all assays. On the other hand, these positive Abs may indeed be false positive. Agardh reported false-positive DGP/tTG (IgG+IgA) Abs and DGP-IgG Abs in a disease control group of children. Similar seropositive results with negative biopsy findings were documented in other clinical studies. Abrams et al. have shown in a recent paper that sensitivities in clinical practice are not as high as those reported in research laboratories. The same may apply to specificities. All biopsy-negative children in our clinical study were symptomatic and could be referred to as a "disease control" group, exhibiting lower specificities than blood donors, who usually serve as negative controls.
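The study derived its best-fit cutoffs by logistic regression; a closely related and easily illustrated alternative is to scan candidate cutoffs and keep the one that maximizes the Youden index (sensitivity + specificity - 1). The sketch below, on invented values rather than the study's data, shows how raising a cutoff above a cluster of borderline negatives improves the trade-off:

```python
# Cutoff re-fitting on invented data. The study used logistic regression;
# here we use the related Youden index J = sensitivity + specificity - 1,
# choosing the threshold with the best overall trade-off.

def youden_cutoff(ab_levels, is_cd):
    best_cutoff, best_j = None, -1.0
    n_pos = sum(is_cd)
    n_neg = len(is_cd) - n_pos
    for cutoff in sorted(set(ab_levels)):
        tp = sum(1 for ab, cd in zip(ab_levels, is_cd) if cd and ab >= cutoff)
        tn = sum(1 for ab, cd in zip(ab_levels, is_cd) if not cd and ab < cutoff)
        j = tp / n_pos + tn / n_neg - 1.0
        if j > best_j:
            best_cutoff, best_j = cutoff, j
    return best_cutoff

# Invented example: four borderline biopsy-negative sera below 8, four
# clearly positive sera at 30 and above.
levels = [2.0, 4.0, 6.0, 7.0, 30.0, 40.0, 90.0, 120.0]
is_cd  = [False, False, False, False, True, True, True, True]
print(youden_cutoff(levels, is_cd))  # 30.0
```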
Nevertheless, the majority of false-positive EMA results in our study were at or slightly above the cutoff point. The DGP (IgG+IgA) screen's false positives had a wider range of results. When the best-fit cutoff points based on logistic regression analysis were applied, the specificity was increased without much effect on sensitivity. It should be stressed, however, that these best-fit cutoff points are suitable for high-risk children and may not apply to the general low-risk population. In terms of overall test efficiency, the tTG-IgA Celikey assay displayed the best performance. The tTG-IgA assay is usually performed with an automated ELISA instrument and is therefore suitable for large-scale screening. The EMA assay is a manual, time-consuming, and subjective assay, used for confirmation of positive tTG-IgA samples. According to our results, confirmation with the EMA assay is unnecessary, since there were no tTG-IgA-positive/EMA-negative samples. The same conclusion was reached in a comparative study of 10 different tTG IgA/IgG assays. The tTG-IgA Immulite 2000 assay utilizes a human recombinant tTG antigen on a random access platform, with the advantages of a sensitive chemiluminescence signal and high throughput. To the best of our knowledge, there are no previous publications evaluating this new immunoassay. According to our data, overall performance was comparable to that of the other serology assays, but test efficiency was lower than that of the tTG-IgA Celikey ELISA. The tTG-IgA Immulite 2000 assay may seem attractive as a front-line screening kit, though one must keep in mind that the reduced specificity may lead to the performance of too many confirmatory tests or unnecessary biopsies. The tTG-IgG assay and the DGP (IgG+IgA) screen were initially performed for detecting CD among the IgA deficient. Since only one biopsy-confirmed CD-positive patient in our study was IgA deficient, no conclusive results may be reached.
On the other hand, the tTG-IgG Celikey assay is not suitable for testing IgA-sufficient samples. The extremely low sensitivity obtained may be due to the fact that IgA Abs have higher avidity for the tTG antigen, so that subsequent IgG Ab binding is reduced. The DGP (IgG+IgA) screen has been reported by others as an excellent substitute for the previous nonspecific gliadin assays. In our study, the sensitivity of the DGP (IgG+IgA) screen was higher and the specificity was lower than those reported in two recent studies. We have concentrated on children at high risk who were defined as CD positive according to biopsy grades Marsh 1 to Marsh 3c. The above-cited studies examined adults, where only Marsh scores of 3b to 3c or Marsh scores of 3a to 3c were considered positive. Nevertheless, other studies have shown the DGP assays to be as sensitive and specific as the tTG-IgA and EMA assays. In conclusion, the reported sensitivities and specificities of the DGP assays vary significantly. In light of the positive association between Ab concentration and Marsh grading, we set out to examine the proposal of Barker et al., whereby very high tTG-IgA levels would be sufficient for CD diagnosis of symptomatic patients, thus eliminating the need for small bowel biopsy. Since different kits use different cutoff points and there is no standardization, the proposal of Barker et al. should be examined with caution regarding other tTG-IgA assays. Our data confirm that for high-risk children, strong Ab levels could predict villous atrophy (Marsh 3a to Marsh 3c) with high sensitivity (92.8 to 97.9%, depending on the kit and cutoff used). In the remaining 2.1 to 7.2% of cases with strong Ab titers, Marsh 2 grading was observed, which is strongly suggestive of CD. These results are in agreement with those of a recently published additional study of pediatric and adult patients.
In conclusion, our data reveal that biopsy-proven CD was found in a large proportion of children with a wide range of classical and atypical symptoms. Younger children more frequently exhibited severe biopsy findings, together with intense clinical indications and higher Ab concentrations. The five serology assays varied in their performance levels and appeared to exhibit lower specificities in the clinical setting than those previously reported. The tTG-IgA Celikey kit demonstrated the best test efficiency for the studied population.
/*
* #%L
* Alfresco Records Management Module
* %%
* Copyright (C) 2005 - 2019 Alfresco Software Limited
* %%
* This file is part of the Alfresco software.
* -
* If the software was purchased under a paid Alfresco license, the terms of
* the paid license agreement will prevail. Otherwise, the software is
* provided under the following open source license terms:
* -
* Alfresco is free software: you can redistribute it and/or modify
* it under the terms of the GNU Lesser General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
* -
* Alfresco is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Lesser General Public License for more details.
* -
* You should have received a copy of the GNU Lesser General Public License
* along with Alfresco. If not, see <http://www.gnu.org/licenses/>.
* #L%
*/
package org.alfresco.module.org_alfresco_module_rm.test.integration.transfer;
import static org.alfresco.module.org_alfresco_module_rm.action.impl.CompleteEventAction.PARAM_EVENT_NAME;
import static org.alfresco.module.org_alfresco_module_rm.role.FilePlanRoleService.ROLE_RECORDS_MANAGER;
import static org.alfresco.module.org_alfresco_module_rm.test.util.CommonRMTestUtils.DEFAULT_DISPOSITION_AUTHORITY;
import static org.alfresco.module.org_alfresco_module_rm.test.util.CommonRMTestUtils.DEFAULT_DISPOSITION_INSTRUCTIONS;
import static org.alfresco.module.org_alfresco_module_rm.test.util.CommonRMTestUtils.DEFAULT_EVENT_NAME;
import static org.alfresco.repo.security.authentication.AuthenticationUtil.getAdminUserName;
import static org.alfresco.repo.security.authentication.AuthenticationUtil.runAs;
import static org.alfresco.repo.site.SiteModel.SITE_CONSUMER;
import static org.alfresco.service.cmr.security.AccessStatus.ALLOWED;
import static org.alfresco.service.cmr.security.AccessStatus.DENIED;
import static org.alfresco.util.GUID.generate;
import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;
import org.alfresco.module.org_alfresco_module_rm.action.impl.CompleteEventAction;
import org.alfresco.module.org_alfresco_module_rm.action.impl.CutOffAction;
import org.alfresco.module.org_alfresco_module_rm.action.impl.TransferAction;
import org.alfresco.module.org_alfresco_module_rm.test.util.BaseRMTestCase;
import org.alfresco.repo.security.authentication.AuthenticationUtil.RunAsWork;
import org.alfresco.service.cmr.repository.NodeRef;
/**
* Test case which shows that the user who did not create a transfer folder will not be able to see it.
*
* @author <NAME>
* @since 2.3
*/
public class NoPermissionsOnTransferFolderTest extends BaseRMTestCase
{
// Test users
private String testUser1 = null;
private String testUser2 = null;
/**
* @see org.alfresco.module.org_alfresco_module_rm.test.util.BaseRMTestCase#isUserTest()
*/
@Override
protected boolean isUserTest()
{
return true;
}
/**
* @see org.alfresco.module.org_alfresco_module_rm.test.util.BaseRMTestCase#setupTestUsersImpl(org.alfresco.service.cmr.repository.NodeRef)
*/
@Override
protected void setupTestUsersImpl(NodeRef filePlan)
{
super.setupTestUsersImpl(filePlan);
// Create test users
testUser1 = generate();
createPerson(testUser1);
testUser2 = generate();
createPerson(testUser2);
// Join the RM site
siteService.setMembership(siteId, testUser1, SITE_CONSUMER);
siteService.setMembership(siteId, testUser2, SITE_CONSUMER);
// Add the test users to RM Records Manager role
filePlanRoleService.assignRoleToAuthority(filePlan, ROLE_RECORDS_MANAGER, testUser1);
filePlanRoleService.assignRoleToAuthority(filePlan, ROLE_RECORDS_MANAGER, testUser2);
}
public void testNoPermissionsOnTransferFolder()
{
doBehaviourDrivenTest(new BehaviourDrivenTest(testUser1)
{
// Records folder
private NodeRef recordsFolder = null;
// Transfer folder
private NodeRef transferFolder = null;
/**
* @see org.alfresco.module.org_alfresco_module_rm.test.util.BaseRMTestCase.BehaviourDrivenTest#given()
*/
@Override
public void given()
{
runAs(new RunAsWork<Void>()
{
public Void doWork()
{
// Create category
NodeRef category = filePlanService.createRecordCategory(filePlan, generate());
// Give filing permissions for the test users on the category
filePlanPermissionService.setPermission(category, testUser1, FILING);
filePlanPermissionService.setPermission(category, testUser2, FILING);
// Create disposition schedule
utils.createDispositionSchedule(category, DEFAULT_DISPOSITION_INSTRUCTIONS, DEFAULT_DISPOSITION_AUTHORITY, false, true, true);
// Create folder
recordsFolder = recordFolderService.createRecordFolder(category, generate());
// Make eligible for cut off
Map<String, Serializable> params = new HashMap<>(1);
params.put(PARAM_EVENT_NAME, DEFAULT_EVENT_NAME);
rmActionService.executeRecordsManagementAction(recordsFolder, CompleteEventAction.NAME, params);
// Cut off folder
rmActionService.executeRecordsManagementAction(recordsFolder, CutOffAction.NAME);
return null;
}
}, getAdminUserName());
// FIXME: This step should be executed in "when()".
// See RM-3931
transferFolder = (NodeRef) rmActionService.executeRecordsManagementAction(recordsFolder, TransferAction.NAME).getValue();
}
/**
* @see org.alfresco.module.org_alfresco_module_rm.test.util.BaseRMTestCase.BehaviourDrivenTest#when()
*/
@Override
public void when()
{
// FIXME: If the transfer step is executed here the test fails. See RM-3931
//transferFolder = (NodeRef) rmActionService.executeRecordsManagementAction(recordsFolder, TransferAction.NAME).getValue();
}
/**
* @see org.alfresco.module.org_alfresco_module_rm.test.util.BaseRMTestCase.BehaviourDrivenTest#then()
*/
@Override
public void then()
{
// Check transfer folder
assertNotNull(transferFolder);
// testUser1 should have read permissions on the transfers container
assertEquals(ALLOWED, permissionService.hasPermission(transfersContainer, READ_RECORDS));
// Check if testUser1 has filing permissions on the transfer folder
assertEquals(ALLOWED, permissionService.hasPermission(transferFolder, FILING));
runAs(new RunAsWork<Void>()
{
public Void doWork()
{
// Check transfer folder
assertNotNull(transferFolder);
// testUser2 should have read permissions on the transfers container
assertEquals(ALLOWED, permissionService.hasPermission(transfersContainer, READ_RECORDS));
                        // testUser2 should not have read permissions on the transfer folder
                        assertEquals(DENIED, permissionService.hasPermission(transferFolder, READ_RECORDS));
return null;
}
}, testUser2);
}
});
}
}
export const enum SourceType {
URL,
FILE,
ArrayBuffer
}
export const enum PlayBackState {
PREPARE = 'PREPARE',
INITIALIZED = 'INITIALIZED',
STARTED = 'STARTED',
PLAYING = 'PLAYING',
PAUSED = 'PAUSED',
STOPPED = 'STOPPED',
ENDED = 'ENDED'
}
export * from './audio';
export * from './MediaRessource';
#!/usr/bin/env python3
import os
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.ticker import MaxNLocator
# EXECUTE THIS SCRIPT IN BASE DIRECTORY!!!
NUM_EXPERIMENTS_PER_SETUP = 5
NUM_SECONDS = 10
def get_filename(protocol, payload, workload, record, thread, skew, reps, second, i):
    return f"YCSB{protocol}P{payload}W{workload}R{record}T{thread}S{second}Theta{str(skew).replace('.', '')}Reps{reps}.log{i}"
def gen_build_setups():
protocols = ["silo", "nowait", "mvto"]
payloads = [4, 100, 1024]
return [[protocol, payload] for protocol in protocols for payload in payloads]
def build():
if not os.path.exists("./build"):
os.mkdir("./build") # create build
os.chdir("./build")
if not os.path.exists("./log"):
os.mkdir("./log") # compile logs
for setup in gen_build_setups():
protocol = setup[0]
payload = setup[1]
title = "ycsb" + str(payload) + "_" + protocol
print("Compiling " + title)
os.system(
"cmake .. -DLOG_LEVEL=0 -DCMAKE_BUILD_TYPE=Release -DBENCHMARK=YCSB -DCC_ALG=" +
protocol.upper() + " -DPAYLOAD_SIZE=" + str(payload))
logfile = title + ".compile_log"
ret = os.system("make -j$(nproc) > ./log/" + logfile + " 2>&1")
        if ret != 0:
            print("Error. Stopping")
            exit(1)  # non-zero exit status signals failure to the shell
os.chdir("../") # go back to base directory
def gen_setups():
protocols = ["silo", "nowait", "mvto"]
threads = [1, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120]
setups = [
# Cicada
[100, "B", 10000000, 0.99, 16],
[100, "A", 10000000, 0.99, 1],
# TicToc ("A" is not exactly the same)
[1024, "C", 10000000, 0, 2],
[1024, "A", 10000000, 0.8, 16],
[1024, "B", 10000000, 0.9, 16]
# MOCC (Skip)
]
return [[protocol, thread, *setup]
for protocol in protocols
for thread in threads
for setup in setups]
def run_all():
os.chdir("./build/bin") # move to bin
if not os.path.exists("./res"):
os.mkdir("./res") # create result directory inside bin
for setup in gen_setups():
protocol = setup[0]
thread = setup[1]
payload = setup[2]
workload = setup[3]
record = setup[4]
skew = setup[5]
reps = setup[6]
second = NUM_SECONDS
title = "ycsb" + str(payload) + "_" + protocol
args = workload + " " + \
str(record) + " " + str(thread) + " " + \
str(second) + " " + str(skew) + " " + str(reps)
print("[{}: {}]".format(title, args))
for i in range(NUM_EXPERIMENTS_PER_SETUP):
result_file = get_filename(
protocol, payload, workload, record, thread, skew, reps, second, i)
print(" Trial:" + str(i))
ret = os.system("./" + title + " " + args +
" > ./res/" + result_file + " 2>&1")
            if ret != 0:
                print("Error. Stopping")
                exit(1)  # non-zero exit status signals failure to the shell
os.chdir("../../") # back to base directory
def get_stats_from_file(result_file):
    # Default to zero so a malformed or truncated log does not raise NameError.
    txn_cnt = abort_cnt = throughput = 0.0
    with open(result_file) as f:
        for line in f:
            line = line.strip().split()
            if not line:
                continue
            if line[0] == "commits:":
                txn_cnt = float(line[1])
            if line[0] == "sys_aborts:":
                abort_cnt = float(line[1])
            if line[0] == "Throughput:":
                throughput = float(line[1])
    return txn_cnt, abort_cnt, throughput
def tuple_to_string(tup):
payload, workload, record, skew, reps = tup
return "YCSB({})P{}R{}THETA{}REPS{}".format(workload, payload, record, str(skew).replace('.', ''), reps)
def plot_all():
# plot throughput
os.chdir("./build/bin/res") # move to result file
if not os.path.exists("./plots"):
os.mkdir("./plots") # create plot directory inside res
throughputs = {}
abort_rates = {}
for setup in gen_setups():
protocol = setup[0]
thread = setup[1]
payload = setup[2]
workload = setup[3]
record = setup[4]
skew = setup[5]
reps = setup[6]
second = NUM_SECONDS
graph_line = tuple([payload, workload, record, skew, reps])
if graph_line not in throughputs:
throughputs[graph_line] = {}
if graph_line not in abort_rates:
abort_rates[graph_line] = {}
if protocol not in throughputs[graph_line]:
throughputs[graph_line][protocol] = []
if protocol not in abort_rates[graph_line]:
abort_rates[graph_line][protocol] = []
average_throughput = 0
average_abort_rate = 0
for i in range(NUM_EXPERIMENTS_PER_SETUP):
result_file = get_filename(
protocol, payload, workload, record, thread, skew, reps, second, i)
txn_cnt, abort_cnt, throughput = get_stats_from_file(result_file)
abort_rate = abort_cnt / (abort_cnt + txn_cnt)
average_throughput += throughput
average_abort_rate += abort_rate
average_throughput /= NUM_EXPERIMENTS_PER_SETUP
average_abort_rate /= NUM_EXPERIMENTS_PER_SETUP
throughputs[graph_line][protocol].append([thread, average_throughput])
abort_rates[graph_line][protocol].append([thread, average_abort_rate])
for key in throughputs:
payload, workload, record, skew, reps = key
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 4))
markers = ['o', 'v', 's', 'p', 'P', '*', 'X', 'D', 'd', '|', '_']
marker_choice = 0
for protocol, res in throughputs[key].items():
res = np.array(res).T
ax1.plot(res[0], res[1]/(10**6),
markers[marker_choice] + '-', label=protocol)
marker_choice += 1
marker_choice = 0
for protocol, res in abort_rates[key].items():
res = np.array(res).T
ax2.plot(res[0], res[1], markers[marker_choice] + '-')
marker_choice += 1
ax1.xaxis.set_major_locator(MaxNLocator(integer=True))
ax2.xaxis.set_major_locator(MaxNLocator(integer=True))
ax1.set_xlabel(
"Thread Count ({} seconds)".format(NUM_SECONDS))
ax2.set_xlabel(
"Thread Count ({} seconds)".format(NUM_SECONDS))
ax1.set_ylabel("Throughput (Million txns/s)")
ax2.set_ylabel("Abort Rate")
ax1.grid()
ax2.grid()
        fig.legend(loc="lower center", bbox_to_anchor=(
            0.5, 0.84), ncol=len(throughputs[key]))  # one column per protocol
fig.suptitle("YCSB-{}, {} records each with {} bytes, $\\theta$ = {}, {} reps per txn".format(
workload, record, payload, skew, reps))
fig.tight_layout(rect=[0, 0, 1, 0.96])
fig.savefig("./plots/{}.png".format(tuple_to_string(key)))
        print("{}.png is saved in ./build/bin/res/plots/".format(tuple_to_string(key)))
os.chdir("../../../") # go back to base directory
if __name__ == "__main__":
build()
run_all()
plot_all()
Cross Roads or Cross Purposes? Tensions Between Military and Humanitarian Providers In October 2001, then-Secretary of State Colin Powell addressed a conference of humanitarian nongovernmental organizations (NGOs) in Washington, D.C. There, he remarked "I want you to know that I have made it clear to my staff here and to all of our ambassadors around the world that I am serious about making sure we have the best relationship with the NGOs who are such a force multiplier for us, such an important part of our combat team." Although his purpose in this address was undoubtedly to build a foundation for a whole-of-nation effort to promote democracy, respect for human rights, and the elimination of terrorism, the secretary's speech had the opposite effect, angering many of the conference's participants who felt that the US Government was seeking to co-opt their organizations by making them mere ancillaries to the war effort. From 2006 to 2007, Army Lieutenant Colonel James L. Cook was the CJ3 (Deputy for Plans and Operations) for Combined Joint Task Force (CJTF) 76, covering Regional Command (RC) South and RC East in Afghanistan. His command controlled most of the Provincial Reconstruction Teams (PRTs), and all of the American PRTs, operating in those areas of responsibility. Troubling to LTC Cook was the level of redundancy of aid and assistance programs undertaken by the military, government agencies, and the NGO community. He was confused as to why, "as operators, it was so difficult to get everyone to row together" and divide responsibilities to most efficiently and effectively use the limited resources at hand. Although he found levels of access to and cooperation with NGOs varied from project to project and NGO to NGO, Cook felt area-wide communication and cooperation were less than he thought possible and NGOs were (largely) unresponsive to his staff's efforts to streamline the distribution of reconstruction and aid monies.
Introduction Like most military and foreign policy professionals, Secretary Powell and LTC Cook have a genuine interest in helping those in need. Alleviating suffering is not their only interest, however. The Departments of State and Defense are arms of the United States government and are thus responsible to the nation and its people for advancing their interests as well as for meeting the needs of those affected by tragedy. Indeed, there is a hierarchy of interests that are served by government-sponsored humanitarian missions. First, advance the goals of the nation, and, second, deliver aid to those in need. There is nothing cynical or hypocritical about this hierarchy. As the previous passages reflect, rather than seeing these national and humanitarian ends as conflicting, both Secretary Powell and LTC Cook believed these two goals were in harmony: one supports the other. Because policy practitioners' respect for human rights and dignity is genuine, they believe their common cause with their humanitarian NGO counterparts should serve as a basis for a smooth and unproblematic partnership. True, the humanitarians might not share their hierarchy of interests, as the latter may privilege the interests of those in need of aid above the interests of the nations that deliver it. But, as the government practitioner sees no conflict between serving these two interests, this fact ought not disrupt prospects for cooperation. The unevenness in civil-military relations between militaries and nongovernmental aid-givers, sometimes cooperative, often uncooperative (even hostile), thus continues to confuse and frustrate government agents. In fact, the root of such problems stems from the fact that many in the policy community fail to appreciate that humanitarians also have a hierarchy of interests.
Humanitarians have historically been as concerned with humanitarianism as an end as much as a means, because the practice of humanitarianism redeems the aid-giver as much as it comforts the recipient--or, more precisely, the aid-giver is redeemed through providing comfort to others.
/* pyllhttp.c */
#define PY_SSIZE_T_CLEAN
#include <Python.h>
#include <stdbool.h>
#include <string.h>
#include <ctype.h>
#include "lib/llhttp.h"
#define STRING(x) #x
#define XSTRING(x) STRING(x)
#define LLHTTP_VERSION XSTRING(LLHTTP_VERSION_MAJOR) "." XSTRING(LLHTTP_VERSION_MINOR) "." XSTRING(LLHTTP_VERSION_PATCH)
typedef struct {
PyObject_HEAD
llhttp_t llhttp;
} parser_object;
static PyObject *base_error;
static PyObject *errors[] = {
#define HTTP_ERRNO_GEN(CODE, NAME, _) NULL,
HTTP_ERRNO_MAP(HTTP_ERRNO_GEN)
#undef HTTP_ERRNO_GEN
};
static PyObject *methods[] = {
#define HTTP_METHOD_GEN(NUMBER, NAME, STRING) NULL,
HTTP_METHOD_MAP(HTTP_METHOD_GEN)
#undef HTTP_METHOD_GEN
};
static int
parser_callback(_Py_Identifier *type, llhttp_t *llhttp) {
PyObject *result = _PyObject_CallMethodIdObjArgs(llhttp->data, type, NULL);
if (result)
Py_DECREF(result);
if (PyErr_Occurred())
return HPE_USER;
if (HPE_PAUSED == llhttp_get_errno(llhttp)) {
llhttp_resume(llhttp);
return HPE_PAUSED;
}
if (HPE_PAUSED_UPGRADE == llhttp_get_errno(llhttp)) {
llhttp_resume_after_upgrade(llhttp);
return HPE_PAUSED_UPGRADE;
}
return HPE_OK;
}
static int
parser_data_callback(_Py_Identifier *type, llhttp_t *llhttp, const char *data, size_t length) {
PyObject *payload = PyMemoryView_FromMemory((char*)data, length, PyBUF_READ);
PyObject *result = _PyObject_CallMethodIdObjArgs(llhttp->data, type, payload, NULL);
Py_DECREF(payload);
if (result)
Py_DECREF(result);
if (PyErr_Occurred())
return HPE_USER;
if (HPE_PAUSED == llhttp_get_errno(llhttp)) {
llhttp_resume(llhttp);
return HPE_PAUSED;
}
if (HPE_PAUSED_UPGRADE == llhttp_get_errno(llhttp)) {
llhttp_resume_after_upgrade(llhttp);
return HPE_PAUSED_UPGRADE;
}
return HPE_OK;
}
#define PARSER_CALLBACK(type) \
_Py_IDENTIFIER(type); \
static int parser_ ## type (llhttp_t *llhttp) \
{ return parser_callback(&PyId_ ## type, llhttp); }
#define PARSER_DATA_CALLBACK(type) \
_Py_IDENTIFIER(type); \
static int parser_ ## type (llhttp_t *llhttp, const char *data, size_t length) \
{ return parser_data_callback(&PyId_ ## type, llhttp, data, length); }
PARSER_CALLBACK(on_message_begin)
PARSER_DATA_CALLBACK(on_url)
PARSER_CALLBACK(on_url_complete)
PARSER_DATA_CALLBACK(on_status)
PARSER_CALLBACK(on_status_complete)
PARSER_DATA_CALLBACK(on_header_field)
PARSER_CALLBACK(on_header_field_complete)
PARSER_DATA_CALLBACK(on_header_value)
PARSER_CALLBACK(on_header_value_complete)
PARSER_CALLBACK(on_headers_complete)
PARSER_DATA_CALLBACK(on_body)
PARSER_CALLBACK(on_message_complete)
PARSER_CALLBACK(on_chunk_header)
PARSER_CALLBACK(on_chunk_complete)
llhttp_settings_t parser_settings = {
.on_message_begin = parser_on_message_begin,
.on_url = parser_on_url,
.on_url_complete = parser_on_url_complete,
.on_status = parser_on_status,
.on_status_complete = parser_on_status_complete,
.on_header_field = parser_on_header_field,
.on_header_field_complete = parser_on_header_field_complete,
.on_header_value = parser_on_header_value,
.on_header_value_complete = parser_on_header_value_complete,
.on_headers_complete = parser_on_headers_complete,
.on_body = parser_on_body,
.on_message_complete = parser_on_message_complete,
.on_chunk_header = parser_on_chunk_header,
.on_chunk_complete = parser_on_chunk_complete,
};
static PyObject *
request_new(PyTypeObject *type, PyObject *args, PyObject *kwds) {
PyObject *self = type->tp_alloc(type, 0);
if (self) {
llhttp_t *llhttp = &((parser_object*)self)->llhttp;
llhttp_init(llhttp, HTTP_REQUEST, &parser_settings);
llhttp->data = self;
}
return self;
}
static PyObject *
response_new(PyTypeObject *type, PyObject *args, PyObject *kwds) {
PyObject *self = type->tp_alloc(type, 0);
if (self) {
llhttp_t *llhttp = &((parser_object*)self)->llhttp;
llhttp_init(llhttp, HTTP_RESPONSE, &parser_settings);
llhttp->data = self;
}
return self;
}
static PyObject *
parser_execute(PyObject *self, PyObject *payload) {
llhttp_t *llhttp = &((parser_object*)self)->llhttp;
Py_buffer buffer;
if (PyObject_GetBuffer(payload, &buffer, PyBUF_SIMPLE))
return NULL;
if (!PyBuffer_IsContiguous(&buffer, 'C')) {
PyErr_SetString(PyExc_TypeError, "buffer is not contiguous");
PyBuffer_Release(&buffer);
return NULL;
}
    llhttp_errno_t error = llhttp_execute(llhttp, buffer.buf, buffer.len);
    /* Save what we need before releasing the buffer; reading
     * buffer.buf/buffer.len after PyBuffer_Release is not safe. */
    const char *buf = buffer.buf;
    Py_ssize_t length = buffer.len;
    PyBuffer_Release(&buffer);
    if (PyErr_Occurred())
        return NULL;
    switch (error) {
    case HPE_OK:
        return PyLong_FromSsize_t(length);
    case HPE_PAUSED:
    case HPE_PAUSED_UPGRADE:
    case HPE_PAUSED_H2_UPGRADE:
        return PyLong_FromSsize_t(llhttp->error_pos - buf);
    default:
        PyErr_SetString(errors[error], llhttp_get_error_reason(llhttp));
        return NULL;
    }
}
static PyObject *
parser_pause(PyObject *self) {
llhttp_t *llhttp = &((parser_object*)self)->llhttp;
llhttp_pause(llhttp);
Py_RETURN_NONE;
}
static PyObject *
parser_unpause(PyObject *self) {
llhttp_t *llhttp = &((parser_object*)self)->llhttp;
llhttp_resume(llhttp);
Py_RETURN_NONE;
}
static PyObject *
parser_upgrade(PyObject *self) {
llhttp_t *llhttp = &((parser_object*)self)->llhttp;
llhttp_resume_after_upgrade(llhttp);
Py_RETURN_NONE;
}
static PyObject *
parser_finish(PyObject *self) {
llhttp_t *llhttp = &((parser_object*)self)->llhttp;
llhttp_errno_t error = llhttp_finish(llhttp);
if (HPE_OK == error)
Py_RETURN_NONE;
PyErr_SetString(errors[error], llhttp_get_error_reason(llhttp));
return NULL;
}
static PyObject *
parser_reset(PyObject *self) {
llhttp_t *llhttp = &((parser_object*)self)->llhttp;
llhttp_reset(llhttp);
Py_RETURN_NONE;
}
static PyObject * parser_dummy_noargs(PyObject *self) { Py_RETURN_NONE; }
static PyObject * parser_dummy_onearg(PyObject *self, PyObject *arg) { Py_RETURN_NONE; }
static PyMethodDef parser_methods[] = {
{ "execute", (PyCFunction)parser_execute, METH_O },
{ "pause", (PyCFunction)parser_pause, METH_NOARGS },
{ "unpause", (PyCFunction)parser_unpause, METH_NOARGS },
{ "upgrade", (PyCFunction)parser_upgrade, METH_NOARGS },
{ "finish", (PyCFunction)parser_finish, METH_NOARGS },
{ "reset", (PyCFunction)parser_reset, METH_NOARGS },
{ "on_message_begin", (PyCFunction)parser_dummy_noargs, METH_NOARGS },
{ "on_url", (PyCFunction)parser_dummy_onearg, METH_O },
{ "on_url_complete", (PyCFunction)parser_dummy_noargs, METH_NOARGS },
{ "on_status", (PyCFunction)parser_dummy_onearg, METH_O },
{ "on_status_complete", (PyCFunction)parser_dummy_noargs, METH_NOARGS },
{ "on_header_field", (PyCFunction)parser_dummy_onearg, METH_O },
{ "on_header_field_complete", (PyCFunction)parser_dummy_noargs, METH_NOARGS },
{ "on_header_value", (PyCFunction)parser_dummy_onearg, METH_O },
{ "on_header_value_complete", (PyCFunction)parser_dummy_noargs, METH_NOARGS },
{ "on_headers_complete", (PyCFunction)parser_dummy_noargs, METH_NOARGS },
{ "on_body", (PyCFunction)parser_dummy_onearg, METH_O },
{ "on_message_complete", (PyCFunction)parser_dummy_noargs, METH_NOARGS },
{ "on_chunk_header", (PyCFunction)parser_dummy_noargs, METH_NOARGS },
{ "on_chunk_complete", (PyCFunction)parser_dummy_noargs, METH_NOARGS },
{ NULL }
};
static PyObject *
parser_method(PyObject *self, void *closure) {
llhttp_t *llhttp = &((parser_object*)self)->llhttp;
if (llhttp->type != HTTP_REQUEST)
Py_RETURN_NONE;
if (!llhttp->http_major && !llhttp->http_minor)
Py_RETURN_NONE;
PyObject * method = methods[llhttp->method];
Py_INCREF(method);
return method;
}
static PyObject *
parser_major(PyObject *self, void *closure) {
llhttp_t *llhttp = &((parser_object*)self)->llhttp;
if (!llhttp->http_major && !llhttp->http_minor)
Py_RETURN_NONE;
return PyLong_FromUnsignedLong(llhttp->http_major);
}
static PyObject *
parser_minor(PyObject *self, void *closure) {
llhttp_t *llhttp = &((parser_object*)self)->llhttp;
if (!llhttp->http_major && !llhttp->http_minor)
Py_RETURN_NONE;
return PyLong_FromUnsignedLong(llhttp->http_minor);
}
static PyObject *
parser_content_length(PyObject *self, void *closure) {
llhttp_t *llhttp = &((parser_object*)self)->llhttp;
if (!(llhttp->flags & F_CONTENT_LENGTH))
Py_RETURN_NONE;
return PyLong_FromUnsignedLong(llhttp->content_length);
}
static PyObject *
parser_get_lenient_headers(PyObject *self, void *closure) {
llhttp_t *llhttp = &((parser_object*)self)->llhttp;
return PyBool_FromLong(llhttp->lenient_flags & LENIENT_HEADERS);
}
static int
parser_set_lenient_headers(PyObject *self, PyObject *value, void *closure) {
llhttp_t *llhttp = &((parser_object*)self)->llhttp;
llhttp_set_lenient_headers(llhttp, PyObject_IsTrue(value));
return 0;
}
static PyObject *
parser_get_lenient_chunked_length(PyObject *self, void *closure) {
llhttp_t *llhttp = &((parser_object*)self)->llhttp;
return PyBool_FromLong(llhttp->lenient_flags & LENIENT_CHUNKED_LENGTH);
}
static int
parser_set_lenient_chunked_length(PyObject *self, PyObject *value, void *closure) {
llhttp_t *llhttp = &((parser_object*)self)->llhttp;
llhttp_set_lenient_chunked_length(llhttp, PyObject_IsTrue(value));
return 0;
}
static PyObject *
parser_get_lenient_keep_alive(PyObject *self, void *closure) {
llhttp_t *llhttp = &((parser_object*)self)->llhttp;
return PyBool_FromLong(llhttp->lenient_flags & LENIENT_KEEP_ALIVE);
}
static int
parser_set_lenient_keep_alive(PyObject *self, PyObject *value, void *closure) {
llhttp_t *llhttp = &((parser_object*)self)->llhttp;
llhttp_set_lenient_keep_alive(llhttp, PyObject_IsTrue(value));
return 0;
}
static PyObject *
parser_message_needs_eof(PyObject *self, void *closure) {
llhttp_t *llhttp = &((parser_object*)self)->llhttp;
return PyBool_FromLong(llhttp_message_needs_eof(llhttp));
}
static PyObject *
parser_should_keep_alive(PyObject *self, void *closure) {
llhttp_t *llhttp = &((parser_object*)self)->llhttp;
return PyBool_FromLong(llhttp_should_keep_alive(llhttp));
}
static PyObject *
parser_is_paused(PyObject *self, void *closure) {
llhttp_t *llhttp = &((parser_object*)self)->llhttp;
return PyBool_FromLong(HPE_PAUSED == llhttp_get_errno(llhttp));
}
static PyObject *
parser_is_upgrading(PyObject *self, void *closure) {
llhttp_t *llhttp = &((parser_object*)self)->llhttp;
switch (llhttp_get_errno(llhttp)) {
case HPE_PAUSED_UPGRADE:
case HPE_PAUSED_H2_UPGRADE:
Py_RETURN_TRUE;
break;
default:
Py_RETURN_FALSE;
break;
}
}
static PyObject *
parser_is_busted(PyObject *self, void *closure) {
llhttp_t *llhttp = &((parser_object*)self)->llhttp;
switch (llhttp_get_errno(llhttp)) {
case HPE_OK:
case HPE_PAUSED:
case HPE_PAUSED_UPGRADE:
Py_RETURN_FALSE;
default:
Py_RETURN_TRUE;
}
}
static PyObject *
parser_error(PyObject *self, void *closure) {
llhttp_t *llhttp = &((parser_object*)self)->llhttp;
if (HPE_OK == llhttp_get_errno(llhttp))
Py_RETURN_NONE;
return PyUnicode_FromString(llhttp_get_error_reason(llhttp));
}
static PyGetSetDef parser_getset[] = {
{ "method", parser_method },
{ "major", parser_major },
{ "minor", parser_minor },
{ "content_length", parser_content_length },
{ "lenient_headers", parser_get_lenient_headers, parser_set_lenient_headers },
{ "lenient_chunked_length", parser_get_lenient_chunked_length, parser_set_lenient_chunked_length },
{ "lenient_keep_alive", parser_get_lenient_keep_alive, parser_set_lenient_keep_alive },
{ "message_needs_eof", parser_message_needs_eof },
{ "should_keep_alive", parser_should_keep_alive },
{ "is_paused", parser_is_paused },
{ "is_upgrading", parser_is_upgrading },
{ "is_busted", parser_is_busted },
{ "error", parser_error },
{ NULL }
};
static void
parser_dealloc(PyObject *self) {
Py_TYPE(self)->tp_free((PyObject*)self);
}
static PyType_Slot request_slots[] = {
{Py_tp_doc, "llhttp request parser"},
{Py_tp_new, request_new},
{Py_tp_dealloc, parser_dealloc},
{Py_tp_methods, parser_methods},
{Py_tp_getset, parser_getset},
{0, 0},
};
static PyType_Spec request_spec = {
"llhttp.Request",
sizeof(parser_object),
0,
Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE,
request_slots,
};
static PyType_Slot response_slots[] = {
{Py_tp_doc, "llhttp response parser"},
{Py_tp_new, response_new},
{Py_tp_dealloc, parser_dealloc},
{Py_tp_methods, parser_methods},
{Py_tp_getset, parser_getset},
{0, 0},
};
static PyType_Spec response_spec = {
"llhttp.Response",
sizeof(parser_object),
0,
Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE,
response_slots,
};
static struct PyModuleDef llhttp_module = {
PyModuleDef_HEAD_INIT,
.m_name = "llhttp",
.m_doc = "llhttp wrapper",
.m_size = -1,
};
static char *
snake_to_camel(char * string) {
bool upper = true;
char * camel = string;
for (const char * snake = string ; *snake ; ++snake) {
if (isalpha(*snake)) {
*camel++ = upper ? toupper(*snake) : tolower(*snake);
} else if (isdigit(*snake)) {
*camel++ = *snake;
}
upper = !isalpha(*snake);
}
*camel = '\0';
return string;
}
PyMODINIT_FUNC
PyInit___llhttp(void) {
PyObject *m = PyModule_Create(&llhttp_module);
if (!m)
return NULL;
if (PyModule_AddStringConstant(m, "version", LLHTTP_VERSION))
goto fail;
if ((base_error = PyErr_NewException("llhttp.Error", NULL, NULL))) {
Py_INCREF(base_error);
PyModule_AddObject(m, "Error", base_error);
#define HTTP_ERRNO_GEN(CODE, NAME, _) \
if (CODE != HPE_OK && CODE != HPE_PAUSED && CODE != HPE_PAUSED_UPGRADE) { \
char long_name[] = "llhttp." #NAME "_Error"; \
char *short_name = snake_to_camel(long_name + strlen("llhttp.")); \
if ((errors[CODE] = PyErr_NewException(long_name, base_error, NULL))) { \
Py_INCREF(errors[CODE]); \
PyModule_AddObject(m, short_name, errors[CODE]); \
} \
}
HTTP_ERRNO_MAP(HTTP_ERRNO_GEN)
#undef HTTP_ERRNO_GEN
}
#define HTTP_METHOD_GEN(NUMBER, NAME, STRING) \
methods[HTTP_ ## NAME] = PyUnicode_FromStringAndSize(#STRING, strlen(#STRING));
HTTP_METHOD_MAP(HTTP_METHOD_GEN)
#undef HTTP_METHOD_GEN
PyObject *request_type = PyType_FromSpec(&request_spec);
if (!request_type)
goto fail;
if (PyModule_AddObject(m, request_spec.name + strlen("llhttp."), request_type)) {
Py_DECREF(request_type);
goto fail;
}
PyObject *response_type = PyType_FromSpec(&response_spec);
if (!response_type)
goto fail;
if (PyModule_AddObject(m, response_spec.name + strlen("llhttp."), response_type)) {
Py_DECREF(response_type);
goto fail;
}
return m;
fail:
Py_DECREF(m);
return NULL;
}
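The snake_to_camel helper above turns llhttp error enum names such as INVALID_METHOD into Python exception class names such as InvalidMethod. A line-for-line Python port of the same rule (illustrative only, not part of the extension) makes it easy to test:

```python
def snake_to_camel(name: str) -> str:
    """Mimic the C helper: letters after a separator or digit are uppercased,
    other letters lowercased; digits are kept; everything else is dropped."""
    out = []
    upper = True
    for ch in name:
        if ch.isalpha():
            out.append(ch.upper() if upper else ch.lower())
        elif ch.isdigit():
            out.append(ch)
        # next letter is uppercased unless the current char was a letter
        upper = not ch.isalpha()
    return "".join(out)

print(snake_to_camel("INVALID_METHOD"))   # InvalidMethod
```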
import os
import wx
class PhotoCtrl(wx.App):
def __init__(self, redirect=False, filename=None):
wx.App.__init__(self, redirect, filename)
self.frame = wx.Frame(None, title='Photo Control')
self.panel = wx.Panel(self.frame)
self.PhotoMaxSize = 240
self.createWidgets()
self.frame.Show()
def createWidgets(self):
instructions = 'Browse for an image'
        img = wx.Image(240, 240)  # wx.EmptyImage is removed in wxPython Phoenix
        self.imageCtrl = wx.StaticBitmap(self.panel, wx.ID_ANY,
                                         wx.Bitmap(img))  # wx.BitmapFromImage is deprecated
instructLbl = wx.StaticText(self.panel, label=instructions)
self.photoTxt = wx.TextCtrl(self.panel, size=(200,-1))
browseBtn = wx.Button(self.panel, label='Browse')
browseBtn.Bind(wx.EVT_BUTTON, self.onBrowse)
self.mainSizer = wx.BoxSizer(wx.VERTICAL)
self.sizer = wx.BoxSizer(wx.HORIZONTAL)
self.mainSizer.Add(wx.StaticLine(self.panel, wx.ID_ANY),
0, wx.ALL|wx.EXPAND, 5)
self.mainSizer.Add(instructLbl, 0, wx.ALL, 5)
self.mainSizer.Add(self.imageCtrl, 0, wx.ALL, 5)
self.sizer.Add(self.photoTxt, 0, wx.ALL, 5)
self.sizer.Add(browseBtn, 0, wx.ALL, 5)
self.mainSizer.Add(self.sizer, 0, wx.ALL, 5)
self.panel.SetSizer(self.mainSizer)
self.mainSizer.Fit(self.frame)
self.panel.Layout()
def onBrowse(self, event):
"""
Browse for file
"""
        wildcard = "Image files (*.jpg)|*.jpg"
dialog = wx.FileDialog(None, "Choose a file",
wildcard=wildcard)
        if dialog.ShowModal() == wx.ID_OK:
            try:
                self.photoTxt.SetValue(dialog.GetPath())
                self.onView()  # only refresh the preview when a file was actually chosen
            except Exception as e:
                print(e)
        dialog.Destroy()
def onView(self):
filepath = self.photoTxt.GetValue()
img = wx.Image(filepath, wx.BITMAP_TYPE_ANY)
# scale the image, preserving the aspect ratio
W = img.GetWidth()
H = img.GetHeight()
        if W > H:
            NewW = self.PhotoMaxSize
            NewH = self.PhotoMaxSize * H // W
        else:
            NewW = self.PhotoMaxSize * W // H
            NewH = self.PhotoMaxSize
        img = img.Scale(NewW, NewH)  # Scale expects integer dimensions
        self.imageCtrl.SetBitmap(wx.Bitmap(img))  # wx.BitmapFromImage is deprecated
self.panel.Refresh()
if __name__ == '__main__':
app = PhotoCtrl()
app.MainLoop()
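The aspect-ratio arithmetic in onView above can be factored into a small standalone helper (the name fit_within is my own; the wxPython code keeps the computation inline):

```python
def fit_within(width: int, height: int, max_size: int) -> tuple[int, int]:
    """Return (w, h) scaled so the longer side equals max_size,
    preserving the aspect ratio, as onView does."""
    if width > height:
        return max_size, max(1, round(max_size * height / width))
    return max(1, round(max_size * width / height)), max_size

# A 480x240 photo displayed with PhotoMaxSize = 240:
print(fit_within(480, 240, 240))  # (240, 120)
```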
Microbial evaluation of heat cured silicone versus heat cured acrylic resin in maxillary obturator

Purpose: The aim of this trial was to compare the microbial colonization of heat-cured silicone and heat-cured acrylic resin in obturators restoring acquired maxillary defects.
Material and methods: The experiment was carried out on six partially edentulous patients having unilateral total maxillectomy defects approaching the midline (class I Aramany classification) who were in need of a definitive obturator. Selected patients received a metal framework prosthesis with a heat-cured acrylic resin bulb extending into the surgical site. At the time of delivery, a swab was obtained from each patient to represent the baseline for group I. Patients were recalled two and four weeks after wearing the obturator for microbiological evaluation, and a swab was taken each time from the same place. In group II, the heat-cured acrylic resin was replaced with heat-cured silicone and a swab was obtained on the day of insertion as a baseline for group II. Patients were recalled two and four weeks after the insertion of the relined obturator for microbiological evaluation, and a swab was taken each time from the same place. Swabs were obtained from the nasal surface of the surgical defect and immediately cultivated on three different media (Blood Agar, Sabouraud Dextrose Agar, and MacConkey agar) and incubated for microbiological evaluation. The identification and quantification of the isolated microorganisms were performed using the conventional microbiological cultivation method. Finally, the collected data were tabulated and statistically analyzed.
Results: Statistical analysis of the collected data showed that the difference between the two groups was insignificant. However, patients were more satisfied with obturators lined with heat-cured silicone.
Conclusion: It was concluded that within the limitations of this study, both acrylic resin and resilient lining materials could be used as a material for obturator construction in maxillofacial cases. However, a longer follow-up period might show different results.

Elkhashab et al., Bulletin of the National Research Centre 46:120 (Open Access). *Correspondence: [email protected]. Fixed and Removable Department, National Research Center, Giza, Egypt. Full list of author information is available at the end of the article.

Background
Restoring patients with maxillofacial imperfections is one of the most challenging treatments of the stomatognathic system. Usually, maxillary defects result from surgical elimination of oral tumors. The main objective of prosthetic obturation is the closure of the maxillary defect by an obturator in order to avoid hyper-nasal speech and fluid escape into the nasal cavity. Since the construction of an obturator for a maxillary defect requires optimum retention, stability, and obturation of the defect, the weight of the obturator must be kept to a minimum to counteract the dislodging pull of gravity. This could be achieved by constructing a hollow bulb obturator. Furthermore, relining of the palatal part of the obturator with a soft liner greatly enhances the comfort of the patient, as it is flexible and protects the integrity of the adjoining moving tissues. Patients with acquired maxillary defects suffer from traumatized mucosa with less tolerance to masticatory forces; therefore, resilient liners are essential for their cushioning effect and ability to distribute the stresses evenly and uniformly at the mucosa lining interface. One report fabricated a perfectly adapted silicone obturator engaging favorable undercuts within the defect.
Regarding the retention, stability, and fluid leakage of the prosthesis, the patient was satisfied and, during the follow-up appointment, expressed remarkable improvement in speech and prosthesis comfort. The adherence of microbes to host cells or polymers such as acrylic resin and soft liners is necessary for colonization and the development of pathogenesis and infection. Oral mucositis induced in maxillofacial patients receiving radiotherapy may be explained by the alteration of oral flora and microbial colonization due to radiation-induced xerostomia. Investigations have reported that continuous swallowing or aspiration of microorganisms from denture plaque exposes patients to the risk of unexpected infections. Different Gram-positive and Gram-negative micro-organisms are present in the microflora of denture plaque. Numerous pathogenic and opportunistic bacteria and fungi from patients' prostheses have been identified. Staphylococcus species were the dominant Gram-positive cocci, and wide arrays of Gram-negative rods were identified, including Pseudomonas aeruginosa, Enterobacter cloacae, and Klebsiella pneumoniae. There is a large body of evidence demonstrating that Candida is able to adhere to acrylic resin, typically the first step leading to the development of denture stomatitis of the adjacent mucosa. Candida adheres specifically, or through a layer of denture plaque, to the denture base (polymethylmethacrylate, PMMA). Without this adherence, micro-organisms would be eliminated from the oral cavity when saliva or food is swallowed. Adherence of Candida to acrylic resin denture materials is influenced by the degree of surface roughness in addition to other factors such as the presence of other microorganisms and a diet rich in sucrose. It was also noted that C. albicans, being a relatively hydrophilic species, adheres to surfaces in larger amounts as the surface wettability increases.
Several studies have revealed that rough acrylic resin surfaces are more responsible for bacterial accumulation and plaque formation than smooth surfaces. Adhesion of microorganisms on the surface of soft liners depends on the surface topography and composition of these materials. It was found that soft liners are more favorable for microbial colonization than acrylic resin, leading to surface deterioration. Microbiological investigations have been used widely in prosthodontics research; samples taken from appliances, teeth, ridges, or implants have been investigated for several reasons and by several methods. Hence, this trial was performed to provide an ecological evaluation of the oral environment and bacterial or candidal growth when the hard acrylic resin of the obturator bulb is replaced with heat-cured silicone. The research question stated here was: "In maxillectomy patients, will an obturator with heat-cured silicone result in less microbial colonization than a conventional obturator?" This trial was performed following the verifications made in the Consolidated Standards of Reporting Trials (CONSORT) statement for reporting RCTs.

Trial design and setting
The study was designed as a crossover clinical trial. Six partially edentulous patients having unilateral total maxillectomy defects approaching the midline (class I Aramany classification) were selected from the outpatient clinic, Prosthodontics Department, Faculty of Oral and Dental Medicine, Cairo University, or referred from the National Cancer Institute.
Control group (Group I): all the patients first received a definitive obturator fabricated using conventional heat-cured acrylic resin.
Study group (Group II): after that, the obturator was removed from all the patients and then relined with heat-cured soft silicone material.
Trial registration
The study protocol was approved by the Evidence-based Dentistry Committee, the Prosthodontics Department Board, and the Ethics Committee of the Faculty of Oral and Dental Medicine, Cairo University.

Inclusion criteria
1. Partially edentulous patients having unilateral total maxillectomy defects approaching the midline.
2. At least four months had elapsed from the date of surgery.
3. Adult patients with ages ranging between 20 and 60 years, with an average age of 45 years.
4. All patients had a full set of natural teeth on the intact side of the arch and an intact opposing arch.
5. Cooperative patients able to follow instructions.
6. Remaining palatal mucosa free from inflammatory conditions.

Exclusion criteria
1. Patients with systemic disorders that might disturb oral ecology, such as diabetes mellitus, blood diseases, or T.B.
2. Patients receiving chemotherapy, radiotherapy, or any drugs that could affect bacterial balance during the study period.
3. Smoking patients, as smoking affects the healing process.
4. Uncooperative patients, as they might not return for follow-up.

Patient examination
Patient assessment was done to determine whether the patient met the study inclusion criteria. These assessments included a medical history questionnaire, a clinical examination, and a radiographic assessment.

Patient consent form
Diagnostic data, suggested treatment, and alternatives were reviewed with participants for this study. Illustrative consultation, treatment period, prosthodontic device, and ultimate difficulties as well as hazards were all written in a consent form. The patients were fully educated about the possible consequences of the proposed research and signed a special written consent form designed for this purpose. All patients were requested to sign an informed consent form; this was translated into the Arabic language to be understood by the patients. The trial was conducted in accordance with the Declaration of Helsinki.
Interventions and study procedures
A conventional obturator was fabricated for all patients following the traditional steps.

Construction of the definitive obturator
A suitable maxillary perforated stock tray was selected according to the patient's arch form and size. The tray was modified either by reduction or by addition of modeling wax 1 in order to cover the area of the defect and allow the impression material to extend to the required borders. Training appliances and muscle relaxants were prescribed for patients suffering from trismus. Topical anesthesia was applied to the defect to reduce pain during the procedure, and undesirable undercuts were blocked out with vaselinized gauze. Upper and lower primary impressions were made using irreversible hydrocolloid impression material and poured into dental stone 2 to obtain study casts. Surveying 3 of the maxillary diagnostic cast was carried out.

A. Mouth preparation
Mouth preparation was done according to the planned design.
Support: Support was achieved through the palatal plate major connector; in addition, multiple occlusal rest seats were prepared distal to the first premolar, mesial to the second premolar, distal to the first molar, and mesial to the second molar. A cingulum rest was prepared just above the cingulum of the canine tooth.
Retention: Retention was achieved through double Aker's clasps on the premolars and molars with alternating buccal and lingual retention.
Bracing and reciprocation: Bracing and reciprocation were obtained through the double Aker's clasps and the minor connectors.

B. Final impression
A custom-made acrylic tray with a wax spacer 4 was fabricated. Any undesirable undercuts in the defect side were blocked out using vaselinized gauze. A rubber base adhesive was applied to the fitting surface of the special tray, and the final impression was made using medium-body rubber base 5. The impression was disinfected and assessed for extension, anatomical landmarks, rolled borders, and surface details. The final impression was then boxed and poured into dental stone 6 to obtain the master cast.

C. Framework construction
On the obtained master cast, relief and block out were made. The planned design was then transferred to the refractory cast and the wax pattern was fabricated. The refractory cast was then invested, burnt out, and cast. The framework was trimmed, finished, polished, and tried in the patient's mouth. The fitting surface of the metal framework was coated with pressure indicating paste (PIP) 7 before insertion, and any interference was eliminated. It was checked for fitness, retention, extension, and stability, and finally it was checked for occlusion. After the metal framework try-in, a framework with trial denture base and occlusal rim was fabricated.

D. Centric relation record and setting up of teeth
The framework with trial denture base and occlusal rims was inserted in the patient's mouth, and the patient was asked to close with gentle force on softened wax 8 so that the occlusal imprints of the opposing teeth were recorded. Then the upper and lower casts were mounted on a semi-adjustable articulator 9. The shade, size, and form of the teeth 10 were determined; setting up of the artificial teeth was carried out, arranged following the guidelines of the lingualized concept of occlusion.

E. Final try-in stage
The waxed-up definitive obturator was tried in the patient's mouth and checked for retention and comfort. Extension of the posterior and lateral borders of the obturator and restoration of the normal facial contour were also evaluated.

F. Processing of the obturator
Definitive obturators were fabricated using conventional heat-cured acrylic resin 11 (group I). During packing, a hollowed obturator bulb was constructed using the lost salt technique. A long curing cycle was performed (74 °C for 9 h).
Adequate time was allowed for proper cooling of the flask after curing, prior to the deflasking procedure. The obturator was highly finished and polished.

G. Obturator insertion
The finished obturator was checked carefully for blebs, bubbles, and artifacts in either metal or acrylic, and the borders were checked for sharp edges. At the time of delivery (Figs. 1, 2), the prosthesis was checked intra-orally for proper extension, retention, adaptation, pressure areas, and occlusion. The patient was instructed to come back the next day, and any necessary adjustment was carried out.

Patient instructions
One week before obturator insertion, patients were instructed to remove the interim prosthesis all day except during eating and to perform chlorhexidine mouthwash, in addition to Penicillin 500 mg and Metronidazole 500 mg; during this period any other medications that might alter the oral flora were avoided. After obturator insertion, patients were instructed to wear the prosthesis during the daytime and while eating, and to remove it from the mouth for approximately 8 h daily (sleeping hours) to reduce trauma to the underlying mucosa. Patients were instructed to avoid any medications or mouthwashes. The prosthesis should be cleaned after each meal under running water over a basin filled with water to avoid accidental dropping and breakage. While not in use it should be placed in a container with tap water.

Microbiological samples for obturator with heat cured acrylic resin bulb
At the time of obturator insertion: one swab was taken from each patient from the nasal surface of the surgical cavity. It was considered a baseline for each patient.
After obturator insertion: one swab was taken at the following follow-up periods:

Relining of the obturator
After the fourth-week swab was obtained, 2 mm of the heat-cured acrylic resin bulb were reduced and the obturator was relined with chair-side soft liner.
The patients were instructed not to remove the prosthesis for the next 48 h and to come back for further relining procedures. While the prosthesis was still in place, an overall impression using a hydrocolloid impression material 13 in a perforated stock tray was made. The obtained cast with the obturator was flasked. After deflasking, the chair-side soft liner was replaced with heat-cured soft silicone material 14 (group II); prior to application of the silicone liner, an adhesive primer with a solvating effect must be used on the denture base. At the time of delivery, the prosthesis was checked intra-orally for proper extension, retention, adaptation, pressure areas, and occlusion. The patient was instructed to come back the next day to adjust any problem related to the prosthesis (Fig. 3).

Microbiological samples for obturator with heat cured silicone bulb
At the time of obturator insertion: one swab was taken from each patient from the nasal surface of the surgical cavity. It was considered a baseline for each patient.
After obturator insertion: one swab was taken at the following follow-up periods:

Microbiological procedures
For all patients, microbiological samples were collected and evaluated by semi-quantitative culture of microorganisms in the following manner. Isolation of microorganisms was carried out using gamma-sterilized disposable swabs (Fig. 4). Microbial growth evaluation was made as follows: the swabs were emulsified in 1 ml sterile nutrient broth; then, after good shaking, it was added to 9 ml

Morphological examination
Candida (Fig. 6): appears on Sabouraud Dextrose Agar plates as cream-colored pasty colonies. The colonies have a distinctive yeast smell. Appears as large dark violet budding organisms in Gram stain.
Staphylococcus aureus (Fig. 7): appears on Blood Agar plates as yellow or occasionally white colonies, 1-2 mm in diameter, surrounded by a clear zone of complete hemolysis (β-hemolysis). Pigment is less pronounced in young colonies.
Colonies are slightly raised and easily emulsified. Appears as Gram-positive cocci arranged in clusters.
Gram-negative bacteria: appear pink to red or pale on MacConkey Agar plates.

Statistical analysis
In this study, data of candidal and bacterial colonies were coded, edited, collected, and analyzed as means and standard deviations for both groups (before and after relining) at baseline (before insertion) and at the 2-week and 4-week follow-up periods. Statistical analysis was carried out using the Microsoft Excel 2010 program, while significance testing was performed using SPSS® 20 (Statistical Package for Scientific Studies, SPSS, Inc., Chicago, IL, USA) and Minitab® statistical software Ver. 16. Collected values were calculated according to the equation CFU/µl = (total number of colonies counted on the plate) × (1 / saline dilution) / 10. Data were explored for normality using the Kolmogorov-Smirnov test and the Shapiro-Wilk test. Exploration of the data revealed that the collected values were not normally distributed. The Kruskal-Wallis test followed by a multiple comparisons test was performed to test the significance between the follow-up periods within each group, to detect the effect of time on candidal and bacterial growth. In addition, the Mann-Whitney U test was performed to test the significance between both groups at each follow-up period, to compare candidal and bacterial growth between both groups. A probability level of P ≤ 0.05 was considered statistically significant.

Comparison between both groups
Regarding candidal colonization
For evaluation of candidal growth change over each follow-up period, the mean difference was calculated for each time interval of each group. The Mann-Whitney U test was performed to detect the significance between both groups at each time interval, which revealed that there was an insignificant difference during the follow-up period, as shown in Table 1 and Fig. 8.
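The colony-count equation above is simple enough to check with a short script (the 10^-2 dilution and the count of 45 in the example are illustrative values, not data from the study):

```python
def cfu_per_ul(colonies: int, saline_dilution: float) -> float:
    """CFU/ul = total colonies counted on the plate x (1 / saline dilution) / 10,
    per the equation given in the text."""
    return colonies * (1 / saline_dilution) / 10

# 45 colonies counted on a plate prepared from a 10^-2 saline dilution:
print(cfu_per_ul(45, 1e-2))  # 450.0
```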
Regarding bacterial colonization
For evaluation of bacterial growth change over each follow-up period, the mean difference was calculated for each time interval of each group. The Mann-Whitney U test was performed to detect the significance between both groups at each time interval, which revealed that there was an insignificant difference during the follow-up period except at the 2-4 week time interval, which was significant, as shown in Table 2 and Fig. 9.

Discussion
This study was performed on six partially edentulous patients having unilateral total maxillectomy defects approaching the midline (class I Aramany classification) who underwent surgery at least six months before the study was initiated, which is quite sufficient to prepare the patient physically and emotionally for the prosthetic intervention. This trial was accepted by the Ethical Committee, the Evidence-based Dentistry Committee, and the Prosthodontics Department Board of the Faculty of Dentistry, Cairo University, Egypt. This study has been planned, performed, and reported intentionally using the best-presented methodology, according to the principles of evidence-based medicine. The results of this study showed an insignificant difference between both materials used for the construction of the obturator bulb. However, the acrylic resin group showed higher results after two weeks. The increased microbial colonization of the acrylic resin group is in agreement with that postulated by Bettencourt et al., who concluded that the inferior fit and retention of the acrylic resin obturator bulb, in addition to the toxic effect on the oral cells and tissues caused by the residual monomer resulting from the polymerization process, leads to tissue trauma that enhances microbial colonization. This is in controversy with what was reported by some authors, that resilient liners present greater retention of candida than acrylic resin. Pereira-Cenci et al.
stated that patients wearing obturators, due to their fear of frequent insertion and removal for good oral hygiene, allow denture plaque to accumulate on the denture base material (PMMA). This accumulation allows microorganisms to adhere to the surface, resulting in denture stomatitis. This is in accordance with the results obtained in the present study, as the acrylic resin showed greater colonization than the resilient liner. As reported by Verran and Maryan, greater adherence of candida and bacteria was observed in some soft lining materials. However, they found that Molloplast B is the most successful soft lining material in terms of reduced bacterial colonization, and this is in agreement with the results obtained in the present study. Nikawa et al. reported that resilient liners show changes in their physical properties with aging of the material, which enhances colonization; however, in the present study the short follow-up period did not allow these changes to take place. On the other hand, Busscher et al. reported that in clinical situations colonization of resilient liners exceeds that of hard acrylic resin, which is in controversy with the current study. Although it was reported that the less smooth surface of soft liners in comparison to hard acrylic resin is considered a perfect shelter for microorganisms, the results of this study show no significant correlation. These results are in accordance with the study of Nikawa et al., who observed no relationship between surface roughness and biofilm formation on different soft lining materials, including Molloplast B. This finding implies that the surface roughness of a material may not be the only factor that governs the adherence of microorganisms. Glass et al. postulated that bacteria adhere (in CFU/ml) more to soft lining materials than C. albicans.
These findings are important, as pathogenic bacteria are present in denture plaque and may play a role in denture stomatitis and in systemic infections. This is in accordance with the results obtained in the present study, as the bacterial colonization was greater than the candidal colonization. Clarke reported that resilient liners are used to allow uniform distribution of the forces at the mucosal lining interface and to limit tissue trauma, which is considered a main factor in the development of denture stomatitis. The superior fit and adaptation of the silicone material to the delicate tissues of the surgical site played an important role in minimizing the leakage of ingested food into the defect and the adherence of microorganisms. This could explain the reduced colonization of candida in the resilient liner group. Moreover, the easy removal and insertion of the obturator minimized the friction with soft delicate tissues, which reduced tissue trauma and thus reduced colonization. Molloplast B is a permanent soft lining material which can serve for several years before showing surface deterioration, as reported by Schmidt et al. The short follow-up period was insufficient for Molloplast B to show any surface changes that might affect the oral flora significantly. Therefore, extending the follow-up period may change the results. Patients were asked for their feedback regarding their prostheses. All patients of the Molloplast B group, without exception, were greatly satisfied with the comfort, retention, and phonetics while using the prostheses. This may be attributed to the inherent resiliency of the Molloplast B, which provided an excellent peripheral seal. Also, the flexibility of the material provided easy insertion and removal through the undercuts, which allowed greater engagement of soft tissue undercuts and thus improved retention and stability significantly.
On the contrary, two patients of the acrylic resin group were not well satisfied with the phonetics, and they experienced discomfort during insertion and removal that necessitated much more adjustment.

Conclusions
Considering the limitations of this study, the following conclusion can be drawn: both acrylic resin and resilient lining materials could be used for obturator construction in maxillofacial cases.
|
import {ElementRef, OnInit, ViewChild} from '@angular/core';
import {NzMessageService, NzModalService, NzModalRef} from 'ng-zorro-antd';
import {ActivatedRoute, Router} from '@angular/router';
import {BylConfigService} from '../../service/constant/config.service';
import {BylResultBody} from '../../service/model/result-body.model';
import {ReuseTabService} from '@delon/abc';
import {Observable} from 'rxjs';
import {BylCrudComponentBasePro} from "./crud-component-base-pro";
import {BylTicketBaseModal} from "../../service/model/ticket-base.model";
import {BylTicketBaseService} from "../../service/service/ticket-base.service";
import {BylTicketStatusEnum} from "../../service/model/ticket-status.enum";
import {BylDetailBaseModel} from "../../service/model/detail-base.model";
import {BylDetailDeleteResultModel} from "../../service/model/detail-delete-result.model";
import {BylDetailAddResultModel} from "../../service/model/detail-add-result.model";
import {BylWorkTypeConfigDetail} from "../../service/project/model/work-type-config-detail.model";
import {BylWorkTypeConfigTicket} from "../../service/project/model/work-type-config-ticket.model";
import {BylDetailUpdateResultModel} from "../../service/model/detail-update-Result.model";
import {BylDetailBatchAddResultModel} from "../../service/model/detail-batch-add-result.model";
/**
* @Description: Abstract base class for CRUD component objects
* @Author: <EMAIL>
* @Date: Created in 2018-03-31 9:46
**/
export abstract class BylCrudComponentTicket<T extends BylDetailBaseModel,E extends BylTicketBaseModal>
extends BylCrudComponentBasePro<E>{
submitLoading: boolean = false;
cancelLoading: boolean = false;
checkLoading: boolean = false;
public businessService: BylTicketBaseService<E>;
constructor(public msgService: NzMessageService,
public configService: BylConfigService,
// public modalService: NzModalService,
// public modalSubject: NzModalRef,
public activatedRoute: ActivatedRoute,
public reuseTabService: ReuseTabService,
public router: Router
) {
super(msgService, configService, activatedRoute, reuseTabService, router);
}
ngOnInit() {
// For a new ticket, create the ticket on the server first, then switch to edit mode
console.log("in Ticket Crud component ngOnInit, processType:",this.processType);
if(this.processType === 'new'){
this.businessService.getNewTicket().subscribe((data) => {
if (data.code === BylResultBody.RESULT_CODE_SUCCESS) {
console.log("in TicketCrudComponent, ngOnInit new ticket:", data);
// Load the newly created ticket for editing and adjustment
this.sourceId = data.data.id;
this.processType = this.sourceId;
this.loadData(this.sourceId);
} else {
this.errMsg = data.msg;
this.reset();
}
},err =>{
this.errMsg = err;
this.reset();
});
} else {
// Continue with the normal processing
super.ngOnInit();
}
}
/**
* Reset the form contents
*/
reset() {
console.log('reset form', this.businessData);
super.reset();
// Set the title of the reusable tab:
if (this.sourceId) {
// Editing an existing ticket
if (this.crudEntityName) {
this.reuseTabService.title = '编辑-' + this.crudEntityName + "[" + this.businessData.billNo + "]";
} else {
this.reuseTabService.title = '编辑-' + this.businessData.billNo;
}
}
}
}
/**
* Submit the entity
*/
submitEntity() {
this.submitLoading = true;
this.errMsg = '';
this.getFormData();
let saveResult$: Observable<BylResultBody<E>>;
console.log('in BylCrudComponentTicket ', this.businessData);
saveResult$ = this.businessService.submit(this.businessData);
this.followProcess(saveResult$);
}
/**
* Void (cancel) the ticket
*/
cancelEntity() {
this.cancelLoading = true;
this.errMsg = '';
let saveResult$: Observable<BylResultBody<E>>;
console.log('in BylCrudComponentTicket submitform', this.businessData);
saveResult$ = this.businessService.cancel(this.businessData);
this.followProcess(saveResult$);
}
/**
* Approve (check) the ticket
*/
checkEntity() {
this.checkLoading = true;
this.errMsg = '';
let saveResult$: Observable<BylResultBody<E>>;
console.log('in BylCrudComponentTicket confirmEntity', this.businessData);
saveResult$ = this.businessService.check(this.businessData);
this.followProcess(saveResult$);
}
protected followProcess(call$: Observable<BylResultBody<E>> ){
call$.subscribe(
data => {
// this._loading = false;
if (data.code === BylResultBody.RESULT_CODE_SUCCESS) {
// simpleDeepCopy(this.businessData,data.data);
this.setFormData(data.data);
this.reset(); // reset the UI
} else {
this.errMsg = data.msg;
}
this.setLoadingFalse();
},
err => {
this.errMsg = err.toString();
this.setLoadingFalse();
}
);
}
setLoadingFalse(){
this.submitLoading= false;
this.checkLoading = false;
this.cancelLoading = false;
}
showSaveButton(): boolean{
return this.businessData.status === BylTicketStatusEnum.UNSUBMITED
|| this.businessData.status === BylTicketStatusEnum.SUBMITED;
}
showSubmitButton():boolean{
return this.businessData.status === BylTicketStatusEnum.UNSUBMITED;
// || this.businessData.status == BylTicketStatusEnum.SUBMITED;
}
showCancelButton(): boolean{
return this.businessData.status === BylTicketStatusEnum.SUBMITED;
}
showCheckButton(): boolean{
return this.businessData.status === BylTicketStatusEnum.SUBMITED;
}
showBrowseButton(): boolean{
return this.businessData.status === BylTicketStatusEnum.CHECKED
|| this.businessData.status === BylTicketStatusEnum.CHECKED_DELETED
|| this.businessData.status === BylTicketStatusEnum.SUBMITED_DELETED;
}
/**
* After a detail line of the ticket changes, update the ticket header's last-modified
* time so that multi-user (optimistic concurrency) control works correctly.
* @param {number} value
*/
changeTicketModifyDateTime(value: number) {
this.businessData.modifyAction.modifyDateTime = value;
this.defaultBusinessData.modifyAction.modifyDateTime = value;
}
updateTicketForAddItem(addResult: BylDetailAddResultModel<T,E>){
this.changeTicketModifyDateTime(addResult.ticket.modifyAction.modifyDateTime);
this.reset();
}
updateTicketForUpdateItem(updateItemResult: BylDetailUpdateResultModel<T,E>){
this.changeTicketModifyDateTime(updateItemResult.ticket.modifyAction.modifyDateTime);
this.reset();
}
updateTicketForDeleteItem(deleteItemResult: BylDetailDeleteResultModel<T,E>){
this.changeTicketModifyDateTime(deleteItemResult.ticket.modifyAction.modifyDateTime);
this.reset();
}
updateTicketForBatchAddItem(addResult: BylDetailBatchAddResultModel<T,E>){
this.changeTicketModifyDateTime(addResult.ticket.modifyAction.modifyDateTime);
this.reset();
}
}
|
J.L. Granatstein and R.D. Cuff, eds., War and Society in North America. Toronto: Thomas Nelson, 1971, pp. viii, 199. Hector J. Massey, ed., The Canadian Military: A Profile. Toronto: Copp Clark, 1972, pp. vii, 290. examination of arms-control proposals through the use of official document excerpts and numerous welcome excursions into the General's own diary. In addition, the analysis the author brings to the negotiating process, with his assertion of "super power" dominance, provides the student of international relations with empirical data to fit the conceptual frameworks developed by such theorists as Fred C. Ikle and Kenneth Waltz. Frequently throughout the last two-thirds of the book General Burns, in the context of "super power" proposals and conference deliberations, discusses the official Canadian position along with governmental responses to American nudges. The reader should not be surprised to find the General endorsing the traditional or "Pearsonian" approach to Canadian diplomacy based on this state being a mediator or broker between disputants at the conference table. It is while dealing with the Canadian decision to acquire nuclear weapons for defensive purposes that General Burns writes quite candidly about Howard Green, the secretary of state for external affairs during much of the Diefenbaker period, and a fervent supporter of disarmament. In discussing Canada's contribution General Burns also contends that the Canadian decision to adopt nuclear defensive weapons did not injure this country's credibility nor impede the work of the delegation at the arms-control conferences. Despite the overall strength and value of A Seat at the Table, there are some criticisms that might be made. First, while the detailed account or commentary demonstrates fine scholarship by General Burns, the editor's pen should have been in evidence at times when the momentum sagged and the minutiae appeared. 
Second, a more substantive criticism arises from what strikes this reviewer as a disproportionate emphasis upon arms-control failures. Specifically, only three chapters directly examine the Limited Test Ban and Non-Proliferation agreements, while much of the remainder of the book expands upon the unacceptable control proposals. Finally, the omission of any significant analysis of the effect the absence of France and the Chinese People's Republic had upon arms-control negotiations is unfortunate. These criticisms are, however, really quite marginal when measured against the worth of this volume. General Burns has written a book which can be warmly welcomed by students and laymen concerned with almost any aspect of the study of international relations.
|
New officers for the coming year were installed at a luncheon on May 18, at the Club’s last meeting until September. They are: Frances Rich, president; Cindy Harvey and Marylin Portman, co-vice presidents; Ann Maiforth, secretary; and Nancy Sims, treasurer.
The Coleville High School girls' softball team qualified for the state tournament, held in Wells on May 19 and 20. They won their way through the competition to the championship game and finished in second place, handing the winning team, Carlin, their only loss of the tournament in the semi-final game. We're proud of the team and coach Debie Bush for a good job in tough competition.
The Spring Fling is always a great place to go to see good crafts, browse a flea market, visit with friends and have some good food. This year was no exception. As always, the Chamber of Commerce put together a good show for everyone in the family. The booths had a great variety of good things, both crafts and flea market items. The bounce house was a big hit with the kids, and when you needed a break, the Lions Club's barbecued hamburgers and hot dogs hit the spot. The wet weather held off until Sunday, but it was still fun anyway. Congratulations to the winners of the raffle prizes – a "to die for" stainless gas barbecue, a gorgeous turquoise necklace from Out West Gallery and a stay at Tahoe Ridge or a spa day at Walley's Hot Springs and Resort in Genoa.
Unfortunately, Sunday’s weather slowed down the turnout, but there were still many very generous people who participated to help others.
The elementary school will be having an end-of-the-year open house with an art show of works by students in grades K-5. Their work is being judged as in a regular show, and will be on display for us all to see their talent.
Beginning at the same time and continuing through the evening, the Antelope Valley Fire District will be honoring our volunteer firefighters. Fire-fighting vehicles and equipment will be on display. There will be cake and punch, as well as balloons and goodie bags for the kids. So come and see the vehicles and volunteers who protect us and our homes.
Friday through Sunday, the Veterans of Foreign Wars will have its annual yard sale next door to the Walker Mini Mall. This group supports the VA Hospital and Spouse House in Reno, as well as the Veteran’s Home in Barstow. For donations or pick up, call 495-2149 or 495-2132. All donations are welcome.
Together with the Coleville schools and the U.S. Marine Corps, they will conduct a Memorial Day service at the Antelope Valley Cemetery in Coleville on Tuesday, May 30 at 9 a.m. The public is invited to participate in this annual event.
There will be a free afternoon health and service fair for all seniors at the Walker Community Park on June 2 from 11 a.m. to 4 p.m., sponsored by Mono County. There is no cost to attend the fair, and there will be a free barbecue lunch for seniors. The fair will consist of different service agencies for Mono County who provide for the senior population. Come to the park and find out what kind of help is available and enjoy the food and a live band playing. Some of the groups attending will be the Mono County Social Services Department, the Inyo Mono Area Agency on Aging, the Mono County Health Department, Community Service Solutions, and many others who provide senior services. For more information call Community Service Solutions 1-888-230-3811, Ext. 701 or 760-872-7604.
The Antelope Valley Artists will meet at the Coleville Library on May 31 at 4 p.m. The public is more than welcome to attend, as they will be hanging a new exhibit of works by local artists. Refreshments will be served, and the artists will be there. Come and see the art work, the library and meet the artists.
• Memorial and dedications of a bench and plaque in honor of Mike Bruzzesi will be tomorrow in the park at 1 p.m.
• On Memorial Day weekend there will also be yard sales taking place at homes throughout Walker. Be sure to check them out – you might find a treasure.
• The Red Hat Society ladies had a fun day trip to Truckee last week. They browsed the quaint shops there and just had a good time. If you are interested in joining the club, call Bobbie at 495-2433 for more info.
• The Antelope Valley Cemetery Committee thanks everybody who took part in Saturday's cleanup. They especially thank Chuck Evans, who brought his front loader and filled in several sunken graves. They would like to remind everyone who is willing and able to do so, to clean up their cemetery plots before the May 30 service.
If the cost of gas isn’t keeping you home this year, be careful out there on the highways. Have a great holiday.
|
def components(self) -> list[str]:
    comps = super().components()
    if self.forward_only:
        if len(comps) == 0:
            comps = ["gz"]
    return comps
|
// Copyright(c) 2018-2019, NVIDIA CORPORATION. All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//
// VulkanHpp Samples : 09_InitDescriptorSet
// Initialize a descriptor set
#include "../utils/math.hpp"
#include "../utils/utils.hpp"
#include "vulkan/vulkan.hpp"
#include <iostream>
#define GLM_FORCE_RADIANS
#pragma warning( disable : 4201 ) // disable warning C4201: nonstandard extension used: nameless struct/union; needed
// to get glm/detail/type_vec?.hpp without warnings
#include <glm/gtc/matrix_transform.hpp>
static char const * AppName = "09_InitDescriptorSet";
static char const * EngineName = "Vulkan.hpp";
int main( int /*argc*/, char ** /*argv*/ )
{
try
{
vk::UniqueInstance instance = vk::su::createInstance( AppName, EngineName );
#if !defined( NDEBUG )
vk::UniqueDebugUtilsMessengerEXT debugUtilsMessenger = vk::su::createDebugUtilsMessenger( instance );
#endif
vk::PhysicalDevice physicalDevice = instance->enumeratePhysicalDevices().front();
vk::UniqueDevice device = vk::su::createDevice(
physicalDevice, vk::su::findGraphicsQueueFamilyIndex( physicalDevice.getQueueFamilyProperties() ) );
vk::su::BufferData uniformBufferData(
physicalDevice, device, sizeof( glm::mat4x4 ), vk::BufferUsageFlagBits::eUniformBuffer );
vk::su::copyToDevice(
device, uniformBufferData.deviceMemory, vk::su::createModelViewProjectionClipMatrix( vk::Extent2D( 0, 0 ) ) );
vk::UniqueDescriptorSetLayout descriptorSetLayout = vk::su::createDescriptorSetLayout(
device, { { vk::DescriptorType::eUniformBuffer, 1, vk::ShaderStageFlagBits::eVertex } } );
/* VULKAN_HPP_KEY_START */
// create a descriptor pool
vk::DescriptorPoolSize poolSize( vk::DescriptorType::eUniformBuffer, 1 );
vk::UniqueDescriptorPool descriptorPool = device->createDescriptorPoolUnique(
vk::DescriptorPoolCreateInfo( vk::DescriptorPoolCreateFlagBits::eFreeDescriptorSet, 1, 1, &poolSize ) );
// allocate a descriptor set
vk::UniqueDescriptorSet descriptorSet = std::move(
device->allocateDescriptorSetsUnique( vk::DescriptorSetAllocateInfo( *descriptorPool, 1, &*descriptorSetLayout ) )
.front() );
vk::DescriptorBufferInfo descriptorBufferInfo( uniformBufferData.buffer.get(), 0, sizeof( glm::mat4x4 ) );
device->updateDescriptorSets(
vk::WriteDescriptorSet(
descriptorSet.get(), 0, 0, 1, vk::DescriptorType::eUniformBuffer, nullptr, &descriptorBufferInfo ),
{} );
/* VULKAN_HPP_KEY_END */
}
catch ( vk::SystemError & err )
{
std::cout << "vk::SystemError: " << err.what() << std::endl;
exit( -1 );
}
catch ( std::runtime_error & err )
{
std::cout << "std::runtime_error: " << err.what() << std::endl;
exit( -1 );
}
catch ( ... )
{
std::cout << "unknown error\n";
exit( -1 );
}
return 0;
}
|
package gg.galaxygaming.projectextended.client.rendering.item;
import com.mojang.blaze3d.matrix.MatrixStack;
import com.mojang.blaze3d.vertex.IVertexBuilder;
import gg.galaxygaming.projectextended.ProjectExtended;
import gg.galaxygaming.projectextended.common.items.PETrident;
import javax.annotation.Nonnull;
import net.minecraft.client.renderer.IRenderTypeBuffer;
import net.minecraft.client.renderer.ItemRenderer;
import net.minecraft.client.renderer.model.ItemCameraTransforms.TransformType;
import net.minecraft.client.renderer.tileentity.ItemStackTileEntityRenderer;
import net.minecraft.item.ItemStack;
import net.minecraft.client.renderer.entity.model.TridentModel;
import net.minecraft.util.ResourceLocation;
public class TridentISTER extends ItemStackTileEntityRenderer {
public static final ResourceLocation DM_TRIDENT = ProjectExtended.rl("textures/entity/dark_matter_trident.png");
public static final ResourceLocation RM_TRIDENT = ProjectExtended.rl("textures/entity/red_matter_trident.png");
// Trident model shared by both matter tiers; required by the render call below
private final TridentModel trident = new TridentModel();
@Override
public void func_239207_a_(@Nonnull ItemStack stack, @Nonnull TransformType transformType, @Nonnull MatrixStack matrix, @Nonnull IRenderTypeBuffer renderer,
int light, int overlayLight) {
matrix.push();
matrix.scale(1, -1, -1);
IVertexBuilder builder = ItemRenderer.getEntityGlintVertexBuilder(renderer, trident.getRenderType(getTexture(stack)), false, stack.hasEffect());
trident.render(matrix, builder, light, overlayLight, 1, 1, 1, 1);
matrix.pop();
}
private ResourceLocation getTexture(ItemStack stack) {
if (stack.getItem() instanceof PETrident && ((PETrident) stack.getItem()).getMatterTier() > 0) {
return RM_TRIDENT;
}
//Fallback to dark matter trident
return DM_TRIDENT;
}
}
|
package priv.kimking.base.designpattern.b3composite.model.aggregates;
import priv.kimking.base.designpattern.b3composite.model.vo.TreeNode;
import priv.kimking.base.designpattern.b3composite.model.vo.TreeRoot;
import java.util.Map;
/**
* Rule tree aggregate
*
* @author kim
* @date 2021/12/3
*/
public class TreeRich {
private TreeRoot treeRoot; // tree root info
private Map<Long, TreeNode> treeNodeMap; // tree node ID -> child node
public TreeRich(TreeRoot treeRoot, Map<Long, TreeNode> treeNodeMap) {
this.treeRoot = treeRoot;
this.treeNodeMap = treeNodeMap;
}
public TreeRoot getTreeRoot() {
return treeRoot;
}
public void setTreeRoot(TreeRoot treeRoot) {
this.treeRoot = treeRoot;
}
public Map<Long, TreeNode> getTreeNodeMap() {
return treeNodeMap;
}
public void setTreeNodeMap(Map<Long, TreeNode> treeNodeMap) {
this.treeNodeMap = treeNodeMap;
}
}
|
"""
======================================================================
>> Autor: <NAME>
>> Email: <EMAIL>
>> Fecha: 03/11/2020
======================================================================
Universidad Nacional Autónoma de México
Facultad de Ciencias
Computación Distribuida [2021-1]
Pruebas para la práctica.
======================================================================
"""
import simpy
from Canales.CanalRecorridos import CanalRecorridos
from NodoBFS import NodoBFS
from NodoDFS import NodoDFS
# Las unidades de tiempo que les daremos a las pruebas.
TIEMPO_DE_EJECUCION = 50
class TestPractica1:
"""Clase para las pruebas unitarias de la práctica 1."""
# Las aristas de adyacencias de la gráfica.
adyacencias = [{1, 3, 4, 6}, {0, 3, 5, 7}, {3, 5, 6},
{0, 1, 2}, {0}, {1, 2}, {0, 2}, {1}]
def test_ejercicio_uno(self):
"""Método que prueba el algoritmo de BFS."""
# Creamos el ambiente y el objeto Canal.
env = simpy.Environment()
bc_pipe = CanalRecorridos(env)
# La lista que representa la gráfica.
grafica = []
# Creamos los nodos.
for i in range(len(self.adyacencias)):
vecinos = self.adyacencias[i]
canal_entrada = bc_pipe.crea_canal_de_entrada()
canal_salida = bc_pipe
nodo = NodoBFS(i, vecinos, canal_entrada, canal_salida)
grafica.append(nodo)
# Le decimos al ambiente lo que va a procesar.
for nodo in grafica:
env.process(nodo.bfs(env))
# Y lo corremos.
env.run(until=TIEMPO_DE_EJECUCION)
# Probamos que efectivamente se hizo un BFS.
padres_esperados = [0, 0, 3, 0, 0, 1, 0, 1]
distancias_esperadas = [0, 1, 2, 1, 1, 2, 1, 2]
# Para cada nodo verificamos que su lista de identifiers sea la esperada.
for i in range(0, len(grafica)):
nodo = grafica[i]
assert nodo.padre == padres_esperados[i], (f"El nodo {nodo.id_nodo} tiene mal padre")
assert nodo.distancia == distancias_esperadas[i], (f"El nodo {nodo.id_nodo} tiene distancia equivocada")
def test_ejercicio_dos(self):
"""Prueba para el algoritmo DFS."""
# Creamos el ambiente y el objeto Canal.
env = simpy.Environment()
bc_pipe = CanalRecorridos(env)
# La lista que representa la gráfica.
grafica = []
# Creamos los nodos
for i in range(0, len(self.adyacencias)):
grafica.append(NodoDFS(i, self.adyacencias[i],
bc_pipe.crea_canal_de_entrada(), bc_pipe))
# Le decimos al ambiente lo que va a procesar.
for nodo in grafica:
env.process(nodo.dfs(env))
# Y lo corremos.
env.run(until=TIEMPO_DE_EJECUCION)
# Probamos que efectivamente se hizo un BFS.
padres_esperados = [0, 0, 3, 1, 0, 2, 2, 1]
hijos_esperados = [{1, 4}, {3, 7}, {5, 6}, {2}, set(), set(), set(), set()]
# Para cada nodo verificamos que su lista de identifiers sea la esperada.
for i in range(0, len(grafica)):
nodo = grafica[i]
assert nodo.padre == padres_esperados[i], ("El nodo {nodo.id_nodo} tiene mal padre")
assert nodo.hijos == hijos_esperados[i], ("El nodo {nodo.id_nodo} tiene distancia equivocada")
|
// ======================================================================== //
// Copyright 2019 Intel Corporation //
// //
// Licensed under the Apache License, Version 2.0 (the "License"); //
// you may not use this file except in compliance with the License. //
// You may obtain a copy of the License at //
// //
// http://www.apache.org/licenses/LICENSE-2.0 //
// //
// Unless required by applicable law or agreed to in writing, software //
// distributed under the License is distributed on an "AS IS" BASIS, //
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. //
// See the License for the specific language governing permissions and //
// limitations under the License. //
// ======================================================================== //
#pragma once
#include "TestingVolume.h"
#include "procedural_functions.h"
namespace openvkl {
namespace testing {
template <typename idxType,
float samplingFunction(const vec3f &),
vec3f gradientFunction(const vec3f &) = gradientNotImplemented>
struct ProceduralUnstructuredVolume : public TestingVolume
{
ProceduralUnstructuredVolume(
const vec3i &dimensions,
const vec3f &gridOrigin,
const vec3f &gridSpacing,
VKLUnstructuredCellType _primType = VKL_HEXAHEDRON,
bool _cellValued = true,
bool _indexPrefix = true,
bool _precomputedNormals = false,
bool _hexIterative = false);
range1f getComputedValueRange() const override;
vec3i getDimensions() const;
vec3f getGridOrigin() const;
vec3f getGridSpacing() const;
float computeProceduralValue(const vec3f &objectCoordinates);
vec3f computeProceduralGradient(const vec3f &objectCoordinates);
private:
range1f computedValueRange = range1f(ospcommon::math::empty);
vec3i dimensions;
vec3f gridOrigin;
vec3f gridSpacing;
VKLUnstructuredCellType primType;
bool cellValued;
bool indexPrefix;
bool precomputedNormals;
bool hexIterative;
int vtxPerPrimitive(VKLUnstructuredCellType type) const;
std::vector<unsigned char> generateVoxels(vec3i dimensions);
void generateVKLVolume() override;
std::vector<vec3f> generateGrid();
std::vector<idxType> generateTopology();
};
// Inlined definitions ////////////////////////////////////////////////////
template <typename idxType,
float samplingFunction(const vec3f &),
vec3f gradientFunction(const vec3f &)>
inline ProceduralUnstructuredVolume<idxType,
samplingFunction,
gradientFunction>::
ProceduralUnstructuredVolume(const vec3i &dimensions,
const vec3f &gridOrigin,
const vec3f &gridSpacing,
VKLUnstructuredCellType _primType,
bool _cellValued,
bool _indexPrefix,
bool _precomputedNormals,
bool _hexIterative)
: dimensions(dimensions),
gridOrigin(gridOrigin),
gridSpacing(gridSpacing),
primType(_primType),
cellValued(_cellValued),
indexPrefix(_indexPrefix),
precomputedNormals(_precomputedNormals),
hexIterative(_hexIterative)
{
}
template <typename idxType,
float samplingFunction(const vec3f &),
vec3f gradientFunction(const vec3f &)>
inline range1f
ProceduralUnstructuredVolume<idxType, samplingFunction, gradientFunction>::
getComputedValueRange() const
{
if (computedValueRange.empty()) {
throw std::runtime_error(
"computedValueRange only available after VKL volume is generated");
}
return computedValueRange;
}
template <typename idxType,
float samplingFunction(const vec3f &),
vec3f gradientFunction(const vec3f &)>
inline vec3i ProceduralUnstructuredVolume<idxType,
samplingFunction,
gradientFunction>::getDimensions()
const
{
return dimensions;
}
template <typename idxType,
float samplingFunction(const vec3f &),
vec3f gradientFunction(const vec3f &)>
inline vec3f ProceduralUnstructuredVolume<idxType,
samplingFunction,
gradientFunction>::getGridOrigin()
const
{
return gridOrigin;
}
template <typename idxType,
float samplingFunction(const vec3f &),
vec3f gradientFunction(const vec3f &)>
inline vec3f
ProceduralUnstructuredVolume<idxType, samplingFunction, gradientFunction>::
getGridSpacing() const
{
return gridSpacing;
}
template <typename idxType,
float samplingFunction(const vec3f &),
vec3f gradientFunction(const vec3f &)>
inline float
ProceduralUnstructuredVolume<idxType, samplingFunction, gradientFunction>::
computeProceduralValue(const vec3f &objectCoordinates)
{
return samplingFunction(objectCoordinates);
}
template <typename idxType,
float samplingFunction(const vec3f &),
vec3f gradientFunction(const vec3f &)>
inline vec3f
ProceduralUnstructuredVolume<idxType, samplingFunction, gradientFunction>::
computeProceduralGradient(const vec3f &objectCoordinates)
{
return gradientFunction(objectCoordinates);
}
template <typename idxType,
float samplingFunction(const vec3f &),
vec3f gradientFunction(const vec3f &)>
inline int
ProceduralUnstructuredVolume<idxType, samplingFunction, gradientFunction>::
vtxPerPrimitive(VKLUnstructuredCellType type) const
{
switch (type) {
case VKL_TETRAHEDRON:
return 4;
case VKL_HEXAHEDRON:
return 8;
case VKL_WEDGE:
return 6;
case VKL_PYRAMID:
return 5;
}
return 0;
}
template <typename idxType,
float samplingFunction(const vec3f &),
vec3f gradientFunction(const vec3f &)>
inline std::vector<unsigned char>
ProceduralUnstructuredVolume<idxType, samplingFunction, gradientFunction>::
generateVoxels(vec3i dimensions)
{
std::vector<unsigned char> voxels(dimensions.long_product() *
sizeof(float));
float *voxelsTyped = (float *)voxels.data();
auto transformLocalToObject = [&](const vec3f &localCoordinates) {
return gridOrigin + localCoordinates * gridSpacing;
};
for (size_t z = 0; z < dimensions.z; z++) {
for (size_t y = 0; y < dimensions.y; y++) {
for (size_t x = 0; x < dimensions.x; x++) {
size_t index =
z * dimensions.y * dimensions.x + y * dimensions.x + x;
vec3f objectCoordinates = transformLocalToObject(vec3f(x, y, z));
voxelsTyped[index] = samplingFunction(objectCoordinates);
}
}
}
return voxels;
}
template <typename idxType,
float samplingFunction(const vec3f &),
vec3f gradientFunction(const vec3f &)>
inline void
ProceduralUnstructuredVolume<idxType, samplingFunction, gradientFunction>::
generateVKLVolume()
{
vec3i valueDimensions = dimensions;
if (!cellValued)
valueDimensions += vec3i(1, 1, 1);
std::vector<unsigned char> values = generateVoxels(valueDimensions);
std::vector<vec3f> vtxPositions = generateGrid();
std::vector<idxType> topology = generateTopology();
std::vector<idxType> cells;
std::vector<uint8_t> cellType;
volume = vklNewVolume("unstructured");
uint64_t numCells = dimensions.long_product();
cells.reserve(numCells);
cellType.reserve(numCells);
for (idxType i = 0; i < numCells; i++) {
cells.push_back(i *
(vtxPerPrimitive(primType) + (indexPrefix ? 1 : 0)));
cellType.push_back(primType);
}
VKLData cellData = vklNewData(
cells.size(),
std::is_same<idxType, uint32_t>::value ? VKL_UINT : VKL_ULONG,
cells.data());
vklSetData(volume, "cell.index", cellData);
vklRelease(cellData);
if (!indexPrefix) {
VKLData celltypeData =
vklNewData(cellType.size(), VKL_UCHAR, cellType.data());
vklSetData(volume, "cell.type", celltypeData);
vklRelease(celltypeData);
}
VKLData valuesData =
vklNewData(valueDimensions.long_product(), VKL_FLOAT, values.data());
vklSetData(
volume, cellValued ? "cell.data" : "vertex.data", valuesData);
vklRelease(valuesData);
VKLData vtxPositionsData =
vklNewData(vtxPositions.size(), VKL_VEC3F, vtxPositions.data());
vklSetData(volume, "vertex.position", vtxPositionsData);
vklRelease(vtxPositionsData);
VKLData topologyData = vklNewData(
topology.size(),
std::is_same<idxType, uint32_t>::value ? VKL_UINT : VKL_ULONG,
topology.data());
vklSetData(volume, "index", topologyData);
vklRelease(topologyData);
vklSetBool(volume, "indexPrefixed", indexPrefix);
vklSetBool(volume, "precomputedNormals", precomputedNormals);
vklSetBool(volume, "hexIterative", hexIterative);
vklCommit(volume);
computedValueRange = computeValueRange(
VKL_FLOAT, values.data(), valueDimensions.long_product());
}
template <typename idxType,
float samplingFunction(const vec3f &),
vec3f gradientFunction(const vec3f &)>
inline std::vector<vec3f>
ProceduralUnstructuredVolume<idxType, samplingFunction, gradientFunction>::
generateGrid()
{
std::vector<vec3f> grid((dimensions + vec3i(1, 1, 1)).long_product(), 0);
for (size_t z = 0; z <= dimensions.z; z++) {
for (size_t y = 0; y <= dimensions.y; y++) {
for (size_t x = 0; x <= dimensions.x; x++) {
size_t index = z * (dimensions.y + 1) * (dimensions.x + 1) +
y * (dimensions.x + 1) + x;
grid[index] = gridOrigin + gridSpacing * vec3f(x, y, z);
}
}
}
return grid;
}
template <typename idxType,
float samplingFunction(const vec3f &),
vec3f gradientFunction(const vec3f &)>
inline std::vector<idxType>
ProceduralUnstructuredVolume<idxType, samplingFunction, gradientFunction>::
generateTopology()
{
uint64_t numPerPrim = vtxPerPrimitive(primType);
if (indexPrefix)
numPerPrim++;
std::vector<idxType> cells;
cells.reserve(dimensions.long_product() * numPerPrim);
for (size_t z = 0; z < dimensions.z; z++) {
for (size_t y = 0; y < dimensions.y; y++) {
for (size_t x = 0; x < dimensions.x; x++) {
idxType layerSize = (dimensions.x + 1) * (dimensions.y + 1);
idxType offset = layerSize * z + (dimensions.x + 1) * y + x;
idxType offset2 = offset + layerSize;
if (indexPrefix)
cells.push_back(vtxPerPrimitive(primType));
switch (primType) {
case VKL_TETRAHEDRON:
cells.push_back(offset + 0);
cells.push_back(offset + 1);
cells.push_back(offset + (dimensions.x + 1) + 0);
cells.push_back(offset2 + 0);
break;
case VKL_HEXAHEDRON:
cells.push_back(offset + 0);
cells.push_back(offset + 1);
cells.push_back(offset + (dimensions.x + 1) + 1);
cells.push_back(offset + (dimensions.x + 1));
cells.push_back(offset2 + 0);
cells.push_back(offset2 + 1);
cells.push_back(offset2 + (dimensions.x + 1) + 1);
cells.push_back(offset2 + (dimensions.x + 1));
break;
case VKL_WEDGE:
cells.push_back(offset + 0);
cells.push_back(offset + 1);
cells.push_back(offset + (dimensions.x + 1) + 0);
cells.push_back(offset2 + 0);
cells.push_back(offset2 + 1);
cells.push_back(offset2 + (dimensions.x + 1) + 0);
break;
case VKL_PYRAMID:
cells.push_back(offset + 0);
cells.push_back(offset + 1);
cells.push_back(offset + (dimensions.x + 1) + 1);
cells.push_back(offset + (dimensions.x + 1));
cells.push_back(offset2 + 0);
break;
}
}
}
}
return cells;
}
///////////////////////////////////////////////////////////////////////////
// Procedural volume types ////////////////////////////////////////////////
///////////////////////////////////////////////////////////////////////////
using WaveletUnstructuredProceduralVolume =
ProceduralUnstructuredVolume<uint32_t,
getWaveletValue<float>,
getWaveletGradient>;
using ZUnstructuredProceduralVolume =
ProceduralUnstructuredVolume<uint32_t, getZValue, getZGradient>;
using ConstUnstructuredProceduralVolume =
ProceduralUnstructuredVolume<uint32_t, getConstValue, getConstGradient>;
using XYZUnstructuredProceduralVolume =
ProceduralUnstructuredVolume<uint32_t, getXYZValue, getXYZGradient>;
using WaveletUnstructuredProceduralVolume64 =
ProceduralUnstructuredVolume<uint64_t,
getWaveletValue<float>,
getWaveletGradient>;
using ZUnstructuredProceduralVolume64 =
ProceduralUnstructuredVolume<uint64_t, getZValue, getZGradient>;
using ConstUnstructuredProceduralVolume64 =
ProceduralUnstructuredVolume<uint64_t, getConstValue, getConstGradient>;
} // namespace testing
} // namespace openvkl
|
package common
import (
"strconv"
"time"
)
const (
walletSysBalance = "tp_wallet:wallet_balance_"
lockAccountBalance = "lock:tp_wallet:account_balance_"
LockAccountBalanceTtl = time.Second * 15
)
func KeyLockAccountBalance(uid uint64) string {
return lockAccountBalance + strconv.FormatUint(uid, 10)
}
func KeyWalletSysBalance(uid uint64) string {
return walletSysBalance + strconv.FormatUint(uid, 10)
}
|
package us.ihmc.scs2.simulation.shapes;
import static us.ihmc.euclid.tools.EuclidCoreIOTools.DEFAULT_FORMAT;
import java.util.List;
import us.ihmc.euclid.geometry.interfaces.Vertex3DSupplier;
import us.ihmc.euclid.referenceFrame.ReferenceFrame;
import us.ihmc.euclid.referenceFrame.interfaces.FixedFrameShape3DPoseBasics;
import us.ihmc.euclid.referenceFrame.interfaces.FrameBoundingBox3DReadOnly;
import us.ihmc.euclid.referenceFrame.interfaces.FramePoint3DReadOnly;
import us.ihmc.euclid.referenceFrame.interfaces.FrameShape3DBasics;
import us.ihmc.euclid.referenceFrame.interfaces.FrameVertex3DSupplier;
import us.ihmc.euclid.referenceFrame.polytope.FrameConvexPolytope3D;
import us.ihmc.euclid.referenceFrame.polytope.FrameFace3D;
import us.ihmc.euclid.referenceFrame.polytope.FrameHalfEdge3D;
import us.ihmc.euclid.referenceFrame.polytope.FrameVertex3D;
import us.ihmc.euclid.referenceFrame.polytope.interfaces.FrameConvexPolytope3DReadOnly;
import us.ihmc.euclid.referenceFrame.tools.EuclidFrameShapeIOTools;
import us.ihmc.euclid.shape.convexPolytope.interfaces.ConvexPolytope3DReadOnly;
import us.ihmc.euclid.shape.convexPolytope.tools.EuclidPolytopeConstructionTools;
import us.ihmc.euclid.tools.EuclidCoreFactories;
import us.ihmc.euclid.tools.EuclidHashCodeTools;
import us.ihmc.euclid.transform.interfaces.Transform;
import us.ihmc.euclid.tuple3D.interfaces.Point3DBasics;
import us.ihmc.euclid.tuple3D.interfaces.Point3DReadOnly;
import us.ihmc.euclid.tuple3D.interfaces.Vector3DReadOnly;
import us.ihmc.scs2.simulation.shapes.STPShape3DTools.STPConvexPolytope3DSupportingVertexCalculator;
import us.ihmc.scs2.simulation.shapes.interfaces.FrameSTPConvexPolytope3DReadOnly;
import us.ihmc.scs2.simulation.shapes.interfaces.STPConvexPolytope3DReadOnly;
import us.ihmc.scs2.simulation.shapes.interfaces.STPShape3DBasics;
/**
* Convex polytope that implements the sphere-torus-patches (STP) method to make shapes strictly
* convex.
* <p>
 * <strong> WARNING: STP convex polytope does not properly cover all scenarios and may result in a
 * non-convex shape. A STP convex polytope should always be visualized first to validate its
 * geometry, see the examples in the <i>simulation-construction-set-visualizers</i> repository. For
 * now, it is recommended to stick with primitive shapes. </strong>
* </p>
*
* @see STPShape3DReadOnly
* @author <NAME>
*/
public class FrameSTPConvexPolytope3D implements FrameSTPConvexPolytope3DReadOnly, FrameShape3DBasics, STPShape3DBasics
{
private double minimumMargin, maximumMargin;
private double largeRadius, smallRadius;
private final FrameConvexPolytope3D rawConvexPolytope3D;
private final FrameBoundingBox3DReadOnly boundingBox;
private final STPConvexPolytope3DSupportingVertexCalculator supportingVertexCalculator = new STPConvexPolytope3DSupportingVertexCalculator();
private boolean stpRadiiDirty = true;
/**
 * Creates a new empty convex polytope and initializes its reference frame to
 * {@link ReferenceFrame#getWorldFrame()}.
*/
public FrameSTPConvexPolytope3D()
{
this(ReferenceFrame.getWorldFrame());
}
/**
* Creates a new empty convex polytope and initializes its reference frame.
*
* @param referenceFrame this polytope initial frame.
*/
public FrameSTPConvexPolytope3D(ReferenceFrame referenceFrame)
{
this(referenceFrame, EuclidPolytopeConstructionTools.DEFAULT_CONSTRUCTION_EPSILON);
}
/**
* Creates a new empty convex polytope.
*
* @param referenceFrame this polytope initial frame.
* @param constructionEpsilon tolerance used when adding vertices to a convex polytope to trigger a
* series of edge-cases.
*/
public FrameSTPConvexPolytope3D(ReferenceFrame referenceFrame, double constructionEpsilon)
{
rawConvexPolytope3D = new FrameConvexPolytope3D(referenceFrame, constructionEpsilon);
Point3DReadOnly rawMinPoint = rawConvexPolytope3D.getBoundingBox().getMinPoint();
Point3DReadOnly rawMaxPoint = rawConvexPolytope3D.getBoundingBox().getMaxPoint();
Point3DReadOnly minPoint = EuclidCoreFactories.newLinkedPoint3DReadOnly(() -> rawMinPoint.getX() + maximumMargin,
() -> rawMinPoint.getY() + maximumMargin,
() -> rawMinPoint.getZ() + maximumMargin);
Point3DReadOnly maxPoint = EuclidCoreFactories.newLinkedPoint3DReadOnly(() -> rawMaxPoint.getX() - maximumMargin,
() -> rawMaxPoint.getY() - maximumMargin,
() -> rawMaxPoint.getZ() - maximumMargin);
boundingBox = STPShape3DTools.newLinkedFrameBoundingBox3DReadOnly(this, minPoint, maxPoint);
}
/**
* Creates a new convex polytope and adds vertices provided by the given supplier.
*
* @param referenceFrame this polytope initial frame.
* @param vertex3DSupplier the vertex supplier to get the vertices to add to this convex polytope.
*/
public FrameSTPConvexPolytope3D(ReferenceFrame referenceFrame, Vertex3DSupplier vertex3DSupplier)
{
this(referenceFrame);
addVertices(vertex3DSupplier);
}
/**
 * Creates a new convex polytope and adds vertices provided by the given supplier; its reference
 * frame is initialized to match the reference frame of the vertex supplier.
*
* @param vertex3DSupplier the vertex supplier to get the vertices to add to this convex polytope.
*/
public FrameSTPConvexPolytope3D(FrameVertex3DSupplier vertex3DSupplier)
{
this(vertex3DSupplier.getReferenceFrame(), vertex3DSupplier);
}
/**
* Creates a new convex polytope, adds vertices provided by the given supplier.
*
* @param referenceFrame this polytope initial frame.
* @param vertex3DSupplier the vertex supplier to get the vertices to add to this convex
* polytope.
* @param constructionEpsilon tolerance used when adding vertices to a convex polytope to trigger a
* series of edge-cases.
*/
public FrameSTPConvexPolytope3D(ReferenceFrame referenceFrame, Vertex3DSupplier vertex3DSupplier, double constructionEpsilon)
{
this(referenceFrame, constructionEpsilon);
addVertices(vertex3DSupplier);
}
/**
 * Creates a new convex polytope and adds vertices provided by the given supplier; its reference
 * frame is initialized to match the reference frame of the vertex supplier.
*
* @param vertex3DSupplier the vertex supplier to get the vertices to add to this convex
* polytope.
* @param constructionEpsilon tolerance used when adding vertices to a convex polytope to trigger a
* series of edge-cases.
*/
public FrameSTPConvexPolytope3D(FrameVertex3DSupplier vertex3DSupplier, double constructionEpsilon)
{
this(vertex3DSupplier.getReferenceFrame(), vertex3DSupplier, constructionEpsilon);
}
/**
* Creates a new convex polytope identical to {@code other}.
*
* @param referenceFrame this polytope initial frame.
* @param other the other convex polytope to copy. Not modified.
*/
public FrameSTPConvexPolytope3D(ReferenceFrame referenceFrame, ConvexPolytope3DReadOnly other)
{
this(referenceFrame, other.getConstructionEpsilon());
set(other);
}
/**
* Creates a new convex polytope identical to {@code other}.
*
* @param referenceFrame this polytope initial frame.
* @param other the other convex polytope to copy. Not modified.
*/
public FrameSTPConvexPolytope3D(ReferenceFrame referenceFrame, STPConvexPolytope3DReadOnly other)
{
this(referenceFrame, other.getConstructionEpsilon());
set(other);
}
/**
* Creates a new convex polytope identical to {@code other}.
*
* @param other the other convex polytope to copy. Not modified.
*/
public FrameSTPConvexPolytope3D(FrameConvexPolytope3DReadOnly other)
{
this(other.getReferenceFrame(), other);
}
/**
* Creates a new convex polytope identical to {@code other}.
*
* @param other the other convex polytope to copy. Not modified.
*/
public FrameSTPConvexPolytope3D(FrameSTPConvexPolytope3DReadOnly other)
{
this(other.getReferenceFrame(), other);
}
/**
* Sets this convex polytope to be identical to {@code other}.
* <p>
* WARNING: This method generates garbage.
* </p>
*
* @param other the other convex polytope to copy. Not modified.
*/
public void set(ConvexPolytope3DReadOnly other)
{
rawConvexPolytope3D.set(other);
stpRadiiDirty = true;
}
/**
* Sets this convex polytope to be identical to {@code other}.
* <p>
* WARNING: This method generates garbage.
* </p>
*
* @param other the other convex polytope to copy. Not modified.
*/
public void set(STPConvexPolytope3DReadOnly other)
{
rawConvexPolytope3D.set(other);
minimumMargin = other.getMinimumMargin();
maximumMargin = other.getMaximumMargin();
stpRadiiDirty = true;
}
/**
* Sets this convex polytope to be identical to {@code other}.
* <p>
* WARNING: This method generates garbage.
* </p>
*
* @param other the other polytope to copy. Not modified.
* @throws ReferenceFrameMismatchException if the argument is not expressed in the same reference
* frame {@code this}.
*/
public void set(FrameConvexPolytope3DReadOnly other)
{
checkReferenceFrameMatch(other);
set((ConvexPolytope3DReadOnly) other);
}
/**
* Sets this convex polytope to be identical to {@code other}.
* <p>
* WARNING: This method generates garbage.
* </p>
*
* @param other the other polytope to copy. Not modified.
* @throws ReferenceFrameMismatchException if the argument is not expressed in the same reference
* frame {@code this}.
*/
public void set(FrameSTPConvexPolytope3DReadOnly other)
{
checkReferenceFrameMatch(other);
set((STPConvexPolytope3DReadOnly) other);
}
/**
* Sets this convex polytope to be identical to {@code other}.
* <p>
* WARNING: This method generates garbage.
* </p>
*
* @param other the other polytope to copy. Not modified.
*/
public void setIncludingFrame(FrameConvexPolytope3DReadOnly other)
{
setReferenceFrame(other.getReferenceFrame());
set((ConvexPolytope3DReadOnly) other);
}
/**
* Sets this convex polytope to be identical to {@code other}.
* <p>
* WARNING: This method generates garbage.
* </p>
*
* @param other the other polytope to copy. Not modified.
*/
public void setIncludingFrame(FrameSTPConvexPolytope3DReadOnly other)
{
setReferenceFrame(other.getReferenceFrame());
set((STPConvexPolytope3DReadOnly) other);
}
/** {@inheritDoc} */
@Override
public void setReferenceFrame(ReferenceFrame referenceFrame)
{
rawConvexPolytope3D.setReferenceFrame(referenceFrame);
}
/** {@inheritDoc} */
@Override
public void changeFrame(ReferenceFrame desiredFrame)
{
rawConvexPolytope3D.changeFrame(desiredFrame);
}
@Override
public void setToNaN()
{
rawConvexPolytope3D.setToNaN();
stpRadiiDirty = true;
}
@Override
public void setToZero()
{
rawConvexPolytope3D.setToZero();
stpRadiiDirty = true;
}
/**
* Adds a new vertex to this convex polytope.
* <p>
* WARNING: This method generates garbage.
* </p>
*
* @param vertexToAdd the vertex that is to be added to the convex polytope. Not modified.
* @return {@code true} if the vertex was added to this convex polytope, {@code false} if it was
* rejected.
*/
public boolean addVertex(Point3DReadOnly vertexToAdd)
{
return addVertices(Vertex3DSupplier.asVertex3DSupplier(vertexToAdd));
}
/**
* Adds a new vertex to this convex polytope.
* <p>
* WARNING: This method generates garbage.
* </p>
*
* @param vertexToAdd the vertex that is to be added to the convex polytope. Not modified.
* @return {@code true} if the vertex was added to this convex polytope, {@code false} if it was
* rejected.
*/
public boolean addVertex(FramePoint3DReadOnly vertexToAdd)
{
checkReferenceFrameMatch(vertexToAdd);
return addVertex((Point3DReadOnly) vertexToAdd);
}
/**
* Adds a new vertex to this convex polytope.
* <p>
* WARNING: This method generates garbage.
* </p>
*
* @param vertex3DSupplier the vertex supplier to get the vertices to add to this convex polytope.
* @return {@code true} if the vertex was added to this convex polytope, {@code false} if it was
* rejected.
*/
public boolean addVertices(Vertex3DSupplier vertex3DSupplier)
{
boolean wasAdded = rawConvexPolytope3D.addVertices(vertex3DSupplier);
if (wasAdded)
stpRadiiDirty = true;
return wasAdded;
}
/**
* Adds a new vertex to this convex polytope.
* <p>
* WARNING: This method generates garbage.
* </p>
*
* @param vertex3DSupplier the vertex supplier to get the vertices to add to this convex polytope.
* @return {@code true} if the vertex was added to this convex polytope, {@code false} if it was
* rejected.
*/
public boolean addVertices(FrameVertex3DSupplier vertex3DSupplier)
{
checkReferenceFrameMatch(vertex3DSupplier);
return addVertices((Vertex3DSupplier) vertex3DSupplier);
}
@Override
public double getMinimumMargin()
{
return minimumMargin;
}
@Override
public double getMaximumMargin()
{
return maximumMargin;
}
@Override
public double getSmallRadius()
{
updateRadii();
return smallRadius;
}
@Override
public double getLargeRadius()
{
updateRadii();
return largeRadius;
}
@Override
public void setMargins(double minimumMargin, double maximumMargin)
{
if (maximumMargin <= minimumMargin)
         throw new IllegalArgumentException("The maximum margin has to be strictly greater than the minimum margin, max margin: " + maximumMargin
+ ", min margin: " + minimumMargin);
this.minimumMargin = minimumMargin;
this.maximumMargin = maximumMargin;
stpRadiiDirty = true;
}
/**
* <pre>
* r = h
 *          r^2 - g^2 - 0.25 * l<sub>max</sub>^2
 * R = ----------------------------------------
 *                  2 * (r - g)
* </pre>
*
* where:
* <ul>
* <li><tt>R</tt> is {@link #largeRadius}
* <li><tt>r</tt> is {@link #smallRadius}
* <li><tt>h</tt> is {@link #minimumMargin}
* <li><tt>g</tt> is {@link #maximumMargin}
 * <li><tt>l<sub>max</sub></tt> is the maximum edge length that needs to be covered by the large
 * bounding sphere.
* </ul>
*/
protected void updateRadii()
{
if (!stpRadiiDirty)
return;
stpRadiiDirty = false;
if (minimumMargin == 0.0 && maximumMargin == 0.0)
{
smallRadius = Double.NaN;
largeRadius = Double.NaN;
}
else
{
smallRadius = minimumMargin;
largeRadius = STPShape3DTools.computeLargeRadiusFromMargins(minimumMargin,
maximumMargin,
STPShape3DTools.computeConvexPolytope3DMaximumEdgeLengthSquared(rawConvexPolytope3D));
}
}
@Override
public boolean containsNaN()
{
return rawConvexPolytope3D.containsNaN();
}
@Override
public double getVolume()
{
return rawConvexPolytope3D.getVolume();
}
@Override
public List<FrameFace3D> getFaces()
{
return rawConvexPolytope3D.getFaces();
}
@Override
public List<FrameHalfEdge3D> getHalfEdges()
{
return rawConvexPolytope3D.getHalfEdges();
}
@Override
public List<FrameVertex3D> getVertices()
{
return rawConvexPolytope3D.getVertices();
}
@Override
public double getConstructionEpsilon()
{
return rawConvexPolytope3D.getConstructionEpsilon();
}
@Override
public ReferenceFrame getReferenceFrame()
{
return rawConvexPolytope3D.getReferenceFrame();
}
@Override
public FrameBoundingBox3DReadOnly getBoundingBox()
{
return boundingBox;
}
@Override
public FrameSTPConvexPolytope3D copy()
{
return new FrameSTPConvexPolytope3D(this);
}
@Override
public FramePoint3DReadOnly getCentroid()
{
return rawConvexPolytope3D.getCentroid();
}
@Override
public boolean getSupportingVertex(Vector3DReadOnly supportDirection, Point3DBasics supportingVertexToPack)
{
return supportingVertexCalculator.getSupportingVertex(this, getSmallRadius(), getLargeRadius(), supportDirection, supportingVertexToPack);
}
@Override
public FixedFrameShape3DPoseBasics getPose()
{
return null;
}
/** {@inheritDoc} */
@Override
public void applyTransform(Transform transform)
{
rawConvexPolytope3D.applyTransform(transform);
}
/** {@inheritDoc} */
@Override
public void applyInverseTransform(Transform transform)
{
rawConvexPolytope3D.applyInverseTransform(transform);
}
/**
* Tests if the given {@code object}'s class is the same as this, in which case the method returns
* {@link #equals(FrameSTPConvexPolytope3DReadOnly)}, it returns {@code false} otherwise.
*
* @param object the object to compare against this. Not modified.
* @return {@code true} if {@code object} and this are exactly equal, {@code false} otherwise.
*/
@Override
public boolean equals(Object object)
{
if (object instanceof FrameSTPConvexPolytope3DReadOnly)
return FrameSTPConvexPolytope3DReadOnly.super.equals((FrameSTPConvexPolytope3DReadOnly) object);
else
return false;
}
/**
* Calculates and returns a hash code value from the value of each component of this convex polytope
* 3D.
*
* @return the hash code value for this convex polytope 3D.
*/
@Override
public int hashCode()
{
long hash = EuclidHashCodeTools.combineHashCode(rawConvexPolytope3D.hashCode(), EuclidHashCodeTools.toLongHashCode(minimumMargin, maximumMargin));
return EuclidHashCodeTools.toIntHashCode(hash);
}
/**
* Provides a {@code String} representation of this convex polytope 3D as follows:
*
* <pre>
 * STP Convex polytope 3D: number of: [faces: 4, edges: 12, vertices: 4]
* Face list:
* centroid: ( 0.582, -0.023, 0.160 ), normal: ( 0.516, -0.673, 0.530 )
* centroid: ( 0.420, 0.176, 0.115 ), normal: (-0.038, 0.895, -0.444 )
* centroid: ( 0.264, -0.253, -0.276 ), normal: ( 0.506, 0.225, -0.833 )
* centroid: ( 0.198, -0.176, -0.115 ), normal: (-0.643, -0.374, 0.668 )
* Edge list:
* [( 0.674, 0.482, 0.712 ); ( 0.870, 0.251, 0.229 )]
* [( 0.870, 0.251, 0.229 ); ( 0.204, -0.803, -0.461 )]
* [( 0.204, -0.803, -0.461 ); ( 0.674, 0.482, 0.712 )]
* [( 0.870, 0.251, 0.229 ); ( 0.674, 0.482, 0.712 )]
* [( 0.674, 0.482, 0.712 ); (-0.283, -0.207, -0.595 )]
* [(-0.283, -0.207, -0.595 ); ( 0.870, 0.251, 0.229 )]
* [( 0.204, -0.803, -0.461 ); ( 0.870, 0.251, 0.229 )]
* [( 0.870, 0.251, 0.229 ); (-0.283, -0.207, -0.595 )]
* [(-0.283, -0.207, -0.595 ); ( 0.204, -0.803, -0.461 )]
* [( 0.674, 0.482, 0.712 ); ( 0.204, -0.803, -0.461 )]
* [( 0.204, -0.803, -0.461 ); (-0.283, -0.207, -0.595 )]
* [(-0.283, -0.207, -0.595 ); ( 0.674, 0.482, 0.712 )]
* Vertex list:
* ( 0.674, 0.482, 0.712 )
* ( 0.870, 0.251, 0.229 )
* ( 0.204, -0.803, -0.461 )
* (-0.283, -0.207, -0.595 )
* worldFrame
* small radius: 0.001, large radius: 1.000
* </pre>
*
* @return the {@code String} representing this convex polytope 3D.
*/
@Override
public String toString()
{
String stpSuffix = String.format("\nsmall radius: " + DEFAULT_FORMAT + ", large radius: " + DEFAULT_FORMAT + "]", getSmallRadius(), getLargeRadius());
return "STP" + EuclidFrameShapeIOTools.getFrameConvexPolytope3DString(this) + stpSuffix;
}
}
|
We live in a world that is, in many ways, more predictable and under our own control than it has ever been before. We can predict the weather, find information on any subject almost instantaneously, have almost any material item delivered to our door within 24 hours with the click of a button, and can communicate with others across any distance or time zones.
And yet, in reality, we have little control over the events of our lives -- except how we react to them. The comfortable predictability and sense of orderliness that modern technology affords is wonderful for many things, but it can't do anything to help us cope with an inherently unpredictable world that can upend our daily plans or entire life trajectories at a moment's notice.
The ancient philosophy of Stoicism arose to deal with this very problem: How do we live a good life in a capricious, and sometimes cruel, world? And how do we minimize the suffering we experience as a result of events over which we have no control?
"It's a practical philosophy that's designed to help people deal with an inherently unpredictable world," Ryan Holiday, author of The Obstacle Is The Way, told the Huffington Post. "People think Stoicism is about not having emotions... [but] Stoicism as a philosophy is a series of exercises and reminders that men and women have practiced throughout history that are designed to help them deal with loss, pain, fear, our own mortality, temptation. It's about living an ordered, rational disciplined life so you're not being jerked around by success or failure."
We tend to inflict needless injury on ourselves by not understanding our emotions and misunderstanding the nature of external events -- namely, the fact that we don't control them.
"People are unhappy because they chase things that they don't control, or people are unhappy because they're unprepared for a world they don't control, or people create pain because they've made themselves dependent on things they don't control," explains Holiday.
Today, the ancient school of thought that created a science out of dealing with life's challenges is making a resurgence in popular thought. Perhaps the most famous proponent of Stoicism, the Roman statesman and philosopher Marcus Aurelius, has left a lasting legacy of Stoic thought that inspired Civil War leaders and troops during the American Revolution as well as modern-day tech CEOs and entrepreneurs, according to Holiday.
Aurelius's magnum opus The Meditations, written as private notes in the last decade of his life (170-180 C.E.), is based around a single, simple notion that captures the heart of Stoic philosophy. Aurelius writes, “You have power over your mind -- not outside events. Realize this, and you will find strength.”
Stoicism can be applied to anyone's life as a way to cope with change, challenge and even success. Here are six things that Stoic philosophy can teach us about how to live well.
Accept that most of life is beyond your control.
The iconic Serenity Prayer -- a mainstay of 12-step programs, Christian worship services and an ingrained part of our cultural dialogue -- is a perfect example of the core belief of Stoicism. The prayer preaches "accepting hardships as a pathway to peace" and surrendering to God's will.
"It's about identifying the difference between what's in your control and what's out of your control, which is probably the most salient influence of Stoicism on modern-day life," says Holiday.
What we must do, according to Stoicism, is to define the difference between what's within our control and what's outside our control, and then focus exclusively on what is within our control -- this is almost always on ourselves, our reactions, feelings and the stories we tell ourselves about a particular event.
"What we see in this writing is that stuff happens in life -- sometimes really bad stuff -- and we don't get to decide whether it happens to us and why it happens," says Holiday. "We just decide how we respond."
The Stoic approach echoes Buddhist ideals of detachment and acceptance, which hold that attachment to external things and outcomes causes suffering, and that acceptance of what cannot be changed or controlled is the key to reducing that suffering.
“If there is no solution to the problem then don't waste time worrying about it," the Dalai Lama once said. "If there is a solution to the problem then don't waste time worrying about it.”
We have enormous freedom to choose our thoughts and reactions.
We can choose to exercise power over our thoughts and attitudes in even the most dire of situations -- Roman philosopher Cicero uses the example of torture to illustrate a man's power to choose his own thoughts, which he says can never be taken away from him. In his Discussions at Tusculum, Cicero explains that when a man has been stripped of his dignity, he has not also been stripped of his potential for happiness.
Invoking the same metaphor, Gregory David Roberts' 2003 novel Shantaram, the story of an Australian convict and drug addict who escapes from prison and flees to Mumbai, serves as a powerful illustration of Stoic philosophy. The novel's protagonist Lin describes a life-changing epiphany he had while being tortured:
“It took me a long time and most of the world to learn what I know about love and fate and the choices we make, but the heart of it came to me in an instant, while I was chained to a wall and being tortured. I realised, somehow, through the screaming of my mind, that even in that shackled, bloody helplessness, I was still free: free to hate the men who were torturing me, or to forgive them. It doesn’t sound like much, I know. But in the flinch and bite of the chain, when it’s all you’ve got, that freedom is an universe of possibility. And the choice you make between hating and forgiving, can become the story of your life.”
Find your 'inner citadel.'
Marcus Aurelius faced a fair share of hardship and warfare in his life, and is thought to have written the Meditations from a tent in a Roman battle camp.
The Roman statesman wrote that in dire situations, man must have an "inner citadel" to which he can retreat. Living from this inner place of peace and equanimity -- a place which no person or external event can penetrate -- gives a man the freedom to shape his life by responding to events from a rational, calm headspace.
"What is beyond doubt is that we spend most of our life outside our citadel," Arianna Huffington writes in Thrive: The Third Metric to Redefining Success and Creating a Life of Well-Being, Wisdom, and Wonder, nodding to Aurelius's Meditations. "But we can learn to course-correct faster and faster, ten billion times a day if necessary, and bring ourselves back to that place of stillness, imperturbability and loving -- until it becomes second nature to return quickly to what is our true nature."
Remember that the universe has your back.
The Meditations, and Stoicism more broadly, encourage us to view life as being inherently on our side -- and it's a powerful way of reframing any obstacle we encounter. And of course, with the clarity of hindsight, we often see that the obstacles in the path are what ended up paving the way for bigger and better things.
Aurelius wrote that we should view all of life -- including its inevitable struggles -- as an "old and faithful friend":
True understanding is to see the events of life in this way: 'You are here for my benefit, though rumor paints you otherwise.' And everything is turned to one's advantage when he greets a situation like this: You are the very thing I was looking for. Truly whatever arises in life is the right material to bring about your growth and the growth of those around you. This, in a word, is art -- and this art called 'life' is a practice suitable to both men and gods. Everything contains some special purpose and a hidden blessing; what then could be strange or arduous when all of life is here to greet you like an old and faithful friend?
Steve Jobs expressed a similar sentiment during his famous 2005 Stanford commencement speech, and said that getting fired from Apple was the "best thing that ever happened" to him.
"You can't connect the dots looking forward; you can only connect them looking backwards," said Jobs. "So you have to trust that the dots will somehow connect in your future."
Let go of your expectations for other people.
Stoicism is about responsibility for yourself, and accepting a lack of control over the actions of others.
"You should understand that other people will struggle and may react poorly in various situations, and not hold it against them," says Holiday. "It's about creating the right attitude and mindset so that you don't see other people as being in conflict with you... [the way they behave] doesn't hurt your feelings and it doesn't change what you have to do and how you treat those people."
This makes it a particularly relevant school of thought for leaders, as Stoicism provides a path for leading with a cool head and an even keel, directing others towards their highest capacity without judgement or frustration when they fall short.
Turn your thoughts into action.
"The impediment to action advances action," Aurelius wrote. "What stands in the way becomes the way."
Stoicism is, above all, a practical philosophy, emphasizing the importance of personal responsibility and action.
"It might seem idealistic, but [the Meditations] is rooted in realism and pragmatism because that's the world he's in," says Holiday. "It's designed for real life, and not just real life, but some of the hardest parts of life."
It's not a philosophy of blind optimism and rose-colored glasses -- it's one of rationality and acceptance; being honest with yourself and not expecting the world or other people to be anything other than what they are.
"With its emphasis on realism and honesty... [Stoicism] puts you in a position to be really flexible and do great things because you're not expecting anything to be different," says Holiday.
A proponent of Stoicism, for instance, doesn't sit around and wait for what they think they deserve -- instead, they go out and do whatever they have control over in order to make it happen.
"Don't expect Google to call you and give you your dream job out of college," says Holiday. "Understand that it's on you to prove yourself and to do the work, and if it happens then it happens."
|
def mountfeed(self, feed_pat, feed_rot=None):
    """Mount a feed pattern, optionally rotating its frame by feed_rot."""
    self.feed_pat = feed_pat
    if feed_rot is not None:
        self.feed_pat.rotateframe(feed_rot)
|
/**
* Generated class : msg_rc_channels_override
* DO NOT MODIFY!
**/
package org.mavlink.messages.ardupilotmega;
import org.mavlink.messages.MAVLinkMessage;
import org.mavlink.IMAVLinkCRC;
import org.mavlink.MAVLinkCRC;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
/**
* Class msg_rc_channels_override
* The RAW values of the RC channels sent to the MAV to override info received from the RC radio. A value of UINT16_MAX means no change to that channel. A value of 0 means control of that channel should be released back to the RC radio. The standard PPM modulation is as follows: 1000 microseconds: 0%, 2000 microseconds: 100%. Individual receivers/transmitters might violate this specification.
**/
public class msg_rc_channels_override extends MAVLinkMessage {
public static final int MAVLINK_MSG_ID_RC_CHANNELS_OVERRIDE = 70;
private static final long serialVersionUID = MAVLINK_MSG_ID_RC_CHANNELS_OVERRIDE;
public msg_rc_channels_override(int sysId, int componentId) {
messageType = MAVLINK_MSG_ID_RC_CHANNELS_OVERRIDE;
this.sysId = sysId;
this.componentId = componentId;
length = 18;
}
/**
* RC channel 1 value, in microseconds. A value of UINT16_MAX means to ignore this field.
*/
public int chan1_raw;
/**
* RC channel 2 value, in microseconds. A value of UINT16_MAX means to ignore this field.
*/
public int chan2_raw;
/**
* RC channel 3 value, in microseconds. A value of UINT16_MAX means to ignore this field.
*/
public int chan3_raw;
/**
* RC channel 4 value, in microseconds. A value of UINT16_MAX means to ignore this field.
*/
public int chan4_raw;
/**
* RC channel 5 value, in microseconds. A value of UINT16_MAX means to ignore this field.
*/
public int chan5_raw;
/**
* RC channel 6 value, in microseconds. A value of UINT16_MAX means to ignore this field.
*/
public int chan6_raw;
/**
* RC channel 7 value, in microseconds. A value of UINT16_MAX means to ignore this field.
*/
public int chan7_raw;
/**
* RC channel 8 value, in microseconds. A value of UINT16_MAX means to ignore this field.
*/
public int chan8_raw;
/**
* System ID
*/
public int target_system;
/**
* Component ID
*/
public int target_component;
/**
* Decode message with raw data
*/
public void decode(ByteBuffer dis) throws IOException {
chan1_raw = (int)dis.getShort()&0x00FFFF;
chan2_raw = (int)dis.getShort()&0x00FFFF;
chan3_raw = (int)dis.getShort()&0x00FFFF;
chan4_raw = (int)dis.getShort()&0x00FFFF;
chan5_raw = (int)dis.getShort()&0x00FFFF;
chan6_raw = (int)dis.getShort()&0x00FFFF;
chan7_raw = (int)dis.getShort()&0x00FFFF;
chan8_raw = (int)dis.getShort()&0x00FFFF;
target_system = (int)dis.get()&0x00FF;
target_component = (int)dis.get()&0x00FF;
}
/**
* Encode message with raw data and other information
*/
public byte[] encode() throws IOException {
byte[] buffer = new byte[8+18];
ByteBuffer dos = ByteBuffer.wrap(buffer).order(ByteOrder.LITTLE_ENDIAN);
dos.put((byte)0xFE);
dos.put((byte)(length & 0x00FF));
dos.put((byte)(sequence & 0x00FF));
dos.put((byte)(sysId & 0x00FF));
dos.put((byte)(componentId & 0x00FF));
dos.put((byte)(messageType & 0x00FF));
dos.putShort((short)(chan1_raw&0x00FFFF));
dos.putShort((short)(chan2_raw&0x00FFFF));
dos.putShort((short)(chan3_raw&0x00FFFF));
dos.putShort((short)(chan4_raw&0x00FFFF));
dos.putShort((short)(chan5_raw&0x00FFFF));
dos.putShort((short)(chan6_raw&0x00FFFF));
dos.putShort((short)(chan7_raw&0x00FFFF));
dos.putShort((short)(chan8_raw&0x00FFFF));
dos.put((byte)(target_system&0x00FF));
dos.put((byte)(target_component&0x00FF));
int crc = MAVLinkCRC.crc_calculate_encode(buffer, 18);
crc = MAVLinkCRC.crc_accumulate((byte) IMAVLinkCRC.MAVLINK_MESSAGE_CRCS[messageType], crc);
byte crcl = (byte) (crc & 0x00FF);
byte crch = (byte) ((crc >> 8) & 0x00FF);
buffer[24] = crcl;
buffer[25] = crch;
return buffer;
}
public String toString() {
return "MAVLINK_MSG_ID_RC_CHANNELS_OVERRIDE : " + " chan1_raw="+chan1_raw+ " chan2_raw="+chan2_raw+ " chan3_raw="+chan3_raw+ " chan4_raw="+chan4_raw+ " chan5_raw="+chan5_raw+ " chan6_raw="+chan6_raw+ " chan7_raw="+chan7_raw+ " chan8_raw="+chan8_raw+ " target_system="+target_system+ " target_component="+target_component;}
}
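The `encode()` method above fixes the MAVLink v1 wire layout: a 6-byte header (STX, length, sequence, system id, component id, message id), an 18-byte little-endian payload, and a 2-byte checksum. A minimal Python sketch of the same framing follows; it is not part of the generated API, the X.25 checksum is the common MAVLink convention, and the CRC_EXTRA value 124 for message 70 is an assumption here.

```python
import struct

def x25_crc(data: bytes, crc: int = 0xFFFF) -> int:
    """Accumulate the X.25 (CRC-16/MCRF4XX) checksum used by MAVLink."""
    for b in data:
        tmp = (b ^ crc) & 0xFF
        tmp = (tmp ^ (tmp << 4)) & 0xFF
        crc = ((crc >> 8) ^ (tmp << 8) ^ (tmp << 3) ^ (tmp >> 4)) & 0xFFFF
    return crc

def pack_rc_override(seq, sys_id, comp_id, chans, tgt_sys, tgt_comp,
                     crc_extra=124):  # assumed CRC_EXTRA for message 70
    """Pack a MAVLink v1 RC_CHANNELS_OVERRIDE frame, mirroring encode()."""
    assert len(chans) == 8
    # Header: STX, payload length, sequence, system id, component id, msg id.
    header = struct.pack("<BBBBBB", 0xFE, 18, seq, sys_id, comp_id, 70)
    # Payload: eight uint16 channel values, then the two target ids.
    payload = struct.pack("<8HBB", *chans, tgt_sys, tgt_comp)
    # The CRC covers everything after STX, plus the per-message CRC_EXTRA byte.
    crc = x25_crc(header[1:] + payload + bytes([crc_extra]))
    return header + payload + struct.pack("<H", crc)
```

The resulting frame is 6 + 18 + 2 = 26 bytes, which matches the `new byte[8+18]` buffer and the CRC positions 24 and 25 in the Java `encode()` above.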
|
/**
* Created by DevSaki on 10/05/2015.
* db maintenance class
*
* @deprecated Replaced by {@link ObjectBoxDB}; class is kept for data migration purposes
*/
@Deprecated
@SuppressWarnings("squid:S1192") // Putting SQL literals into constants would be too cumbersome
public class HentoidDB extends SQLiteOpenHelper {
private static final int DATABASE_VERSION = 8;
private static HentoidDB instance;
private SQLiteDatabase mDatabase;
private int mOpenCounter;
private HentoidDB(Context context) {
super(context, Consts.DATABASE_NAME, null, DATABASE_VERSION);
}
// Use this to get db instance
public static synchronized HentoidDB getInstance(Context context) {
// Use application context only
if (instance == null) {
instance = new HentoidDB(context.getApplicationContext());
}
return instance;
}
@Override
public void onCreate(SQLiteDatabase db) {
db.execSQL(ContentTable.CREATE_TABLE);
db.execSQL(AttributeTable.CREATE_TABLE);
db.execSQL(ContentAttributeTable.CREATE_TABLE);
db.execSQL(ImageFileTable.CREATE_TABLE);
db.execSQL(ImageFileTable.SELECT_PROCESSED_BY_CONTENT_ID_IDX);
db.execSQL(QueueTable.CREATE_TABLE);
}
@Override
public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
if (oldVersion < 2) {
db.execSQL("ALTER TABLE " + ContentTable.TABLE_NAME + " ADD COLUMN " + ContentTable.AUTHOR_COLUMN + " TEXT");
db.execSQL("ALTER TABLE " + ContentTable.TABLE_NAME + " ADD COLUMN " + ContentTable.STORAGE_FOLDER_COLUMN + " TEXT");
Timber.i("Upgrading DB version to v2");
}
if (oldVersion < 3) {
db.execSQL("ALTER TABLE " + ContentTable.TABLE_NAME + " ADD COLUMN " + ContentTable.FAVOURITE_COLUMN + " INTEGER DEFAULT 0");
Timber.i("Upgrading DB version to v3");
}
if (oldVersion < 4) {
db.execSQL(QueueTable.CREATE_TABLE);
Timber.i("Upgrading DB version to v4");
}
if (oldVersion < 5) {
db.execSQL(ImageFileTable.SELECT_PROCESSED_BY_CONTENT_ID_IDX);
Timber.i("Upgrading DB version to v5");
}
if (oldVersion < 6) {
db.execSQL("ALTER TABLE " + ContentTable.TABLE_NAME + " ADD COLUMN " + ContentTable.READS_COLUMN + " INTEGER DEFAULT 1");
Timber.i("Upgrading DB version to v6");
}
if (oldVersion < 7) {
db.execSQL("ALTER TABLE " + ContentTable.TABLE_NAME + " ADD COLUMN " + ContentTable.LAST_READ_DATE_COLUMN + " INTEGER");
db.execSQL("UPDATE " + ContentTable.TABLE_NAME + " SET " + ContentTable.LAST_READ_DATE_COLUMN + " = " + ContentTable.DOWNLOAD_DATE_COLUMN);
Timber.i("Upgrading DB version to v7");
}
if (oldVersion < 8) {
db.execSQL("ALTER TABLE " + ContentTable.TABLE_NAME + " ADD COLUMN " + ContentTable.DOWNLOAD_PARAMS_COLUMN + " TEXT");
db.execSQL("UPDATE " + ContentTable.TABLE_NAME + " SET " + ContentTable.DOWNLOAD_PARAMS_COLUMN + " = ''");
db.execSQL("ALTER TABLE " + ImageFileTable.TABLE_NAME + " ADD COLUMN " + ImageFileTable.DOWNLOAD_PARAMS_COLUMN + " TEXT");
db.execSQL("UPDATE " + ImageFileTable.TABLE_NAME + " SET " + ImageFileTable.DOWNLOAD_PARAMS_COLUMN + " = ''");
Timber.i("Upgrading DB version to v8");
}
}
// The two following methods handle multiple threads accessing the DB simultaneously
// => only the last active thread will close the DB
private synchronized SQLiteDatabase openDatabase() {
mOpenCounter++;
if (mOpenCounter == 1) {
Timber.d("Opening db connection.");
mDatabase = this.getWritableDatabase();
}
return mDatabase;
}
private synchronized void closeDatabase() {
mOpenCounter--;
if (0 == mOpenCounter && mDatabase != null && mDatabase.isOpen()) {
Timber.d("Closing db connection.");
mDatabase.close();
}
}
// FUNCTIONAL METHODS
long countContentEntries() {
long count;
SQLiteDatabase db = openDatabase();
try {
count = DatabaseUtils.queryNumEntries(db, ContentTable.TABLE_NAME);
} finally {
closeDatabase();
}
return count;
}
@Nullable
public Content selectContentById(long id) {
Content result;
Timber.d("selectContentById");
SQLiteDatabase db = openDatabase();
try {
result = selectContentById(db, id);
} finally {
closeDatabase();
}
return result;
}
@Nullable
private Content selectContentById(SQLiteDatabase db, long id) {
Content result = null;
try (Cursor cursorContents = db.rawQuery(ContentTable.SELECT_BY_CONTENT_ID, new String[]{id + ""})) {
if (cursorContents.moveToFirst()) {
result = populateContent(cursorContents, db);
}
}
return result;
}
List<Content> selectContentEmptyFolder() {
List<Content> result;
Timber.d("selectContentEmptyFolder");
SQLiteDatabase db = openDatabase();
try (Cursor cursorContent = db.rawQuery(ContentTable.SELECT_NULL_FOLDERS, new String[]{})) {
result = populateResult(cursorContent, db);
} finally {
closeDatabase();
}
return result;
}
private List<Content> populateResult(Cursor cursorContent, SQLiteDatabase db) {
List<Content> result = Collections.emptyList();
if (cursorContent.moveToFirst()) {
result = new ArrayList<>();
do {
result.add(populateContent(cursorContent, db));
} while (cursorContent.moveToNext());
}
return result;
}
private Content populateContent(Cursor cursorContent, SQLiteDatabase db) {
Content content = new Content()
.setSite(Site.searchByCode(cursorContent.getInt(ContentTable.IDX_SOURCECODE - 1)))
.setUrl(cursorContent.getString(ContentTable.IDX_URL - 1))
.setTitle(cursorContent.getString(ContentTable.IDX_TITLE - 1))
.setQtyPages(cursorContent.getInt(ContentTable.IDX_QTYPAGES - 1))
.setUploadDate(cursorContent.getLong(ContentTable.IDX_ULDATE - 1))
.setDownloadDate(cursorContent.getLong(ContentTable.IDX_DLDATE - 1))
.setStatus(StatusContent.searchByCode(cursorContent.getInt(ContentTable.IDX_STATUSCODE - 1)))
.setCoverImageUrl(cursorContent.getString(ContentTable.IDX_COVERURL - 1))
.setAuthor(cursorContent.getString(ContentTable.IDX_AUTHOR - 1))
.setStorageFolder(cursorContent.getString(ContentTable.IDX_STORAGE_FOLDER - 1))
.setFavourite(1 == cursorContent.getInt(ContentTable.IDX_FAVOURITE - 1))
.setReads(cursorContent.getLong(ContentTable.IDX_READS - 1))
.setLastReadDate(cursorContent.getLong(ContentTable.IDX_LAST_READ_DATE - 1))
.setDownloadParams(cursorContent.getString(ContentTable.IDX_DOWNLOAD_PARAMS - 1))
.setQueryOrder(cursorContent.getPosition());
long id = cursorContent.getLong(ContentTable.IDX_INTERNALID - 1);
content.addImageFiles(selectImageFilesByContentId(db, id))
.addAttributes(selectAttributesByContentId(db, id, content.getSite()));
content.populateAuthor();
return content;
}
private List<ImageFile> selectImageFilesByContentId(SQLiteDatabase db, long id) {
List<ImageFile> result = Collections.emptyList();
try (Cursor cursorImageFiles = db.rawQuery(ImageFileTable.SELECT_BY_CONTENT_ID,
new String[]{id + ""})) {
// looping through all rows and adding to list
if (cursorImageFiles.moveToFirst()) {
result = new ArrayList<>();
do {
result.add(new ImageFile()
.setOrder(cursorImageFiles.getInt(2))
.setStatus(StatusContent.searchByCode(cursorImageFiles.getInt(3)))
.setUrl(cursorImageFiles.getString(4))
.setName(cursorImageFiles.getString(5))
.setDownloadParams(cursorImageFiles.getString(6))
);
} while (cursorImageFiles.moveToNext());
}
}
return result;
}
private AttributeMap selectAttributesByContentId(SQLiteDatabase db, long id, Site site) {
AttributeMap result = null;
try (Cursor cursorAttributes = db.rawQuery(AttributeTable.SELECT_BY_CONTENT_ID,
new String[]{id + ""})) {
// looping through all rows and adding to list
if (cursorAttributes.moveToFirst()) {
result = new AttributeMap();
do {
result.add(
new Attribute(
AttributeType.searchByCode(cursorAttributes.getInt(3)),
cursorAttributes.getString(2),
cursorAttributes.getString(1),
site
)
);
} while (cursorAttributes.moveToNext());
}
}
return result;
}
void updateContentStorageFolder(Content row) {
Timber.d("updateContentStorageFolder");
SQLiteDatabase db = openDatabase();
try (SQLiteStatement statement = db.compileStatement(ContentTable.UPDATE_CONTENT_STORAGE_FOLDER)) {
db.beginTransaction();
try {
statement.clearBindings();
statement.bindString(1, row.getStorageFolder());
statement.bindLong(2, row.getId());
statement.execute();
db.setTransactionSuccessful();
} finally {
db.endTransaction();
}
} finally {
closeDatabase();
}
}
void updateContentStatus(StatusContent updateFrom, StatusContent updateTo) {
Timber.d("updateContentStatus2");
SQLiteDatabase db = openDatabase();
try (SQLiteStatement statement = db.compileStatement(ContentTable.UPDATE_CONTENT_STATUS_STATEMENT)) {
db.beginTransaction();
try {
statement.clearBindings();
statement.bindLong(1, updateTo.getCode());
statement.bindLong(2, updateFrom.getCode());
statement.execute();
db.setTransactionSuccessful();
} finally {
db.endTransaction();
}
} finally {
closeDatabase();
}
}
List<Pair<Integer, Integer>> selectQueue() {
ArrayList<Pair<Integer, Integer>> result = new ArrayList<>();
Timber.d("selectQueue");
SQLiteDatabase db = openDatabase();
try (Cursor cursorQueue = db.rawQuery(QueueTable.SELECT_QUEUE, new String[]{})) {
// looping through all rows and adding to list
if (cursorQueue.moveToFirst()) {
do {
result.add(new Pair<>(cursorQueue.getInt(0), cursorQueue.getInt(1)));
} while (cursorQueue.moveToNext());
}
} finally {
closeDatabase();
}
return result;
}
List<Integer> selectContentsForQueueMigration() {
ArrayList<Integer> result = new ArrayList<>();
Timber.d("selectContentsForQueueMigration");
SQLiteDatabase db = openDatabase();
try (Cursor cursorQueue = db.rawQuery(QueueTable.SELECT_CONTENT_FOR_QUEUE_MIGRATION, new String[]{})) {
// looping through all rows and adding to list
if (cursorQueue.moveToFirst()) {
do {
result.add(cursorQueue.getInt(0));
} while (cursorQueue.moveToNext());
}
} finally {
closeDatabase();
}
return result;
}
void insertQueue(int id, int order) {
Timber.d("insertQueue");
SQLiteDatabase db = openDatabase();
try (SQLiteStatement statement = db.compileStatement(QueueTable.INSERT_STATEMENT)) {
statement.clearBindings();
statement.bindLong(1, id);
statement.bindLong(2, order);
statement.execute();
} finally {
closeDatabase();
}
}
public List<Integer> selectMigrableContentIds() {
ArrayList<Integer> result = new ArrayList<>();
Timber.d("selectMigrableContentIds");
SQLiteDatabase db = openDatabase();
try (Cursor cursorQueue = db.rawQuery(ContentTable.SELECT_MIGRABLE_CONTENT, new String[]{
StatusContent.DOWNLOADED.getCode() + "",
StatusContent.ERROR.getCode() + "",
StatusContent.MIGRATED.getCode() + "",
StatusContent.DOWNLOADING.getCode() + "",
StatusContent.PAUSED.getCode() + ""
})) {
// looping through all rows and adding to list
if (cursorQueue.moveToFirst()) {
do {
result.add(cursorQueue.getInt(0));
} while (cursorQueue.moveToNext());
}
} finally {
closeDatabase();
}
return result;
}
public SparseIntArray selectQueueForMigration() {
SparseIntArray result = new SparseIntArray();
Timber.d("selectQueueForMigration");
SQLiteDatabase db = openDatabase();
try (Cursor cursorQueue = db.rawQuery(QueueTable.SELECT_QUEUE, new String[]{})) {
// looping through all rows and adding to list
if (cursorQueue.moveToFirst()) {
do {
result.put(cursorQueue.getInt(0), cursorQueue.getInt(1));
} while (cursorQueue.moveToNext());
}
} finally {
closeDatabase();
}
return result;
}
}
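The `openDatabase()`/`closeDatabase()` pair above implements a reference-counted connection: every caller increments a counter on open, and only the last active caller actually closes the underlying handle. A minimal Python sketch of the same pattern using `sqlite3` (class and method names are illustrative, not from the Hentoid codebase):

```python
import sqlite3
import threading

class CountedDB:
    """Reference-counted DB handle: only the last active user closes it."""

    def __init__(self, path=":memory:"):
        self._path = path
        self._lock = threading.Lock()
        self._count = 0
        self._conn = None

    def open(self):
        # First open creates the connection; later opens reuse it.
        with self._lock:
            self._count += 1
            if self._count == 1:
                self._conn = sqlite3.connect(self._path,
                                             check_same_thread=False)
            return self._conn

    def close(self):
        # Only the final close actually releases the connection.
        with self._lock:
            self._count -= 1
            if self._count == 0 and self._conn is not None:
                self._conn.close()
                self._conn = None
```

As in the Java class, a thread that finishes early can call `close()` without pulling the connection out from under other threads still using it.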
|
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""Pig Dice Game Unittesting."""
# import logging
import unittest
from game.game import Game
# from game.player import Player
# from game.intelligence import Intelligence
# from game.game import dicehand
class TestGameClass(unittest.TestCase):
"""Test the class."""
def test_init_default_object(self):
"""Instantiate an object and check its properties."""
res = Game()
exp = Game
self.assertIsInstance(res, exp)
def test_start(self):
"""Test if game starts, restarts."""
a_game = Game()
a_game.start()
res = a_game.running_score == 0
self.assertTrue(res)
a_game.player.score = 10
a_game.start()
res = a_game.player.score == 0
self.assertTrue(res)
def test_cheat(self):
"""Test to get the number."""
a_game = Game()
a_game.cheat()
res = a_game.cheat()
exp = a_game.roll_dice()
self.assertEqual(res, exp)
def test_change_name(self):
"""Test to chanhe the name on the player."""
a_game = Game()
a_game.change_the_name("winner")
res = a_game.player.get_name()
exp = a_game.player.name
self.assertEqual(res, exp)
def test_get_name(self):
"""Test to get name."""
a_game = Game()
res = a_game.get_name()
exp = a_game.player.name
self.assertEqual(res, exp)
def test_get_history(self):
"""Test if we can get history."""
pass
def test_add_running_score(self):
"""Running score."""
a_game = Game()
a_game.add_running_score(1)
res = a_game.running_score == 0
self.assertTrue(res)
a_game.add_running_score(3)
res = a_game.running_score == 3
self.assertTrue(res)
# logging.warning(f'\nRESULT: { res }')
def test_hold_score(self):
"""Hold score."""
a_game = Game()
a_game.running_score = 10
a_game.hold_score()
res = a_game.player.score == 10
self.assertTrue(res)
def test_get_player_score(self):
"""Get the Score."""
a_game = Game()
res = a_game.get_player_score()
exp = a_game.player.score
self.assertEqual(res, exp)
def test_get_intelligence_score(self):
"""Get score."""
a_game = Game()
res = a_game.get_intelligence_score() == a_game.intelli.sum_scores
self.assertTrue(res)
# logging.warning(f'\nRESULT: { res }')
def test_current_game_is(self):
"""Find out what number of game is it."""
a_game = Game()
a_game.th_game = 3
res = a_game.current_game_is()
exp = a_game.th_game
self.assertEqual(res, exp)
# logging.warning(f'\nRESULT: { res }')
def test_who_is_the_winner(self):
"""Check if there is a winner."""
a_game = Game()
a_game.who_is_the_winner()
res = a_game.player.get_score()
exp = False
self.assertEqual(res, exp)
a_game.who_is_the_winner()
a_game.player.add_score(101)
res = a_game.who_is_the_winner()
self.assertTrue(res)
# logging.warning(f'\nRESULT: { res }')
if __name__ == "__main__":
unittest.main()
|
The sustainability hub: an information management tool for analysis and decision making Sustainability is becoming an increasingly important driver for which decision makers -- consumers, corporate and government -- rely on principled, accurate and provenanced metrics to make appropriate behavior changes. Our assertion here is that a Sustainability Hub which manages such metrics together with their context and chains of reasoning will be of great benefit to the global community. In this paper we explain the Hub vision and explain its triple value proposition of context, chains of reasoning and community. We propose a data model and describe our existing prototype.
|
Influence of endophytic Bacillus pumilus and EDTA on the phytoextraction of Cu from soil by using Cicer arietinum ABSTRACT In developing countries, soil contamination with metals is ubiquitous, which poses a serious threat to the ecosystem. The current study was designed to assess the combined potential of Cicer arietinum plants and Bacillus pumilus (KF 875447) for extracting copper (Cu) from contaminated soils. A pot experiment was conducted by growing C. arietinum seedlings either inoculated with B. pumilus or uninoculated, along with the application of 5 mM ethylenediaminetetraacetic acid (EDTA). Plants were subjected to three different concentrations of Cu (250, 350, and 500 ppm) for 48 days. An increase in Cu uptake was observed in C. arietinum plants inoculated with B. pumilus as compared to uninoculated ones. C. arietinum exhibited improved values for different growth parameters in the presence of B. pumilus, that is, root length (37%), shoot length (31%), whole-plant fresh weight (45%), dry weight (27%), and chlorophyll contents (32%). A tolerance index (TI) of more than 70% was observed for plants at the 500 ppm Cu treatment. Addition of B. pumilus and EDTA significantly increased metal uptake by C. arietinum up to 19 and 36%, respectively, while the application of B. pumilus and EDTA in combination increased metal accumulation by 41%. The calculated bioaccumulation and translocation factors (TF) revealed that C. arietinum possesses phytoextraction potential for Cu, and this ability is significantly improved by the application of B. pumilus and EDTA amendments.
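The bioaccumulation and translocation factors the abstract relies on are simple concentration ratios; a minimal sketch with illustrative (hypothetical) numbers, not data from the study:

```python
def bioaccumulation_factor(metal_in_plant_mg_kg, metal_in_soil_mg_kg):
    """BAF: metal concentration in plant tissue relative to the soil."""
    return metal_in_plant_mg_kg / metal_in_soil_mg_kg

def translocation_factor(metal_in_shoot_mg_kg, metal_in_root_mg_kg):
    """TF: shoot-to-root concentration ratio; TF > 1 suggests efficient
    root-to-shoot transport, the trait sought in phytoextraction."""
    return metal_in_shoot_mg_kg / metal_in_root_mg_kg

# Hypothetical example values, not measurements from the study:
tf = translocation_factor(120.0, 80.0)    # 1.5: shoots accumulate more Cu
baf = bioaccumulation_factor(50.0, 250.0)  # 0.2 at the 250 ppm treatment
```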
|
People sit in the back of a truck as they celebrate what they said was the liberation of villages from Islamist rebels near the city of Ras al-Ain, Syria, Nov. 6, 2013. (photo by REUTERS/Stringer)
Ankara looks beyond Raqqa offensive for fate of northern Syria
Author: Metin Gurcan
Posted June 2, 2017
Prior to the May 16 meeting between US President Donald Trump and Turkish President Recep Tayyip Erdogan in Washington, Turkey had sought for a year to pressure Washington to make a final decision on whether it would cooperate with Turkey in northern Syria or whether it would opt to ally with the People’s Protection Units (YPG).
Of course, Turkey had hoped the United States would prefer to cooperate with its NATO ally rather than the YPG — a sub-state actor and the military wing of the Democratic Union Party (PYD), which is affiliated with the Kurdistan Workers Party (PKK). The United States, however, adamantly maintained over the past year that it did not have to make such a choice, and Ankara could not devise a new road map that would persuade the United States to ally with Turkey east of the Euphrates. Now, the United States feels it can preserve its relations with the YPG — even elevating such relations with the upcoming Raqqa offensive — while keeping Ankara at bay. This may not be as easy as Washington seems to think.
Last week, I had a series of meetings in Ankara with government and security officials on behalf of Al-Monitor, and I was able to gather the following impressions. Both the government and the security bureaucracy in Ankara see the Islamic State (IS) as an internal and border security issue, yet they consider the PYD a political structure that could rapidly transform into a state. They see its military wing, the YPG, to be on the verge of becoming a standing army with conventional capabilities, thus making it an existential and imminent security threat to Turkey. In short, when it comes to northern Syria, Ankara is preoccupied with the YPG threat, not the IS threat.
This may explain the statement issued by the National Security Council following their May 31 meeting: “The meeting emphasized that the policy of support of the US for PKK/PYD/YPG terror organizations that operate under the guise of Syrian Democratic Forces in contravention of Turkey’s expectations is not compatible with friendship and being allies.”
That same day, Foreign Minister Mevlut Cavusoglu issued what sounded like a last-minute warning and called on the United States to refrain from arming the YPG. By saying that such a move would be tantamount to threatening Syrian territorial integrity, Cavusoglu insinuated that the YPG and PKK are seeking to establish a Kurdish zone in Syria.
Another important impression Al-Monitor obtained from its Ankara contacts was Turkey's shift in geographical focus in Syria. It is understood that Ankara — with the anticipated Raqqa offensive — has shifted its attention to the Kobani canton. A result of this shift is a renewed intensity of Turkish military moves on the Suruc-Akcakale front line that faces the Kobani canton. The Turkish military continues to intensify its moves opposite the Kobani canton, indicating that Ankara sees the PYD’s presence — not its actions on the ground — as an existential threat.
Ankara is debating three different approaches to combating the YPG:
1. To prevent the creation of a Kurdish (or PKK) corridor, Turkey would, without delay, launch an operation east of the Euphrates (in the same vein as the concluded Operation Euphrates Shield) to control the Akcakale-Raqqa road, despite US objections. This means that the Turkish army would take over the Kobani canton, thus ensuring the collapse of the Raqqa offensive that the United States has tried to develop for two years. This approach has many supporters in Ankara.
2. Turkey would remain, for the time being, on strategic silence mode regarding an intervention in Syria and would await the outcome of the Raqqa offensive. This approach takes into consideration the losses the Iraqi army sustained in its Mosul operation against IS. The idea is to wait for the 50,000-strong YPG to sustain losses at Raqqa so Turkey can then launch a second Operation Euphrates Shield. In this approach, it is important to accurately predict how long and how strongly IS will defend Raqqa. Should IS mount a serious defense in Raqqa for 10-12 months — as it has in Mosul — this would increase YPG casualties and serve Ankara's interests.
3. The Raqqa offensive continues to shape strategies in the field and in politics. The political strategy should be to instigate a bottom-up insurgency within the PYD to divide it and thus compel it to cooperate with the Kurdish National Council (ENKS) in Syria, which operates in northern Syria under Massoud Barzani's tutelage. On the ground, the strategy would be to dilute the pro-PKK sentiments in the Syrian Democratic Forces (SDF). One way to dilute pro-PKK sentiment would be to insert Barzani peshmerga who are close to Turkey — as was done during the 2014 IS-Kurdish clashes — and also to increase the Sunni Arab footprint in the SDF.
Most likely, Turkey will opt for a combination of the second and third approaches.
The grim reality is that untangling the Turkey-PYD-US conundrum east of the Euphrates will determine the settings that come after the offensive against IS and Raqqa. As the IS threat diminishes after the Raqqa offensive, it's critical to keep an eye on how the PYD and the YPG adapt to the emerging political and military scenes. Turkey will have to carefully monitor how the PYD develops its relations with local and international allies. In the post-offensive setting, we will see if the PKK’s and PYD's perceptions of their strength in northern Syria are realistic. The dominant understanding in Ankara is that the PYD doesn’t have structured relations with the United States and Russia, and that those countries will abandon the PYD to its fate in northern Syria. Ankara expects that one day there will be problems between the YPG and the United States.
Ankara also hopes that a power struggle will eventually erupt between the PYD-controlled Rojava and the PKK’s strategic command in the Qandil Mountains in northern Iraq. Ankara is aware that these two bodies appear to be monolithic at the moment since they have a common goal. After the Raqqa offensive, their goals and strategies will diverge. For example, although the PKK is a violent nonstate actor, Rojava is rapidly moving toward statehood: The PYD rules a territory that it has to take care of daily, and for this, it must set up a government to control the territory and to provide services to the population. The PYD’s only window to the outside world is the Turkish border. Ankara seems to be aware of the PYD's dependence, hence Ankara’s hope for a radical change in the hierarchical relationship between Qandil and Rojava. Many wonder if one day this expectation will distance the PYD from the PKK and open the way toward a “PYD normalization,” if only at low levels. Can this normalization be transformed into the PYD cooperating more strongly with ENKS? If, following the Raqqa operation, the PYD decides to forge closer relations with ENKS and the two agree to share the governance of Rojava, how will Ankara respond?
After the Raqqa operation, the PYD will have to cope with the challenges and problems that stem from coming out from under PKK tutelage and transitioning from an organization designed to combat IS. In turn, Ankara will have to develop policies and visions with a new regional power instead of a mere militant force.
At the end of the day, the major issue is: A Turkish government that doesn't have well-planned PYD policies will militarize that policy in its view of the PYD as an existential threat. Many in Ankara now insist that the PYD presence east of the Euphrates is a grave threat that must be eradicated. This mindset means new crises to manage for the United States after the Raqqa operation.
|
// AOJ 0548 Reindeer with no sense of direction
// 2018.2.23 bal4u
#include <stdio.h>
int w, h, cnt, ans;	// board size, number of reindeer, number of complete tours
char map[13][13];	// 0: empty, 1: reindeer, 2: start square
int mv[4][2] = {{-1,0},{0,1},{1,0},{0,-1}};	// up, right, down, left
//#define getchar_unlocked() getchar()
// Fast reader for nonnegative integers (stops at the first non-digit)
int in()
{
	int n = 0;
	int c = getchar_unlocked();
	do n = (n<<3)+(n<<1) + (c & 0xf), c = getchar_unlocked();
	while (c >= '0');
	return n;
}
// Depth-first search: from (r,c), slide in each of the 4 directions
// until hitting a reindeer (1) -- pick it up and recurse -- or the
// start square (2) -- a complete tour if all reindeer were collected.
void dfs(int r, int c, int s)
{
	int i, nr, nc;
	for (i = 0; i < 4; i++) {
		nr = r, nc = c;
		while (1) {
			nr += mv[i][0], nc += mv[i][1];
			if (nr < 0 || nr >= h || nc < 0 || nc >= w) break;	// slid off the board
			if (map[nr][nc] == 1) {		// reindeer: stop here and pick it up
				map[nr][nc] = -1;
				dfs(nr, nc, s+1);
				map[nr][nc] = 1;	// put it back when backtracking
				break;
			} else if (map[nr][nc] == 2) {	// back at the start square
				if (s == cnt) { ans++; return; }
			}	// otherwise keep sliding
		}
	}
}
int main()
{
int k, r, c, sr, sc;
while (w = in()) {
h = in();
cnt = 0;
for (r = 0; r < h; r++) for (c = 0; c < w; c++) {
map[r][c] = k = in();
if (k == 1) cnt++;
else if (k == 2) sr = r, sc = c;
}
ans = 0;
dfs(sr, sc, 0);
printf("%d\n", ans);
}
return 0;
}
|
Research on the Creative Techniques of "Reality Show" TV Programs Taking the Construction of "Manliness" in "Fighting Men" as an Example In recent years, with the steadily increasing ratings of reality shows, reality shows with male as the main shooting object are particularly popular as one of the many genres. Although the shooting methods and themes of the programmes are different, the "manliness" of the guests is the core of this type of reality show. Therefore, how to construct "manliness" in the programme has become the primary problem to be solved by this type of programme. This article uses close reading of the text to summarize the construction of "manliness" in the reality show "Fighting Men" (《我们战斗吧》) from three aspects: the external manifestation of male characteristics, the intuitive embodiment of moral cultivation, and the self-expression in teamwork, thereby revealing the common characteristics of this kind of reality show. INTRODUCTION In recent years, with the steadily increasing ratings of reality shows, reality shows with male as the main shooting object are particularly popular as one of the many genres. "Fighting Men" is a male outdoor experiential reality show jointly produced by Jiangsu TV and Purity Media. The production team of "Fighting Men" uses top foreign shooting teams to use film shooting techniques to make every programme a "Hollywood"-style "blockbuster". In terms of programme content, the production team invites Jing Boran, Wang Kai, Jam Hsiao, Jackson Wang, Yang Shuo and Bai Jingting; these male stars born in the 80s and 90s form the "team of male god". By completing a task assigned by mysterious men, they achieve self-transcendence and transformation and eventually grow into the true "male god". Here, regardless of the "team of male god" or "male god", popular speaking, it is an honorific name for outstanding male. In the show, the "team of male god" mainly refers to the team composed of these stars. 
The ultimate goal they pursue is to become a "male god"; that is, from an individual perspective, they need to set goals for themselves and also hope to be seen by the audience as reaching a higher-level goal beyond the aura of stardom. If the "team of male god" is just a kind of "representation" of stardom, then "the male god in the real sense" is the "innerness" that these stars want to show, that is, their "manliness". When it comes to the theory of "manliness", American scholar Harvey Claflin Mansfield, Jr. said: "Being confident in the face of danger is made up of many qualities that are considered to belong to men. Some qualities apply to all men, while others belong only to certain men, namely men with manliness. They are considered to be more or less men-specific, although not every one of them has the same degree. These elements of manliness make it exclusive to men". French scholar Bourdieu believes that "Manliness is understood not only as reproductive, sexual and social ability, but also as the ability to fight or abuse (especially in revenge); however, manliness is above all an honor to be defended or lost, and their morality is, in order, chastity and fidelity. And 'truly masculine' men will do their best to expand their honor and win honor and respect in the public sphere". "The Hite Report On Male Sexuality" uses the research method of the social survey to investigate "manliness" and finds that most men believe that "Manliness means strength, not overly emotional, but decisive". They also believe that the "manliness" of men should be established on the basis of being accepted by male groups, that is, "Manliness is the way men intersect with each other, which completely excludes women. It is measured by the respect that other men have towards you, identified and compared at the ego level".
They jointly acknowledge that "manliness" is first of all male-specific and has obvious male characteristics; secondly, they hold that "manliness" is a kind of "honor", manifested in the self-awareness of its bearer, that is, the inner "virtue" of men; finally, they believe that for men there is nothing more "masculine" than being recognized within a male group. Therefore, when defining "manliness", special attention must be paid to men's external characteristics, moral cultivation, and performance within male groups. This makes it easy to see why reality shows with male groups as the main shooting object are so keen to show the spiritual qualities of male guests in the face of difficulties. "Fighting Men" is such a programme, "selling" the "manliness" of its guests to the audience. In the concrete construction of "manliness", the programme group has prepared and thought carefully in terms of both form and content, directly marking the English name "Fighting Men" (namely, the men who are fighting) under the programme's title, "我们战斗吧", showing that the core of this programme is to display the masculine charm of the guests.
In these limited scenes, with the voiceover of the show, the audience can intuitively tell the looks and specialties of the guests. This two-minute narrative paragraph is divided into three groups of shots that show the personal charm of the guests in turn. In the first set of shot editing, small panoramas and close shots are mainly used to tell the audience who the guests participating in the programme are, on what occasion they appear, and what they are preparing to do; in the second set, medium close shots and close-ups are mainly used to show what the guests are doing; in the third set, techniques such as the low-angle shot, the high-angle shot, and the lap dissolve are mainly applied to show that the guests have successfully completed what they set out to do. In the connection of these three sets of scenes, the audience sees Jing Boran with a handsome face and agile skills, Jackson Wang, who practices fencing and is young and energetic, Yang Shuo, who is burly and loves fighting, as well as Jam Hsiao, who has a persistent pursuit of music... It can be said that in the first two minutes of the show, the production team has established the personal halo and honors of these guests, and hinted at their unique masculine charm. In order to complete more challenging tasks, the best of these male groups gather together and work together to achieve new glory. In the task setting of each episode, the programme group continuously arranges challenges of varying difficulty for the participants within a running time of less than 80 minutes, intuitively presenting the audience with the audio-visual enjoyment of the "real people" in the reality show. This setting is conducive to capturing the participating guests' first reaction when encountering difficulties, increasing the credibility of the programme and the affinity of the guests, and demonstrating their unusual physical prowess and volitional qualities. 
This kind of setting makes the show focus more on the difficulty of the actions performed and the completion of the task, while neglecting the emotional exchanges and inner expressions between the guest members. However, because the major premise of the programme is to apply the shooting method of the "Hollywood blockbuster", which in the usual sense itself focuses on fast editing and audiovisual effects, the collective confession of the "team of male god" added in the last episode of the programme supplements the previously missing emotional experience. In addition, because the story of the show is set against an "overhead" historical background, similar to the sci-fi style of "Superman" saving the earth, in the posters the appearance of each guest is very "tough". Yet in terms of the guests' external characteristics, the only one who is really "tough" is Yang Shuo; Bai Jingting and Jackson Wang have a more or less youthful air, Jing Boran and Jam Hsiao follow the "sunshine-boy style", and Wang Kai represents the "intellectual style". These styles basically cover the most popular idol types currently on the market. As Jing Boran said in the show, the idol image he had built so painstakingly disappeared instantly. However, in the show, the audience sees another, extraordinary Jing Boran: the one who keeps running, the one who bravely climbs despite the heights, and the one who helps his teammates complete their tasks... While the other members complete tasks one by one, they show the audience a "self" that is different from their previous image, and this new "self" is the "masculine" man that the programme group wants to portray: "a man who prefers action to reflection". 
THE INTUITIVE EMBODIMENT OF MORAL CULTIVATION Since "Fighting Men" is a reality show that "sells" the "manliness" of its participants, in addition to showing the "external beauty" of the male guests, reflecting their "inner beauty" is also a purpose of the programme group. Due to the limitations of the shooting techniques, each episode has only a short length, beyond the routine tasks, in which to show the personality characteristics of the male guests. The detailed portrayal of the guests' moral cultivation serves as another aspect of the performance of their "manliness". In the specific display, the production team uses "mutual assistance" and "interaction" to show the moral cultivation and inner temperament of these six male guests. In terms of mutual assistance, the production team often sets rescue tasks when planning the missions, showing the spiritual qualities of the guests through the rescue of key characters or of a fellow member. Across these 13 episodes, a total of five rescue missions are set up, and in each one the personal qualities of the participants are vividly portrayed. For example, in the mission of "Rescue Dr. Jin" in the second episode, when facing the challenge of entering the shark pond, although Yang Shuo, Jam Hsiao and Bai Jingting all show apprehension and evasiveness toward the mission, Yang Shuo takes the place of the nervous Bai Jingting when the decision is made as to who will finally enter the pond. In Yang Shuo's words, "In fact, I think Xiaobai (Bai Jingting) is very kind, because when he heard that I had a heart attack, I actually knew he was scared, but he basically made his decision. To be honest, I am also scared. Wish me good luck". When he says this, the subtitles in front of the shot read "If you are a big brother, you should come forward". 
It can be said that this plot not only shows Yang Shuo's sense of responsibility as the eldest brother in the face of a dangerous task, but also reflects Bai Jingting's respect and care for his elder brother as a junior. The inner temperament of the two is clear at a glance. Such detailed portrayal vividly demonstrates the exemplary conduct and nobility of character of the guests, and also highlights the national character of "respecting the old and cherishing the young" in traditional Chinese culture. If the mutual assistance between the male guests shows more of a man's sense of responsibility and commitment, then the interaction between the male guests and the fans and ordinary people participating in the recording creates more of an image of ordinary men free from the burden of "stardom". This kind of interaction is more conducive to showing the approachable side of the stars, makes the show look more real, and also confirms the original intention of the experiential reality show: to "allow programme participants to get an experience different from their own daily lives, and record and present the experience process". From the perspective of the audience, the participation of fans and ordinary people brings the programme closer to life, shortens the distance between the participating guests and ordinary people, and shows the guests' interpersonal skills and personal conduct from different angles. This kind of interactive activity is especially obvious when the "team of male god" performs the task of selling things together. Judging from the facial expressions of these guests, they think that this business of collecting money from fans is very "unseemly", but they must complete the task. 
So the audience can see Jackson Wang with a solemn face when performing this task, Wang Kai, who repeatedly asks the fans their age to gauge whether they can afford to spend, and Jam Hsiao, who grumbles more or less and sells things at very low prices, and so on. Although this kind of buying-and-selling interaction is undesirable, for the fans it amounts to a face-to-face communication opportunity with their idols; the price of the goods is therefore not important to them. For the guests, however, charging their fans is in itself an immoral behavior, which undermines their commitment as men. What is interesting is that viewers sitting in front of the TV or watching the show on the Internet see the very "manly" side of these male guests and like these stars even more. And this effect is deliberately created by the programme group, using such detailed display as a foil to show the manliness of the male guests: being kind-hearted and daring to take responsibility. THE SELF-EXPRESSION IN TEAMWORK In "The Hite Report On Male Sexuality", the survey of "manliness" found that most men think that men with "manliness" should participate in men's activities, become part of the group and be accepted by other men. Participating in the activities as a team is more conducive to showing the guests' personal masculinity. In the programme "Fighting Men", the production team mainly uses confrontation and competition to present the "manliness" of the guests within the team. In showing confrontation, the programme group mainly sets up barriers for the "team of male god" to highlight the collective sense of honor of the participants. Because "honor combines a private environment with public beliefs, those who desire honor feel that they have the right to behave in a certain way. 
By claiming honor, they surpass the kind of mindless and unreasonable aggressiveness". Within the team, fighting against external pressure and resistance becomes the primary problem for the "team of male god" to solve. This is reflected in their team song: "Let's fight. It's the same for anyone who does tasks. I am not afraid of difficulties. People who feel afraid can only be afraid. Let's fight. Just bite the bullet and keep fighting. Let the world see that we're prepared to fight the crisis. Let's fight. The mission is on my shoulders, and I will bear it when the sky falls down. A man should be exposed to wind and rain". Regardless of "wind and rain" or "the skies falling and the earth opening up", in the face of a test, as long as one is a man, he should stand in the forefront. Therefore, they work hard in the collective tasks. For example, in the graduation assessment of "Collecting the Six Elements" in the twelfth episode, the "team of male god" cooperates by drawing on each other's strengths and finally passes the test to obtain the qualification to attend the graduation ceremony. In addition, in the shared task of catching the book thief in the fourth episode, it can be seen that, in a unified external battle, the "team of male god" apply their wisdom and conviction, give their best effort and finally succeed. In order to highlight the spirit of solidarity and cooperation of the "team of male god", the programme group arranges strong opponents for them. These opponents surpass them in both mind and physical strength, but in the chase the team members draw on their collective strength, successfully catch the book thief in the encirclement, and finally obtain the task clues. In this teamwork, different participants show different strengths. 
Jackson Wang, Bai Jingting, and Jam Hsiao stand out for their agility and physical skills; Wang Kai, Jing Boran, and Yang Shuo show their witty side. It can be seen that teamwork more readily produces an individual male's sense of honor and mission. In showing competition, the programme group uses the internal grouping of the "team of male god" to display the personal characteristics and masculine traits of the members. Internal competition makes it easier for the audience to accept and agree with the "manliness" of a particular guest as he shows an individual's responsibilities and obligations. In this kind of internal competition, the programme group mainly has different male members partner up and compete against each other in two or three groups. There are also ways of showing the members' "manliness" through competition for bodyguards, as in the ninth episode. This way of affirming personal value to show the participants' "manliness" is more convincing, because "Manliness is a declaration of a person's value, because his value is not self-evident. Similarly, because the value needs to be declared, it also needs to be proved. After the declaration, one must fulfill his promise". When accepting this kind of competitive task, the audience will find that the male guests hold different attitudes toward the sense of honor and pursue it in different forms, such as Jam Hsiao's fixation on winning and losing, Jing Boran's belief in "being garrulous", Jackson Wang's "being funny" and so on. Although the reality show does not rule out traces of performance, in the competition with teammates it is possible to see the participants' true emotional expression and the degree to which they care about winning or losing, because aggressiveness is also an essential element in forming "manliness". 
In this way, it is not difficult to explain why confrontations between teams and between individuals are more exciting than collective confrontations, since the audience can see a frantic Jam Hsiao who is recognized by his opponents, a Jackson Wang who tries to express his opinions, a Jing Boran who is confident yet self-conceited and sophisticated, and a Wang Kai who loves nagging and fails to control his laughter... In the competition between people, the guests' nature is brought out by the situation they are in. Wang Kai said in an interview: "Reality shows are more difficult than acting, because in acting you do something with a purpose; you know what kind of role the character is, you know what kind of scene you are playing, and you go in with a mission and know how to act in a play. But in a reality show, there is no character setting or screenplay for you. Everything is based on what you hear, see, and think, and then you reflect it truthfully. Now that I'm here, I shall show everyone my most real side". Therefore, in this environment where it is necessary to prove whose ability is more outstanding, what the participants actually show is their desire and expectation of victory and their wish to gain more recognition, that is, the self-expression of manliness. CONCLUSION With the booming Chinese reality TV market, reality shows with male guests as the main performers occupy a large share of the media market. Most of these shows focus on experience, showing the true reactions and personal abilities of male guests in the face of specific situations in social life. Thus the audience sees the "situation setting" in "Go Fighting!", the "teenage years" in "Back To Youth" and the "heroic dream" in "Fighting Men"... These shows continuously convey the "manliness" of the guests to the audience through different performance content. 
However, the core of their construction is inseparable from the presentation of the male guests' appearance, the description of their inner world, and their performance in team and individual competition. For reality shows, this method not only suits the production cycle of the filming team and the effects of the shows, but also helps to highlight the personal charm of the male guests to the greatest extent. However, precisely because too many shows are set up and arranged in this way, with the production teams' only innovation being a change of filming theme, the audience will inevitably suffer aesthetic fatigue and the shows' appeal will decline. Therefore, in addition to highlighting the "manliness" of the guests, production teams must enrich the connotation of "manliness" and add their own understanding and recognition of it to make China's reality shows more localized. AUTHORS' CONTRIBUTIONS This paper is independently completed by Ningning Wang.
|
A line-diamond parallel search algorithm for block motion estimation The widespread use of block matching motion estimation (BMME) in video coding is due to its effectiveness and simplicity of implementation. This paper presents a novel fast BMME algorithm called the line-diamond parallel search (LDPS). The algorithm is based on the following two properties: the special directionality of the SAD distribution and the characteristics of the center-biased motion vector distribution. In addition, to increase the search speed, the idea of parallel processing is used in LDPS; that is, LDPS performs the coarse orientation and the accurate search in the same step. Our experimental results show that the LDPS algorithm is not only much faster than other fast algorithms, but also nearly matches the motion-compensation accuracy of full search (FS).
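The abstract does not spell out LDPS's exact line-diamond pattern, so the sketch below only illustrates the generic building blocks such algorithms share: the SAD cost function and a center-biased small-diamond refinement step. The function names, the 4×4 block size, and the search logic are my own illustrative choices, not taken from the paper.

```python
# Illustrative sketch of the building blocks of fast block-matching search
# (not the LDPS algorithm itself, whose exact pattern the abstract omits).

def sad(cur, ref, bx, by, dx, dy, n=4):
    """Sum of absolute differences between the n x n block of `cur` at
    (bx, by) and the block of `ref` displaced by the vector (dx, dy)."""
    return sum(
        abs(cur[by + y][bx + x] - ref[by + y + dy][bx + x + dx])
        for y in range(n) for x in range(n)
    )

def small_diamond_search(cur, ref, bx, by, n=4, max_steps=8):
    """Center-biased refinement: repeatedly move to the lowest-SAD point of
    the small diamond {(0,0), (+-1,0), (0,+-1)} until the center wins."""
    dx = dy = 0
    h, w = len(ref), len(ref[0])
    for _ in range(max_steps):
        best = (sad(cur, ref, bx, by, dx, dy, n), dx, dy)
        for ox, oy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ndx, ndy = dx + ox, dy + oy
            # skip candidates whose reference block falls outside the frame
            if not (0 <= bx + ndx <= w - n and 0 <= by + ndy <= h - n):
                continue
            best = min(best, (sad(cur, ref, bx, by, ndx, ndy, n), ndx, ndy))
        if (best[1], best[2]) == (dx, dy):
            break  # the diamond center is the minimum: stop refining
        dx, dy = best[1], best[2]
    return dx, dy
```

Because the motion vector distribution is center-biased, a search that starts at (0, 0) and expands only while the cost keeps dropping usually needs very few SAD evaluations compared with full search.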
|
The adsorption of a typical biogenic toxin, aflatoxin B1, on montmorillonite modified by low-molecular-weight humic acids (M(r) < 3 500) was investigated. The montmorillonite rapidly adsorbed aflatoxin B1 until reaching its maximum capacity; the adsorbed aflatoxin B1 then slowly released into solution, and the sorption equilibrium state was reached after 12 h. The sorption isotherm of aflatoxin B1 on montmorillonite was well described by the Langmuir model, while the sorption isotherm on the humic acid-modified montmorillonite was well fitted by the Freundlich model. Modification of the montmorillonite with humic acids obviously enhanced its adsorption capacity for aflatoxin B1, and the amounts of aflatoxin adsorbed by the modified montmorillonite were obviously higher than those adsorbed by the unmodified montmorillonite. The sorption enhancement by humic acid modification was attributed to the enlarged adsorption sites resulting from the surface collapse of crystal layers induced by the organic acids, and to the binding of aflatoxin with the humic acid sorbed on the mineral surface. In addition, the adsorption amounts of aflatoxin by montmorillonite and modified montmorillonite increased with increasing pH values in solution, and the enhancement was more significant for the latter than for the former, which was attributed to the release of humic acids from the modified montmorillonite at high pH values. This indicates that increasing the pH values enhanced the hydrophilic property and released the organic acids present in the modified montmorillonite, making more sorption sites available for aflatoxin on the modified montmorillonite. Results of this work will strengthen our understanding of the behavior and fate of biological contaminants in the environment.
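The two isotherm models named above have standard closed forms; a minimal sketch follows. The parameter values used in the usage note are illustrative placeholders, not fitted values from this study.

```python
# Standard forms of the two sorption isotherms named in the abstract.
# q = sorbed amount, c = equilibrium solution concentration.

def langmuir(c, q_max, k):
    """Langmuir isotherm: q = q_max * K * c / (1 + K * c).
    Saturates at q_max as c grows (monolayer coverage assumption)."""
    return q_max * k * c / (1.0 + k * c)

def freundlich(c, k_f, n):
    """Freundlich isotherm: q = K_f * c**(1/n).
    Empirical power law; does not saturate (heterogeneous surface)."""
    return k_f * c ** (1.0 / n)
```

For example, with placeholder parameters `q_max = 10, K = 1`, the Langmuir model gives half-saturation (`q = 5`) at `c = 1`, while a Freundlich curve with `n > 1` keeps rising sublinearly; fitting which form describes the data better is how one distinguishes the two sorbents, as done in the abstract.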
|
/**
* Initiate the recovery flow for the user with matching claims.
*
* @param claims User claims
* @param tenantDomain Tenant domain
* @param recoveryScenario Recovery scenario
* @param properties Meta properties
* @return RecoveryChannelInfoDTO object
* @throws IdentityRecoveryException if no user matches the given claims or the recovery information cannot be built
*/
public RecoveryChannelInfoDTO retrieveUserRecoveryInformation(Map<String, String> claims, String tenantDomain,
RecoveryScenarios recoveryScenario,
Map<String, String> properties)
throws IdentityRecoveryException {
String username = getUsernameByClaims(claims, tenantDomain);
if (StringUtils.isNotEmpty(username)) {
checkAccountLockedStatus(buildUser(username, tenantDomain));
List<NotificationChannel> notificationChannels;
boolean isNotificationsInternallyManaged = Utils.isNotificationsInternallyManaged(tenantDomain, properties);
/* If the notification is internally managed, then notification channels available for the user needs to
be retrieved. If external notifications are enabled, external channel list should be returned.*/
if (isNotificationsInternallyManaged) {
notificationChannels = getInternalNotificationChannelList(username, tenantDomain);
} else {
notificationChannels = getExternalNotificationChannelList();
}
String recoveryCode = UUIDGenerator.generateUUID();
return buildUserRecoveryInformationResponseDTO(username, recoveryCode,
getNotificationChannelsResponseDTOList(username, recoveryCode, tenantDomain, notificationChannels,
recoveryScenario));
} else {
if (log.isDebugEnabled()) {
log.debug("No valid user found for the given claims");
}
throw Utils.handleClientException(IdentityRecoveryConstants.ErrorMessages.ERROR_CODE_NO_USER_FOUND, null);
}
}
|
Insight into the Temperature Evolution of Electronic Structure and Mechanism of Exchange Interaction in EuS. Discovered in 1962, the divalent ferromagnetic semiconductor EuS (TC = 16.5 K, Eg = 1.65 eV) has remained constantly relevant to the engineering of novel magnetically active interfaces, heterostructures, and multilayer sequences and to combination with topological materials. Because detailed information on the electronic structure of EuS and, in particular, its evolution across TC is not well-represented in the literature but is essential for the development of new functional systems, the present work aims at filling this gap. Our angle-resolved photoemission measurements complemented with first-principles calculations demonstrate how the electronic structure of EuS evolves across a paramagnetic-ferromagnetic transition. Our results emphasize the importance of the strong Eu 4f-S 3p mixing for exchange-magnetic splittings of the sulfur-derived bands as well as coupling between f and d orbitals of neighboring Eu atoms to derive the value of TC accurately. The 4f-3p mixing facilitates the coupling between 4f and 5d orbitals of neighboring Eu atoms, which mainly governs the exchange interaction in EuS.
|
package com.ruoyi.project.module.award.controller;
import java.util.List;
import org.apache.shiro.authz.annotation.RequiresPermissions;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Controller;
import org.springframework.ui.ModelMap;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.ResponseBody;
import com.ruoyi.framework.aspectj.lang.annotation.Log;
import com.ruoyi.framework.aspectj.lang.enums.BusinessType;
import com.ruoyi.project.module.award.domain.Award;
import com.ruoyi.project.module.award.service.IAwardService;
import com.ruoyi.framework.web.controller.BaseController;
import com.ruoyi.framework.web.page.TableDataInfo;
import com.ruoyi.framework.web.domain.AjaxResult;
/**
* 奖项 信息操作处理
*
* @author snailever
* @date 2018-10-11
*/
@Controller
@RequestMapping("/module/award")
public class AwardController extends BaseController
{
private String prefix = "module/award";
@Autowired
private IAwardService awardService;
@RequiresPermissions("module:award:view")
@GetMapping()
public String award()
{
return prefix + "/award";
}
/**
* 查询奖项列表
*/
@RequiresPermissions("module:award:list")
@PostMapping("/list")
@ResponseBody
public TableDataInfo list(Award award)
{
startPage();
List<Award> list = awardService.selectAwardList(award);
return getDataTable(list);
}
/**
* 新增奖项
*/
@GetMapping("/add")
public String add()
{
return prefix + "/add";
}
/**
* 新增保存奖项
*/
@RequiresPermissions("module:award:add")
@Log(title = "奖项", businessType = BusinessType.INSERT)
@PostMapping("/add")
@ResponseBody
public AjaxResult addSave(Award award)
{
return toAjax(awardService.insertAward(award));
}
/**
* 修改奖项
*/
@GetMapping("/edit/{id}")
public String edit(@PathVariable("id") Integer id, ModelMap mmap)
{
Award award = awardService.selectAwardById(id);
mmap.put("award", award);
return prefix + "/edit";
}
/**
* 修改保存奖项
*/
@RequiresPermissions("module:award:edit")
@Log(title = "奖项", businessType = BusinessType.UPDATE)
@PostMapping("/edit")
@ResponseBody
public AjaxResult editSave(Award award)
{
return toAjax(awardService.updateAward(award));
}
/**
* 删除奖项
*/
@RequiresPermissions("module:award:remove")
@Log(title = "奖项", businessType = BusinessType.DELETE)
@PostMapping("/remove")
@ResponseBody
public AjaxResult remove(String ids)
{
return toAjax(awardService.deleteAwardByIds(ids));
}
}
|
package com.hotfixs.frameworks.hibernatevalidator.beanvalidation.propertylevel;
import static org.junit.Assert.assertEquals;
import org.junit.BeforeClass;
import org.junit.Test;
import java.util.Set;
import javax.validation.ConstraintViolation;
import javax.validation.Validation;
import javax.validation.Validator;
import javax.validation.ValidatorFactory;
/**
* @author wangjunwei
*/
public class CarTest {
private static Validator validator;
@BeforeClass
public static void setUpValidator() {
ValidatorFactory factory = Validation.buildDefaultValidatorFactory();
validator = factory.getValidator();
}
@Test
public void validate() {
Car car = new Car(null, true);
Set<ConstraintViolation<Car>> constraintViolations = validator.validate(car);
assertEquals(1, constraintViolations.size());
assertEquals("may not be null", constraintViolations.iterator().next().getMessage());
}
@Test
public void validateProperty() {
Car car = new Car(null, true);
Set<ConstraintViolation<Car>> constraintViolations = validator.validateProperty(car, "manufacturer");
assertEquals(1, constraintViolations.size());
assertEquals("may not be null", constraintViolations.iterator().next().getMessage());
}
@Test
public void validateValue() {
Set<ConstraintViolation<Car>> constraintViolations = validator.validateValue(Car.class, "manufacturer", null);
assertEquals(1, constraintViolations.size());
assertEquals("may not be null", constraintViolations.iterator().next().getMessage());
}
}
|
A study on organizational factors that influence job stress among medical laboratory technologists in Klang Valley hospitals. A cross-sectional study on organizational factors that influence job stress was carried out among Medical Laboratory Technologists (MLTs) in Klang Valley hospitals. Three organizational factors were measured: interpersonal factors, job conditions and career development. A total of 249 respondents participated in this study, 126 from private hospitals and 123 from government hospitals. The prevalence of stress was found to be higher in the private hospitals (16.7%) than in the government hospitals (15.4%). All three organizational factors were significantly associated with job stress (interpersonal factor p < 0.001, job condition p < 0.001 and career development p < 0.001). Management teams in hospitals, as well as laboratory managers, should introduce stress prevention programmes to assist MLTs in stress management.
|
/**
* Invoked when a JOIN operation message is received from a user (client)
* through the (Secure) Multicast Chat's Session.
*/
public void secureMulticastChatParticipantJoined(String userUsername, InetAddress userINETAddress, int port) {
this.textMessageLog("A NEW PARTICIPANT JOINED:\n- " + userUsername
+ " has joined to the Multicast Chat's Group, from the following IP Address ["
+ userINETAddress.getHostName() + ":" + port + "]");
this.addUserToTheOnlineUsersList(userUsername);
}
|
ILION — An Ilion woman was charged with felony assault after stabbing a victim in an Ilion residence, the Ilion Police Department said.
At 12:43 a.m. Saturday, authorities responded to a reported stabbing at a residence on South Third Avenue. There, a 44-year-old victim was found with a stab wound in his upper left chest area, police said.
Tara S. Lyndaker, 46, of Ilion was then arrested and charged with felony second-degree assault, police said.
Police said the victim was transported to a local hospital, where he remains a patient.
Lyndaker was arraigned in Ilion village court and taken to Herkimer County jail, according to authorities. She is scheduled to reappear in village court at 10 a.m. on April 23, police said.
Assisting in the arrest were the Mohawk Police Department, the Village of Frankfort Police Department and the New York State Police.
The Herkimer County District Attorney’s Office and the Herkimer County Department of Child Protective Services also assisted in the investigation.
|
Chinese Midlife Women's Perceptions and Attitudes About Menopause Objective: The purpose of this research was to discover and describe the meaning of and attitudes toward menopause in midlife Chinese women in Taiwan. How these women learned about menopause was also explored. Design: Questionnaires were distributed to a convenience sample of 208 Chinese women aged 35 to 55 living in Taiwan; 168 responded. Qualitative data were analyzed using content analysis. Percentage and chi-square were used to examine the quantitative data. Results: The findings revealed that 154 (91.7%) women perceived menopause as a natural phenomenon. No statistically significant differences in attitude toward menopause were found between women grouped by different menopausal levels, by use or non-use of hormones, or by religious preference. Some women described menopause as "no longer young, getting old". Others described menopause as "wisdom and maturation", "a symbol of achievement", and "a time to start enjoying life". Sixty-eight (40.5%) of the sample indicated they obtained menopausal information from friends and printed materials such as books, newspapers, and magazines. Conclusions: Study findings indicate that Chinese women in Taiwan perceive menopause in a positive and holistic way. Culturally sensitized Western practitioners can utilize this study's findings to more appropriately individualize care for Chinese midlife women.
|
Kurihama Flower Park is not safe! Actually, it is. The giant Godzilla that towers above the play area doesn't breathe fire. Heck, it doesn't even move. But, the kaiju does have an unusual secret.
This nine meter tall, five ton Godzilla is actually playground equipment. The monster's tail doubles as a slide for children.
However, to enjoy this piece of playground equipment, children must climb stairs that lead directly into Godzilla's open crotch.
Memories of that Pikachu bouncy house come rushing back.
The reason why there's a huge Godzilla at this park in Kanagawa is because in the first Godzilla film, the beast emerged out of the ocean at a nearby beach, known as Kanonzaki.
To mark this, there was a Godzilla slide erected at the beach in 1958, which probably inspired other, far less impressive Godzilla slides on Japanese playgrounds throughout the country.
The slide at Kanonzaki fell into disrepair by the early 1970s. You can, however, still see "Godzilla's footprint" at the shore.
A new, far more impressive version of the slide (top photos) was built at nearby Kurihama Flower Park in 1999. It still stands today.
Toho Studios actually oversaw the slide's construction, meaning that Toho signed off on the Godzilla crotch entry, making it official. That's incredible.
It also means that giant Godzilla pre-dates the more recent "giant statue" trend.
Besides giant Gundam, Japan has seen giant anime heroines, giant robots, and even a giant plastic model box in the past few years.
Bigger is better. Yes, even if you have to crawl into a monster crotch.
|
Evaluation of genetic diversity in some promising varieties of lentil using karyological characters and protein profiling Somatic chromosome study from root-tip cells using the squash technique of the cytological method and the seed protein profiles of 5 varieties of Lens culinaris (lentil) through SDS-PAGE were investigated. Karyotype analysis showed gross uniformity in morphology. The somatic chromosome number 2n = 14 is constant for all the varieties. Chromosomes are mostly long to medium in length, with secondary constrictions in one pair of chromosomes. Primary constrictions ranged from nearly median to nearly submedian in most cases. Notwithstanding the gross homogeneity, karyotype analysis revealed minute differences in detail. Each lentil variety is thus characterized by its own karyotype, which serves as one of the identifying criteria. The seed protein profiles revealed that the varieties are very close to each other, with similarity indices ranging from 0.594 to 0.690. With regard to seed protein banding patterns, slight polymorphism (14.285%), indicating low genetic diversity, was identified among the 5 varieties. A dendrogram indicates that one variety is plesiomorphic and the rest are apomorphic. All the experimental varieties of lentil studied here show low genetic diversity due to their similar genetic base, indigenous genetic resources and the conservative nature of the seed proteins. Introduction Lentil (Lens culinaris Medik.), belonging to the family Fabaceae, is considered one of the most ancient, domesticated, economically important winter legume crops cultivated worldwide as human food. The seeds of this plant are commonly used as an edible pulse. Lentils are valued for their high protein content (as much as 30%) and are a good source of vitamins and other important nutrients. 
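The pairwise similarity indices reported above (0.594 to 0.690) are computed from band-sharing data scored on the gel. The paper does not name the coefficient here, so the sketch below assumes the commonly used Nei & Li (Dice) band-sharing coefficient; this is an assumption, not a detail taken from the study.

```python
# Sketch of a band-sharing similarity index for SDS-PAGE profiles,
# assuming the Nei & Li (Dice) coefficient: S = 2 * n_AB / (n_A + n_B),
# where n_AB is the number of bands shared by varieties A and B.

def band_similarity(bands_a, bands_b):
    """Similarity between two varieties from sets of scored band positions."""
    shared = len(bands_a & bands_b)  # bands present in both profiles
    return 2.0 * shared / (len(bands_a) + len(bands_b))
```

For instance, two varieties sharing 3 of their 4 scored bands each would get a similarity of 0.75; identical profiles score 1.0 and fully distinct profiles score 0.0, so values in the 0.59 to 0.69 range indicate closely related but distinguishable varieties.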
Seed protein profiles obtained by gel electrophoresis have been successfully used not only to resolve taxonomic and evolutionary problems of several crop species but also to distinguish cultivars of a particular crop species. In particular, seed protein profiles produced by sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) have been successfully used for the identification or discrimination of various crop species, even at the varietal level. The technique is economical, simple and rapid, and is generally free from environmental effects compared with the traditional morphological and other descriptive criteria derived from field trials. Moreover, SDS-PAGE of seed proteins is an extensively used technique for describing and assessing seed protein diversity in the germplasm of many crops and is potentially a useful identifier and descriptor for the purposes of seed identification and Plant Variety Rights. Knowledge of the cytological and molecular relationships between plant species is very useful in planning effective breeding strategies designed to transfer desirable genes or gene clusters from one species into another, thereby producing fruitful genomic reconstructions and disease-free plants. Determining the genetic diversity of a given crop species is a suitable precursor to its improvement, because it generates baseline data to guide the selection of parental lines and the design of a breeding scheme. It is also a valuable way to gauge the closeness of the investigated genotypes (i.e., through the similarity index). The aim of the present study was to assess the genetic diversity of 5 different varieties of lentil using cytological characters and protein profiling. Study of somatic chromosome Somatic chromosomes were studied from root-tip cells.
Fresh healthy roots (the months of November and December are suitable; seeds germinate within a day), showing peak mitotic activity from 11 AM to 12 noon, were collected and washed in distilled water. For scattering and clarification of chromosome morphology, pretreatment of root tips with a mixture of a saturated solution of pDB and aesculine for 3-3.15 h at 12-14°C was found to be very effective for the different varieties of lentil. The root tips, placed in the pretreatment solution, were initially chilled at 0-5°C for 4-6 min and then kept at 12-14°C. For the sake of comparative karyological analysis, the same pretreatment chemical was used for all the varieties. Root tips were then carefully washed in distilled water, transferred to a suitable fixative such as glacial acetic acid and ethanol (1:3), and kept overnight. The materials were then kept in 45% acetic acid for 3-5 min, subsequently warmed over a flame in a 2% acetic-orcein:HCl (1 N) mixture (9:1) for 3-4 s, and finally kept for 2-3 h. Root tips were squashed in 45% acetic acid for microscopic observation. Karyomorphometrical analysis The total length as well as the short arm length of all the chromosomes of the 5 varieties of lentil were measured accurately. In all the karyotypes, the ratio of the short arm to the total length of the chromosome, expressed as a percentage, F% (form percentage or centromeric percentage), was determined after Krikorian et al. The centromeric index (F%), i.e. the position of the centromere of each chromosome, was calculated using the following formula: F% = (length of the short arm / whole length of the chromosome) × 100. The total centromeric index (TF%) was also determined in each taxon following Huziwara by the formula: TF% = (sum of the short arm lengths / sum of total chromosome lengths) × 100. The disparity index (DI%) of chromosomes in a karyotype was calculated according to Mohanty et al. During the preparation of karyotypes at least 4-5 well spread metaphase plates were compared and analyzed.
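The two centromeric indices defined above can be computed directly from measured arm lengths. A minimal sketch in Python, using hypothetical arm lengths in micrometres (the paper's values come from measured metaphase plates, not from this data):

```python
# Hypothetical (short arm, long arm) lengths for the seven chromosome pairs
# of a 2n = 14 complement; real values come from metaphase measurements.
chromosomes = [
    (2.1, 4.4), (2.0, 4.1), (1.8, 3.2), (1.7, 3.0),
    (1.5, 2.9), (1.4, 2.6), (1.2, 2.3),
]

def centromeric_index(short_arm, long_arm):
    """F% = (length of short arm / whole chromosome length) x 100."""
    return short_arm / (short_arm + long_arm) * 100

def total_centromeric_index(chroms):
    """TF% = (sum of short arm lengths / sum of total lengths) x 100."""
    total_short = sum(s for s, _ in chroms)
    total_len = sum(s + l for s, l in chroms)
    return total_short / total_len * 100

f_values = [round(centromeric_index(s, l), 2) for s, l in chromosomes]
tf = round(total_centromeric_index(chromosomes), 2)
print(f_values, tf)
```

A metacentric chromosome (equal arms) gives F% = 50; progressively lower values correspond to submedian and subterminal centromeres, which is why the paper's TF% range of 33.23-35.19% signals an abundance of submetacentric chromosomes.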
Photomicrographs were taken from the well spread preparations with the help of an Olympus digital SLR camera and LM digital SLR adaptor fitted to an Olympus CX 41 microscope. Extraction of seed proteins 0.2 g of dry seed of each variety was taken in a pre-chilled pestle and mortar and homogenized in 2 ml of chilled 0.2 M phosphate buffer (pH 8.2). The extracts were centrifuged at 10,000 rpm for 15 min at 4°C. The extracted crude proteins were recovered as the supernatant, which was used for protein profiling. Protein concentration of the extracts was measured immediately and directly from the supernatant by the dye-binding assay described by Bradford. A standard curve of absorbance at 595 nm versus 10-80 µg of BSA was also drawn, and from this curve the amount of protein in each sample was calculated and finally expressed as mg per g of seed. The same experiment was repeated 3 times in order to check the reproducibility of the method. SDS-PAGE Just before starting electrophoresis, the supernatant was mixed (1:1) with 2X sample buffer and heated in a 1.5 ml Eppendorf tube in a water bath at 85°C for 3 min to denature the proteins. After that, the protein samples were subjected to one-dimensional SDS-PAGE in a gel slab of 1 mm thickness (4% stacking gel = 2.5 cm height and 10% resolving gel = 5.5 cm height). The total size of the gel was 8 × 7.3 cm². Electrophoresis was carried out in the discontinuous buffer system in a vertical electrophoresis apparatus (Bio-Tech India Pvt. Ltd) according to the method of Laemmli. Using a micropipette, 20 µl protein samples were loaded into each well of the gel. In one well of the same gel, a protein molecular weight marker (molecular weight range = 14-97 kDa) from Chrommas Biotech, India, was applied. 0.02% bromophenol blue (BPB) was added to the protein samples as a tracking dye to follow the movement of protein in the gel. The gel was run at 10 mA constant current. The gel was then stained overnight in 0.025% Coomassie brilliant blue (CBB) R-250.
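The Bradford quantification described above amounts to a linear standard curve: fit absorbance against known BSA amounts, then invert the fit for each sample. A sketch with hypothetical standard-curve readings (the paper's actual absorbances and dilution factors are not given, so the final mg-per-g conversion is left out):

```python
# Hypothetical BSA standard curve: micrograms of BSA vs absorbance at 595 nm.
bsa_ug = [10, 20, 30, 40, 50, 60, 70, 80]
a595 = [0.12, 0.21, 0.33, 0.41, 0.52, 0.63, 0.71, 0.82]

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

slope, intercept = linear_fit(bsa_ug, a595)

def protein_ug(absorbance):
    """Invert the standard curve: absorbance at 595 nm -> micrograms of protein."""
    return (absorbance - intercept) / slope

print(round(protein_ug(0.45), 1))  # protein in a sample reading A595 = 0.45
```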
Analysis of gel documentation Finally, gels were photographed and scanned using a Gel-Doc™ XR+ system (Bio-Rad, USA). Detailed analysis of the protein band patterns in terms of band number, mobility of protein bands, staining intensity, band percentage, lane percentage and the molecular weight of each band was done with Image Lab™ software (version 5.0). The presence and absence of bands were entered into a binary data matrix. The similarity matrix thus generated was used to construct a dendrogram using SEAVIEW version 2.6 software. Analysis of somatic chromosome Mitosis in the root-tip cells of the 5 varieties of lentil showed regular cell division. No apparent chromosomal abnormalities were found. The chromosome number counted from the mitotic metaphase plates was constant, i.e., 2n = 14 for all varieties (Fig. 2A-E). The chromosome complements of all 5 varieties of lentil showed gross morphological similarities. In general, chromosomes were short and bi-armed. Types A-C were found in the varieties WBL 58 and B 77, while Types A-D were found in the varieties WBL 81, WBL 256 and WBL 77. The total chromosome length varies between 70.90 and 83.00 µm across the varieties. There is one pair of chromosomes with a secondary constriction (SC) in all 5 varieties. The mean total F% varies within 33.23-35.19% (Table 2). Analysis of seed storage proteins The total seed proteins of the 5 varieties of lentil obtained by one-dimensional denaturing SDS-PAGE were studied and revealed qualitative and quantitative intervarietal differences in terms of total number of protein bands, position, thickness, staining intensity, relative mobility of bands and molecular weight (Figs. 4 and 5). The varieties showed variation in the total number of protein bands, ranging from 21 to 26. In the protein profiles of the experimental varieties, WBL 58 and WBL 77 showed the maximum number of protein bands, while the minimum number was found in WBL 81.
The electrophoretograms of WBL 256 and B 77 showed intermediate numbers of protein bands. The relative mobility (Rf value) and molecular weight of the protein bands varied from 0.003 to 0.990 and from 14 to 97 kDa respectively. Quantitative variation of seed protein among the different varieties of lentil was also found and is represented by the bar diagram in Fig. 6. The highest protein content, 148.166 ± 0.763763 mg/g of seed, was obtained in WBL 58. In the other varieties, WBL 81, WBL 256, WBL 77 and B 77, the seed protein contents were 121.5 ± 1.322, 136.7167 ± 1.1026, 126.666 ± 1.5275 and 137.666 ± 1.6441 mg/g of seed respectively. Analysis of seed proteins cluster The similarity indices among the 5 lentil varieties based on protein analysis are given in Table 3. The similarity relationships ranged from 0.594 to 0.690. The highest similarity index (0.690) was found between WBL 77 (Moitree) and B 77 (Asha), followed by the other pairs in descending order. The dendrogram, which represents the genetic relationships among the tested lentil varieties, is presented in Fig. 7. The dendrogram indicates that variety B 77 (Asha) is separated as an outgroup and the remaining four varieties are included in one ingroup, cluster I, with two sister subgroups. The outgroup variety is less closely related to the rest of the varieties than they are to each other. Therefore, B 77 is plesiomorphic and the remaining varieties are apomorphic. The values on each branch are branch values and the values at each node are divergence values. Among the 5 lentil varieties, the first sister group under cluster I, comprising WBL 77 and WBL 58, is a comparatively advanced group phylogenetically, owing to its high branch value (0.312) and low divergence value (0.015).
However, the second sister group, consisting of WBL 256 and WBL 81, is a relatively less advanced group owing to its comparatively low branch value (0.297) and high divergence value (0.030). Discussion Karyotype analysis of the 5 varieties of lentil shows gross uniformity in morphology, with the chromosomes forming a graded symmetrical karyotype. The somatic chromosome number 2n = 14 is constant for all the varieties. Identical chromosome numbers have been recorded earlier by different workers. Chromosomes are mostly long to medium in length, with secondary constrictions in only one pair of chromosomes in all 5 varieties of lentil. While studying 15 varieties of lentil, Sinha and Acharia reported the presence of one such pair of chromosomes in some varieties but its absence in the rest. They also noted variation in the distance between the 2 constrictions (primary and secondary). From their study they concluded that there might have been a gradual reduction in the distance between the 2 constrictions due to translocation and hybridization, which might have led to their total loss, thus giving rise to varieties without a chromosome bearing a secondary constriction. No such absence of chromosomes with secondary constrictions, or variation in the distance between the 2 constrictions, was found in the present study. Notwithstanding this gross homogeneity, karyotype analysis reveals minute differences in detail. Each variety is thus characterized by its own karyotype, which serves as one of the identifying criteria. On this basis, the varieties WBL 58 (Subrata) and B 77 (Asha) can be distinguished from the other 3 varieties by the absence of one pair of chromosomes with nearly subterminal primary constrictions.
Despite the fact that the same karyotype formula A2 + B4 + C6 + D2 is represented in WBL 77 (Moitree), WBL 81 (Subhendu) and WBL 256 (Ranjan), the varieties can be distinguished from one another by their ranges of chromosome length, namely 4.21-6.56 µm in WBL 77, 3.98-6.32 µm in WBL 81 and 4.26-6.04 µm in WBL 256. However, among the varieties of lentil, the size ranges show a remarkable constancy. It may thus be inferred that, rather than deletion or duplication, structural rearrangements involving certain chromosomes have been of principal importance in bringing about changes. Accumulation of such changes can sometimes lead to genetic diversity during the process of evolution. The disparity index (DI) value corresponds to the homogeneous or heterogeneous assemblage of chromosomes. Normally a low disparity index corresponds to homogeneity of the chromosomes, whereas a high disparity index points towards general heterogeneity. In the present study, the low DI values found in lentil, 17.18-27.48%, correspond to homogeneity of the chromosomes among the 5 varieties. In addition, the mean centromeric index (TF%) undoubtedly confirms the status of a taxon with respect to chromosome study. In lentil, the low TF% exhibited among the different varieties shows that it represents the climax of evolution, i.e. an advanced status. The abundance of submetacentric chromosomes in the karyotypes of the 5 varieties of lentil is also an advanced karyomorphological feature. Thus, individuals with the same chromosome number but with minute differences in karyomorphological details reflect the ongoing evolutionary processes at a microlevel. SDS-PAGE of water-soluble seed proteins was used to investigate the genetic differences among the varieties.
The band patterns indicate differences among the varieties in the number of bands, the position of the bands, the molecular weight of the bands, etc. (Figure 7 shows the dendrogram based on the Rf values of the electrophoretically separated seed protein bands of the five varieties, constructed using SEAVIEW version 2.6 software.) The present investigation revealed that protein profiling is one of the basic methods for detecting intervarietal genetic diversity and studying the phylogenetic relationships among the 5 selected experimental lentil varieties. In the present study of 5 varieties of lentil, the similarity index ranged from 0.594 to 0.690. The highest similarity index (0.690) was found between WBL 77 (Moitree) and B 77 (Asha), followed in descending order by WBL 58 (Subrata) and B 77 (0.683), then WBL 256 and B 77 (0.683), next WBL 81 (Subhendu) and WBL 77 (0.662), WBL 58 and WBL 256 (0.662), WBL 256 and WBL 77 (0.655), then WBL 81 and B 77 (0.647), WBL 58 and WBL 81 (0.640), and lastly WBL 256 and WBL 81 (0.594), the lowest similarity index; all are genetically related to each other. The higher the similarity coefficient between two genotypes, the greater their similarity based on protein bands. With regard to seed protein banding patterns, slight polymorphism was identified among the 5 varieties under study. Binary data for the absence and presence of bands from protein gel electrophoresis among the 5 lentil varieties showed 14.285% polymorphism. This means the level of protein polymorphism is very low, which correlates with low genetic diversity. The low level of protein polymorphism could result from the conservative nature of the seed protein. The moderately high similarity index values, ranging from 0.594 to 0.690, found among the lentil genotypes tested indicate that the genetic diversity between them is narrow owing to their more or less common origin in the breeding program.
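The similarity index and polymorphism percentage discussed above are derived from the binary presence/absence band matrix. A sketch with hypothetical band data; the paper does not state which similarity coefficient was used, so the Jaccard index here is an assumption:

```python
# Hypothetical band matrix: one row per variety, 1/0 for band presence.
bands = {
    "WBL 58": [1, 1, 1, 0, 1, 1, 0],
    "WBL 77": [1, 1, 0, 0, 1, 1, 1],
    "B 77":   [1, 1, 0, 1, 1, 1, 1],
}

def jaccard(a, b):
    """Shared bands divided by bands present in either profile."""
    shared = sum(1 for x, y in zip(a, b) if x == 1 and y == 1)
    either = sum(1 for x, y in zip(a, b) if x == 1 or y == 1)
    return shared / either

def polymorphism_percent(matrix):
    """Percentage of band positions that are neither absent in all
    varieties nor fixed in all varieties."""
    columns = list(zip(*matrix.values()))
    polymorphic = sum(1 for col in columns if 0 < sum(col) < len(col))
    return polymorphic / len(columns) * 100

print(round(jaccard(bands["WBL 77"], bands["B 77"]), 3))
print(round(polymorphism_percent(bands), 3))
```

The pairwise similarities would then be assembled into a matrix like Table 3 and fed to a clustering tool (SEAVIEW in the paper) to produce the dendrogram.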
Similar results were reported by Hamdi and Omar, Hamdi and Elemery, and Hamdi et al., who found that the highest similarity index was between Giza 370 and Family 29, followed by Giza 9 and Giza 370, then between FLIP95-67L and 81-17, the most promising lentil genotypes in Egypt. This result indicated strong genetic relations among the Egyptian genotypes Giza 370 and Family 29, and Giza 9 and Giza 370, which is logical since these genotypes originated from similar Egyptian landraces and hence have a similar genetic base. Sharma et al. also obtained similar results using AFLP and RAPD marker techniques to evaluate the genetic diversity and phylogeny of 54 lentil accessions. A study on genetic diversity in ex-situ conserved lentils using botanical descriptors and biochemical and molecular markers, with identification of landraces from indigenous genetic resources of Pakistan, likewise showed a low level of seed protein genetic diversity. In contrast, while genetically characterizing ninety-six genotypes of lentil germplasm using SSR markers, Kushwaha et al. obtained a wide range of genetic variability among the lentil genotypes, owing to their different centres of origin, different genetic constitutions and different cluster-forming groups. Therefore, it is concluded that all the experimental varieties of lentil studied here show low genetic diversity due to their similar genetic base, indigenous genetic resources and the conservative nature of the seed protein, and should be diversified using modern breeding techniques. The genetic relatedness detected in this study may constitute the foundation for future systematic lentil breeding programmes.
|
COVID-19 Infection in Diabetic Patients: Clinical Manifestations and Serious Outcomes Including ICU Admission Background: Age, hypertension, diabetes mellitus, obesity, cardiovascular illness, chronic obstructive pulmonary disease, and cancer are all associated with a higher risk of death from coronavirus disease 2019 (COVID-19). Patients with hyperglycemia and COVID-19 have more severe clinical courses, a higher rate of ICU admission and mechanical ventilation, and a marked increase in inflammatory markers. Therapeutic approaches should aim to make it easier for patients to access the healthcare system. To lower the risk of complications and relieve the strain on healthcare systems, blood glucose control and the management of comorbidities must be personalised. The goal of this work is to study the severity of COVID-19 infection in diabetic patients and its serious outcomes, such as ICU admission and invasive ventilation, along with mortality rates. Diabetes is one of the most prevalent diseases.
|
1. Field of the Invention
The present invention relates to a technique of retrieving a desired function among a plurality of home appliances respectively having different functions.
2. Description of the Related Art
The following are existing appliance/device connectivity techniques and their characteristics.
UPnP (Universal Plug and Play) is a technique of performing retrieval based on the type names of appliances/devices, the type names of devices, and device names. HAVi (Home Audio/Video interoperability) is a standard for AV appliances and a technique of performing retrieval based on IDs and attributes (function type names and vendor names). There are also techniques that perform retrieval based on the functions of appliances as well as the types of appliances.
Assume that the number of appliances that can simultaneously access one appliance is determined in advance. For such a case, there is also available a technique (exclusive control) for allowing access to the appliance within this limit (e.g., Jpn. Pat. Appln. KOKAI Publication No. 2001-196636).
Conventionally, connection between appliances (devices) has been established by using the type names of appliances (devices), limitations and the like predetermined for the respective appliances. However, since the names of the respective appliances, the name of an appliance to be found at the time of retrieval, the limitations of the respective appliances, and the like are statically written in advance in the respective appliances, only static connection relationships can be established.
According to the conventional techniques described above, therefore, it is difficult to retrieve an optimal appliance suited to any of the various kinds of requests from various kinds of appliances, among the various kinds of functions (services) used in the home and appliances that may be in various states.
|
/*******************************************************************************
* Copyright (C) Marvell International Ltd. and its affiliates
*
* This software file (the "File") is owned and distributed by Marvell
* International Ltd. and/or its affiliates ("Marvell") under the following
* alternative licensing terms. Once you have made an election to distribute the
* File under one of the following license alternatives, please (i) delete this
* introductory statement regarding license alternatives, (ii) delete the three
* license alternatives that you have not elected to use and (iii) preserve the
* Marvell copyright notice above.
*
********************************************************************************
* Marvell Commercial License Option
*
* If you received this File from Marvell and you have entered into a commercial
* license agreement (a "Commercial License") with Marvell, the File is licensed
* to you under the terms of the applicable Commercial License.
*
********************************************************************************
* Marvell GPL License Option
*
* If you received this File from Marvell, you may opt to use, redistribute and/or
* modify this File in accordance with the terms and conditions of the General
* Public License Version 2, June 1991 (the "GPL License"), a copy of which is
* available along with the File in the license.txt file or by writing to the Free
* Software Foundation, Inc., or on the worldwide web at http://www.gnu.org/licenses/gpl.txt.
*
* THE FILE IS DISTRIBUTED AS-IS, WITHOUT WARRANTY OF ANY KIND, AND THE IMPLIED
* WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE ARE EXPRESSLY
* DISCLAIMED. The GPL License provides additional details about this warranty
* disclaimer.
*
********************************************************************************
* Marvell GNU General Public License FreeRTOS Exception
*
* If you received this File from Marvell, you may opt to use, redistribute and/or
* modify this File in accordance with the terms and conditions of the Lesser
* General Public License Version 2.1 plus the following FreeRTOS exception.
* An independent module is a module which is not derived from or based on
* FreeRTOS.
* Clause 1:
* Linking FreeRTOS statically or dynamically with other modules is making a
* combined work based on FreeRTOS. Thus, the terms and conditions of the GNU
* General Public License cover the whole combination.
* As a special exception, the copyright holder of FreeRTOS gives you permission
* to link FreeRTOS with independent modules that communicate with FreeRTOS solely
* through the FreeRTOS API interface, regardless of the license terms of these
* independent modules, and to copy and distribute the resulting combined work
* under terms of your choice, provided that:
* 1. Every copy of the combined work is accompanied by a written statement that
* details to the recipient the version of FreeRTOS used and an offer by yourself
* to provide the FreeRTOS source code (including any modifications you may have
* made) should the recipient request it.
* 2. The combined work is not itself an RTOS, scheduler, kernel or related
* product.
* 3. The independent modules add significant and primary functionality to
* FreeRTOS and do not merely extend the existing functionality already present in
* FreeRTOS.
* Clause 2:
* FreeRTOS may not be used for any competitive or comparative purpose, including
* the publication of any form of run time or compile time metric, without the
* express permission of Real Time Engineers Ltd. (this is the norm within the
* industry and is intended to ensure information accuracy).
*
********************************************************************************
* Marvell BSD License Option
*
* If you received this File from Marvell, you may opt to use, redistribute and/or
* modify this File under the following licensing terms.
* Redistribution and use in source and binary forms, with or without modification,
* are permitted provided that the following conditions are met:
*
* * Redistributions of source code must retain the above copyright notice,
* this list of conditions and the following disclaimer.
*
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* * Neither the name of Marvell nor the names of its contributors may be
* used to endorse or promote products derived from this software without
* specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
* WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
* DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
* ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
* (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
* LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
* ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
*******************************************************************************/
/**
* @file pp2_mem.h
*
* Packet Processor I/O Memory mapping and Contiguous Allocator
*
*/
#ifndef _PP2_MEM_H_
#define _PP2_MEM_H_
#include "std_internal.h"
#include "pp2_types.h"
#include "pp2_hw_type.h"
/**
* User I/O map API
*
*/
/**
* pp2_maps_handle_t
* Handle for pp maps context structure
* The handle should be initialized by calling
* pp2_sys_ioinit(&pp2_maps_hdl, pp2_name) and will be passed
* to all the other pp maps methods.
*
*/
typedef uintptr_t pp2_maps_handle_t;
int pp2_sys_io_exists(const char *name);
/**
* Initialize user I/O devices and maps structures
*
* In order to initialize Marvell packet processor structures
* Marvell UIO driver should be inserted and probed.
*
* pp2_sys_ioinit will allocate the memory for the pp structure
* pp2_sys_ioinit will filter the devices based on driver name
*
* @param pp2_maps_hdl pointer to pp2_maps_handle_t
* @param name name of the packet processor
*
* @retval 0 Success
* @retval < 0 Failure. Info could not be retrieved.
*/
int pp2_sys_ioinit(pp2_maps_handle_t *pp2_maps_hdl, const char *name);
/**
* I/O mapping based on map name exported by I/O driver
*
* pp2_sys_iomap will search through available maps and filter by map name.
* This method will take care of opening the device if not
* already opened by another call and will mmap the region
* requested. This function exports physical the address of the
* mapped region if pa != NULL. This should never be accessed via r/w calls.
* The possible use case of the physical address is for debug prints
* or passing it by value to registers (i.e. contiguous dma mapping).
* Each successful pp2_sys_iomap call will add the map to a list.
*
 * @param pp2_maps_hdl pp2_maps_handle_t handle
* @param pa physical address of the mapped region
* @param name name of the memory map
*
* @retval Virtual address of the mapped region
*
*/
uintptr_t pp2_sys_iomap(pp2_maps_handle_t pp2_maps_hdl,
u32 *pa, const char *name);
/**
* I/O unmapping based on map name exported by I/O driver
*
* pp2_sys_iounmap will search through the map list and filter by map name.
*
* This method will take care of closing the device if no map
* is registered as mapped.
* Each successful pp2_sys_iounmap call will remove the map from the list.
*
 * @param pp2_maps_hdl pp2_maps_handle_t handle
* @param name name of the memory map
*
* @retval 0 Success
* @retval < 0 Failure. Memory map not found.
*
*/
int pp2_sys_iounmap(pp2_maps_handle_t pp2_maps_hdl, const char *name);
/**
* Destroy user I/O pp structure
*
 * @param pp2_maps_hdl pp2_maps_handle_t handle
*
* pp2_sys_iodestroy will release the memory for the pp structure
*
*/
void pp2_sys_iodestroy(pp2_maps_handle_t pp2_maps_hdl);
/** Slot specific r/w access routines */
/**
* Packet Processor register write function
 * Offers lock-less access to shared resources based on PP CPU memory slots
*
* @param cpu_slot PP CPU slot
* @offset register offset
* @data data to feed register
*
*/
static inline void pp2_reg_write(uintptr_t cpu_slot, uint32_t offset,
uint32_t data)
{
uintptr_t addr = cpu_slot + offset;
writel(data, (void *)addr);
}
/**
* Packet Processor register relaxed write function
 * Offers lock-less access to shared resources based on PP CPU memory slots, without memory barriers.
*
* @param cpu_slot PP CPU slot
 * @param offset register offset
 * @param data data to feed register
*
*/
static inline void pp2_relaxed_reg_write(uintptr_t cpu_slot, uint32_t offset,
uint32_t data)
{
uintptr_t addr = cpu_slot + offset;
writel_relaxed(data, (void *)addr);
}
/**
* Packet Processor register read function
 * Offers lock-less access to shared resources based on PP CPU memory slots
*
* @param cpu_slot PP CPU slot
 * @param offset register offset
*
* @retval content of register
*
*/
static inline uint32_t pp2_reg_read(uintptr_t cpu_slot, uint32_t offset)
{
uintptr_t addr = cpu_slot + offset;
return readl((void *)addr);
}
/**
* Packet Processor register relaxed read function
 * Offers lock-less access to shared resources based on PP CPU memory slots, without memory barriers.
*
* @param cpu_slot PP CPU slot
 * @param offset register offset
*
* @retval content of register
*
*/
static inline uint32_t pp2_relaxed_reg_read(uintptr_t cpu_slot, uint32_t offset)
{
uintptr_t addr = cpu_slot + offset;
return readl_relaxed((void *)addr);
}
#define pp2_relaxed_read pp2_reg_read
static inline void cm3_write(uintptr_t base, u32 offset, u32 data)
{
uintptr_t addr = base + offset;
writel(data, (void *)addr);
}
static inline u32 cm3_read(uintptr_t base, u32 offset)
{
uintptr_t addr = base + offset;
return readl((void *)addr);
}
#endif /* _PP2_MEM_H_ */
|
Chasing a Feeling
Music video
The music video was filmed in Allentown, PA, in a 12-hour shoot on November 11, 2013. The Narrative confirmed the music video in a Facebook post: "We recorded a music video some months ago (...) and will be finishing that as well as the song associated with it very soon." It was directed and filmed by the photographers Sean O'Kane, Terry O'Kane and Hilary J. Corts. The music video premiered on July 28, 2014 on Idolator.
The concept was created around the idea of the song itself. As Jesse explained in an interview, "The song itself is telling you that life just keeps moving whether or not you’re ready for it, but the video is almost making fun of what happens if you try and force something into being the thing you want it to be, when it’s really not that thing at all."
The video uses bright colors and vintage elements, and cuts between scenes in which Suzie Zeldin and Jesse Gabriel, dressed like dolls, try to escape from a blond girl (Zeldin's sister, Victoria Zeldin) who has locked them in a closet of a hidden house in the forest. In the girl's presence they freeze as dolls; when she leaves the room, they come back to 'real life', running between the rooms. The band also appears, in normal dress, singing in a room with the floor covered in petals. The video closes with a shot of Zeldin and Gabriel running out of the house; when the girl appears they fall to the ground as dolls again, and she pulls their legs and drags them back into the house.
Critical reception
Tori Mier from AlterThePress wrote: "Their work collection boasts songs that feel like an afternoon stroll on a gentle summer’s day, full of soft sounds that build to a release, as well as tracks like the recently-released “Chasing A Feeling,” that touch upon something murkier beneath the surface."
|
import datetime
from django.core import mail
from django.contrib.auth.models import User
from utils import BaseTestCase
from invitation import app_settings
from invitation.models import InvitationError, Invitation, InvitationStats
from invitation.models import performance_calculator_invite_only
from invitation.models import performance_calculator_invite_optional
EXPIRE_DAYS = app_settings.EXPIRE_DAYS
INITIAL_INVITATIONS = app_settings.INITIAL_INVITATIONS
class InvitationTestCase(BaseTestCase):
def setUp(self):
super(InvitationTestCase, self).setUp()
user = self.user()
user.invitation_stats.use()
self.invitation = Invitation.objects.create(user=user,
email=u'<EMAIL>',
key=u'F' * 40)
def make_invalid(self, invitation=None):
invitation = invitation or self.invitation
invitation.date_invited = datetime.datetime.now() - \
datetime.timedelta(EXPIRE_DAYS + 10)
invitation.save()
return invitation
def test_send_email(self):
self.invitation.send_email()
self.assertEqual(len(mail.outbox), 1)
self.assertEqual(mail.outbox[0].recipients()[0], u'<EMAIL>')
self.invitation.send_email(u'<EMAIL>')
self.assertEqual(len(mail.outbox), 2)
self.assertEqual(mail.outbox[1].recipients()[0], u'<EMAIL>')
def test_mark_accepted(self):
new_user = User.objects.create_user('test', '<EMAIL>', 'test')
pk = self.invitation.pk
self.invitation.mark_accepted(new_user)
self.assertRaises(Invitation.DoesNotExist,
Invitation.objects.get, pk=pk)
def test_invite(self):
self.user().invitation_stats.add_available(10)
Invitation.objects.all().delete()
invitation = Invitation.objects.invite(self.user(), '<EMAIL>')
self.assertEqual(invitation.user, self.user())
self.assertEqual(invitation.email, '<EMAIL>')
self.assertEqual(len(invitation.key), 40)
self.assertEqual(invitation.is_valid(), True)
self.assertEqual(type(invitation.expiration_date()), datetime.date)
# Test if existing valid record is returned
# when we try with the same credentials
self.assertEqual(Invitation.objects.invite(self.user(),
'<EMAIL>'), invitation)
# Try with an invalid invitation
invitation = self.make_invalid(invitation)
new_invitation = Invitation.objects.invite(self.user(),
'<EMAIL>')
self.assertEqual(new_invitation.is_valid(), True)
self.assertNotEqual(new_invitation, invitation)
def test_find(self):
self.assertEqual(Invitation.objects.find(self.invitation.key),
self.invitation)
invitation = self.make_invalid()
self.assertEqual(invitation.is_valid(), False)
self.assertRaises(Invitation.DoesNotExist,
Invitation.objects.find, invitation.key)
self.assertEqual(Invitation.objects.all().count(), 0)
self.assertRaises(Invitation.DoesNotExist,
Invitation.objects.find, '')
class InvitationStatsBaseTestCase(BaseTestCase):
def stats(self, user=None):
user = user or self.user()
return (user.invitation_stats.available,
user.invitation_stats.sent,
user.invitation_stats.accepted)
class MockInvitationStats(object):
def __init__(self, available, sent, accepted):
self.available = available
self.sent = sent
self.accepted = accepted
class InvitationStatsInviteOnlyTestCase(InvitationStatsBaseTestCase):
def setUp(self):
super(InvitationStatsInviteOnlyTestCase, self).setUp()
app_settings.INVITE_ONLY = True
def test_default_performance_func(self):
self.assertAlmostEqual(performance_calculator_invite_only(
self.MockInvitationStats(5, 5, 1)), 0.42)
self.assertAlmostEqual(performance_calculator_invite_only(
self.MockInvitationStats(0, 10, 10)), 1.0)
self.assertAlmostEqual(performance_calculator_invite_only(
self.MockInvitationStats(10, 0, 0)), 0.0)
def test_add_available(self):
self.assertEqual(self.stats(), (INITIAL_INVITATIONS, 0, 0))
self.user().invitation_stats.add_available()
self.assertEqual(self.stats(), (INITIAL_INVITATIONS + 1, 0, 0))
self.user().invitation_stats.add_available(10)
self.assertEqual(self.stats(), (INITIAL_INVITATIONS + 11, 0, 0))
def test_use(self):
self.user().invitation_stats.add_available(10)
self.assertEqual(self.stats(), (INITIAL_INVITATIONS + 10, 0, 0))
self.user().invitation_stats.use()
self.assertEqual(self.stats(), (INITIAL_INVITATIONS + 9, 1, 0))
self.user().invitation_stats.use(5)
self.assertEqual(self.stats(), (INITIAL_INVITATIONS + 4, 6, 0))
self.assertRaises(InvitationError,
self.user().invitation_stats.use,
INITIAL_INVITATIONS + 5)
def test_mark_accepted(self):
if INITIAL_INVITATIONS < 10:
i = 10
self.user().invitation_stats.add_available(10-INITIAL_INVITATIONS)
else:
i = INITIAL_INVITATIONS
self.user().invitation_stats.use(i)
self.user().invitation_stats.mark_accepted()
self.assertEqual(self.stats(), (0, i, 1))
self.user().invitation_stats.mark_accepted(5)
self.assertEqual(self.stats(), (0, i, 6))
self.assertRaises(InvitationError,
self.user().invitation_stats.mark_accepted, i)
def test_give_invitations(self):
self.assertEqual(self.stats(), (INITIAL_INVITATIONS, 0, 0))
InvitationStats.objects.give_invitations(count=3)
self.assertEqual(self.stats(), (INITIAL_INVITATIONS + 3, 0, 0))
InvitationStats.objects.give_invitations(self.user(), count=3)
self.assertEqual(self.stats(), (INITIAL_INVITATIONS + 6, 0, 0))
InvitationStats.objects.give_invitations(self.user(),
count=lambda u: 4)
self.assertEqual(self.stats(), (INITIAL_INVITATIONS + 10, 0, 0))
def test_reward(self):
self.assertAlmostEqual(self.user().invitation_stats.performance, 0.0)
InvitationStats.objects.reward()
self.assertEqual(self.user().invitation_stats.available,
INITIAL_INVITATIONS)
self.user().invitation_stats.use(INITIAL_INVITATIONS)
self.user().invitation_stats.mark_accepted(INITIAL_INVITATIONS)
InvitationStats.objects.reward()
invitation_stats = self.user().invitation_stats
self.assertEqual(invitation_stats.performance > 0.5, True)
self.assertEqual(invitation_stats.available, INITIAL_INVITATIONS)
class InvitationStatsInviteOptionalTestCase(InvitationStatsBaseTestCase):
def setUp(self):
super(InvitationStatsInviteOptionalTestCase, self).setUp()
app_settings.INVITE_ONLY = False
def test_default_performance_func(self):
self.assertAlmostEqual(performance_calculator_invite_optional(
self.MockInvitationStats(5, 5, 1)), 0.2)
self.assertAlmostEqual(performance_calculator_invite_optional(
self.MockInvitationStats(20, 5, 1)), 0.2)
self.assertAlmostEqual(performance_calculator_invite_optional(
self.MockInvitationStats(0, 5, 1)), 0.2)
self.assertAlmostEqual(performance_calculator_invite_optional(
self.MockInvitationStats(0, 10, 10)), 1.0)
self.assertAlmostEqual(performance_calculator_invite_optional(
self.MockInvitationStats(10, 0, 0)), 0.0)
def test_use(self):
self.assertEqual(self.stats(), (INITIAL_INVITATIONS, 0, 0))
self.user().invitation_stats.use()
self.assertEqual(self.stats(), (INITIAL_INVITATIONS, 1, 0))
self.user().invitation_stats.use(5)
self.assertEqual(self.stats(), (INITIAL_INVITATIONS, 6, 0))
self.user().invitation_stats.use(INITIAL_INVITATIONS + 5)
self.assertEqual(self.stats(), (INITIAL_INVITATIONS,
INITIAL_INVITATIONS + 11,
0))
def test_mark_accepted(self):
if INITIAL_INVITATIONS < 10:
i = 10
self.user().invitation_stats.add_available(10-INITIAL_INVITATIONS)
else:
i = INITIAL_INVITATIONS
self.user().invitation_stats.use(i)
self.user().invitation_stats.mark_accepted()
self.assertEqual(self.stats(), (i, i, 1))
self.user().invitation_stats.mark_accepted(5)
self.assertEqual(self.stats(), (i, i, 6))
self.assertRaises(InvitationError,
self.user().invitation_stats.mark_accepted, i)
self.user().invitation_stats.mark_accepted(4)
self.assertEqual(self.stats(), (i, i, 10))
def test_reward(self):
self.assertAlmostEqual(self.user().invitation_stats.performance, 0.0)
InvitationStats.objects.reward()
self.assertEqual(self.user().invitation_stats.available,
INITIAL_INVITATIONS)
self.user().invitation_stats.use(INITIAL_INVITATIONS)
self.user().invitation_stats.mark_accepted(INITIAL_INVITATIONS)
InvitationStats.objects.reward()
invitation_stats = self.user().invitation_stats
self.assertEqual(
invitation_stats.performance > app_settings.REWARD_THRESHOLD, True)
self.assertEqual(invitation_stats.available, INITIAL_INVITATIONS * 2)
|
#include "util.hpp"
#include "reader.hpp"
#include "tick.hpp"
#include "tfidf_transformer.hpp"
#include "inverted_index.hpp"
#include <cstdio>
#include "SETTINGS.h"
#define K 12
#define K_FIRST 3000
#define PREDICT_LABEL 5
static void
predict(std::vector<int> &results,
const InvertedIndex::result_t &search_results,
const std::vector<label_t> &labels)
{
std::map<int, float> score;
std::vector<std::pair<float, int> > tmp;
results.clear();
for (auto i = search_results.begin(); i != search_results.end(); ++i) {
const label_t &label = labels[i->id];
for (auto j = label.begin(); j != label.end(); ++j) {
auto s = score.find(*j);
if (s != score.end()) {
s->second += 1.0f + i->cosine * 0.1f;
} else {
score.insert(std::make_pair(*j, 1.0f + i->cosine * 0.1f));
}
}
}
for (auto i = score.begin(); i != score.end(); ++i) {
tmp.push_back(std::make_pair(i->second, i->first));
}
std::sort(tmp.begin(), tmp.end(), std::greater<std::pair<float, int> >());
for (auto i = tmp.begin(); i != tmp.end(); ++i) {
if (results.size() < PREDICT_LABEL) {
results.push_back(i->second);
}
}
}
void
make_submission(const std::vector<std::pair<int, std::vector<int> > > &submission)
{
FILE *fp = fopen(SUBMISSION, "w");
fprintf(fp, "Id,Predicted\n");
for (auto i = submission.begin(); i != submission.end(); ++i) {
bool first = true;
fprintf(fp, "%d,", i->first + 1);
for (auto j = i->second.begin(); j != i->second.end(); ++j) {
if (first) {
first = false;
} else {
fprintf(fp, " ");
}
fprintf(fp, "%d", *j);
}
fprintf(fp, "\n");
}
fclose(fp);
}
int main(void)
{
DataReader reader, test_reader;
std::vector<fv_t> data;
std::vector<fv_t> test_data;
std::vector<label_t> labels;
std::vector<label_t> dummy_labels;
TFIDFTransformer tfidf;
long t = tick();
InvertedIndex knn;
std::vector<std::pair<int, std::vector<int> > > submission;
if (!reader.open(TRAIN_DATA)) {
fprintf(stderr, "open failed: %s\n", TRAIN_DATA);
return -1;
}
if (!test_reader.open(TEST_DATA)) {
fprintf(stderr, "open failed: %s\n", TEST_DATA);
return -1;
}
reader.read(data, labels);
test_reader.read(test_data, dummy_labels);
printf("load: train %ld test %ld %ldms\n",
data.size(), test_data.size(), tick() - t);
reader.close();
test_reader.close();
t = tick();
tfidf.train(data);
tfidf.transform(data);
tfidf.transform(test_data);
knn.build(&data);
printf("build index %ldms\n", tick() -t );
t = tick();
#ifdef _OPENMP
#pragma omp parallel for schedule(dynamic, 1)
#endif
for (int i = 0; i < (int)test_data.size(); ++i) {
std::vector<int> topn_labels;
InvertedIndex::result_t results;
knn.fast_knn(results, K, test_data[i], K_FIRST, data.size() / 100);
predict(topn_labels, results, labels);
#ifdef _OPENMP
#pragma omp critical
#endif
{
submission.push_back(std::make_pair(i, topn_labels));
if (i % 1000 == 0) {
printf("--- predict %d/%ld %ldms\n", i, test_data.size(), tick() -t);
t = tick();
}
}
}
std::sort(submission.begin(), submission.end());
make_submission(submission);
return 0;
}
|
1. Field of the Invention
The present invention relates to a parking control unit for establishing whether or not a parking fee has been paid for parked vehicles.
2. Description of the Related Art
Cities will normally have one or more vehicle parking companies who distribute parking meters, or so-called pay meters, throughout the city in a number of different places, of which streets and large ground-based parking areas are the most common.
In addition to coin payments, it has become increasingly common practice to pay a parking fee with a cash card of one kind or another. Card payments are made by drawing the cash card through a card reader on the pay meter.
The invention relates to the type of payment system with which the person parking a vehicle draws the cash card through a card reader in the pay meter, and the pay meter stores the cash-card number and the time at which the card was read.
It is highly desirable to be able to use any parking meter whatsoever when parking a vehicle and then use any parking meter or pay meter whatsoever to pay the parking fee when collecting the vehicle. Thus, it should be possible to commence a series of parking occasions at one place in the city or town and draw the cash card through the reader of a given meter, and to terminate the series of parking occasions at another place in the city or town, by drawing the cash card through the reader of another meter.
One problem with the majority of known systems is that the pay meter must produce a parking ticket that contains machine-readable information and that each pay meter must have a reader that is able to read the ticket. This requires the pay meters to be serviced at relatively short intervals, in order to ensure the function of the meters. It is also necessary to replenish the pay meters with tickets.
Handling of the tickets can also be problematic. A lost ticket must be reported as being lost, in order to be able to terminate parking of the vehicle concerned.
A solution to this problem is described in Swedish Patent Specification No. 960112-7.
The invention according to this prior patent relates to a method of cash card billing by means of parking meters or pay meters when parking a vehicle, wherewith a system of pay meters includes several pay meters that are equipped with a cash card reader. A person parking a vehicle will initially look for a first pay meter and with the aid of the cash card reader enter information carried on the card, at least with respect to the card account number, wherewith the pay meter is caused to store the account number and the time at which parking had commenced in a memory belonging to the pay meter, when reading the cash card. In conjunction with terminating parking of the vehicle, the cash card is inserted into another pay meter and read by the card reader of this meter, this latter pay meter optionally being any chosen pay meter in the pay meter system, including the pay meter first mentioned. The second pay meter is caused to store the account number together with the time at which the card was read, i.e. the parking terminating time, in a memory belonging to this second pay meter.
The invention according to this prior patent is characterized in that each pay meter has a keyboard by means of which a person can enter the registration number of the vehicle to be parked, in conjunction with causing the cash card to be read by the card reader at the commencement of parking of the vehicle. The pay meter is caused to store the vehicle registration number together with the account number and the time at which parking commenced, and the memory of each pay meter is connected to the memory of a central computer; and in that billing is carried out on the account number carried on the cash card in question for the parking time that has elapsed between commencing and terminating the parking of the vehicle.
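The billing step described above can be sketched in Python. This is an illustrative reconstruction, not code from the patent: the function name, the flat hourly tariff, and the rounding rule are all assumptions.

```python
from datetime import datetime

RATE_PER_HOUR = 2.0  # assumed tariff; the patent does not specify one


def parking_fee(start: datetime, end: datetime,
                rate_per_hour: float = RATE_PER_HOUR) -> float:
    """Bill the time elapsed between the start and end card reads."""
    hours = (end - start).total_seconds() / 3600.0
    return round(hours * rate_per_hour, 2)


# A parking period commenced at one pay meter and terminated at another:
start_read = datetime(2024, 1, 1, 9, 0)    # card drawn at the first pay meter
end_read = datetime(2024, 1, 1, 11, 30)    # card drawn at a second pay meter
print(parking_fee(start_read, end_read))   # 2.5 hours at 2.0/hour -> 5.0
```

In the patented system this calculation would happen in the central computer, which pairs the start and end reads by account number.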
According to one embodiment of this patent, the system of pay meters can be instructed to print-out or display those vehicle registration numbers that have commenced a parking period but have not terminated a parking period of the vehicle.
This instruction may, for instance, be given in a manner such that a car park attendant inserts a special authorization card in the card reader. The pay meter will be designed to print-out a list of vehicle registration numbers in numerical order with regard to those vehicles where a parking has not been terminated. Alternatively, the registration numbers can be shown on a display in alphabetical numerical order and the car park attendant can skim between registration numbers with the aid of arrow keys, for instance.
The car park superintendent can then compare the registration numbers of parked vehicles with numbers displayed or printed-out by the system. A vehicle whose registration number is not found in the system will be duly fined.
It will be understood that the system described in this prior patent specification is best suited for parking systems whose geographical separation is limited, such as multi-car parks or ground-based car parks. When the system is applied, for instance, over the whole city center, the lists will be very comprehensive and quickly out of date.
The present invention solves this problem and provides a very fast and effective parking control unit.
The present invention thus relates to a parking control unit which is intended primarily for checking whether or not a parking fee for a parked vehicle has been paid. It is intended for use in a parking system of the kind with which the user registers an account number and the vehicle registration number at the start of a parking period, and again registers the account number at the time of terminating the parking period of the vehicle. The account number, the vehicle registration number, the parking starting time and the parking terminating time are stored in the memory of a central computer so that billing can be effected on the account number concerned for the parking time that has elapsed between the parking commencement time and the parking terminating time. The parking control unit is portable and includes a display, a computer unit with an associated memory, and a communications unit adapted to communicate with the memory of the central computer to obtain information relating to the registration numbers of those vehicles which have commenced a parking period in the system but have not terminated the parking period of the vehicle. The parking control unit also includes an optical image or picture reproducing device which functions to reproduce an image of the registration plates of vehicles and to clearly show the registration numbers. The parking control unit is adapted to compare an imaged and clearly shown registration number with those registration numbers of vehicles that have commenced a parking period but have not terminated parking in the system, and to indicate, with the aid of an indicating means, whether or not a clearly shown registration number belongs to a vehicle that has commenced a parking period but where parking in the system has not been terminated.
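The comparison performed by the control unit can be illustrated with a short Python sketch. The names and data below are hypothetical; the patent describes the behavior, not an implementation.

```python
def check_plate(plate: str, open_parkings: set) -> bool:
    """Return True if the imaged registration number belongs to a vehicle
    that has commenced, but not yet terminated, a parking period."""
    # Normalize the OCR result before comparing against the downloaded list.
    return plate.strip().upper() in open_parkings


# Registration numbers obtained from the central computer's memory:
open_parkings = {"ABC123", "XYZ789"}

print(check_plate("abc123", open_parkings))  # True  -> parking is paid for
print(check_plate("JKL456", open_parkings))  # False -> vehicle may be fined
```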
|
Chand Baori, located in the village of Abhaneri in Rajasthan, is a famous and arguably among the most beautiful stepwells in India. The stepwell was built in the 9th century by the then ruling king of the region, Raja Chand. Baoris or stepwells in the old days served as reservoirs to store water for summers or parched days.
Built on a square plan, Chand Baori is a 100-foot-deep stepwell with 3,500 narrow steps across 13 storeys. The well is surrounded by steps on three sides, while the fourth side has a set of pavilions built one atop the other. The side with the pavilions features niches containing beautiful sculptures and religious carvings. There is also a stage for performing arts and several rooms for the king and queen.
Chand Baori is now being managed by the Archaeological Survey of India. There is no fee charged for visiting the monument. The place has also been shown in various movies like The Fall and The Dark Knight Rises.
|
SPIFFY: A Simpler Image Viewer for Medical Imaging Medical imaging visualization technology plays a significant role in the medical community. Medical imaging visualization applications bring great convenience to clinical diagnosis, monitoring, and treatment: they allow doctors and researchers to see inside the human body, identify medical problems, and diagnose diseases. This article presents a lightweight, fast, and user-friendly image viewer for medical imaging called SPIFFY. Development methodologies integrating VTK, ITK, and Qt are presented, along with the minimalist user interface (UI) design of SPIFFY, which applies Human-Computer Interaction (HCI) psychology principles. Moreover, this article identifies the benefits provided by SPIFFY and presents a benchmark against some existing medical visualization applications. Experiments using cognitive walkthrough evaluation show that SPIFFY provides both high effectiveness and efficiency.
|
<reponame>flepied/monocle
# MIT License
# Copyright (c) 2019 <NAME>
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
import logging
import requests
from time import sleep
from datetime import datetime
class GithubGraphQLQuery(object):
log = logging.getLogger("monocle.GithubGraphQLQuery")
def __init__(self, token):
self.url = 'https://api.github.com/graphql'
self.headers = {'Authorization': 'token %s' % token}
self.session = requests.session()
        # Refresh the rate limit data every 25 requests
self.get_rate_limit_rate = 25
self.query_count = 0
# Set an initial value
self.quota_remain = 5000
self.get_rate_limit()
def get_rate_limit(self):
try:
ratelimit = self.getRateLimit()
except requests.exceptions.ConnectionError:
sleep(5)
ratelimit = self.getRateLimit()
self.quota_remain = ratelimit['remaining']
self.resetat = datetime.strptime(
ratelimit['resetAt'], '%Y-%m-%dT%H:%M:%SZ')
self.log.info("Got rate limit data: remain %s resetat %s" % (
self.quota_remain, self.resetat))
def wait_for_call(self):
if self.quota_remain <= 150:
until_reset = self.resetat - datetime.utcnow()
self.log.info(
"Quota remain: %s/calls delay until "
"reset: %s/secs waiting ..." % (
self.quota_remain, until_reset.seconds))
sleep(until_reset.seconds + 60)
self.get_rate_limit()
def getRateLimit(self):
qdata = '''{
rateLimit {
limit
cost
remaining
resetAt
}
}'''
data = self.query(qdata, skip_get_rate_limit=True)
return data['data']['rateLimit']
def query(self, qdata, skip_get_rate_limit=False, ignore_not_found=False):
if not skip_get_rate_limit:
if self.query_count % self.get_rate_limit_rate == 0:
self.get_rate_limit()
self.wait_for_call()
data = {'query': qdata}
r = self.session.post(
url=self.url, json=data, headers=self.headers,
timeout=30.3)
self.query_count += 1
        if r.status_code != 200:
            raise Exception("No ok response code see: %s" % r.text)
ret = r.json()
if 'errors' in ret:
raise Exception("Errors in response see: %s" % r.text)
return ret
|
// Copyright 2017 Yahoo Holdings. Licensed under the terms of the Apache 2.0 license. See LICENSE in the project root.
package com.yahoo.io;
import static org.junit.Assert.*;
import java.util.Arrays;
import java.util.zip.Deflater;
import org.junit.Before;
import org.junit.Test;
import com.yahoo.text.Utf8;
/**
* Check decompressor used among other things for packed summary fields.
*
* @author <a href="mailto:<EMAIL>"><NAME></a>
*/
public class SlowInflateTestCase {
private String value;
private byte[] raw;
private byte[] output;
private byte[] compressed;
private int compressedDataLength;
@Before
public void setUp() throws Exception {
value = "000000000000000000000000000000000000000000000000000000000000000";
raw = Utf8.toBytesStd(value);
output = new byte[raw.length * 2];
Deflater compresser = new Deflater();
compresser.setInput(raw);
compresser.finish();
compressedDataLength = compresser.deflate(output);
compresser.end();
compressed = Arrays.copyOf(output, compressedDataLength);
}
@Test
public final void test() {
byte[] unpacked = new SlowInflate().unpack(compressed, raw.length);
assertArrayEquals(raw, unpacked);
}
@Test
public final void testCorruptData() {
compressed[0] = (byte) (compressed[0] ^ compressed[1]);
compressed[1] = (byte) (compressed[1] ^ compressed[2]);
compressed[2] = (byte) (compressed[2] ^ compressed[3]);
compressed[3] = (byte) (compressed[3] ^ compressed[4]);
boolean caught = false;
try {
new SlowInflate().unpack(compressed, raw.length);
} catch (RuntimeException e) {
caught = true;
}
assertTrue(caught);
}
}
|
When Microsoft releases Office XP in a few months, the company will face off against its two toughest competitors: software pirates and, well, Microsoft.
On one front, Microsoft must convince as many as 120 million Office 95 and 97 users to upgrade to the new version, an opportunity customers passed over with Office 2000. On the other front, the Redmond, Wash.-based company must combat lost sales to casual and professional pirates. In North America alone, potentially one out of every four copies of Office is pirated, meaning it was copied illegally.
"Microsoft's biggest competitor may in fact be software pirates, just as their biggest competitor is their installed base," said Prudential Securities analyst James Lucier. He said that with so many different versions of Office or Windows available, "there's just no adequate incentive for people to move up."
Interestingly, Microsoft is marshalling similar tools to deal with both problems. With the first package of bug fixes--or service release--to Office 2000, Microsoft introduced activation technology that makes pirating the software more difficult. With Office XP and, later, Windows XP, Microsoft will refine the technology, essentially "locking" the software to the user's PC configuration.
Also with Office XP, Microsoft will begin selling software on a subscription basis, under which customers pay a fee for using the product for a predetermined time period. While the move is largely viewed as a way of generating additional revenue, the subscription scheme also could undercut piracy.
"Moving all of their software to a subscription basis is the ultimate way to combat piracy, because then you're not going to get anything unless you're a subscriber," Lucier said.
Moving more customers to a subscription basis could also diminish competition from older versions of Office, as customers would pay for incremental upgrades as they became available.
For Microsoft, its ability to successfully reduce piracy and move more people to Office XP is vital to the company's financial success in a soft economy and at a time when technology sales are slowing, analysts say. About 46 percent of the company's revenue and more than 50 percent of income is derived from Office, making it Microsoft's most important product line--maybe even more so than Windows.
But Office sales are slowing, with revenue during Microsoft's second fiscal quarter declining 2 percent year over year to $2.49 billion from $2.53 billion. The fiscal third quarter is expected to be flat or show similar declines.
"No other product is more important to Microsoft than Office," said Gartner analyst Chris LeTocq. "Any slowdown in Office sales is bound to hurt the whole company."
Particularly as Microsoft prepares for perhaps the most important strategic shift in its history--moving to Windows XP and the Microsoft.Net software-as-a-service initiative--strong Office sales could be essential to carrying the company through the transition.
With the North American market saturated, Microsoft must expand Office sales in other geographic regions. But in many of those areas, the company faces stiff competition from casual and professional software pirates.
Technology trade groups the Business Software Alliance and the Software & Information Industry Association (SIIA) estimate about 25 percent of business software used in the United States is pirated. Worldwide, the rate jumps to 36 percent, but in some of the most important growth markets, the rate is much higher.
In China, for example, more than 90 percent of software is pirated. In some smaller markets, such as Vietnam, the piracy rate jumps to 98 percent.
"If you have factories in China ripping off Microsoft product, clearly this is a problem," Lucier said. "You have the Chinese government saying Windows is the tool of American imperialism and saying, 'We want our world to run on Red Flag Linux.' That's a serious problem for Microsoft."
Microsoft wouldn't say how much money it loses worldwide to piracy, but the SIIA puts the figure around $12 billion a year for all companies selling business software. In China, the loss is $650 million.
But Microsoft, because of its market dominance, may be disproportionately affected. LeTocq said Office's market share is "in the low nineties in the U.S. and in the eighties most everywhere else."
"If you take the market in China, yes, Microsoft probably is disproportionately taking the brunt of the piracy problem because it is the dominant player there," said Peter Beruk, the SIIA's vice president of antipiracy.
This could mean Microsoft's piracy rate is much higher in many markets and well above the 36 percent worldwide average. The company would also face higher losses--and on its most profitable product line.
"It does seem to make sense that the popularity of Office would increase the piracy rate," said Microsoft corporate attorney Tom Cranton. "I can't tell you from an informed perspective. All I can tell you is, yeah, certainly that is something you would want to consider when looking at the piracy rates."
But such untapped revenue has proved elusive for Microsoft and potentially more difficult to mine given the North American high-tech sales crisis. Microsoft's activation technology, for as long as it stays ahead of the pirates, could be an important tool to fight piracy in China and other burgeoning markets, Lucier said. A move to subscription-based computing could have an even greater effect.
"Obviously, it's a great way to expand their revenues," he said. "If you can turn just 10 percent of their losses into China legitimate sales, you obviously have significant revenues there."
But some analysts warn Microsoft could kill the golden goose while trying to minimize losses.
"In the Chinese market, they're very sensitive that Microsoft is a tool of U.S. nationalism, and here is an approach that may give them reason to stick with generating illegal copies of Office 2000," LeTocq said.
Microsoft's bigger challenge may be rustling Office sales on its home turf, North America, where it faces competition not only from piracy, but from older versions of Office.
Tom Bailey, lead product manager for Office XP, estimates that as much as 60 percent of Office users have stuck with a version older than Office 2000.
LeTocq said there is good reason for that: "Microsoft simply hasn't given users a compelling reason to upgrade. What they've got works, so why switch?"
Since many companies upgrade with every other version of Office and the majority of users are on Office 95 or 97, Microsoft could see some demand for Office XP after its expected late-May release. Cajoling customers into paying on a subscription basis would be one way to reduce and eventually eliminate the problem of competition with older versions. Office users paying by subscription would always have the latest version.
However, "this is a very critical time for Microsoft to adjust how this works," LeTocq said, referring to how people pay for Office. Microsoft released gold code to Office XP last week but doesn't plan to sell the new version at retail until late May or early June. But corporate customers subscribing to Microsoft's licensing programs are expected to get Office XP sometime in April.
"Microsoft clearly is being cautious with the locking feature and subscription scheme," LeTocq said. "That's why they're holding back on retail."
On the one hand, the company wants to reap as many up-front sales of the full version as possible before fully exposing the subscription payment option. "The subscription thing is more a long-term play for Microsoft," LeTocq said.
Besides potentially cutting down Office-to-Office competition, the subscription scheme and activation wizards may be tools for thwarting piracy.
"The more software becomes a service that requires a key or some more tightly controlled access, you're going to see more piracy taken out of the picture," Lucier said.
The activation mechanism, which locks Office to a particular PC configuration, is expected to help combat casual piracy, such as friends sharing copies of Office or small businesses buying one copy for many PCs.
"We found that the vast majority of piracy is this kind of casual piracy," said Lisa Gurry, a product manager for Office XP. "This Office activation wizard is designed to combat this casual piracy."
If successful, the mechanism could tap another source of Office revenue as more legal copies are sold.
But LeTocq believes customers forced to use the activation mechanism, which is mandatory in order to use Office, may balk and upgrade no further than Office 2000.
"It's interesting that Microsoft uses the word 'activation,' when it's really locking the code to a particular PC," he said. "That carries a different connotation, and Microsoft knows this."
|
import * as React from 'react';
import { Button } from './Button';
import { ShowOf, ShowOfComponentProps } from '../..';
import 'animate.css/animate.css';
type AnimatedProps = {
  // animate.css class names applied on enter and exit, e.g. 'fadeIn' / 'fadeOut'
  in: string;
  out: string;
  children: React.ReactNode;
};

// Transition wrapper rendered by ShowOf: applies the exit animation class
// while the content is leaving, and the enter class otherwise.
export function Animated(props: ShowOfComponentProps<AnimatedProps>) {
  return (
    <div
      className={`animate__animated animate__faster animate__${
        props.state === 'exit' ? props.out : props.in
      }`}
    >
      {props.children}
    </div>
  );
}

// Demo component: toggles visibility; ShowOf keeps the children mounted
// for the 500 ms duration so the exit animation can finish before unmount.
export function AnimatedContent(props: AnimatedProps) {
  const [on, setOn] = React.useState(false);
  return (
    <div>
      <Button onClick={() => setOn(!on)}>
        {props.in} / {props.out}
      </Button>
      <ShowOf when={on} duration={500} render={Animated} {...props} />
    </div>
  );
}
|
Therapeutic applications of mesenchymal stromal cells: paracrine effects and potential improvements.
Among the various types of cell-to-cell signaling, paracrine signaling comprises those signals that are transmitted over short distances between different cell types. In the human body, secreted growth factors and cytokines instruct, among other processes, proliferation, differentiation, and migration. In the hematopoietic stem cell (HSC) niche, stromal cells provide instructive cues to stem cells via paracrine signaling, and one of these cell types, known to secrete a broad panel of growth factors and cytokines, is the mesenchymal stromal cell (MSC). The factors secreted by MSCs have trophic, immunomodulatory, antiapoptotic, and proangiogenic properties, and their paracrine profile varies according to their initial activation by various stimuli. MSCs are currently studied as treatment for inflammatory diseases such as graft-versus-host disease and Crohn's disease, but also as treatment for myocardial infarction and solid organ transplantation. In addition, MSCs are investigated for their use in tissue engineering applications, in which their differentiation plays an important role, but as we have recently demonstrated, their trophic factors may also be involved. Furthermore, a functional improvement of MSCs might be obtained after preconditioning or tailoring the cells themselves. Also, the way the cells are clinically administered may be specialized for specific therapeutic scenarios. In this review we will first discuss the HSC niche, in which MSCs were recently identified and are thought to play an instructive and supportive role. We will then evaluate therapeutic applications that currently try to utilize the trophic and/or immunomodulatory properties of MSCs, and we will also discuss new options to enhance their therapeutic effects.
|
Republican Rep. Mo Brooks of Alabama has an idea of who is to blame for America’s measles outbreak.
In an interview with conservative radio host Matt Murphy Tuesday, Brooks suggested that not only were undocumented immigrants bringing diseases across the U.S. border, but that they likely have caused a number of deaths of American children. “I don’t think there is any health care professional who has examined the facts who can honestly say that Americans have not died because the diseases brought into America by illegal aliens who are not properly health care screened as lawful immigrants are,” Brooks said.
Public health officials are scrambling to contain a measles outbreak that spans 14 states and has reignited a debate over vaccines and the extent to which parents can choose whether to vaccinate their children against easily preventable diseases.
It’s not the first time that Republican elected officials have made alarmist claims linking headline-grabbing public health concerns to immigrant communities. Last summer, when thousands of unaccompanied minors fled from Central America to the U.S.’s southern border, Republican Rep. Phil Gingrey, a retired Georgia physician, stoked fears that the children were bringing the fatal Ebola virus with them. Others raised alarms of outbreaks of scabies, lice and chicken pox.
|
Holmes: a graphical tool for development, simulation and analysis of Petri net based models of complex biological systems
Summary
Model development and its analysis is a fundamental step in systems biology. The theory of Petri nets offers a tool for such a task. With the rapid development of computer science, a variety of tools for Petri nets have emerged, offering various analytical algorithms. This gives rise to the problem of using different programs to analyse a single model: many file formats and different representations of results make the analysis much harder. Especially for larger nets, the ability to visualize the results in a proper form is a great help in understanding their significance. We present a new tool for Petri net development and analysis called Holmes. Our program contains algorithms for model analysis based on different types of Petri nets, e.g. an invariant generator, Maximum Common Transitions (MCT) sets and cluster modules, simulation algorithms and knockout analysis tools. A very important feature is the ability to visualize the results of almost all analytical modules. The integration of such modules into one graphical environment allows a researcher to fully devote his or her time to model building and analysis.
Availability and implementation
Available at http://www.cs.put.poznan.pl/mradom/Holmes/holmes.html.
Contact
[email protected].
|
/*
* Copyright 2013 Google Inc.
*
* Use of this source code is governed by a BSD-style license that can be
* found in the LICENSE file.
*/
#include "SkOpContour.h"
#include "SkPathWriter.h"
#include "SkReduceOrder.h"
#include "SkTSort.h"
void SkOpContour::toPath(SkPathWriter* path) const {
if (!this->count()) {
return;
}
const SkOpSegment* segment = &fHead;
do {
SkAssertResult(segment->addCurveTo(segment->head(), segment->tail(), path));
} while ((segment = segment->next()));
path->finishContour();
path->assemble();
}
void SkOpContour::toReversePath(SkPathWriter* path) const {
const SkOpSegment* segment = fTail;
do {
SkAssertResult(segment->addCurveTo(segment->tail(), segment->head(), path));
} while ((segment = segment->prev()));
path->finishContour();
path->assemble();
}
SkOpSpan* SkOpContour::undoneSpan() {
SkOpSegment* testSegment = &fHead;
bool allDone = true;
do {
if (testSegment->done()) {
continue;
}
allDone = false;
return testSegment->undoneSpan();
} while ((testSegment = testSegment->next()));
if (allDone) {
fDone = true;
}
return nullptr;
}
void SkOpContourBuilder::addConic(SkPoint pts[3], SkScalar weight) {
this->flush();
fContour->addConic(pts, weight);
}
void SkOpContourBuilder::addCubic(SkPoint pts[4]) {
this->flush();
fContour->addCubic(pts);
}
void SkOpContourBuilder::addCurve(SkPath::Verb verb, const SkPoint pts[4], SkScalar weight) {
if (SkPath::kLine_Verb == verb) {
this->addLine(pts);
return;
}
SkArenaAlloc* allocator = fContour->globalState()->allocator();
switch (verb) {
case SkPath::kQuad_Verb: {
SkPoint* ptStorage = allocator->makeArrayDefault<SkPoint>(3);
memcpy(ptStorage, pts, sizeof(SkPoint) * 3);
this->addQuad(ptStorage);
} break;
case SkPath::kConic_Verb: {
SkPoint* ptStorage = allocator->makeArrayDefault<SkPoint>(3);
memcpy(ptStorage, pts, sizeof(SkPoint) * 3);
this->addConic(ptStorage, weight);
} break;
case SkPath::kCubic_Verb: {
SkPoint* ptStorage = allocator->makeArrayDefault<SkPoint>(4);
memcpy(ptStorage, pts, sizeof(SkPoint) * 4);
this->addCubic(ptStorage);
} break;
default:
SkASSERT(0);
}
}
void SkOpContourBuilder::addLine(const SkPoint pts[2]) {
// if the previous line added is the exact opposite, eliminate both
if (fLastIsLine) {
if (fLastLine[0] == pts[1] && fLastLine[1] == pts[0]) {
fLastIsLine = false;
return;
} else {
this->flush();
}
}
memcpy(fLastLine, pts, sizeof(fLastLine));
fLastIsLine = true;
}
void SkOpContourBuilder::addQuad(SkPoint pts[3]) {
this->flush();
fContour->addQuad(pts);
}
void SkOpContourBuilder::flush() {
if (!fLastIsLine) {
return;
}
SkArenaAlloc* allocator = fContour->globalState()->allocator();
SkPoint* ptStorage = allocator->makeArrayDefault<SkPoint>(2);
memcpy(ptStorage, fLastLine, sizeof(fLastLine));
(void) fContour->addLine(ptStorage);
fLastIsLine = false;
}
|
HGF-mediated crosstalk between cancer-associated fibroblasts and MET-unamplified gastric cancer cells activates coordinated tumorigenesis and metastasis
Cancer-associated fibroblasts (CAFs) are important components of the tumor stroma and play a key role in tumor progression. CAFs engage in crosstalk with tumor cells through various kinds of cytokines. In the present study, we screened hepatocyte growth factor (HGF) as a cytokine predominantly originating from CAFs. CAFs-derived HGF was found to promote the proliferation, migration, and invasion of MET-unamplified gastric cancer (GC) through activation of the HGF/c-Met/STAT3/twist1 pathway. It also activated the interleukin (IL)-6/IL-6R/JAK2/STAT3/twist1 pathway by up-regulating IL-6R expression. As IL-6 was also found to upregulate c-Met expression, we identified the cooperation of HGF and IL-6 in enhancing the characteristics of CAFs. In vivo experiments revealed that CAFs-derived HGF promoted tumorigenesis and metastasis of MET-unamplified GC. Gene set enrichment analysis (GSEA) was performed to confirm our findings. Our study found that the increased expression of HGF in CAFs induced by MET-unamplified GC contributed to the malignant phenotype of both MET-unamplified GC and CAFs in the tumor microenvironment.
Introduction
The tumor stroma, which comprises the extracellular matrix (ECM) and various kinds of stromal cells, such as fibroblasts, inflammatory cells, and endothelial cells, has a marked influence on tumor initiation and progression 1,2. Fibroblasts, the predominant cells in the stroma, play crucial roles in maintaining the homeostasis of the ECM and adjacent epithelia through direct stromal-epithelial contact and secretion of cytokines 3.
However, following neoplastic transformation of the epithelia, local normal fibroblasts (NFs) are educated to become CAFs, which are phenotypically and functionally distinguishable from their normal counterparts through enhanced marker expression, mainly alpha smooth muscle actin (α-SMA) and others including fibroblast activation protein (FAP), C-X-C motif chemokine ligand-12/stromal cell-derived factor-1 (CXCL-12/SDF-1), fibroblast-specific protein-1 (FSP-1), platelet-derived growth factor receptor-α (PDGFR-α), and platelet-derived growth factor receptor-β (PDGFR-β) 3,4. Once transformed, CAFs show an enhanced capacity to promote malignant processes via secretion of growth factors and inflammatory factors 5,6. An increasing number of articles have reported that the crosstalk between tumor and stromal cells creates an appropriate microenvironment for tumor growth and metastasis 3,7. CAFs actively communicate with cancer cells and promote tumor progression through cytokines such as HGF, IL-6, TGF-β, VEGF, FGF, and CXCL12. Among the cytokines secreted by CAFs, HGF and IL-6 participate in phenotype modulation of cancer cells in many solid tumors 11,12. HGF was originally identified as a stimulating factor promoting the mitosis of hepatocytes, and was subsequently found to accelerate wound healing and histodifferentiation. HGF from fibroblasts and c-Met from tumor cells form a signaling axis that is strongly correlated with proliferation, metastasis, and angiogenesis 16,17. Several studies reported that tumor cells with high c-Met expression due to copy number alteration of the proto-oncogene MET showed no response to stimulation by its ligand 18,19, whereas cells without amplification remained responsive 8,20. On the other hand, IL-6, a multifunctional cytokine originally identified as a regulator of immune and inflammatory responses 21, proved to be another important mediator linking epithelial and stromal cells 9,12.
IL-6 binds to a cell-surface type I cytokine-receptor complex consisting of the IL-6R α-chain (IL-6Rα) and a common cytokine-receptor signal-transducing subunit, gp130, and then activates STAT3 through phosphorylation of Tyr705 via the JAK2 signaling pathway 22,23. It has been well elucidated that an enhanced IL-6/JAK2/STAT3 axis increases the risk of oncogenesis in ovarian, renal, and breast cancers. In the present study, we identified the cooperation of HGF and IL-6 acting on both MET-unamplified GC and fibroblasts. Furthermore, we characterized the molecular mechanisms underlying this cooperative effect on tumor growth and metastasis.
HGF is predominantly derived from CAFs in the GC microenvironment
CAFs and their normal counterparts were isolated from GC tissues and corresponding non-cancerous tissues, respectively. Immunofluorescent staining identified spindle-shaped fibroblasts by vimentin, and CAFs were verified by enhanced expression of α-SMA and FAP (Fig. 1a). Markers of epithelial cells, endothelial cells, and leukocytes were also used to ensure the purity of the fibroblasts (Supplement Fig. 1A). An animal model was built to assess the contribution of CAFs to the migration of GC cells. CAFs labeled with DiI were injected into athymic nude mice through the caudal vein, followed by MGC803 cells labeled with DiO a week later. The mice were sacrificed in 1 week and lung tissues were examined. In most cases, MGC803 cells assembled at the same places where CAFs had settled (Fig. 1b). This result suggests the existence of cytokines derived from CAFs but not from MGC803 cells. To identify these factors, a number of cytokines mainly from stromal cells but rarely from epithelial cells were subjected to quantitative real-time PCR (qRT-PCR) in MGC803 cells and CAFs. As shown in Fig. 1c, the HGF mRNA level was significantly higher in CAFs than in MGC803 cells.
To confirm the dominant source of HGF, one immortalized normal gastric epithelial cell line (GES-1), 12 human GC cell lines, and three pairs of primary fibroblasts were subjected to qRT-PCR and enzyme-linked immunosorbent assay (ELISA). HGF was mainly expressed in fibroblasts, especially in CAFs (Fig. 1d, e). In addition, HGF and α-SMA mRNA expression was examined in 35 pairs of GC and corresponding adjacent non-cancerous gastric tissues. Both HGF and α-SMA mRNA expression levels were significantly higher in GC tissues (Supplement Fig. 1B), and HGF was positively correlated with α-SMA at the mRNA level (Supplement Fig. 1C). In conclusion, these results suggested that HGF is predominantly expressed in CAFs and may serve distinct functions in GC progression.
HGF from CAFs promotes MET-unamplified GC proliferation and migration in vitro
Immunofluorescence staining of c-Met, the only known receptor of HGF, in GC tissues and adjacent non-cancerous gastric tissues showed that c-Met expression was higher in GC tissues than in normal tissues, and higher in GC cells than in fibroblasts (Supplement Fig. 2A). An online database of Gene Expression across Normal and Tumor tissues (GENT), containing more than 21,000 samples, was used to confirm the higher expression of the MET gene in tumor tissues, especially in GC tissues (Supplement Fig. 2B). Furthermore, analysis of a platform of 20,981 tumor samples from The Cancer Genome Atlas (TCGA) in the cBioPortal Web resource (cBioPortal for Cancer Genomics) revealed that amplification of the MET gene accounted for a considerable part of its alterations, especially in GC (Supplement Fig. 2C). In addition, MET gene alteration was correlated with disease-free survival but not with overall survival (Supplement Fig. 2D). GC cell lines were classified into non-MET, MET-amplified, and MET-unamplified according to the copy number of the oncogene MET, as described in a previous study 27.
NCI-N87 was selected as non-MET, Hs-746T and MKN45 as MET-amplified, and MGC803 and AGS as MET-unamplified according to the expression of c-Met and phospho-c-Met (Tyr1234/1235) proteins (Fig. 2a). To evaluate the effect of HGF derived from CAFs on the proliferation of GC, an appropriate concentration of the c-Met inhibitor crizotinib was determined (Supplement Fig. 3). As shown in Fig. 2b, NCI-N87, without c-Met expression, showed no difference whether or not HGF or crizotinib was present. The MET-amplified GC cells, Hs-746T and MKN45, showed no response to HGF but were highly sensitive to crizotinib. However, the MET-unamplified GC cells, MGC803 and AGS, showed lower baseline proliferation with HGF neutralization or c-Met inhibition. CAFs promoted migration of all cell lines in a co-culture system, but only MET-unamplified GC cells were sensitive to CAFs-derived HGF (Fig. 2c). Recombinant human HGF protein promoted migration of MET-unamplified GC cells but not of non-MET or MET-amplified GC cells (Fig. 2d). The above results suggested that only MET-unamplified GC cells were subject to the influence of CAFs-derived HGF on cell proliferation and migration.
Fig. 2 legend (partial): … were measured by western blot and FC. Lines and areas were used to indicate protein expression: black dotted lines, stained with isotype-control IgG; green solid lines, p-c-Met; red solid lines, c-Met. b CAFs-derived HGF promoted the proliferation of MET-unamplified GC cells. In the groups with crizotinib, GC cells were pretreated with crizotinib for 6 h before they were mixed with CAFs. c, d CAFs-derived HGF promoted the migration of MET-unamplified GC cells through c-Met. In the groups with crizotinib, GC cells were pretreated with crizotinib for 6 h before the transwell assays. e, f HGF mRNA expression and protein levels in MGC803, AGS, and CAFs were measured by qRT-PCR and ELISA. HGF (50 ng/ml); HGFab (300 ng/ml); crizotinib (0.1 μM). (ns, no significant difference; *P < 0.05; **P < 0.01; ***P < 0.001)
It was interesting that the co-culture system was more powerful than CAFs-conditioned medium in facilitating migration (Fig. 2c). Thus, the expression of HGF in MGC803 cells, AGS cells, and CAFs was examined. CAFs showed enhanced HGF mRNA and protein expression when co-cultured with MGC803 cells and AGS cells (Fig. 2e, f). However, there was no difference in HGF secretion between the co-culture system and the direct contact system (Fig. 2f). These data emphasize the importance of CAFs in the tumor microenvironment and indicate that CAFs-derived HGF specifically promotes MET-unamplified GC progression.
The tumor-promoting effects induced by CAFs-derived HGF on MET-unamplified GC cells are mediated through the activation of ERK1/2 and STAT3 signaling pathways
HGF bound to c-Met and triggered a number of downstream oncogenic signaling cascades, such as PI3K/AKT, ERK1/2, and p-STAT3, in the MET-unamplified GC cells MGC803 and AGS (Fig. 3a and Supplement Fig. 4A). Gene set enrichment analysis (GSEA) using RNA-seq of 415 GC samples from TCGA and microarray profiles of 300 GC samples from GSE62254 showed that HGF was highly correlated with epithelial-mesenchymal transition (EMT) (data not shown). We then examined changes in typical markers of EMT. Both treatment with recombinant human HGF protein and co-culture with CAFs markedly decreased epithelial marker (E-cadherin) expression and increased mesenchymal marker (N-cadherin, Vimentin, Snail, Slug, and Twist1) expression in MGC803 and AGS cells (Fig. 3b and Supplement Fig. 4C). When HGF was inhibited, whether by an HGF-neutralizing antibody or by small-interfering RNA (Supplement Fig. 4B), the EMT induced by CAFs was impaired (Fig. 3b and Supplement Fig. 4C). As the twist1 protein is known to promote tumor metastasis by inducing invadopodia formation 28, we analyzed the TCGA and GSE62254 databases and found a positive correlation between HGF and twist1 (Supplement Fig. 4D, E).
To investigate how CAFs-derived HGF influenced twist1 expression, MGC803 and AGS cells were co-cultured with CAFs, and downstream signals of HGF/c-Met were blocked using the corresponding inhibitors. Twist1 expression was significantly inhibited by treatment with the MEK1/2 inhibitor U0126 and the STAT3 inhibitor S3I-201, but not with the PI3K/AKT inhibitor LY294002 (Fig. 3c). To confirm whether the ERK1/2 and STAT3 signaling pathways participated in HGF-enhanced proliferation, migration, and invasion, corresponding experiments were conducted. Proliferation of MGC803 and AGS cells was significantly inhibited when HGF expression in CAFs was decreased (Fig. 3d). Meanwhile, cell proliferation was also dramatically impaired after treatment with U0126 and S3I-201 compared with untreated groups (Fig. 3d). In addition, inhibition of HGF expression, as well as the inhibitors of the ERK1/2 and STAT3 signaling pathways, significantly reduced the numbers of migrated and invaded MGC803 and AGS cells (Fig. 3e). Altogether, these in vitro data suggest that CAFs-derived HGF increases twist1 expression in MET-unamplified GC cells and exerts its biological function via activation of the ERK1/2 and STAT3 signaling pathways.
Positive cytokine-receptor interactions between MET-unamplified GC and CAFs increase twist1 expression and radically promote GC progression in vitro
Given the above results, it was puzzling that only S3I-201 decreased the twist1 expression induced by recombinant human HGF (Supplement Fig. 5A). Looking back, we found that ERK1/2 signal inhibition induced STAT3 signal inhibition; meanwhile, STAT3 signal inhibition also induced ERK1/2 signal inhibition (Fig. 3c). It is conceivable that there is a crosstalk between GC cells and CAFs, which indicates the complexity of the tumor microenvironment. As the STAT3 signal is commonly activated by the IL-6/JAK2 signaling pathway, the expression of IL-6 mRNA and protein in MGC803 cells, AGS cells, and CAFs was examined.
IL-6 was mainly expressed in CAFs and upregulated in the co-culture system (Supplement Fig. 5B). Immunofluorescence staining of IL-6 and IL-6R in GC tissues and adjacent non-cancerous gastric tissues showed that IL-6 was mainly expressed in fibroblasts, while IL-6R was expressed in both cancer cells and fibroblasts (Supplement Fig. 6A, B). Considering this, we supposed that CAFs-derived HGF enhanced the IL-6/JAK2/STAT3 signaling pathway in GC cells when they were co-cultured with CAFs (in the presence of IL-6), but not when GC cells were cultured alone (in the absence of IL-6). To verify this idea, GSEA and correlation analyses were performed, and the results showed a positive correlation between HGF and IL-6R (Supplement Fig. 5C, D). The regulation of IL-6R by HGF was investigated next. As shown in Fig. 4a, recombinant human HGF protein increased the expression of IL-6R in both MGC803 and AGS cells. We also found that IL-6 could increase c-Met expression, as has also been reported in myeloma cells 29. A co-culture system was then built to confirm the above results. As shown in Fig. 4b, co-culture with CAFs increased the expression of c-Met and IL-6R in GC cells, which was reversed by IL-6 neutralization and HGF neutralization, respectively. To investigate the mechanism by which HGF regulated IL-6R expression, the MEK1/2 inhibitor U0126 was used to inhibit ERK1/2 signaling. U0126 decreased IL-6R expression in MGC803 and AGS cells in the co-culture system (Fig. 4c). Previous studies have shown that the IL-6/JAK2/STAT3 axis plays an active role in the oncogenesis of tumors 22,26,30. To avoid interference from the p-STAT3 induced by HGF/c-Met signaling, the JAK2 inhibitor AG490 was applied as an IL-6/JAK2/STAT3 signaling inhibitor. AG490 decreased c-Met expression in MGC803 and AGS cells when they were co-cultured with CAFs (Fig. 4c). The above results suggested that CAFs-derived HGF increased IL-6R expression in MET-unamplified GC cells through the HGF/c-Met/ERK1/2 signaling pathway, and that CAFs-derived IL-6 increased c-Met expression in MET-unamplified GC cells through the IL-6/IL-6R/JAK2/STAT3 signaling pathway. To elucidate the influence of IL-6R and STAT3 on twist1 expression, IL-6R and STAT3 expression was silenced by transfecting IL-6R small-interfering RNA (siRNA) and STAT3 short hairpin RNA (shRNA) into MGC803 and AGS cells (Supplement Fig. 5E, F). In the co-culture system, twist1 expression decreased in GC cells transfected with IL-6R siRNA or STAT3 shRNA (Fig. 4d). Immunofluorescence staining of p-STAT3 and twist1 in MGC803 cells indicated their co-expression under the condition of CAFs-derived HGF (Supplement Fig. 5G). Next, the TWIST1 promoter region was analyzed for potential STAT3-binding sites using the JASPAR database and ALGGEN-PROMO, and the result was consistent with a previous study 31. Chromatin immunoprecipitation assays were then performed in both MGC803 cells and GC tissues. As indicated in Fig. 4e, CAFs activated the binding of p-STAT3 to the STAT3-binding site (-71 to -80 relative to the transcription start site) in the TWIST1 promoter.
Fig. 3 legend: CAFs-derived HGF induces EMT and promotes proliferation, migration, and invasion of MET-unamplified GC cells via ERK1/2 and STAT3 signaling. a Downstream oncogenic signals triggered by HGF in MGC803 cells. b Expression of EMT markers in MGC803 cells detected by western blotting. MGC803 cells were lysed after treatment with recombinant human HGF protein for 2 days or co-culture with CAFs for 2 days. c Twist1 expression in MGC803 and AGS cells detected by western blotting. GC cells were pretreated with inhibitors for 6 h, and the same concentrations of these inhibitors were added into the co-culture system. d Schematic charts of cell growth measured by CCK-8. GC cells were pretreated with inhibitors for 6 h before they were mixed with CAFs. e Cell migration and invasion of MGC803 and AGS cells with different treatments as indicated, determined using transwell assays. Scale bars, 200 μm. HGF (50 ng/ml); HGFab (300 ng/ml); crizotinib (0.1 μM); LY294002 (50 μM); U0126 (20 μM); S3I-201 (100 μM). (*P < 0.05; **P < 0.01; ***P < 0.001)
Fig. 4 legend (partial): b IL-6R and c-Met expression in the co-culture system detected by western blotting. GC cells and CAFs were co-cultured for 2 days. c HGF increased IL-6R expression via ERK1/2 signaling, and IL-6 increased c-Met expression via JAK2 signaling. GC cells were pretreated with inhibitors for 6 h, and the same concentrations of these inhibitors were added into the co-culture system. d Twist1 expression in GC cells transfected with shRNA or siRNA detected by western blotting. e ChIP assays performed in MGC803 cells and in GC tissues. f Cell proliferation of MGC803 and AGS cells measured by CCK-8 assays. g Migration and invasion of MGC803 and AGS cells measured by transwell assays. Scale bars, 200 μm. HGF (50 ng/ml); HGFab (300 ng/ml); IL-6 (10 ng/ml); IL-6ab (150 ng/ml); U0126 (20 μM); AG490 (10 μM). (*P < 0.05; **P < 0.01; ***P < 0.001)
Fig. 5 legend: a Gene set enrichment analyses (GSEA) of GC samples from TCGA showed that high HGF (left) and IL-6 (right) expression was positively associated with upregulation of the carcinoma-associated fibroblast phenotype. Each bar corresponds to one gene. b GSEA results of HGF (left) and IL-6 (right) in GC samples from GSE62254. c HGF and IL-6 increased CAFs marker expression in NFs. NFs were lysed after treatment with HGF and IL-6 for 2 days. d HGF and IL-6 neutralization decreased CAFs marker expression in CAFs. e HGF and IL-6 were positively correlated with α-SMA and FAP in nine pairs of CAFs (red solid triangles) and NFs (blue solid circles), respectively. f Positive correlations of HGF and CAFs markers were analyzed with samples from TCGA and GSE62254. HGF (50 ng/ml); HGFab (300 ng/ml); IL-6 (10 ng/ml); IL-6ab (150 ng/ml). (**P < 0.01; ***P < 0.001)
Functional studies were performed to further confirm the biological roles of CAFs-derived HGF via IL-6R and STAT3. Proliferation, migration, and invasion of MET-unamplified GC cells were significantly inhibited when IL-6R and STAT3 expression decreased (Fig. 4f, g). Taken together, we hypothesize that CAFs-derived HGF induces twist1 expression not only by activating HGF/c-Met/STAT3 but also by enhancing the IL-6/IL-6R/JAK2/STAT3 signaling pathway through increased IL-6R expression, and that CAFs-derived IL-6 intensifies the promoting effects of HGF by increasing c-Met expression, thus building positive cytokine-receptor interactions between MET-unamplified GC cells and CAFs. This cooperation thereby accelerates MET-unamplified GC progression. To further verify the correlation between HGF, IL-6, and the characteristics of CAFs, the mRNA expression of HGF, IL-6, and the CAFs markers α-SMA and FAP was examined in nine paired CAFs and NFs. Both HGF and IL-6 were found to be positively correlated with α-SMA and FAP, respectively (Fig. 5e). In addition, correlation analysis showed that both HGF and IL-6 were positively correlated with CAFs markers (Fig. 5f and Supplement Fig. 7B). Activated fibroblasts are functionally distinguishable from their homologous quiescent fibroblasts 33,34. Both HGF and IL-6 facilitated cell migration of NFs, and the number of migrating NFs induced by IL-6 was significantly increased by HGF (Supplement Fig. 7C). Meanwhile, co-culture with MET-unamplified GC cells promoted cell migration of CAFs, which was reversed by both HGF neutralization and IL-6 neutralization (Supplement Fig. 7D). These observations suggested that both HGF and IL-6 participate in the transdifferentiation of NFs to CAFs, and that HGF can enhance the promoting effect of IL-6. Thus, the cooperation of HGF and IL-6 influences not only MET-unamplified GC cells but also fibroblasts.
CAFs-derived HGF promotes MET-unamplified GC tumorigenesis and metastasis through STAT3 signaling in vivo
The promoting effects of HGF on cell proliferation and migration were confirmed by GSEA with databases of GC samples from TCGA and GSE62254, respectively (Fig. 6a, b). The functions of CAFs-derived HGF in MET-unamplified GC tumorigenesis and metastasis were evaluated in vivo. Co-injection of MGC803 cells and CAFs resulted in more progressive growth than MGC803 cells alone (Fig. 6c, d). However, inhibition of HGF expression in CAFs and of STAT3 in MGC803 cells significantly decreased tumor growth (Fig. 6c, d). Additionally, co-injection of MGC803 cells and CAFs significantly increased the average weight of tumors compared with MGC803 cells alone (Fig. 6e), which was reversed by both inhibition of HGF expression in CAFs (0.602 ± 0.062 g vs. 0.876 ± 0.256 g, P = 0.003) and inhibition of STAT3 expression in MGC803 cells (0.546 ± 0.156 g vs. 0.966 ± 0.196 g, P = 0.002) (Fig. 6e). Immunohistochemistry staining showed that CAFs significantly increased Ki67, vimentin, and twist1 expression and decreased E-cadherin expression, which was reversed by inhibition of STAT3 expression (Fig. 7a). In addition, the incidences of pulmonary metastasis for MGC803 alone, MGC803 + CAFsiNC, MGC803 + CAFsiHGF, MGC803shNC + CAF, and MGC803shSTAT3 + CAF were 0%, 80%, 20%, 100%, and 20% under the microscope, respectively (Fig. 6f). Likewise, the number and size of metastatic clusters in the MGC803 + CAFsiHGF and MGC803shSTAT3 + CAF groups were smaller than those of the MGC803 + CAFsiNC and MGC803shNC + CAF groups, respectively (Fig. 6f). In addition, abundant fibroblasts were found in the metastatic clusters under the microscope (Fig. 6f), which is consistent with the result of the in vivo chemotaxis assay (Fig. 1b). Given the in vitro results above, we conclude that CAFs-derived HGF promotes tumorigenesis and metastasis of MET-unamplified GC in vivo, in part, via STAT3 signaling (Fig. 7b).
However, the underlying mechanisms of the interactions between tumor cells and activated fibroblasts remain largely unexplored. In the current study, the chemotaxis of tumor cells induced by CAFs was evaluated in vivo by co-localization of Dil-labeled CAFs and DiO-labeled GC cells, and then the main cytokines secreted by CAFs and GC cells were compared. HGF and IL-6 were identified as the key CAFs-derived cytokines. As the only known tyrosine kinase receptor of HGF, c-Met has shown its unlimited proto-oncogene potential in both an HGF-dependent and an HGF-independent manner in various solid tumors. High expression of c-Met correlates with poor survival of GC patients and breast cancer patients 40,41, and a series of inhibitors, such as crizotinib, and its antibodies have been applied in clinical practice. Though MET amplification accounts for only a small part of total GC patients 42,43, it is the most common MET gene alteration and leads to poor disease-free survival in GC (Supplement Fig. 2C, D). MET amplification induces a highly phosphorylated state of c-Met, which can activate several intracellular signaling pathways without HGF 18. We tested whether HGF could change the functional phenotype of GC cells with different states of c-Met and p-c-Met expression, and found that HGF affected only MET-unamplified GC cells, which suggests that CAFs-derived HGF participates in communication with a selective group of GC cells in the tumor environment. Twist1 is a basic helix-loop-helix domain-containing transcription factor that induces EMT and promotes tumor metastasis 28,44,45. HGF and IL-6 have been reported to induce EMT of tumor cells 46,47. In the present study, we showed that CAF-derived HGF and IL-6 were upregulated in the co-culture system and acted as positive regulators of twist1 expression. Further, we found that HGF induced the expression of IL-6R and thus activated the IL-6/IL-6R/JAK2/STAT3/twist1 signaling pathway. 
We also found increased c-Met expression in MET-unamplified GC cells in response to recombinant human IL-6. These results indicate the important roles of HGF and IL-6 in the complicated reciprocal interactions between CAFs and MET-unamplified GC cells. Fibroblasts acquire a malignant phenotype when transformed to cancer-associated fibroblasts 48,49. IL-6 is a stimulative factor that can accelerate this process 12,32. Given that CAFs are the primary source of HGF and IL-6, in the current study, the expression of CAFs markers in NFs was increased when stimulated with HGF and IL-6. Meanwhile, the enhanced expression of these markers in CAFs induced by MET-unamplified GC cells was reversed by HGF neutralization and IL-6 neutralization. This indicates the cooperative work of HGF and IL-6 in the transdifferentiation of quiescent fibroblasts to CAFs and in maintaining the characteristics of CAFs. Gene set enrichment analyses of GC samples from TCGA and GSE62254 showed that high HGF expression was positively correlated with tumor growth and metastasis, which was confirmed both by functional experiments of cell proliferation, migration, and invasion in vitro and in animal models. In summary, when MET-unamplified GC cells were co-cultured with CAFs, HGF expression was significantly increased. 
Fig. 7 Immunohistochemistry staining of tumors from nude mice and schematic diagram showing the effects of CAFs-derived HGF and IL-6 on MET-unamplified GC cells. a Immunohistochemistry staining of tumors showed that CAFs increased the expression of Ki67, twist1, and vimentin, as well as decreased the expression of E-cadherin in MGC803 cells, which were reversed by inhibiting STAT3 expression. Scale bars, 100 μm. b Schematic diagram showing that the cooperation of HGF and IL-6 enhanced the characteristics of CAFs and promoted twist1 expression via STAT3 signaling in MET-unamplified GC cells. 
Increased HGF intensified the malignant phenotype of both MET-unamplified gastric cancer cells and CAFs; CAFs with an intensified malignant phenotype in turn facilitated the expression of HGF, thus building a positive crosstalk. CAFs-derived HGF and IL-6 upregulated each other's receptor in MET-unamplified GC cells and collaboratively facilitated phosphorylation of STAT3, thus promoting tumorigenesis and metastasis of MET-unamplified GC. However, how MET-unamplified GC cells promote the expression of HGF in CAFs remains unclear and requires further study. Our study linked HGF-mediated crosstalk to the control of MET-unamplified GC progression, and factors participating in this crosstalk may serve as prognostic indicators and therapeutic targets. Crosstalk between tumor cells and stromal cells is essential for tumor progression, and a better understanding of the underlying mechanisms accelerates the discovery of therapeutic interventions. It has been shown that anti-HGF has clinical benefits in a subgroup of pulmonary adenocarcinoma 50. Thus, HGF-targeted therapy could be a possible approach in the treatment of MET-unamplified GC. However, extensive research is required before its application. 
Cell lines and primary cell isolation
Human GC cell lines SGC7901, BGC823, AGS, SNU-1, MKN45, MKN28, KATOIII, SNU-16, NCI-N87, HGC27, Hs-746T, MGC803, and GES-1 (an immortalized normal gastric epithelial cell line) were obtained from the Shanghai Institute of Digestive Surgery, Shanghai, People's Republic of China. Cells were cultured and passaged in RPMI-1640 medium supplemented with 10% fetal calf serum according to the manufacturer's instructions. Primary fibroblasts were isolated from GC tissues and paired non-tumor tissues of nine independent GC patients who underwent radical gastrectomy at the Department of Surgery, Ruijin Hospital, School of Medicine, Shanghai Jiao Tong University, with informed consent. None of the patients received chemotherapy before surgery. 
GC tissues and paired non-tumor tissues were minced into organoids of 1 mm³ after being gently rinsed, and were seeded onto 10 cm petri dishes within 30 min after resection. Fibroblasts crept out and produced a homogenous fibroblastic cell population after 7 days of culture, as described in our previous study 12. To maintain the characteristics of primary cells, the subsequent experiments were performed using fibroblasts of up to ten passages. 
Reagents
Antibodies used to detect EMT, including E-cadherin, N-cadherin, Vimentin, Snail1, and Slug, were purchased from Cell Signaling Technology. FAP, CD31, and CD45 antibodies from Santa Cruz Biotech, α-SMA antibody from Abcam, and Pan-cytokeratin from Cell Signaling Technology were used to identify CAFs and paired NFs. Human HGF antibody and human IL-6 antibody were purchased from R&D Systems and used for HGF and IL-6 neutralization, respectively. Crizotinib and U0126 from Cell Signaling Technology and S3I-201 from Abcam were used as inhibitors of the HGF/c-Met, ERK1/2, and STAT3 signaling pathways, respectively. AG490 from MedChem Express was used as an IL-6/JAK2/STAT3 inhibitor. Recombinant human HGF protein from Abcam and recombinant human IL-6 protein from ABclonal Biotech were used as stimulating factors in the experiments. See Supplementary Table S1 for the reagents used. 
Gene set enrichment analysis (GSEA) and correlation analysis
RNA-seq of 415 GC patients from Stomach Adenocarcinoma (TCGA, Provisional) and microarray profiles of 300 GC patients from GSE62254 were downloaded from the cBioPortal platform (http://www.cbioportal.org/) and the GEO database (https://www.ncbi.nlm.nih.gov/geo/), respectively. GSEA 3.0 software (Broad Institute, Cambridge, MA, USA) was used for GSEA, and the number of permutations was set to 1000. The mean value of each gene's expression was used for correlation analysis. See Supplementary Table S2 for siRNA and shRNA target sequences. 
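The correlation analysis above uses the mean expression value of each gene across samples. A minimal sketch of such a pairwise Pearson correlation using SciPy (the expression values below are illustrative only, not data from the study):

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical mean expression values for HGF and a CAFs marker (alpha-SMA)
# across paired samples -- illustrative numbers, not study data.
hgf = np.array([1.2, 2.5, 3.1, 4.0, 5.2, 6.1])
acta2 = np.array([0.9, 2.1, 3.3, 3.8, 5.0, 6.4])

# pearsonr returns the correlation coefficient and a two-tailed P-value
r, p = pearsonr(hgf, acta2)
print(f"r = {r:.3f}, P = {p:.4f}")
```

A positive r with a small P-value is what the paper reports for HGF/IL-6 versus α-SMA/FAP; the actual coefficients would of course depend on the measured expression values.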
Flow cytometry
Pan-cytokeratin, CD31, and CD45 in CAFs and NFs, and c-Met and p-c-Met in GC cells, were measured by flow cytometry (FC). Briefly, equal numbers of cells were collected and permeabilized with 0.5% Triton for 10 min, washed with PBS, and then stained with primary antibody at 4°C overnight. Cells were then incubated with Alexa Fluor 488 dye-conjugated secondary antibody at room temperature for 3 h in the dark after being washed three times with PBS. The fluorescence intensity of Pan-cytokeratin, CD31, and CD45 in CAFs and NFs, and of c-Met and p-c-Met in GC cells, was detected by flow cytometer and the data were analyzed with FlowJo V10 software. 
Immunofluorescence
Briefly, cells and frozen sections of tissue samples were fixed in 1% neutralized formaldehyde buffer for 30 min at room temperature, followed by permeabilization with 0.5% Triton for 10 min. Cells were blocked with 3% bovine serum albumin and sections with normal non-immune goat serum, both for 30 min at room temperature. After being gently washed three times with PBS, cells and sections were incubated at 4°C overnight with primary antibodies against α-SMA, FAP, Vimentin, HGF, MET, IL-6, IL-6R, p-STAT3, and Twist1. Cells and sections were stained with appropriate Alexa dye-conjugated secondary immune reagents at room temperature in the dark after three PBS washes. Cells and sections were then examined with an Olympus BX70 microscope (Olympus, Tokyo, Japan) after being covered with slides with Anti-fade Reagent containing DAPI. 
Cell migration and invasion assay
Cell migration and invasion assays were conducted using Matrigel (BD Bioscience, CA) in 8 μm transwell chambers (Corning Life Science, Acton, MA, USA). For GC cell migration assays, 5 × 10⁴ GC cells were suspended in 200 μl serum-free medium and cultured in the upper chamber for 17 h, with or without 1.5 × 10⁴ fibroblasts in the lower chamber in 600 μl of 10% serum-conditioned medium. 
In the groups with inhibitors, GC cells were pretreated with U0126 (20 μM) or S3I-201 (100 μM), respectively, for 6 h, and then suspended in 200 μl serum-free medium containing the same concentration of these inhibitors. The setup was reversed for the migration assays of NFs and CAFs. For invasion assays, 1 × 10⁵ GC cells were used as described above and cultured for 24 h after the inserts were coated with 50 μl Matrigel/well. Inserts were fixed in formalin and stained with 0.1% crystal violet for 30 min, and non-migrating or non-invading cells were then removed with cotton swabs. Nikon Digital Sight DS-U2 (Nikon, Tokyo, Japan) and Olympus BX50 microscopes (Olympus, Tokyo, Japan) were used to photograph migrated and invaded cells. Each experiment was performed three times under the same conditions. 
Enzyme-linked immunosorbent assay (ELISA)
The levels of the cytokines HGF and IL-6 in supernatants of GC cells and fibroblasts were detected by ELISA kit (R&D Systems, Minneapolis, MN, USA) according to the manufacturer's instructions. Briefly, GC cells (1 × 10⁵) and fibroblasts (1 × 10⁵) were cultured alone or together, with or without 0.4 μm 6-well plate transwell inserts (Millipore), in 2 ml of RPMI-1640 complete medium for 36 h. After centrifuging at 12,000 × g for 10 min to remove cell debris, cancer cell- and fibroblast-conditioned media as well as co-culture medium from the lower wells were collected for ELISA. 
Quantitative real-time PCR (qRT-PCR)
Total RNA extracted from cells and tissues using Trizol reagent (Invitrogen, Carlsbad, CA) was reverse-transcribed to cDNA using a Reverse Transcription System (Promega, Madison, WI) according to the manufacturer's instructions. The mRNA levels were quantified by qRT-PCR using the SYBR Green PCR Master Mix (Applied Biosystems, Waltham, MA, USA) on an ABI Prism 7900HT sequence detection system (Applied Biosystems, CA, USA). The relative mRNA levels were evaluated based on the Ct values and normalized to glyceraldehyde 3-phosphate dehydrogenase (GAPDH). 
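The relative mRNA levels above are derived from Ct values normalized to GAPDH. A minimal sketch of the standard 2^(-ΔΔCt) relative quantification (the Ct values are illustrative, not from the study):

```python
def relative_expression(ct_target, ct_ref_gene, ct_target_ctrl, ct_ref_gene_ctrl):
    """Standard 2^(-ddCt) relative quantification.

    ct_target / ct_ref_gene: Ct of the gene of interest and of GAPDH in the
    treated sample; *_ctrl: the same Cts in the control sample.
    """
    delta_ct = ct_target - ct_ref_gene              # normalize to GAPDH
    delta_ct_ctrl = ct_target_ctrl - ct_ref_gene_ctrl
    return 2.0 ** -(delta_ct - delta_ct_ctrl)       # fold change vs. control

# Example: the target amplifies 2 cycles earlier relative to GAPDH than in
# the control sample -> 4-fold up-regulation.
fold = relative_expression(24.0, 20.0, 26.0, 20.0)
print(fold)  # 4.0
```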
The PCR primers for all genes are listed in Supplementary Table S2. 
Western blot analysis
In the co-culture system, GC cells and CAFs were co-cultured for 2 days. In the inhibition groups, GC cells were pretreated with inhibitors (crizotinib, LY294002, U0126, S3I-201, and AG490) for 6 h before being co-cultured with CAFs, and the same concentration of these inhibitors was added into the co-culture system for the 2 days until the cells were lysed in protein extraction reagent. Briefly, cells were lysed in mammalian protein extraction reagent (Pierce, Rockford, IL, USA) supplemented with protease and phosphatase inhibitor cocktail (Sigma-Aldrich, St. Louis, MO, USA). Equal amounts of protein samples were fractionated on 10% sodium dodecyl sulfate-polyacrylamide gel electrophoresis gels and then transferred onto 0.22 μm polyvinylidene fluoride (PVDF) membranes (Millipore, MA, USA). After blocking with 1× TBST buffer supplemented with 5% bovine serum albumin at 37°C for 2 h, the membranes were incubated at 4°C overnight with the corresponding primary antibodies. The membranes were then incubated with HRP-conjugated secondary antibody (1:5000, LI-COR, Nebraska, USA) for 2 h at room temperature. Thermo Pierce chemiluminescent (ECL) Western Blotting Substrate (Thermo, Waltham, MA, USA) and an infrared imaging system (LI-COR Biosciences, Lincoln, USA) were used to visualize the membranes. The antibodies used are shown in Supplementary Table S1. 
Chromatin immunoprecipitation (ChIP)
The ChIP assays were performed with the Enzymatic Chromatin IP Kit (#9005, CST) according to the manufacturer's instructions. Briefly, cells without treatment, cells co-cultured with cancer-associated fibroblasts, and GC tissues were cross-linked with 1% formaldehyde, and the reaction was stopped by glycine. Cells were collected via centrifugation for 5 min at 4°C, 1500 rpm. DNA was sheared by micrococcal nuclease to 150-900 bp, and the nuclear membrane was broken by sonication. STAT3 antibody and normal IgG were used in the chromatin immunoprecipitation. 
After reversing the protein/DNA cross-links, PCR was performed to detect sequences of the TWIST1 promoter. The product spanned the region from -45 to -329 of the Twist1 promoter, which includes the putative binding sites. The primers used are shown in Supplementary Table S2. 
Immunohistochemistry staining (IHC)
Tissue samples were fixed with formalin and embedded in paraffin before being sliced into 4 μm-thick sections, and immunohistochemistry staining was then performed following the EnVision two-step procedure of the Dako REAL™ Envision™ Detection System (Dako, Agilent Technologies, CA, USA). After antigen retrieval with 0.01 M citrate buffer (pH 6.0), samples were stained with the primary antibodies. Samples were then incubated with secondary antibody for 30 min at 37°C and visualized with DAB solution, followed by counterstaining with hematoxylin. The antibodies used are shown in Supplementary Table S1. 
In vivo tumorigenesis and metastasis
Four-week-old male healthy athymic nude mice received from the Institute of Zoology, Chinese Academy of Sciences were housed in a specific pathogen-free environment in the Animal Experimental Center, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine. The experiments were reviewed and approved by the Institutional Review Committee for Animal Use of Shanghai Jiao Tong University. Mice were randomly divided into different groups (five mice per group) and then injected subcutaneously in the flanks with MGC803 cells (2 × 10⁶) alone or together with CAFs (5 × 10⁵) suspended in 100 μl PBS. Tumor size was measured weekly using a digital Vernier caliper, and tumor volume was calculated using the following formula: tumor volume = (width² × length)/2. Mice were killed at 4 weeks, and tumors were weighed and processed for immunohistochemical analysis. For the in vivo chemotaxis assay, five-week-old male healthy athymic nude mice were injected with Dil-labeled CAFs through the caudal vein. 
One week later, the same mice were injected with DiO-labeled MGC803 cells. After another week, the mice were killed and the lungs were frozen, cut into 200 μm-thick frozen sections with a Cryostat (Leica, Germany), and immediately examined with an Olympus BX70 microscope (Olympus, Tokyo, Japan). For the pulmonary metastasis assay, nude mice were injected with CAFs (5 × 10⁵) or PBS (control) through the caudal vein, followed by MGC803 cells (2 × 10⁶) one week later. Mice were killed at 8 weeks, and the lungs were collected and sectioned to identify metastatic foci. 
Statistical analysis
Statistical analysis was performed using GraphPad Prism 6, and the experimental results are presented as mean ± standard deviation (SD). Differences between groups were compared by Student's t-test, and a two-tailed P-value ≤ 0.05 was considered significant. 
Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
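The tumor volume formula and the two-tailed Student's t-test described above can be sketched as follows (the tumor weights below are made-up illustrative numbers, not the study's measurements):

```python
import numpy as np
from scipy.stats import ttest_ind

def tumor_volume(width_mm, length_mm):
    # Formula used above: volume = (width^2 * length) / 2
    return (width_mm ** 2 * length_mm) / 2.0

print(tumor_volume(4.0, 8.0))  # 64.0 (mm^3)

# Two-tailed Student's t-test between two hypothetical groups of tumor
# weights (grams); the values are illustrative only.
group_a = np.array([0.55, 0.60, 0.62, 0.58, 0.66])
group_b = np.array([0.85, 0.92, 0.88, 0.95, 0.78])
t, p = ttest_ind(group_a, group_b)  # returns t statistic and two-tailed P
print(f"t = {t:.2f}, P = {p:.4f}")
```

With the study's threshold, the difference would be called significant when the reported two-tailed P ≤ 0.05.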
|
Fresh off his second big win over Sergey Kovalev this weekend, Andre Ward and his team were elated, with trainer Virgil Hunter even saying during the post-fight press conference that Andre Ward’s dream of one day winning a heavyweight title could become a reality.
Hunter says that if Ward does move up in weight to take on new challenges, he’d like Ward to pass right over the cruiserweight division and go right after perhaps the biggest name in the heavyweight division — Anthony Joshua.
“I’ve seen some things we can do against Joshua. We wouldn’t even have to put extra weight [on Ward] beyond what he walks around at, about 192 [pounds]. We’d just go right at him,” Hunter said.
Ward followed up Hunter’s message by saying that he used to think Hunter was crazy, but now he simply follows his trusted trainer’s lead, while adding:
“It’s a dream. I know it sounds crazy, but … I do really well against bigger fighters because of my stamina and because I’m strong.”
That said, Hunter did admit that at least for this fall (where HBO is holding a date for Ward) it might be more reasonable for them to take on either Badou Jack or Nathan Cleverly, who are currently in talks to land on the Mayweather-McGregor undercard. Ward himself has ruled out any possible third fight with Kovalev.
So, tell me, how would you feel about a potential Ward-Joshua fight? Do you think it’s simply a bridge too far for Andre, or could his craft be enough to offset Joshua’s physical advantages?
|
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Mon May 17 11:01:58 2021
@author: root
"""
import sklearn
from sklearn import datasets
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
import seaborn as sns
import torch.nn.functional as F
import torch.nn as nn
from torch.utils.data import DataLoader
from torch.utils.data import Dataset
import torch
from utils_two_moons import evaluate_model, brier_score, expectation_calibration_error
from utils_two_moons import NeuralNet, MCDropout, EnsembleNeuralNet
from utils_two_moons import mixup_log_loss
from training_loops import train_model_dropout
from utils_two_moons import MyData
from sklearn.neighbors import KernelDensity
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV
def get_device():
    if torch.cuda.is_available():
        device = 'cuda:0'
    else:
        device = 'cpu'
    return device
device = get_device()
################# 1.CREATE THE DATASETS #################
cm_bright = ListedColormap(['#FF0000', '#0000FF'])
batch_sample = 1000
X,Y = datasets.make_moons(n_samples=batch_sample, shuffle=True, noise=.1, random_state=None)
X_test,Y_test = datasets.make_moons(n_samples=batch_sample, shuffle=True, noise=.1, random_state=None)
plt.scatter(X[:, 0], X[:, 1], c=Y)
# Scale in x and y directions
aug_x = (1.5 - 0.5) * np.random.rand() + 0.5
aug_y = (2.5 - 1.5) * np.random.rand() + 1.5
aug = np.array([aug_x, aug_y])
X_scale = X * aug
plt.scatter(X[:, 0], X[:, 1], c=Y, cmap=cm_bright)
plt.scatter(X_scale[:, 0], X_scale[:, 1], marker='+',c=Y, cmap=cm_bright, alpha=0.4)
## rotation of -35 degrees
theta = (np.pi/180)* -35
rotation_matrix = np.array([[np.cos(theta), -np.sin(theta)],[np.sin(theta), np.cos(theta)]])
X_rot = np.dot(X,rotation_matrix)
plt.scatter(X[:, 0], X[:, 1], c=Y,cmap=cm_bright)
plt.scatter(X_rot[:, 0], X_rot[:, 1], marker='+', c=Y, cmap=cm_bright, alpha=0.4)
# We create the same dataset with more noise
X_noise,Y_noise = datasets.make_moons(n_samples=batch_sample, shuffle=True, noise=.3, random_state=None)
plt.scatter(X[:, 0], X[:, 1], c=Y, cmap=cm_bright)
plt.scatter(X_noise[:, 0], X_noise[:, 1], marker='+', c=Y_noise, cmap=cm_bright, alpha=0.4)
train_dataset = MyData(data=X,labels=Y)
test_dataset = MyData(data=X_test,labels=Y_test)
scale_dataset = MyData(X_scale, Y)
rot_dataset = MyData(X_rot, Y)
noise_dataset = MyData(X_noise, Y_noise)
trainLoader = DataLoader(train_dataset, batch_size=batch_sample)
testLoader = DataLoader(test_dataset, batch_size=batch_sample)
scaleLoader = DataLoader(scale_dataset, batch_size=batch_sample)
rotLoader = DataLoader(rot_dataset, batch_size=batch_sample)
noiseLoader = DataLoader(noise_dataset, batch_size=batch_sample)
################# 2.TRAINING #################
# Simple Neural Network
base_nn = NeuralNet(input_dim=2, hidden_dim=10, output_dim=2).double()
optimizer = torch.optim.Adam(base_nn.parameters(), lr=0.01)
MC_sample=1
crit = nn.CrossEntropyLoss()
n_epochs = 500
_, training_loss = train_model_dropout(base_nn, None, MC_sample, trainLoader, n_epochs, crit, optimizer, no_classes=2)
# Neural Network with MC Dropout
vi_nn = MCDropout(input_dim=2, hidden_dim=10, output_dim=2).double()
optimizer = torch.optim.Adam(vi_nn.parameters(), lr=0.01)
MC_sample=50
crit = nn.CrossEntropyLoss()
n_epochs = 500
_, training_loss = train_model_dropout(vi_nn, None, MC_sample, trainLoader, n_epochs, crit, optimizer, no_classes=2)
def estimate_input_density(data):
    # project the data onto its principal components (the data here is already 2-D)
    pca = PCA(n_components=2, whiten=False)
    data = pca.fit_transform(data)
    # use grid search cross-validation to optimize the bandwidth
    params = {'bandwidth': np.logspace(-1, 1, 20)}
    grid = GridSearchCV(KernelDensity(), params)
    grid.fit(data)
    print("best bandwidth: {0}".format(grid.best_estimator_.bandwidth))
    # use the best estimator to compute the kernel density estimate
    kde = grid.best_estimator_
    return kde, pca
kde, pca = estimate_input_density(X)
## Train an ensemble of NN
def train_ensemble(N, n_epochs, trainLoader):
    ensembles = []
    for i in range(N):
        base_nn = NeuralNet(input_dim=2, hidden_dim=10, output_dim=2).double()
        optimizer = torch.optim.Adam(base_nn.parameters(), lr=0.01)
        MC_sample = 1
        crit = nn.CrossEntropyLoss()
        _, training_loss = train_model_dropout(base_nn, None, MC_sample, trainLoader, n_epochs, crit, optimizer, no_classes=2)
        ensembles.append(base_nn)
    return ensembles
ensemble = train_ensemble(5, 500, trainLoader)
ensemble_nn = EnsembleNeuralNet(ensemble)
## Train with mixup
# Simple Neural Network
mu_nn = NeuralNet(input_dim=2, hidden_dim=10, output_dim=2).double()
optimizer = torch.optim.Adam(mu_nn.parameters(), lr=0.01)
MC_sample=1
crit = mixup_log_loss
n_epochs = 500
_, training_loss = train_model_dropout(mu_nn, None, MC_sample, trainLoader, n_epochs, crit, optimizer, no_classes=2, mixup=True)
## Train Fast Gradient Sign Method
# Simple Neural Network
fgsm_nn = NeuralNet(input_dim=2, hidden_dim=10, output_dim=2).double()
optimizer = torch.optim.Adam(fgsm_nn.parameters(), lr=0.01)
MC_sample=1
crit = nn.CrossEntropyLoss()
n_epochs = 500
_, training_loss = train_model_dropout(fgsm_nn, None, MC_sample, trainLoader, n_epochs, crit, optimizer, no_classes=2, mixup=False, fgsm=True)
#plt.plot(training_loss)
# Train using the density
#base_density_nn = NeuralNet(input_dim=2, hidden_dim=10, output_dim=2).double()
#optimizer = torch.optim.Adam(base_density_nn.parameters(), lr=0.01)
#MC_sample=1
#crit = nn.CrossEntropyLoss()
#n_epochs = 500
#_, training_loss = train_model_dropout(base_density_nn, None, MC_sample, trainLoader, n_epochs, crit, optimizer, no_classes=2, kde=kde, pca=pca)
################# 3.EVALUATION BASED ON ACCURACY #################
from uncertainty import sample_lowest_entropy, sample_highest_density, sample_lowest_entropy_highest_density
retained = [50, 60, 70, 80, 90, 100]
def model_accuracy_over_low_entropy_high_density_data_retained(model, kde, pca, data, label, MC_sample, no_classes):
    """
    Retain the data with the lowest predictive entropy and the highest input
    density at 6 different levels and put them in loaders.
    The accuracy at each level is computed and returned along with the
    associated loaders: the accuracies can be used to plot how the accuracy
    drops as more data is retained, and the loaders give access to the data
    sampled with the combined entropy/density criterion.
    """
    loader50 = sample_lowest_entropy_highest_density(.5, model, kde, pca, data, label, MC_sample, no_classes)
    loader60 = sample_lowest_entropy_highest_density(.6, model, kde, pca, data, label, MC_sample, no_classes)
    loader70 = sample_lowest_entropy_highest_density(.7, model, kde, pca, data, label, MC_sample, no_classes)
    loader80 = sample_lowest_entropy_highest_density(.8, model, kde, pca, data, label, MC_sample, no_classes)
    loader90 = sample_lowest_entropy_highest_density(.9, model, kde, pca, data, label, MC_sample, no_classes)
    loader100 = sample_lowest_entropy_highest_density(1., model, kde, pca, data, label, MC_sample, no_classes)
    acc_50 = evaluate_model(model, loader50, MC_sample, no_classes=no_classes)
    acc_60 = evaluate_model(model, loader60, MC_sample, no_classes=no_classes)
    acc_70 = evaluate_model(model, loader70, MC_sample, no_classes=no_classes)
    acc_80 = evaluate_model(model, loader80, MC_sample, no_classes=no_classes)
    acc_90 = evaluate_model(model, loader90, MC_sample, no_classes=no_classes)
    acc_100 = evaluate_model(model, loader100, MC_sample, no_classes=no_classes)
    acc = [acc_50, acc_60, acc_70, acc_80, acc_90, acc_100]
    loaders = [loader50, loader60, loader70, loader80, loader90, loader100]
    return acc, loaders
def model_accuracy_over_high_density_data_retained(model, kde, pca, data, label, MC_sample, no_classes):
    """
    Retain the data with the highest input density at 6 different levels
    and put them in loaders.
    The accuracy at each level is computed and returned along with the
    associated loaders: the accuracies can be used to plot how the accuracy
    drops as more data is retained, and the loaders give access to the data
    sampled with the high-density criterion.
    """
    loader50 = sample_highest_density(0.5, kde, pca, data, label)
    loader60 = sample_highest_density(0.6, kde, pca, data, label)
    loader70 = sample_highest_density(0.7, kde, pca, data, label)
    loader80 = sample_highest_density(0.8, kde, pca, data, label)
    loader90 = sample_highest_density(0.9, kde, pca, data, label)
    loader100 = sample_highest_density(1., kde, pca, data, label)
    acc_50 = evaluate_model(model, loader50, MC_sample, no_classes=no_classes)
    acc_60 = evaluate_model(model, loader60, MC_sample, no_classes=no_classes)
    acc_70 = evaluate_model(model, loader70, MC_sample, no_classes=no_classes)
    acc_80 = evaluate_model(model, loader80, MC_sample, no_classes=no_classes)
    acc_90 = evaluate_model(model, loader90, MC_sample, no_classes=no_classes)
    acc_100 = evaluate_model(model, loader100, MC_sample, no_classes=no_classes)
    acc = [acc_50, acc_60, acc_70, acc_80, acc_90, acc_100]
    loaders = [loader50, loader60, loader70, loader80, loader90, loader100]
    return acc, loaders
def model_accuracy_over_low_entropy_data_retained(model, data, label, MC_sample, no_classes):
    """
    Retain the data with the lowest predictive entropy at 6 different levels
    and put them in loaders.
    The accuracy at each level is computed and returned along with the
    associated loaders: the accuracies can be used to plot how the accuracy
    drops as more data is retained, and the loaders give access to the data
    sampled with the low-entropy criterion.
    """
    loader50 = sample_lowest_entropy(0.5, model, data, label, MC_sample, no_classes)
    loader60 = sample_lowest_entropy(0.6, model, data, label, MC_sample, no_classes)
    loader70 = sample_lowest_entropy(0.7, model, data, label, MC_sample, no_classes)
    loader80 = sample_lowest_entropy(0.8, model, data, label, MC_sample, no_classes)
    loader90 = sample_lowest_entropy(0.9, model, data, label, MC_sample, no_classes)
    loader100 = sample_lowest_entropy(1., model, data, label, MC_sample, no_classes)
    acc_50 = evaluate_model(model, loader50, MC_sample, no_classes=no_classes)
    acc_60 = evaluate_model(model, loader60, MC_sample, no_classes=no_classes)
    acc_70 = evaluate_model(model, loader70, MC_sample, no_classes=no_classes)
    acc_80 = evaluate_model(model, loader80, MC_sample, no_classes=no_classes)
    acc_90 = evaluate_model(model, loader90, MC_sample, no_classes=no_classes)
    acc_100 = evaluate_model(model, loader100, MC_sample, no_classes=no_classes)
    acc = [acc_50, acc_60, acc_70, acc_80, acc_90, acc_100]
    loaders = [loader50, loader60, loader70, loader80, loader90, loader100]
    return acc, loaders
### Comparing sampling methods against each other
def aggregate_accuracy_perturbation_retained_data(model, kde, pca, datasets, labels, MC_sample, no_classes):
    X_test, X_scale, X_rot, X_noise = datasets
    Y_test, Y, Y_noise = labels
    # Forward the MC_sample/no_classes arguments instead of hard-coding MC_sample=1,
    # so models such as MC dropout (MC_sample=50) are evaluated correctly.
    test_ende_acc, test_ende_loaders = model_accuracy_over_low_entropy_high_density_data_retained(model, kde, pca, X_test, Y_test, MC_sample=MC_sample, no_classes=no_classes)
    test_en_acc, test_en_loaders = model_accuracy_over_low_entropy_data_retained(model, X_test, Y_test, MC_sample=MC_sample, no_classes=no_classes)
    test_de_acc, test_de_loaders = model_accuracy_over_high_density_data_retained(model, kde, pca, X_test, Y_test, MC_sample=MC_sample, no_classes=no_classes)
    scale_ende_acc, scale_ende_loaders = model_accuracy_over_low_entropy_high_density_data_retained(model, kde, pca, X_scale, Y, MC_sample=MC_sample, no_classes=no_classes)
    scale_en_acc, scale_en_loaders = model_accuracy_over_low_entropy_data_retained(model, X_scale, Y, MC_sample=MC_sample, no_classes=no_classes)
    scale_de_acc, scale_de_loaders = model_accuracy_over_high_density_data_retained(model, kde, pca, X_scale, Y, MC_sample=MC_sample, no_classes=no_classes)
    noise_ende_acc, noise_ende_loaders = model_accuracy_over_low_entropy_high_density_data_retained(model, kde, pca, X_noise, Y_noise, MC_sample=MC_sample, no_classes=no_classes)
    noise_en_acc, noise_en_loaders = model_accuracy_over_low_entropy_data_retained(model, X_noise, Y_noise, MC_sample=MC_sample, no_classes=no_classes)
    noise_de_acc, noise_de_loaders = model_accuracy_over_high_density_data_retained(model, kde, pca, X_noise, Y_noise, MC_sample=MC_sample, no_classes=no_classes)
    rot_ende_acc, rot_ende_loaders = model_accuracy_over_low_entropy_high_density_data_retained(model, kde, pca, X_rot, Y, MC_sample=MC_sample, no_classes=no_classes)
    rot_en_acc, rot_en_loaders = model_accuracy_over_low_entropy_data_retained(model, X_rot, Y, MC_sample=MC_sample, no_classes=no_classes)
    rot_de_acc, rot_de_loaders = model_accuracy_over_high_density_data_retained(model, kde, pca, X_rot, Y, MC_sample=MC_sample, no_classes=no_classes)
    aggregate_ende = np.concatenate([test_ende_acc, scale_ende_acc, noise_ende_acc, rot_ende_acc], 1)
    aggregate_en = np.concatenate([test_en_acc, scale_en_acc, noise_en_acc, rot_en_acc], 1)
    aggregate_de = np.concatenate([test_de_acc, scale_de_acc, noise_de_acc, rot_de_acc], 1)
    loaders_ende = [test_ende_loaders, scale_ende_loaders, noise_ende_loaders, rot_ende_loaders]
    loaders_en = [test_en_loaders, scale_en_loaders, noise_en_loaders, rot_en_loaders]
    loaders_de = [test_de_loaders, scale_de_loaders, noise_de_loaders, rot_de_loaders]
    return (aggregate_ende, aggregate_en, aggregate_de), (loaders_ende, loaders_en, loaders_de)
datasets = [X_test, X_scale, X_rot, X_noise]
labels = [Y_test, Y, Y_noise]
(base_ende, base_en, base_de), base_loaders = aggregate_accuracy_perturbation_retained_data(base_nn, kde, pca, datasets, labels, 1, 2)
# The function returns a 2-tuple: (accuracies, loaders). Unpack accordingly,
# as done for base_nn above.
(vi_ende, vi_en, vi_de), vi_loaders = aggregate_accuracy_perturbation_retained_data(vi_nn, kde, pca, datasets, labels, 50, 2)
(en_ende, en_en, en_de), en_loaders = aggregate_accuracy_perturbation_retained_data(ensemble_nn, kde, pca, datasets, labels, 1, 2)
(mu_ende, mu_en, mu_de), mu_loaders = aggregate_accuracy_perturbation_retained_data(mu_nn, kde, pca, datasets, labels, 1, 2)
(ad_ende, ad_en, ad_de), ad_loaders = aggregate_accuracy_perturbation_retained_data(fgsm_nn, kde, pca, datasets, labels, 1, 2)
fig, ax = plt.subplots(1,5, figsize=(22,4))
ax[0].set_ylabel("Aggregate over perturbations")
ax[0].plot(base_ende.mean(1), label="Entropy-Density")
ax[0].plot(base_en.mean(1), label="Entropy")
ax[0].plot(base_de.mean(1), label="Density")
ax[0].legend()
ax[0].set_title("Softmax")
ax[1].plot(vi_ende.mean(1), label="Entropy-Density")
ax[1].plot(vi_en.mean(1), label="Entropy")
ax[1].plot(vi_de.mean(1), label="Density")
ax[1].legend()
ax[1].set_title("Dropout")
ax[2].plot(en_ende.mean(1), label="Entropy-Density")
ax[2].plot(en_en.mean(1), label="Entropy")
ax[2].plot(en_de.mean(1), label="Density")
ax[2].legend()
ax[2].set_title("Ensemble")
ax[3].plot(mu_ende.mean(1), label="Entropy-Density")
ax[3].plot(mu_en.mean(1), label="Entropy")
ax[3].plot(mu_de.mean(1), label="Density")
ax[3].legend()
ax[3].set_title("Mixup")
ax[4].plot(ad_ende.mean(1), label="Entropy-Density")
ax[4].plot(ad_en.mean(1), label="Entropy")
ax[4].plot(ad_de.mean(1), label="Density")
ax[4].legend()
ax[4].set_title("FGSM")
plt.savefig("retained_aggregate_over_perturbation")
# Plot the base model's accuracy with data retained, per criterion, on each dataset
fig, ax = plt.subplots(1,4, figsize=(22,4))
ax[0].plot(base_en[0], label="Entropy")
ax[0].plot(base_de[0], label="Density")
#ax[0].plot(base_test_de2_acc, label="Density relaxed 2")
#ax[0].plot(base_test_de1_1_acc, label="Density relaxed 1.1")
ax[0].plot(base_ende[0], label="Entropy-Density")
ax[0].legend()
ax[0].set_title("Test data")
ax[1].plot(base_en[1], label="Entropy")
ax[1].plot(base_de[1], label="Density")
#ax[1].plot(base_scale_de2_acc, label="Density relaxed 2")
#ax[1].plot(base_scale_de1_1_acc, label="Density relaxed 1.1")
ax[1].plot(base_ende[1], label="Entropy-Density")
ax[1].legend()
ax[1].set_title("Scale data")
ax[2].plot(base_en[2], label="Entropy")
ax[2].plot(base_de[2], label="Density")
#ax[2].plot(base_noise_de2_acc, label="Density relaxed 2")
#ax[2].plot(base_noise_de1_1_acc, label="Density relaxed 1.1")
ax[2].plot(base_ende[2], label="Entropy-Density")
ax[2].legend()
ax[2].set_title("Noise data")
ax[3].plot(base_en[3], label="Entropy")
ax[3].plot(base_de[3], label="Density")
#ax[3].plot(base_rot_de2_acc, label="Density relaxed 2")
#ax[3].plot(base_rot_de1_1_acc, label="Density relaxed 1.1")
ax[3].plot(base_ende[3], label="Entropy-Density")
ax[3].legend()
ax[3].set_title("Rotation data")
plt.savefig("retained_lowestEntropy_highestDensity")
### Comparing methods against each other
# Accuracies for data retained on the test set
base_test_acc, base_test_loaders = model_accuracy_over_low_entropy_data_retained(base_nn, X_test, Y_test, MC_sample=1, no_classes=2)
vi_test_acc, vi_test_loaders = model_accuracy_over_low_entropy_data_retained(vi_nn, X_test, Y_test, MC_sample=50, no_classes=2)
en_test_acc, en_test_loaders = model_accuracy_over_low_entropy_data_retained(ensemble_nn, X_test, Y_test, MC_sample=1, no_classes=2)
mu_test_acc, mu_test_loaders = model_accuracy_over_low_entropy_data_retained(mu_nn, X_test, Y_test, MC_sample=1, no_classes=2)
ad_test_acc, ad_test_loaders = model_accuracy_over_low_entropy_data_retained(fgsm_nn, X_test, Y_test, MC_sample=1, no_classes=2)
pde_test_acc, pde_test_loaders = model_accuracy_over_high_density_data_retained(base_nn,kde, pca, X_test, Y_test, MC_sample=1, no_classes=2)
# Accuracies for data retained on the scale perturbation set
base_scale_acc, base_scale_loaders = model_accuracy_over_low_entropy_data_retained(base_nn, X_scale, Y, MC_sample=1, no_classes=2)
vi_scale_acc, vi_scale_loaders = model_accuracy_over_low_entropy_data_retained(vi_nn, X_scale, Y, MC_sample=50, no_classes=2)
en_scale_acc, en_scale_loaders = model_accuracy_over_low_entropy_data_retained(ensemble_nn, X_scale, Y, MC_sample=1, no_classes=2)
mu_scale_acc, mu_scale_loaders = model_accuracy_over_low_entropy_data_retained(mu_nn, X_scale, Y, MC_sample=1, no_classes=2)
ad_scale_acc, ad_scale_loaders = model_accuracy_over_low_entropy_data_retained(fgsm_nn, X_scale, Y, MC_sample=1, no_classes=2)
pde_scale_acc, pde_scale_loaders = model_accuracy_over_high_density_data_retained(base_nn,kde, pca, X_scale, Y, MC_sample=1, no_classes=2)
# Accuracies for data retained on the rotation perturbation set
base_rot_acc, base_rot_loaders = model_accuracy_over_low_entropy_data_retained(base_nn, X_rot, Y, MC_sample=1, no_classes=2)
vi_rot_acc, vi_rot_loaders = model_accuracy_over_low_entropy_data_retained(vi_nn, X_rot, Y, MC_sample=50, no_classes=2)
en_rot_acc, en_rot_loaders = model_accuracy_over_low_entropy_data_retained(ensemble_nn, X_rot, Y, MC_sample=1, no_classes=2)
mu_rot_acc, mu_rot_loaders = model_accuracy_over_low_entropy_data_retained(mu_nn, X_rot, Y, MC_sample=1, no_classes=2)
ad_rot_acc, ad_rot_loaders = model_accuracy_over_low_entropy_data_retained(fgsm_nn, X_rot, Y, MC_sample=1, no_classes=2)
pde_rot_acc, pde_rot_loaders = model_accuracy_over_high_density_data_retained(base_nn,kde, pca, X_rot, Y, MC_sample=1, no_classes=2)
# Accuracies for data retained on the noise perturbation set
base_noise_acc, base_noise_loaders = model_accuracy_over_low_entropy_data_retained(base_nn, X_noise, Y_noise, MC_sample=1, no_classes=2)
vi_noise_acc, vi_noise_loaders = model_accuracy_over_low_entropy_data_retained(vi_nn, X_noise, Y_noise, MC_sample=50, no_classes=2)
en_noise_acc, en_noise_loaders = model_accuracy_over_low_entropy_data_retained(ensemble_nn, X_noise, Y_noise, MC_sample=1, no_classes=2)
mu_noise_acc, mu_noise_loaders = model_accuracy_over_low_entropy_data_retained(mu_nn, X_noise, Y_noise, MC_sample=1, no_classes=2)
ad_noise_acc, ad_noise_loaders = model_accuracy_over_low_entropy_data_retained(fgsm_nn, X_noise, Y_noise, MC_sample=1, no_classes=2)
pde_noise_acc, pde_noise_loaders = model_accuracy_over_high_density_data_retained(base_nn,kde, pca, X_noise, Y_noise, MC_sample=1, no_classes=2)
# Plot the aggregate accuracy with data retained
fig, ax = plt.subplots(1,4, figsize=(22,4))
ax[0].plot(retained, base_test_acc, label="Base")
ax[0].plot(retained, vi_test_acc, label="Dropout")
ax[0].plot(retained, en_test_acc, label="Ensemble")
ax[0].plot(retained, mu_test_acc, label="Mixup")
ax[0].plot(retained, ad_test_acc, label="FGSM")
ax[0].plot(retained, pde_test_acc, label="PDE")
ax[0].set_title("Test Set")
ax[1].plot(retained, base_scale_acc, label="Base")
ax[1].plot(retained, vi_scale_acc, label="Dropout")
ax[1].plot(retained, en_scale_acc, label="Ensemble")
ax[1].plot(retained, mu_scale_acc, label="Mixup")
ax[1].plot(retained, ad_scale_acc, label="FGSM")
ax[1].plot(retained, pde_scale_acc, label="PDE")
ax[1].set_title("Scale Perturbation")
ax[2].plot(retained, base_rot_acc, label="Base")
ax[2].plot(retained, vi_rot_acc, label="Dropout")
ax[2].plot(retained, en_rot_acc, label="Ensemble")
ax[2].plot(retained, mu_rot_acc, label="Mixup")
ax[2].plot(retained, ad_rot_acc, label="FGSM")
ax[2].plot(retained, pde_rot_acc, label="PDE")
ax[2].set_title("Rotation Perturbation")
ax[3].plot(retained, base_noise_acc, label="Base")
ax[3].plot(retained, vi_noise_acc, label="Dropout")
ax[3].plot(retained, en_noise_acc, label="Ensemble")
ax[3].plot(retained, mu_noise_acc, label="Mixup")
ax[3].plot(retained, ad_noise_acc, label="FGSM")
ax[3].plot(retained, pde_noise_acc, label="PDE")
ax[3].set_title("Noise Perturbation")
ax[3].legend(loc="upper left", bbox_to_anchor=(1,1))
plt.savefig("retained_aggregate_accuracy", dpi=300)
################ 4. EVALUATION BASED ON AUC ################
def compute_auc_models(model, loaders, vi=False):
    # Compute the ROC AUC for each retained-data loader (50 %, 60 %, ..., 100 %).
    aucs = []
    for loader in loaders:
        data = torch.tensor(loader.dataset.data)
        if vi:
            # MC dropout: average 50 stochastic forward passes.
            Y_pred = torch.cat([torch.sigmoid(model(data))[:, 1:] for _ in range(50)], 1).mean(1).detach().numpy()
        else:
            Y_pred = torch.sigmoid(model(data))[:, 1].detach().numpy()
        aucs.append(sklearn.metrics.roc_auc_score(loader.dataset.labels, Y_pred))
    return aucs
# AUC for data retained on the test set
base_auc_test = compute_auc_models(base_nn, base_test_loaders, vi=False)
vi_auc_test = compute_auc_models(vi_nn, vi_test_loaders, vi=True)
en_auc_test = compute_auc_models(ensemble_nn, en_test_loaders, vi=False)
mu_auc_test = compute_auc_models(mu_nn, mu_test_loaders, vi=False)
ad_auc_test = compute_auc_models(fgsm_nn, ad_test_loaders, vi=False)
pde_auc_test = compute_auc_models(base_nn, pde_test_loaders, vi=False)
# AUC for data retained on the scale perturbation set
base_auc_scale = compute_auc_models(base_nn, base_scale_loaders, vi=False)
vi_auc_scale = compute_auc_models(vi_nn, vi_scale_loaders, vi=True)
en_auc_scale = compute_auc_models(ensemble_nn, en_scale_loaders, vi=False)
mu_auc_scale = compute_auc_models(mu_nn, mu_scale_loaders, vi=False)
ad_auc_scale = compute_auc_models(fgsm_nn, ad_scale_loaders, vi=False)
pde_auc_scale = compute_auc_models(base_nn, pde_scale_loaders, vi=False)
# AUC for data retained on the rotation perturbation set
base_auc_rot = compute_auc_models(base_nn, base_rot_loaders, vi=False)
vi_auc_rot = compute_auc_models(vi_nn, vi_rot_loaders, vi=True)
en_auc_rot = compute_auc_models(ensemble_nn, en_rot_loaders, vi=False)
mu_auc_rot = compute_auc_models(mu_nn, mu_rot_loaders, vi=False)
ad_auc_rot = compute_auc_models(fgsm_nn, ad_rot_loaders, vi=False)
pde_auc_rot = compute_auc_models(base_nn, pde_rot_loaders, vi=False)
# AUC for data retained on the noise perturbation set
base_auc_noise = compute_auc_models(base_nn, base_noise_loaders, vi=False)
vi_auc_noise = compute_auc_models(vi_nn, vi_noise_loaders, vi=True)
en_auc_noise = compute_auc_models(ensemble_nn, en_noise_loaders, vi=False)
mu_auc_noise = compute_auc_models(mu_nn, mu_noise_loaders, vi=False)
ad_auc_noise = compute_auc_models(fgsm_nn, ad_noise_loaders, vi=False)
pde_auc_noise = compute_auc_models(base_nn, pde_noise_loaders, vi=False)
# Plot the AUC with data retained
fig, ax = plt.subplots(1,4, figsize=(22,4))
ax[0].plot(retained, base_auc_test, label="Base")
ax[0].plot(retained, vi_auc_test, label="Dropout")
ax[0].plot(retained, en_auc_test, label="Ensemble")
ax[0].plot(retained, mu_auc_test, label="Mixup")
ax[0].plot(retained, ad_auc_test, label="FGSM")
ax[0].plot(retained, pde_auc_test, label="PDE")
ax[0].set_title("Test Set")
ax[1].plot(retained, base_auc_scale, label="Base")
ax[1].plot(retained, vi_auc_scale, label="Dropout")
ax[1].plot(retained, en_auc_scale, label="Ensemble")
ax[1].plot(retained, mu_auc_scale, label="Mixup")
ax[1].plot(retained, ad_auc_scale, label="FGSM")
ax[1].plot(retained, pde_auc_scale, label="PDE")
ax[1].set_title("Scale Perturbation")
ax[2].plot(retained, base_auc_rot, label="Base")
ax[2].plot(retained, vi_auc_rot, label="Dropout")
ax[2].plot(retained, en_auc_rot, label="Ensemble")
ax[2].plot(retained, mu_auc_rot, label="Mixup")
ax[2].plot(retained, ad_auc_rot, label="FGSM")
ax[2].plot(retained, pde_auc_rot, label="PDE")
ax[2].set_title("Rotation Perturbation")
ax[3].plot(retained, base_auc_noise, label="Base")
ax[3].plot(retained, vi_auc_noise, label="Dropout")
ax[3].plot(retained, en_auc_noise, label="Ensemble")
ax[3].plot(retained, mu_auc_noise, label="Mixup")
ax[3].plot(retained, ad_auc_noise, label="FGSM")
ax[3].plot(retained, pde_auc_noise, label="PDE")
ax[3].set_title("Noise Perturbation")
ax[3].legend(loc="upper left", bbox_to_anchor=(1,1))
plt.savefig("retained_aggregate_auc", dpi=300)
################# 5.DRAW DECISION BOUNDARIES #################
def negatify(X):
    # Map probabilities below 0.5 to the negative range (p - 1), so the two
    # classes carry opposite signs before weighting by the density p(x).
    X = np.copy(X)
    neg = X < 0.5
    X[neg] = X[neg] - 1
    return X
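A quick sanity check of the transformation above (a sketch; the probability values are made up for illustration):

```python
import numpy as np

def negatify(X):
    # Same mapping as above: probabilities below 0.5 become p - 1,
    # giving the two classes opposite signs.
    X = np.copy(X)
    neg = X < 0.5
    X[neg] = X[neg] - 1
    return X

probs = np.array([0.1, 0.4, 0.5, 0.9])
signed = negatify(probs)
print(signed)  # -0.9, -0.6, 0.5, 0.9
```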
# Create a mesh
h = .02 # step size in the mesh
x_min = np.concatenate([X[:, 0], X_rot[:, 0], X_scale[:, 0], X_noise[:, 0]]).min()
x_max = np.concatenate([X[:, 0], X_rot[:, 0], X_scale[:, 0], X_noise[:, 0]]).max()
y_min = np.concatenate([X[:, 1], X_rot[:, 1], X_scale[:, 1], X_noise[:, 1]]).min()
y_max = np.concatenate([X[:, 1], X_rot[:, 1], X_scale[:, 1], X_noise[:, 1]]).max()
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
# Predict for each point of the mesh
base_Z = torch.sigmoid(base_nn(torch.tensor(np.c_[xx.ravel(), yy.ravel()]))[:, 1])
# For MC dropout: concatenate 50 stochastic forward passes and average them
vi_Z = torch.cat([torch.sigmoid(vi_nn(torch.tensor(np.c_[xx.ravel(), yy.ravel()]))[:, 1:]) for i in range(50)],1).mean(1)
en_Z = torch.sigmoid(ensemble_nn(torch.tensor(np.c_[xx.ravel(), yy.ravel()]))[:, 1])
mu_Z = torch.sigmoid(mu_nn(torch.tensor(np.c_[xx.ravel(), yy.ravel()]))[:, 1])
ad_Z = torch.sigmoid(fgsm_nn(torch.tensor(np.c_[xx.ravel(), yy.ravel()]))[:, 1])
base_Z = base_Z.reshape(xx.shape).detach().numpy()
base_Z_ = negatify(base_Z)
vi_Z = vi_Z.reshape(xx.shape).detach().numpy()
vi_Z_ = negatify(vi_Z)
en_Z = en_Z.reshape(xx.shape).detach().numpy()
en_Z_ = negatify(en_Z)
mu_Z = mu_Z.reshape(xx.shape).detach().numpy()
mu_Z_ = negatify(mu_Z)
ad_Z = ad_Z.reshape(xx.shape).detach().numpy()
ad_Z_ = negatify(ad_Z)
p_x = kde.score_samples(pca.transform(np.c_[xx.ravel(), yy.ravel()]))
p_x = p_x.reshape(xx.shape)
p_x_e = np.exp(p_x)
p_x_2 = np.power(2, p_x)
p_x_1_5 = np.power(1.5, p_x)
cm = plt.cm.RdBu
plt.rcParams.update({'font.size': 14})
##### 5.1 Plot on the test dataset
fig, ax = plt.subplots(6,6, figsize=(24,22))
ax[0,0].set_title("50 % retained")
ax[0,1].set_title("60 % retained")
ax[0,2].set_title("70 % retained")
ax[0,3].set_title("80 % retained")
ax[0,4].set_title("90 % retained")
ax[0,5].set_title("100 % retained")
ax[0,0].set_ylabel("Softmax")
ax[1,0].set_ylabel("Dropout")
ax[2,0].set_ylabel("Ensemble")
ax[3,0].set_ylabel("Mixup")
ax[4,0].set_ylabel("FGSM")
ax[5,0].set_ylabel("PDE")
all_loaders = [base_test_loaders, vi_test_loaders, en_test_loaders,
               mu_test_loaders, ad_test_loaders, pde_test_loaders]
all_Z = [base_Z, vi_Z, en_Z, mu_Z, ad_Z, base_Z_ * p_x_e]
for i, (loaders, Z) in enumerate(zip(all_loaders, all_Z)):
for j in range(0,6):
base_x, base_y = next(iter(loaders[j]))
im = ax[i,j].contourf(xx, yy, Z, cmap=cm, alpha=.8)
ax[i,j].scatter(base_x[:, 0], base_x[:, 1], c=base_y, cmap=cm_bright)
ax[i,j].scatter(X_test[:, 0], X_test[:, 1], c=Y_test, cmap=cm_bright, alpha=0.1)
plt.savefig("retained_test", dpi=300)
##### 5.2 Plot on the scale dataset
fig, ax = plt.subplots(6,6, figsize=(24,22))
ax[0,0].set_title("50 % retained")
ax[0,1].set_title("60 % retained")
ax[0,2].set_title("70 % retained")
ax[0,3].set_title("80 % retained")
ax[0,4].set_title("90 % retained")
ax[0,5].set_title("100 % retained")
ax[0,0].set_ylabel("Softmax")
ax[1,0].set_ylabel("Dropout")
ax[2,0].set_ylabel("Ensemble")
ax[3,0].set_ylabel("Mixup")
ax[4,0].set_ylabel("FGSM")
ax[5,0].set_ylabel("PDE")
all_loaders = [base_scale_loaders, vi_scale_loaders, en_scale_loaders,
               mu_scale_loaders, ad_scale_loaders, pde_scale_loaders]
all_Z = [base_Z, vi_Z, en_Z, mu_Z, ad_Z, base_Z_ * p_x_e]
for i, (loaders, Z) in enumerate(zip(all_loaders, all_Z)):
for j in range(0,6):
base_x, base_y = next(iter(loaders[j]))
im = ax[i,j].contourf(xx, yy, Z, cmap=cm, alpha=.8)
ax[i,j].scatter(base_x[:, 0], base_x[:, 1], c=base_y, cmap=cm_bright)
ax[i,j].scatter(X_scale[:, 0], X_scale[:, 1], c=Y, cmap=cm_bright, alpha=0.1)
plt.savefig("retained_scale", dpi=300)
##### 5.3 Plot on the rotation dataset
fig, ax = plt.subplots(6,6, figsize=(24,22))
ax[0,0].set_title("50 % retained")
ax[0,1].set_title("60 % retained")
ax[0,2].set_title("70 % retained")
ax[0,3].set_title("80 % retained")
ax[0,4].set_title("90 % retained")
ax[0,5].set_title("100 % retained")
ax[0,0].set_ylabel("Softmax")
ax[1,0].set_ylabel("Dropout")
ax[2,0].set_ylabel("Ensemble")
ax[3,0].set_ylabel("Mixup")
ax[4,0].set_ylabel("FGSM")
ax[5,0].set_ylabel("PDE")
all_loaders = [base_rot_loaders, vi_rot_loaders, en_rot_loaders,
               mu_rot_loaders, ad_rot_loaders, pde_rot_loaders]
all_Z = [base_Z, vi_Z, en_Z, mu_Z, ad_Z, base_Z_ * p_x_e]
for i, (loaders, Z) in enumerate(zip(all_loaders, all_Z)):
for j in range(0,6):
base_x, base_y = next(iter(loaders[j]))
im = ax[i,j].contourf(xx, yy, Z, cmap=cm, alpha=.8)
ax[i,j].scatter(base_x[:, 0], base_x[:, 1], c=base_y, cmap=cm_bright)
ax[i,j].scatter(X_rot[:, 0], X_rot[:, 1], c=Y, cmap=cm_bright, alpha=0.1)
plt.savefig("retained_rot", dpi=300)
##### 5.4 Plot on the noise dataset
fig, ax = plt.subplots(6,6, figsize=(24,22))
ax[0,0].set_title("50 % retained")
ax[0,1].set_title("60 % retained")
ax[0,2].set_title("70 % retained")
ax[0,3].set_title("80 % retained")
ax[0,4].set_title("90 % retained")
ax[0,5].set_title("100 % retained")
ax[0,0].set_ylabel("Softmax")
ax[1,0].set_ylabel("Dropout")
ax[2,0].set_ylabel("Ensemble")
ax[3,0].set_ylabel("Mixup")
ax[4,0].set_ylabel("FGSM")
ax[5,0].set_ylabel("PDE")
all_loaders = [base_noise_loaders, vi_noise_loaders, en_noise_loaders,
               mu_noise_loaders, ad_noise_loaders, pde_noise_loaders]
all_Z = [base_Z, vi_Z, en_Z, mu_Z, ad_Z, base_Z_ * p_x_e]
for i, (loaders, Z) in enumerate(zip(all_loaders, all_Z)):
for j in range(0,6):
base_x, base_y = next(iter(loaders[j]))
im = ax[i,j].contourf(xx, yy, Z, cmap=cm, alpha=.8)
ax[i,j].scatter(base_x[:, 0], base_x[:, 1], c=base_y, cmap=cm_bright)
ax[i,j].scatter(X_noise[:, 0], X_noise[:, 1], c=Y_noise, cmap=cm_bright, alpha=0.1)
plt.savefig("retained_noise", dpi=300)
##### 5.5 Compare ENDE, DE, EN with base_nn
# on test dataset
fig, ax = plt.subplots(3,6, figsize=(24,18))
ax[0,0].set_title("50 % retained")
ax[0,1].set_title("60 % retained")
ax[0,2].set_title("70 % retained")
ax[0,3].set_title("80 % retained")
ax[0,4].set_title("90 % retained")
ax[0,5].set_title("100 % retained")
ax[0,0].set_ylabel("Entropy-Density")
ax[1,0].set_ylabel("Entropy")
ax[2,0].set_ylabel("Density")
criteria_loaders = [base_loaders[0][0], base_loaders[1][0], base_loaders[2][0]]
criteria_Z = [base_Z, base_Z, base_Z_ * p_x_e]
for i, (loaders, Z) in enumerate(zip(criteria_loaders, criteria_Z)):
for j in range(0,6):
base_x, base_y = next(iter(loaders[j]))
im = ax[i,j].contourf(xx, yy, Z, cmap=cm, alpha=.8)
ax[i,j].scatter(base_x[:, 0], base_x[:, 1], c=base_y, cmap=cm_bright)
ax[i,j].scatter(X_test[:, 0], X_test[:, 1], c=Y_test, cmap=cm_bright, alpha=0.1)
plt.savefig("retained_test_ende_en_de", dpi=300)
# on scale dataset
fig, ax = plt.subplots(3,6, figsize=(24,18))
ax[0,0].set_title("50 % retained")
ax[0,1].set_title("60 % retained")
ax[0,2].set_title("70 % retained")
ax[0,3].set_title("80 % retained")
ax[0,4].set_title("90 % retained")
ax[0,5].set_title("100 % retained")
ax[0,0].set_ylabel("Entropy-Density")
ax[1,0].set_ylabel("Entropy")
ax[2,0].set_ylabel("Density")
criteria_loaders = [base_loaders[0][1], base_loaders[1][1], base_loaders[2][1]]
criteria_Z = [base_Z, base_Z, base_Z_ * p_x_e]
for i, (loaders, Z) in enumerate(zip(criteria_loaders, criteria_Z)):
for j in range(0,6):
base_x, base_y = next(iter(loaders[j]))
im = ax[i,j].contourf(xx, yy, Z, cmap=cm, alpha=.8)
ax[i,j].scatter(base_x[:, 0], base_x[:, 1], c=base_y, cmap=cm_bright)
ax[i,j].scatter(X_scale[:, 0], X_scale[:, 1], c=Y, cmap=cm_bright, alpha=0.1)
plt.savefig("retained_scale_ende_en_de", dpi=300)
|
/**
* A no-frills implementation of DistributionTrainerContext.
*
* @author Matthew Pocock
* @since 1.0
*/
public class SimpleDistributionTrainerContext
implements DistributionTrainerContext, Serializable {
private final Map distToTrainer;
private final Set trainers;
private double nullModelWeight;
public double getNullModelWeight() {
return this.nullModelWeight;
}
public void setNullModelWeight(double nullModelWeight) {
this.nullModelWeight = nullModelWeight;
}
public void registerDistribution(Distribution dist) {
if(!distToTrainer.containsKey(dist)) {
dist.registerWithTrainer(this);
}
}
public void registerTrainer(
Distribution dist, DistributionTrainer trainer
) {
distToTrainer.put(dist, trainer);
trainers.add(trainer);
}
public DistributionTrainer getTrainer(Distribution dist) {
return (DistributionTrainer) distToTrainer.get(dist);
}
public void addCount(Distribution dist, Symbol sym, double times)
throws IllegalSymbolException {
DistributionTrainer dt = getTrainer(dist);
if(dt == null) {
throw new NullPointerException(
"No trainer associated with distribution " + dist
);
}
if (sym instanceof AtomicSymbol) {
dt.addCount(this, (AtomicSymbol) sym, times);
} else {
// Distribution nullModel = dist.getNullModel();
// double totWeight = nullModel.getWeight(sym);
for (
Iterator asi = ((FiniteAlphabet) sym.getMatches()).iterator();
asi.hasNext();
) {
AtomicSymbol as = (AtomicSymbol) asi.next();
//dt.addCount(this, as, times * (nullModel.getWeight(as) / totWeight));
dt.addCount(this, as, times);
}
}
}
public double getCount(Distribution dist, Symbol sym)
throws IllegalSymbolException {
DistributionTrainer dt = getTrainer(dist);
if(dt == null) {
throw new NullPointerException(
"No trainer associated with distribution " + dist
);
}
if (sym instanceof AtomicSymbol) {
return dt.getCount(this, (AtomicSymbol) sym);
} else {
double totWeight = 0.0;
for (
Iterator asi = ((FiniteAlphabet) sym.getMatches()).iterator();
asi.hasNext();
) {
AtomicSymbol as = (AtomicSymbol) asi.next();
totWeight += dt.getCount(this, as);
}
return totWeight;
}
}
public void train()
throws ChangeVetoException {
for(Iterator i = trainers.iterator(); i.hasNext(); ) {
((DistributionTrainer) i.next()).train(this, getNullModelWeight());
}
}
public void clearCounts() {
for(Iterator i = trainers.iterator(); i.hasNext(); ) {
((DistributionTrainer) i.next()).clearCounts(this);
}
}
/**
* Create a new context with no initial distributions or trainers.
*/
public SimpleDistributionTrainerContext() {
this.distToTrainer = new IdentityHashMap();
this.trainers = new HashSet();
}
}
|
def collect(self):
try:
val = np.asarray(self.data, dtype=float)
except ValueError:
bins, counts = np.unique(self.data, return_counts=True)
bin_to_count = {str(bins[i]): counts[i] for i in range(len(bins))}
yield InferenceDistribution.as_discrete(bin_to_count=bin_to_count)
return
bin_to_count = fast_histogram(val, discrete=self.is_discrete)
if "+Inf" not in bin_to_count:
metric = InferenceDistribution.as_discrete(bin_to_count=bin_to_count)
else:
val = _remove_nans_and_infs(val)
metric = InferenceDistribution.as_continuous(
bin_to_count=bin_to_count, sum_value=np.sum(val)
)
yield metric
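The numeric/non-numeric branching above hinges on `np.asarray(..., dtype=float)` raising `ValueError` for categorical data, in which case `np.unique` with `return_counts=True` supplies the bins. A minimal, standalone sketch of that fallback branch (the category values and helper name are hypothetical; `fast_histogram` and `InferenceDistribution` are project-specific and not reproduced here):

```python
import numpy as np

def categorical_bin_counts(data):
    # Mirror the ValueError branch of collect(): if the data cannot be cast
    # to float, bin it by distinct value instead of by numeric range.
    try:
        np.asarray(data, dtype=float)
    except ValueError:
        bins, counts = np.unique(data, return_counts=True)
        return {str(b): int(c) for b, c in zip(bins, counts)}
    return None  # numeric data would go through the histogram path instead

counts = categorical_bin_counts(["cat", "dog", "cat", "bird"])
print(counts)  # {'bird': 1, 'cat': 2, 'dog': 1}
```

`np.unique` returns the bins sorted, so the resulting dict is ordered by category name.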
|
A gene associated with Alzheimer's may be visible in the brain and have an effect on cognition "as early as childhood", according to a new study.
The study looked at genes – in particular the epsilon (ε)4 variant of the apolipoprotein-E gene – that had been identified in children.
Those with the ε4 variant had previously been found to develop Alzheimer's more often than those with the ε2 or ε3 variants of the gene.
"Studying these genes in young children may ultimately give us early indications of who may be at risk for dementia in the future and possibly even help us develop ways to prevent the disease from occurring or to delay the start of the disease," said Linda Chang, professor of medicine at the University of Hawaii, who led the study.
More than 1,185 children aged between three and 20 underwent brain scans and took tests of "thinking and memory skills". The study found that "children with any form of the ε4 gene had differences in their brain development compared to children with ε2 and ε3 forms of the gene", and that these differences were seen in areas of the brain affected by Alzheimer's. In children with the ε4 gene, the hippocampus was 5 per cent smaller than in those without.
"These findings mirror the smaller volumes and steeper decline of the hippocampus volume in the elderly who have the ε4 gene," said Chang.
The children with the ε4 genotypes also scored lower on cognitive tests such as memory and verbal reasoning, with scores up to 50 per cent lower on tests of executive function and working memory. The youngest children also had scores "up to 50 per cent lower" on tests of attention.
The team noted that its study was cross-sectional, meaning that "the information is from one point in time for each child".
Other genetic markers that may predict Alzheimer's have also been discovered. A 2015 study found that an allele, or variant form of a gene, commonly associated with Alzheimer's also conveys an increased risk of late-life depression. And VR has been used to predict the early onset of the disease.
Claims about being able to diagnose, or spot indicators of, Alzheimer's in childhood should also be treated with caution. Having the gene doesn't necessarily mean a person will develop the disease, only that it may be an indicator.
Further research will also need to be done to confirm the link. If proven, the findings will likely trigger an ethical debate on whether to tell those who carry the gene.
|
package cn.wildfirechat.app.shiro;
import org.apache.shiro.authc.UsernamePasswordToken;
public class UsernameCodeToken extends UsernamePasswordToken {
public UsernameCodeToken() {
}
public UsernameCodeToken(String username, String password) {
super(username, password);
}
}
|
Selection of Two Identical Pictures from a Group of Similar Ones II: Changes in Ongoing EEG Activity

The paper presents findings of an experiment whose aim was to judge the impact of an instant cognitive activity (searching for identical pictures) on the behavior of ongoing EEG activity. Two types of mental task were used: the first, passive watching of a blank white oval; the second, active searching for identical pictures in a group of nine similar line-drawings of living individuals or inanimate objects filling out the white oval.

The results showed that the higher mental load pertinent to active searching for identical pictures in a group of similar pictures results in prominent event-related desynchronization (ERD): the mean Total Power value, a quantitative measure of the extent of ERD, was lower while solving the mental task than at the reference level (passive watching).

The results also showed that the actual mental task performance affects the ERD only at some scalp-recording sites. The mean EEG Total Power significantly decreases at parietal and frontal scalp-recording sites, whereas the significant decrease of the Frequency at Maximum Power involves the occipital scalp electrodes, too.

Our results also demonstrated that some of the subjects' personality traits (moderation, openness and extraversion) affect the actual decrease/increase in Frequency at Maximum Power during active mental task solving.

The presented findings point to the high suitability of the ERD method for uncovering differences in people's brain activation patterns when they are engaged in performing cognitively demanding tasks.

A large number of papers published in the last three decades show that research into brain oscillations during cognitive and motor brain functions brings valuable information, allowing us to better understand the functional organization of the brain and its role in organizing the optimal behavior of an organism.
Sensory and cognitive processing results not only in an event-related potential (ERP), but also in a change in the ongoing EEG. The former potentials are of short duration (~200-300 ms) and reflect direct neuronal activation; the latter represents a short-lasting decrease/increase in rhythmic activity (event-related desynchronization/synchronization, or ERD/ERS) that occurs in relation to an event (Pfurtscheller and Aranibar 1977). The ERD and ERS reflect the dynamics of neuronal networks, and that is why they are recognized, together with the ERP, as a valuable tool in neurocognitive research. For further details see, for example, Pfurtscheller, Lopes da Silva, Morrell, Klimesch, Pfurtscheller and Lopes da Silva, Pfurtscheller (1992, 2003), Mazaheri. The ERD is not only an electrophysiological correlate of cortical activation related to stimulus processing but also characteristic of cortical areas preparing to process sensory information or ready to execute a motor command. It is therefore not surprising that many cognitive psychologists and neuroscientists combine psychological testing or the solving of special cognitive tasks with simultaneous registration of brain electrical activity. The review of Klimesch, the monographs of Faber (2001, 2005) and especially the papers presented in the book "Event-Related Dynamics of Brain Oscillations", edited by Neuper C. and Klimesch W. in 2006, confirm it. The aim of our experiment was to judge the impact of an instant cognitive activity (selection of two identical pictures in a group of similar pictures) on brain potentials: the ERP and the ongoing EEG. All visual stimuli (drawings of living creatures or common objects) were taken from the Matching Familiar Test TE-NA-ZO (). Our earlier paper (Petrek 2007) addressed the analysis of ERP dynamics during the selection of two identical pictures. The present paper deals with the EEG changes in the course of cognitive problem solving.
Attention is paid especially to the dynamics of ERD and its relation to the site of the recording electrode and to the participants' personality traits.

METHODS

The experimental paradigm, procedures, features of the visual stimuli and method of data acquisition were described in detail in our earlier paper (Petrek 2007). Here only the principles of the data analysis are described. SciWorks version 5 with DataWave CP Analysis Modules and Data Editing Software was used for an off-line analysis of the experimental data. The analysis ran as follows: visual elimination of distorted records, sorting of records into groups according to the type of mental task, digital filtering of records (bandpass 1-30 Hz), and cutting of the digitally filtered records for the Fast Fourier Transform (FFT) analysis; the starting points of the cut segments of the EEG record were time-locked to the photodiode signal. For each experimental task and every subject, the SciWork FFT Analysis module calculated the total power histograms for a fixed frequency band (3-20 Hz or 8-13 Hz) for each 3-second EEG sample with 50 % overlap and for all electrodes. From the histograms the software derived two single FFT values, Total Power and Frequency at Maximum Power, averaged each of them separately, and visualized the actual average numerical values in a spreadsheet format on the screen of the DataPad Plugin module. Subsequent statistical data analyses were accomplished with the StatSoft software package (StatSoft, Tulsa, OK). The average decrease in Total Power and Frequency at Maximum Power, expressed as the percentage of decrease in TP/FMP within the frequency band of interest in the period of the mental task as compared to the reference interval (passive watching of a blank white oval), was used to quantify the ERD/ERS and the EEG frequency change within the frequency band of interest.

RESULTS

Table 1 summarizes the basic statistical characteristics of the tested parameters in our experiment.
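The ERD quantification described above (percentage decrease in band power during the task relative to the reference interval) can be sketched in a few lines. This is an illustrative sketch only; the Total Power values below are invented, not the study's data:

```python
import numpy as np

def erd_percent(tp_reference, tp_task):
    # ERD expressed as the percentage decrease in Total Power during the
    # mental task relative to the reference (passive watching) interval.
    return 100.0 * (tp_reference - tp_task) / tp_reference

# Hypothetical Total Power values (arbitrary units), one per electrode.
tp_ref = np.array([120.0, 100.0, 80.0])   # passive watching of the blank oval
tp_act = np.array([108.0, 92.0, 76.0])    # active picture search
print(erd_percent(tp_ref, tp_act))  # 10 %, 8 % and 5 % ERD per electrode
```

A positive value means desynchronization (power drop, i.e. ERD); a negative value would indicate synchronization (ERS).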
It follows from the table that the type of the experimental task (passive watching of a blank white oval or active searching for identical pictures in a group of similar ones) determines the mean values of Total Power calculated for the 3 to 20 Hz EEG frequency band (TP) or for the 8 to 13 Hz EEG frequency band (TPA). The experimental task also affects the mean Frequency at Maximum Power computed for the former frequency band (FM), but the Frequency at Maximum Power of the latter band (FMA), that is the alpha band, shows no such dependence. The mean TP and TPA values, as well as the FM values, are lower during mental task solving (active searching for identical pictures in a group of similar ones) and higher during passive watching of a blank white oval (reference level). However, this is not valid for FMA (the average Frequency at Maximum Power of the 8 to 13 Hz EEG frequency band): the FMA means do not differ from each other. The results of the simple factorial ANOVA with repeated measures confirm these conclusions (for details see Figure 1). Statistically significant interactions between the tested variables (TP, TPA, FM and FMA) and the independent variables (the type of mental task and the site of the scalp-recording electrode) were also established (see Figure 2). From Figure 2A it follows that increasing mental load significantly decreases TP at the P3, P4, F3 and F4 scalp-recording sites. The TP at the other electrodes did not show such a dependence on the task demands; there are no significant differences between the compared means (i.e., between the mean TP values during passive watching of a blank white oval and the mean TP values during active searching for identical pictures in a group of similar ones).

Fig. 1 Comparison of FFT parameters in two experimental situations.

A higher mental load also significantly decreases the mean FM values at all parietal and occipital electrodes in both hemispheres, with the exception of the CP5 electrode.
It also decreases the mean FMA value at the O2 electrode and increases it at the F4 electrode (Figure 2C and D). The decrease in TPA is restricted to the parietal scalp-recording electrodes (Figure 2B). The average decrease in TP and FM, expressed as the percentage decrease in band power/frequency during the mental task relative to the reference interval (passive watching of a blank white oval), ranged from 7 to 10 per cent, with the exception of FMA; the latter value ranged around zero (see Table 1). The average percentage decrease in TP and FM reached statistical significance in both cases (t-test for a single mean). The results of analysis of variance (Repeated Measures ANOVA) and standard regression analysis confirmed our assumption that some of the subjects' personality traits could affect the actual size of the decrease/increase in the tested parameters during mental task solving. It has been shown that five of the thirteen established FPI personality features affect the extent of the decrease/increase in FM and FMA, especially moderation (FPI 8), openness (FPI 9) and extraversion (E). The first of them was related to FM, the other two to FMA. The TP and TPA variables did not show any relation to the FPI personality traits. The scatterplots in Figure 3 graphically demonstrate the relationship between the subjects' FPI scores and the mean percentage decrease/increase in FM/FMA during intensive cognitive activity.

DISCUSSION

In our paper (Petrek 2007) it was shown that the cognitive processes underlying successful selection of two identical pictures from a group of similar pictures affect the activity of the systems giving rise to ERPs, as revealed by amplitude changes of the individual ERP components. The present paper extends this conclusion by showing that cognitive processes also result in a change in the ongoing EEG activity.
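The "t-test for a single mean" applied above, testing whether the average percentage decrease differs from zero, can be sketched as follows. The per-subject values are hypothetical, chosen only to fall in the reported 7-10 % range; they are not the study's data.

```python
import math

def one_sample_t(values, mu0=0.0):
    """t statistic testing whether the mean of `values` differs from mu0."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)  # sample variance
    return (mean - mu0) / math.sqrt(var / n)

# Hypothetical per-subject percentage decreases in TP (illustrative only)
decreases = [7.2, 9.5, 8.1, 10.3, 6.8, 9.0, 7.7, 8.6]
t_stat = one_sample_t(decreases)
print(t_stat)
```

With n - 1 = 7 degrees of freedom, a t statistic of this size lies far beyond the two-tailed 5 % critical value, so such a mean decrease would be judged significantly different from zero.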
The short-lasting and localized amplitude decrease in rhythmic activity (ERD) and the change of EEG frequency are among the most frequent of such changes. The presented results show that the higher mental load pertinent to active searching for identical pictures in a group of similar pictures results in a prominent ERD: the mean TP value, a quantitative measure of the extent of ERD, was lower during the mental task than at the reference level, both for the wide (3-20 Hz) and the narrow (8-13 Hz) EEG frequency band. These findings correspond with those of many other authors (for details see, for example, Klimesch 1996, Faber 2005). This must be so because maximal readiness and optimal excitability of the neural structures (the extent of ERD represents their objective measure) is a prerequisite of successful processing of information in specific brain systems during the solving of a cognitive task. It is accepted that ERD is not only an electrophysiological correlate of cortical activation related to stimulus processing but also a characteristic of cortical areas preparing to process sensory information or ready to execute a motor command (Pfurtscheller 1992). From this it follows that during a cognitive task alpha ERD should be topographically localized over the corresponding brain areas involved in the specific task. The results of our experiment are not in fundamental conflict with this assumption: the actual mental task performance (selection of two identical pictures from a group of similar pictures) affects the mean EEG Total Power, or the extent of ERD, differently at only some scalp-recording sites. The mean TP within the 3-20 Hz EEG frequency band significantly decreases at the parietal (P3, P4) and frontal (F3, F4) scalp-recording sites, whereas the Total Power of the alpha band (TPA) decreases at the parietal recording sites only.
The mean TPA values of the remaining electrodes, including the occipital electrodes, do not show any dependence on the mental task performance. The latter statement is somewhat surprising, because the occipital brain areas are certainly engaged in the processing of the visual information pertinent to the mental task solving. In this connection, however, it should be emphasized that evaluation of the other dependent variable, the Frequency at Maximum Power of the 3-20 Hz EEG frequency band, unambiguously proved the role of the occipital cortex in the processing of visual information during mental task solving: the FM was significantly decreased at the occipital scalp electrodes. It appears that the two FFT-derived dependent variables, Total Power and Frequency at Maximum Power, used in our experiment as indicators of the momentary excitability of neural structures, have different predictive values. The FM was more sensitive and reflected more accurately the engagement of the different cortical areas in the processing of visual information during active mental task solving. The lower predictive value of the TP in our experiment is probably related to the method used to measure this parameter; the averaging of the EEG signal that we used might mask the dynamics of the tested parameter and erase possible differences. Last but not least, our results also show that some of the subjects' personality traits (moderation, openness and extraversion) affect the actual size of the decrease/increase in Frequency at Maximum Power during active mental task solving. The influence of the personality dimension extraversion/introversion on the extent and topographical distribution of ERD in subjects engaged in cognitive information processing was also described by Fink et al. (2005, 2008). The paper by Jausovec, studying the differences in cognitive processes related to creativity and intelligence by using EEG coherence and power measures, should be mentioned here too.
All this indicates that the subjects' personality traits ought to be taken into account in the evaluation of cognitive information processing in the brain. In conclusion we can say that our findings point to the high suitability of the ERD method for uncovering differences in brain activation patterns when people are engaged in performing cognitively demanding tasks.
// Repository: wapalxj/Android_demo_project
package Utils;

/**
 * Created by Administrator on 2016/11/30.
 */
public interface MyConstants {
    String SPFILE = "myconfig";           // name of the SharedPreferences config file
    String PASSWORD = "<PASSWORD>";       // phone anti-theft password (value redacted in source)
    String ISSETUP = "issetup";           // whether the setup wizard has been entered
    String SIM = "simSerialNumber";
    String SAFENUMBER = "safenumber";
    int ENCRYSEED = 120;                  // encryption/decryption seed
    String LOSTFIND = "bootlostfind";     // whether anti-theft is enabled at boot
    String LOSTFINDNAME = "lostfindname"; // anti-theft name
    String AUTOUPDATE = "autoupdate";     // auto-update setting
    String TOAST_X = "toastX";            // custom toast X coordinate
    String TOAST_Y = "toastY";            // custom toast Y coordinate
    String STYLEINDEX = "styleindex";     // number-location background style
    String SHOWSYSTEM = "showsystem";     // whether to show system processes
}