Dataset columns: title (string, 8–300 characters) and abstract (string, 0–10k characters).
Comparative long-term experience with immunoadsorption and dextran sulfate cellulose adsorption for extracorporeal elimination of low-density lipoproteins
Two low-density lipoprotein (LDL) apheresis methods allowing a specific extracorporeal removal of atherogenic lipoproteins from plasma were compared concerning their efficacy and safety in the long-term therapy of severe familial hypercholesterolemia. Five patients were treated with immunoadsorption (IMA) at weekly intervals over 3 years each, and three patients received weekly therapy with dextran sulfate cellulose adsorption (DSA) for up to 2 years. The mean plasma volume processed per session to decrease total cholesterol to a target level of 100–150 mg/dl at the end of LDL apheresis was significantly lower in DSA than in IMA: 143% vs. 180% of the individual plasma volume. Both LDL apheresis procedures achieved a mean acute reduction of plasma LDL cholesterol by more than 70%. The average interval concentrations of plasma LDL cholesterol obtained without concomitant lipid-lowering medication were 151 ± 26 mg/dl compared to 351 ± 65 mg/dl at baseline in the IMA-treated patients and 139 ± 18 mg/dl compared to 359 ± 48 mg/dl at baseline in the DSA-treated patients. Two patients from the DSA group died after 2 years of study participation due to a stroke and a sudden cardiac death several days after the last plasma therapy. Treatment-related side effects were infrequent. Long-term therapy with IMA and DSA was associated with symptomatic improvement of coronary artery disease and mobilization of tissue cholesterol deposits. Analysis of coronary angiograms after 3 years of weekly LDL apheresis with IMA revealed in five patients nearly identical atherosclerotic lesions without definite regression or progression.
An Internal Triple-Band WLAN Antenna
A triple-band wireless local area network (WLAN) antenna is proposed. The antenna comprises a planar inverted-F antenna (PIFA) in conjunction with a parasitic element. It is demonstrated that triple-band WLAN operation covering the IEEE 802.11 2.4 GHz (2.4-2.484 GHz), 5.2 GHz (5.15-5.35 GHz), and 5.8 GHz (5.725-5.825 GHz) bands can be achieved by the proposed antenna, which has a very compact size and is probably the most compact internal WLAN antenna covering all three frequency bands.
Exploring Principles-of-Art Features For Image Emotion Recognition
Emotions can be evoked in humans by images. Most previous works on image emotion analysis mainly used elements-of-art-based low-level visual features. However, these features are vulnerable and not invariant to different arrangements of elements. In this paper, we investigate the concept of principles-of-art and its influence on image emotions. Principles-of-art-based emotion features (PAEF) are extracted to classify and score image emotions for understanding the relationship between artistic principles and emotions. PAEF are the unified combination of representation features derived from different principles, including balance, emphasis, harmony, variety, gradation, and movement. Experiments on the International Affective Picture System (IAPS), a set of artistic photographs, and a set of peer-rated abstract paintings demonstrate the superiority of PAEF for affective image classification and regression (with about 5% improvement in classification accuracy and a 0.2 decrease in mean squared error), as compared to state-of-the-art approaches. We then utilize PAEF to analyze the emotions of master paintings, with promising results.
RS-Forest: A Rapid Density Estimator for Streaming Anomaly Detection
Anomaly detection in streaming data is of high interest in numerous application domains. In this paper, we propose a novel one-class semi-supervised algorithm to detect anomalies in streaming data. Underlying the algorithm is a fast and accurate density estimator implemented by multiple fully randomized space trees (RS-Trees), named RS-Forest. The piecewise constant density estimate of each RS-tree is defined on the tree node into which an instance falls. Each incoming instance in a data stream is scored by the density estimates averaged over all trees in the forest. Two strategies, statistical attribute range estimation with a high-probability guarantee and dual node profiles for rapid model updates, are seamlessly integrated into RS-Forest to systematically address the ever-evolving nature of data streams. We derive the theoretical upper bound for the proposed algorithm and analyze its asymptotic properties via bias-variance decomposition. Empirical comparisons to state-of-the-art methods on multiple benchmark datasets demonstrate that the proposed method features a high detection rate, fast response, and insensitivity to most parameter settings. Algorithm implementations and datasets are available upon request.
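A minimal sketch of the scoring idea described above (fully randomized space trees whose piecewise-constant leaf densities are averaged across the forest) is given below. The class and parameter names are illustrative assumptions, the streaming machinery (dual node profiles, attribute-range estimation) is omitted, and this is not the authors' implementation.

```python
# Illustrative RS-Forest-style density scoring: build fully randomized space
# trees over the training range, store instance counts per node, and score a
# query by its average piecewise-constant leaf density (low density ~ anomaly).
import numpy as np

class RSTree:
    def __init__(self, lo, hi, depth, max_depth, rng):
        self.lo, self.hi = lo.copy(), hi.copy()
        self.count = 0
        self.left = self.right = None
        if depth < max_depth:
            self.dim = rng.integers(len(lo))                    # random split dimension
            self.split = rng.uniform(lo[self.dim], hi[self.dim])  # random split point
            left_hi, right_lo = hi.copy(), lo.copy()
            left_hi[self.dim] = self.split
            right_lo[self.dim] = self.split
            self.left = RSTree(lo, left_hi, depth + 1, max_depth, rng)
            self.right = RSTree(right_lo, hi, depth + 1, max_depth, rng)

    def insert(self, x):
        self.count += 1
        if self.left is not None:
            child = self.left if x[self.dim] <= self.split else self.right
            child.insert(x)

    def leaf_density(self, x, n_total):
        if self.left is None:                                   # leaf: count / (N * volume)
            volume = np.prod(self.hi - self.lo)
            return self.count / (n_total * volume + 1e-12)
        child = self.left if x[self.dim] <= self.split else self.right
        return child.leaf_density(x, n_total)

def rs_forest_scores(train, test, n_trees=25, max_depth=8, seed=0):
    """Average leaf-density estimates over all trees; low values flag anomalies."""
    rng = np.random.default_rng(seed)
    lo, hi = train.min(axis=0), train.max(axis=0)
    trees = [RSTree(lo, hi, 0, max_depth, rng) for _ in range(n_trees)]
    for t in trees:
        for x in train:
            t.insert(x)
    n = len(train)
    return np.array([np.mean([t.leaf_density(x, n) for t in trees]) for x in test])

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    normal = rng.normal(0, 1, size=(500, 2))
    queries = np.array([[0.0, 0.0], [6.0, 6.0]])   # inlier vs. obvious outlier
    print(rs_forest_scores(normal, queries))        # the outlier gets a near-zero density
```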
Securing wireless sensor networks: a survey
The significant advances of hardware manufacturing technology and the development of efficient software algorithms have made technically and economically feasible a network composed of numerous, small, low-cost sensors using wireless communications, that is, a wireless sensor network (WSN). WSNs have attracted intensive interest from both academia and industry due to their wide application in civil and military scenarios. In hostile scenarios, it is very important to protect WSNs from malicious attacks. Due to various resource limitations and the salient features of a wireless sensor network, the security design for such networks is significantly challenging. In this article, we present a comprehensive survey of WSN security issues that were investigated by researchers in recent years and that shed light on future directions for WSN security.
Optimal Thresholding of Classifiers to Maximize F1 Measure
This paper provides new insight into maximizing F1 measures in the context of binary classification and also in the context of multilabel classification. Defined as the harmonic mean of precision and recall, the F1 measure is widely used to evaluate the success of a binary classifier when one class is rare. Micro-averaged, macro-averaged, and per-instance-averaged F1 measures are used in multilabel classification. For any classifier that produces a real-valued output, we derive the relationship between the best achievable F1 value and the decision-making threshold that achieves this optimum. As a special case, if the classifier outputs are well-calibrated conditional probabilities, then the optimal threshold is half the optimal F1 value. As another special case, if the classifier is completely uninformative, then the optimal behavior is to classify all examples as positive. When the actual prevalence of positive examples is low, this behavior can be undesirable. As a case study, we discuss the results, which can be surprising, of maximizing F1 when predicting 26,853 labels for Medline documents.
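The calibrated-probability special case stated above can be checked numerically with a small sketch (simulated data, not from the paper): with rare positives and well-calibrated scores, the F1-maximizing threshold comes out close to half of the best achievable F1.

```python
# Quick numerical check of the stated special case: for well-calibrated
# probabilistic outputs, the F1-optimal threshold is roughly half the best F1.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
p = rng.beta(0.3, 2.0, size=n)        # calibrated scores; positives are rare
y = rng.random(n) < p                 # labels drawn so that P(y=1 | score) = score

def f1_at(threshold):
    pred = p >= threshold
    tp = np.sum(pred & y)
    if tp == 0:
        return 0.0
    precision = tp / np.sum(pred)
    recall = tp / np.sum(y)
    return 2 * precision * recall / (precision + recall)

thresholds = np.linspace(0.01, 0.99, 99)
f1s = np.array([f1_at(t) for t in thresholds])
best = np.argmax(f1s)
print(f"best F1 = {f1s[best]:.3f} at threshold {thresholds[best]:.2f} "
      f"(half of best F1 = {f1s[best] / 2:.3f})")
```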
Gap bifurcations in nonlinear dynamical systems.
We investigate the dynamics generated by a type of equation which is common to a variety of physical systems where the undesirable effects of a number of self-consistent nonlinear forces are balanced by an externally imposed controlling harmonic force. We show that the equation presents a new sequence of bifurcations where periodic orbits are created and destroyed in such a nonsimultaneous way that may leave the appropriate phase-space occasionally empty of fundamental harmonic orbits and confined trajectories. A generic analytical model is developed and compared with a concrete physical example.
Decompositions of All Different, Global Cardinality and Related Constraints
Predictive accuracy has been used as the main and often only evaluation criterion for the predictive performance of classification learning algorithms. In recent years, the area under the ROC (Receiver Operating Characteristics) curve, or simply AUC, has been proposed as an alternative single-number measure for evaluating learning algorithms. In this paper, we prove that AUC is a better measure than accuracy. More specifically, we present rigorous definitions of consistency and discriminancy in comparing two evaluation measures for learning algorithms. We then present empirical evaluations and a formal proof to establish that AUC is indeed statistically consistent and more discriminating than accuracy. Our result is quite significant since, for the first time, we formally prove that AUC is a better measure than accuracy in the evaluation of learning algorithms.
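As a toy illustration (not from the paper) of why AUC is the more discriminating measure: two score vectors can yield identical accuracy at a fixed threshold while AUC still separates them.

```python
# Two classifiers with identical accuracy at threshold 0.5 but different AUC.
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

y_true = np.array([0, 0, 0, 0, 1, 1, 1, 1])
scores_a = np.array([0.1, 0.2, 0.8, 0.9, 0.6, 0.7, 0.85, 0.95])   # ranks two negatives above two positives
scores_b = np.array([0.1, 0.2, 0.55, 0.6, 0.7, 0.8, 0.85, 0.95])  # ranks every positive above every negative

for name, s in [("A", scores_a), ("B", scores_b)]:
    acc = accuracy_score(y_true, s >= 0.5)   # both misclassify the same two negatives
    auc = roc_auc_score(y_true, s)
    print(f"classifier {name}: accuracy = {acc:.2f}, AUC = {auc:.3f}")
```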
Synthesis of ZnO/CuO Composite by the Electrochemical Method in Acetic Acid Solution
Metal oxide composites are used in microelectronic circuits, piezoelectrics, fuel cells, sensors, catalysts, anti-corrosion coatings, and solar cells. ZnO/CuO is one such metal oxide composite; the combination of ZnO and CuO is a promising composite for use as a catalyst and as an anti-bacterial agent. The method used in this research was the electrochemical method in an acetic acid solution. The acetic acid used in this research is cheaper than the succinic acid used in previous research, and the electrochemical method has the advantages of being easy to control and inexpensive. The resulting composite was analyzed by XRD and FTIR to determine the crystal phase, structure, and functional groups of the resulting particles. The analysis showed that the ZnO-CuO composite can be produced by the electrochemical method.
Single-energy metal artifact reduction for helical computed tomography of the pelvis in patients with metal hip prostheses
To compare the quality of helical computed tomography (CT) images of the pelvis in patients with metal hip prostheses reconstructed using adaptive iterative dose reduction (AIDR) and AIDR with single-energy metal artifact reduction (SEMAR-A). This retrospective study included 28 patients (mean age, 64.6 ± 11.4 years; 6 men and 22 women). CT images were reconstructed using AIDR and SEMAR-A. Two radiologists evaluated the extent of metal artifacts and the depiction of structures in the pelvic region and looked for mass lesions. A radiologist placed a region of interest within the bladder and recorded CT attenuation. The metal artifacts were significantly reduced in SEMAR-A as compared to AIDR (p < 0.0001). The depictions of the bladder, ureter, prostate/uterus, rectum, and pelvic sidewall were significantly better with SEMAR-A than with AIDR (p < 0.02). All lesions were diagnosed with SEMAR-A, while some were not diagnosed with AIDR. The median and interquartile range (in parentheses) of CT attenuation within the bladder for AIDR were −34.0 (−46.6 to −15.0) Hounsfield units (HU) and were more variable than those seen for SEMAR-A [5.4 (−1.3 to 11.1)] HU (p = 0.033). In comparison with AIDR, SEMAR-A provided pelvic CT images of significantly better quality for patients with metal hip prostheses.
Residual Reconstruction for Block-Based Compressed Sensing of Video
A simple block-based compressed-sensing reconstruction for still images is adapted to video. Incorporating reconstruction from a residual arising from motion estimation and compensation, the proposed technique alternately reconstructs the frames of the video sequence and their corresponding motion fields in an iterative fashion. Experimental results reveal that the proposed technique achieves significantly higher quality than a straightforward reconstruction that applies a still-image reconstruction independently frame by frame, a 3D reconstruction that exploits temporal correlation between frames merely in the form of a motion-agnostic 3D transform, and a similar, yet non-iterative, motion-compensated residual reconstruction.
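The residual idea can be sketched as follows (an illustration under assumptions, not the authors' implementation): each block is sensed with a random Gaussian matrix, a motion-compensated prediction is formed from a previously reconstructed frame, and only the small prediction residual is recovered from the measurement residual. For brevity, a least-squares stand-in replaces the sparsity-promoting solver, and motion estimation is omitted (the true shift is assumed known).

```python
# Simplified block-based compressed sensing with motion-compensated residual
# reconstruction (toy stand-in solver, illustrative parameters).
import numpy as np

B, subrate = 8, 0.5                                      # block size, subsampling rate
rng = np.random.default_rng(0)
Phi = rng.normal(size=(int(subrate * B * B), B * B))     # block measurement matrix
Phi_pinv = np.linalg.pinv(Phi)                           # least-squares stand-in solver

def measure(frame):
    """Blockwise compressed measurements of a frame."""
    H, W = frame.shape
    blocks = [frame[i:i + B, j:j + B].ravel()
              for i in range(0, H, B) for j in range(0, W, B)]
    return [Phi @ b for b in blocks]

def reconstruct_from_residual(y, prediction):
    """Reconstruct frame = prediction + recovered residual, block by block."""
    H, W = prediction.shape
    out = np.zeros_like(prediction)
    k = 0
    for i in range(0, H, B):
        for j in range(0, W, B):
            pred_block = prediction[i:i + B, j:j + B].ravel()
            residual = Phi_pinv @ (y[k] - Phi @ pred_block)
            out[i:i + B, j:j + B] = (pred_block + residual).reshape(B, B)
            k += 1
    return out

# Toy example: frame2 is a shifted, slightly perturbed copy of frame1; using the
# motion-compensated prediction leaves only a small residual to recover.
frame1 = rng.normal(size=(32, 32))
motion_compensated = np.roll(frame1, 1, axis=1)          # assume the shift is known
frame2 = motion_compensated + 0.1 * rng.normal(size=(32, 32))
y2 = measure(frame2)
rec_plain = reconstruct_from_residual(y2, np.zeros_like(frame2))   # no prediction
rec_mc = reconstruct_from_residual(y2, motion_compensated)         # with prediction
print("error without prediction:", np.linalg.norm(rec_plain - frame2))
print("error with prediction:   ", np.linalg.norm(rec_mc - frame2))
```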
"Theory of mind" in schizophrenia: a review of the literature.
The term theory of mind (ToM) refers to the capacity to infer one's own and other persons' mental states. A substantial body of research has highlighted the evolution of ToM in nonhuman primates, its emergence during human ontogeny, and impaired ToM in a variety of neuropsychiatric disorders, including schizophrenia. There is good empirical evidence that ToM is specifically impaired in schizophrenia and that many psychotic symptoms-for instance, delusions of alien control and persecution, the presence of thought and language disorganization, and other behavioral symptoms-may best be understood in light of a disturbed capacity in patients to relate their own intentions to executing behavior, and to monitor others' intentions. However, it is still under debate how an impaired ToM in schizophrenia is associated with other aspects of cognition, how the impairment fluctuates with acuity or chronicity of the schizophrenic disorder, and how this affects the patients' use of language and social behavior. In addition to these potential research areas, future studies may also address whether patients could benefit from cognitive training in this domain.
On Power Splitting Games in Distributed Computation: The Case of Bitcoin Pooled Mining
Several new services incentivize clients to compete in solving large computation tasks in exchange for financial rewards. This model of competitive distributed computation enables every user connected to the Internet to participate in a game in which he splits his computational power among a set of competing pools -- the game is called a computational power splitting game. We formally model this game and show its utility in analyzing the security of pool protocols that dictate how financial rewards are shared among the members of a pool. As a case study, we analyze the Bitcoin cryptocurrency, which attracts computing power roughly equivalent to billions of desktop machines, over 70% of which is organized into public pools. We show that existing pool reward sharing protocols are insecure in our game-theoretic analysis under an attack strategy called the "block withholding attack". This attack is a topic of debate, initially thought to be ill-incentivized in today's pool protocols, i.e., causing a net loss to the attacker, and later argued to be always profitable. Our analysis shows that the attack is always well-incentivized in the long run, but may not be so for a short duration. This implies that existing pool protocols are insecure, and if the attack is conducted systematically, Bitcoin pools could lose millions of dollars' worth within months. The equilibrium state is a mixed strategy -- that is, in equilibrium all clients are incentivized to attack probabilistically to maximize their payoffs rather than participate honestly. As a result, the Bitcoin network is incentivized to waste a part of its resources simply to compete.
Dual Circularly Polarized Broadside Beam Metasurface Antenna
This paper presents the design of a modulated metasurface (MTS) antenna capable of providing both right-hand (RH) and left-hand (LH) circularly polarized (CP) boresight radiation at Ku-band (13.5 GHz). This antenna is based on the interaction of two cylindrical-wavefront surface wave (SW) modes of transverse electric (TE) and transverse magnetic (TM) types with a rotationally symmetric, anisotropic-modulated MTS placed on top of a grounded slab. A properly designed centered circular waveguide feed excites the two orthogonal (decoupled) SW modes and guarantees the balance of the power associated with each of them. By a proper selection of the anisotropy and modulation of the MTS pattern, the phase velocities of the two modes are synchronized, and leakage is generated in the broadside direction with two orthogonal linear polarizations. When the circular waveguide is excited with two mutually orthogonal TE11 modes in phase-quadrature, an LHCP or RHCP antenna is obtained. This paper explains the feeding system and the MTS requirements that guarantee the balanced conditions of the TM/TE SWs and the consequent generation of dual CP boresight radiation.
Moral dilemmas in cognitive neuroscience of moral decision-making: A principled review
Moral dilemma tasks have been a much appreciated experimental paradigm in empirical studies on moral cognition for decades and have, more recently, also become a preferred paradigm in the field of cognitive neuroscience of moral decision-making. Yet, studies using moral dilemmas suffer from two main shortcomings: first, they lack methodological homogeneity, which impedes reliable comparisons of results across studies, thus making a meta-analysis manifestly impossible; and second, they overlook control of relevant design parameters. In this paper, we review from a principled standpoint the studies that use moral dilemmas to approach the psychology of moral judgment and its neural underpinnings. We present a systematic review of 19 experimental design parameters that can be identified in moral dilemmas. Accordingly, our analysis establishes a methodological basis for the required homogeneity between studies and suggests the consideration of experimental aspects that have not yet received much attention despite their relevance.
Improving the intrinsic calibration of a Velodyne LiDAR sensor
LiDAR (light detection and ranging) sensors are widely used in research and development. As such, they form the basis for the evaluation of newly developed ADAS (Advanced Driver Assistance Systems) functions in the automotive field, where they are used to establish ground truth. However, the factory calibration provided for the sensors is not able to satisfy the high accuracy requirements of such applications. In this paper we propose a concept to easily improve the existing calibration of a Velodyne LiDAR sensor without the need for special calibration setups; the approach can even be used to enhance already recorded data.
Role of self-care in management of diabetes mellitus
Diabetes mellitus (DM) is a chronic progressive metabolic disorder characterized by hyperglycemia, mainly due to an absolute (Type 1 DM) or relative (Type 2 DM) deficiency of the hormone insulin. The World Health Organization estimates that more than 346 million people worldwide have DM, a number that is likely to more than double by 2030 without any intervention. The needs of diabetic patients are not limited to adequate glycemic control but also extend to preventing complications, limiting disability, and rehabilitation. There are seven essential self-care behaviors in people with diabetes which predict good outcomes, namely healthy eating, being physically active, monitoring blood sugar, complying with medications, good problem-solving skills, healthy coping skills, and risk-reduction behaviors. All seven behaviors have been found to be positively correlated with good glycemic control, reduction of complications, and improvement in quality of life. Individuals with diabetes have been shown to make a dramatic impact on the progression and development of their disease by participating in their own care. Despite this fact, compliance or adherence to these activities has been found to be low, especially when looking at long-term changes. Though multiple demographic, socio-economic, and social-support factors can be considered positive contributors to facilitating self-care activities in diabetic patients, the role of clinicians in promoting self-care is vital and has to be emphasized. Given the multi-faceted nature of the problem, a systematic, multi-pronged, and integrated approach is required for promoting self-care practices among diabetic patients to avert long-term complications.
Prevalence and determinants of essential newborn care practices in the Lawra District of Ghana
BACKGROUND There was less than satisfactory progress, especially in sub-Saharan Africa, towards the child and maternal mortality targets of Millennium Development Goals (MDGs) 4 and 5. The main aim of this study was to describe the prevalence and determinants of essential newborn care practices in the Lawra District of Ghana. METHODS A cross-sectional study was carried out in June 2014 on a sample of 422 lactating mothers and their children aged between 1 and 12 months. A systematic random sampling technique was used to select the study participants, who attended the post-natal clinic in the Lawra district hospital. RESULTS Of the 418 newborns, only 36.8% (154) were judged to have had safe cord care, 34.9% (146) optimal thermal care, and 73.7% (308) were considered to have had adequate neonatal feeding. The overall prevalence of adequate newborn care, comprising good cord care, optimal thermal care and good neonatal feeding practices, was only 15.8%. Mothers who had attained at least Senior High School were 20.5 times more likely to provide optimal thermal care [AOR 22.54; 95% CI (2.60-162.12)], compared to women who had no formal education at all. Women who received adequate ANC services were 4.0 times (AOR = 4.04 [CI: 1.53, 10.66]) and 1.9 times (AOR = 1.90 [CI: 1.01, 3.61]) more likely to provide safe cord care and good neonatal feeding, respectively, as compared to their counterparts who did not get adequate ANC. However, adequate ANC services were unrelated to optimal thermal care. Compared to women who delivered at home, women who delivered their index baby in a health facility were 5.6 times more likely to provide safe cord care for their babies (AOR = 5.60, CI: 1.19-23.30, p = 0.03). CONCLUSIONS The coverage of essential newborn care practices was generally low. Essential newborn care practices were positively associated with high maternal educational attainment, adequate utilization of antenatal care services and high maternal knowledge of newborn danger signs. Therefore, greater improvement in essential newborn care practices could be attained through proven low-cost interventions such as effective ANC services and health and nutrition education spanning from the community to the health facility level.
Follow the money: understanding economics of online aggregation and advertising
The large-scale collection and exploitation of personal information to drive targeted online advertisements has raised privacy concerns. As a step towards understanding these concerns, we study the relationship between how much information is collected and how valuable it is for advertising. We use HTTP traces consisting of millions of users to aid our study and also present the first comparative study between aggregators. We develop a simple model that captures the various parameters of today's advertising revenues, whose values are estimated via the traces. Our results show that per aggregator revenue is skewed (5% accounting for 90% of revenues), while the contribution of users to advertising revenue is much less skewed (20% accounting for 80% of revenue). Google is dominant in terms of revenue and reach (presence on 80% of publishers). We also show that if all 5% of the top users in terms of revenue were to install privacy protection, with no corresponding reaction from the publishers, then the revenue can drop by 30%.
Quasar: resource-efficient and QoS-aware cluster management
Cloud computing promises flexibility and high performance for users and high cost-efficiency for operators. Nevertheless, most cloud facilities operate at very low utilization, hurting both cost effectiveness and future scalability. We present Quasar, a cluster management system that increases resource utilization while providing consistently high application performance. Quasar employs three techniques. First, it does not rely on resource reservations, which lead to underutilization as users do not necessarily understand workload dynamics and physical resource requirements of complex codebases. Instead, users express performance constraints for each workload, letting Quasar determine the right amount of resources to meet these constraints at any point. Second, Quasar uses classification techniques to quickly and accurately determine the impact of the amount of resources (scale-out and scale-up), type of resources, and interference on performance for each workload and dataset. Third, it uses the classification results to jointly perform resource allocation and assignment, quickly exploring the large space of options for an efficient way to pack workloads on available resources. Quasar monitors workload performance and adjusts resource allocation and assignment when needed. We evaluate Quasar over a wide range of workload scenarios, including combinations of distributed analytics frameworks and low-latency, stateful services, both on a local cluster and a cluster of dedicated EC2 servers. At steady state, Quasar improves resource utilization by 47% in the 200-server EC2 cluster, while meeting performance constraints for workloads of all types.
Iteratively re-weighted least squares for sparse signal reconstruction from noisy measurements
Finding sparse solutions of under-determined systems of linear equations is a problem of significant importance in signal processing and statistics. In this paper we study an iteratively reweighted least squares (IRLS) approach for finding sparse solutions of underdetermined systems of equations based on a smooth approximation of the L0 norm, and the method is extended to find sparse solutions from noisy measurements. Analysis of the proposed methods shows that weaker conditions on the sensing matrices are required. Simulation results demonstrate that the proposed method requires fewer samples than existing methods, while maintaining a reconstruction error of the same order and demanding lower computational complexity.
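A minimal IRLS sketch in the spirit of the approach described above (smoothed-L0 reweighting combined with a regularized weighted least-norm update for noisy measurements) is shown below; the specific weighting schedule, regularization, and parameter values are illustrative assumptions rather than the paper's exact algorithm.

```python
# Iteratively reweighted least squares for sparse recovery from noisy data.
# Weights follow a smoothed-L0 style rule w_i = (x_i^2 + eps)^(p/2 - 1), and
# each iterate solves a regularized weighted least-norm problem.
import numpy as np

def irls_sparse(A, y, p=0.0, n_iter=50, lam=1e-3, eps0=1.0):
    m, n = A.shape
    x = A.T @ np.linalg.solve(A @ A.T + lam * np.eye(m), y)   # least-norm initialization
    eps = eps0
    for _ in range(n_iter):
        q = (x ** 2 + eps) ** (1 - p / 2)                     # q_i = 1 / w_i
        AQ = A * q                                            # A @ diag(q)
        x = q * (A.T @ np.linalg.solve(AQ @ A.T + lam * np.eye(m), y))
        eps = max(eps / 2, 1e-8)                              # anneal the smoothing
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m, k = 200, 80, 8
    A = rng.normal(size=(m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
    y = A @ x_true + 0.01 * rng.normal(size=m)                # noisy measurements
    x_hat = irls_sparse(A, y)
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```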
Mobile Money, Smallholder Farmers, and Household Welfare in Kenya
The use of mobile phones has increased rapidly in many developing countries, including in rural areas. Besides reducing the costs of communication and improving access to information, mobile phones are an enabling technology for other innovations. One important example is mobile phone-based money transfers, which could be very relevant for the rural poor, who are often underserved by the formal banking system. We analyze impacts of mobile money technology on the welfare of smallholder farm households in Kenya. Using panel survey data and regression models we show that mobile money use has a positive impact on household income. One important pathway is through remittances received from relatives and friends. Such remittances contribute to income directly, but they also help to reduce risk and liquidity constraints, thus promoting agricultural commercialization. Mobile money users apply more purchased farm inputs, market a larger proportion of their output, and have higher profits than non-users of this technology. These results suggest that mobile money can help to overcome some of the important smallholder market access constraints that obstruct rural development and poverty reduction.
Color image database TID2013: Peculiarities and preliminary results
Visual quality of color images is an important aspect in various applications of digital image processing and multimedia. A large number of visual quality metrics (indices) have been proposed recently. In order to assess their reliability, several databases of color images with various sets of distortions have been exploited. Here we present a new database called TID2013 that contains a larger number of images. Compared to its predecessor TID2008, seven new types and one more level of distortions are included. The need for considering these new types of distortions is briefly described. Besides, preliminary results of experiments with a large number of volunteers for determining the mean opinion score (MOS) are presented. Spearman and Kendall rank-order correlation coefficients between MOS and a set of popular metrics are calculated and presented. Their analysis shows that the adequacy of the existing metrics still needs improvement. Special attention should be paid to accounting for color information and observers' focus of attention on locally active areas in images.
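For reference, the rank-order correlations reported in such evaluations are computed as in the following small sketch (illustrative numbers, not TID2013 results).

```python
# Spearman and Kendall rank-order correlations between mean opinion scores (MOS)
# and an objective quality metric, as used when benchmarking metrics on a database.
import numpy as np
from scipy.stats import spearmanr, kendalltau

mos = np.array([4.8, 4.1, 3.9, 3.2, 2.7, 2.5, 1.9, 1.2])              # subjective scores
metric = np.array([0.97, 0.95, 0.90, 0.88, 0.80, 0.82, 0.70, 0.55])   # objective metric values

srocc, _ = spearmanr(mos, metric)
krocc, _ = kendalltau(mos, metric)
print(f"SROCC = {srocc:.3f}, KROCC = {krocc:.3f}")
```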
Spatial working memory in humans as revealed by PET
The concept of working memory is central to theories of human cognition because working memory is essential to such human skills as language comprehension and deductive reasoning [1-4]. Working memory is thought to be composed of two parts, a set of buffers that temporarily store information in either a phonological or visuospatial form, and a central executive responsible for various computations such as mental arithmetic [5,6]. Although most data on working memory come from behavioural studies of normal and brain-injured humans [7], there is evidence about its physiological basis from invasive studies of monkeys [8-10]. Here we report positron emission tomography (PET) studies of regional cerebral blood flow in normal humans that reveal activation in right-hemisphere prefrontal, occipital, parietal and premotor cortices accompanying spatial working memory processes. These results begin to uncover the circuitry of a working memory system in humans.
A SEM-neural network approach for predicting antecedents of m-commerce acceptance
Higher penetration of powerful mobile devices – especially smartphones – and high-speed mobile internet access are leading to a broader offering and higher levels of usage of these devices in commercial activities, especially among younger generations. The purpose of this paper is to determine the key factors that influence consumers’ adoption of mobile commerce. The extended model incorporates basic TAM predictors, such as perceived usefulness and perceived ease of use, but also several external variables, such as trust, mobility, customization and customer involvement. Data were collected from 224 m-commerce consumers. First, structural equation modeling (SEM) was used to determine which variables had signif-
Allopurinol improves myocardial efficiency in patients with idiopathic dilated cardiomyopathy.
BACKGROUND Dilated cardiomyopathy is characterized by an imbalance between left ventricular performance and myocardial energy consumption. Experimental models suggest that oxidative stress resulting from increased xanthine oxidase (XO) activity contributes to this imbalance. Accordingly, we hypothesized that XO inhibition with intracoronary allopurinol improves left ventricular efficiency in patients with idiopathic dilated cardiomyopathy. METHODS AND RESULTS Patients (n=9; ejection fraction, 29+/-3%) were instrumented to assess myocardial oxygen consumption (MVO(2)), peak rate of rise of left ventricular pressure (dP/dt(max)), stroke work (SW), and efficiency (dP/dt(max)/MVO(2) and SW/MVO(2)) at baseline and after sequential infusions of intracoronary allopurinol (0.5, 1.0, and 1.5 mg/min, each for 15 minutes). Allopurinol caused a significant decrease in MVO(2) (peak effect, -16+/-5%; P<0.01; n=9) with no parallel decrease in dP/dt(max) or SW and no change in ventricular load. The net result was a substantial improvement in myocardial efficiency (peak effects: dP/dt(max)/MVO(2), 22+/-9%, n=9; SW/MVO(2), 40+/-17%, n=6; both P<0.05). These effects were apparent despite concomitant treatment with standard heart failure therapy, including ACE inhibitors and beta-blockers. XO and its parent enzyme xanthine dehydrogenase were more abundant in failing explanted human myocardium on immunoblot. CONCLUSIONS These findings indicate that XO activity may contribute to abnormal energy metabolism in human cardiomyopathy. By reversing the energetic inefficiency of the failing heart, pharmacological XO inhibition represents a potential novel therapeutic strategy for the treatment of human heart failure.
Task-Agnostic Meta-Learning for Few-shot Learning
Meta-learning approaches have been proposed to tackle the few-shot learning problem. Typically, a meta-learner is trained on a variety of tasks in the hopes of being generalizable to new tasks. However, the generalizability of a meta-learner on new tasks could be fragile when it is over-trained on existing tasks during the meta-training phase. In other words, the initial model of a meta-learner could be too biased towards existing tasks to adapt to new tasks, especially when only very few examples are available to update the model. To avoid a biased meta-learner and improve its generalizability, we propose a novel paradigm of Task-Agnostic Meta-Learning (TAML) algorithms. Specifically, we present an entropy-based approach that meta-learns an unbiased initial model with the largest uncertainty over the output labels by preventing it from over-performing in classification tasks. Alternatively, a more general inequality-minimization TAML is presented for more ubiquitous scenarios by directly minimizing the inequality of initial losses beyond the classification tasks wherever a suitable loss can be defined. Experiments on benchmark datasets demonstrate that the proposed approaches outperform the compared meta-learning algorithms in both few-shot classification and reinforcement learning tasks.
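The two flavors of task-agnostic regularization described above can be sketched numerically as follows; the entropy term and the Gini-style inequality measure shown here are illustrative formulas under assumptions, not the paper's exact objective, and the surrounding meta-learning loop (inner/outer updates) is omitted.

```python
# Illustrative TAML-style regularizers: (1) entropy of the *initial* model's
# predictions, to be maximized so the initialization is maximally uncertain;
# (2) an inequality measure (here a Gini coefficient) over per-task initial
# losses, to be minimized so no task is favored by the initialization.
import numpy as np

def prediction_entropy(probs):
    """Mean entropy of the initial model's class predictions on a task batch."""
    probs = np.clip(probs, 1e-12, 1.0)
    return float(np.mean(-np.sum(probs * np.log(probs), axis=1)))

def gini_inequality(initial_losses):
    """Gini coefficient of per-task initial losses (0 = perfectly task-agnostic)."""
    x = np.asarray(initial_losses, dtype=float)
    diffs = np.abs(x[:, None] - x[None, :])
    return float(diffs.sum() / (2 * len(x) ** 2 * x.mean()))

# Example: an unbiased initialization is near-uniform over 5 classes and has
# similar initial losses across tasks, so entropy is high and inequality is low.
probs_unbiased = np.full((32, 5), 0.2)
probs_biased = np.tile([0.9, 0.025, 0.025, 0.025, 0.025], (32, 1))
print(prediction_entropy(probs_unbiased), prediction_entropy(probs_biased))
print(gini_inequality([1.0, 1.1, 0.9]), gini_inequality([0.1, 2.0, 5.0]))
```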
Sentiment mining in WebFountain
WebFountain is a platform for very large-scale text analytics applications that allows uniform access to a wide variety of sources. It enables the deployment of a variety of document-level and corpus-level miners in a scalable manner, and feeds information that drives end-user applications through a set of hosted Web services. Sentiment (or opinion) mining is one of the most useful analyses for various end-user applications, such as reputation management. Instead of classifying the sentiment of an entire document about a subject, our sentiment miner determines sentiment of each subject reference using natural language processing techniques. In this paper, we describe the fully functional system environment and the algorithms, and report the performance of the sentiment miner. The performance of the algorithms was verified on online product review articles, and more general documents including Web pages and news articles.
Rosuvastatin pharmacokinetics and pharmacogenetics in Caucasian and Asian subjects residing in the United States
Systemic exposure to rosuvastatin in Asian subjects living in Japan or Singapore is approximately twice that observed in Caucasian subjects in Western countries or in Singapore. This study was conducted to determine whether pharmacokinetic differences exist among the most populous Asian subgroups and Caucasian subjects in the USA. Rosuvastatin pharmacokinetics was studied in Chinese, Filipino, Asian-Indian, Korean, Vietnamese, Japanese and Caucasian subjects residing in California. Plasma concentrations of rosuvastatin and metabolites after a single 20-mg dose were determined by mass spectrometric detection. The influence of polymorphisms in SLCO1B1 (T521>C [Val174Ala] and A388>G [Asn130Asp]) and in ABCG2 (C421>A [Gln141Lys]) on exposure to rosuvastatin was also assessed. The average rosuvastatin area under the curve from time zero to time of last quantifiable concentration was between 64 and 84 % higher, and maximum drug concentration was between 70 and 98 % higher in East Asian subgroups compared with Caucasians. Data for Asian-Indians was intermediate to these two ethnic groups at 26 and 29 %, respectively. Similar increases in exposure to N-desmethyl rosuvastatin and rosuvastatin lactone were observed. Rosuvastatin exposure was higher in subjects carrying the SLCO1B1 521C allele compared with that in non-carriers of this allele. Similarly, exposure was higher in subjects carrying the ABCG2 421A allele compared with that in non-carriers. Plasma exposure to rosuvastatin and its metabolites was significantly higher in Asian populations residing in the USA compared with Caucasian subjects living in the same environment. This study suggests that polymorphisms in the SLCO1B1 and ABCG2 genes contribute to the variability in rosuvastatin exposure.
Low Complexity Damped Gauss-Newton Algorithms for CANDECOMP/PARAFAC
The damped Gauss-Newton (dGN) algorithm for CANDECOMP/PARAFAC (CP) decomposition has been successfully applied to difficult tensor factorizations, such as those with collinear factors or factors of different magnitudes. Nevertheless, for factorization of an N-D tensor of size I1 × ... × IN with rank R, the algorithm is computationally demanding due to the construction of a large approximate Hessian of size (R Σ_n I_n) × (R Σ_n I_n) and its inversion. In this paper, we propose a fast implementation of the dGN algorithm based on novel expressions for the inverse approximate Hessian in block form. The new implementation has a lower computational complexity: besides computation of the gradient (this part is common to both methods), it involves inversion of a matrix of size NR² × NR², which is much smaller than the Hessian when Σ_n I_n ≫ NR. In addition, the implementation has lower memory requirements, because neither the Hessian nor its inverse needs to be stored in its entirety at one time. A variant of the algorithm working with complex-valued data is proposed as well. The complexity and performance of the proposed algorithm are compared with those of dGN and ALS with line search on examples with difficult benchmark tensors.
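As a concrete illustration of the size reduction (the tensor dimensions and rank below are chosen for illustration, not taken from the paper):

```latex
% Worked size comparison for a 3-way tensor with I_1 = I_2 = I_3 = 100 and rank R = 10
\[
\underbrace{\Bigl(R\sum_{n=1}^{N} I_n\Bigr) \times \Bigl(R\sum_{n=1}^{N} I_n\Bigr)}_{\text{approximate Hessian}}
 = 3000 \times 3000,
\qquad
\underbrace{NR^2 \times NR^2}_{\text{matrix inverted by the fast dGN}}
 = 300 \times 300 .
\]
```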
Ozonation of drinking water: Part I. Oxidation kinetics and product formation
The oxidation of organic and inorganic compounds during ozonation can occur via ozone or OH radicals or a combination thereof. The oxidation pathway is determined by the ratio of ozone and OH radical concentrations and the corresponding kinetics. A huge database with several hundred rate constants for ozone and a few thousand rate constants for OH radicals is available. Ozone is an electrophile with a high selectivity. The second-order rate constants for oxidation by ozone vary over 10 orders of magnitude, between <0.1 M⁻¹ s⁻¹ and about 7 × 10⁹ M⁻¹ s⁻¹. The reactions of ozone with drinking-water-relevant inorganic compounds are typically fast and occur by an oxygen atom transfer reaction. Organic micropollutants are oxidized with ozone selectively. Ozone reacts mainly with double bonds, activated aromatic systems and non-protonated amines. In general, electron-donating groups enhance the oxidation by ozone whereas electron-withdrawing groups reduce the reaction rates. Furthermore, the kinetics of direct ozone reactions depend strongly on the speciation (acid-base, metal complexation). The reaction of OH radicals with the majority of inorganic and organic compounds is nearly diffusion-controlled. The degree of oxidation by ozone and OH radicals is given by the corresponding kinetics. Product formation from the ozonation of organic micropollutants in aqueous systems has only been established for a few compounds. It is discussed for olefins, amines and aromatic compounds.
Personality Traits, Self-Esteem and Academic Achievement in Secondary School Students in Campania, Italy
For years educators have attempted to identify effective predictors of scholastic achievement, and several personality variables have been described as significantly correlated with grade performance. Since one of the crucial practical implications of identifying the factors involved in academic achievement is to facilitate the teaching-learning process, the main variables that have been associated with achievement should be investigated simultaneously in order to provide information as to their relative merit in the population examined. In contrast with this premise, limited research has been conducted on the importance of personality traits and self-esteem for scholastic achievement. To this aim, personality traits (as defined by the Five Factor Model), self-esteem, and socioeconomic status were evaluated in a sample of 439 subjects (225 males) with an average age of 12.36 years (SD = 0.99) from three first-level secondary school classes in Southern Italy. The academic results correlated significantly both with personality traits and with some dimensions of self-esteem. Moreover, hierarchical regression analyses brought to light, in particular, the predictive value of openness to experience for academic marks. The results, stressing the multidimensional nature of academic performance, indicate a need to adopt complex approaches for undertaking action addressing students’ difficulties in attaining good academic achievement.
Prospective validation of the palliative prognostic index in patients with cancer.
The Palliative Prognostic Index (PPI) was devised and validated in patients with cancer in a hospice inpatient unit in Japan. The aim of this study was to test its accuracy in a different population, in a range of care settings, and in those receiving palliative chemotherapy and radiotherapy. The information required to calculate the PPI was recorded for patients referred to a hospital-based consultancy palliative care service, a hospice home care service, and a hospice inpatient unit. One hundred ninety-four patients were included in the study, 43% of whom were receiving chemotherapy, radiotherapy, or both. Use of the PPI split patients into three subgroups based on PPI score. Group 1 corresponded to patients with PPI ≤ 4, median survival 68 days (95% confidence interval [CI] 52, 115 days). Group 2 corresponded to those with PPI > 4 and ≤ 6, median survival 21 days (95% CI 13, 33), and Group 3 corresponded to patients with PPI > 6, median survival 5 days (95% CI 3, 11). Using the PPI, survival of less than three weeks was predicted with a positive predictive value of 86% and negative predictive value of 76%. Survival of less than six weeks was predicted with a positive predictive value of 91% and negative predictive value of 64%. The PPI is quick and easy to use, and can be applied to patients with cancer in hospital, in hospice, and at home. It may be used by general physicians to achieve prognostic accuracy comparable, if not superior, to that of physicians experienced in oncology and palliative care, and by oncology and palliative care specialists to improve the accuracy of their survival predictions.
Safety of Gadoterate Meglumine (Gd-DOTA) as a Contrast Agent for Magnetic Resonance Imaging
BACKGROUND Safety is a primary concern with contrast agents used for MRI. If precautions could be taken before the repeated administration of gadolinium-based contrast media, then the awareness and management of adverse reactions would be more efficient. OBJECTIVES To assess the safety and efficacy of gadoterate meglumine (Gd-DOTA) [Magnescope® in Japan, Dotarem® in other countries], a gadolinium-based contrast agent, in patients undergoing imaging of the brain/spinal cord and/or trunk/limbs, and to identify factors associated with the onset of adverse reactions. METHODS The study ran for 4 years and included 3444 cases. The study was conducted before it became known that gadolinium-based contrast agents could trigger the development of nephrogenic systemic fibrosis. Patients for whom the contrast agent was indicated and who underwent imaging of the brain/spinal cord and/or trunk/limbs by MRI were enrolled. There were 1300 inpatients who were followed up during hospitalization (for several days), and 2144 outpatients who were followed up for at least 2 hours on-site. After Gd-DOTA administration, 13 patient baseline characteristics were used to explore factors that might predict a greater likelihood of acute non-renal adverse reactions. The physician's appraisal of the efficacy of Gd-DOTA was also assessed. RESULTS A total of 40 adverse reactions were recorded in 32 patients, giving an overall incidence of adverse reactions of 0.93%. Gastrointestinal disorders were the most commonly reported adverse reactions (0.49%). Most adverse reactions reported were of mild intensity and no serious adverse reactions were reported. This study found that statistically significant risk factors for adverse reactions were general patient condition, liver disorder, kidney disorder, health complications, concomitant treatments, and Gd-DOTA dose (although the incidence of adverse reactions was not dose dependent). In the majority of cases (99.53%), the efficacy of Gd-DOTA was rated as 'effective' or 'very effective'; only the presence of kidney disorder was associated with a significantly greater likelihood of Gd-DOTA inefficacy. CONCLUSION Overall, this post-marketing surveillance study did not reveal any untoward or unexpected findings concerning the safety or efficacy of Gd-DOTA. The low incidence of adverse reactions (<1%) and the absence of serious adverse reactions reported during the survey period showed that Gd-DOTA was very well tolerated. The use of Gd-DOTA as an MRI-enhancing contrast medium in the clinical practice setting appears to be safe and effective.
A 72-week randomized study of the safety and efficacy of a stavudine to zidovudine switch at 24 weeks compared to zidovudine or tenofovir disoproxil fumarate when given with lamivudine and nevirapine.
BACKGROUND Due to superior long-term toxicity profiles, zidovudine (AZT) and tenofovir disoproxil fumarate (TDF) are preferred over stavudine (d4T) for first-line antiretroviral regimens. However, short-term d4T use could be beneficial in avoiding AZT-induced anaemia. METHODS We randomized (1:1:1) 150 treatment-naive Thai HIV-infected adults with CD4(+) T-cell count <350 cells/mm(3) to arm 1 (24-week GPO-VIR S30(®) [d4T plus lamivudine (3TC) plus nevirapine (NVP)] followed by 48-week GPO-VIR Z250(®) [AZT plus 3TC plus NVP]), arm 2 (72-week GPO-VIR Z250(®)) or arm 3 (72-week TDF plus emtricitabine [FTC] plus NVP). Haemoglobin (Hb), dual energy x-ray absorptiometry, neuropathic signs, estimated glomerular filtration rate (eGFR), CD4(+) T-cell count, plasma HIV RNA and adherence were assessed. RESULTS In an intention-to-treat analysis, mean Hb decreased from baseline to week 24 in arm 2 compared with arm 1 (-0.19 versus 0.68 g/dl; P=0.001) and arm 3 (0.48 g/dl; P=0.010). Neuropathic signs were more common in arm 2 compared with arm 3 (20.4 versus 4.2%; P=0.028) at week 24. There were no differences in changes in peripheral fat and eGFR from baseline to weeks 24 and 72 among arms. CD4(+) T-cell count increased more in arm 1 than arms 2 and 3 from baseline to week 24 (168 versus 117 and 118 cells/mm(3); P=0.01 and 0.02, respectively) but the increase from baseline to week 72 was similar among arms. CONCLUSIONS A 24-week d4T lead-in therapy caused less anaemia and greater initial CD4(+) T-cell count increase than initiating treatment with AZT. This strategy could be considered in patients with baseline anaemia or low CD4(+) T-cell count. If confirmed in a larger study, this may guide global recommendations on antiretroviral initiation where AZT is more commonly used than TDF.
Rumor detection in Chinese via crowd responses
In recent years, microblogging platforms have become good places to spread various kinds of spam, so the problem of gauging information credibility on social networks has received considerable attention, especially in emergency situations. Unlike previous studies that detect rumors using tweets' inherent attributes, in this work we shift the premise and focus on identifying event rumors on Weibo by extracting features from crowd responses, that is, the texts of retweets (reposting tweets) and comments under a certain social event. First, the paper proposes a method for collecting topic data, including a sample set of tweets confirmed to be false rumors based on information from the official rumor-busting service provided by Weibo. Second, a clustering analysis of tweets is performed to examine the text features extracted from retweets and comments, and a classifier is trained on the observed feature distribution to automatically distinguish rumors from a mixed set of valid news and false information. The experiments show that the new features we propose are indeed effective for the classification, and in particular some stop words and punctuation marks that are treated as noise in previous work can play an important role in rumor detection. To the best of our knowledge, this work is the first to detect rumors in Chinese via crowd responses in an emergency situation.
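A toy sketch of the classification step (illustrative data and features, not the paper's Weibo dataset or exact feature set) that keeps punctuation and stop-word-like tokens as signal rather than discarding them:

```python
# Train a rumor classifier on crowd-response text; character n-grams retain
# punctuation such as "??" and "!!!", which the abstract notes can be useful.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each "document" is the concatenated retweet/comment text under one event.
responses = [
    "really?? source please!!! this looks fake...",
    "can anyone confirm? I doubt it, no official statement yet???",
    "confirmed by the city government, see the press release.",
    "official media already reported this, link attached.",
]
labels = [1, 1, 0, 0]   # 1 = rumor, 0 = non-rumor

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # keeps punctuation features
    LogisticRegression(max_iter=1000),
)
model.fit(responses, labels)
print(model.predict(["is this true?? anyone??", "press office confirms the report."]))
```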
A SPRINT to the finish, or just the beginning? Implications of the SPRINT results for nephrologists.
The Systolic Blood Pressure Intervention Trial (SPRINT) demonstrated a significant reduction in major cardiovascular events and all-cause mortality with intensive blood pressure control in older individuals at high cardiovascular risk, including patients with chronic kidney disease and mild proteinuria. Nephrologists should consider the SPRINT results when determining the optimal blood pressure target for patients with chronic kidney disease.
Opportunities for rehabilitation of patients with radiation fibrosis syndrome.
This review discusses the pathophysiology, evaluation, and treatment of neuromuscular, musculoskeletal, and functional disorders that can result as late effects of radiation treatment. Although radiation therapy is often an effective method of killing cancer cells, it can also damage nearby blood vessels that nourish the skin, ligaments, tendons, muscles, nerves, bones and lungs. This can result in a progressive condition called radiation fibrosis syndrome (RFS). It is generally a late complication of radiotherapy which may manifest clinically years after treatment. Radiation-induced damage can include "myelo-radiculo-plexo-neuro-myopathy," causing muscle weakness and dysfunction and contributing to neuromuscular injury. RFS is a serious and lifelong disorder which, nevertheless, may often be lessened when it is identified and rehabilitated early enough. Treatment should be a multi-component program consisting of education, physical therapy, occupational therapy, orthotics, as well as medications.
How to Measure Motivation: A Guide for the Experimental Social Psychologist
This article examines cognitive, affective, and behavioral measures of motivation and reviews their use throughout the discipline of experimental social psychology. We distinguish between two dimensions of motivation (outcome-focused motivation and process-focused motivation). We discuss circumstances under which measures may help distinguish between different dimensions of motivation, as well as circumstances under which measures may capture different dimensions of motivation in similar ways. Furthermore, we examine situations in which various measures may capture fluctuations in nonmotivational factors, such as learning or physiological depletion. This analysis seeks to advance research in experimental social psychology by highlighting the need for caution when selecting measures of motivation and when interpreting fluctuations captured by these measures. Motivation – the psychological force that enables action – has long been the object of scientific inquiry (Carver & Scheier, 1998; Festinger, 1957; Fishbein & Ajzen, 1974; Hull, 1932; Kruglanski, 1996; Lewin, 1935; Miller, Galanter, & Pribram, 1960; Mischel, Shoda, & Rodriguez, 1989; Zeigarnik, 1927). Because motivation is a psychological construct that cannot be observed or recorded directly, studying it raises an important question: how to measure motivation? Researchers measure motivation in terms of observable cognitive (e.g., recall, perception), affective (e.g., subjective experience), behavioral (e.g., performance), and physiological (e.g., brain activation) responses and using self-reports. Furthermore, motivation is measured in relative terms: compared to previous or subsequent levels of motivation or to motivation in a different goal state (e.g., salient versus non-salient goal). For example, following exposure to a health-goal prime (e.g., a gym membership card), an individual might be more motivated to exercise now than she was 20 minutes ago (before exposure to the prime), or than another person who was not exposed to the same prime. An important aspect of determining how to measure motivation is understanding what type of motivation one is attempting to capture. Thus, in exploring the measures of motivation, the present article takes into account different dimensions of motivation. In particular, we highlight the distinction between the outcome-focused motivation to complete a goal (Brehm & Self, 1989; Locke & Latham, 1990; Powers, 1973) and the process-focused motivation to attend to elements related to the process of goal pursuit – with less emphasis on the outcome. Process-related elements may include using "proper" means during goal pursuit (means-focused motivation; Higgins, Idson, Freitas, Spiegel, & Molden, 2003; Touré-Tillery & Fishbach, 2012) and enjoying the experience of goal pursuit (intrinsic motivation; Deci & Ryan, 1985; Fishbach & Choi, 2012; Sansone & Harackiewicz, 1996; Shah & Kruglanski, 2000). In some cases, particular measures of motivation may help distinguish between these different dimensions of motivation, whereas other measures may not. For example, the measured speed at which a person works on a task can have several interpretations.
Working slowly could mean (a) that the individual's motivation to complete the task is low (outcome-focused motivation); or (b) that her motivation to engage in the task is high such that she is "savoring" the task (intrinsic motivation); or (c) that her motivation to "do it right" and use proper means is high such that she is applying herself (means-focused motivation); or even (d) that she is tired (diminished physiological resources). In this case, additional measures (e.g., accuracy in performance) and manipulations (e.g., task difficulty) may help tease apart these various potential interpretations. Thus, experimental researchers must exercise caution when selecting measures of motivation and when interpreting the fluctuations captured by these measures. This review provides a guide for how to measure fluctuations in motivation in experimental settings. One approach is to ask people to rate their motivation (i.e., "how motivated are you?"). However, such an approach is limited to people's conscious understanding of their own psychological states and can further be biased by social desirability concerns; hence, research in experimental social psychology developed a variety of cognitive and behavioral paradigms to assess motivation without relying on self-reports. We focus on these objective measures of situational fluctuations in motivation. We note that other fields of psychological research commonly use physiological measures (e.g., brain activation, skin conductance), self-report measures (i.e., motivation scales), or measure motivation as a stable trait. These physiological, self-report, and trait measures of motivation are beyond the scope of our review. In the sections that follow, we start with a discussion of measures researchers commonly use to capture motivation. We review cognitive measures such as memory accessibility, evaluations, and perceptions of goal-relevant objects, as well as affective measures such as subjective experience. Next, we examine the use of behavioral measures such as speed, performance, and choice to capture fluctuations in motivational strength. In the third section, we discuss the outcome- and process-focused dimensions of motivation and examine specific measures of process-focused motivation, including measures of intrinsic motivation and means-focused motivation. We then discuss how different measures may help distinguish between the outcome- and process-focused dimensions. In the final section, we explore circumstances under which measures may capture fluctuations in learning and physiological resources, rather than changes in motivation. We conclude with some implications of this analysis for the measurement and study of motivation.
Cognitive and Affective Measures of Motivation
Experimental social psychologists conceptualize a goal as the cognitive representation of a desired end state (Fishbach & Ferguson, 2007; Kruglanski, 1996). According to this view, goals are organized in associative memory networks connecting each goal to corresponding constructs. Goal-relevant constructs could be activities or objects that contribute to goal attainment (i.e., means; Kruglanski et al., 2002), as well as activities or objects that hinder goal attainment (i.e., temptations; Fishbach, Friedman, & Kruglanski, 2003). For example, the goal to eat healthily may be associated with constructs such as apple, doctor (facilitating means), or French fries (hindering temptation).
Cognitive and affective measures of motivation include the activation, evaluation, and perception of these goal-related constructs and the subjective experience they evoke.
Goal activation: Memory, accessibility, and inhibition of goal-related constructs
Constructs related to a goal can activate or prime the pursuit of that goal. For example, the presence of one's study partner or the word "exam" in a game of scrabble can activate a student's academic goal and hence increase her motivation to study. Once a goal is active, the motivational system prepares the individual for action by activating goal-relevant information (Bargh & Barndollar, 1996; Gollwitzer, 1996; Kruglanski, 1996). Thus, motivation manifests itself in terms of how easily goal-related constructs are brought to mind (i.e., accessibility; Aarts, Dijksterhuis, & De Vries, 2001; Higgins & King, 1981; Wyer & Srull, 1986). The activation and subsequent pursuit of a goal can be conscious, such that one is aware of the cues that led to goal-related judgments and behaviors. This activation can also be non-conscious, such that one is unaware of the goal prime or that one is even exhibiting goal-related judgments and behaviors. Whether goals are conscious or non-conscious, a fundamental characteristic of goal-driven processes is the persistence of the accessibility of goal-related constructs for as long as the goal is active or until an individual disengages from the goal (Bargh, Gollwitzer, Lee-Chai, Barndollar, & Trotschel, 2001; Goschke & Kuhl, 1993). Upon goal completion, motivation diminishes and accessibility is inhibited (Liberman & Förster, 2000; Marsh, Hicks, & Bink, 1998). This active reduction in accessibility allows individuals to direct their cognitive resources to other tasks at hand without being distracted by thoughts of a completed goal. Thus, motivation can be measured by the degree to which goal-related concepts are accessible in memory. Specifically, the greater the motivation to pursue/achieve a goal, the more likely individuals are to remember, notice, or recognize concepts, objects, or persons related to that goal. For example, in a classic study, Zeigarnik (1927) instructed participants to perform 20 short tasks, ten of which they did not get a chance to finish because the experimenter interrupted them. At the end of the study, Zeigarnik inferred the strength of motivation by asking participants to recall as many of the tasks as possible. Consistent with the notion that unfulfilled goals are associated with heightened motivational states, whereas fulfilled goals inhibit motivation, the results show that participants recalled more uncompleted tasks (i.e., unfulfilled goals) than completed tasks (i.e., fulfilled goals; the Zeigarnik effect). More recently, Förster, Liberman, and Higgins (2005) replicated these findings, inferring motivation from performance on a lexical decision task. Their study assessed the speed of recognizing – i.e., identifying as words versus non-words – words related to a focal goal prior to (versus after) completing that goal. A related measure of motivation is the inhibition of conflicting constructs. In
SnappyData: A Unified Cluster for Streaming, Transactions and Interactive Analytics
Many modern applications are a mixture of streaming, transactional and analytical workloads. However, traditional data platforms are each designed for supporting a specific type of workload. The lack of a single platform to support all these workloads has forced users to combine disparate products in custom ways. The common practice of stitching heterogeneous environments has caused enormous production woes by increasing complexity and the total cost of ownership. To support this class of applications, we present SnappyData as the first unified engine capable of delivering analytics, transactions, and stream processing in a single integrated cluster. We build this hybrid engine by carefully marrying a big data computational engine (Apache Spark) with a scale-out transactional store (Apache GemFire). We study and address the challenges involved in building such a hybrid distributed system with two conflicting components designed on drastically different philosophies: one being a lineage-based computational model designed for high-throughput analytics, the other a consensus- and replication-based model designed for low-latency operations.
Chaplaincy and mental health in the department of Veterans affairs and department of defense.
Chaplains play important roles in caring for Veterans and Service members with mental health problems. As part of the Department of Veterans Affairs (VA) and Department of Defense (DoD) Integrated Mental Health Strategy, we used a sequential approach to examining intersections between chaplaincy and mental health by gathering and building upon: 1) input from key subject matter experts; 2) quantitative data from the VA / DoD Chaplain Survey (N = 2,163; response rate of 75% in VA and 60% in DoD); and 3) qualitative data from site visits to 33 VA and DoD facilities. Findings indicate that chaplains are extensively involved in caring for individuals with mental health problems, yet integration between mental health and chaplaincy is frequently limited due to difficulties between the disciplines in establishing familiarity and trust. We present recommendations for improving integration of services, and we suggest key domains for future research.
A STAP overview
This tutorial provides a brief overview of space-time adaptive processing (STAP) for radar applications. We discuss space-time signal diversity and various forms of the adaptive processor, including reduced-dimension and reduced-rank STAP approaches. Additionally, we describe the space-time properties of ground clutter and noise-jamming, as well as essential STAP performance metrics. We conclude this tutorial with an overview of some current STAP topics: space-based radar, bistatic STAP, knowledge-aided STAP, multi-channel synthetic aperture radar and non-sidelooking array configurations.
Serverless computing: economic and architectural impact
Amazon Web Services unveiled their ‘Lambda’ platform in late 2014. Since then, each of the major cloud computing infrastructure providers has released services supporting a similar style of deployment and operation, where rather than deploying and running monolithic services, or dedicated virtual machines, users are able to deploy individual functions, and pay only for the time that their code is actually executing. These technologies are gathered together under the marketing term ‘serverless’ and the providers suggest that they have the potential to significantly change how client/server applications are designed, developed and operated. This paper presents two industrial case studies of early adopters, showing how migrating an application to the Lambda deployment architecture reduced hosting costs – by between 66% and 95% – and discusses how further adoption of this trend might influence common software architecture design practices.
Image-based reconstruction and synthesis of dense foliage
Flora is an element in many computer-generated scenes. But trees, bushes and plants have complex geometry and appearance, and are difficult to model manually. One way to address this is to capture models directly from the real world. Existing techniques have focused on extracting macro structure such as the branching structure of trees, or the structure of broad-leaved plants with a relatively small number of surfaces. This paper presents a finer scale technique to demonstrate for the first time the processing of densely leaved foliage - computation of 3D structure, plus extraction of statistics for leaf shape and the configuration of neighboring leaves. Our method starts with a mesh of a single exemplar leaf of the target foliage. Using a small number of images, point cloud data is obtained from multi-view stereo, and the exemplar leaf mesh is fitted non-rigidly to the point cloud over several iterations. In addition, our method learns a statistical model of leaf shape and appearance during the reconstruction phase, and a model of the transformations between neighboring leaves. This information is useful in two ways - to augment and increase leaf density in reconstructions of captured foliage, and to synthesize new foliage that conforms to a user-specified layout and density. The result of our technique is a dense set of captured leaves with realistic appearance, and a method for leaf synthesis. Our approach excels at reconstructing plants and bushes that are primarily defined by dense leaves and is demonstrated with multiple examples.
Visual Sedimentation
We introduce Visual Sedimentation, a novel design metaphor for visualizing data streams directly inspired by the physical process of sedimentation. Visualizing data streams (e. g., Tweets, RSS, Emails) is challenging as incoming data arrive at unpredictable rates and have to remain readable. For data streams, clearly expressing chronological order while avoiding clutter, and keeping aging data visible, are important. The metaphor is drawn from the real-world sedimentation processes: objects fall due to gravity, and aggregate into strata over time. Inspired by this metaphor, data is visually depicted as falling objects using a force model to land on a surface, aggregating into strata over time. In this paper, we discuss how this metaphor addresses the specific challenge of smoothing the transition between incoming and aging data. We describe the metaphor's design space, a toolkit developed to facilitate its implementation, and example applications to a range of case studies. We then explore the generative capabilities of the design space through our toolkit. We finally illustrate creative extensions of the metaphor when applied to real streams of data.
An Evaluation of Unified Memory Technology on NVIDIA GPUs
Unified Memory is an emerging technology which is supported by CUDA 6.X. Before CUDA 6.X, the existing CUDA programming model relied on programmers to explicitly manage data between CPU and GPU, which increased programming complexity. CUDA 6.X provides a new technology called Unified Memory, offering a programming model that defines CPU and GPU memory space as a single coherent memory (in effect, a single common address space). The system manages data access between CPU and GPU without explicit memory copy functions. This paper evaluates the Unified Memory technology through different applications on different GPUs to show users how to use the Unified Memory technology of CUDA 6.X efficiently. The applications include the Diffusion3D Benchmark, the Parboil Benchmark Suite, and Matrix Multiplication from the CUDA SDK Samples. We changed those applications to corresponding Unified Memory versions and compared them with the original ones. We selected the NVIDIA Kepler K40 and the Jetson TK1, which represent the latest GPUs with the Kepler architecture and the first NVIDIA mobile platform with a Kepler GPU, respectively. This paper shows that the Unified Memory versions cause a 10% performance loss on average. Furthermore, we used the NVIDIA Visual Profiler to investigate the reasons for the performance loss caused by the Unified Memory technology.
Effective Web Log Mining and Online Navigational Pattern Prediction: A Survey
The web has become the world's largest repository of knowledge. Web usage mining is the process of discovering knowledge from the interactions generated by users in the form of access logs, browser logs, proxy-server logs, user session data, and cookies. Web mining consists of three categories: web content mining, web structure mining, and web usage mining. Accurate web log mining results and efficient online navigational pattern prediction are crucial for tuning up websites and, consequently, for retaining visitors. Like any other data mining task, web log mining starts with data cleaning and preparation and ends with the discovery of hidden knowledge that cannot be extracted using conventional methods. Applying web mining to web sessions yields navigation patterns on the basis of which appropriate actions can be adopted. Because of the huge volume of web data, discovering patterns and analyzing them for further improvement of a website has become a real-time necessity. The main focus of this paper is the use of a hybrid prediction engine to classify users on the basis of patterns discovered from web logs. To overcome the problems that arise from relying on any single algorithm, the proposed framework reports results based on a comparison of two algorithms: the Longest Common Sequence (LCS) algorithm and the Frequent Pattern Growth (FP-Growth) algorithm.
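As a concrete illustration of the LCS ingredient mentioned above, the following Python sketch scores mined navigation patterns against a live session using a longest-common-subsequence measure. The page names, the mined patterns, and the normalised scoring rule are hypothetical; the sketch only indicates how a matched pattern could supply the next-page prediction in a hybrid engine.

```python
# Hypothetical sketch: scoring mined navigation patterns against a live session
# with a longest-common-subsequence (LCS) measure, as one ingredient of a
# hybrid prediction engine. Page names and patterns are made up for illustration.

def lcs_length(a, b):
    """Classic dynamic-programming LCS length between two page sequences."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def best_matching_pattern(session, patterns):
    """Return the mined pattern most similar to the current session."""
    return max(patterns, key=lambda p: lcs_length(session, p) / max(len(p), 1))

if __name__ == "__main__":
    mined_patterns = [
        ["home", "products", "cart", "checkout"],
        ["home", "blog", "contact"],
    ]
    live_session = ["home", "products", "cart"]
    match = best_matching_pattern(live_session, mined_patterns)
    # The next page in the matched pattern can serve as the predicted click.
    print("matched pattern:", match)
```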
Low-Profile Folded Monopoles With Embedded Planar Metamaterial Phase-Shifting Lines
This paper presents the analysis, design and measurement of novel, low-profile, small-footprint folded monopoles employing planar metamaterial phase-shifting lines. These lines are composed of fully printed spiral elements that are inductively coupled and hence exhibit an effective high-μ property. An equivalent circuit for the proposed structure is presented, validating the operating principles of the antenna and the metamaterial line. The impact of the antenna profile and the ground plane size on the antenna performance is investigated using accurate full-wave simulations. A λ/9 antenna prototype, designed to operate at 2.36 GHz, is fabricated and tested on both electrically large and small ground planes, achieving on average 80% radiation efficiency, 5% (110 MHz) and 2.5% (55 MHz) -10 dB measured bandwidths, respectively, and fully omnidirectional, vertically polarized, monopole-type radiation patterns.
Designing parts feeders using dynamic simulation
We consider the problem of designing traditional (e.g. vibratory bowl) feeders for singulating and orienting industrial parts. Our ultimate goal is to prototype new designs using analytically- and geometrically-based methods. We have developed a tool for designing industrial parts feeders based on dynamic simulation. Our tool allows us to automatically perform multiple feeder design experiments, and to evaluate their outcomes. These results can then be used to compute the probabilities of a Markov model for the feeder. To demonstrate our technique, we present preliminary results for the design of two simple feeders. Our findings suggest that using dynamic simulation is a promising approach for designing parts feeders.
Validation of traditional claim of Tulsi, Ocimum sanctum Linn. as a medicinal plant.
In several ancient systems of medicine including Ayurveda, Greek, Roman, Siddha and Unani, Ocimum sanctum has vast number of therapeutic applications such as in cardiopathy, haemopathy, leucoderma, asthma, bronchitis, catarrhal fever, otalgia, hepatopathy, vomiting, lumbago, hiccups, ophthalmia, gastropathy, genitourinary disorders, ringworm, verminosis and skin diseases etc. The present review incorporates the description of O. sanctum plant, its chemical constituents, and various pharmacological activities.
A high-performance overlay architecture for pipelined execution of data flow graphs
A major issue facing the widespread use of FPGAs as accelerators is their programmability wall: the difficulty of hardware design and the long synthesis times. Overlays, pre-synthesized FPGA circuits that are themselves reconfigurable, promise to tackle these challenges. We design and evaluate an overlay architecture, structured as a mesh of functional units, for pipelined execution of data-flow graphs (DFGs), a common abstraction for expressing parallelism in applications. We use data-driven execution based on elastic pipelines to balance pipeline latencies and achieve a high fMAX, scalability and maximum throughput. We prototype two overlays on a Stratix IV FPGA: a 355 MHz 24×16 integer overlay and a 312 MHz 18×16 floating-point overlay. We also design a tool that maps DFGs to overlays. We map 15 DFGs and show that the two overlays deliver throughputs of up to 35 GOPS and 22 GFLOPS, respectively. We also show that DFG mapping is fast, taking no more than 6 seconds for the largest DFG. Thus, our overlay architecture raises the level of abstraction of FPGA programming closer to that of software and avoids lengthy synthesis time, easing the use of these devices to accelerate applications.
Growth performance of Cobb broilers given varying concentrations of Malunggay (Moringa oleifera Lam.) aqueous leaf extract.
A study was conducted to determine the growth performance of Cobb broilers supplemented with varying concentrations of Moringa oleifera Aqueous Leaf Extract (MoALE) via the drinking water. A total of four hundred day-old chicks were randomly distributed into four treatment groups, replicated four times with twenty-five broilers per replicate. The growth performance of broilers was evaluated based on their feed consumption, live weight, feed conversion ratio (FCR) and return of investment (ROI). Results of the study showed that at 90 mL MoALE (T3), the feed consumption of broilers was consistently lower than that of the control group (T0), and this difference was statistically significant (P<0.01). The live weight of broilers given 30 mL (T1), 60 mL (T2) and 90 mL (T3) MoALE was significantly higher than that of the control group (T0) (P<0.01). In terms of feed conversion ratio (FCR), the MoALE-treated broilers (T1-T3) were more efficient converters of feed into meat than the control group (T0), and this was statistically significant (P<0.01). Furthermore, the return of investment (ROI) of the MoALE-treated broilers (T1-T3) was significantly higher (P<0.01) than that of the control group (T0), with a revenue per peso invested of Php 0.62 in T1 and T2, and Php 0.63 in T3, compared to Php 0.50 in T0.
Hybrid Beamforming for Massive MIMO: A Survey
Hybrid multiple-antenna transceivers, which combine large-dimensional analog pre/postprocessing with lower-dimensional digital processing, are the most promising approach for reducing the hardware cost and training overhead in massive MIMO systems. This article provides a comprehensive survey of the various incarnations of such structures that have been proposed in the literature. We provide a taxonomy in terms of the required channel state information, that is, whether the processing adapts to the instantaneous or average (second-order) channel state information; while the former provides somewhat better signal-to-noise and interference ratio, the latter has much lower overhead for CSI acquisition. We furthermore distinguish hardware structures of different complexities. Finally, we point out the special design aspects for operation at millimeter-wave frequencies.
Low-energy ηd-resonance
Elastic ηd scattering is considered within the Alt-Grassberger-Sandhas (AGS) formalism for various ηN input data. A three-body resonant state is found close to the ηd threshold. This resonance is sustained for different choices of the two-body ηN scattering length a_ηN. The position of the resonance moves towards the ηd threshold when Re a_ηN is increased, and it turns into a quasi-bound state at Re a_ηN ≈ 0.7-0.8 fm, depending on the choice of Im a_ηN.
A Sensitivity Analysis of (and Practitioners' Guide to) Convolutional Neural Networks for Sentence Classification
Convolutional Neural Networks (CNNs) have recently achieved remarkably strong performance on sentence classification tasks (Kim, 2014; Kalchbrenner et al., 2014). However, these models require practitioners to specify the exact model architecture and accompanying hyperparameters, e.g., the choice of filter region size, regularization parameters, and so on. It is currently unknown how sensitive model performance is to changes in these configurations for the task of sentence classification. We thus conduct a sensitivity analysis of one-layer CNNs to explore the effect of architecture components on model performance; our aim is to distinguish between important and comparatively inconsequential design decisions for sentence classification. We focus on one-layer CNNs (to the exclusion of more complex models) due to their comparative simplicity and strong empirical performance (Kim, 2014). We derive practical advice from our extensive empirical results for those interested in getting the most out of CNNs for sentence classification. One important observation borne out by our experimental results is that researchers should report performance variances, as these can be substantial due to stochastic initialization and inference.
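To make the architecture under study concrete, here is a minimal NumPy sketch of a one-layer CNN for sentence classification: convolution over word embeddings with several filter region sizes, ReLU, max-over-time pooling, and a softmax output. All shapes and hyperparameters (embedding size, region sizes, number of feature maps) are illustrative assumptions rather than the paper's settings, and training is omitted.

```python
# Minimal NumPy sketch of a one-layer CNN for sentence classification:
# convolution over word embeddings with several filter region sizes, followed by
# max-over-time pooling and a softmax classifier. Shapes and hyperparameters are
# illustrative only; no training loop is included.
import numpy as np

rng = np.random.default_rng(0)
vocab_size, embed_dim, num_classes = 1000, 50, 2
region_sizes, num_filters = [3, 4, 5], 100

embeddings = rng.normal(size=(vocab_size, embed_dim))
filters = {h: rng.normal(scale=0.1, size=(num_filters, h * embed_dim)) for h in region_sizes}
W_out = rng.normal(scale=0.1, size=(num_filters * len(region_sizes), num_classes))

def forward(token_ids):
    x = embeddings[token_ids]                                # (sentence_len, embed_dim)
    pooled = []
    for h, F in filters.items():
        windows = np.stack([x[i:i + h].ravel() for i in range(len(token_ids) - h + 1)])
        feature_maps = np.maximum(windows @ F.T, 0.0)        # ReLU activations
        pooled.append(feature_maps.max(axis=0))              # max-over-time pooling
    z = np.concatenate(pooled) @ W_out
    return np.exp(z - z.max()) / np.exp(z - z.max()).sum()   # softmax over classes

print(forward(rng.integers(0, vocab_size, size=20)))
```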
Avoiding Discrimination through Causal Reasoning
Recent work on fairness in machine learning has focused on various statistical discrimination criteria and how they trade off. Most of these criteria are observational: They depend only on the joint distribution of predictor, protected attribute, features, and outcome. While convenient to work with, observational criteria have severe inherent limitations that prevent them from resolving matters of fairness conclusively. Going beyond observational criteria, we frame the problem of discrimination based on protected attributes in the language of causal reasoning. This viewpoint shifts attention from “What is the right fairness criterion?” to “What do we want to assume about our model of the causal data generating process?” Through the lens of causality, we make several contributions. First, we crisply articulate why and when observational criteria fail, thus formalizing what was before a matter of opinion. Second, our approach exposes previously ignored subtleties and why they are fundamental to the problem. Finally, we put forward natural causal non-discrimination criteria and develop algorithms that satisfy them.
Maximum Power Point Tracking for PV system under partial shading condition via particle swarm optimization
Performance of a Photovoltaic (PV) system is greatly dependent on the solar irradiation and operating temperature. Under partial shading conditions, the characteristics of a PV system change considerably and often exhibit several local maxima with one global maximum. Conventional Maximum Power Point Tracking (MPPT) techniques can easily be trapped at local maxima under partial shading, which significantly reduces the energy yield of PV systems. In order to solve this problem, this paper proposes a Maximum Power Point Tracking algorithm based on particle swarm optimization (PSO) that is capable of tracking the global MPP under partially shaded conditions. The performance of the proposed algorithm is evaluated by means of simulation in MATLAB Simulink. The proposed algorithm is applied to a grid-connected PV system, in which a boost (step-up) DC-DC converter satisfactorily tracks the global peak.
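A minimal sketch of the PSO-based search is shown below, assuming a made-up power-versus-duty-cycle curve with one local and one global peak as a stand-in for a partially shaded array; the swarm parameters are illustrative defaults rather than the paper's tuned values.

```python
# Illustrative particle swarm search for the global maximum of a multi-peak
# power-vs-duty-cycle curve, standing in for a partially shaded PV array.
# The power model below is a made-up surrogate, not a real PV characteristic.
import random

def pv_power(duty):
    """Toy curve with a local peak near 0.25 and the global peak near 0.7."""
    return 40 * max(0.0, 1 - ((duty - 0.25) / 0.15) ** 2) + \
           65 * max(0.0, 1 - ((duty - 0.70) / 0.20) ** 2)

def pso_mppt(n_particles=8, iterations=40, w=0.5, c1=1.5, c2=1.5):
    pos = [random.uniform(0.05, 0.95) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                       # personal best duty cycles
    gbest = max(pos, key=pv_power)       # swarm-wide best duty cycle
    for _ in range(iterations):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            vel[i] = (w * vel[i]
                      + c1 * r1 * (pbest[i] - pos[i])
                      + c2 * r2 * (gbest - pos[i]))
            pos[i] = min(0.95, max(0.05, pos[i] + vel[i]))
            if pv_power(pos[i]) > pv_power(pbest[i]):
                pbest[i] = pos[i]
        gbest = max(pbest, key=pv_power)
    return gbest, pv_power(gbest)

print(pso_mppt())   # should converge near the global peak at ~0.7
```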
Development of an Exergame for individual rehabilitation of patients with cardiovascular diseases
An Exergame prototype for improved and patient-adapted rehabilitation was developed. A target heart rate for individual users was defined and tracked using a chest belt. Physical activity was tracked by two 3-axis accelerometers fixed to both wrists. Depending on the recorded heart rate, and by means of a supporting factor and linear regression, the movement of the user within the game was supported or hindered. The Exergame was evaluated on 15 healthy users regarding entertainment aspects, physical effort, and impressions concerning the handling of the whole setup. The support factor algorithm to reach the target heart rate was reliable in all subjects.
Service-Oriented Computing
The Internet of Things (IoT) paradigm refers to the network of physical objects or “things” embedded with electronics, software, sensors, and connectivity to enable objects to exchange data with servers, centralized systems, and/or other connected devices based on a variety of communication infrastructures. IoT makes it possible to sense and control objects creating opportunities for more direct integration between the physical world and computer-based systems. IoT will usher automation in a large number of application domains, ranging from manufacturing and energy management (e.g. SmartGrid), to healthcare management and urban life (e.g. SmartCity). However, because of its fine-grained, continuous and pervasive data acquisition and control capabilities, IoT raises concerns about the security and privacy of data. Deploying existing data security solutions to IoT is not straightforward because of device heterogeneity, highly dynamic and possibly unprotected environments, and large scale. In this talk, after outlining key challenges in data security and privacy, we present initial approaches to techniques and services for securing IoT data, including efficient and scalable encryption protocols, software protection techniques for small devices, and fine-grained data packet loss analysis for sensor networks. Bio: Elisa Bertino is professor of computer science at Purdue University, and serves as Director of Purdue Cyber Center and Research Director of the Center for Information and Research in Information Assurance and Security (CERIAS). She is also an adjunct professor of Computer Science & Info Tech at RMIT. Prior to joining Purdue in 2004, she was a professor and department head at the Department of Computer Science and Communication of the University of Milan. She has been a visiting researcher at the IBM Research Laboratory (now Almaden) in San Jose, at the Microelectronics and Computer Technology Corporation, at Rutgers University, at Telcordia Technologies. Her recent research focuses on data security and privacy, digital identity management, policy systems, and security for drones and embedded systems. She is a Fellow of ACM and of IEEE. She received the IEEE Computer Society 2002 Technical Achievement Award, the IEEE Computer Society 2005 Kanai Award and the 2014 ACM SIGSAC outstanding contributions award. She is currently serving as EiC of IEEE Transactions on Dependable and Secure Computing.
Heritable CRISPR/Cas9-Mediated Genome Editing in the Yellow Fever Mosquito, Aedes aegypti
In vivo targeted gene disruption is a powerful tool to study gene function. Thus far, two tools for genome editing in Aedes aegypti have been applied, zinc-finger nucleases (ZFN) and transcription activator-like effector nucleases (TALEN). As a promising alternative to ZFN and TALEN, which are difficult to produce and validate using standard molecular biological techniques, the clustered regularly interspaced short palindromic repeats/CRISPR-associated sequence 9 (CRISPR/Cas9) system has recently been discovered as a "do-it-yourself" genome editing tool. Here, we describe the use of CRISPR/Cas9 in the mosquito vector, Aedes aegypti. In a transgenic mosquito line expressing both Dsred and enhanced cyan fluorescent protein (ECFP) from the eye tissue-specific 3xP3 promoter in separated but tightly linked expression cassettes, we targeted the ECFP nucleotide sequence for disruption. When supplying the Cas9 enzyme and two sgRNAs targeting different regions of the ECFP gene as in vitro transcribed mRNAs for germline transformation, we recovered four different G1 pools (5.5% knockout efficiency) where individuals still expressed DsRed but no longer ECFP. PCR amplification, cloning, and sequencing of PCR amplicons revealed indels in the ECFP target gene ranging from 2-27 nucleotides. These results show for the first time that CRISPR/Cas9 mediated gene editing is achievable in Ae. aegypti, paving the way for further functional genomics related studies in this mosquito species.
An algorithm for suffix stripping
Removing suffixes by automatic means is an operation which is especially useful in the field of information retrieval. In a typical IR environment, one has a collection of documents, each described by the words in the document title and possibly by words in the document abstract. Ignoring the issue of precisely where the words originate, we can say that a document is represented by a vector of words, or terms. Terms with a common stem will usually have similar meanings, for example:
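The following toy Python stemmer illustrates the general idea of rule-based suffix stripping; it applies only a handful of ordered rules with a crude length condition and is far simpler than the full Porter algorithm described in the paper.

```python
# A deliberately simplified suffix-stripping sketch in the spirit of the Porter
# algorithm: apply the first (longest) matching rule from an ordered rule list.
# The real algorithm adds measure conditions and several phases; this is a toy.

RULES = [
    ("ational", "ate"),   # relational   -> relate
    ("ization", "ize"),   # organization -> organize
    ("fulness", "ful"),   # hopefulness  -> hopeful
    ("ation", "ate"),     # activation   -> activate
    ("ness", ""),         # darkness     -> dark
    ("ing", ""),          # connecting   -> connect
    ("ies", "i"),         # ponies       -> poni
    ("ed", ""),           # connected    -> connect
    ("s", ""),            # cats         -> cat
]

def stem(word):
    for suffix, replacement in RULES:      # rules are ordered longest-first
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: -len(suffix)] + replacement
    return word

for w in ["connected", "connecting", "relational", "ponies", "darkness"]:
    print(w, "->", stem(w))
```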
"Anything as a Service" for 5G Mobile Systems
5G network architecture and its functions are yet to be defined. However, it is generally agreed that cloud computing, network function virtualization (NFV), and software defined networking (SDN) will be key enabling technologies for 5G. Indeed, putting all these technologies together ensures several advantages in terms of network configuration flexibility, scalability, and elasticity, which are highly needed to fulfill the numerous requirements of 5G. Furthermore, 5G network management procedures should be as simple as possible, allowing network operators to orchestrate and manage the lifecycle of their virtual network infrastructures (VNIs) and the corresponding virtual network functions (VNFs), in a cognitive and programmable fashion. To this end, we introduce the concept of “Anything as a Service” (ANYaaS), which allows a network operator to create and orchestrate 5G services on demand and in a dynamic way. ANYaaS relies on the reference ETSI NFV architecture to orchestrate and manage important services such as mobile Content Delivery Network as a Service (CDNaaS), Traffic Offload as a Service (TOFaaS), and Machine Type Communications as a Service (MTCaaS). Ultimately, ANYaaS aims for enabling dynamic creation and management of mobile services through agile approaches that handle 5G network resources and services.
Correlation between gastric acid secretion and severity of acid reflux in children.
The purpose of our study was to systematically evaluate gastric acid output in children with long-lasting gastro-esophageal reflux (GER) in order to assess its mechanism and the need for anti-acid treatment. The investigation was carried out in 20 males and 10 females, aged 7.5 +/- 3.8 years, with prolonged (>15 months) clinical manifestations of GER. All underwent routine ambulatory 24-h esophageal pH-monitoring and measurement of gastric acid secretion including gastric basal (BAO) (micromol/kg/h), maximal (MAO) and peak acid outputs (PAO) after pentagastrin (6 microg/kg sec) stimulation. Children with heartburn or abdominal pain underwent upper fiber-endoscopy. In group A (moderate GER, n=12), patients had a normal reflux index (pH<4 below 5.2% of total recording time) despite abnormal Euler and Byrne scoring (median 57, 95% confidence interval 53.5-73.4). In group B (severe GER, n=18, among whom 5 were with grade III esophagitis), reflux index was >5.2%. When considering all children, esophageal pH (%) was significantly correlated with MAO and PAO, r=0.33, p=0.05 and r=0.37, p=0.04, respectively. Children of group B exhibited significantly higher BAO (75, 53.96-137.81), MAO (468, 394.1-671.3) and PAO (617, 518.8-782.3) than those of group A, BAO (27, 10.8-38.5), MAO (266, 243.2-348.2) and PAO (387, 322.5-452.7), p<0.05). The five children of group B with severe esophagitis exhibited significantly higher BAO, MAO and PAO than the other 13 children from the same group and those of group A, p<0.05. Children with long-lasting and severe GER hyper-secrete gastric acid. Individual variations in gastric acid secretion probably account for variations in gastric acid inhibitor requirements. Anti-secretory treatment is justified in children with long-lasting GER and high pH-metric reflux index.
Mining Fuzzy Association Rules
In this paper, we introduce a novel technique, called F-APACS, for mining fuzzy association rules. Existing algorithms involve discretizing the domains of quantitative attributes into intervals so as to discover quantitative association rules. These intervals may not be concise and meaningful enough for human experts to easily obtain nontrivial knowledge from those rules discovered. Instead of using intervals, F-APACS employs linguistic terms to represent the revealed regularities and exceptions. The linguistic representation is especially useful when those rules discovered are presented to human experts for examination. The definition of linguistic terms is based on fuzzy set theory and hence we call the rules having these terms fuzzy association rules. The use of fuzzy techniques makes F-APACS resilient to noises such as inaccuracies in physical measurements of real-life entities and missing values in the databases. Furthermore, F-APACS employs adjusted difference analysis which has the advantage that it does not require any user-supplied thresholds which are often hard to determine. The fact that F-APACS is able to mine fuzzy association rules which utilize linguistic representation and that it uses an objective yet meaningful confidence measure to determine the interestingness of a rule makes it very effective at the discovery of rules from a real-life transactional database of a PBX system provided by a telecommunication corporation.
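The sketch below illustrates the underlying linguistic-term idea: quantitative attributes are mapped to fuzzy membership degrees rather than crisp intervals, and rule support and confidence are accumulated from those degrees. The fuzzy sets, records, and measures are invented for the demo and do not reproduce F-APACS's adjusted-difference analysis.

```python
# Illustration of the linguistic-term idea behind fuzzy association rules:
# quantitative attributes are mapped to fuzzy membership degrees instead of
# crisp intervals, and rule support is accumulated from those degrees. This
# does not reproduce F-APACS's adjusted-difference analysis.
def triangular(x, a, b, c):
    """Triangular membership function peaking at b on the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Linguistic terms for "age" and "income" (made-up fuzzy sets).
age_young   = lambda x: triangular(x, 15, 25, 40)
income_high = lambda x: triangular(x, 40_000, 80_000, 120_000)

records = [(22, 70_000), (35, 90_000), (50, 30_000), (28, 55_000)]

# Fuzzy support of the rule "age is young -> income is high": average of the
# per-record minimum of antecedent and consequent membership degrees.
support = sum(min(age_young(a), income_high(i)) for a, i in records) / len(records)
antecedent = sum(age_young(a) for a, _ in records) / len(records)
confidence = support / antecedent if antecedent else 0.0
print(f"fuzzy support = {support:.2f}, fuzzy confidence = {confidence:.2f}")
```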
One Reality: Augmenting How the Physical World is Experienced by combining Multiple Mixed Reality Modalities
Most of our daily activities take place in the physical world, which inherently imposes physical constraints. In contrast, the digital world is very flexible, but usually isolated from its physical counterpart. To combine these two realms, many Mixed Reality (MR) techniques have been explored, at different levels in the continuum. In this work we present an integrated Mixed Reality ecosystem that allows users to incrementally transition from pure physical to pure virtual experiences in a unique reality. This system stands on a conceptual framework composed of 6 levels. This paper presents these levels as well as the related interaction techniques.
Girona 500 AUV: From Survey to Intervention
This paper outlines the specifications and basic design approach taken on the development of the Girona 500, an autonomous underwater vehicle whose most remarkable characteristic is its capacity to reconfigure for different tasks. The capabilities of this new vehicle range from different forms of seafloor survey to inspection and intervention tasks.
Mood Detection : Implementing a facial expression recognition system
Facial expressions play a significant role in human dialogue. As a result, there has been considerable work done on the recognition of emotional expressions and the application of this research will be beneficial in improving human-machine dialogue. One can imagine the improvements to computer interfaces, automated clinical (psychological) research or even interactions between humans and autonomous robots.
Color-Guided Depth Map Super Resolution Using Convolutional Neural Network
With the development of 3-D applications, such as 3-D reconstruction and object recognition, accurate and high-quality depth maps are urgently required. Recently, depth cameras have become affordable and widely used in daily life. However, the captured depth map usually has low resolution and poor quality, which limits its practical application. This paper proposes a color-guided depth map super resolution method using a convolutional neural network. First, a dual-stream convolutional neural network, which integrates the color and depth information simultaneously, is proposed for depth map super resolution. Then, the optimized edge map generated by the high resolution color image and low resolution depth map is used as additional information to refine the object boundary in the depth map. Experimental results demonstrate the effectiveness of the proposed method compared with the state-of-the-art methods.
A Study of Face Recognition as People Age
In this paper we study face recognition across ages within a real passport photo verification task. First, we propose using the gradient orientation pyramid for this task. Discarding the gradient magnitude and utilizing hierarchical techniques, we found that the new descriptor yields a robust and discriminative representation. With the proposed descriptor, we model face verification as a two-class problem and use a support vector machine as a classifier. The approach is applied to two passport data sets containing more than 1,800 image pairs from each person with large age differences. Although simple, our approach outperforms previously tested Bayesian technique and other descriptors, including the intensity difference and gradient with magnitude. In addition, it works as well as two commercial systems. Second, for the first time, we empirically study how age differences affect recognition performance. Our experiments show that, although the aging process adds difficulty to the recognition task, it does not surpass illumination or expression as a confounding factor.
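A rough NumPy sketch of the gradient-orientation-pyramid idea follows: keep only the gradient direction (as unit vectors) at several downsampled scales and compare two face images by the average cosine of the orientation fields. The 2x2 averaging stands in for a proper Gaussian pyramid, and in the paper such comparisons feed an SVM rather than being used directly as a verdict.

```python
# Hedged NumPy sketch of the gradient-orientation-pyramid idea: keep only the
# gradient direction (as unit vectors) at several downsampled scales, then
# compare two face images by the cosine of the orientation vectors. The crude
# 2x2 averaging below is a stand-in for a proper Gaussian pyramid.
import numpy as np

def orientation_field(img, eps=1e-6):
    gy, gx = np.gradient(img.astype(float))
    mag = np.sqrt(gx ** 2 + gy ** 2) + eps
    return np.stack([gx / mag, gy / mag], axis=-1)   # unit gradient directions

def downsample(img):
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    img = img[:h, :w]
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2] + img[0::2, 1::2] + img[1::2, 1::2])

def gop_similarity(img_a, img_b, levels=3):
    """Average cosine similarity of gradient orientations over pyramid levels."""
    scores = []
    for _ in range(levels):
        fa, fb = orientation_field(img_a), orientation_field(img_b)
        scores.append(float((fa * fb).sum(axis=-1).mean()))
        img_a, img_b = downsample(img_a), downsample(img_b)
    return sum(scores) / len(scores)

rng = np.random.default_rng(1)
face_young, face_old = rng.random((64, 64)), rng.random((64, 64))
print(gop_similarity(face_young, face_old))
```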
Media Literacy and the Challenge of New Information and Communication Technologies
Within both academic and policy discourses, the concept of media literacy is being extended from its traditional focus on print and audiovisual media to encompass the internet and other new media. The present article addresses three central questions currently facing the public, policy-makers and academy: What is media literacy? How is it changing? And what are the uses of literacy? The article begins with a definition: media literacy is the ability to access, analyse, evaluate and create messages across a variety of contexts. This four-component model is then examined for its applicability to the internet. Having advocated this skills-based approach to media literacy in relation to the internet, the article identifies some outstanding issues for new media literacy crucial to any policy of promoting media literacy among the population. The outcome is to extend our understanding of media literacy so as to encompass the historically and culturally conditioned relationship among three processes: (i) the symbolic and material representation of knowledge, culture and values; (ii) the diffusion of interpretative skills and abilities across a (stratified) population; and (iii) the institutional, especially, the state management of the power that access to and skilled use of knowledge brings to those who are ‘literate’.
Pueri, Iuvenes, and Viri: Age and Utility In the Gregorian Reform
This article explores the role played by ideas about age and appropriate behavior for different stages of life in shaping the eleventh-century ecclesiastical reformers' vision for an ordered Christian society-notable at a time when the role of the Church and especially the papacy as both the definer and enforcer of utilitas was increasingly emphasized. By focusing on how some influential reformers and writers characterized youth, adulthood, and that shadowy stage between them, the iuventus, this article examines the extent to which the reformers not only drew upon the language of age and life stages but also combined them with ideas of suitability and utility in a powerful rhetoric that reinforced their scheme of social definition. The definition of precise roles for all parts of the societas Christiana has long been acknowledged as a fundamental part of the movement for ecclesiastical reform in the later eleventh century.1 Old assumptions of privilege and status were often set to one side as individuals were increasingly evaluated in terms of their efficacy in the promotion of the reformers' goals.2 Not only were kings and nobles-friends and foes alike-reprimanded and castigated for their failure to live up to the reformers' expectations, even the clergy were pointedly reminded of their specific place in the new order of things.3 Indeed, it can be argued that Gregory VII in particular, as well as reformers associated with him, by increasingly defining individuals in terms of their function, their suitability (idoneitas), and perhaps especially their utility (utilitas), thereby focused their attention less on the broader issues of pastoral care, penance, and personal salvation than on more pressing practical and ecclesiological issues. Such a view-however tenable in part-nevertheless presents a problematic characterization of the reformers and their program for the renovation of the Church and Christian society, one in which both pastoral care and concern for penance, in fact, played an integral part. Gregory VII, for instance, repeatedly displayed a strong interest in the spiritual well-being of the wider Christian familia. He exhorted and offered spiritual advice to individuals as diverse as Matilda of Tuscany, Empress Agnes, Queen Judith of Hungary, William the Conqueror, Count Albert of CaIw, Olaf III of Norway, the kings and princes of Spain, Centullus of Beam, the people of Bohemia, and the monks of Vallombrosa and Cluny as well as addressing numerous letters, especially in the later years of his pontificate, to all the faithful.4 Indeed, it can be argued in many ways that even his letters of chastisement were motivated by pastoral concern, perhaps most notably seen in his rebuke of Abbot Hugh of Cluny over the monastic profession of Duke Hugh of Burgundy in 1079.5 Both before and during his pontificate, Gregory was clearly devoted to urging monastic and canonical orders to ever more stringent interpretations of religious life, and as Cowdrey has argued, "before all else, his motives were religious."6 Moreover, at his November synod in Rome in 1078, Gregory famously promulgated an important initiative against false penances and described how true penance should be given. 
Here he not only showed himself to be especially preoccupied with specific consideration of individuals' positions and occupations but also stressed the importance of inner contrition largely lacking in the earlier penitential tradition where, when determining the amount of penance required, considerable emphasis had been placed on formulaic compensation along with the status, age, and condition of individuals (be it clerical or lay).7 Although the connection of penance and reform in the eleventh century is beyond the scope of the present article,8 the extent to which ideas about age and appropriate behavior for different stages of life played a role in shaping the reformers' vision for an ordered Christian society remains something of a neglected topic. …
Open Source Software Development and Lotka's Law: Bibliometric Patterns in Programming
This research applies Lotka’s Law to metadata on open source software development. Lotka’s Law predicts the proportion of authors at different levels of productivity. Open source software development harnesses the creativity of thousands of programmers worldwide and is important to the progress of the Internet and many other computing environments, yet it has not been widely researched. We examine metadata from the Linux Software Map (LSM), which documents many open source projects, and Sourceforge, one of the largest resources for open source developers. The authoring patterns found are comparable to prior studies of Lotka’s Law for scientific and scholarly publishing. Lotka’s Law was found to be effective in understanding software development productivity patterns and to offer promise in predicting the aggregate behavior of open source developers.
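For readers unfamiliar with Lotka's Law, the following small Python example evaluates its classical inverse-square form, in which the fraction of authors contributing n items is roughly C/n^2 with C = 6/pi^2, against an invented productivity distribution; the observed counts are hypothetical and are not data from the LSM or Sourceforge.

```python
# Small numeric illustration of the classical (inverse-square) form of Lotka's
# law: the fraction of authors contributing n items is roughly C / n^2, with
# C = 6 / pi^2 so that the fractions sum to 1 over all n. The "observed" counts
# below are invented for the demo.
import math

C = 6 / math.pi ** 2
expected = {n: C / n ** 2 for n in range(1, 6)}

observed_counts = {1: 610, 2: 150, 3: 70, 4: 38, 5: 25}   # hypothetical authors
total = sum(observed_counts.values())

for n in range(1, 6):
    print(f"n={n}: expected {expected[n]:.3f}, observed {observed_counts[n] / total:.3f}")
```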
Transforming Dependency Structures to Logical Forms for Semantic Parsing
The strongly typed syntax of grammar formalisms such as CCG, TAG, LFG and HPSG offers a synchronous framework for deriving syntactic structures and semantic logical forms. In contrast—partly due to the lack of a strong type system—dependency structures are easy to annotate and have become a widely used form of syntactic analysis for many languages. However, the lack of a type system makes a formal mechanism for deriving logical forms from dependency structures challenging. We address this by introducing a robust system based on the lambda calculus for deriving neo-Davidsonian logical forms from dependency trees. These logical forms are then used for semantic parsing of natural language to Freebase. Experiments on the Free917 and WebQuestions datasets show that our representation is superior to the original dependency trees and that it outperforms a CCG-based representation on this task. Compared to prior work, we obtain the strongest result to date on Free917 and competitive results on WebQuestions.
DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients
We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during the backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerating the training of low bitwidth neural networks on such hardware. Our experiments on the SVHN and ImageNet datasets show that DoReFa-Net can achieve prediction accuracy comparable to 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights and 2-bit activations can be trained from scratch using 4-bit gradients to get 47% top-1 accuracy on the ImageNet validation set. The DoReFa-Net AlexNet model is released publicly.
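A hedged NumPy sketch of the k-bit uniform quantizer underlying this style of low-bitwidth training is given below, together with the commonly cited tanh-based weight transform; the exact constants and the straight-through-estimator note are illustrative rather than a faithful reproduction of the released DoReFa-Net code.

```python
# Hedged NumPy sketch of the k-bit uniform quantizer at the heart of DoReFa-style
# low-bitwidth training: quantize_k maps a value in [0, 1] onto 2^k - 1 evenly
# spaced levels. The weight transform (tanh rescaling into [0, 1], then mapping
# back to [-1, 1]) follows the commonly cited formulation; treat the exact
# constants as illustrative.
import numpy as np

def quantize_k(x, k):
    """Uniformly quantize x in [0, 1] to k bits."""
    levels = 2 ** k - 1
    return np.round(x * levels) / levels

def quantize_weights(w, k):
    t = np.tanh(w)
    x = t / (2 * np.abs(t).max()) + 0.5        # squash into [0, 1]
    return 2 * quantize_k(x, k) - 1            # back to [-1, 1]

w = np.random.default_rng(0).normal(size=5)
print("full precision:", np.round(w, 3))
print("1-bit weights :", quantize_weights(w, 1))
print("2-bit weights :", quantize_weights(w, 2))
# During backpropagation the rounding step is treated as identity (the
# straight-through estimator), which is what allows gradients themselves to be
# quantized to low bitwidth as well.
```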
Rebalancing Bike Sharing Systems: A Multi-source Data Smart Optimization
Bike sharing systems, aiming at providing the missing links in public transportation systems, are becoming popular in urban cities. A key to success for a bike sharing system is the effectiveness of rebalancing operations, that is, the efforts of restoring the number of bikes in each station to its target value by routing vehicles through pick-up and drop-off operations. There are two major issues in this bike rebalancing problem: the determination of station inventory target levels and the large-scale multiple capacitated vehicle routing optimization with outlier stations. The key challenges include demand prediction accuracy for inventory target level determination, and an effective optimizer for vehicle routing with hundreds of stations. To this end, in this paper, we develop a Meteorology Similarity Weighted K-Nearest-Neighbor (MSWK) regressor to predict the station pick-up demand based on large-scale historic trip records. Based on further analysis of the station network constructed by station-station connections and the trip duration, we propose an inter-station bike transition (ISBT) model to predict the station drop-off demand. Then, we provide a mixed integer nonlinear programming (MINLP) formulation of the multiple capacitated bike routing problem with the objective of minimizing total travel distance. To solve it, we propose an Adaptive Capacity Constrained K-centers Clustering (AdaCCKC) algorithm to separate outlier stations (whose demands are so large that they make the optimization infeasible) and group the remaining stations into clusters, within each of which one vehicle is scheduled to redistribute bikes between stations. In this way, the large-scale multiple vehicle routing problem is reduced to an inner-cluster, single-vehicle routing problem with guaranteed feasible solutions. Finally, extensive experimental results on the NYC Citi Bike system show the advantages of our approach for bike demand prediction and large-scale bike rebalancing optimization.
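The following Python sketch illustrates the meteorology-similarity-weighted nearest-neighbour idea behind the pick-up demand predictor: historical hours are weighted by the similarity of their weather features and the prediction is their weighted average demand. The weather features, records, and Gaussian weighting are assumptions made for the demo, not the paper's exact formulation.

```python
# Hedged sketch of a meteorology-similarity-weighted k-nearest-neighbour
# regressor: predict a station's pick-up demand for a target hour as the
# similarity-weighted average of demands observed on historical hours with the
# most similar weather. The features and records below are invented.
import numpy as np

history_weather = np.array([   # [temperature C, wind km/h, relative humidity]
    [28.0, 10.0, 0.60],
    [12.0, 25.0, 0.85],
    [26.0,  8.0, 0.55],
    [18.0, 15.0, 0.70],
])
history_demand = np.array([42.0, 9.0, 38.0, 21.0])   # observed pick-ups

def mswk_predict(target_weather, k=3, bandwidth=1.0):
    mean, std = history_weather.mean(axis=0), history_weather.std(axis=0)
    z = (history_weather - mean) / std                # standardise features
    t = (np.asarray(target_weather) - mean) / std
    dist = np.linalg.norm(z - t, axis=1)
    nearest = np.argsort(dist)[:k]
    weights = np.exp(-dist[nearest] ** 2 / (2 * bandwidth ** 2))   # similarity weights
    return float(np.dot(weights, history_demand[nearest]) / weights.sum())

print(mswk_predict([27.0, 9.0, 0.58]))   # warm, calm hour -> high predicted demand
```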
Series Connection of IGBTs with Active Voltage Balancing
This paper describes an active gate drive circuit for series-connected insulated gate bipolar transistors (IGBT’s) with voltage balancing in high-voltage applications. The gate drive circuit not only amplifies the gate signal, but also actively limits the overvoltage during switching transients, while minimizing the switching transients and losses. In order to achieve the control objective, an analog closed-loop control scheme is adopted. The closed-loop control injects current to an IGBT gate as required to limit the IGBT collector–emitter voltage to a predefined level. The performance of the gate drive circuit is examined experimentally by the series connection of three IGBT’s with conventional snubber circuits. The experimental results show the voltage balancing by an active control with wide variations in loads and imbalance conditions.
Culture Wars, Soul-Searching,
Attaching political labels to a situation whose roots transcend politics constitutes a critical weakness of Western policies vis-a-vis Belarus. The contemporary nationalist discourse in Belarus allows one to discern three “national projects,” each being a corpus of ideas about Belarus “the way it should be”: (1) Nativist/pro-European, (2) Muscovite liberal, and (3) Creole. While the projects’ nametags are debatable, the trichotomy is a useful abstraction, as it reflects the lines of force in the “magnetic field” of Belarusian nationalism. The article analyzes the strengths and weaknesses of each project, cultural wars between them, the role of a civilizational fault line that runs across Belarus and the attendant geopolitical divisions that underlie multiplicity of national projects. The idea is expressed of a desirable consensus based on the most viable aspects of the national projects.
Adaptive Digital Predistortion of Wireless Power Amplifiers/Transmitters Using Dynamic Real-Valued Focused Time-Delay Line Neural Networks
Neural networks (NNs) are becoming an increasingly attractive solution for power amplifier (PA) behavioral modeling, due to their excellent approximation capability. Recently, different topologies have been proposed for linearizing PAs using neural based digital predistortion, but most of the previously reported results have been simulation based and addressed the issue of linearizing static or mildly nonlinear PA models. For the first time, a realistic and experimentally validated approach towards adaptive predistortion technique, which takes advantage of the superior dynamic modeling capability of a real-valued focused time-delay neural network (RVFTDNN) for the linearization of third-generation PAs, is proposed in this paper. A comparative study of RVFTDNN and a real-valued recurrent NN has been carried out to establish RVFTDNN as an effective, robust, and easy-to-implement baseband model, which is suitable for inverse modeling of RF PAs and wireless transmitters, to be used as an effective digital predistorter. Efforts have also been made on the selection of the most efficient training algorithm during the reverse modeling of PA, based on the selected NN. The proposed model has been validated for linearizing a mildly nonlinear class AB amplifier and a strongly nonlinear Doherty PA with wideband code-division multiple access (WCDMA) signals for single- and multiple-carrier applications. The effects of memory consideration on linearization are clearly shown in the measurement results. An adjacent channel leakage ratio correction of up to 20 dB is reported due to linearization where approximately 5-dB correction is observed due to memory effect nullification for wideband multicarrier WCDMA signals.
A survey of kernels for structured data
Kernel methods in general and support vector machines in particular have been successful in various learning tasks on data represented in a single table. Much 'real-world' data, however, is structured - it has no natural representation in a single table. Usually, to apply kernel methods to 'real-world' data, extensive pre-processing is performed to embed the data into a real vector space and thus in a single table. This survey describes several approaches of defining positive definite kernels on structured instances directly.
Human Pose Estimation from Monocular Images: A Comprehensive Survey
Human pose estimation refers to the estimation of the location of body parts and how they are connected in an image. Human pose estimation from monocular images has wide applications (e.g., image indexing). Several surveys on human pose estimation can be found in the literature, but they focus on a certain category; for example, model-based approaches or human motion analysis, etc. As far as we know, an overall review of this problem domain has yet to be provided. Furthermore, recent advancements based on deep learning have brought novel algorithms for this problem. In this paper, a comprehensive survey of human pose estimation from monocular images is carried out including milestone works and recent advancements. Based on one standard pipeline for the solution of computer vision problems, this survey splits the problem into several modules: feature extraction and description, human body models, and modeling methods. Problem modeling methods are approached based on two means of categorization in this survey. One way to categorize includes top-down and bottom-up methods, and another way includes generative and discriminative methods. Considering the fact that one direct application of human pose estimation is to provide initialization for automatic video surveillance, there are additional sections for motion-related methods in all modules: motion features, motion models, and motion-based methods. Finally, the paper also collects 26 publicly available data sets for validation and provides error measurement methods that are frequently used.
Zero-Shot Recognition via Structured Prediction
This paper addresses transductive zero-shot recognition (ZSR). At test time, for each image feature, the method identifies the maximum-scoring source descriptor from a collection of (unseen) classes and labels the test image with the label of that source descriptor. Problem: projection domain shift between source and target data leads to ZSR degradation. Solution: model the domain shift as a small distortion in predicted cluster centers based on seen classes, then solve a structured matching problem to improve the alignment between the source and target domains.
COMMUNICATION PROBLEMS IN REQUIREMENTS ENGINEERING: A FIELD STUDY
The requirements engineering phase of software development projects is characterised by the intensity and importance of communication activities. During this phase, the various stakeholders must be able to communicate their requirements to the analysts, and the analysts need to be able to communicate the specifications they generate back to the stakeholders for validation. This paper describes a field investigation into the problems of communication between the disparate communities involved in requirements specification activities. The results of this study are discussed in terms of their relation to three major communication barriers: 1) ineffectiveness of the current communication channels; 2) restrictions on expressiveness imposed by notations; and 3) social and organisational barriers. The results confirm that organisational and social issues have great influence on the effectiveness of communication. They also show that, in general, end-users find the notations used by software practitioners to model their requirements difficult to understand and validate.
Handling Cold-Start Problem in Review Spam Detection by Jointly Embedding Texts and Behaviors
Solving the cold-start problem in review spam detection is an urgent and significant task. It can help the on-line review websites to relieve the damage of spammers in time, but has never been investigated by previous work. This paper proposes a novel neural network model to detect review spam for the cold-start problem, by learning to represent the new reviewers’ review with jointly embedded textual and behavioral information. Experimental results prove the proposed model achieves an effective performance and possesses preferable domain-adaptability. It is also applicable to a large-scale dataset in an unsupervised way.
LEADERSHIP SKILLS IN ORDER TO INCREASE EMPLOYEE EFFICIENCY
The role of leadership as a core management function becomes extremely important when rapid changes occur in the market and, in turn, within an organization that must adapt to them. Leadership has therefore become a central topic of study within the field of management. The terms manager and leader are not equivalent and do not have the same meaning. A manager may be the person who operates in a stable business environment; a leader is needed under conditions of uncertainty and is the one who identifies new opportunities for the company in a dynamic business environment. Leadership, charisma, the ability to inspire employees, and the use of power are therefore becoming the key to the success of an enterprise in the market and among its competitors. There is no dilemma about whether leadership is crucial for success; its importance is unquestioned, so the study of this area of management as a management tool is important for the success of the business. Leadership skill also shapes employees' satisfaction with their work. A company without a leader will end up with poor results and unmotivated, disgruntled employees, whereas an organization based on knowledge and expertise in the field of management will be successful in its own business domain. Because of its importance in achieving the goals set out by managers and organizations, the purpose of this paper is to examine the effects of sound leadership on the effectiveness of employees in enterprises. The results show that leadership skills affect the efficiency of enterprises and employee motivation, and that they are becoming a key success factor in business and in achieving the organization's objectives.
Growing a Brain: Fine-Tuning by Increasing Model Capacity
CNNs have made an undeniable impact on computer vision through the ability to learn high-capacity models with large annotated training sets. One of their remarkable properties is the ability to transfer knowledge from a large source dataset to a (typically smaller) target dataset. This is usually accomplished through fine-tuning a fixed-size network on new target data. Indeed, virtually every contemporary visual recognition system makes use of fine-tuning to transfer knowledge from ImageNet. In this work, we analyze what components and parameters change during fine-tuning, and discover that increasing model capacity allows for more natural model adaptation through fine-tuning. By making an analogy to developmental learning, we demonstrate that growing a CNN with additional units, either by widening existing layers or deepening the overall network, significantly outperforms classic fine-tuning approaches. But in order to properly grow a network, we show that newly-added units must be appropriately normalized to allow for a pace of learning that is consistent with existing units. We empirically validate our approach on several benchmark datasets, producing state-of-the-art results.
Unleashing Mayhem on Binary Code
In this paper we present Mayhem, a new system for automatically finding exploitable bugs in binary (i.e., executable) programs. Every bug reported by Mayhem is accompanied by a working shell-spawning exploit. The working exploits ensure soundness and that each bug report is security-critical and actionable. Mayhem works on raw binary code without debugging information. To make exploit generation possible at the binary-level, Mayhem addresses two major technical challenges: actively managing execution paths without exhausting memory, and reasoning about symbolic memory indices, where a load or a store address depends on user input. To this end, we propose two novel techniques: 1) hybrid symbolic execution for combining online and offline (concolic) execution to maximize the benefits of both techniques, and 2) index-based memory modeling, a technique that allows Mayhem to efficiently reason about symbolic memory at the binary level. We used Mayhem to find and demonstrate 29 exploitable vulnerabilities in both Linux and Windows programs, 2 of which were previously undocumented.
Clustering Orders
We propose a method of using clustering techniques to partition a set of orders. We define the term order as a sequence of objects that are sorted according to some property, such as size, preference, or price. These orders are useful for, say, carrying out a sensory survey. We propose a method called the k-o’means method, which is a modified version of a k-means method, adjusted to handle orders. We compared our method with the traditional clustering methods, and analyzed its characteristics. We also applied our method to a questionnaire survey data on people’s preferences in types of sushi (a Japanese food).
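As a rough illustration of clustering orders, the sketch below runs a k-means-style loop over complete rankings represented as rank vectors, using a squared-rank (Spearman-type) distance and centroids obtained by re-ranking averaged ranks. It assumes every respondent ranks the same set of objects and is only a simplified stand-in for the paper's k-o'means method.

import numpy as np

def cluster_rankings(rank_matrix, k, iters=50, seed=0):
    # rank_matrix[i, j] = rank (1..n_objects) that respondent i gave object j.
    x = np.asarray(rank_matrix, dtype=float)
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), size=k, replace=False)]
    labels = np.zeros(len(x), dtype=int)
    for _ in range(iters):
        # Assign each ranking to the nearest center under squared rank distance.
        d = ((x[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        for c in range(k):
            members = x[labels == c]
            if len(members):
                # Average the ranks, then re-rank so the centroid is itself an order.
                centers[c] = members.mean(0).argsort().argsort() + 1
    return labels, centers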
Characterization of B-cell lines from SLE patients and their relatives
Epstein-Barr-virus (EBV)-transformed lymphoblastoid B-cell lines were generated from peripheral blood lymphocytes of 55 patients with systemic lupus erythematosus (SLE) and 44 healthy relatives. All donors have previously been extensively characterized with regard to clinical, serologic, and genetic parameters. Here, peripheral blood lymphocytes and lines were characterized for cell surface antigens. Furthermore, autoantibody production and the proliferation rate of the cell lines were monitored. A significant difference between patients and relatives was the lower proliferation rate of EBV-transformed cell lines of the SLE patients. All SLE cell lines are available to interested researchers and can be obtained from the European Cell Bank, Salisbury, UK.
Biodiesel Production from Azolla filiculoides (Water Fern)
Purpose: To assess the potential of Azolla filiculoides (whole plants collected from a rice farm in northern Iran) as a source for biodiesel production. Methods: Solvent extraction in a Soxhlet apparatus with a chloroform–methanol (2:1 v/v) solvent blend was used to obtain crude oil from the freeze-dried Azolla plant. Acid-catalyzed transesterification was used to convert the fatty acids (FA), monoglycerides (MG), diglycerides (DG) and triglycerides (TG) in the extracts to fatty acid methyl esters (FAMEs). Gas chromatography–mass spectrometry (GC–MS) was employed to analyze the FAMEs in the biodiesel. Results: The presence of myristic acid (C14:0), palmitic acid (C16:0), palmitoleic acid (C16:1), stearic acid (C18:3), oleic acid (C18:1), linoleic acid (C18:2), eicosenoic acid (C20:1), eicosapentaenoic acid (C20:5), erucic acid (C22:1) and docosahexaenoic acid (C22:6) in the biodiesel was confirmed. Conclusion: The results indicate that biodiesel can be produced from Azolla filiculoides and that this water fern is potentially an economical source of biodiesel due to its ready availability and probable low cost.
Stereo Panorama with a Single Camera
Full panoramic images, covering 360 degrees, can be created either by using panoramic cameras or by mosaicing together many regular images. Creating panoramic views in stereo, where one panorama is generated for the left eye and another panorama is generated for the right eye, is more problematic. Earlier attempts to mosaic images from a rotating pair of stereo cameras faced severe problems of parallax and of scale changes. A new family of multiple-viewpoint image projections, the Circular Projections, is developed. Two panoramic images taken using such projections can serve as a panoramic stereo pair. A system is described that generates a stereo panoramic image using circular projections from images or video taken by a single rotating camera. The system works in real time on a PC. It should be noted that the stereo images are created without computation of 3D structure, and the depth effects are created only in the viewer's brain.
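A very rough way to see the single-camera idea: as the camera rotates, a narrow vertical strip taken to one side of the image centre is pasted into one eye's panorama and a strip from the other side into the other eye's panorama. The NumPy sketch below assumes a list of equally spaced frames from a slow, constant rotation; strip_width and offset are illustrative, which side maps to which eye depends on the rotation direction, and no alignment or parallax handling is attempted, unlike the circular projections developed in the paper.

import numpy as np

def stereo_panorama(frames, strip_width=4, offset=40):
    # frames: list of H x W x 3 arrays captured by a single rotating camera.
    h, w, _ = frames[0].shape
    c = w // 2
    # Strips right of centre form one panorama, strips left of centre the other.
    eye_a = np.hstack([f[:, c + offset: c + offset + strip_width] for f in frames])
    eye_b = np.hstack([f[:, c - offset - strip_width: c - offset] for f in frames])
    return eye_a, eye_b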
Prospective analysis of long-term psychosocial outcomes in breast reconstruction: two-year postoperative results from the Michigan Breast Reconstruction Outcomes Study.
OBJECTIVE To prospectively evaluate the psychosocial outcomes and body image of patients 2 years postmastectomy reconstruction using a multicenter, multisurgeon approach. BACKGROUND Although breast reconstruction has been shown to confer significant psychosocial benefits in breast cancer patients at year 1 postreconstruction, we considered the possibility that psychosocial outcomes may remain in a state of flux for years after surgery. METHODS Patients were recruited as part of the Michigan Breast Reconstruction Outcome Study, a 12 center, 23 surgeon prospective cohort study of mastectomy reconstruction patients. Two-sided paired sample t tests were used to compare change scores for the various psychosocial subscales. Multiple regression analysis was used to determine whether the magnitude of the change score varied by procedure type. RESULTS Preoperative and postoperative year 2 surveys were received from 173 patients; 116 with immediate and 57 with delayed reconstruction. For the immediate reconstruction cohort, significant improvements were observed in all psychosocial subscales except for body image. This occurred essentially independent of procedure type. In the cohort with delayed reconstruction, significant change scores were observed only in body image. Women with transverse rectus abdominis musculocutaneous flaps had significantly greater gains in body image scores (P = 0.003 and P = 0.034, respectively) when compared with expander/implants. CONCLUSIONS General psychosocial benefits and body image gains continued to manifest at 2 years postmastectomy reconstruction. In addition, procedure type had a surprisingly limited effect on psychosocial well being. With outcomes evolving beyond year 1, these data support the need for additional longitudinal breast reconstruction outcome studies.
Duty-related trauma exposure in 911 telecommunicators: considering the risk for posttraumatic stress.
Peritraumatic distress may increase the risk for posttraumatic stress disorder (PTSD) in police officers. Much less is known about emotional reactions and PTSD symptomatology in 911 telecommunicators. The current study assessed duty-related exposure to potentially traumatic calls, peritraumatic distress, and PTSD symptomatology in a cross-sectional, convenience sample of 171 telecommunicators. Results showed that telecommunicators reported high levels of peritraumatic distress and a moderate, positive relationship was found between peritraumatic distress and PTSD symptom severity (r = .34). The results suggest that 911 telecommunicators are exposed to duty-related trauma that may lead to the development of PTSD, and that direct, physical exposure to trauma may not be necessary to increase risk for PTSD in this population.
Composite retrieval of heterogeneous web search
Traditional search systems generally present a ranked list of documents as answers to user queries. In aggregated search systems, results from different and increasingly diverse verticals (image, video, news, etc.) are returned to users. For instance, many such search engines return to users both images and web documents as answers to the query "flower". Aggregated search has become a very popular paradigm. In this paper, we go one step further and study a different search paradigm: composite retrieval. Rather than returning and merging results from different verticals, as is the case with aggregated search, we propose to return to users a set of "bundles", where a bundle is composed of "cohesive" results from several verticals. For example, for the query "London Olympic", one bundle per sport could be returned, each containing results extracted from news, videos, images, or Wikipedia. Composite retrieval can promote exploratory search in a way that helps users understand the diversity of results available for a specific query and decide what to explore in more detail. In this paper, we propose and evaluate a variety of approaches to construct bundles that are relevant, cohesive and diverse. Compared with three baselines (traditional "general web only" ranking, federated search ranking and aggregated search), our evaluation results demonstrate significant performance improvement for a highly heterogeneous web collection.
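The bundle idea can be illustrated with a small greedy heuristic: seed each bundle with the most relevant unused result, then repeatedly add the item that is most cohesive with the bundle while rewarding results from new verticals. This is only an illustrative sketch of the problem, not one of the paper's algorithms; the item fields 'score' and 'vertical' and the similarity function sim are assumptions.

def build_bundles(items, sim, k_bundles=3, bundle_size=4):
    # items: list of dicts with 'score' (query relevance) and 'vertical';
    # sim(a, b): pairwise similarity in [0, 1].
    remaining = sorted(items, key=lambda x: -x["score"])
    bundles = []
    for _ in range(k_bundles):
        if not remaining:
            break
        bundle = [remaining.pop(0)]   # most relevant unused item seeds the bundle
        while len(bundle) < bundle_size and remaining:
            def gain(x):
                cohesion = sum(sim(x, b) for b in bundle) / len(bundle)
                new_vertical = x["vertical"] not in {b["vertical"] for b in bundle}
                return x["score"] + cohesion + (0.5 if new_vertical else 0.0)
            best = max(remaining, key=gain)
            remaining.remove(best)
            bundle.append(best)
        bundles.append(bundle)
    return bundles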
Approximation by exponential sums revisited ✩
Throughput and delay analysis of IEEE 802.11 protocol
Wireless technologies in the LAN environment are becoming increasingly important. The IEEE 802.11 standard is the most mature technology for wireless local area networks (WLANs). The performance of the medium access control (MAC) layer, which consists of distributed coordination function (DCF) and point coordination function (PCF), has been examined over the past years. We present an analytical model to compute the saturated throughput of 802.11 protocol in the absence of hidden stations and transmission errors. A throughput analysis is carried out in order to study the performance of 802.11 DCF. Using the analytical model, we develop a frame delay analysis under traffic conditions that correspond to the maximum load that the network can support in stable conditions. The behaviour of the exponential backoff algorithm used in 802.11 is also examined.
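Saturation analyses of DCF typically rest on a two-equation fixed point between the per-station transmission probability and the conditional collision probability. The sketch below computes the well-known Bianchi-style version of that fixed point and the resulting throughput as one concrete instance; the contention-window, timing and frame-size constants are illustrative (FHSS-PHY-like) values rather than the paper's, and propagation delay is ignored.

def dcf_saturation_throughput(n=10, W=32, m=5, payload=8184, header=400, ack=240,
                              rate=1e6, slot=50e-6, sifs=28e-6, difs=128e-6):
    # Fixed point between tau (transmission probability in a slot) and
    # p (conditional collision probability), solved by damped iteration.
    tau = 0.1
    for _ in range(2000):
        p = 1.0 - (1.0 - tau) ** (n - 1)
        new_tau = 2 * (1 - 2 * p) / ((1 - 2 * p) * (W + 1) + p * W * (1 - (2 * p) ** m))
        tau = 0.5 * tau + 0.5 * new_tau
    p_tr = 1 - (1 - tau) ** n                        # some station transmits in a slot
    p_s = n * tau * (1 - tau) ** (n - 1) / p_tr      # that transmission succeeds
    t_s = (header + payload + ack) / rate + sifs + difs   # duration of a success
    t_c = (header + payload) / rate + difs                # duration of a collision
    return (p_tr * p_s * payload / rate) / (
        (1 - p_tr) * slot + p_tr * p_s * t_s + p_tr * (1 - p_s) * t_c)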
Moral cognition and its neural constituents
Identifying the neural mechanisms of moral cognition is especially difficult. In part, this is because moral cognition taps multiple cognitive sub-processes, being a highly distributed, whole-brain affair. The assumptions required to make progress in identifying the neural constituents of moral cognition might simplify morally salient stimuli to the point that they no longer activate the requisite neural architectures, but the right experiments can overcome this difficulty. The current evidence allows us to draw a tentative conclusion: the moral psychology required by virtue theory is the most neurobiologically plausible.
Not All Samples Are Created Equal: Deep Learning with Importance Sampling
Deep neural network training spends most of the computation on examples that are properly handled, and could be ignored. We propose to mitigate this phenomenon with a principled importance sampling scheme that focuses computation on “informative” examples, and reduces the variance of the stochastic gradients during training. Our contribution is twofold: first, we derive a tractable upper bound to the per-sample gradient norm, and second, we derive an estimator of the variance reduction achieved with importance sampling, which enables us to switch it on when it will result in an actual speedup. The resulting scheme can be used by changing a few lines of code in a standard SGD procedure, and we demonstrate experimentally, on image classification, CNN fine-tuning, and RNN training, that for a fixed wall-clock time budget, it provides a reduction of the train losses of up to an order of magnitude and a relative improvement of test errors between 5% and 17%.
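A minimal sketch of the sampling step, assuming the last observed per-sample loss is used as a cheap proxy for the per-sample gradient-norm bound derived in the paper: examples are drawn with probability proportional to that score, and their gradients are reweighted by the inverse probability so the overall gradient estimate stays unbiased. The variance-reduction switch described in the abstract is omitted here.

import numpy as np

def importance_sample_batch(per_sample_scores, batch_size, rng=None):
    # per_sample_scores: one nonnegative score per training example
    # (e.g. the last observed loss, standing in for a gradient-norm bound).
    rng = rng or np.random.default_rng()
    scores = np.maximum(np.asarray(per_sample_scores, dtype=float), 1e-8)
    probs = scores / scores.sum()
    idx = rng.choice(len(probs), size=batch_size, replace=True, p=probs)
    # Inverse-probability weights keep the weighted gradient estimate unbiased.
    weights = 1.0 / (len(probs) * probs[idx])
    return idx, weights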
ECLiPSe - from LP to CLP
ECLiPSe is a Prolog-based programming system, aimed at the development and deployment of constraint programming applications. It is also used for teaching most aspects of combinatorial problem solving, e.g. problem modelling, constraint programming, mathematical programming, and search techniques. It uses an extended Prolog as its high-level modelling and control language, complemented by several constraint solver libraries, interfaces to third-party solvers, an integrated development environment and interfaces for embedding into host environments. This paper discusses language extensions, implementation aspects, components and tools that we consider relevant on the way from Logic Programming to Constraint Logic Programming. To appear in Theory and Practice of Logic Programming (TPLP).
Stock time series pattern matching: Template-based vs. rule-based approaches
One of the major duties of financial analysts is technical analysis. It is necessary to locate the technical patterns in the stock price movement charts to analyze the market behavior. Indeed, there are two main problems: how to define those preferred patterns (technical patterns) for query and how to match the defined pattern templates in different resolutions. As we can see, defining the similarity between time series (or time series subsequences) is of fundamental importance. By identifying the perceptually important points (PIPs) directly from the time domain, time series and templates of different lengths can be compared. Three ways of distance measure, including Euclidean distance (PIP-ED), perpendicular distance (PIP-PD) and vertical distance (PIP-VD), for PIP identification are compared in this paper. After the PIP identification process, both template- and rule-based pattern-matching approaches are introduced. The proposed methods are distinctive in their intuitiveness, making them particularly user friendly to ordinary data analysts like stock market investors. As demonstrated by the experiments, the template- and the rule-based time series matching and subsequence searching approaches provide different directions to achieve the goal of pattern identification.
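To make the PIP step concrete, the sketch below starts from the two endpoints and repeatedly adds the point farthest, by vertical distance, from the straight line joining its two neighbouring PIPs; this corresponds to the PIP-VD variant, and swapping in Euclidean or perpendicular point-to-line distance gives PIP-ED and PIP-PD. The function name and details are illustrative rather than taken from the paper.

def find_pips(series, n_pips):
    # series: 1-D sequence of prices; returns indices of the selected PIPs.
    pips = [0, len(series) - 1]
    while len(pips) < n_pips:
        pips.sort()
        best_idx, best_dist = None, -1.0
        for a, b in zip(pips[:-1], pips[1:]):
            for i in range(a + 1, b):
                # Vertical distance from point i to the chord joining PIPs a and b.
                chord = series[a] + (series[b] - series[a]) * (i - a) / (b - a)
                d = abs(series[i] - chord)
                if d > best_dist:
                    best_idx, best_dist = i, d
        if best_idx is None:   # no intermediate points left to add
            break
        pips.append(best_idx)
    return sorted(pips)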
A novel approach to MRI Brain Tumor delineation with Independent Components & Finite Generalized Gaussian Mixture Models
Automated segmentation of tumors from a multispectral data set like that of Magnetic Resonance Images (MRI) is challenging. Independent Component Analysis (ICA) and its variations for Blind Source Separation (BSS) have been employed in previous studies but have met with cumbersome obstacles due to inherent limitations. Here we approach the multispectral data set with feature extraction followed by a kernel-shape-based unsupervised classification method, the Finite Generalized Gaussian Mixture Model (FGGM), forming an ICA-FGGM model for improved classification of brain tissues in MRI. First, ICA is applied to MRI brain data from three source image sets (T1, T2 and PD/FLAIR images) to obtain three optimally feature-extracted independent components. The FGGM model can then incorporate various distributions, from peaked ones to flat ones, thereby overcoming the disadvantage of conventional approaches that try to represent the data using a single probability density function. The Expectation-Maximization algorithm is used to estimate the model parameters. Experiments were carried out initially on synthetic image sets to validate the algorithm and then on normal and abnormal clinical multispectral MRI brain images. Comparative studies using quantitative and qualitative analysis against conventional approaches confirm the effectiveness and superiority of the proposed method.
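As a simplified sketch of the overall pipeline (not the paper's implementation), the code below extracts independent components from co-registered multispectral volumes with FastICA and then clusters voxels with a mixture model fit by EM. scikit-learn provides no generalized Gaussian mixture, so an ordinary Gaussian mixture stands in for the FGGM step; the function name and the number of tissue classes are assumptions.

import numpy as np
from sklearn.decomposition import FastICA
from sklearn.mixture import GaussianMixture

def segment_multispectral(t1, t2, flair, n_classes=4):
    # t1, t2, flair: co-registered volumes (or slices) of identical shape.
    shape = t1.shape
    x = np.stack([t1.ravel(), t2.ravel(), flair.ravel()], axis=1).astype(float)
    ics = FastICA(n_components=3, random_state=0).fit_transform(x)    # feature extraction
    labels = GaussianMixture(n_components=n_classes, random_state=0).fit_predict(ics)
    return labels.reshape(shape)                                      # per-voxel class map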