Multi-Scale Dense Convolutional Networks for Efficient Prediction
This paper studies convolutional networks that require limited computational resources at test time. We develop a new network architecture that performs on par with state-of-the-art convolutional networks, whilst facilitating prediction in two settings: (1) an anytime-prediction setting in which the network’s prediction for one example is progressively updated, facilitating the output of a prediction at any time; and (2) a batch computational budget setting in which a fixed amount of computation is available to classify a set of examples that can be spent unevenly across “easier” and “harder” examples. Our network architecture uses multi-scale convolutions and progressively growing feature representations, which allows for the training of multiple classifiers at intermediate layers of the network. Experiments on three image-classification datasets demonstrate the efficacy of our architecture, in particular, when measured in terms of classification accuracy as a function of the amount of compute available.
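To make the two prediction settings concrete, here is a minimal, self-contained sketch (not the paper's MSDNet implementation) in which hypothetical intermediate classifiers are queried stage by stage: the anytime setting simply returns the most recent prediction, while the batch-budget setting lets confident examples exit early so harder ones receive more stages. All names, the toy classifier, and the confidence threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def stage_predict(x, stage):
    """Stand-in for an intermediate classifier at a given depth (illustrative only)."""
    logits = rng.normal(size=5) + stage * x          # deeper stages -> sharper logits (toy model)
    p = np.exp(logits - logits.max())
    return p / p.sum()

def anytime_predict(x, n_stages=4):
    """Anytime setting: keep refining; the latest prediction is always available."""
    latest = None
    for s in range(1, n_stages + 1):
        latest = stage_predict(x, s)                 # could be interrupted after any stage
    return latest

def budgeted_batch_predict(batch, n_stages=4, confidence=0.9):
    """Batch-budget setting: easy examples exit early, hard ones get more compute."""
    preds = []
    for x in batch:
        for s in range(1, n_stages + 1):
            p = stage_predict(x, s)
            if p.max() >= confidence or s == n_stages:
                preds.append((int(p.argmax()), s))   # label and stages actually spent
                break
    return preds

print("anytime:", anytime_predict(0.3))
print("budgeted:", budgeted_batch_predict(rng.normal(size=8)))
```

In the actual architecture, each stage corresponds to a classifier attached to an intermediate layer of the multi-scale network, so the compute spent per example tracks the stage at which it exits.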
Low-Rank Representation For Enhanced Deep Neural Network Acoustic Models: Master Project Report
Automatic speech recognition (ASR) is a fascinating area of research towards realizing human-machine interaction. After more than 30 years of reliance on Gaussian Mixture Models (GMMs), state-of-the-art systems currently rely on Deep Neural Networks (DNNs) to estimate class-conditional posterior probabilities. The posterior probabilities are used for acoustic modeling in hidden Markov models (HMMs), forming a hybrid DNN-HMM system that is now the leading approach to ASR. The present work builds upon the hypothesis that the optimal acoustic models are sparse and lie on multiple low-rank probability subspaces. Hence, the main goal of this Master project was to investigate different ways to restructure the DNN outputs using low-rank representation. Exploiting a large number of training posterior vectors, the underlying low-dimensional subspace can be identified, and low-rank decomposition enables separation of the "optimal" posteriors from the spurious (unstructured) uncertainties at the DNN output. The posteriors are grouped according to their subspace similarities and structured through low-rank decomposition. Experiments demonstrate that low-rank representation can enhance posterior probability estimation and lead to higher ASR accuracy. Furthermore, a novel hashing technique is proposed that exploits the low-rank property of posterior subspaces and enables fast search in the space of posterior exemplars.
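The following numpy sketch illustrates the core low-rank idea under simple assumptions: posterior vectors are stacked into a matrix, projected onto a truncated SVD subspace, and renormalized. The report's actual pipeline (subspace grouping by similarity, exemplar hashing) is not reproduced here, and all data are synthetic.

```python
import numpy as np

def low_rank_enhance(posteriors, rank):
    """Project a matrix of DNN posterior vectors (frames x classes) onto a
    rank-r subspace via truncated SVD, then renormalize rows to sum to 1.
    A sketch of the low-rank reconstruction idea, not the report's exact pipeline."""
    U, s, Vt = np.linalg.svd(posteriors, full_matrices=False)
    approx = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    approx = np.clip(approx, 1e-8, None)             # keep probabilities positive
    return approx / approx.sum(axis=1, keepdims=True)

# Toy example: 100 noisy posterior vectors over 20 classes.
rng = np.random.default_rng(1)
clean = rng.dirichlet(np.ones(20) * 0.1, size=100)
noisy = np.clip(clean + 0.02 * rng.normal(size=clean.shape), 1e-8, None)
noisy /= noisy.sum(axis=1, keepdims=True)
enhanced = low_rank_enhance(noisy, rank=5)
print(enhanced.shape, enhanced.sum(axis=1)[:3])
```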
Effects of a new calcium sensitizer, levosimendan, on haemodynamics, coronary blood flow and myocardial substrate utilization early after coronary artery bypass grafting.
AIMS The aim of the study was to evaluate the effects on systemic and coronary haemodynamics and myocardial substrate utilization of a new calcium sensitizer, levosimendan, after coronary artery bypass grafting. METHODS AND RESULTS Twenty-three low-risk patients were included in this randomized and double-blind study. They received placebo (n = 8), 8 (n = 8) or 24 (n = 7) micrograms.kg-1 of levosimendan after coronary artery bypass operation. Systemic and coronary sinus haemodynamics with thermodilution and myocardial substrate utilization were measured. The heart rate increased by 11 beats.min-1 after the higher dose (P < 0.05). Cardiac output increased by 0.7 and 1.6 l.min-1 (P < 0.05 for both) after 8 and 24 micrograms.kg-1 of levosimendan, respectively. Systemic and pulmonary vascular resistance decreased significantly after both doses. Coronary sinus blood flow increased by 28 and 42 ml.min-1 (P = 0.054 for the combined effect) after the lower and higher dose, respectively. Myocardial oxygen consumption or substrate extractions did not change statistically significantly. CONCLUSION Despite improved cardiac performance, levosimendan did not increase myocardial oxygen consumption or change myocardial substrate utilization. Thus levosimendan has the potential to treat low cardiac output states after cardiopulmonary bypass surgery.
Exploring the Millennium Run - Scalable Rendering of Large-Scale Cosmological Datasets
In this paper we investigate scalability limitations in the visualization of large-scale particle-based cosmological simulations, and we present methods to reduce these limitations on current PC architectures. To minimize the amount of data to be streamed from disk to the graphics subsystem, we propose a visually continuous level-of-detail (LOD) particle representation based on a hierarchical quantization scheme for particle coordinates and rules for generating coarse particle distributions. Given the maximal world-space error per level, our LOD selection technique guarantees a sub-pixel screen-space error during rendering. A brick-based page-tree further reduces the number of disk seek operations to be performed. Additional particle quantities such as density, velocity dispersion, and radius are compressed with no visible loss using vector quantization of logarithmically encoded floating-point values. Fine-grained view-frustum culling and presence acceleration in a geometry shader significantly reduce the required geometry throughput on the GPU. We validate the quality and scalability of our method by presenting visualizations of a particle-based cosmological dark-matter simulation exceeding 10 billion elements.
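As an illustration of the attribute-compression step, the sketch below quantizes log-encoded particle densities with a tiny scalar k-means codebook. The paper uses vector quantization of the log-encoded values; the codebook size, iteration count, and data here are all made up.

```python
import numpy as np

def vq_codebook(values, k=256, iters=20, seed=0):
    """Tiny k-means over log-encoded scalars: returns (codebook, indices).
    A toy stand-in for the paper's vector quantization of log-encoded quantities."""
    logs = np.log(values)
    rng = np.random.default_rng(seed)
    codebook = rng.choice(logs, size=k, replace=False)
    for _ in range(iters):
        idx = np.abs(logs[:, None] - codebook[None, :]).argmin(axis=1)
        for j in range(k):
            members = logs[idx == j]
            if members.size:
                codebook[j] = members.mean()
    return codebook, idx

rng = np.random.default_rng(2)
density = rng.lognormal(mean=0.0, sigma=2.0, size=10_000)   # wide dynamic range
codebook, idx = vq_codebook(density, k=64)
decoded = np.exp(codebook[idx])                              # decompression
print(np.median(np.abs(decoded - density) / density))        # median relative error
```

Encoding in log space spreads the quantization error evenly over the large dynamic range typical of densities, which is why the decoded values stay visually indistinguishable from the originals.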
Connecting Two (or Less) Dots: Discovering Structure in News Articles
Finding information is becoming a major part of our daily life. Entire sectors, from Web users to scientists and intelligence analysts, are increasingly struggling to keep up with the larger and larger amounts of content published every day. With this much data, it is often easy to miss the big picture. In this article, we investigate methods for automatically connecting the dots---providing a structured, easy way to navigate within a new topic and discover hidden connections. We focus on the news domain: given two news articles, our system automatically finds a coherent chain linking them together. For example, it can recover the chain of events starting with the decline of home prices (January 2007), and ending with the health care debate (2009). We formalize the characteristics of a good chain and provide a fast search-driven algorithm to connect two fixed endpoints. We incorporate user feedback into our framework, allowing the stories to be refined and personalized. We also provide a method to handle partially-specified endpoints, for users who do not know both ends of a story. Finally, we evaluate our algorithm over real news data. Our user studies demonstrate that the objective we propose captures the users’ intuitive notion of coherence, and that our algorithm effectively helps users understand the news.
Syntactic Skeleton-Based Translation
In this paper we propose an approach to modeling the syntactically motivated skeletal structure of the source sentence for machine translation. This model allows for the application of high-level syntactic transfer rules and low-level non-syntactic rules. It thus accommodates fully syntactic, non-syntactic, and partially syntactic derivations via a single grammar and decoding paradigm. On large-scale Chinese-English and English-Chinese translation tasks, we obtain an average improvement of +0.9 BLEU across the newswire and web genres.
Adaptive Hierarchical Fair Competition (AHFC) Model For Parallel Evolutionary Algorithms
The HFC model for parallel evolutionary computation is inspired by the stratified competition often seen in society and biology. Subpopulations are stratified by fitness. Individuals move from low-fitness to higher-fitness subpopulations if and only if they exceed the fitness-based admission threshold of the receiving subpopulation, but not of a higher one. The HFC model implements several critical features of a competent parallel evolutionary computation model, simultaneously and naturally, allowing rapid exploitation while impeding premature convergence. The AHFC model is an adaptive version of HFC, extending it by allowing the admission thresholds of fitness levels to be determined dynamically by the evolution process itself. The effectiveness of the Adaptive HFC model is compared with the HFC model on a genetic programming-based evolutionary synthesis example.
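A toy sketch of the stratified migration rule and of adaptive thresholds derived from the current fitness distribution, assuming fitness values in [0, 1] and quantile-based thresholds; it is not the authors' implementation.

```python
import random

def ahfc_migrate(subpops, thresholds):
    """Move each individual to the highest level whose admission threshold it meets
    (but not a higher one). `subpops` is a list of lists of fitness values, lowest level first.
    Illustrative sketch of the HFC migration rule, not the original implementation."""
    everyone = [f for level in subpops for f in level]
    new_pops = [[] for _ in subpops]
    for f in everyone:
        level = 0
        for i, t in enumerate(thresholds):
            if f >= t:
                level = i
        new_pops[level].append(f)
    return new_pops

def adapt_thresholds(subpops, n_levels):
    """Adaptive HFC: derive thresholds from quantiles of the current fitness distribution."""
    everyone = sorted(f for level in subpops for f in level)
    return [everyone[int(q * (len(everyone) - 1))]
            for q in [i / n_levels for i in range(n_levels)]]

random.seed(0)
pops = [[random.random() for _ in range(20)] for _ in range(4)]
thr = adapt_thresholds(pops, 4)
print(thr, [len(p) for p in ahfc_migrate(pops, thr)])
```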
DeViSE: A Deep Visual-Semantic Embedding Model
Modern visual recognition systems are often limited in their ability to scale to large numbers of object categories. This limitation is in part due to the increasing difficulty of acquiring sufficient training data in the form of labeled images as the number of object categories grows. One remedy is to leverage data from other sources – such as text data – both to train visual models and to constrain their predictions. In this paper we present a new deep visual-semantic embedding model trained to identify visual objects using both labeled image data as well as semantic information gleaned from unannotated text. We demonstrate that this model matches state-of-the-art performance on the 1000-class ImageNet object recognition challenge while making more semantically reasonable errors, and also show that the semantic information can be exploited to make predictions about tens of thousands of image labels not observed during training. Semantic knowledge improves such zero-shot predictions by up to 65%, achieving hit rates of up to 10% across thousands of novel labels never seen by the visual model.
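A minimal sketch of the prediction step under simple assumptions: once a visual embedding has been mapped into the word-vector space, labels (including ones never seen during image training) are ranked by cosine similarity. The embeddings below are random stand-ins, not trained word vectors or CNN features.

```python
import numpy as np

def zero_shot_predict(image_embedding, label_vectors, label_names, k=5):
    """Rank labels (including ones never seen in image training) by cosine similarity
    between a visual embedding mapped into the word-vector space and each label's
    word vector. A sketch of the DeViSE-style prediction step with made-up data."""
    img = image_embedding / np.linalg.norm(image_embedding)
    lab = label_vectors / np.linalg.norm(label_vectors, axis=1, keepdims=True)
    scores = lab @ img
    top = np.argsort(-scores)[:k]
    return [(label_names[i], float(scores[i])) for i in top]

rng = np.random.default_rng(3)
names = [f"label_{i}" for i in range(1000)]        # stand-in for a large label vocabulary
vectors = rng.normal(size=(1000, 300))             # stand-in for word embeddings
query = vectors[42] + 0.1 * rng.normal(size=300)   # visual embedding near label_42
print(zero_shot_predict(query, vectors, names))
```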
Node Detection and Internode Length Estimation of Tomato Seedlings Based on Image Analysis and Machine Learning
Seedling vigor in tomatoes determines the quality and growth of fruits and total plant productivity. It is well known that the salient effects of environmental stresses appear in the internode length, i.e., the length between adjoining main stem nodes (henceforth called nodes). In this study, we develop a method for internode length estimation using image processing technology. The proposed method consists of three steps: node detection, node order estimation, and internode length estimation. This method has two main advantages: (i) as it uses machine learning approaches for node detection, it does not require adjustment of threshold values even though seedlings are imaged at varying times and under varying lighting conditions with complex backgrounds; and (ii) as it uses affinity propagation for node order estimation, it can be applied to seedlings with different numbers of nodes without prior provision of the node number as a parameter. Our node detection results show that the proposed method can detect 72% of the 358 nodes in time-series imaging of three seedlings (recall = 0.72, precision = 0.78). In particular, the application of a general object recognition approach, Bag of Visual Words (BoVWs), enabled the elimination of many false positives on leaves occurring in the image segmentation based on pixel color, significantly improving the precision. The internode length estimation results had a relative error of below 15.4%. These results demonstrate that our method can evaluate the vigor of tomato seedlings quickly and accurately.
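A small sketch, using scikit-learn's AffinityPropagation on made-up detection coordinates, of how node detections could be grouped without specifying the number of nodes and then ordered along the stem to yield internode lengths. The detection and feature-extraction stages (BoVW) are omitted, and all coordinates are hypothetical.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

# Hypothetical (x, y) image coordinates of detected node candidates; several detections
# may fall on the same physical node, and the number of nodes is not known in advance.
rng = np.random.default_rng(4)
true_nodes = np.array([[100, 400], [110, 300], [120, 210], [125, 130]], dtype=float)
detections = np.vstack([n + rng.normal(scale=5, size=(6, 2)) for n in true_nodes])

# Affinity propagation groups detections without a preset cluster count,
# mirroring the paper's use of it for node order estimation (sketch only).
ap = AffinityPropagation(damping=0.9, random_state=0).fit(detections)
centers = ap.cluster_centers_

# Order nodes along the main stem, e.g. from the bottom of the image upwards.
order = np.argsort(-centers[:, 1])
internode_lengths = np.linalg.norm(np.diff(centers[order], axis=0), axis=1)
print(centers[order], internode_lengths)
```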
Randomised phase II trial of hyperbaric oxygen therapy in patients with chronic arm lymphoedema after radiotherapy for cancer.
BACKGROUND A non-randomised phase II study suggested a therapeutic effect of hyperbaric oxygen (HBO) therapy on arm lymphoedema following adjuvant radiotherapy for early breast cancer, justifying further investigation in a randomised trial. METHODS Fifty-eight patients with ≥ 15% increase in arm volume after supraclavicular ± axillary radiotherapy (axillary surgery in 52/58 patients) were randomised in a 2:1 ratio to HBO (n=38) or to best standard care (n=20). The HBO group breathed 100% oxygen at 2.4 atmospheres absolute for 100 min on 30 occasions over 6 weeks. Primary endpoint was ipsilateral limb volume expressed as a percentage of contralateral limb volume. Secondary endpoints included fractional removal rate of radioisotopic tracer from the arm, extracellular water content, patient self-assessments and UK SF-36 Health Survey Questionnaire. FINDINGS Of 53/58 (91.4%) patients with baseline assessments, 46 had 12-month assessments (86.8%). Median volume of ipsilateral limb (relative to contralateral) at baseline was 133.5% (IQR 126.0-152.3%) in the control group, and 135.5% (IQR 126.5-146.0%) in the treatment group. Twelve months after baseline the median (IQR) volume of the ipsilateral limb was 131.2% (IQR 122.7-151.5%) in the control group and 133.5% (IQR 122.3-144.9%) in the treatment group. Results for the secondary endpoints were similar between randomised groups. INTERPRETATION No evidence has been found of a beneficial effect of HBO in the treatment of arm lymphoedema following primary surgery and adjuvant radiotherapy for early breast cancer.
Video Summarization by Curve Simplification
A video sequence can be represented as a trajectory curve in a high-dimensional feature space. This video curve can be analyzed by tools similar to those developed for planar curves. In particular, the classic binary curve splitting algorithm has been found to be a useful tool for video analysis. With a splitting condition that checks the dimensionality of the curve segment being split, the video curve can be recursively simplified and represented as a tree structure, and the frames that are found to be junctions between curve segments at different levels of the tree can be used as keyframes to summarize the video sequence at different levels of detail. These keyframes can be combined in various spatial and temporal configurations for browsing purposes. We describe a simple video player that displays the keyframes sequentially and lets the user change the summarization level on the fly with an additional slider.
1.1 Significance of the Problem. Recent advances in digital technology have promoted video as a valuable information resource. We can now access selected clips from archives of thousands of hours of video footage almost instantly. This new resource is exciting, yet the sheer volume of data makes any retrieval task overwhelming and its efficient usage impossible. Browsing tools that would allow the user to quickly get an idea of the content of video footage are still important missing components in these video database systems. Fortunately, the development of browsing tools is a very active area of research [13, 16, 17], and powerful solutions are on the horizon. Browsers use as building blocks subsets of frames called keyframes, selected because they summarize the video content better than their neighbors. Obviously, selecting one keyframe per shot does not adequately summarize the complex information content of long shots in which camera pan and zoom, as well as object motion, progressively unveil entirely new situations. Shots should be sampled by a higher or lower density of keyframes according to their activity level. Sampling techniques that attempt to detect significant information changes simply by looking at pairs of frames or even several consecutive frames are bound to lack robustness in the presence of noise, such as jitter occurring during camera motion or sudden illumination changes due to fluorescent light flicker, glare, and photographic flash. Interestingly, methods developed to detect perceptually significant points and discontinuities on noisy 2D curves have successfully addressed this type of problem, and can be extended to the multidimensional curves that represent video sequences. In this paper, we describe an algorithm that can decompose a curve originally defined in a high-dimensional space into curve segments of low dimension. In particular, a video sequence can be mapped to a high-dimensional polygonal trajectory curve by mapping each frame to a time-dependent feature vector, and representing these feature vectors as points. We can apply this algorithm to segment the curve of the video sequence into low-dimensional curve segments or even line segments. These segments correspond to video footage where activity is low and frames are redundant. The idea is to detect the constituent segments of the video curve rather than attempt to locate the junctions between these segments directly. In such a dual approach, the curve is decomposed into segments which exhibit linearity or low dimensionality. Curvature discontinuities are then assigned to the junctions between these segments. Detecting general structure in the video curves to derive frame locations of features such as cuts and shot transitions, rather than attempting to locate the features themselves by local analysis of frame changes, ensures that the detected positions of these features are more stable in the presence of noise, which is effectively filtered out. In addition, the proposed technique builds a binary tree representation of a video sequence in which branches contain frames corresponding to more detailed representations of the sequence. The user can view the video sequence at coarse or fine levels of detail, zooming in by displaying keyframes corresponding to the leaves of the tree, or zooming out by displaying keyframes near the root of the tree.
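A simplified numpy sketch of the binary curve-splitting idea: a segment is kept if its points lie close to a single line (low dimensionality) and is otherwise split at the frame farthest from that line, which becomes a candidate keyframe. The splitting criterion, threshold, and toy data are assumptions, not the paper's exact formulation.

```python
import numpy as np

def split_curve(points, max_residual=0.5):
    """Recursively split a high-dimensional polygonal curve into near-linear segments.
    Returns the indices of junction points (candidate keyframes).
    A simplified sketch of the binary curve-splitting idea, not the paper's algorithm."""
    n = len(points)
    if n <= 2:
        return []
    centered = points - points.mean(axis=0)
    _, s, Vt = np.linalg.svd(centered, full_matrices=False)
    residual = centered - np.outer(centered @ Vt[0], Vt[0])      # distance from best-fit line
    dist = np.linalg.norm(residual, axis=1)
    if dist.max() <= max_residual:
        return []                                                # segment is low-dimensional
    cut = int(np.clip(dist.argmax(), 1, n - 2))                  # split at the worst frame
    left = split_curve(points[:cut + 1], max_residual)
    right = split_curve(points[cut:], max_residual)
    return left + [cut] + [cut + i for i in right]

# Toy "video curve": per-frame feature vectors with an abrupt content change at frame 50.
rng = np.random.default_rng(5)
frames = np.vstack([np.linspace(0, 1, 50)[:, None] * np.ones((1, 8)),
                    2 + np.linspace(0, 1, 50)[:, None] * rng.normal(size=(1, 8))])
print(sorted(split_curve(frames, max_residual=0.8)))
```

Collecting the junction indices at successive recursion depths would give the binary-tree summarization levels described in the text.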
3D Face Morphable Models "In-the-Wild"
3D Morphable Models (3DMMs) are powerful statistical models of 3D facial shape and texture, and among the state-of-the-art methods for reconstructing facial shape from single images. With the advent of new 3D sensors, many 3D facial datasets have been collected containing both neutral as well as expressive faces. However, all datasets are captured under controlled conditions. Thus, even though powerful 3D facial shape models can be learnt from such data, it is difficult to build statistical texture models that are sufficient to reconstruct faces captured in unconstrained conditions (in-the-wild). In this paper, we propose the first, to the best of our knowledge, in-the-wild 3DMM by combining a powerful statistical model of facial shape, which describes both identity and expression, with an in-the-wild texture model. We show that the employment of such an in-the-wild texture model greatly simplifies the fitting procedure, because there is no need to optimise with regards to the illumination parameters. Furthermore, we propose a new fast algorithm for fitting the 3DMM in arbitrary images. Finally, we have captured the first 3D facial database with relatively unconstrained conditions and report quantitative evaluations with state-of-the-art performance. Complementary qualitative reconstruction results are demonstrated on standard in-the-wild facial databases.
3-D Snake Robot Motion: Nonsmooth Modeling, Simulations, and Experiments
A nonsmooth (hybrid) 3-D mathematical model of a snake robot (without wheels) is developed and experimentally validated in this paper. The model is based on the framework of nonsmooth dynamics and convex analysis that allows us to easily and systematically incorporate unilateral contact forces (i.e., between the snake robot and the ground surface) and friction forces based on Coulomb's law of dry friction. Conventional numerical solvers cannot be employed directly due to set-valued force laws and possible instantaneous velocity changes. Therefore, we show how to implement the model for numerical treatment with a numerical integrator called the time-stepping method. This method helps to avoid explicit changes between equations during simulation even though the system is hybrid. Simulation results for the serpentine motion patterns lateral undulation and sidewinding are presented. In addition, experiments are performed with the snake robot "Aiko" for locomotion by lateral undulation and sidewinding, both with isotropic friction. For these cases, back-to-back comparisons between numerical results and experimental results are given.
High dynamic range video
Typical video footage captured using an off-the-shelf camcorder suffers from limited dynamic range. This paper describes our approach to generate high dynamic range (HDR) video from an image sequence of a dynamic scene captured while rapidly varying the exposure of each frame. Our approach consists of three parts: automatic exposure control during capture, HDR stitching across neighboring frames, and tonemapping for viewing. HDR stitching requires accurately registering neighboring frames and choosing appropriate pixels for computing the radiance map. We show examples for a variety of dynamic scenes. We also show how we can compensate for scene and camera movement when creating an HDR still from a series of bracketed still photographs.
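A minimal sketch of the radiance-map computation from registered, exposure-bracketed frames, using a simple hat weighting that trusts mid-range pixels most. The paper's HDR stitching additionally handles scene and camera motion between frames, which is not modeled here; all data and parameters are synthetic.

```python
import numpy as np

def merge_hdr(images, exposure_times):
    """Combine registered, linear (gamma-removed) exposures into a radiance map using a
    hat weighting that trusts mid-range pixels most. A sketch of the HDR-stitching step;
    the paper's method additionally compensates for motion between frames."""
    images = np.asarray(images, dtype=float)                 # (n, h, w), values in [0, 1]
    weights = 1.0 - np.abs(images - 0.5) * 2.0               # 1 at mid-gray, 0 at the extremes
    weights = np.clip(weights, 1e-3, None)
    radiance = images / np.asarray(exposure_times)[:, None, None]
    return (weights * radiance).sum(axis=0) / weights.sum(axis=0)

rng = np.random.default_rng(6)
scene = rng.uniform(0.01, 10.0, size=(32, 32))               # true radiance (toy)
times = [1 / 30, 1 / 120, 1 / 480]
shots = [np.clip(scene * t, 0, 1) for t in times]            # simulated clipped exposures
hdr = merge_hdr(shots, times)
print(float(np.corrcoef(hdr.ravel(), scene.ravel())[0, 1]))
```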
Molecular aspects of age-related cognitive decline: the role of GABA signaling.
Alterations in inhibitory interneurons contribute to cognitive deficits associated with several psychiatric and neurological diseases. Phasic and tonic inhibition imparted by γ-aminobutyric acid (GABA) receptors regulates neural activity and helps to establish the appropriate network dynamics in cortical circuits that support normal cognition. This review highlights basic science demonstrating that inhibitory signaling is altered in aging, and discusses the impact of age-related shifts in inhibition on different forms of memory function, including hippocampus-dependent spatial reference memory and prefrontal cortex (PFC)-dependent working memory. The clinical appropriateness and tractability of select therapeutic candidates for cognitive aging that target receptors mediating inhibition are also discussed.
Therapeutic choices in patients with Ph-positive CML living in Mexico in the tyrosine kinase inhibitor era: SCT or TKIs?
A total of 72 patients with Ph-positive CML in first chronic phase were followed during a 6-year period in two different institutions in México. Among them, 22 were given a reduced-intensity allogeneic SCT, whereas 50 were given a tyrosine kinase inhibitor (TKI), mainly imatinib mesylate. The 6-year overall survival (OS) after the therapeutic intervention for patients allografted or given a TKI was 77 and 84%, respectively (P, NS); the median OS for both groups has not been reached, being above 90 and 71 months, respectively (P, NS). The freedom from progression to blast or accelerated phases was also similar for both groups, as well as the overall OS after diagnosis. Most patients allografted (91%) chose this treatment because they were unable to afford continuing treatment with the TKI, whereas most treated with the TKI (84%) were given the treatment without charge, through institutions able to pay for their treatment. The median cost of each nonmyeloablative allograft was US$18 000, an amount that is enough to cover 180 days of treatment with imatinib (400 mg per day) in México. Cost considerations favor allogeneic SCT as a ‘once only’ procedure whereas lifelong treatment with an expensive drug represents an excessive burden on resources.
A Survey on Software Defined Networking: Architecture for Next Generation Network
The evolution of software defined networking (SDN) has played a significant role in the development of next-generation networks (NGN). SDN, as a programmable network offering "service provisioning on the fly", has attracted keen interest in both academia and industry. In this article, a comprehensive survey is presented on the advancement of SDN over conventional networks. The paper covers the historical evolution of SDN, the functional architecture of SDN and its related technologies, and OpenFlow standards/protocols, including the basic concept of interfacing OpenFlow with network elements (NEs) such as optical switches. In addition, a selective architecture survey has been conducted. Our proposed architecture for software defined heterogeneous networks points towards new technology that opens new vistas in the domain of network technology, facilitating the handling of huge Internet traffic and helping infrastructure and service providers to customize their resources dynamically. Besides, current research projects and various activities being carried out to standardize SDN as NGN by different standards development organizations (SDOs) are elaborated to judge how this technology is moving towards standardization.
A Novel Approach to Trust Management in Unattended Wireless Sensor Networks
Unattended Wireless Sensor Networks (UWSNs) are characterized by long periods of disconnected operation and fixed or irregular intervals between sink visits. The absence of an online trusted third party implies that existing WSN trust management schemes are not applicable to UWSNs. In this paper, we propose a trust management scheme for UWSNs to provide efficient and robust trust data storage and trust generation. For trust data storage, we employ a geographic hash table to identify storage nodes and to significantly decrease storage cost. We use subjective logic based consensus techniques to mitigate trust fluctuations caused by environmental factors. We exploit a set of trust similarity functions to detect trust outliers and to sustain trust pollution attacks. We demonstrate, through extensive analyses and simulations, that the proposed scheme is efficient, robust and scalable.
Induced systemic resistance and plant responses to fungal biocontrol agents.
Biocontrol fungi (BCF) are agents that control plant diseases. These include the well-known Trichoderma spp. and the recently described Sebacinales spp. They have the ability to control numerous foliar, root, and fruit pathogens and even invertebrates such as nematodes. However, this is only a subset of their abilities. We now know that they also have the ability to ameliorate a wide range of abiotic stresses, and some of them can also alleviate physiological stresses such as seed aging. They can also enhance nutrient uptake in plants and can substantially increase nitrogen use efficiency in crops. These abilities may be more important to agriculture than disease control. Some strains also have abilities to improve photosynthetic efficiency and probably respiratory activities of plants. All of these capabilities are a consequence of their abilities to reprogram plant gene expression, probably through activation of a limited number of general plant pathways.
Tunable 1.55-2.1 GHz 4-Pole Elliptic Bandpass Filter With Bandwidth Control and >50 dB Rejection for Wireless Systems
This paper presents a four-pole elliptic tunable combline bandpass filter with center frequency and bandwidth control. The filter is built on a Duroid substrate with εr=10.2 and h=25 mils, and the tuning is done using packaged Schottky diodes. A frequency range of 1.55-2.1 GHz with a 1-dB bandwidth tuning from 40-120 MHz (2.2-8% fractional bandwidth) is demonstrated. A pair of tunable transmission zeros is synthesized at both passband edges and significantly improves the filter selectivity. The rejection level at both the lower and upper stopbands is >50 dB and no spurious response exists close to the passband. The measured third-order intermodulation intercept point (TOI) and 1-dB power compression point at midband (1.85 GHz) and a bandwidth of 110 MHz are >14 dBm and 6 dBm, respectively, and are limited by the Schottky diodes. It is believed that this is the first four-pole combline tunable bandpass filter with an elliptic function response and center frequency and bandwidth control. The application areas are in tunable filters for wireless systems and cognitive radios.
Bipolar Ripple Cancellation Method to Achieve Single-Stage Electrolytic-Capacitor-Less High-Power LED Driver
Conventional topologies for high-power LED drivers with high power factors (PFs) require large capacitances to limit the low-frequency (100 or 120 Hz) LED current ripples. Electrolytic capacitors are commonly used because they are the only capacitors with sufficient energy density to accommodate high-power applications. However, the short life span of electrolytic capacitors significantly reduces the life span of the entire LED lighting fixture, which is undesirable. This paper proposes a bipolar (ac) ripple cancellation method with two different full-bridge power structures to cancel the low-frequency ac ripple in the LED current and minimize the output capacitance requirement, enabling the use of long-life film capacitors. Compared with existing technologies, the proposed circuit achieves zero double-line-frequency current ripple through the LED lamps and achieves a high PF and high efficiency. A 100-W (150 V/0.7 A) LED driver prototype was built, which demonstrates that the proposed method can achieve the same double-line-frequency LED current ripple with only 44-μF film capacitors, compared with the 4700-μF electrolytic capacitors required in conventional single-stage LED drivers. Meanwhile, the proposed prototype achieved a peak power efficiency of 92.5%, benefiting from active clamp technology.
Precise 2.5D facial landmarking via an analysis by synthesis approach
3D face landmarking aims at the automatic localization of 3D facial features and has a wide range of applications, including face recognition, face tracking, and facial expression analysis. Methods so far developed for pure 2D texture images have been shown to be sensitive to changes in lighting conditions. In this paper, we present a statistical model-based technique for accurate 3D face landmarking using an "analysis by synthesis" approach. Our model learns from a training set both the variations of global face shapes and the local variations, in terms of scale-free texture and range patches around each landmark. Given a shape instance, local regions of a new face can be approximated by synthesizing texture and range instances using the texture and range models, respectively. By optimizing an objective function describing the similarity between the new face and the synthesized instances, we can find the best shape in order to locate the landmarks. Evaluated on more than 1860 face models from the FRGC datasets, our method achieves an average locating error of less than 7 mm over 15 feature points. Compared with a curvature analysis-based method also developed within our team, this learning-based method enables localization of more facial landmarks with generally better accuracy, at the cost of a learning step.
Mitigating Address Spoofing Attacks in Hybrid SDN
Address spoofing attacks like ARP spoofing and DDoS attacks are mostly launched in a networking environment to degrade performance. These attacks sometimes bring down network services before the administrator becomes aware of the attack condition. Software Defined Networking (SDN) has emerged as a novel network architecture in which the data plane is isolated from the control plane. The control plane is implemented at a central device called the controller. However, the SDN paradigm is not yet commonly deployed due to constraints such as budget, limited skills for operating SDN, and the flexibility of traditional protocols. To obtain SDN benefits in a traditional network, a limited number of SDN devices can be deployed among legacy devices; this technique is called hybrid SDN. In this paper, we propose a new approach to automatically detect an attack condition and mitigate that attack in hybrid SDN. We represent the network topology in the form of a graph, and a graph-based traversal mechanism is adopted to indicate the location of the attacker. Simulation results show that our approach enhances network efficiency and improves network security. Keywords: communication system security; network security; ARP spoofing.
GOSELO: Goal-Directed Obstacle and Self-Location Map for Robot Navigation Using Reactive Neural Networks
Robot navigation using deep neural networks has been drawing a great deal of attention. Although reactive neural networks easily learn expert behaviors and are computationally efficient, they generalize poorly beyond the specific environments in which their policies were learned. As such, reinforcement learning and value iteration approaches for learning generalized policies have been proposed. However, these approaches are more costly. In this letter, we tackle the problem of learning reactive neural networks that are applicable to general environments. The key concept is to crop, rotate, and rescale an obstacle map according to the goal location and the agent's current location so that the map representation will be better correlated with self-movement in the general navigation task, rather than with the layout of the environment. Furthermore, in addition to the obstacle map, we input a map of visited locations that contains the movement history of the agent, in order to avoid failures in which the agent travels back and forth repeatedly over the same location. Experimental results reveal that the proposed network outperforms the state-of-the-art value iteration network in the grid-world navigation task. We also demonstrate that the proposed model generalizes well to unseen obstacles and unknown terrain. Finally, we demonstrate that the proposed system enables a mobile robot to successfully navigate in a real dynamic environment.
Natural Toxins for Use in Pest Management
Natural toxins are a source of new chemical classes of pesticides, as well as environmentally and toxicologically safer molecules than many of the currently used pesticides. Furthermore, they often have molecular target sites that are not exploited by currently marketed pesticides. There are highly successful products based on natural compounds in the major pesticide classes. These include the herbicide glufosinate (synthetic phosphinothricin), the spinosad insecticides, and the strobilurin fungicides. These and other examples of currently marketed natural product-based pesticides, as well as natural toxins that show promise as pesticides from our own research are discussed.
Transmission line fault detection and classification
Transmission line protection is an important issue in power system engineering because 85-87% of power system faults occur in transmission lines. This paper presents a technique to detect and classify the different shunt faults on transmission lines for quick and reliable operation of protection schemes. Discrimination among different types of faults on the transmission lines is achieved by applying evolutionary programming tools. PSCAD/EMTDC software is used to simulate different operating and fault conditions on a high-voltage transmission line, namely single phase to ground fault, line to line fault, double line to ground fault, and three-phase short circuit. The discrete wavelet transform (DWT) is applied for decomposition of the fault transients because of its ability to extract information from the transient signal in both the time and frequency domains simultaneously. The data sets obtained from the DWT are used for training and testing the SVM architecture. After extracting useful features from the measured signals, a decision of fault or no fault on any phase or multiple phases of a transmission line is made using three SVM classifiers. The ground detection task is carried out by a proposed ground index. A Gaussian radial basis kernel function (RBF) has been used, and the performance of the classifiers has been evaluated based on fault classification accuracy. In order to determine the optimal parameter settings of an SVM classifier (such as the type of kernel function, its associated parameter, and the regularization parameter C), fivefold cross-validation has been applied to the training set. It is observed that an SVM with an RBF kernel provides better fault classification accuracy than an SVM with a polynomial kernel. The proposed scheme is found to be fast and accurate, and it proves to be a robust classifier for digital distance protection.
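A compact sketch of the feature-extraction and classification core under stated assumptions: DWT sub-band energies (a common, but here assumed, feature choice) feed an RBF-kernel SVM tuned by five-fold cross-validation. The synthetic signals, the single binary classifier, and the parameter grid are illustrative only; the paper uses three per-phase SVMs plus a ground index on PSCAD/EMTDC simulations.

```python
import numpy as np
import pywt
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, train_test_split

def dwt_features(signal, wavelet="db4", level=4):
    """Energy of each DWT sub-band as a compact feature vector (an assumed feature set)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

# Synthetic stand-in data: 'faulted' signals carry an extra high-frequency transient.
rng = np.random.default_rng(7)
t = np.linspace(0, 0.1, 1024)
X, y = [], []
for _ in range(200):
    fault = int(rng.integers(0, 2))
    s = np.sin(2 * np.pi * 50 * t) + 0.05 * rng.normal(size=t.size)
    if fault:
        s += np.exp(-200 * t) * np.sin(2 * np.pi * 1500 * t)
    X.append(dwt_features(s))
    y.append(fault)
X, y = np.array(X), np.array(y)

# RBF-kernel SVM with five-fold cross-validation over (C, gamma), as in the paper's setup.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
grid = GridSearchCV(SVC(kernel="rbf"), {"C": [1, 10, 100], "gamma": ["scale", 0.01]}, cv=5)
grid.fit(X_tr, y_tr)
print(grid.best_params_, grid.score(X_te, y_te))
```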
Big data in precision agriculture: Weather forecasting for future farming
This paper shows how to discover additional insights from precision agriculture data through a big data approach. We present a scenario for the use of Information and Communication Technology (ICT) services in an agricultural big data environment to collect huge amounts of data. Big data analytics in agriculture applications provides new insights for making advance weather decisions, improving yield productivity, and avoiding unnecessary costs related to harvesting and the use of pesticides and fertilizers. The paper lists the different sources of big data in precision agriculture using ICT components and the types of structured and unstructured data involved. It also discusses big data in precision agriculture, an ICT scenario for agricultural big data, the platform, its future applications, and challenges in precision agriculture. Finally, we discuss results obtained using a programming model and a distributed algorithm for data processing and a weather forecasting application.
Active Learning to Rank using Pairwise Supervision
This paper investigates learning a ranking function using pairwise constraints in the context of human-machine interaction. As the performance of a learnt ranking model is predominantly determined by the quality and quantity of training data, in this work we explore an active learning to rank approach. Furthermore, since humans may not be able to confidently provide an order for a pair of similar instances, we explore two types of pairwise supervision: (i) a set of "strongly" ordered pairs which contains confidently ranked instances, and (ii) a set of "weakly" ordered pairs which consists of similar or closely ranked instances. Our active knowledge injection is performed by querying domain experts on pairwise orderings, where informative pairs are located by considering both local and global uncertainties. Under this active scheme, pairs that are uninformative or outlier instances are not queried. We evaluate the proposed approach on three real-world datasets and compare it with representative methods. The promising experimental results demonstrate the superior performance of our approach and validate the effectiveness of actively using pairwise orderings to improve ranking performance.
Assessing Students' Beliefs About Mathematics
The beliefs that students and teachers hold about mathematics have been well-documented in the research literature in recent years. (e.g., Cooney, 1985; Frank, 1988, 1990; Garofalo, 1989a, 1989b; Schoenfeld, 1987; Thompson, 1984, 1985) The research has shown that some beliefs are quite salient across various populations. These commonly held beliefs include the following: • Mathematics is computation. • Mathematics problems should be solved in less than five minutes or else there is something wrong with either the problem or the student. • The goal of doing a mathematics problem is to obtain the correct answer. • In the teaching-learning process, the student is passive and the teacher is active. (Frank, 1988) It is generally agreed that these beliefs are not “healthy” in that they are not conducive to the type of mathematics teaching and learning envisioned in the Curriculum and Evaluation Standards for School Mathematics [Standards] (NCTM, 1989). There appears to be a cyclic relationship between beliefs and learning. Students’ learning experiences are likely to contribute to their beliefs about what it means to learn mathematics. In turn, students’ beliefs about mathematics are likely to influence how they approach new mathematical experiences. According to the Standards, “[Students’] beliefs exert a powerful influence on students’ evaluation of their own ability, on their willingness to engage in mathematical tasks, and on their ultimate mathematical disposition.” (NCTM, 1989, p. 233) This apparent relationship between beliefs and learning raises the issue of how the cycle of influence can be broken. The type of mathematics teaching and learning envisioned in the Standards can provide mathematical experiences that will enrich students’ beliefs about mathematics. Thus, mathematical experiences provide one place where intervention can occur; however, it may also be advantageous to intervene at the other point in the cycle, namely students’ beliefs. The Standards suggest that the assessment of students’ beliefs about mathematics is an important component of the overall assessment of students’ mathematical knowledge. Beliefs are addressed in the tenth standard of the evaluation section, which deals with assessing mathematical disposition. Mathematical disposition is defined to include students’ beliefs about mathematics. It is recommended that teachers use informal discussions and observations to assess students’ mathematical beliefs (NCTM, 1989). Although teachers’ awareness of students’ mathematical beliefs is important, it may be equally important for students to be aware of their own beliefs toward mathematics. One medium for bringing students’ beliefs to a conscious level is open-ended questions. As students ponder their responses to such questions, some of their beliefs about mathematics will be revealed. As groups of students discuss their responses to these questions, some students’ beliefs will likely be challenged, leading to an examination of these beliefs and their origins, and, possibly, to the modification of these beliefs. This article presents some open-ended questions that can be used to address students’ beliefs about mathematics. These questions have been used by the author with elementary, junior high, and senior high school students; preservice and inservice elementary, junior high, and senior high school teachers; and graduate students in mathematics education. The questions have been culled from a variety of sources and do not represent original ideas of the author. 
Each question is followed by a summary of typical responses from the aforementioned groups. The responses from the various populations were strikingly similar, which is not surprising since the beliefs held by these groups are generally quite similar. In some cases, possible origins of the belief or possible avenues for further discussion are included.
Developing an Evaluation Framework for Blockchain in the Public Sector : The Example of the German Asylum Process
The public sector presents several promising applications for blockchain technology. Global organizations and innovative ministries in countries such as Dubai, Sweden, Finland, the Netherlands, and Germany have recognized these potentials and have initiated projects to evaluate the adoption of blockchain technology. As these projects can have a far-reaching impact on crucial government services and processes, they should involve a particularly thorough evaluation. In this paper, we provide insights into the development of a framework to support such an evaluation for the German asylum process. We built this framework evolutionarily together with the Federal Office for Migration and Refugees. Its final version consists of three levels and eighteen categories of evaluation criteria across the technical, functional, and legal domains, and allows specifying use-case-specific key performance indicators or knockout criteria.
A 5.2mW, 0.0016% THD up to 20kHz, ground-referenced audio decoder with PSRR-enhanced class-AB 16Ω headphone amplifiers
A low-power ground-referenced audio decoder with PSRR-enhanced class-AB headphone amplifiers presents <0.0016% THD in the whole audio band against the supply ripple by means of a negative charge pump. Realized in 40nm CMOS, the fully-integrated stereo decoder achieves 91dB SNDR and 100dB dynamic range while driving a 16Ω headphone load, and consumes 5.2mW from a 1.8V power supply. The core area is only 0.093 mm2/channel.
Performance characterization and scalable design of sensing-as-a-service platform
Advancements and the proliferation of sensing and actuation technologies in diverse computing and physical devices call for a flexible Sensing-as-a-Service Platform (SeaaS-P) that can enable massive-scale next-generation applications. A key aspect of such a platform, which has been largely unexplored in terms of applicability at a massive scale, is the notification system (e.g., city incident notification) that sends event alerts to subscribers. Conventional event-based systems (e.g., publish-subscribe) typically trigger notifications with every event arrival. Observations based on the deployment of even a simple SeaaS-P instantiation (with just mobile-based event reporting at a very large scale) show that resources (e.g., CPU) can become a bottleneck and get overused when event rates and the number of subscriptions go beyond certain thresholds. Thus, it may be necessary to opportunistically delay notifications, i.e., not send a notification with every event update (e.g., time-triggered periodic notifications), depending on the delay tolerance of the application. This paper proposes: (i) a performance profiling of SeaaS-P in terms of resource utilization and percentage of notification drops; and (ii) a design framework to determine the design boundaries of the SeaaS-P notification system for both event-triggered and time-triggered notifications.
Quality of Life in Hormone Receptor–Positive HER-2+ Metastatic Breast Cancer Patients During Treatment with Letrozole Alone or in Combination with Lapatinib
BACKGROUND A phase III trial compared lapatinib plus letrozole (L + Let) with letrozole plus placebo (Let) as first-line therapy for hormone receptor (HR)(+) metastatic breast cancer (MBC) patients. The primary endpoint of progression-free survival (PFS) in patients whose tumors were human epidermal growth factor receptor (HER)-2(+) was significantly longer for L + Let than for Let (8.2 months versus 3 months; p = .019). This analysis focuses on quality of life (QOL) in the HER-2(+) population. METHODS QOL was assessed at screening, every 12 weeks, and at withdrawal using the Functional Assessment of Cancer Therapy-Breast (FACT-B). Changes from baseline were analyzed and the proportions of patients achieving minimally important differences in QOL scores were compared. Additional exploratory analyses evaluated how QOL changes reflected tumor progression status. RESULTS Among the 1,286 patients randomized, 219 had HER-2(+) tumors. Baseline QOL scores were comparable in the two arms. Mean changes in QOL scores were generally stable over time for patients who stayed on study. The average change from baseline on the FACT-B total score in both arms was positive at all scheduled visits through week 48. There was no significant difference between the two treatment arms in the percentage of QOL responders. CONCLUSION The addition of lapatinib to letrozole led to a significantly longer PFS interval while maintaining QOL during treatment, when compared with letrozole alone, thus confirming the clinical benefit of the combination therapy in the HR(+) HER-2(+) MBC patient population. This all oral regimen provides an effective option in this patient population, delaying the need for chemotherapy and its accompanying side effects.
Overview and Evaluation of Bluetooth Low Energy: An Emerging Low-Power Wireless Technology
Bluetooth Low Energy (BLE) is an emerging low-power wireless technology developed for short-range control and monitoring applications that is expected to be incorporated into billions of devices in the next few years. This paper describes the main features of BLE, explores its potential applications, and investigates the impact of various critical parameters on its performance. BLE represents a trade-off between energy consumption, latency, piconet size, and throughput that mainly depends on parameters such as connInterval and connSlaveLatency. According to theoretical results, the lifetime of a BLE device powered by a coin cell battery ranges between 2.0 days and 14.1 years. The number of simultaneous slaves per master ranges between 2 and 5,917. The minimum latency for a master to obtain a sensor reading is 676 μs, although simulation results show that, under high bit error rate, average latency increases by up to three orders of magnitude. The paper provides experimental results that complement the theoretical and simulation findings, and indicates implementation constraints that may reduce BLE performance.
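A back-of-the-envelope lifetime estimate in the spirit of the paper's analysis; every number below is an assumption chosen for illustration, not a value taken from the paper.

```python
# Back-of-the-envelope BLE device lifetime estimate.
# All numbers are illustrative assumptions (not taken from the paper).

battery_mAh = 230.0          # typical CR2032 coin cell capacity
sleep_uA = 1.0               # assumed sleep current between connection events
event_charge_uC = 30.0       # assumed charge consumed per connection event
conn_interval_s = 1.0        # connInterval: one connection event per second

avg_current_uA = sleep_uA + event_charge_uC / conn_interval_s
lifetime_hours = battery_mAh * 1000.0 / avg_current_uA
print(f"average current: {avg_current_uA:.1f} uA, "
      f"lifetime: {lifetime_hours / 24 / 365:.2f} years")
```

Lengthening connInterval (or using connSlaveLatency to skip events) lowers the average current and stretches the lifetime, which is the qualitative trade-off behind the paper's reported range from days to years.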
Utilizing marginal net utility for recommendation in e-commerce
Traditional recommendation algorithms often select products with the highest predicted ratings to recommend. However, earlier research in economics and marketing indicates that a consumer usually makes purchase decision(s) based on the product's marginal net utility (i.e., the marginal utility minus the product price). Utility is defined as the satisfaction or pleasure user u gets when purchasing the corresponding product. A rational consumer chooses the product to purchase in order to maximize the total net utility. In contrast to the predicted rating, the marginal utility of a product depends on the user's purchase history and changes over time. According to the Law of Diminishing Marginal Utility, many products have the decreasing marginal utility with the increase of purchase count, such as cell phones, computers, and so on. Users are not likely to purchase the same or similar product again in a short time if they already purchased it before. On the other hand, some products, such as pet food, baby diapers, would be purchased again and again. To better match users' purchase decisions in the real world, this paper explores how to recommend products with the highest marginal net utility in e-commerce sites. Inspired by the Cobb-Douglas utility function in consumer behavior theory, we propose a novel utility-based recommendation framework. The framework can be utilized to revamp a family of existing recommendation algorithms. To demonstrate the idea, we use Singular Value Decomposition (SVD) as an example and revamp it with the framework. We evaluate the proposed algorithm on an e-commerce (shop.com) data set. The new algorithm significantly improves the base algorithm, largely due to its ability to recommend both products that are new to the user and products that the user is likely to re-purchase.
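A tiny sketch of marginal-net-utility scoring with a Cobb-Douglas-style utility function; the product parameters are invented, whereas the paper learns such quantities by revamping an SVD-based recommender on purchase data.

```python
def marginal_net_utility(a, b, purchase_count, price):
    """Marginal net utility of buying one more unit of a product, using a
    Cobb-Douglas-style utility u(q) = a * q**b (0 < b < 1 gives diminishing returns).
    Illustrative parameters; the paper learns them from purchase histories."""
    utility_now = a * purchase_count ** b
    utility_next = a * (purchase_count + 1) ** b
    return utility_next - utility_now - price

# A phone bought once barely pays off again; pet food keeps being worth re-buying.
catalog = {
    "phone":    dict(a=500.0, b=0.2, price=300.0),
    "pet_food": dict(a=30.0,  b=0.9, price=10.0),
}
history = {"phone": 1, "pet_food": 5}
scores = {name: marginal_net_utility(p["a"], p["b"], history[name], p["price"])
          for name, p in catalog.items()}
print(sorted(scores.items(), key=lambda kv: -kv[1]))   # recommend highest marginal net utility
```

With these numbers the phone's marginal net utility is negative after the first purchase while pet food stays positive, mirroring the repeat-purchase behavior the abstract describes.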
Multi-vehicle path planning in dynamically changing environments
In this paper, we propose a path planning method for nonholonomic multi-vehicle systems in the presence of moving obstacles. The objective is to find multiple fixed-length paths for multiple vehicles with the following properties: (i) bounded curvature, (ii) obstacle avoidance, and (iii) collision freedom. Our approach is based on a polygonal approximation of a continuous curve. Using this idea, we formulate an arbitrarily fine relaxation of the path planning problem as a nonconvex feasibility optimization problem. Then, we propose a nonsmooth dynamical systems approach to find feasible solutions of this optimization problem. It is shown that the trajectories of the nonsmooth dynamical system always converge to equilibria that correspond to the set of feasible solutions of the relaxed problem. The proposed framework can handle more complex mission scenarios for multi-vehicle systems, such as rendezvous and area coverage.
Late cognitive and radiographic changes related to radiotherapy: initial prospective findings.
BACKGROUND Assumptions about the damaging effects of radiotherapy (XRT) are based on studies in which total dose, dose fraction, treatment volume, degree of malignancy, chemotherapy, tumor recurrence, and neurologic comorbidity interact with XRT effects. This is a prospective, long-term study of XRT effects in adults, in which total dose and dose fraction were constrained and data related to tumor recurrence and neurologic comorbidity (e.g., hypertension) were excluded. METHODS The effects of XRT on the cognitive and radiographic outcomes of 26 patients with low-grade, supratentorial brain tumors were examined from baseline (6 weeks after surgery and immediately before XRT) and yearly thereafter to 6 years. Radiographic findings were examined regionally. RESULTS Selective cognitive declines (in visual memory) emerged only at 5 years, whereas ratings of clinical MRI (T2 images) showed mild accumulation of hyperintensities with post-treatment onset from 6 months to 3 years, with no further progression. White matter atrophy and total hyperintensities demonstrated this effect, with subcortical and deep white matter, corpus callosum, cerebellar structures, and pons accounting for these changes over time. About half of the patients demonstrated cognitive decline and treatment-related hyperintensities. CONCLUSIONS There was no evidence of a general cognitive decline or progression of white matter changes after 3 years. Results argue for limited damage from XRT at this frequently used dose and volume in the absence of other clinical risk factors.
Solar powered ZCS bidirectional buck-boost converter used in battery energy storage systems
The ZCS bidirectional buck-boost converter for energy storage applications is a soft-switched bidirectional converter with main switches, resonant capacitors, and resonant inductors. ZCS, or Zero Current Switching, is a soft-switching technique used to reduce switching stresses. Switching stresses cause losses; to reduce them, the switches are turned on and off at zero-current or zero-voltage instants. In the bidirectional buck-boost converter, both the buck mode and the boost mode can be operated with bidirectional current and power flow capability. Because renewable energy generation is not constant, it can be utilized effectively by adding a Battery Energy Storage System (BESS). The system can be used mainly for battery banks, electric vehicles, etc., and also wherever both renewable and non-renewable resources are employed. Photovoltaic panels are used as one of the inputs, and the generated energy is stored in a BESS, creating an energy management system. The bidirectional converter provides the interface for these energy exchanges; with it, the stored energy can be used easily whenever needed. To control the operation of the bidirectional converter, a control system has to be implemented.
Symptom distress and quality of life in patients with cancer newly admitted to hospice home care.
PURPOSE/OBJECTIVES To evaluate the relationships between quality of life (QOL) and symptom distress, pain intensity, dyspnea intensity, and constipation intensity in people with advanced cancer who were newly admitted to hospice home care. DESIGN Descriptive and correlational. SETTING A large hospice that provides primarily home care. SAMPLE 178 adult hospice homecare patients with cancer who were accrued to a clinical trial funded by the National Institutes of Health focusing on symptom management and QOL. Patients were excluded if they received a score lower than seven on the Short Portable Mental Status Questionnaire. METHOD The patients were invited to participate in the clinical trial within 48 hours of admission to hospice home care. Among the questionnaires they completed were a QOL index and a distress scale. Scales measuring present intensity of pain, dyspnea, and constipation also were administered. MAIN RESEARCH VARIABLES QOL, symptom distress, pain intensity, dyspnea intensity, and constipation intensity. FINDINGS The most frequently reported symptoms among the sample were lack of energy, pain, dry mouth, and shortness of breath. Lack of energy caused the greatest distress, followed closely by dry mouth and pain. The results of the regression analysis indicated that total distress score, pain intensity, dyspnea intensity, and constipation intensity were related to QOL at the univariate level. When all predictors were considered simultaneously, only the total distress score remained a significant predictor of QOL (p< 0.001), accounting for about 35% of variance. CONCLUSIONS QOL was affected by symptom distress in people with advanced cancer near the end of life. IMPLICATIONS FOR NURSING The symptoms most commonly reported and those that cause the greatest patient distress should be addressed first by hospice nurses. Continued effort is needed in the important area of symptom management.
Formulation of deep reinforcement learning architecture toward autonomous driving for on-ramp merge
Multiple automakers have in development or in production automated driving systems (ADS) that offer freeway-pilot functions. This type of ADS is typically limited to restricted-access freeways only, that is, the transition from manual to automated modes takes place only after the ramp merging process is completed manually. One major challenge to extend the automation to ramp merging is that the automated vehicle needs to incorporate and optimize long-term objectives (e.g. successful and smooth merge) when near-term actions must be safely executed. Moreover, the merging process involves interactions with other vehicles whose behaviors are sometimes hard to predict but may influence the merging vehicle's optimal actions. To tackle such a complicated control problem, we propose to apply Deep Reinforcement Learning (DRL) techniques for finding an optimal driving policy by maximizing the long-term reward in an interactive environment. Specifically, we apply a Long Short-Term Memory (LSTM) architecture to model the interactive environment, from which an internal state containing historical driving information is conveyed to a Deep Q-Network (DQN). The DQN is used to approximate the Q-function, which takes the internal state as input and generates Q-values as output for action selection. With this DRL architecture, the historical impact of interactive environment on the long-term reward can be captured and taken into account for deciding the optimal control policy. The proposed architecture has the potential to be extended and applied to other autonomous driving scenarios such as driving through a complex intersection or changing lanes under varying traffic flow conditions.
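A minimal PyTorch sketch of the described LSTM-plus-DQN arrangement; the observation size, hidden size, and discrete action set are assumptions, and the training loop (experience replay, target network, epsilon-greedy exploration) is omitted.

```python
import torch
import torch.nn as nn

class LSTMDQN(nn.Module):
    """LSTM encoder over the recent interaction history followed by a Q-network,
    mirroring the architecture described in the abstract. Sizes and the action set
    (e.g. accelerate / hold / brake) are illustrative assumptions."""
    def __init__(self, obs_dim=10, hidden_dim=64, n_actions=3):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden_dim, batch_first=True)
        self.q_head = nn.Sequential(
            nn.Linear(hidden_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))

    def forward(self, history):
        # history: (batch, time, obs_dim) of past observations of nearby vehicles + ego state
        _, (h_n, _) = self.lstm(history)      # internal state summarizing the interaction
        return self.q_head(h_n[-1])           # Q-value per discrete merge action

model = LSTMDQN()
history = torch.randn(4, 20, 10)              # batch of 4 merge scenarios, 20 past time steps
q_values = model(history)
action = q_values.argmax(dim=1)               # greedy action; epsilon-greedy during training
print(q_values.shape, action)
```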
Validation of the seventh edition of the American Joint Committee on Cancer TNM staging system for gastric cancer.
BACKGROUND The seventh edition of the American Joint Committee on Cancer (AJCC) TNM classification for gastric cancer was published in 2010 and included major revisions. The aim of the current study was to evaluate the validity of the seventh edition TNM classification for gastric cancer based on an Asian population. METHODS A total of 2916 gastric cancer patients who underwent R0 surgical resection from 1989 through 2008 in a single institute were included, and were analyzed according to the seventh edition of the TNM classification for validation. RESULTS When adjusted using the seventh edition of the TNM classification, upstaging was observed in 771 patients (26.4%) and downstaging was observed in 178 patients (6.1%) compared with the sixth edition of the TNM classification. The relative risk (RR) of seventh edition pT classification was found to be increased with regular intensity compared with the sixth edition pT classification. The RR of seventh edition pN classification was found to be increased with irregular intensity compared with the sixth edition pN classification. In survival analysis, there were significant differences noted for each stage of disease, but only a marginal difference was demonstrated between stage IA and stage IB (P = .049). In the hybrid TNM classification, which combines the seventh edition pT classification and the sixth edition pN classification, both pT and pN classifications demonstrated a more ideal distribution of the RR, and 5-year survival rates also showed a significant difference for each stage (P <.01). CONCLUSIONS The seventh edition of the TNM classification was considered valid based on the results of the current study. However, the hybrid TNM classification, comprised of a combination of the seventh edition pT classification and sixth edition pN classification, should be considered for the next edition.
Visual Discovery and Analysis
We have developed a flexible software environment called ADVIZOR for visual information discovery. ADVIZOR complements existing assumption-based analyses by providing a discovery-based approach. ADVIZOR consists of five parts: a rich set of flexible visual components, strategies for arranging the components for particular analyses, an in-memory data pool, data manipulation components, and container applications. Working together, ADVIZOR's architecture provides a powerful production platform for creating innovative visual query and analysis applications. Index Terms—Information visualization, data analysis, visual design patterns, perspectives, linked views.
LEGO therapy and the social use of language programme: an evaluation of two social skills interventions for children with high functioning autism and Asperger Syndrome.
LEGO therapy and the Social Use of Language Programme (SULP) were evaluated as social skills interventions for 6-11 year olds with high functioning autism and Asperger Syndrome. Children were matched on CA, IQ, and autistic symptoms before being randomly assigned to LEGO or SULP. Therapy occurred for 1 h/week over 18 weeks. A no-intervention control group was also assessed. Results showed that the LEGO therapy group improved more than the other groups on autism-specific social interaction scores (Gilliam Autism Rating Scale). Maladaptive behaviour decreased significantly more in the LEGO and SULP groups compared to the control group. There was a non-significant trend for SULP and LEGO groups to improve more than the no-intervention group in communication and socialisation skills.
A survey and taxonomy of approaches for mining software repositories in the context of software evolution
A comprehensive literature survey on approaches for mining software repositories (MSR) in the context of software evolution is presented. In particular, this survey deals with those investigations that examine multiple versions of software artifacts or other temporal information. A taxonomy is derived from the analysis of this literature and presents the work via four dimensions: the type of software repositories mined (what), the purpose (why), the adopted/invented methodology used (how), and the evaluation method (quality). The taxonomy is demonstrated to be expressive (i.e., capable of representing a wide spectrum of MSR investigations) and effective (i.e., facilitates similarities and comparisons of MSR investigations). Lastly, a number of open research issues in MSR that require further investigation are identified.
Control of double inverted pendulum (DIP) using fuzzy hybrid adaptive neuro controller
This paper presents a new methodological approach for selecting the appropriate type and number of membership functions (MFs) for the effective control of a double inverted pendulum (DIP). A Matlab-Simulink model of the system is built from the governing mathematical equations. The relation between the error tolerance of successive approximations and the number of MFs for the controllers is also shown. Stabilization is achieved using fuzzy and Adaptive Neuro Fuzzy Inference System (ANFIS) controllers with triangular and gbell MFs, respectively. The proposed ANFIS and fuzzy controllers stabilize the DIP system within 2.5 and 3.0 seconds, respectively. All of the controllers show almost zero steady-state error. Both controllers give excellent results, which supports the validity of the proposed model, with the ANFIS controller performing better than the fuzzy controller. Results for settling time (s), steady-state error, and maximum overshoot (degrees) for each input and output are presented with the help of graphs and tables.
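For reference, the two membership-function shapes named above have standard closed forms; the NumPy sketch below is illustrative only, and the parameter values are not the ones tuned in the paper.

```python
# Illustrative membership functions: triangular (fuzzy controller) and generalized bell (ANFIS).
import numpy as np

def triangular_mf(x, a, b, c):
    # a < b < c: zero outside [a, c], peak of 1 at b
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def gbell_mf(x, a, b, c):
    # generalized bell: width a, slope b, centre c
    return 1.0 / (1.0 + np.abs((x - c) / a) ** (2 * b))

x = np.linspace(-1.0, 1.0, 5)
print(triangular_mf(x, -1.0, 0.0, 1.0))
print(gbell_mf(x, 0.5, 2.0, 0.0))
```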
Vehicle Localization Based on the Detection of Line Segments from Multi-Camera Images
For realizing autonomous vehicle driving and advanced safety systems, it is necessary to achieve accurate vehicle localization in cities. This paper proposes a method of accurately estimating vehicle position by matching a map and line segment features detected from images captured by a camera. Features such as white road lines, yellow road lines, road signs, and curb stones, which could be used as clues for vehicle localization, were expressed as line segment features on a two-dimensional road plane in an integrated manner. The detected line segments were subjected to bird’s-eye view transformation to transform them to the vehicle coordinate system so that they could be used for vehicle localization regardless of the camera configuration. Moreover, an extended Kalman filter was applied after a detailed study of the line observation errors for realizing real-time estimation. Vehicle localization was tested under city driving conditions, and the vehicle position was identified with sub-meter accuracy.
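As a rough illustration of the estimation step, a generic extended Kalman filter predict/update cycle looks like the sketch below (NumPy); the actual state vector, motion model, and line-segment measurement model used in the paper are not reproduced here.

```python
# Generic EKF step; f/F are the motion model and its Jacobian,
# h/H the (line-segment) measurement model and its Jacobian.
import numpy as np

def ekf_step(x, P, u, z, f, F, h, H, Q, R):
    # Predict the pose with the motion model
    x_pred = f(x, u)
    P_pred = F @ P @ F.T + Q
    # Update with the matched line-segment observation
    y = z - h(x_pred)                         # innovation
    S = H @ P_pred @ H.T + R                  # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```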
ON A STOCHASTIC FIRST-ORDER HYPERBOLIC EQUATION IN A BOUNDED DOMAIN
In this paper, we are interested in the stochastic perturbation of a first-order hyperbolic equation of nonlinear type. In order to illustrate our purposes, we have chosen a scalar conservation law in a bounded domain with homogeneous Dirichlet condition on the boundary. Using the concept of measure-valued solutions and Kruzhkov’s entropy formulation, a result of existence and uniqueness of the entropy solution is given. Keywords: stochastic PDE, first-order hyperbolic problems, bounded domain, Young measures, Kruzhkov’s entropy. AMS Subject Classification: 35L60, 60H15, 35L50
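A representative form of such a problem (not necessarily the exact formulation studied in the paper) is the stochastically forced scalar conservation law

\[
du + \operatorname{div}\big(f(u)\big)\,dt = g(u)\,dW(t) \quad \text{in } D\times(0,T), \qquad u = 0 \ \text{on } \partial D\times(0,T), \qquad u(\cdot,0)=u_0,
\]

where \(W\) is a Wiener process; the entropy formulation then requires Kruzhkov-type inequalities to hold for all convex entropies, interpreted in a suitably relaxed sense at the boundary.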
Use of Tethered Small Unmanned Aerial System at Berkman Plaza II Collapse
A tethered Small Unmanned Aerial System (sUAS) provided structural forensic inspection of the collapsed Berkman Plaza II six-story parking garage. The sUAS, an iSENSYS IP3 miniature helicopter, was tethered to meet US Federal Aviation Administration (FAA) requirements for unregulated flight below 45 m (150 ft). This created new platform control, human-robot interaction, and safety issues in addition to the challenges posed by the active, city environment. A new technique, viewpoint-oriented Cognitive Work Analysis (CWA), was used to generate the 4:1 human-robot crew organization and operational protocol. Over three flights, the sUAS was able to provide useful imagery to structural engineers that had been difficult to obtain from manned helicopters due to dust obscurants. Based on these flights, this work shows that tethered operation decreases team effectiveness, increases overall safety liability, and in general is not a recommended solution for sUAS flight.
Tilt thresholds for acceleration rendering in driving simulation
The tilt coordination technique is used in driving simulation for reproducing a sustained linear horizontal acceleration by tilting the simulator cabin. If combined with the translation motion of the simulator, this technique increases the acceleration rendering capabilities of the whole system. To perform this technique correctly, the rotational motion must be slow to remain under the perception threshold and thus be unnoticed by the driver. However, the acceleration to render changes quickly. Between the slow rotational motion limited by the tilt threshold and the fast change of acceleration to render, the design of the coupling between motions of rotation and translation plays a critical role in the realism of a driving simulator. This study focuses on the acceptance by drivers of different configurations for tilt restitution in terms of maximum tilt angle, tilt rate, and tilt acceleration. Two experiments were conducted, focusing respectively on roll tilt for a 0.2 Hz slaloming task and on pitch tilt for an acceleration/deceleration task. The results show which thresholds have to be respected in terms of amplitude, rate, and acceleration; these thresholds are considerably higher than the standard human perception thresholds found in the literature.
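For context, tilt coordination exploits the small-angle relation between cabin tilt and the specific force felt by the driver; a common first-order description (not the paper's exact model) is

\[
a_{\text{rendered}} \approx g \sin\theta \approx g\,\theta, \qquad |\dot{\theta}| \le \dot{\theta}_{\max}, \qquad |\ddot{\theta}| \le \ddot{\theta}_{\max},
\]

so a tilt of about 6 degrees renders roughly 1 m/s² of sustained acceleration, while the rate and acceleration limits are exactly the thresholds the two experiments set out to determine.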
A measure of agility as the complexity of the enterprise system
Agility is the ability of an organization to adapt to change and also to seize opportunities that become available due to change. While there has been much work and discussion on what agility is and how firms can become agile, there is little work on measuring the agility of a firm. Measurement is necessary for strategic planning: determining how much agility an organization currently possesses, determining how much is needed, and then assessing the gap and formulating a strategy for closing any perceived weaknesses. Agility as defined is difficult to measure, since it must be measured in the context of a change. Consequently, most current agility measurement approaches are backward looking. A different and novel approach is to use complexity as a surrogate measure for agility. The hypothesis supporting this substitution is that a less complex enterprise, in terms of systems and processes, is easier to change and consequently more agile. To test this idea, a model and measurement approach for measuring complexity is presented. The model uses Petri nets to find the state-space probabilities needed for the complexity measure. The contribution of this research is the quantification of complexity at the business-process level and a description of a method for conducting this measurement.
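One common way to turn state-space probabilities into a single complexity number is the Shannon entropy of the reachable-marking distribution; whether this is exactly the measure used in the paper is an assumption, but the sketch below shows the general idea.

```python
# Hedged sketch: entropy of the Petri-net state-space (marking) probabilities
# as a probability-based complexity measure. The exact measure used in the
# paper is assumed, not quoted.
import math

def state_space_entropy(probs):
    # probs: probabilities of the reachable markings, summing to 1
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(state_space_entropy([0.5, 0.25, 0.25]))   # 1.5 bits
```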
Multifaceted Nature of Intrinsic Motivation: The Theory of 16 Basic Desires
R. W. White (1959) proposed that certain motives, such as curiosity, autonomy, and play (called intrinsic motives, or IMs), have common characteristics that distinguish them from drives. The evidence that mastery is common to IMs is anecdotal, not scientific. The assertion that “intrinsic enjoyment” is common to IMs exaggerates the significance of pleasure in human motivation and expresses the hedonistic fallacy of confusing consequence for cause. Nothing has been shown scientifically to be common to IMs that differentiates them from drives. An empirically testable theory of 16 basic desires is put forth based on psychometric research and subsequent behavior validation. The desires are largely unrelated to each other and may have different evolutionary histories.
The Analysis on The Rock Gold Metallogenic Condition Around The Jiayin Depression
Based on an analysis of the metallogenic geological environment and the metallogenic rules of the large-, medium-, and small-scale rock gold deposits around the Jiayin Depression, the primary rock gold metallogenic region in Heilongjiang Province, this paper concludes that the ore-controlling factors in this region include the Proterozoic metamorphic rocks (Xindong Group, Heilongjiang Group), the deep fault (F9), the Cretaceous intermediate-acid volcanic rocks, and others. Based on this conclusion, we make predictions about the rock gold deposits around the Jiayin Depression and determine the ore-prospecting direction for rock gold deposits.
The impact of thin idealized media images on body satisfaction: does body appreciation protect women from negative effects?
This article examines whether positive body image can protect women from negative media exposure effects. University women (N=112) were randomly allocated to view advertisements featuring ultra-thin models or control images. Women who reported high levels of body appreciation did not report negative media exposure effects. Furthermore, the protective role of body appreciation was also evident among women known to be vulnerable to media exposure. Women high on thin-ideal internalization and low on body appreciation reported appearance-discrepancies that were more salient and larger when they viewed models compared to the control group. However, women high on thin-ideal internalization and also high on body appreciation rated appearance-discrepancies as less important and no different in size from the control group. The results support the notion that positive body image protects women from negative environmental appearance messages and suggest that promoting positive body image may be an effective intervention strategy.
Responsive neurostimulation for the treatment of seizures that do not respond to medication.
In the last 18 years, 12 new antiseizure medications have been discovered. Although there are more medications, there have been no medicines that are clearly better than the older ones. To be clear, the newer medicines do not seem to be any more effective at stopping seizures than the older ones. Studies have shown that about one-third of people with epilepsy have seizures that do not respond to antiseizure medication. A doctor will call the person’s seizures refractory if they have tried 2 or more antiseizure medications, and their seizures have not stopped. Although the newer medicines do not seem to be more effective than the older ones, generally speaking, they seem to cause fewer side effects. Despite this, many patients have difficulty tolerating the high doses that are needed to control their seizures. Because of this, there has been increasing interest in devices that can help to control seizures. One device, the vagus nerve stimulator (VNS), is already available. It was approved for use by the US Food and Drug Administration (FDA) in 1997 for the treatment of refractory epilepsy. Other devices are being studied now, including the responsive cortical stimulator (also known as the responsive neurostimulator or RNS), as discussed in this Patient Page.
Children with congenital spastic hemiplegia obey Fitts’ Law in a visually guided tapping task
Fitts’ Law is commonly found to apply to motor tasks involving precise aiming movements. Children with cerebral palsy (CP) have severe difficulties in such tasks and it is unknown whether they obey Fitts’ Law despite their motor difficulties. If Fitts’ Law still does apply to these children, this would indicate that this law is extremely robust and that even performance of children with damaged central nervous systems can adhere to it. The integrity of motor control processes in spastic CP is usually tested in complex motor tasks, making it difficult to determine whether poor performance is due to a motor output deficit or to problems related to cognitive processes since both affect movement precision. In the present study a simple task was designed to evaluate Fitts’ Law. Tapping movements were evaluated in 22 children with congenital spastic hemiplegia (CSH) and 22 typically developing children. Targets (2.5 and 5 cm in width) were placed at distances of 10 and 20 cm from each other in order to provide Indices of Difficulty (ID) of 2–4 bits. Using this Fitts’ aiming task, prolonged reaction and movement time (MT) were found in the affected hand under all conditions in children with CSH as compared to controls. As in the control group, MT in children with CSH was related to ID. The intercept ‘a’, corresponding to the time required to realize a tapping movement, was higher in the affected hand of the children in the CSH group. However, the slope b (which reflects the sensitivity of the motor system to a change in difficulty of the task) and the reciprocal of the slope (which represents the cognitive information-processing capacity, expressed in bits/s) were similar in both groups. In conclusion, children with CSH obey Fitts’ Law despite very obvious limitations in fine motor control.
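For reference, the classical Fitts formulation, which is consistent with the 2–4 bit range quoted above, is

\[
MT = a + b \cdot ID, \qquad ID = \log_2\!\left(\frac{2D}{W}\right),
\]

so that D = 10 cm with W = 5 cm gives ID = 2 bits and D = 20 cm with W = 2.5 cm gives ID = 4 bits; the intercept a captures the fixed cost of a tap, the slope b the sensitivity to task difficulty, and 1/b the information-processing rate in bits/s.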
I am Robot: (Deep) Learning to Break Semantic Image CAPTCHAs
Since their inception, captchas have been widely used for preventing fraudsters from performing illicit actions. Nevertheless, economic incentives have resulted in an arms race, where fraudsters develop automated solvers and, in turn, captcha services tweak their design to break the solvers. Recent work, however, presented a generic attack that can be applied to any text-based captcha scheme. Fittingly, Google recently unveiled the latest version of reCaptcha. The goal of their new system is twofold: to minimize the effort for legitimate users, while requiring tasks that are more challenging to computers than text recognition. ReCaptcha is driven by an "advanced risk analysis system" that evaluates requests and selects the difficulty of the captcha that will be returned. Users may be required to click in a checkbox, or solve a challenge by identifying images with similar content. In this paper, we conduct a comprehensive study of reCaptcha, and explore how the risk analysis process is influenced by each aspect of the request. Through extensive experimentation, we identify flaws that allow adversaries to effortlessly influence the risk analysis, bypass restrictions, and deploy large-scale attacks. Subsequently, we design a novel low-cost attack that leverages deep learning technologies for the semantic annotation of images. Our system is extremely effective, automatically solving 70.78% of the image reCaptcha challenges, while requiring only 19 seconds per challenge. We also apply our attack to the Facebook image captcha and achieve an accuracy of 83.5%. Based on our experimental findings, we propose a series of safeguards and modifications that would limit the scalability and accuracy of our attacks. Overall, while our study focuses on reCaptcha, our findings have wide implications; as the semantic information conveyed via images is increasingly within the realm of automated reasoning, the future of captchas relies on the exploration of novel directions.
PHASED ARRAY DESIGN FOR BIOLOGICAL CLUTTER REJECTION: SIMULATION AND EXPERIMENTAL VALIDATION
Radar studies of the atmospheric boundary layer (ABL) have become widespread since the advent of relatively inexpensive and compact profiling radars [Ecklund et al., 1988], termed boundary layer radars (BLR). Arguably, one of the most sophisticated of these type of radar systems is the so-called Turbulent Eddy Profiler (TEP), which was developed at the University of Massachusetts [Mead et al., 1998; Pollard et al., 2000]. The TEP system is a volumetric radar designed for clear-air observations with high temporal and spatial resolution comparable to the grid size used in Large Eddy Simulation (LES) models [Lilly, 1967; Wyngaard et al., 1998]. The multi-receiver design of the TEP radar allows offline digital beamforming to construct volumetric images and is capable of acquiring measurements at altitudes up to 3 km, depending upon atmospheric conditions. Imaging radars, including the TEP radar, help researchers and scientists to enhance their understanding of small-scale structure of the atmosphere. In addition to lower atmospheric measurements, imaging radars are also important for studies of other regions of the atmosphere, such as the mesosphere [Yu et al., 2001; Hysell et al., 2002], stratosphere [Rao et al., 1995], and the ionosphere [Hysell, 1996; Hysell and Woodman, 1997]. For coherent radar imaging (CRI), signals from each of the receiver elements are combined to form a beam pointing in the direction of interest. The coherently combined signals allow spectral moments to be extracted from the beam pointing direction. This technique is known as beamforming. By changing the beamforming weights, signals from arbitrary directions can be extracted. The beamforming weights can also be data adaptive. This allows suppression of strong signals outside the region of interest. With a high spatial resolution radar system such as the TEP, many narrow beams can be formed over the field of view, providing tremendous detail in the imaging area. Pioneering work in spatial in-
The next 50 years of industrial management and engineering
Three interwoven threads of background trace the path that leads to the challenges and questions about IE: Where is IE practiced, what outcomes are expected from the profession, and what techniques form the skills IEs bring to an IE practice. Accordingly, to find answers for the next 50 years of industrial management and engineering, discussions were held with distinguished and authoritative industrial engineers, managers, and educators with international experience in manufacturing, distribution, office, and service work. The purpose of the paper was to provide guidance for young people entering the industrial engineering profession. The primary goal was to identify trends, directions, and likely changes in the field and profession that may be helpful in career planning. A secondary goal was to help current practitioners and educators adapt to future demands. To students, I hope that you will find the paper insightful, stimulating and useful as you plan your education and your future career in industry. To educators, I hope that the reflections, predictions and discussions contained here will help you in developing our next generation of professionals. To practicing managers and engineers, I hope that you also will find value and guidance as you plan for continued personal and professional development.
Missing data in value-added modeling of teacher effects
The increasing availability of longitudinal student achievement data has heightened interest among researchers, educators and policy makers in using these data to evaluate educational inputs, as well as for school and possibly teacher accountability. Researchers have developed elaborate “value-added models” of these longitudinal data to estimate the effects of educational inputs (e.g., teachers or schools) on student achievement while using prior achievement to adjust for nonrandom assignment of students to schools and classes. A challenge to such modeling efforts is the large number of students with incomplete records and the tendency for those students to be lower achieving. These conditions create the potential for results to be sensitive to violations of the assumption that data are missing at random, which is commonly used when estimating model parameters. The current study extends recent value-added modeling approaches for longitudinal student achievement data of Lockwood et al. [J. Educ. Behav. Statist. 32 (2007) 125–150] to allow data to be missing not at random via random effects selection and pattern mixture models, and applies those methods to data from a large urban school district to estimate effects of elementary school mathematics teachers. We find that allowing the data to be missing not at random has little impact on estimated teacher effects. The robustness of estimated teacher effects to the missing data assumptions appears to result from both the relatively small impact of model specification on estimated student effects compared with the large variability in teacher effects and the downweighting of scores from students with incomplete data.
A Review of Productivity Factors and Strategies on Software Development
Since the late seventies, cataloging the factors that influence productivity, as well as identifying actions to improve it, has been a major concern for both academia and the software development industry. Despite numerous studies, software organizations still do not know which factors are the most significant or what to do about them. Several studies present the factors only superficially, others address only related factors, and still others describe only a single factor. Actions to deal with the factors are scattered and frequently have not been mapped. Through a literature review, this paper presents a consolidated view of the main factors that have affected productivity over the years and of the strategies currently used to deal with these factors. This research aims to support the software development industry in selecting strategies to improve productivity by maximizing the positive factors and minimizing or avoiding the impact of the negative ones.
Sentiment Analysis and Opinion Mining: A Survey
"In this age, in this nation, public sentiment is everything. With it, nothing can fail; against it, nothing can succeed. Whoever molds public sentiment goes deeper than he who enacts statutes, or pronounces judicial decisions" (Abraham Lincoln, 1858) [1]. It is apparent from President Lincoln's well-known quote that legislators understood the force of public sentiment long ago. In today's world, the Internet is the main source of information, and an enormous amount of information and opinion online is scattered and unstructured, with no machine to organize it. The demand from the public, as well as from social scientists, to know opinions about specific products and services and about political issues has led to the study of the fields of Opinion Mining and Sentiment Analysis. Opinion Mining and Sentiment Analysis have recently played a significant role for researchers because the analysis of online text is beneficial for market research, political issues, business intelligence, online shopping, and psychological surveys. Sentiment Analysis identifies the polarity of extracted public opinions. This paper presents a survey which covers Opinion Mining, Sentiment Analysis, techniques, tools, and classification.
Antennas for Digital Television Receivers in Mobile Terminals
The incorporation of new services in handheld devices, such as the Digital Video Broadcast-Handheld (DVB-H) operating at the lower ultrahigh-frequency (UHF) band poses a challenge for antenna designers. Wideband small antennas or electrically tunable narrowband small antennas are needed to fulfil the performance requirement. It is well known that below 900 MHz the operation of embedded mobile terminal antennas is based on utilizing the whole structure of the terminal as a radiator. However, even this way reaching the whole required impedance bandwidth of about 46% at about 0.61-GHz center frequency is possible only either with clamshell-type terminals used as thick dipoles by feeding them from the hinge or with “large tablet”-sized terminals. With the popular smartphone-sized terminals with a monoblock structure the available bandwidth with good total efficiency is clearly smaller. We study the options to implement antennas for smartphone-type mobiles for receiving digital TV broadcasts at about 0.47-0.75-GHz frequencies. The mainly studied technology is the nonresonant capacitive coupling element (CCE)-type antennas having one of the smallest achieved volume-to-bandwidth ratios. We show that for a fixed-frequency antenna with a volume of less than a few cubic centimeters the total efficiency will become rather low due to moderate matching level, but the requirement of the DVB-H standard for the realized gain can easily be met. Additionally, we show that by having switching between two bands, one can implement a dual-antenna configuration with small total volume and significant multiple-input-multiple-output (MIMO) gain. Furthermore, we study the very important effect of the user's hands on the antenna performance and find that the effect can range from some increase of the total efficiency due to improved matching to significant losses caused by the hands. Finally, we propose some possible ways ahead in solving this very challenging antenna design problem.
Sterilization by Gamma Irradiation
Sterilization is defined as any process that effectively kills or eliminates almost all microorganisms, such as fungi, bacteria, viruses, and spore forms. There are many different sterilization methods, depending on the purpose of the sterilization and the material to be sterilized. The sterilization method is chosen so that the materials and devices involved are not harmed. The main sterilization methods are: dry heat sterilization, pressurized vapor sterilization, ethylene oxide (EtO) sterilization, formaldehyde sterilization, gas plasma (H2O2) sterilization, peracetic acid sterilization, e-beam sterilization, and gamma sterilization.
Using the Multiple Sclerosis Impact Scale to estimate health state utility values: mapping from the MSIS-29, version 2, to the EQ-5D and the SF-6D.
OBJECTIVES The 29-item Multiple Sclerosis Impact Scale (MSIS-29) is a psychometrically validated patient-reported outcome measure increasingly used in trials of treatments for multiple sclerosis. However, it is non-preference-based and not amenable for use across policy decision-making contexts. Our objective was to statistically map from the MSIS-29, version 2, to the EuroQol five-dimension (EQ-5D) and the six-dimension health state short form (derived from short form 36 health survey) (SF-6D) to estimate algorithms for use in cost-effectiveness analyses. METHODS The relationships between MSIS-29, version 2, and EQ-5D and SF-6D scores were estimated by using data from a cohort of people with multiple sclerosis in South West England (n=672). Six ordinary least squares (OLS), Tobit, and censored least adjusted deviation (CLAD) regression analyses were conducted on estimation samples, including the use of subscale and item scores, squared and interaction terms, and demographics. Algorithms from models with the smallest estimation errors (mean absolute error [MAE], root mean square error [RMSE], normalized RMSE) were then assessed by using separate validation samples. RESULTS Tobit and CLAD. For the EQ-5D, the OLS models including subscale squared terms, and item scores and demographics performed comparably (MAE 0.147, RMSE 0.202 and MAE 0.147, RMSE 0.203, respectively), and estimated scores well up to 3 years post-baseline. Estimation errors for the SF-6D were smaller (OLS model including squared terms: MAE 0.058, RMSE 0.073; OLS model using item scores and demographics: MAE 0.059, RMSE 0.08), and the errors for poorer health states found with the EQ-5D were less pronounced. CONCLUSIONS We have provided algorithms for the estimation of health state utility values, both the EQ-5D and SF-6D, from scores on the MSIS-29, version 2. Further research is now needed to determine how these algorithms perform in practical decision-making contexts, when compared with observed EQ-5D and SF-6D values.
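As a rough illustration of the mapping approach, the sketch below (NumPy) fits an OLS model with squared subscale terms, in the spirit of one of the better-performing specifications; the predictor set, coefficients, and data here are illustrative, not the published algorithm.

```python
# Hedged sketch: OLS mapping from MSIS-29 subscale scores to EQ-5D utilities
# with squared terms. Variables and data are illustrative only.
import numpy as np

def design(phys, psych):
    # intercept, linear and squared physical/psychological subscale terms
    return np.column_stack([np.ones_like(phys), phys, psych, phys**2, psych**2])

rng = np.random.default_rng(0)
phys, psych = rng.uniform(0, 100, 300), rng.uniform(0, 100, 300)     # illustrative scores
eq5d = 0.9 - 0.004 * phys - 0.002 * psych + rng.normal(0, 0.1, 300)  # illustrative utilities

beta, *_ = np.linalg.lstsq(design(phys, psych), eq5d, rcond=None)    # OLS fit
pred = design(phys, psych) @ beta
print("MAE:", np.mean(np.abs(pred - eq5d)))                          # estimation error
```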
Subsea solution for anti-slug control of multiphase risers
A top-side choke valve is usually used as the manipulated variable for anti-slug control of multi-phase risers at offshore oil-fields. With new advances in the subsea technology, it is now possible to move top-side facilities to the sea floor. The two main contributions in this paper are to consider an alternative location for the control valve and to consider how to deal with nonlinearity. This research involved controllability analysis based on a simplified model fitted to experiments, simulations using the OLGA simulator, as well as an experimental study. It was concluded that a control valve close to the riser-base is very suitable for anti-slug control, and its operation range is the same as the top-side valve. However, a subsea choke valve placed at the well-head can not be used for preventing the riser-slugging.
OpenMusic: visual programming environment for music composition, analysis and research
OpenMusic is an open source environment dedicated to music composition. The core of this environment is a full-featured visual programming language based on Common Lisp and CLOS (Common Lisp Object System) that allows the design of processes for the generation or manipulation of musical material. This language can also be used for general-purpose visual programming and other (possibly extra-musical) applications.
An Adaptive User Interface Based On Personalized Learning
1. Observing the interaction between a user and a software application. We might lose data because of limitations in the types of events we can perceive. So, to extract as much useful information about a user’s intentions and needs as possible, we must identify various low-level interface events from available data. 2. Identifying different episodes from the actions we observe in user–computer interaction. We must formulate several important classes from user interface data—for example, keyboard typing, menu selection, and button clicking. We consider each action a basic episode. These observable clues will help us recognize a user’s intention. 3. Recognizing user behavior patterns. To reveal hidden patterns in the streams of events we retrieve, we need a language or algorithms for representing and computing the events into associated patterns. Moreover, the AUI should be able to compose new events from previously modeled events and build or modify the transformation functions for the modeled events. 4. Adaptively helping users according to recognized user plans. When the interface recognizes that the user is about to execute a certain plan, it should offer assistance. The user can configure the interface with his or her preferences on where and how to display the proposed help. 5. Building user profiles that will enable personalized interactions. Profiles store both user-defined preference information and system-detected user behavior patterns. The system updates a profile whenever it detects a change in user behavior patterns.
Most patients with minimal histological residuals of gastric MALT lymphoma after successful eradication of Helicobacter pylori can be managed safely by a watch and wait strategy: experience from a large international series.
BACKGROUND Eradication of Helicobacter pylori is the established initial treatment of stage I MALT (mucosa associated lymphoid tissue) lymphoma. Patients with minimal persisting lymphoma infiltrates after successful eradication of H pylori are considered treatment failures and referred for radiation, chemotherapy, immunotherapy, or surgery. AIM To report a watch and wait strategy in such patients. METHODS 108 patients were selected from a larger series of patients treated at various European institutions. Their mean age was 51.6 years (25 to 82), and they were all diagnosed as having gastric marginal zone B cell lymphoma of MALT type stage I. After successful H pylori eradication and normalisation of the endoscopic findings, lymphoma infiltrates were still present histologically at 12 months (minimal histological residuals). No oncological treatment was given but the patients had regular follow up with endoscopies and multiple biopsies. FINDINGS Based on a follow up of 42.2 months (2-144), 102 patients (94%) had a favourable disease course. Of these, 35 (32%) went into complete remission. In 67 (62%) the minimal histological residuals remained stable and no changes became evident. Local lymphoma progression was seen in four patients (5%), and one patient developed a high grade lymphoma. CONCLUSIONS Most patients with minimal histological residuals of gastric MALT lymphoma after successful eradication of H pylori had a favourable disease course without oncological treatment. A watch and wait strategy with regular endoscopies and biopsies appears to be safe and may become the approach of choice in this situation. Longer follow up is needed to establish this definitively.
Modeling Coverage for Neural Machine Translation
The attention mechanism has advanced state-of-the-art Neural Machine Translation (NMT) by jointly learning to align and translate. It tends to ignore past alignment information, however, which often leads to over-translation and under-translation. To address this problem, we propose coverage-based NMT in this paper. We maintain a coverage vector to keep track of the attention history. The coverage vector is fed to the attention model to help adjust future attention, which lets the NMT system pay more attention to untranslated source words. Experiments show that the proposed approach significantly improves both translation quality and alignment quality over standard attention-based NMT.
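In its simplest (accumulation) form, the coverage mechanism described above can be written as

\[
C_{t,i} = C_{t-1,i} + \alpha_{t,i}, \qquad
e_{t,i} = a\!\left(s_{t-1},\, h_i,\, C_{t-1,i}\right), \qquad
\alpha_{t,i} = \frac{\exp(e_{t,i})}{\sum_{j}\exp(e_{t,j})},
\]

where \(h_i\) is the annotation of source word \(i\), \(s_{t-1}\) the decoder state, and \(\alpha_{t,i}\) the attention weight; the coverage \(C_{t-1,i}\) entering the alignment score \(e_{t,i}\) is what discourages repeated attention to already-translated words. This is a simplified variant of the general idea rather than the paper's exact parameterization.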
A Dynamic Model of pH-Induced Protein G'e Higher Order Structure Changes derived from Mass Spectrometric Analyses.
To obtain insight into pH change-driven molecular dynamics, we studied the higher order structure changes of protein G'e at the molecular and amino acid residue levels in solution by using nanoESI- and IM-mass spectrometry, CD spectroscopy, and protein chemical modification reactions (protein footprinting). We found a dramatic change of the overall tertiary structure of protein G'e when the pH was changed from neutral to acidic, whereas its secondary structure features remained nearly invariable. Limited proteolysis and surface-topology mapping of protein G'e by fast photochemical oxidation of proteins (FPOP) under neutral and acidic conditions reveal areas where higher order conformational changes occur on the amino-acid residue level. Under neutral solution conditions, lower oxidation occurs for residues of the first linker region, whereas greater oxidative modifications occur for amino-acid residues of the IgG-binding domains I and II. We propose a dynamic model of pH-induced structural changes in which protein G'e at neutral pH adopts an overall tight conformation with all four domains packed in a firm assembly, whereas at acidic pH, the three IgG-binding domains form an elongated alignment, and the N-terminal, His-tag-carrying domain unfolds. At the same time the individual IgG-binding domains themselves seem to adopt a more compacted fold. As the secondary structure features are nearly unchanged at either pH, interchange between both conformations is highly reversible, explaining the high reconditioning power of protein G'e-based affinity chromatography columns.
Use of Isokinetic Muscle Strength as a Measure of Severity of Rheumatoid Arthritis: A Comparison of this Assessment Method for RA with other Assessment Methods for the Disease
The aim of this study was to examine the association between isokinetic muscle strength (IMS) and other clinical indicators of disability and disease activity in patients with rheumatoid arthritis (RA). A cohort of 36 RA patients was followed over a 1-year period with five measurements of disease activity at regular intervals during this time. IMS was measured at seven angular velocities in both knees, on five separate occasions. The measurement was expressed by the level of the fitted line of the seven peak torque values – IMS30. The association between IMS30 and clinical indicators was then assessed. As an indicator of disability, the score from the Stanford Health Assessment Questionnaire (HAQ) was used. As indicators of disease activity, morning stiffness, an index of swelling and pain in the joint, erythrocyte sedimentation rate (ESR) and haemoglobin (Hb) were chosen. Larsen’s X-ray score was used as an indicator of bone destruction due to longer-lasting disease activity. IMS was significantly associated with the HAQ score, but not with indicators of disease activity or radiological findings. IMS was significantly associated with changes in indicators of disease activity, but not with the changes in the HAQ score, or in the X-ray score. IMS showed the strongest association with changes in the degree of arthritis of the knee. In conclusion, IMS was associated with the HAQ score and can therefore be used when measuring outcome in a specific group of RA patients. Changes in IMS were associated with indicators of changes in disease activity, and are therefore usable as a measure of patient outcome. Of particular importance is that IMS decreased if a patient developed active arthritis in the knee, and normalised again when the inflammation decreased.
A body-shadowing model for off-body and body-to-body communications
This paper presents a simple model for body-shadowing in off-body and body-to-body channels. The model is based on a body shadowing pattern associated with the on-body antenna, represented by a cosine function whose amplitude parameter is calculated from measurements. This parameter, i.e., the maximum body-shadowing loss, is found to be linearly dependent on distance. The model was evaluated against a set of off-body channel measurements at 2.45 GHz in an indoor office environment, showing a good fit. The coefficient of determination obtained for the linear model of the maximum body-shadowing loss is greater than 0.6 in all considered scenarios, being higher than 0.8 for the ones with a static user.
Multi-Scale Recurrent Tracking via Pyramid Recurrent Network and Optical Flow
The target in a tracking sequence can be considered as a set of spatiotemporal data with various locations in different frames, and the problem of how to extract spatiotemporal information about the target effectively has drawn increasing interest recently. In this paper, we exploit spatiotemporal information through different-scale context aggregation using the proposed pyramid multi-directional recurrent network (PRNet) together with FlowNet. The PRNet is proposed to memorize the multi-scale spatiotemporal information of the target's self-structure. FlowNet is employed to capture motion information for discriminating targets from the background. The two networks form the FPRNet and are trained jointly to learn more useful spatiotemporal representations for visual tracking. The proposed tracker is evaluated on the OTB50, OTB100, and TC128 benchmarks, and the experimental results show that the proposed FPRNet can effectively address different challenging cases and achieves better performance than state-of-the-art trackers.
TeleSpiro: A low-cost mobile spirometer for resource-limited settings
Chronic obstructive pulmonary disease (COPD), a disabling combination of emphysema and chronic bronchitis, relies on spirometric lung function measurements for clinical diagnosis and treatment. Because spirometers are unavailable in most of the developing world, this project developed a low cost point of care spirometer prototype for the mobile phone called the “TeleSpiro.” The key contributions of this work are the design of a novel repeat-use, sterilisable, low cost, phone-powered prototype meeting developing world user requirements. A differential pressure sensor, dual humidity/pressure sensor, microcontroller and USB hardware were mounted on a printed circuit board for measurement of air flow in a custom machine-lathed respiratory air flow tube. The embedded circuit electronics were programmed to transmit data to and receive power directly from either a computer or Android smartphone without the use of batteries. Software was written to filter and extract respiratory cycles from the digitised data. Differential pressure signals from Telespiro showed robust, reproducible responses to the delivery of physiologic lung volumes. The designed device satisfied the stringent design criteria of resource-limited settings and makes substantial inroads in providing evidence-based chronic respiratory disease management.
Scheduled maintenance therapy with infliximab improves the prognosis of Crohn's disease: a single center prospective cohort study in Japan.
The main goal of Crohn's disease (CD) treatment at present is to induce and maintain remission for as long as possible, and several approaches have been used as induction and maintenance therapies. There are no reports that have compared the effects on mid- and long-term prognosis among the induction and maintenance therapies, especially between infliximab, a chimeric antibody to tumor necrosis factor-alpha, and nutritional therapies. A total of 262 CD patients with induced remission were enrolled in the cohort study. Patients who failed to achieve remission, and patients who were lost to follow-up within 12 months were excluded. Induction therapies for CD included total elemental enteral nutrition, total parenteral nutrition, infliximab, prednisolone, and surgical resection. Maintenance therapies included home elemental diet, 5-aminosalicylates, immunomodulators, and scheduled infliximab therapy. We evaluated the possible predictive factors of relapse and surgical recurrence including the clinical backgrounds of the patients and medical therapies, using the Cox multivariate hazard analysis. The main factors that strongly affected the first relapse were scheduled infliximab therapy (hazard ratio (HR) = 0.24, p < 0.0001), surgical induction (HR = 0.19, p < 0.0001) and high frequency of previous relapse (HR = 2.56, p = 0.002). Penetrating (HR = 3.33, p = 0.009) and stricturing (HR = 6.60, p < 0.0001) disease behavior were main risk factors of surgical recurrence. Scheduled infliximab therapy is the most effective maintenance therapy in a real clinical setting with respect to the mid- and long-term prognosis.
Storage Solutions for Big Data Systems: A Qualitative Study and Comparison
Big data systems development is full of challenges, in view of the variety of application areas and domains that this technology promises to serve. Typically, the fundamental design decisions involved in big data systems design include choosing appropriate storage and computing infrastructures. In this age of heterogeneous systems that integrate different technologies for an optimized solution to a specific real-world problem, big data systems are no exception. As far as the storage aspect of any big data system is concerned, the primary facet is the storage infrastructure, and NoSQL seems to be the right technology to fulfill its requirements. However, every big data application has its own data characteristics, and thus the corresponding data fits a different data model. This paper presents a feature and use-case analysis and comparison of the four main data models, namely document-oriented, key-value, graph, and wide-column. Moreover, a feature analysis of 80 NoSQL solutions is provided, elaborating on the criteria and points that a developer must consider while making a choice. Typically, big data storage needs to communicate with the execution engine and other processing and visualization technologies to create a comprehensive solution. This brings the second facet of big data storage, big data file formats, into the picture. The second half of the paper compares the advantages, shortcomings, and possible use cases of the available big data file formats for Hadoop, which is the foundation for most big data computing technologies. Decentralized storage and blockchain are seen as the next generation of big data storage, and their challenges and future prospects are also discussed.
Mining Techniques and its Applications in Banking Sector
Data mining is becoming a strategically important area for many business organizations, including the banking sector. It is a process of analyzing data from various perspectives and summarizing it into valuable information. Data mining helps banks look for hidden patterns and discover unknown relationships in the data. Today, customers have many options with regard to where they choose to do their business. Early data analysis techniques were oriented toward extracting quantitative and statistical data characteristics. These techniques facilitate useful data interpretations for the banking sector and help avoid customer attrition. Customer retention is the most important factor to be analyzed in today's competitive business environment. Fraud is also a significant problem in the banking sector. Detecting and preventing fraud is difficult, because fraudsters develop new schemes all the time, and the schemes grow more and more sophisticated to elude easy detection. This paper analyzes data mining techniques and their applications in the banking sector, such as fraud prevention and detection, customer retention, marketing, and risk management. Keywords— Banking Sector, Customer Retention, Credit Approval, Data Mining, Fraud Detection.
Transforming Dependency Structures to Logical Forms for Semantic Parsing
The strongly typed syntax of grammar formalisms such as CCG, TAG, LFG and HPSG offers a synchronous framework for deriving syntactic structures and semantic logical forms. In contrast—partly due to the lack of a strong type system—dependency structures are easy to annotate and have become a widely used form of syntactic analysis for many languages. However, the lack of a type system makes a formal mechanism for deriving logical forms from dependency structures challenging. We address this by introducing a robust system based on the lambda calculus for deriving neo-Davidsonian logical forms from dependency trees. These logical forms are then used for semantic parsing of natural language to Freebase. Experiments on the Free917 and Web-Questions datasets show that our representation is superior to the original dependency trees and that it outperforms a CCG-based representation on this task. Compared to prior work, we obtain the strongest result to date on Free917 and competitive results on WebQuestions.
Loss on ignition as a method for estimating organic and carbonate content in sediments : reproducibility and comparability of results
Five test runs were performed to assess possible bias when performing the loss on ignition (LOI) method to estimate organic matter and carbonate content of lake sediments. An accurate and stable weight loss was achieved after 2 h of burning pure CaCO 3 at 950 °C, whereas LOI of pure graphite at 530 °C showed a direct relation to sample size and exposure time, with only 40–70% of the possible weight loss reached after 2 h of exposure and smaller samples losing weight faster than larger ones. Experiments with a standardised lake sediment revealed a strong initial weight loss at 550 °C, but samples continued to lose weight at a slow rate at exposure of up to 64 h, which was likely the effect of loss of volatile salts, structural water of clay minerals or metal oxides, or of inorganic carbon after the initial burning of organic matter. A further test-run revealed that at 550 °C samples in the centre of the furnace lost more weight than marginal samples. At 950 °C this pattern was still apparent but the differences became negligible. Again, LOI was dependent on sample size. An analytical LOI quality control experiment including ten different laboratories was carried out using each laboratory’s own LOI procedure as well as a standardised LOI procedure to analyse three different sediments. The range of LOI values between laboratories measured at 550 °C was generally larger when each laboratory used its own method than when using the standard method. This was similar for 950 °C, although the range of values tended to be smaller. The within-laboratory range of LOI measurements for a given sediment was generally small. Comparisons of the results of the individual and the standardised method suggest that there is a laboratory-specific pattern in the results, probably due to differences in laboratory equipment and/or handling that could not be eliminated by standardising the LOI procedure. Factors such as sample size, exposure time, position of samples in the furnace and the laboratory measuring affected LOI results, with LOI at 550 °C being more susceptible to these factors than LOI at 950 °C. We, therefore, recommend analysts to be consistent in the LOI method used in relation to the ignition temperatures, exposure times, and the sample size and to include information on these three parameters when referring to the method.
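For reference, the weight-loss percentages this method reports are typically computed as

\[
\mathrm{LOI}_{550} = \frac{DW_{105} - DW_{550}}{DW_{105}} \times 100, \qquad
\mathrm{LOI}_{950} = \frac{DW_{550} - DW_{950}}{DW_{105}} \times 100,
\]

where \(DW_{105}\), \(DW_{550}\), and \(DW_{950}\) are the dry sample weights after heating at 105 °C, 550 °C, and 950 °C, respectively; \(\mathrm{LOI}_{550}\) is taken as a proxy for organic matter and \(\mathrm{LOI}_{950}\) for carbonate content. These are the standard working equations for the method, assumed here rather than quoted from the study.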
A Generic Model of Memristors With Parasitic Components
In this paper, a generic model of memristive systems, which can emulate the behavior of real memristive devices, is proposed. Non-ideal pinched hysteresis loops are sometimes observed in real memristive devices. For example, the hysteresis loops may deviate from the origin over a broad range of amplitude A and frequency f of the input signal. This deviation from the ideal case is often caused by parasitic circuit elements exhibited by real memristive devices. In this paper, we propose a generic memristive circuit model by adding four parasitic circuit elements, namely a small capacitance, a small inductance, a small DC current source, and a small DC voltage source, to the memristive device. The adequacy of this model is verified experimentally and numerically with two thermistor-based (NTC and PTC) memristors.
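To fix ideas, a memristive core together with one plausible arrangement of the four parasitics named above (the exact topology used in the paper is not specified here) can be written as

\[
v_M = R(x, i_M)\, i_M, \qquad \frac{dx}{dt} = f(x, i_M),
\]
\[
i = i_M + C_p \frac{dv_M}{dt} + I_p, \qquad v = v_M + L_p \frac{di}{dt} + V_p,
\]

where \(R(x, i_M)\) is the memristance, \(x\) the internal state, and \(C_p\), \(L_p\), \(I_p\), \(V_p\) the small parasitic capacitance, inductance, DC current source, and DC voltage source; these added terms are what can lift the pinched hysteresis loop off the origin over a range of input amplitudes and frequencies.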
Rationalizing Sentiment Analysis in Tensorflow
Sentiment analysis using deep learning models is a leading subject of interest in Natural Language Processing that is as powerful as it is opaque. Current state-of-the-art models can produce accurate predictions, but they provide little insight as to why the model predicted a given sentiment. Businesses relying on these models might be less likely to act on insight given the lack of evidence for the predictions, and would be more likely to trust such predictions if a brief explanation of the outcome were provided. Recent work by Lei et al. [4] has set forth a framework for multi-aspect sentiment analysis that concurrently provides a text rationale with each prediction. This framework uses a two-part approach, which summarizes a review and predicts a sentiment. In this paper, we explore the performance of this framework, seeking to recreate and improve upon it in TensorFlow.
Abstractions from tests
We present a framework for leveraging dynamic analysis to find good abstractions for static analysis. A static analysis in our framework is parametrised. Our main insight is to directly and efficiently compute from a concrete trace, a necessary condition on the parameter configurations to prove a given query, and thereby prune the space of parameter configurations that the static analysis must consider. We provide constructive algorithms for two instance analyses in our framework: a flow- and context-sensitive thread-escape analysis and a flow- and context-insensitive points-to analysis. We show the efficacy of these analyses, and our approach, on six Java programs comprising two million bytecodes: the thread-escape analysis resolves 80% of queries on average, disproving 28% and proving 52%; the points-to analysis resolves 99% of queries on average, disproving 29% and proving 70%.
Weakly- and Semi-Supervised Learning of a DCNN for Semantic Image Segmentation
Deep convolutional neural networks (DCNNs) trained on a large number of images with strong pixel-level annotations have recently significantly pushed the state of the art in semantic image segmentation. We study the more challenging problem of learning DCNNs for semantic image segmentation from either (1) weakly annotated training data such as bounding boxes or image-level labels or (2) a combination of few strongly labeled and many weakly labeled images, sourced from one or multiple datasets. We develop Expectation-Maximization (EM) methods for semantic image segmentation model training under these weakly supervised and semi-supervised settings. Extensive experimental evaluation shows that the proposed techniques can learn models delivering competitive results on the challenging PASCAL VOC 2012 image segmentation benchmark, while requiring significantly less annotation effort. We share source code implementing the proposed system at https://bitbucket.org/deeplab/deeplab-public.
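As a rough illustration of how image-level labels can drive such EM training, the sketch below (NumPy) shows one plausible E-step: pixel labels are estimated from the current DCNN scores, biased toward the classes the image-level annotation marks as present; these estimated labels then serve as targets in the M-step. The bias value and array shapes are illustrative, not the paper's settings.

```python
# Hedged sketch of an E-step for EM training from image-level labels.
import numpy as np

def estimate_pixel_labels(scores, present_classes, bias=2.0):
    # scores: (H, W, C) per-pixel class scores from the current DCNN
    # present_classes: class indices in the image-level annotation (background included)
    adjust = np.full(scores.shape[-1], -np.inf)
    adjust[list(present_classes)] = bias       # boost present classes, forbid absent ones
    return (scores + adjust).argmax(axis=-1)   # latent pixel labels, used as M-step targets

labels = estimate_pixel_labels(np.random.randn(4, 4, 21), present_classes={0, 12})
print(labels)
```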
Detection and Prevention of SQL Injection Attack : A Survey
SQL (Structured Query Language) injection is one of the main threats to applications that are connected to a database, whether web-based, mobile, or even desktop applications. By exploiting SQL injection, an attacker can gain full access to the application or database and remove or change significant data. Applications that do not properly validate the user's input are vulnerable to SQL injection. SQL Injection Attacks (SQLIA) occur when an attacker is able to insert a series of malicious SQL statements into a query by manipulating user input data that is then executed by the back-end database. Using this type of threat, applications can be hacked easily and their confidential data stolen by the attacker. In this paper, we present classical and modern types of SQLIA and survey the different existing techniques and tools that are used to detect or prevent these attacks. Keywords— SQL injection, Database security, PDO, Web application, SQLite.
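For illustration, the contrast between an injectable, string-built query and a parameterized one looks like the following sketch (Python's sqlite3; the table, column, and input values are hypothetical).

```python
# Vulnerable string concatenation versus a parameterized query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

name = "' OR '1'='1"          # attacker-controlled input

# Vulnerable: the input is concatenated into the query, so the injected OR clause
# matches every row regardless of the stored name.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + name + "'").fetchall()

# Safe: the placeholder makes the driver treat the input purely as data.
safe_rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (name,)).fetchall()

print(len(rows), len(safe_rows))   # 1 vs 0
```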
GENDER ISSUES IN YOUNG CHILDREN'S LITERATURE
In recent decades, extensive studies from diverse disciplines have focused on children's developmental awareness of different gender roles and the relationships between genders. Among these studies, researchers agree that children's picture books have an increasingly significant place in children's development because these books are a widely available cultural resource, offering young children a multitude of opportunities to gain information, become familiar with the printed pictures, be entertained, and experience perspectives other than their own. In such books, males are habitually described as active and domineering, while females rarely reveal their identities and very frequently are represented as meek and mild. This valuable venue for children's gender development thus unfortunately reflects engrained societal attitudes and biases in the available choices and expectations assigned to different genders. This discriminatory portrayal in many children's picture books also runs the risk of leading children toward a misrepresented and misguided realization of their true potential in their expanding world.
PCG biometric identification system based on feature level fusion using canonical correlation analysis
In this paper, a new technique for the human identification task based on heart sound signals is proposed. It utilizes a feature-level fusion technique based on canonical correlation analysis. For this purpose, a robust pre-processing scheme based on wavelet analysis of the heart sounds is introduced. Then, three feature vectors are extracted from the cepstral coefficients of different frequency-scale representations of the heart sound, namely the mel, bark, and linear scales. Among the investigated feature extraction methods, experimental results show that the mel scale is the best, with a 94.4% correct identification rate. Using a hybrid technique combining MFCC and DWT, a new feature vector is extracted, improving the system's performance to 95.12%. Finally, canonical correlation analysis is applied for feature fusion, which improves the performance of the proposed system to 99.5%. The experimental results show significant improvements in the performance of the proposed system over methods adopting single feature extraction.
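A minimal sketch of the feature-level fusion step, assuming scikit-learn's CCA implementation; the feature dimensions and random data are placeholders for the cepstral feature vectors (e.g. mel- and bark-scale) that the paper extracts from the same heart-sound recordings.

```python
# Canonical correlation analysis projects two feature views into maximally
# correlated subspaces; the projected views are then fused for classification.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
view_mel = rng.normal(size=(200, 13))    # e.g. 13 mel-scale cepstral coefficients
view_bark = rng.normal(size=(200, 13))   # e.g. 13 bark-scale cepstral coefficients

cca = CCA(n_components=8)
z_mel, z_bark = cca.fit_transform(view_mel, view_bark)

# One common fusion rule: concatenate the projected views and feed the result
# to the identification classifier.
fused = np.concatenate([z_mel, z_bark], axis=1)
print(fused.shape)   # (200, 16)
```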
Effects of a music therapy group intervention on enhancing social skills in children with autism.
BACKGROUND Research indicates that music therapy can improve social behaviors and joint attention in children with Autism Spectrum Disorder (ASD); however, more research on the use of music therapy interventions for social skills is needed to determine the impact of group music therapy. OBJECTIVE To examine the effects of a music therapy group intervention on eye gaze, joint attention, and communication in children with ASD. METHOD Seventeen children, ages 6 to 9, with a diagnosis of ASD were randomly assigned to the music therapy group (MTG) or the no-music social skills group (SSG). Children participated in ten 50-minute group sessions over a period of 5 weeks. All group sessions were designed to target social skills. The Social Responsiveness Scale (SRS), the Autism Treatment Evaluation Checklist (ATEC), and video analysis of sessions were used to evaluate changes in social behavior. RESULTS There were significant between-group differences for joint attention with peers and eye gaze towards persons, with participants in the MTG demonstrating greater gains. There were no significant between-group differences for initiation of communication, response to communication, or social withdrawal/behaviors. There was a significant interaction between time and group for SRS scores, with improvements for the MTG but not the SSG. Scores on the ATEC did not differ over time between the MTG and SSG. CONCLUSIONS The results of this study support further research on the use of music therapy group interventions for social skills in children with ASD. Statistical results demonstrate initial support for the use of music therapy social groups to develop joint attention.
Traffic Priority and Load Adaptive MAC Protocol for QoS Provisioning in Body Sensor Networks
Body sensor networks (BSNs) carry heterogeneous traffic types having diverse QoS requirements, such as delay, reliability and throughput. In this paper, we design a priority-based, traffic-load-adaptive medium access control (MAC) protocol for BSNs, namely PLA-MAC, which addresses the aforementioned requirements and maintains efficiency in power consumption. In PLA-MAC, we classify sensed data packets according to their QoS requirements and calculate their priorities accordingly. The transmission schedules of the packets are determined based on their priorities. Also, the superframe structure of the proposed protocol varies depending on the amount of traffic load and thereby ensures minimal power consumption. Our performance evaluation shows that PLA-MAC achieves significant improvements over the state-of-the-art protocols.
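The abstract does not give the priority formula, so the following sketch only illustrates the general idea of ranking packets by QoS class and urgency before scheduling; the traffic classes, weights, and scoring rule are invented for the example and are not the PLA-MAC computation.

```python
# Hypothetical priority scheduling for heterogeneous BSN traffic:
# higher-weight classes with tighter deadlines are served first.
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Packet:
    priority: float
    payload: str = field(compare=False)

def priority(traffic_class, deadline_ms):
    class_weight = {"emergency": 3.0, "delay_sensitive": 2.0, "reliability": 1.5, "ordinary": 1.0}
    return -(class_weight[traffic_class] / max(deadline_ms, 1))  # smaller = served first

queue = []
heapq.heappush(queue, Packet(priority("ordinary", 500), "temperature sample"))
heapq.heappush(queue, Packet(priority("emergency", 50), "fall detected"))
heapq.heappush(queue, Packet(priority("delay_sensitive", 100), "ECG frame"))

while queue:
    print(heapq.heappop(queue).payload)   # emergency traffic drains first
```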
The Limits of Institutional Design in Oil Sector Governance: Exporting the
Norway has made a point of administering its petroleum resources using three distinct government bodies: a national oil company (NOC) engaged in commercial hydrocarbon operations; a government ministry to help set policy; and a regulatory body to provide oversight and technical expertise. In Norway’s case, this institutional design has provided useful checks and balances, helped minimize conflicts of interest, and allowed the NOC, Statoil, to focus on commercial activities while other government agencies regulate oil operators including Statoil itself. Norway’s relative success in managing its hydrocarbon resources has prompted development institutions to consider whether this “Norwegian Model” of separated government functions should be recommended to other oil-producing countries, particularly those whose oil sectors have underperformed. Seeking insight into this question, we study eight countries with different political and institutional characteristics, some of which have attempted to separate functions in oil in the manner of Norway and some of which have not. We conclude that while the Norwegian Model may be a “best practice” of sorts, it is not the best prescription for every ailing oil sector. The separation of functions approach is most useful and feasible in cases where political competition exists and institutional capacity is relatively strong. Unchallenged leaders, on the other hand, are often able to adequately discharge commercial and policy/regulatory functions in the oil sector using the same entity, although this approach may not be robust against political changes (nor do we address in this paper any possible development or human welfare implications of this arrangement). When technical and regulatory talent is particularly lacking in a country, better outcomes may result from consolidating commercial, policy, and regulatory functions in a single body until institutional capacity has further developed. Countries like Nigeria with vibrant political competition but limited institutional capacity pose the most significant challenge for oil sector reform: unitary control over the sector is impossible but separation of functions is often impossible to implement. In such cases reformers are wise to focus on incremental but sustainable improvements in technical and institutional capacity.
Development of a cost-effective ECG monitor for cardiac arrhythmia detection using heart rate variability
Ischemic Heart Disease (IHD) and stroke are statistically the leading causes of death worldwide. Both diseases involve various types of cardiac arrhythmias, e.g. premature ventricular contractions (PVCs), ventricular and supra-ventricular tachycardia, and atrial fibrillation. For monitoring and detecting such irregular heart rhythms accurately, we are developing a very cost-effective ECG monitor, implemented on an 8-bit MCU with an efficient QRS detector using a steep-slope algorithm and an arrhythmia detection algorithm based on a simple heart rate variability (HRV) parameter. This work presents the results of evaluating the real-time steep-slope algorithm on the MIT-BIH Arrhythmia Database: the algorithm achieves a sensitivity of 99.72% and a positive predictivity of 99.19%. We then show preliminary results of arrhythmia detection using various types of normal and abnormal ECGs from an ECG simulator, in which 18 of 20 ECG test signals were correctly detected.
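As an illustration of the kind of simple HRV-based check described, the sketch below computes RMSSD over successive RR intervals and flags an irregular rhythm; RMSSD and the decision threshold are assumptions, since the paper does not specify which HRV parameter it uses.

```python
# RMSSD (root mean square of successive RR-interval differences) as a simple
# irregularity measure over RR intervals (ms) produced by the QRS detector.
import numpy as np

def rmssd(rr_intervals_ms):
    diffs = np.diff(np.asarray(rr_intervals_ms, dtype=float))
    return float(np.sqrt(np.mean(diffs ** 2)))

regular = [800, 810, 795, 805, 800, 798]      # steady sinus rhythm
irregular = [800, 450, 900, 500, 950, 480]    # e.g. during atrial fibrillation

for name, rr in [("regular", regular), ("irregular", irregular)]:
    flag = "arrhythmia suspected" if rmssd(rr) > 100 else "normal"
    print(name, round(rmssd(rr), 1), flag)
```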
Multi-Acupuncture Point Injections and Their Anatomical Study in Relation to Neck and Shoulder Pain Syndrome (So-Called Katakori) in Japan
Katakori is a symptom name that is unique to Japan, and refers to myofascial pain syndrome-like clinical signs in the shoulder girdle. Various methods of pain relief for katakori have been reported, but in the present study, we examined the clinical effects of multi-acupuncture point injections (MAPI) in the acupuncture points with which we empirically achieved an effect, as well as the anatomical sites affected by liquid medicine. The subjects were idiopathic katakori patients (n = 9), and three cadavers for anatomical investigation. BL-10, GB-21, LI-16, SI-14, and BL-38 as the WHO notation were selected as the acupuncture point. Injections of 1 mL of 1% w/v mepivacaine were introduced at the same time into each of these points in the patients. Assessment items were the Pain Relief Score and the therapeutic effect period. Dissections were centered at the puncture sites of cadavers. India ink was similarly injected into each point, and each site that was darkly-stained with India ink was evaluated. Katakori pain in the present study was significantly reduced by MAPI. Regardless of the presence or absence of trigger points, pain was significantly reduced in these cases. Dark staining with India ink at each of the points in the anatomical analysis was as follows: BL-10: over the rectus capitis posterior minor muscle and rectus capitis posterior major muscle fascia; GB-21: over the supraspinatus muscle fascia; LI-16: over the supraspinatus muscle fascia; SI-14: over the rhomboid muscle fascia; and BL-38: over the rhomboid muscle fascia. The anatomical study suggested that the drug effect was exerted on the muscles above and below the muscle fascia, as well as the peripheral nerves because the points of action in acupuncture were darkly-stained in the spaces between the muscle and the muscle fascia.
Glucometer use and glycemic control among Hispanic patients with diabetes in southern Florida.
BACKGROUND Self-monitoring of blood glucose (SMBG) has been deemed a critical component of diabetes care in the United States. To be effective, patients must have some diabetes knowledge, glucometer proficiency, and an ability to take appropriate actions when certain readings are obtained. However, most patients take no action in response to out-of-range glucometer readings, and in many populations, SMBG practices are not associated with improved glycemic control. Thus, SMBG utilization is being reconsidered in other countries. Nonetheless, SMBG behaviors are increasingly recommended in the United States, where the Hispanic population represents the fastest-growing minority group and is disproportionately affected by suboptimal diabetes outcomes. Because a growing number of interventions aim to reduce diabetes disparities by improving glycemic control among minorities, it is essential to determine whether efforts should focus on SMBG practices. We present data on SMBG behaviors and glycemic control among participants from the Miami Healthy Heart Initiative (MHHI), a National Institutes of Health/National Heart, Lung, and Blood Institute-sponsored trial assessing a community health worker (CHW) intervention among Hispanic patients with poorly controlled diabetes. OBJECTIVE This study examined the effects of a CHW intervention on SMBG practices, glycosylated hemoglobin (HbA1c), and knowledge of appropriate responses to glucometer readings among Hispanic patients with diabetes. METHODS This study was an ancillary investigation within MHHI, a randomized, controlled trial in 300 Hispanic patients. Participants were intervention-group members who received 12 months of CHW support. Assessments were administered at baseline and poststudy to determine potential barriers to optimal health. Items from validated instruments were used to determine knowledge of appropriate responses to different glucose readings. These data were linked to HbA1c values. Means and frequencies were used to describe population characteristics and glucometer proficiency. Paired-sample t tests examined potential differences in HbA1c outcomes and SMBG practices. Qualitative data were collected from the CHWs who worked with study participants. RESULTS Our population was diverse, representing several countries. Mean HbA1c improved significantly, from 10% to 8.8% (P ≤ 0.001). SMBG practices did not change. At baseline, 96% of patients reported owning a glucometer and 94% reported knowing how to use it. However, quantitative assessments and qualitative data suggested that participants had suboptimal knowledge regarding actions that could cause an out-of-range reading or how to respond to certain readings. CONCLUSIONS SMBG behaviors were not associated with glycemic control in our sample. We conclude that a CHW intervention may improve glycemic control without improving SMBG practices. Future interventions may reconsider whether efforts should be directed toward improving SMBG behaviors.
Numerical solution of two-dimensional elliptic PDEs with nonlocal boundary conditions
MetaAnchor: Learning to Detect Objects with Customized Anchors
We propose a novel and flexible anchor mechanism named MetaAnchor for object detection frameworks. Unlike many previous detectors, which model anchors in a predefined manner, in MetaAnchor anchor functions can be dynamically generated from arbitrary customized prior boxes. Taking advantage of weight prediction, MetaAnchor is able to work with most anchor-based object detection systems such as RetinaNet. Compared with the predefined anchor scheme, we empirically find that MetaAnchor is more robust to anchor settings and bounding box distributions; in addition, it also shows potential on transfer tasks. Our experiments on the COCO detection task show that MetaAnchor consistently outperforms its counterparts in various scenarios.
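A schematic of the weight-prediction idea, assuming PyTorch: a small generator network maps a customized prior box (width, height) to the weights of a detection head that scores that anchor at every feature location. The two-layer generator and all dimensions are assumptions for the sketch, not the paper's architecture.

```python
# Generate the parameters of an anchor's scoring function from its prior box,
# so arbitrary prior boxes can be plugged in at inference time.
import torch
import torch.nn as nn

class AnchorFunctionGenerator(nn.Module):
    def __init__(self, feat_dim=256, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, hidden), nn.ReLU(),
                                 nn.Linear(hidden, feat_dim + 1))  # weights + bias

    def forward(self, prior_box, features):
        # prior_box: (2,) = (width, height); features: (N, feat_dim) per-location features
        params = self.net(prior_box)
        w, b = params[:-1], params[-1]
        return features @ w + b        # objectness score of this anchor at each location

gen = AnchorFunctionGenerator()
features = torch.randn(100, 256)
for box in [torch.tensor([32.0, 32.0]), torch.tensor([128.0, 64.0])]:
    print(gen(box, features).shape)    # scores generated for an arbitrary prior box
```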
Effects of activity modification on the patients with osteoarthritis of the knee.
A prospective randomized clinical trial was conducted on 162 patients with osteoarthritis of the knee. The patients were divided into two groups: Group A and Group B. Group A was treated with shortwave diathermy, exercise, naproxen, and activity modification, and Group B was treated with shortwave diathermy, exercise, and naproxen. Greater improvement was found in Group A than in Group B after the 4th week (95% CI: -2.59 to 6.56). The improvement in Group A continued to increase relative to Group B, and a highly significant difference in favour of Group A was found after the 6th week (95% CI: -3.45 to -0.70). This study suggests that activity modification plays an important role in the treatment of patients with osteoarthritis of the knee.
Watch Global, Cache Local: YouTube Network Traffic at a Campus Network - Measurements and Implications
User Generated Content has become very popular since the birth of web services such as YouTube, which allow user-produced media content to be distributed in an easy manner. YouTube-like services differ from traditional VoD services because the service provider has only limited control over the creation of new content. We analyze how content distribution in YouTube is realized and then conduct a measurement study of YouTube traffic in a large university campus network. The analysis of the traffic shows that: (1) no strong correlation is observed between global and local popularity; (2) neither time scale nor user population has an impact on the local popularity distribution; and (3) video clips of local interest have a high local popularity. Using our measurement data to drive trace-driven simulations, we also demonstrate the implications of alternative distribution infrastructures on the performance of a YouTube-like VoD service. The results of these simulations show that client-based local caching, P2P-based distribution, and proxy caching can reduce network traffic significantly and allow faster access to video clips.
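As a toy version of the trace-driven methodology, the following sketch replays a request trace through a fixed-size LRU cache and reports the hit rate; the trace, cache size, and replacement policy are illustrative and far simpler than the caching schemes evaluated in the paper.

```python
# Replay a sequence of video-clip requests through an LRU cache and measure
# the fraction of requests served locally.
from collections import OrderedDict

def hit_rate(trace, cache_size):
    cache, hits = OrderedDict(), 0
    for clip in trace:
        if clip in cache:
            hits += 1
            cache.move_to_end(clip)           # refresh recency
        else:
            cache[clip] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)     # evict least recently used
    return hits / len(trace)

trace = ["a", "b", "a", "c", "a", "b", "d", "a", "b", "c"]
print(hit_rate(trace, cache_size=2))
```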
Clinical outcomes from the use of Medication Report when elderly patients are discharged from hospital
Objective The objective of this study was to investigate whether a Medication Report can also reduce the number of patients with clinical outcomes due to medication errors. Method A prospective intervention study with retrospective controls on patients at three departments at Lund University Hospital, Sweden, who were transferred to primary care. The intervention group, in which patients received a Medication Report at discharge, was compared with a control group of patients of the same age who were not given a Medication Report when discharged from the same ward one year earlier. For patients with at least one medication error, all contacts with hospital or primary care within 3 months after discharge were identified, and for each contact it was evaluated whether it was caused by the medication error. We also compared medication errors evaluated as carrying high or moderate clinical risk with medication errors without clinical risk. Main outcome measures Need for medical care in hospital or primary care within three months after discharge from hospital; medical care comprised readmission to hospital as well as visits by the study population to primary and out-patient secondary health care. Results The use of a Medication Report reduced the need for medical care due to medication errors: 11 of 248 patients (4.4%) with a Medication Report needed medical care because of medication errors, compared with 16 of 179 (8.9%) patients without a Medication Report (p = 0.049). The use of a Medication Report also significantly reduced the risk of any consequences due to medication errors (p = 0.0052); these consequences included probable and possible care due to medication errors as well as administrative corrections made by physicians in hospital or primary care. Conclusions The Medication Report appears to be an effective tool for decreasing adverse clinical consequences when elderly patients are discharged from hospital care.
Active airborne localisation and exploration in unknown environments using inertial SLAM
Future unmanned aerial vehicle (UAV) applications will require high-accuracy localisation in environments in which navigation infrastructure such as the Global Positioning System (GPS) and prior terrain maps may be unavailable or unreliable. In these applications, long-term operation requires the vehicle to build up a spatial map of the environment while simultaneously localising itself within the map, a task known as simultaneous localisation and mapping (SLAM). In the first part of this paper we present an architecture for performing inertial-sensor-based SLAM on an aerial vehicle. We demonstrate an on-line path planning scheme that intelligently plans the vehicle's trajectory while exploring unknown terrain in order to maximise the quality of both the resulting SLAM map and the localisation estimates necessary for autonomous control of the UAV. Two important performance properties and their relationship to the dynamic motion and path planning systems on board the UAV are analysed. Firstly, we analyse information-based measures such as entropy. Secondly, we perform an observability analysis of inertial SLAM by recasting the algorithms into an indirect error-model form. Qualitative knowledge gained from the observability analysis is used to assist in the design of an information-based trajectory planner for the UAV. Results of the on-line path planning algorithm are presented using a high-fidelity 6-DoF simulation of a UAV during a simulated navigation and mapping task.
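As an example of the kind of information-based measure referred to, one standard choice is the entropy of a Gaussian belief over the SLAM state; the expressions below are the textbook forms and are given only as an illustration, not as the paper's exact planning objective.

```latex
% Entropy of an n-dimensional Gaussian belief with covariance P, and the
% information gain of a candidate trajectory as the expected entropy reduction.
H(\mathbf{x}) = \tfrac{1}{2}\ln\!\big((2\pi e)^{n}\,\det \mathbf{P}\big),
\qquad
\Delta I = H_{\text{prior}} - \mathbb{E}\!\left[H_{\text{posterior}}\right].
```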
Evaluating social spammer detection systems
The rising popularity of social network services, such as Twitter, has attracted many spammers and created a large number of fake accounts, overwhelming legitimate users with advertising, malware and unwanted, disruptive information. This not only inconveniences users' social activities but also causes financial loss and privacy issues. Identifying social spammers is challenging because spammers continually change their strategies to fool existing anti-spamming systems. Thus, many researchers have proposed new classification systems using various types of features extracted from content and user information. However, no comprehensive comparative study has been done to compare the effectiveness and efficiency of the existing systems. At this stage, it is hard to know which anti-spamming system is best and why. This paper proposes a unified evaluation workbench that allows researchers to access various user- and content-based features, implement new features, and evaluate and compare the performance of their systems against existing systems. Through our analysis, we identify the most effective and efficient social spammer detection features, which can help develop faster and more accurate classifier models with higher true positive and lower false positive rates.
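A minimal sketch of the kind of comparison such a workbench enables, assuming scikit-learn: fit a classifier on a couple of user/content features and report true and false positive rates. The two features and the synthetic data are placeholders for the real feature sets compared in the paper.

```python
# Train a spammer/legitimate classifier on synthetic features and report
# true positive and false positive rates on a held-out split.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(1)
X = np.vstack([
    rng.normal([0.2, 30.0], [0.1, 10.0], size=(500, 2)),  # spammers: low follower ratio, many URLs
    rng.normal([1.0, 2.0], [0.3, 2.0], size=(500, 2)),    # legitimate users
])
y = np.array([1] * 500 + [0] * 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
tn, fp, fn, tp = confusion_matrix(y_test, clf.predict(X_test)).ravel()
print(f"TPR = {tp / (tp + fn):.2f}, FPR = {fp / (fp + tn):.2f}")
```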
Beam Tilting Antenna Using Integrated Metamaterial Loading
This communication presents a technique to re-direct the radiation beam from a planar antenna in a specific direction through the inclusion of metamaterial loading. The beam-tilting approach described here exploits the phase change that results when an EM wave enters a medium of different refractive index. The metamaterial H-shaped unit-cell structure is configured to provide a high refractive index, which is used to implement beam tilting in a bow-tie antenna. The fabricated unit-cell was first characterized by measuring its S-parameters. A two-dimensional array was then constructed using the proposed unit-cell to create a region of high refractive index, which was placed in the vicinity of the bow-tie structure to realize beam tilting. The simulation and experimental results show that the main beam of the antenna in the E-plane is tilted by 17 degrees with respect to the end-fire direction at 7.3, 7.5, and 7.7 GHz. Results also show that, unlike conventional beam-tilting antennas, no gain drop is observed when the beam is tilted; in fact there is a gain enhancement of 2.73 dB compared to the original bow-tie antenna at 7.5 GHz. The reflection coefficient of the antenna remains below -10 dB across the frequency range of operation.
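The phase-change picture invoked above can be summarized by the textbook refraction relation; the tilt angle and gain figures quoted in the abstract are measured results, while the expression below is only the underlying Snell's-law relationship, not a formula taken from the paper.

```latex
% A wave crossing from the high-index metamaterial region (n_1) into free
% space (n_2 = 1) exits at a tilted angle \theta_2 governed by Snell's law.
n_1 \sin\theta_1 = n_2 \sin\theta_2 .
```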