query_id (string, 32 chars) | query (string, 0–35.7k chars) | positive_passages (list, 1–7 items) | negative_passages (list, 22–29 items) | subset (string, 2 classes) |
---|---|---|---|---|
d5d8cb033291263ffeb48f31e72cde1b | Rekindling network protocol innovation with user-level stacks | [
{
"docid": "f9c938a98621f901c404d69a402647c7",
"text": "The growing popularity of virtual machines is pushing the demand for high performance communication between them. Past solutions have seen the use of hardware assistance, in the form of \"PCI passthrough\" (dedicating parts of physical NICs to each virtual machine) and even bouncing traffic through physical switches to handle data forwarding and replication.\n In this paper we show that, with a proper design, very high speed communication between virtual machines can be achieved completely in software. Our architecture, called VALE, implements a Virtual Local Ethernet that can be used by virtual machines, such as QEMU, KVM and others, as well as by regular processes. VALE achieves a throughput of over 17 million packets per second (Mpps) between host processes, and over 2 Mpps between QEMU instances, without any hardware assistance.\n VALE is available for both FreeBSD and Linux hosts, and is implemented as a kernel module that extends our recently proposed netmap framework, and uses similar techniques to achieve high packet rates.",
"title": ""
}
] | [
{
"docid": "bf5f08174c55ed69e454a87ff7fbe6e2",
"text": "In much of the current literature on supply chain management, supply networks are recognized as a system. In this paper, we take this observation to the next level by arguing the need to recognize supply networks as a complex adaptive system (CAS). We propose that many supply networks emerge rather than result from purposeful design by a singular entity. Most supply chain management literature emphasizes negative feedback for purposes of control; however, the emergent patterns in a supply network can much better be managed through positive feedback, which allows for autonomous action. Imposing too much control detracts from innovation and flexibility; conversely, allowing too much emergence can undermine managerial predictability and work routines. Therefore, when managing supply networks, managers must appropriately balance how much to control and how much to let emerge. © 2001 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "bb201a87b4f81c9c4d2c8889d4bd3a6a",
"text": "Computers have difficulty learning how to play Texas Hold’em Poker. The game contains a high degree of stochasticity, hidden information, and opponents that are deliberately trying to mis-represent their current state. Poker has a much larger game space than classic parlour games such as Chess and Backgammon. Evolutionary methods have been shown to find relatively good results in large state spaces, and neural networks have been shown to be able to find solutions to non-linear search problems. In this paper, we present several algorithms for teaching agents how to play No-Limit Texas Hold’em Poker using a hybrid method known as evolving neural networks. Furthermore, we adapt heuristics such as halls of fame and co-evolution to be able to handle populations of Poker agents, which can sometimes contain several hundred opponents, instead of a single opponent. Our agents were evaluated against several benchmark agents. Experimental results show the overall best performance was obtained by an agent evolved from a single population (i.e., with no co-evolution) using a large hall of fame. These results demonstrate the effectiveness of our algorithms in creating competitive No-Limit Texas Hold’em Poker agents.",
"title": ""
},
{
"docid": "cf1d8589fb42bd2af21e488e3ea79765",
"text": "This paper presents ProRace, a dynamic data race detector practical for production runs. It is lightweight, but still offers high race detection capability. To track memory accesses, ProRace leverages instruction sampling using the performance monitoring unit (PMU) in commodity processors. Our PMU driver enables ProRace to sample more memory accesses at a lower cost compared to the state-of-the-art Linux driver. Moreover, ProRace uses PMU-provided execution contexts including register states and program path, and reconstructs unsampled memory accesses offline. This technique allows \\ProRace to overcome inherent limitations of sampling and improve the detection coverage by performing data race detection on the trace with not only sampled but also reconstructed memory accesses. Experiments using racy production software including apache and mysql shows that, with a reasonable offline cost, ProRace incurs only 2.6% overhead at runtime with 27.5% detection probability with a sampling period of 10,000.",
"title": ""
},
{
"docid": "86e4fa3a9cc7dd6298785f40dae556b6",
"text": "Stochastic block model (SBM) and its variants are popular models used in community detection for network data. In this paper, we propose a feature adjusted stochastic block model (FASBM) to capture the impact of node features on the network links as well as to detect the residual community structure beyond that explained by the node features. The proposed model can accommodate multiple node features and estimate the form of feature impacts from the data. Moreover, unlike many existing algorithms that are limited to binary-valued interactions, the proposed FASBM model and inference approaches are easily applied to relational data that generates from any exponential family distribution. We illustrate the methods on simulated networks and on two real world networks: a brain network and an US air-transportation network.",
"title": ""
},
{
"docid": "49a6de5759f4e760f68939e9292928d8",
"text": "An ongoing controversy exists in the prototyping community about how closely in form and function a user-interface prototype should represent the final product. This dispute is referred to as the \" Low-versus High-Fidelity Prototyping Debate.'' In this article, we discuss arguments for and against low-and high-fidelity prototypes , guidelines for the use of rapid user-interface proto-typing, and the implications for user-interface designers.",
"title": ""
},
{
"docid": "d44bc13e5dd794a70211aac7ba44103b",
"text": "Endowing artificial agents with the ability to empathize is believed to enhance their social behavior and to make them more likable, trustworthy, and caring. Neuropsychological findings substantiate that empathy occurs to different degrees depending on several factors including, among others, a person’s mood, personality, and social relationships with others. Although there is increasing interest in endowing artificial agents with affect, personality, and the ability to build social relationships, little attention has been devoted to the role of such factors in influencing their empathic behavior. In this paper, we present a computational model of empathy which allows a virtual human to exhibit different degrees of empathy. The presented model is based on psychological models of empathy and is applied and evaluated in the context of a conversational agent scenario.",
"title": ""
},
{
"docid": "dc330168eb4ca331c8fbfa40b6abdd66",
"text": "For multimedia communications, the low computational complexity of coder is required to integrate services of several media sources due to the limited computing capability of the personal information machine. The Multi-pulse Maximum Likelihood Quantization (MP-MLQ) algorithm with high computational complexity and high quality has been used in the G.723.1 standard codec. To reduce the computational complexity of the MP-MLQ method, this paper presents an efficient pre-selection scheme to simplify the excitation codebook search procedure which is computationally the most demand-ing. We propose a fast search algorithm which uses an energy function to predict the candidate pulses, and the codebook is redesigned to become the multi-track position structure. Simulation results show that the average of the perceptual evaluation of speech quality (PESQ) is degraded slightly, by only 0.056, and our proposed method can reduce computational complexity by about 52.8% relative to the original G.723.1 MP-MLQ computation load with perceptually negligible degradation. Our objective evaluations verify that the proposed method can provide speech quality comparable to that of the original MP-MLQ approach.",
"title": ""
},
{
"docid": "ccddd7df2b5246c44d349bfb0aae499a",
"text": "We consider stochastic multi-armed bandit problems with complex actions over a set of basic arms, where the decision maker plays a complex action rather than a basic arm in each round. The reward of the complex action is some function of the basic arms’ rewards, and the feedback observed may not necessarily be the reward per-arm. For instance, when the complex actions are subsets of the arms, we may only observe the maximum reward over the chosen subset. Thus, feedback across complex actions may be coupled due to the nature of the reward function. We prove a frequentist regret bound for Thompson sampling in a very general setting involving parameter, action and observation spaces and a likelihood function over them. The bound holds for discretely-supported priors over the parameter space without additional structural properties such as closed-form posteriors, conjugate prior structure or independence across arms. The regret bound scales logarithmically with time but, more importantly, with an improved constant that non-trivially captures the coupling across complex actions due to the structure of the rewards. As applications, we derive improved regret bounds for classes of complex bandit problems involving selecting subsets of arms, including the first nontrivial regret bounds for nonlinear MAX reward feedback from subsets.",
"title": ""
},
{
"docid": "2a3273a7308273887b49f2d6cc99fe68",
"text": "The healthcare industry collects huge amounts of healthcare data which, unfortunately, are not \";mined\"; to discover hidden information for effective decision making. Discovery of hidden patterns and relationships often goes unexploited. Advanced data mining techniques can help remedy this situation. This research has developed a prototype Intelligent Heart Disease Prediction System (IHDPS) using data mining techniques, namely, Decision Trees, Naive Bayes and Neural Network. Results show that each technique has its unique strength in realizing the objectives of the defined mining goals. IHDPS can answer complex \";what if\"; queries which traditional decision support systems cannot. Using medical profiles such as age, sex, blood pressure and blood sugar it can predict the likelihood of patients getting a heart disease. It enables significant knowledge, e.g. patterns, relationships between medical factors related to heart disease, to be established. IHDPS is Web-based, user-friendly, scalable, reliable and expandable. It is implemented on the .NET platform.",
"title": ""
},
{
"docid": "3810acca479f6fa5d4f314d36a27b42c",
"text": "The paper describes a stabilization control of two wheels driven wheelchair based on pitch angle disturbance observer (PADO). PADO makes it possible to stabilize the wheelchair motion and remove casters. This brings a sophisticated mobility of wheelchair because the casters are obstacle to realize step passage motion and so on. The proposed approach based on PADO is robust against disturbance of pitch angle direction and the more functional wheelchairs is expected in the developed system. The validity of the proposed method is confirmed by simulation and experiment.",
"title": ""
},
{
"docid": "64330f538b3d8914cbfe37565ab0d648",
"text": "The compositionality of meaning extends beyond the single sentence. Just as words combine to form the meaning of sentences, so do sentences combine to form the meaning of paragraphs, dialogues and general discourse. We introduce both a sentence model and a discourse model corresponding to the two levels of compositionality. The sentence model adopts convolution as the central operation for composing semantic vectors and is based on a novel hierarchical convolutional neural network. The discourse model extends the sentence model and is based on a recurrent neural network that is conditioned in a novel way both on the current sentence and on the current speaker. The discourse model is able to capture both the sequentiality of sentences and the interaction between different speakers. Without feature engineering or pretraining and with simple greedy decoding, the discourse model coupled to the sentence model obtains state of the art performance on a dialogue act classification experiment.",
"title": ""
},
{
"docid": "ec9c0ba115e68545e263a82d6282d43e",
"text": "A 1.8 GHz LC VCO in 1.8-V supply is presented. The VCO achieves low power consumption by optimum selection of inductance in the L-C tank. To increase the tuning range, a three-bit switching capacitor array is used for digital switched tuning. Designed in 0.18μm RF CMOS technology, the proposed VCO achieves a phase noise of -126.2dBc/Hz at 1MHz offset and consumes 1.38mA core current at 1.8-V voltage supply.",
"title": ""
},
{
"docid": "2172e78731ee63be5c15549e38c4babb",
"text": "The design of a low-cost low-power ring oscillator-based truly random number generator (TRNG) macrocell, which is suitable to be integrated in smart cards, is presented. The oscillator sampling technique is exploited, and a tetrahedral oscillator with large jitter has been employed to realize the TRNG. Techniques to improve the statistical quality of the ring oscillatorbased TRNGs' bit sequences have been presented and verified by simulation and measurement. A postdigital processor is added to further enhance the randomness of the output bits. Fabricated in the HHNEC 0.13-μm standard CMOS process, the proposed TRNG has an area as low as 0.005 mm2. Powered by a single 1.8-V supply voltage, the TRNG has a power consumption of 40 μW. The bit rate of the TRNG after postprocessing is 100 kb/s. The proposed TRNG has been made into an IP and successfully applied in an SD card for encryption application. The proposed TRNG has passed the National Institute of Standards and Technology tests and Diehard tests.",
"title": ""
},
{
"docid": "8877d6753d6b7cd39ba36c074ca56b00",
"text": "Perhaps the most fundamental application of affective computing will be Human-Computer Interaction (HCI) in which the computer should have the ability to detect and track the user's affective states, and make corresponding feedback. The human multi-sensor affect system defines the expectation of multimodal affect analyzer. In this paper, we present our efforts toward audio-visual HCI-related affect recognition. With HCI applications in mind, we take into account some special affective states which indicate users' cognitive/motivational states. Facing the fact that a facial expression is influenced by both an affective state and speech content, we apply a smoothing method to extract the information of the affective state from facial features. In our fusion stage, a voting method is applied to combine audio and visual modalities so that the final affect recognition accuracy is greatly improved. We test our bimodal affect recognition approach on 38 subjects with 11 HCI-related affect states. The extensive experimental results show that the average person-dependent affect recognition accuracy is almost 90% for our bimodal fusion.",
"title": ""
},
{
"docid": "d8f54e45818fd88fc8e5689de55428a3",
"text": "When brief blank fields are placed between alternating displays of an original and a modified scene, a striking failure of perception is induced: The changes become extremely difficult to notice, even when they are large, presented repeatedly, and the observer expects them to occur (Rensink, O’Regan, & Clark, 1997). To determine the mechanisms behind this induced “change blindness”, four experiments examine its dependence on initial preview and on the nature of the interruptions used. Results support the proposal that representations at the early stages of visual processing are inherently volatile, and that focused attention is needed to stabilize them sufficiently to support the perception of change.",
"title": ""
},
{
"docid": "c30ea570f744f576014aeacf545b027c",
"text": "We aimed to examine the effect of different doses of lutein supplementation on visual function in subjects with long-term computer display light exposure. Thirty-seven healthy subjects with long-term computer display light exposure ranging in age from 22 to 30 years were randomly assigned to one of three groups: Group L6 (6 mg lutein/d, n 12); Group L12 (12 mg lutein/d, n 13); and Group Placebo (maltodextrin placebo, n 12). Levels of serum lutein and visual performance indices such as visual acuity, contrast sensitivity and glare sensitivity were measured at weeks 0 and 12. After 12-week lutein supplementation, serum lutein concentrations of Groups L6 and L12 increased from 0.356 (SD 0.117) to 0.607 (SD 0.176) micromol/l, and from 0.328 (SD 0.120) to 0.733 (SD 0.354) micromol/l, respectively. No statistical changes from baseline were observed in uncorrected visual acuity and best-spectacle corrected visual acuity, whereas there was a trend toward increase in visual acuity in Group L12. Contrast sensitivity in Groups L6 and L12 increased with supplementation, and statistical significance was reached at most visual angles of Group L12. No significant change was observed in glare sensitivity over time. Visual function in healthy subjects who received the lutein supplement improved, especially in contrast sensitivity, suggesting that a higher intake of lutein may have beneficial effects on the visual performance.",
"title": ""
},
{
"docid": "eadc50aebc6b9c2fbd16f9ddb3094c00",
"text": "Instance segmentation is the problem of detecting and delineating each distinct object of interest appearing in an image. Current instance segmentation approaches consist of ensembles of modules that are trained independently of each other, thus missing opportunities for joint learning. Here we propose a new instance segmentation paradigm consisting in an end-to-end method that learns how to segment instances sequentially. The model is based on a recurrent neural network that sequentially finds objects and their segmentations one at a time. This net is provided with a spatial memory that keeps track of what pixels have been explained and allows occlusion handling. In order to train the model we designed a principled loss function that accurately represents the properties of the instance segmentation problem. In the experiments carried out, we found that our method outperforms recent approaches on multiple person segmentation, and all state of the art approaches on the Plant Phenotyping dataset for leaf counting.",
"title": ""
},
{
"docid": "e9dc75f34b398b4e0d028f4dbbb707d1",
"text": "INTRODUCTION\nUniversity students are potentially important targets for the promotion of healthy lifestyles as this may reduce the risks of lifestyle-related disorders later in life. This cross-sectional study examined differences in eating behaviours, dietary intake, weight status, and body composition between male and female university students.\n\n\nMETHODOLOGY\nA total of 584 students (59.4% females and 40.6% males) aged 20.6 +/- 1.4 years from four Malaysian universities in the Klang Valley participated in this study. Participants completed the Eating Behaviours Questionnaire and two-day 24-hour dietary recall. Body weight, height, waist circumference and percentage of body fat were measured.\n\n\nRESULTS\nAbout 14.3% of males and 22.4% of females were underweight, while 14.0% of males and 12.3% of females were overweight and obese. A majority of the participants (73.8% males and 74.6% females) skipped at least one meal daily in the past seven days. Breakfast was the most frequently skipped meal. Both males and females frequently snacked during morning tea time. Fruits and biscuits were the most frequently consumed snack items. More than half of the participants did not meet the Malaysian Recommended Nutrient Intake (RNI) for energy, vitamin C, thiamine, riboflavin, niacin, iron (females only), and calcium. Significantly more males than females achieved the RNI levels for energy, protein and iron intakes.\n\n\nCONCLUSION\nThis study highlights the presence of unhealthy eating behaviours, inadequate nutrient intake, and a high prevalence of underweight among university students. Energy and nutrient intakes differed between the sexes. Therefore, promoting healthy eating among young adults is crucial to achieve a healthy nutritional status.",
"title": ""
},
{
"docid": "1dc615b299a8a63caa36cd8e36459323",
"text": "Domain adaptation manages to build an effective target classifier or regression model for unlabeled target data by utilizing the well-labeled source data but lying different distributions. Intuitively, to address domain shift problem, it is crucial to learn domain invariant features across domains, and most existing approaches have concentrated on it. However, they often do not directly constrain the learned features to be class discriminative for both source and target data, which is of vital importance for the final classification. Therefore, in this paper, we put forward a novel feature learning method for domain adaptation to construct both domain invariant and class discriminative representations, referred to as DICD. Specifically, DICD is to learn a latent feature space with important data properties preserved, which reduces the domain difference by jointly matching the marginal and class-conditional distributions of both domains, and simultaneously maximizes the inter-class dispersion and minimizes the intra-class scatter as much as possible. Experiments in this paper have demonstrated that the class discriminative properties will dramatically alleviate the cross-domain distribution inconsistency, which further boosts the classification performance. Moreover, we show that exploring both domain invariance and class discriminativeness of the learned representations can be integrated into one optimization framework, and the optimal solution can be derived effectively by solving a generalized eigen-decomposition problem. Comprehensive experiments on several visual cross-domain classification tasks verify that DICD can outperform the competitors significantly.",
"title": ""
},
{
"docid": "ac46286c7d635ccdcd41358666026c12",
"text": "This paper represents our first endeavor to explore how to better understand the complex nature, scope, and practices of eSports. Our goal is to explore diverse perspectives on what defines eSports as a starting point for further research. Specifically, we critically reviewed existing definitions/understandings of eSports in different disciplines. We then interviewed 26 eSports players and qualitatively analyzed their own perceptions of eSports. We contribute to further exploring definitions and theories of eSports for CHI researchers who have considered online gaming a serious and important area of research, and highlight opportunities for new avenues of inquiry for researchers who are interested in designing technologies for this unique genre.",
"title": ""
}
] | scidocsrr |
f4476a230ad3cff29a1ace5d2e4f3987 | Clustering of streaming time series is meaningless | [
{
"docid": "e10886264acb1698b36c4d04cf2d9df6",
"text": "† This work was supported by the RGC CERG project PolyU 5065/98E and the Departmental Grant H-ZJ84 ‡ Corresponding author ABSTRACT Pattern discovery from time series is of fundamental importance. Particularly when the domain expert derived patterns do not exist or are not complete, an algorithm to discover specific patterns or shapes automatically from the time series data is necessary. Such an algorithm is noteworthy in that it does not assume prior knowledge of the number of interesting structures, nor does it require an exhaustive explanation of the patterns being described. In this paper, a clustering approach is proposed for pattern discovery from time series. In view of its popularity and superior clustering performance, the self-organizing map (SOM) was adopted for pattern discovery in temporal data sequences. It is a special type of clustering algorithm that imposes a topological structure on the data. To prepare for the SOM algorithm, data sequences are segmented from the numerical time series using a continuous sliding window. Similar temporal patterns are then grouped together using SOM into clusters, which may subsequently be used to represent different structures of the data or temporal patterns. Attempts have been made to tackle the problem of representing patterns in a multi-resolution manner. With the increase in the number of data points in the patterns (the length of patterns), the time needed for the discovery process increases exponentially. To address this problem, we propose to compress the input patterns by a perceptually important point (PIP) identification algorithm. The idea is to replace the original data segment by its PIP’s so that the dimensionality of the input pattern can be reduced. Encouraging results are observed and reported for the application of the proposed methods to the time series collected from the Hong Kong stock market.",
"title": ""
}
] | [
{
"docid": "83fbffec2e727e6ed6be1e02f54e1e47",
"text": "Large dc and ac electric currents are often measured by open-loop sensors without a magnetic yoke. A widely used configuration uses a differential magnetic sensor inserted into a hole in a flat busbar. The use of a differential sensor offers the advantage of partial suppression of fields coming from external currents. Hall sensors and AMR sensors are currently used in this application. In this paper, we present a current sensor of this type that uses novel integrated fluxgate sensors, which offer a greater range than magnetoresistors and better stability than Hall sensors. The frequency response of this type of current sensor is limited due to the eddy currents in the solid busbar. We present a novel amphitheater geometry of the hole in the busbar of the sensor, which reduces the frequency dependence from 15% error at 1 kHz to 9%.",
"title": ""
},
{
"docid": "777cbf7e5c5bdf4457ce24520bbc8036",
"text": "Recently, both industry and academia have proposed many different roadmaps for the future of DRAM. Consequently, there is a growing need for an extensible DRAM simulator, which can be easily modified to judge the merits of today's DRAM standards as well as those of tomorrow. In this paper, we present Ramulator, a fast and cycle-accurate DRAM simulator that is built from the ground up for extensibility. Unlike existing simulators, Ramulator is based on a generalized template for modeling a DRAM system, which is only later infused with the specific details of a DRAM standard. Thanks to such a decoupled and modular design, Ramulator is able to provide out-of-the-box support for a wide array of DRAM standards: DDR3/4, LPDDR3/4, GDDR5, WIO1/2, HBM, as well as some academic proposals (SALP, AL-DRAM, TL-DRAM, RowClone, and SARP). Importantly, Ramulator does not sacrifice simulation speed to gain extensibility: according to our evaluations, Ramulator is 2.5× faster than the next fastest simulator. Ramulator is released under the permissive BSD license.",
"title": ""
},
{
"docid": "2176518448c89ba977d849f71c86e6a6",
"text": "iii I certify that I have read this dissertation and that in my opinion it is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy. I certify that I have read this dissertation and that in my opinion it is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy. I certify that I have read this dissertation and that in my opinion it is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy. _______________________________________ L. Peter Deutsch I certify that I have read this dissertation and that in my opinion it is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy. Abstract Object-oriented programming languages confer many benefits, including abstraction, which lets the programmer hide the details of an object's implementation from the object's clients. Unfortunately, crossing abstraction boundaries often incurs a substantial run-time overhead in the form of frequent procedure calls. Thus, pervasive use of abstraction , while desirable from a design standpoint, may be impractical when it leads to inefficient programs. Aggressive compiler optimizations can reduce the overhead of abstraction. However, the long compilation times introduced by optimizing compilers delay the programming environment's responses to changes in the program. Furthermore, optimization also conflicts with source-level debugging. Thus, programmers are caught on the horns of two dilemmas: they have to choose between abstraction and efficiency, and between responsive programming environments and efficiency. This dissertation shows how to reconcile these seemingly contradictory goals by performing optimizations lazily. Four new techniques work together to achieve high performance and high responsiveness: • Type feedback achieves high performance by allowing the compiler to inline message sends based on information extracted from the runtime system. On average, programs run 1.5 times faster than the previous SELF system; compared to a commercial Smalltalk implementation, two medium-sized benchmarks run about three times faster. This level of performance is obtained with a compiler that is both simpler and faster than previous SELF compilers. • Adaptive optimization achieves high responsiveness without sacrificing performance by using a fast non-optimizing compiler to generate initial code while automatically recompiling heavily used parts of the program with an optimizing compiler. On a previous-generation workstation like the SPARCstation-2, fewer than 200 pauses exceeded 200 ms during a 50-minute interaction, and 21 pauses exceeded one second. …",
"title": ""
},
{
"docid": "c1ca3f495400a898da846bdf20d23833",
"text": "It is very useful to integrate human knowledge and experience into traditional neural networks for faster learning speed, fewer training samples and better interpretability. However, due to the obscured and indescribable black box model of neural networks, it is very difficult to design its architecture, interpret its features and predict its performance. Inspired by human visual cognition process, we propose a knowledge-guided semantic computing network which includes two modules: a knowledge-guided semantic tree and a data-driven neural network. The semantic tree is pre-defined to describe the spatial structural relations of different semantics, which just corresponds to the tree-like description of objects based on human knowledge. The object recognition process through the semantic tree only needs simple forward computing without training. Besides, to enhance the recognition ability of the semantic tree in aspects of the diversity, randomicity and variability, we use the traditional neural network to aid the semantic tree to learn some indescribable features. Only in this case, the training process is needed. The experimental results on MNIST and GTSRB datasets show that compared with the traditional data-driven network, our proposed semantic computing network can achieve better performance with fewer training samples and lower computational complexity. Especially, Our model also has better adversarial robustness than traditional neural network with the help of human knowledge.",
"title": ""
},
{
"docid": "4c05d5add4bd2130787fd894ce74323a",
"text": "Although semi-supervised model can extract the event mentions matching frequent event patterns, it suffers much from those event mentions, which match infrequent patterns or have no matching pattern. To solve this issue, this paper introduces various kinds of linguistic knowledge-driven event inference mechanisms to semi-supervised Chinese event extraction. These event inference mechanisms can capture linguistic knowledge from four aspects, i.e. semantics of argument role, compositional semantics of trigger, consistency on coreference events and relevant events, to further recover missing event mentions from unlabeled texts. Evaluation on the ACE 2005 Chinese corpus shows that our event inference mechanisms significantly outperform the refined state-of-the-art semi-supervised Chinese event extraction system in F1-score by 8.5%.",
"title": ""
},
{
"docid": "225a492370efee6eca39f713026efe12",
"text": "Researchers in the social and behavioral sciences routinely rely on quasi-experimental designs to discover knowledge from large data-bases. Quasi-experimental designs (QEDs) exploit fortuitous circumstances in non-experimental data to identify situations (sometimes called \"natural experiments\") that provide the equivalent of experimental control and randomization. QEDs allow researchers in domains as diverse as sociology, medicine, and marketing to draw reliable inferences about causal dependencies from non-experimental data. Unfortunately, identifying and exploiting QEDs has remained a painstaking manual activity, requiring researchers to scour available databases and apply substantial knowledge of statistics. However, recent advances in the expressiveness of databases, and increases in their size and complexity, provide the necessary conditions to automatically identify QEDs. In this paper, we describe the first system to discover knowledge by applying quasi-experimental designs that were identified automatically. We demonstrate that QEDs can be identified in a traditional database schema and that such identification requires only a small number of extensions to that schema, knowledge about quasi-experimental design encoded in first-order logic, and a theorem-proving engine. We describe several key innovations necessary to enable this system, including methods for automatically constructing appropriate experimental units and for creating aggregate variables on those units. We show that applying the resulting designs can identify important causal dependencies in real domains, and we provide examples from academic publishing, movie making and marketing, and peer-production systems. Finally, we discuss the integration of QEDs with other approaches to causal discovery, including joint modeling and directed experimentation.",
"title": ""
},
{
"docid": "38a0f56e760b0e7a2979c90a8fbcca68",
"text": "The Rubik’s Cube is perhaps the world’s most famous and iconic puzzle, well-known to have a rich underlying mathematical structure (group theory). In this paper, we show that the Rubik’s Cube also has a rich underlying algorithmic structure. Specifically, we show that the n×n×n Rubik’s Cube, as well as the n×n×1 variant, has a “God’s Number” (diameter of the configuration space) of Θ(n/ logn). The upper bound comes from effectively parallelizing standard Θ(n) solution algorithms, while the lower bound follows from a counting argument. The upper bound gives an asymptotically optimal algorithm for solving a general Rubik’s Cube in the worst case. Given a specific starting state, we show how to find the shortest solution in an n×O(1)×O(1) Rubik’s Cube. Finally, we show that finding this optimal solution becomes NPhard in an n×n×1 Rubik’s Cube when the positions and colors of some cubies are ignored (not used in determining whether the cube is solved).",
"title": ""
},
{
"docid": "e181f73c36c1d8c9463ef34da29d9e03",
"text": "This paper examines prospects and limitations of citation studies in the humanities. We begin by presenting an overview of bibliometric analysis, noting several barriers to applying this method in the humanities. Following that, we present an experimental tool for extracting and classifying citation contexts in humanities journal articles. This tool reports the bibliographic information about each reference, as well as three features about its context(s): frequency, locationin-document, and polarity. We found that extraction was highly successful (above 85%) for three of the four journals, and statistics for the three citation figures were broadly consistent with previous research. We conclude by noting several limitations of the sentiment classifier and suggesting future areas for refinement. .................................................................................................................................................................................",
"title": ""
},
{
"docid": "3a466fd05c021b8bd48600246086aaa2",
"text": "Recent empirical work has examined the extent to which international trade fosters international “spillovers” of technological information. FDI is an alternate, potentially equally important channel for the mediation of such knowledge spillovers. I introduce a framework for measuring international knowledge spillovers at the firm level, and I use this framework to directly test the hypothesis that FDI is a channel of knowledge spillovers for Japanese multinationals undertaking direct investments in the United States. Using an original firm-level panel data set on Japanese firms’ FDI and innovative activity, I find evidence that FDI increases the flow of knowledge spillovers both from and to the investing Japanese firms. ∗ This paper is a revision of Branstetter (2000a). I would like to thank Natasha Hsieh, Masami Imai,Yoko Kusaka, Grace Lin, Kentaro Minato, Kaoru Nabeshima, and Yoshiaki Ogura for excellent research assistance. I also thank Paul Almeida, Jonathan Eaton, Bronwyn Hall, Takatoshi Ito, Adam Jaffe, Wolfgang Keller, Yoshiaki Nakamura, James Rauch, Mariko Sakakibara, Ryuhei Wakasugi, two anonymous referees, and seminar participants at UC-Davis, UC-Berkeley, Boston University, UC-Boulder, Brandeis University, Columbia University, Cornell University, Northwestern University, UC-San Diego, the World Bank, the University of Michigan, the Research Institute of Economy, Trade, and Industry, and the NBER for valuable comments. Funding was provided by a University of California Faculty Research Grant, a grant from the Japan Foundation Center for Global Partnership, and the NBER Project on Industrial Technology and Productivity. Note that parts of this paper borrow from Branstetter (2000b) and from Branstetter and Nakamura (2003). I am solely responsible for any errors. ** Lee Branstetter, Columbia Business School, Uris Hall 815, 3022 Broadway, New York, NY 10027; TEL 212-854-2722; FAX 212-854-9895; E-mail [email protected]",
"title": ""
},
{
"docid": "a58930da8179d71616b8b6ef01ed1569",
"text": "Collecting sensor data results in large temporal data sets which need to be visualized, analyzed, and presented. One-dimensional time-series charts are used, but these present problems when screen resolution is small in comparison to the data. This can result in severe over-plotting, giving rise for the requirement to provide effective rendering and methods to allow interaction with the detailed data. Common solutions can be categorized as multi-scale representations, frequency based, and lens based interaction techniques. In this paper, we comparatively evaluate existing methods, such as Stack Zoom [15] and ChronoLenses [38], giving a graphical overview of each and classifying their ability to explore and interact with data. We propose new visualizations and other extensions to the existing approaches. We undertake and report an empirical study and a field study using these techniques.",
"title": ""
},
{
"docid": "560a19017dcc240d48bb879c3165b3e1",
"text": "Battery management systems in hybrid electric vehicle battery packs must estimate values descriptive of the pack’s present operating condition. These include: battery state of charge, power fade, capacity fade, and instantaneous available power. The estimation mechanism must adapt to changing cell characteristics as cells age and therefore provide accurate estimates over the lifetime of the pack. In a series of three papers, we propose a method, based on extended Kalman filtering (EKF), that is able to accomplish these goals on a lithium ion polymer battery pack. We expect that it will also work well on other battery chemistries. These papers cover the required mathematical background, cell modeling and system identification requirements, and the final solution, together with results. In order to use EKF to estimate the desired quantities, we first require a mathematical model that can accurately capture the dynamics of a cell. In this paper we “evolve” a suitable model from one that is very primitive to one that is more advanced and works well in practice. The final model includes terms that describe the dynamic contributions due to open-circuit voltage, ohmic loss, polarization time constants, electro-chemical hysteresis, and the effects of temperature. We also give a means, based on EKF, whereby the constant model parameters may be determined from cell test data. Results are presented that demonstrate it is possible to achieve root-mean-squared modeling error smaller than the level of quantization error expected in an implementation. © 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "80a61f27dab6a8f71a5c27437254778b",
"text": "5G will have to cope with a high degree of heterogeneity in terms of services and requirements. Among these latter, the flexible and efficient use of non-contiguous unused spectrum for different network deployment scenarios is considered a key challenge for 5G systems. To maximize spectrum efficiency, the 5G air interface technology will also need to be flexible and capable of mapping various services to the best suitable combinations of frequency and radio resources. In this work, we propose a comparison of several 5G waveform candidates (OFDM, UFMC, FBMC and GFDM) under a common framework. We assess spectral efficiency, power spectral density, peak-to-average power ratio and robustness to asynchronous multi-user uplink transmission. Moreover, we evaluate and compare the complexity of the different waveforms. In addition to the complexity analysis, in this work, we also demonstrate the suitability of FBMC for specific 5G use cases via two experimental implementations. The benefits of these new waveforms for the foreseen 5G use cases are clearly highlighted on representative criteria and experiments.",
"title": ""
},
{
"docid": "be689d89e1e5182895a473a52a1950cd",
"text": "This paper designs a Continuous Data Level Auditing system utilizing business process based analytical procedures and evaluates the system’s performance using disaggregated transaction records of a large healthcare management firm. An important innovation in the proposed architecture of the CDA system is the utilization of analytical monitoring as the second (rather than the first) stage of data analysis. The first component of the system utilizes automatic transaction verification to filter out exceptions, defined as transactions violating formal business process rules. The second component of the system utilizes business process based analytical procedures, denoted here ―Continuity Equations‖, as the expectation models for creating business process audit benchmarks. Our first objective is to examine several expectation models that can serve as the continuity equation benchmarks: a Linear Regression Model, a Simultaneous Equation Model, two Vector Autoregressive models, and a GARCH model. The second objective is to examine the impact of the choice of the level of data aggregation on anomaly detection performance. The third objective is to design a set of online learning and error correction protocols for automatic model inference and updating. Using a seeded error simulation approach, we demonstrate that the use of disaggregated business process data allows the detection of anomalies that slip through the analytical procedures applied to more aggregated data. Furthermore, the results indicate that under most circumstances the use of real time error correction results in superior performance, thus showing the benefit of continuous auditing.",
"title": ""
},
{
"docid": "d558f980b85bf970a7b57c00df361591",
"text": "URL shortener services today have come to play an important role in our social media landscape. They direct user attention and disseminate information in online social media such as Twitter or Facebook. Shortener services typically provide short URLs in exchange for long URLs. These short URLs can then be shared and diffused by users via online social media, e-mail or other forms of electronic communication. When another user clicks on the shortened URL, she will be redirected to the underlying long URL. Shortened URLs can serve many legitimate purposes, such as click tracking, but can also serve illicit behavior such as fraud, deceit and spam. Although usage of URL shortener services today is ubiquituous, our research community knows little about how exactly these services are used and what purposes they serve. In this paper, we study usage logs of a URL shortener service that has been operated by our group for more than a year. We expose the extent of spamming taking place in our logs, and provide first insights into the planetary-scale of this problem. Our results are relevant for researchers and engineers interested in understanding the emerging phenomenon and dangers of spamming via URL shortener services.",
"title": ""
},
{
"docid": "d18d4780cc259da28da90485bd3f0974",
"text": "L'ostéogenèse imparfaite (OI) est un groupe hétérogène de maladies affectant le collagène de type I et caractérisées par une fragilité osseuse. Les formes létales sont rares et se caractérisent par une micromélie avec déformation des membres. Un diagnostic anténatal d'OI létale a été fait dans deux cas, par échographie à 17 et à 25 semaines d'aménorrhée, complélées par un scanner du squelette fœtal dans un cas. Une interruption thérapeutique de grossesse a été indiquée dans les deux cas. Pan African Medical Journal. 2016; 25:88 doi:10.11604/pamj.2016.25.88.5871 This article is available online at: http://www.panafrican-med-journal.com/content/article/25/88/full/ © Houda EL Mhabrech et al. The Pan African Medical Journal ISSN 1937-8688. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Pan African Medical Journal – ISSN: 19378688 (www.panafrican-med-journal.com) Published in partnership with the African Field Epidemiology Network (AFENET). (www.afenet.net) Case report Open Access",
"title": ""
},
{
"docid": "3bb48e5bf7cc87d635ab4958553ef153",
"text": "This paper presents an in-depth study of young Swedish consumers and their impulsive online buying behaviour for clothing. The aim of the study is to develop the understanding of what factors affect impulse buying of clothing online and what feelings emerge when buying online. The study carried out was exploratory in nature, aiming to develop an understanding of impulse buying behaviour online before, under and after the actual purchase. The empirical data was collected through personal interviews. In the study, a pattern of the consumers recurrent feelings are identified through the impulse buying process; escapism, pleasure, reward, scarcity, security and anticipation. The escapism is particularly occurring since the study revealed that the consumers often carried out impulse purchases when they initially were bored, as opposed to previous studies. 1 University of Borås, Swedish Institute for Innovative Retailing, School of Business and IT, Allégatan 1, S-501 90 Borås, Sweden. Phone: +46732305934 Mail: [email protected]",
"title": ""
},
{
"docid": "1313abe909877b95557c51bb3b378cdb",
"text": "To evaluate the effect of early systematic soccer training on postural control we measured center-of-pressure (COP) variability, range, mean velocity and frequency in bipedal quiet stance with eyes open (EO) and closed (EC) in 44 boys aged 13 (25 boys who practiced soccer for 5–6 years and 19 healthy boys who did not practice sports). The soccer players had better stability, particularly in the medial–lateral plane (M/L); their COP variability and range were lower than in controls in both EO (p < 0.05) and EC (p < 0.0005) condition indicating that the athletes were less dependent on vision than non-athletes. Improved stability of athletes was accompanied by a decrease in COP frequency (p < 0.001 in EO, and p < 0.04 in EC) which accounted for lower regulatory activity of balance system in soccer players. The athletes had lower COP mean velocity than controls (p < 0.0001 in both visual condition), with larger difference in the M/L than A/P plane (p < 0.00001 and p < 0.05, respectively). Postural behavior was more variable within the non-athletes than soccer players, mainly in the EC stances (p < 0.005 for all COP parameters). We conclude that: (1) soccer training described was efficient in improving the M/L postural control in young boys; (2) athletes developed specific postural strategies characterized by decreased COP frequency and lower reliance on vision.",
"title": ""
},
{
"docid": "7cebca46f584b2f31fd9d2c8ef004f17",
"text": "Wirelessly networked systems of intra-body sensors and actuators could enable revolutionary applications at the intersection between biomedical science, networking, and control with a strong potential to advance medical treatment of major diseases of our times. Yet, most research to date has focused on communications along the body surface among devices interconnected through traditional electromagnetic radio-frequency (RF) carrier waves; while the underlying root challenge of enabling networked intra-body miniaturized sensors and actuators that communicate through body tissues is substantially unaddressed. The main obstacle to enabling this vision of networked implantable devices is posed by the physical nature of propagation in the human body. The human body is composed primarily (65 percent) of water, a medium through which RF electromagnetic waves do not easily propagate, even at relatively low frequencies. Therefore, in this article we take a different perspective and propose to investigate and study the use of ultrasonic waves to wirelessly internetwork intra-body devices. We discuss the fundamentals of ultrasonic propagation in tissues, and explore important tradeoffs, including the choice of a transmission frequency, transmission power, and transducer size. Then, we discuss future research challenges for ultrasonic networking of intra-body devices at the physical, medium access and network layers of the protocol stack.",
"title": ""
},
{
"docid": "b336b95e53ba0d804060d2cee84f5fb4",
"text": "Discovering unexpected and useful patterns in databases is a fundamental data mining task. In recent years, a trend in data mining has been to design algorithms for discovering patterns in sequential data. One of the most popular data mining tasks on sequences is sequential pattern mining. It consists of discovering interesting subsequences in a set of sequences, where the interestingness of a subsequence can be measured in terms of various criteria such as its occurrence frequency, length, and profit. Sequential pattern mining has many real-life applications since data is encoded as sequences in many fields such as bioinformatics, e-learning, market basket analysis, text analysis, and webpage click-stream analysis. This paper surveys recent studies on sequential pattern mining and its applications. The goal is to provide both an introduction to sequential pattern mining, and a survey of recent advances and research opportunities. The paper is divided into four main parts. First, the task of sequential pattern mining is defined and its applications are reviewed. Key concepts and terminology are introduced. Moreover, main approaches and strategies to solve sequential pattern mining problems are presented. Limitations of traditional sequential pattern mining approaches are also highlighted, and popular variations of the task of sequential pattern mining are presented. The paper also presents research opportunities and the relationship to other popular pattern mining problems. Lastly, the paper also discusses open-source implementations of sequential pattern mining algorithms.",
"title": ""
},
{
"docid": "9a47ac8b2a5de779909f15bde96c283c",
"text": "We study lender behavior in the peer-to-peer (P2P) lending market, where individuals bid on unsecured microloans requested by other individual borrowers. Online P2P exchanges are growing, but lenders in this market are not professional investors. In addition, lenders have to take big risks because loans in P2P lending are granted without collateral. While the P2P lending market shares some characteristics of online markets with respect to herding behavior, it also has characteristics that may discourage it. This study empirically investigates herding behavior in the P2P lending market where seemingly conflicting conditions and features of herding are present. Using a large sample of daily data from one of the largest P2P lending platforms in Korea, we find strong evidence of herding and its diminishing marginal effect as bidding advances. We employ a multinomial logit market-share model in which relevant variables from prior studies on P2P lending are assessed. 2012 Elsevier B.V. All rights reserved.",
"title": ""
}
] | scidocsrr |
9075dc1f6297ae56988ab18f77b78e9f | Activity Recognition using Actigraph Sensor | [
{
"docid": "d62bded822aff38333a212ed1853b53c",
"text": "The design of an activity recognition and monitoring system based on the eWatch, multi-sensor platform worn on different body positions, is presented in this paper. The system identifies the user's activity in realtime using multiple sensors and records the classification results during a day. We compare multiple time domain feature sets and sampling rates, and analyze the tradeoff between recognition accuracy and computational complexity. The classification accuracy on different body positions used for wearing electronic devices was evaluated",
"title": ""
}
] | [
{
"docid": "ca8b1080c8e1d6d234d12370f47d7874",
"text": "Alcelaphine herpesvirus-1 (AlHV-1), a causative agent of malignant catarrhal fever in cattle, was detected in wildebeest (Connochaetes taurinus) placenta tissue for the first time. Although viral load was low, the finding of viral DNA in over 50% of 94 samples tested lends support to the possibility that placental tissue could play a role in disease transmission and that wildebeest calves are infected in utero. Two viral loci were sequenced to examine variation among virus samples obtained from wildebeest and cattle: the ORF50 gene, encoding the lytic cycle transactivator protein, and the A9.5 gene, encoding a novel polymorphic viral glycoprotein. ORF50 was well conserved with six newly discovered alleles differing at only one or two base positions. In contrast, while only three new A9.5 alleles were discovered, these differed by up to 13% at the nucleotide level and up to 20% at the amino acid level. Structural homology searching performed with the additional A9.5 sequences determined in this study adds power to recent analysis identifying the four-helix bundle cytokine interleukin-4 (IL4) as the major homologue. The majority of MCF virus samples obtained from Tanzanian cattle and wildebeest encoded A9.5 polypeptides identical to the previously characterized A9.5 allele present in the laboratory maintained AlHV-1 C500 strain. This supports the view that AlHV-1 C500 is suitable for the development of a vaccine for wildebeest-associated MCF.",
"title": ""
},
{
"docid": "2a487ff4b9218900e9a0e480c23e4c25",
"text": "5.1 CONVENTIONAL ACTUATORS, SHAPE MEMORY ALLOYS, AND ELECTRORHEOLOGICAL FLUIDS ............................................................................................................................................................. 1 5.1.",
"title": ""
},
{
"docid": "6da632d61dbda324da5f74b38f25b1b9",
"text": "Deep neural networks have shown good data modelling capabilities when dealing with challenging and large datasets from a wide range of application areas. Convolutional Neural Networks (CNNs) offer advantages in selecting good features and Long Short-Term Memory (LSTM) networks have proven good abilities of learning sequential data. Both approaches have been reported to provide improved results in areas such image processing, voice recognition, language translation and other Natural Language Processing (NLP) tasks. Sentiment classification for short text messages from Twitter is a challenging task, and the complexity increases for Arabic language sentiment classification tasks because Arabic is a rich language in morphology. In addition, the availability of accurate pre-processing tools for Arabic is another current limitation, along with limited research available in this area. In this paper, we investigate the benefits of integrating CNNs and LSTMs and report obtained improved accuracy for Arabic sentiment analysis on different datasets. Additionally, we seek to consider the morphological diversity of particular Arabic words by using different sentiment classification levels.",
"title": ""
},
{
"docid": "e1fb515f0f5bbec346098f1ee2aaefdc",
"text": "Observing failures and other – desired or undesired – behavior patterns in large scale software systems of specific domains (telecommunication systems, information systems, online web applications, etc.) is difficult. Very often, it is only possible by examining the runtime behavior of these systems through operational logs or traces. However, these systems can generate data in order of gigabytes every day, which makes a challenge to process in the course of predicting upcoming critical problems or identifying relevant behavior patterns. We can say that there is a gap between the amount of information we have and the amount of information we need to make a decision. Low level data has to be processed, correlated and synthesized in order to create high level, decision helping data. The actual value of this high level data lays in its availability at the time of decision making (e.g., do we face a virus attack?). In other words high level data has to be available real-time or near real-time. The research area of event processing deals with processing such data that are viewed as events and with making alerts to the administrators (users) of the systems about relevant behavior patterns based on the rules that are determined in advance. The rules or patterns describe the typical circumstances of the events which have been experienced by the administrators. Normally, these experts improve their observation capabilities over time as they experience more and more critical events and the circumstances preceding them. However, there is a way to aid this manual process by applying the results from a related (and from many aspects, overlapping) research area, predictive analytics, and thus improving the effectiveness of event processing. Predictive analytics deals with the prediction of future events based on previously observed historical data by applying sophisticated methods like machine learning, the historical data is often collected and transformed by using techniques similar to the ones of event processing, e.g., filtering, correlating the data, and so on. In this paper, we are going to examine both research areas and offer a survey on terminology, research achievements, existing solutions, and open issues. We discuss the applicability of the research areas to the telecommunication domain. We primarily base our survey on articles published in international conferences and journals, but we consider other sources of information as well, like technical reports, tools or web-logs.",
"title": ""
},
{
"docid": "7210c2e82441b142f722bcc01bfe9aca",
"text": "In the beginning of the last decade, agile methodologies emerged as a response to software development processes that were based on rigid approaches. In fact, the flexible characteristics of agile methods are expected to be suitable to the less-defined and uncertain nature of software development. However, many studies in this area lack empirical evaluation in order to provide more confident evidences about which contexts the claims are true. This paper reports an empirical study performed to analyze the impact of Scrum adoption on customer satisfaction as an external success perspective for software development projects in a software intensive organization. The study uses data from real-life projects executed in a major software intensive organization located in a nation wide software ecosystem. The empirical method applied was a cross-sectional survey using a sample of 19 real-life software development projects involving 156 developers. The survey aimed to determine whether there is any impact on customer satisfaction caused by the Scrum adoption. However, considering that sample, our results indicate that it was not possible to establish any evidence that using Scrum may help to achieve customer satisfaction and, consequently, increase the success rates in software projects, in contrary to general claims made by Scrum's advocates.",
"title": ""
},
{
"docid": "f0242a2a54b1c4538abdd374c74f69f6",
"text": "Background: An increasing research effort has devoted to just-in-time (JIT) defect prediction. A recent study by Yang et al. at FSE'16 leveraged individual change metrics to build unsupervised JIT defect prediction model. They found that many unsupervised models performed similarly to or better than the state-of-the-art supervised models in effort-aware JIT defect prediction. Goal: In Yang et al.'s study, code churn (i.e. the change size of a code change) was neglected when building unsupervised defect prediction models. In this study, we aim to investigate the effectiveness of code churn based unsupervised defect prediction model in effort-aware JIT defect prediction. Methods: Consistent with Yang et al.'s work, we first use code churn to build a code churn based unsupervised model (CCUM). Then, we evaluate the prediction performance of CCUM against the state-of-the-art supervised and unsupervised models under the following three prediction settings: cross-validation, time-wise cross-validation, and cross-project prediction. Results: In our experiment, we compare CCUM against the state-of-the-art supervised and unsupervised JIT defect prediction models. Based on six open-source projects, our experimental results show that CCUM performs better than all the prior supervised and unsupervised models. Conclusions: The result suggests that future JIT defect prediction studies should use CCUM as a baseline model for comparison when a novel model is proposed.",
"title": ""
},
{
"docid": "96c3c7f605f7ca763df0710629edd726",
"text": "This study underlines the importance of cinnamon, a widely-used food spice and flavoring material, and its metabolite sodium benzoate (NaB), a widely-used food preservative and a FDA-approved drug against urea cycle disorders in humans, in increasing the levels of neurotrophic factors [e.g., brain-derived neurotrophic factor (BDNF) and neurotrophin-3 (NT-3)] in the CNS. NaB, but not sodium formate (NaFO), dose-dependently induced the expression of BDNF and NT-3 in primary human neurons and astrocytes. Interestingly, oral administration of ground cinnamon increased the level of NaB in serum and brain and upregulated the levels of these neurotrophic factors in vivo in mouse CNS. Accordingly, oral feeding of NaB, but not NaFO, also increased the level of these neurotrophic factors in vivo in the CNS of mice. NaB induced the activation of protein kinase A (PKA), but not protein kinase C (PKC), and H-89, an inhibitor of PKA, abrogated NaB-induced increase in neurotrophic factors. Furthermore, activation of cAMP response element binding (CREB) protein, but not NF-κB, by NaB, abrogation of NaB-induced expression of neurotrophic factors by siRNA knockdown of CREB and the recruitment of CREB and CREB-binding protein to the BDNF promoter by NaB suggest that NaB exerts its neurotrophic effect through the activation of CREB. Accordingly, cinnamon feeding also increased the activity of PKA and the level of phospho-CREB in vivo in the CNS. These results highlight a novel neutrophic property of cinnamon and its metabolite NaB via PKA – CREB pathway, which may be of benefit for various neurodegenerative disorders.",
"title": ""
},
{
"docid": "20adf89d9301cdaf64d8bf684886de92",
"text": "A standard planar Kernel Density Estimation (KDE) aims to produce a smooth density surface of spatial point events over a 2-D geographic space. However the planar KDE may not be suited for characterizing certain point events, such as traffic accidents, which usually occur inside a 1-D linear space, the roadway network. This paper presents a novel network KDE approach to estimating the density of such spatial point events. One key feature of the new approach is that the network space is represented with basic linear units of equal network length, termed lixel (linear pixel), and related network topology. The use of lixel not only facilitates the systematic selection of a set of regularly spaced locations along a network for density estimation, but also makes the practical application of the network KDE feasible by significantly improving the computation efficiency. The approach is implemented in the ESRI ArcGIS environment and tested with the year 2005 traffic accident data and a road network in the Bowling Green, Kentucky area. The test results indicate that the new network KDE is more appropriate than standard planar KDE for density estimation of traffic accidents, since the latter covers space beyond the event context (network space) and is likely to overestimate the density values. The study also investigates the impacts on density calculation from two kernel functions, lixel lengths, and search bandwidths. It is found that the kernel function is least important in structuring the density pattern over network space, whereas the lixel length critically impacts the local variation details of the spatial density pattern. The search bandwidth imposes the highest influence by controlling the smoothness of the spatial pattern, showing local effects at a narrow bandwidth and revealing \" hot spots \" at larger or global scales with a wider bandwidth. More significantly, the idea of representing a linear network by a network system of equal-length lixels may potentially 3 lead the way to developing a suite of other network related spatial analysis and modeling methods.",
"title": ""
},
{
"docid": "2d4c99f3ff7a19580f9f012da99a8348",
"text": "OBJECTIVES\nTo compare the effectiveness of a mixture of acacia fiber, psyllium fiber, and fructose (AFPFF) with polyethylene glycol 3350 combined with electrolytes (PEG+E) in the treatment of children with chronic functional constipation (CFC); and to evaluate the safety and effectiveness of AFPFF in the treatment of children with CFC.\n\n\nSTUDY DESIGN\nThis was a randomized, open label, prospective, controlled, parallel-group study involving 100 children (M/F: 38/62; mean age ± SD: 6.5 ± 2.7 years) who were diagnosed with CFC according to the Rome III Criteria. Children were randomly divided into 2 groups: 50 children received AFPFF (16.8 g daily) and 50 children received PEG+E (0.5 g/kg daily) for 8 weeks. Primary outcome measures were frequency of bowel movements, stool consistency, fecal incontinence, and improvement of other associated gastrointestinal symptoms. Safety was assessed with evaluation of clinical adverse effects and growth measurements.\n\n\nRESULTS\nCompliance rates were 72% for AFPFF and 96% for PEG+E. A significant improvement of constipation was seen in both groups. After 8 weeks, 77.8% of children treated with AFPFF and 83% of children treated with PEG+E had improved (P = .788). Neither PEG+E nor AFPFF caused any clinically significant side effects during the entire course of the study period.\n\n\nCONCLUSIONS\nIn this randomized study, we did not find any significant difference between the efficacy of AFPFF and PEG+E in the treatment of children with CFC. Both medications were proved to be safe for CFC treatment, but PEG+E was better accepted by children.",
"title": ""
},
{
"docid": "61096a0d1e94bb83f7bd067b06d69edd",
"text": "A main puzzle of deep neural networks (DNNs) revolves around the apparent absence of “overfitting”, defined in this paper as follows: the expected error does not get worse when increasing the number of neurons or of iterations of gradient descent. This is surprising because of the large capacity demonstrated by DNNs to fit randomly labeled data and the absence of explicit regularization. Recent results by Srebro et al. provide a satisfying solution of the puzzle for linear networks used in binary classification. They prove that minimization of loss functions such as the logistic, the cross-entropy and the exp-loss yields asymptotic, “slow” convergence to the maximum margin solution for linearly separable datasets, independently of the initial conditions. Here we prove a similar result for nonlinear multilayer DNNs near zero minima of the empirical loss. The result holds for exponential-type losses but not for the square loss. In particular, we prove that the normalized weight matrix at each layer of a deep network converges to a minimum norm solution (in the separable case). Our analysis of the dynamical system corresponding to gradient descent of a multilayer network suggests a simple criterion for predicting the generalization performance of different zero minimizers of the empirical loss. This material is based upon work supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216. ar X iv :1 80 6. 11 37 9v 1 [ cs .L G ] 2 9 Ju n 20 18 Theory IIIb: Generalization in Deep Networks Tomaso Poggio ∗1, Qianli Liao1, Brando Miranda1, Andrzej Banburski1, Xavier Boix1, and Jack Hidary2 1Center for Brains, Minds and Machines, MIT 2Alphabet (Google) X",
"title": ""
},
{
"docid": "76ede41b63f6c960729228c505026851",
"text": "Although the hip musculature is found to be very important in connecting the core to the lower extremities and in transferring forces from and to the core, it is proposed to leave the hip musculature out of consideration when talking about the concept of core stability. A low level of co-contraction of the trunk muscles is important for core stability. It provides a level of stiffness, which gives sufficient stability against minor perturbations. Next to this stiffness, direction-specific muscle reflex responses are also important in providing core stability, particularly when encountering sudden perturbations. It appears that most trunk muscles, both the local and global stabilization system, must work coherently to achieve core stability. The contributions of the various trunk muscles depend on the task being performed. In the search for a precise balance between the amount of stability and mobility, the role of sensory-motor control is much more important than the role of strength or endurance of the trunk muscles. The CNS creates a stable foundation for movement of the extremities through co-contraction of particular muscles. Appropriate muscle recruitment and timing is extremely important in providing core stability. No clear evidence has been found for a positive relationship between core stability and physical performance and more research in this area is needed. On the other hand, with respect to the relationship between core stability and injury, several studies have found an association between a decreased stability and a higher risk of sustaining a low back or knee injury. Subjects with such injuries have been shown to demonstrate impaired postural control, delayed muscle reflex responses following sudden trunk unloading and abnormal trunk muscle recruitment patterns. In addition, various relationships have been demonstrated between core stability, balance performance and activation characteristics of the trunk muscles. Most importantly, a significant correlation was found between poor balance performance in a sitting balance task and delayed firing of the trunk muscles during sudden perturbation. It was suggested that both phenomena are caused by proprioceptive deficits. The importance of sensory-motor control has implications for the development of measurement and training protocols. It has been shown that challenging propriocepsis during training activities, for example, by making use of unstable surfaces, leads to increased demands on trunk muscles, thereby improving core stability and balance. Various tests to directly or indirectly measure neuromuscular control and coordination have been developed and are discussed in the present article. Sitting balance performance and trunk muscle response times may be good indicators of core stability. In light of this, it would be interesting to quantify core stability using a sitting balance task, for example by making use of accelerometry. Further research is required to develop training programmes and evaluation methods that are suitable for various target groups.",
"title": ""
},
{
"docid": "ce32b34898427802abd4cc9c99eac0bc",
"text": "A circular polarizer is a single layer or multi-layer structure that converts linearly polarized waves into circularly polarized ones and vice versa. In this communication, a simple method based on transmission line circuit theory is proposed to model and design circular polarizers. This technique is more flexible than those previously presented in the way that it permits to design polarizers with the desired spacing between layers, while obtaining surfaces that may be easier to fabricate and less sensitive to fabrication errors. As an illustrating example, a modified version of the meander-line polarizer being twice as thin as its conventional counterpart is designed. Then, both polarizers are fabricated and measured. Results are shown and compared for normal and oblique incidence angles in the planes φ = 0° and φ = 90°.",
"title": ""
},
{
"docid": "9504571e66ea9071c6c227f61dfba98f",
"text": "Recent research has shown that although Reinforcement Learning (RL) can benefit from expert demonstration, it usually takes considerable efforts to obtain enough demonstration. The efforts prevent training decent RL agents with expert demonstration in practice. In this work, we propose Active Reinforcement Learning with Demonstration (ARLD), a new framework to streamline RL in terms of demonstration efforts by allowing the RL agent to query for demonstration actively during training. Under the framework, we propose Active Deep Q-Network, a novel query strategy which adapts to the dynamically-changing distributions during the RL training process by estimating the uncertainty of recent states. The expert demonstration data within Active DQN are then utilized by optimizing supervised max-margin loss in addition to temporal difference loss within usual DQN training. We propose two methods of estimating the uncertainty based on two state-of-the-art DQN models, namely the divergence of bootstrapped DQN and the variance of noisy DQN. The empirical results validate that both methods not only learn faster than other passive expert demonstration methods with the same amount of demonstration and but also reach super-expert level of performance across four different tasks.",
"title": ""
},
{
"docid": "1c9eb6b002b36e2607cc63e08151ee65",
"text": "Qualitative trend analysis (QTA) is a process-history-based data-driven technique that works by extracting important features (trends) from the measured signals and evaluating the trends. QTA has been widely used for process fault detection and diagnosis. Recently, Dash et al. (2001, 2003) presented an intervalhalving-based algorithm for off-line automatic trend extraction from a record of data, a fuzzy-logic based methodology for trend-matching and a fuzzy-rule-based framework for fault diagnosis (FD). In this article, an algorithm for on-line extraction of qualitative trends is proposed. A framework for on-line fault diagnosis using QTA also has been presented. Some of the issues addressed are (i) development of a robust and computationally efficient QTA-knowledge-base, (ii) fault detection, (iii) estimation of the fault occurrence time, (iv) on-line trend-matching and (v) updating the QTA-knowledge-base when a novel fault is diagnosed manually. Some results for FD of the Tennessee Eastman (TE) process using the developed framework are presented. Copyright c 2003 IFAC.",
"title": ""
},
{
"docid": "490114176c31592da4cac2bcf75f31f3",
"text": "In this letter, we present a compact ultrawideband (UWB) antenna printed on a 50.8-μm Kapton polyimide substrate. The antenna is fed by a linearly tapered coplanar waveguide (CPW) that provides smooth transitional impedance for improved matching. The proposed design is tuned to cover the 2.2-14.3-GHz frequency range that encompasses both the 2.45-GHz Industrial, Scientific, Medical (ISM) band and the standard 3.1-10.6-GHz UWB band. Furthermore, the antenna is compared to a conventional CPW-fed antenna to demonstrate the significance of the proposed design. A parametric study is first performed on the feed of the proposed design to achieve the desired impedance matching. Next, a prototype is fabricated; measurement results show good agreement with the simulated model. Moreover, the antenna demonstrates a very low susceptibility to performance degradation due to bending effects in terms of impedance matching and far-field radiation patterns, which makes it suitable for integration within modern flexible electronic devices.",
"title": ""
},
{
"docid": "e43814f288e1c5a84fb9d26b46fc7e37",
"text": "Achieving good performance in bytecoded language interpreters is difficult without sacrificing both simplicity and portability. This is due to the complexity of dynamic translation (\"just-in-time compilation\") of bytecodes into native code, which is the mechanism employed universally by high-performance interpreters.We demonstrate that a few simple techniques make it possible to create highly-portable dynamic translators that can attain as much as 70% the performance of optimized C for certain numerical computations. Translators based on such techniques can offer respectable performance without sacrificing either the simplicity or portability of much slower \"pure\" bytecode interpreters.",
"title": ""
},
{
"docid": "7419fa101c2471e225c976da196ed813",
"text": "A 4×40 Gb/s collaborative digital CDR is implemented in 28nm CMOS. The CDR is capable of recovering a low jitter clock from a partially-equalized or un-equalized eye by using a phase detection scheme that inherently filters out ISI edges. The CDR uses split feedback that simultaneously allows wider bandwidth and lower recovered clock jitter. A shared frequency tracking is also introduced that results in lower periodic jitter. Combining these techniques the CDR recovers a 10GHz clock from an eye containing 0.8UIpp DDJ and still achieves 1-10 MHz of tracking bandwidth while adding <; 300fs of jitter. Per lane CDR occupies only .06 mm2 and consumes 175 mW.",
"title": ""
},
{
"docid": "2f60e3d89966d4680796c1e4355de4bc",
"text": "This letter addresses the problem of energy detection of an unknown signal over a multipath channel. It starts with the no-diversity case, and presents some alternative closed-form expressions for the probability of detection to those recently reported in the literature. Detection capability is boosted by implementing both square-law combining and square-law selection diversity schemes",
"title": ""
},
{
"docid": "956ffd90cc922e77632b8f9f79f42a98",
"text": "Energy efficient actuators with adjustable stiffness: a review on AwAS, AwAS-II and CompACT VSA changing stiffness based on lever mechanism Amir jafari Nikos Tsagarakis Darwin G Caldwell Article information: To cite this document: Amir jafari Nikos Tsagarakis Darwin G Caldwell , (2015),\"Energy efficient actuators with adjustable stiffness: a review on AwAS, AwAS-II and CompACT VSA changing stiffness based on lever mechanism\", Industrial Robot: An International Journal, Vol. 42 Iss 3 pp. Permanent link to this document: http://dx.doi.org/10.1108/IR-12-2014-0433",
"title": ""
},
{
"docid": "d7ec0f978b066686edf9b930492dae71",
"text": "The association between MMORPG play (World of Warcraft) and psychological wellbeing was explored through a cross sectional, online questionnaire design testing the relationship between average hours playing per week and psychological wellbeing. Play motivation including achievement, social interaction and immersion as well as problematic use were tested as mediating variables. Participants (N = 565) completed online measures including demographics and play time, health, motivations to play and problematic use. Analysis revealed a negative correlation between playing time and psychological wellbeing. A Multiple Mediation Model showed the relationship specifically occurred where play was motivated by Immersion and/or where play was likely to have become problematic. No evidence of a direct effect of play on psychological wellbeing was found when taking these mediating pathways into account. Clinical and research implications are discussed.",
"title": ""
}
] | scidocsrr |
913cbf1c706a47094aabf3fc2f764150 | The Impacts of Social Media on Bitcoin Performance | [
{
"docid": "c02d207ed8606165e078de53a03bf608",
"text": "School of Business, University of Maryland (e-mail: mtrusov@rhsmith. umd.edu). Anand V. Bodapati is Associate Professor of Marketing (e-mail: [email protected]), and Randolph E. Bucklin is Peter W. Mullin Professor (e-mail: [email protected]), Anderson School of Management, University of California, Los Angeles. The authors are grateful to Christophe Van den Bulte and Dawn Iacobucci for their insightful and thoughtful comments on this work. John Hauser served as associate editor for this article. MICHAEL TRUSOV, ANAND V. BODAPATI, and RANDOLPH E. BUCKLIN*",
"title": ""
}
] | [
{
"docid": "7e40c98b9760e1f47a0140afae567b7f",
"text": "Low-level saliency cues or priors do not produce good enough saliency detection results especially when the salient object presents in a low-contrast background with confusing visual appearance. This issue raises a serious problem for conventional approaches. In this paper, we tackle this problem by proposing a multi-context deep learning framework for salient object detection. We employ deep Convolutional Neural Networks to model saliency of objects in images. Global context and local context are both taken into account, and are jointly modeled in a unified multi-context deep learning framework. To provide a better initialization for training the deep neural networks, we investigate different pre-training strategies, and a task-specific pre-training scheme is designed to make the multi-context modeling suited for saliency detection. Furthermore, recently proposed contemporary deep models in the ImageNet Image Classification Challenge are tested, and their effectiveness in saliency detection are investigated. Our approach is extensively evaluated on five public datasets, and experimental results show significant and consistent improvements over the state-of-the-art methods.",
"title": ""
},
{
"docid": "b78f1e6a5e93c1ad394b1cade293829f",
"text": "This paper presents a novel approach for creation of topographical function and object markers used within watershed segmentation. Typically, marker-driven watershed segmentation extracts seeds indicating the presence of objects or background at specific image locations. The marker locations are then set to be regional minima within the topological surface (typically, the gradient of the original input image), and the watershed algorithm is applied. In contrast, our approach uses two classifiers, one trained to produce markers, the other trained to produce object boundaries. As a result of using machine-learned pixel classification, the proposed algorithm is directly applicable to both single channel and multichannel image data. Additionally, rather than flooding the gradient image, we use the inverted probability map produced by the second aforementioned classifier as input to the watershed algorithm. Experimental results demonstrate the superior performance of the classification-driven watershed segmentation algorithm for the tasks of 1) image-based granulometry and 2) remote sensing",
"title": ""
},
{
"docid": "fb31ead676acdd048d699ddfb4ddd17a",
"text": "Software defects prediction aims to reduce software testing efforts by guiding the testers through the defect classification of software systems. Defect predictors are widely used in many organizations to predict software defects in order to save time, improve quality, testing and for better planning of the resources to meet the timelines. The application of statistical software testing defect prediction model in a real life setting is extremely difficult because it requires more number of data variables and metrics and also historical defect data to predict the next releases or new similar type of projects. This paper explains our statistical model, how it will accurately predict the defects for upcoming software releases or projects. We have used 20 past release data points of software project, 5 parameters and build a model by applying descriptive statistics, correlation and multiple linear regression models with 95% confidence intervals (CI). In this appropriate multiple linear regression model the R-square value was 0.91 and its Standard Error is 5.90%. The Software testing defect prediction model is now being used to predict defects at various testing projects and operational releases. We have found 90.76% precision between actual and predicted defects.",
"title": ""
},
{
"docid": "8e654ace264f8062caee76b0a306738c",
"text": "We present a fully fledged practical working application for a rule-based NLG system that is able to create non-trivial, human sounding narrative from structured data, in any language (e.g., English, German, Arabic and Finnish) and for any topic.",
"title": ""
},
{
"docid": "06672f6316878c80258ad53988a7e953",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use. Please contact the publisher regarding any further use of this work. Publisher contact information may be obtained at http://www.jstor.org/journals/astata.html. Each copy of any part of a JSTOR transmission must contain the same copyright notice that appears on the screen or printed page of such transmission.",
"title": ""
},
{
"docid": "fe57e844c12f7392bdd29a2e2396fc50",
"text": "With the help of modern information communication technology, mobile banking as a new type of financial services carrier can provide efficient and effective financial services for clients. Compare with Internet banking, mobile banking is more secure and user friendly. The implementation of wireless communication technologies may result in more complicated information security problems. Based on the principles of information security, this paper presented issues of information security of mobile banking and discussed the security protection measures such as: encryption technology, identity authentication, digital signature, WPKI technology.",
"title": ""
},
{
"docid": "64ba4467dc4495c6828f2322e8f415f2",
"text": "Due to the advancement of microoptoelectromechanical systems and microelectromechanical systems (MEMS) technologies, novel display architectures have emerged. One of the most successful and well-known examples is the Digital Micromirror Device from Texas Instruments, a 2-D array of bistable MEMS mirrors, which function as spatial light modulators for the projection display. This concept of employing an array of modulators is also seen in the grating light valve and the interferometric modulator display, where the modulation mechanism is based on optical diffraction and interference, respectively. Along with this trend comes the laser scanning display, which requires a single scanning device with a large scan angle and a high scan frequency. A special example in this category is the retinal scanning display, which is a head-up wearable module that laser-scans the image directly onto the retina. MEMS technologies are also found in other display-related research, such as stereoscopic (3-D) displays and plastic thin-film displays.",
"title": ""
},
{
"docid": "10f3cafc05b3fb3b235df34aebbe0e23",
"text": "To cope with monolithic controller replicas and the current unbalance situation in multiphase converters, a pseudo-ramp current balance technique is proposed to achieve time-multiplexing current balance in voltage-mode multiphase DC-DC buck converter. With only one modulation controller, silicon area and power consumption caused by the replicas of controller can be reduced significantly. Current balance accuracy can be further enhanced since the mismatches between different controllers caused by process, voltage, and temperature variations are removed. Moreover, the offset cancellation control embedded in the current matching unit is used to eliminate intrinsic offset voltage existing at the operational transconductance amplifier for improved current balance. An explicit model, which contains both voltage and current balance loops with non-ideal effects, is derived for analyzing system stability. Experimental results show that current difference between each phase can be decreased by over 83% under both heavy and light load conditions.",
"title": ""
},
{
"docid": "358faa358eb07b8c724efcdb72334dc7",
"text": "We present a novel simple technique for rapidly creating and presenting interactive immersive 3D exploration experiences of 2D pictures and images of natural and artificial landscapes. Various application domains, ranging from virtual exploration of works of art to street navigation systems, can benefit from the approach. The method, dubbed PEEP, is motivated by the perceptual characteristics of the human visual system in interpreting perspective cues and detecting relative angles between lines. It applies to the common perspective images with zero or one vanishing points, and does not require the extraction of a precise geometric description of the scene. Taking as input a single image without other information, an automatic analysis technique fits a simple but perceptually consistent parametric 3D representation of the viewed space, which is used to drive an indirect constrained exploration method capable to provide the illusion of 3D exploration with realistic monocular (perspective and motion parallax) and binocular (stereo) depth cues. The effectiveness of the method is demonstrated on a variety of casual pictures and exploration configurations, including mobile devices.",
"title": ""
},
{
"docid": "c0440776fdd2adab39e9a9ba9dd56741",
"text": "Corynebacterium glutamicum is an important industrial metabolite producer that is difficult to genetically engineer. Although the Streptococcus pyogenes (Sp) CRISPR-Cas9 system has been adapted for genome editing of multiple bacteria, it cannot be introduced into C. glutamicum. Here we report a Francisella novicida (Fn) CRISPR-Cpf1-based genome-editing method for C. glutamicum. CRISPR-Cpf1, combined with single-stranded DNA (ssDNA) recombineering, precisely introduces small changes into the bacterial genome at efficiencies of 86-100%. Large gene deletions and insertions are also obtained using an all-in-one plasmid consisting of FnCpf1, CRISPR RNA, and homologous arms. The two CRISPR-Cpf1-assisted systems enable N iterative rounds of genome editing in 3N+4 or 3N+2 days. A proof-of-concept, codon saturation mutagenesis at G149 of γ-glutamyl kinase relieves L-proline inhibition using Cpf1-assisted ssDNA recombineering. Thus, CRISPR-Cpf1-based genome editing provides a highly efficient tool for genetic engineering of Corynebacterium and other bacteria that cannot utilize the Sp CRISPR-Cas9 system.",
"title": ""
},
{
"docid": "9a6ce56536585e54d3e15613b2fa1197",
"text": "This paper discusses the Urdu script characteristics, Urdu Nastaleeq and a simple but a novel and robust technique to recognize the printed Urdu script without a lexicon. Urdu being a family of Arabic script is cursive and complex script in its nature, the main complexity of Urdu compound/connected text is not its connections but the forms/shapes the characters change when it is placed at initial, middle or at the end of a word. The characters recognition technique presented here is using the inherited complexity of Urdu script to solve the problem. A word is scanned and analyzed for the level of its complexity, the point where the level of complexity changes is marked for a character, segmented and feeded to Neural Networks. A prototype of the system has been tested on Urdu text and currently achieves 93.4% accuracy on the average. Keywords— Cursive Script, OCR, Urdu.",
"title": ""
},
{
"docid": "de63a161a9539931f834908477fb5ad1",
"text": "Network function virtualization introduces additional complexity for network management through the use of virtualization environments. The amount of managed data and the operational complexity increases, which makes service assurance and failure recovery harder to realize. In response to this challenge, the paper proposes a distributed management function, called virtualized network management function (vNMF), to detect failures related to virtualized services. vNMF detects the failures by monitoring physical-layer statistics that are processed with a self-organizing map algorithm. Experimental results show that memory leaks and network congestion failures can be successfully detected and that and the accuracy of failure detection can be significantly improved compared to common k-means clustering.",
"title": ""
},
{
"docid": "5c40b6fadf2f8f4b39c7adf1e894e600",
"text": "Monitoring the flow of traffic along network paths is essential for SDN programming and troubleshooting. For example, traffic engineering requires measuring the ingress-egress traffic matrix; debugging a congested link requires determining the set of sources sending traffic through that link; and locating a faulty device might involve detecting how far along a path the traffic makes progress. Past path-based monitoring systems operate by diverting packets to collectors that perform \"after-the-fact\" analysis, at the expense of large data-collection overhead. In this paper, we show how to do more efficient \"during-the-fact\" analysis. We introduce a query language that allows each SDN application to specify queries independently of the forwarding state or the queries of other applications. The queries use a regular-expression-based path language that includes SQL-like \"groupby\" constructs for count aggregation. We track the packet trajectory directly on the data plane by converting the regular expressions into an automaton, and tagging the automaton state (i.e., the path prefix) in each packet as it progresses through the network. The SDN policies that implement the path queries can be combined with arbitrary packet-forwarding policies supplied by other elements of the SDN platform. A preliminary evaluation of our prototype shows that our \"during-the-fact\" strategy reduces data-collection overhead over \"after-the-fact\" strategies.",
"title": ""
},
{
"docid": "0499618380bc33d376160a770683e807",
"text": "As multicore and manycore processor architectures are emerging and the core counts per chip continue to increase, it is important to evaluate and understand the performance and scalability of Parallel Discrete Event Simulation (PDES) on these platforms. Most existing architectures are still limited to a modest number of cores, feature simple designs and do not exhibit heterogeneity, making it impossible to perform comprehensive analysis and evaluations of PDES on these platforms. Instead, in this paper we evaluate PDES using a full-system cycle-accurate simulator of a multicore processor and memory subsystem. With this approach, it is possible to flexibly configure the simulator and perform exploration of the impact of architecture design choices on the performance of PDES. In particular, we answer the following four questions with respect to PDES performance and scalability: (1) For the same total chip area, what is the best design point in terms of the number of cores and the size of the on-chip cache? (2) What is the impact of using in-order vs. out-of-order cores? (3) What is the impact of a heterogeneous system with a mix of in-order and out-of-order cores? (4) What is the impact of object partitioning on PDES performance in heterogeneous systems? To answer these questions, we use MARSSx86 simulator for evaluating performance, and rely on Cacti and McPAT tools to derive the area and latency estimates for cores and caches.",
"title": ""
},
{
"docid": "5a601e08824185bafeb94ac432b6e92e",
"text": "Transforming a natural language (NL) question into a corresponding logical form (LF) is central to the knowledge-based question answering (KB-QA) task. Unlike most previous methods that achieve this goal based on mappings between lexicalized phrases and logical predicates, this paper goes one step further and proposes a novel embedding-based approach that maps NL-questions into LFs for KBQA by leveraging semantic associations between lexical representations and KBproperties in the latent space. Experimental results demonstrate that our proposed method outperforms three KB-QA baseline methods on two publicly released QA data sets.",
"title": ""
},
{
"docid": "e58882a41c4335caf957105df192edc5",
"text": "Credit card fraud is a serious problem in financial services. Billions of dollars are lost due to credit card fraud every year. There is a lack of research studies on analyzing real-world credit card data owing to confidentiality issues. In this paper, machine learning algorithms are used to detect credit card fraud. Standard models are first used. Then, hybrid methods which use AdaBoost and majority voting methods are applied. To evaluate the model efficacy, a publicly available credit card data set is used. Then, a real-world credit card data set from a financial institution is analyzed. In addition, noise is added to the data samples to further assess the robustness of the algorithms. The experimental results positively indicate that the majority voting method achieves good accuracy rates in detecting fraud cases in credit cards.",
"title": ""
},
{
"docid": "3d5bbe4dcdc3ad787e57583f7b621e36",
"text": "A miniaturized antenna employing a negative index metamaterial with modified split-ring resonator (SRR) and capacitance-loaded strip (CLS) unit cells is presented for Ultra wideband (UWB) microwave imaging applications. Four left-handed (LH) metamaterial (MTM) unit cells are located along one axis of the antenna as the radiating element. Each left-handed metamaterial unit cell combines a modified split-ring resonator (SRR) with a capacitance-loaded strip (CLS) to obtain a design architecture that simultaneously exhibits both negative permittivity and negative permeability, which ensures a stable negative refractive index to improve the antenna performance for microwave imaging. The antenna structure, with dimension of 16 × 21 × 1.6 mm³, is printed on a low dielectric FR4 material with a slotted ground plane and a microstrip feed. The measured reflection coefficient demonstrates that this antenna attains 114.5% bandwidth covering the frequency band of 3.4-12.5 GHz for a voltage standing wave ratio of less than 2 with a maximum gain of 5.16 dBi at 10.15 GHz. There is a stable harmony between the simulated and measured results that indicate improved nearly omni-directional radiation characteristics within the operational frequency band. The stable surface current distribution, negative refractive index characteristic, considerable gain and radiation properties make this proposed negative index metamaterial antenna optimal for UWB microwave imaging applications.",
"title": ""
},
{
"docid": "406e06e00799733c517aff88c9c85e0b",
"text": "Matrix rank minimization problem is in general NP-hard. The nuclear norm is used to substitute the rank function in many recent studies. Nevertheless, the nuclear norm approximation adds all singular values together and the approximation error may depend heavily on the magnitudes of singular values. This might restrict its capability in dealing with many practical problems. In this paper, an arctangent function is used as a tighter approximation to the rank function. We use it on the challenging subspace clustering problem. For this nonconvex minimization problem, we develop an effective optimization procedure based on a type of augmented Lagrange multipliers (ALM) method. Extensive experiments on face clustering and motion segmentation show that the proposed method is effective for rank approximation.",
"title": ""
},
{
"docid": "cef4c47b512eb4be7dcadcee35f0b2ca",
"text": "This paper presents a project that allows the Baxter humanoid robot to play chess against human players autonomously. The complete solution uses three main subsystems: computer vision based on a single camera embedded in Baxter's arm to perceive the game state, an open-source chess engine to compute the next move, and a mechatronics subsystem with a 7-DOF arm to manipulate the pieces. Baxter can play chess successfully in unconstrained environments by dynamically responding to changes in the environment. This implementation demonstrates Baxter's capabilities of vision-based adaptive control and small-scale manipulation, which can be applicable to numerous applications, while also contributing to the computer vision chess analysis literature.",
"title": ""
},
{
"docid": "986a0b910a4674b3c4bf92a668780dd6",
"text": "One of the most important attributes of the polymerase chain reaction (PCR) is its exquisite sensitivity. However, the high sensitivity of PCR also renders it prone to falsepositive results because of, for example, exogenous contamination. Good laboratory practice and specific anti-contamination strategies are essential to minimize the chance of contamination. Some of these strategies, for example, physical separation of the areas for the handling samples and PCR products, may need to be taken into consideration during the establishment of a laboratory. In this chapter, different strategies for the detection, avoidance, and elimination of PCR contamination will be discussed.",
"title": ""
}
] | scidocsrr |
ff57c158d0058d8f5b16f4049ec0210d | Supply Chain Contracting Under Competition : Bilateral Bargaining vs . Stackelberg | [
{
"docid": "6559d77de48d153153ce77b0e2969793",
"text": "1 This paper is an invited chapter to be published in the Handbooks in Operations Research and Management Science: Supply Chain Management, edited by Steve Graves and Ton de Kok and published by North-Holland. I would like to thank the many people that carefully read and commented on the ...rst draft of this manuscript: Ravi Anupindi, Fangruo Chen, Charles Corbett, James Dana, Ananth Iyer, Ton de Kok, Yigal Gerchak, Mark Ferguson, Marty Lariviere, Serguei Netessine, Ediel Pinker, Nils Rudi, Sridhar Seshadri, Terry Taylor and Kevin Weng. I am, of course, responsible for all remaining errors. Comments, of course, are still quite welcomed.",
"title": ""
}
] | [
{
"docid": "d0c5d24a5f68eb5448b45feeca098b87",
"text": "Age estimation has wide applications in video surveillance, social networking, and human-computer interaction. Many of the published approaches simply treat age estimation as an exact age regression problem, and thus do not leverage a distribution's robustness in representing labels with ambiguity such as ages. In this paper, we propose a new loss function, called mean-variance loss, for robust age estimation via distribution learning. Specifically, the mean-variance loss consists of a mean loss, which penalizes difference between the mean of the estimated age distribution and the ground-truth age, and a variance loss, which penalizes the variance of the estimated age distribution to ensure a concentrated distribution. The proposed mean-variance loss and softmax loss are jointly embedded into Convolutional Neural Networks (CNNs) for age estimation. Experimental results on the FG-NET, MORPH Album II, CLAP2016, and AADB databases show that the proposed approach outperforms the state-of-the-art age estimation methods by a large margin, and generalizes well to image aesthetics assessment.",
"title": ""
},
{
"docid": "211b858db72c962efaedf66f2ed9479d",
"text": "Along with the rapid development of information and communication technologies, educators are trying to keep up with the dramatic changes in our electronic environment. These days mobile technology, with popular devices such as iPhones, Android phones, and iPads, is steering our learning environment towards increasingly focusing on mobile learning or m-Learning. Currently, most interfaces employ keyboards, mouse or touch technology, but some emerging input-interfaces use voiceor marker-based gesture recognition. In the future, one of the cutting-edge technologies likely to be used is robotics. Robots are already being used in some classrooms and are receiving an increasing level of attention. Robots today are developed for special purposes, quite similar to personal computers in their early days. However, in the future, when mass production lowers prices, robots will bring about big changes in our society. In this column, the author focuses on educational service robots. Educational service robots for language learning and robot-assisted language learning (RALL) will be introduced, and the hardware and software platforms for RALL will be explored, as well as implications for future research.",
"title": ""
},
{
"docid": "0241cef84d46b942ee32fc7345874b90",
"text": "A total of eight appendices (Appendix 1 through Appendix 8) and an associated reference for these appendices have been placed here. In addition, there is currently a search engine located at to assist users in identifying BPR techniques and tools.",
"title": ""
},
{
"docid": "f3f4cb6e7e33f54fca58c14ce82d6b46",
"text": "In this letter, a novel slot array antenna with a substrate-integrated coaxial line (SICL) technique is proposed. The proposed antenna has radiation slots etched homolaterally along the mean line in the top metallic layer of SICL and achieves a compact transverse dimension. A prototype with 5 <inline-formula><tex-math notation=\"LaTeX\">$\\times$ </tex-math></inline-formula> 10 longitudinal slots is designed and fabricated with a multilayer liquid crystal polymer (LCP) process. A maximum gain of 15.0 dBi is measured at 35.25 GHz with sidelobe levels of <inline-formula> <tex-math notation=\"LaTeX\">$-$</tex-math></inline-formula> 28.2 dB (<italic>E</italic>-plane) and <inline-formula> <tex-math notation=\"LaTeX\">$-$</tex-math></inline-formula> 33.1 dB (<italic>H</italic>-plane). The close correspondence between experimental results and designed predictions on radiation patterns has validated the proposed excogitation in the end.",
"title": ""
},
{
"docid": "dea6ad0e1985260dbe7b70cef1c5da54",
"text": "The commonest mitochondrial diseases are probably those impairing the function of complex I of the respiratory electron transport chain. Such complex I impairment may contribute to various neurodegenerative disorders e.g. Parkinson's disease. In the following, using hepatocytes as a model cell, we have shown for the first time that the cytotoxicity caused by complex I inhibition by rotenone but not that caused by complex III inhibition by antimycin can be prevented by coenzyme Q (CoQ1) or menadione. Furthermore, complex I inhibitor cytotoxicity was associated with the collapse of the mitochondrial membrane potential and reactive oxygen species (ROS) formation. ROS scavengers or inhibitors of the mitochondrial permeability transition prevented cytotoxicity. The CoQ1 cytoprotective mechanism required CoQ1 reduction by DT-diaphorase (NQO1). Furthermore, the mitochondrial membrane potential and ATP levels were restored at low CoQ1 concentrations (5 microM). This suggests that the CoQ1H2 formed by NQO1 reduced complex III and acted as an electron bypass of the rotenone block. However cytoprotection still occurred at higher CoQ1 concentrations (>10 microM), which were less effective at restoring ATP levels but readily restored the cellular cytosolic redox potential (i.e. lactate: pyruvate ratio) and prevented ROS formation. This suggests that CoQ1 or menadione cytoprotection also involves the NQO1 catalysed reoxidation of NADH that accumulates as a result of complex I inhibition. The CoQ1H2 formed would then also act as a ROS scavenger.",
"title": ""
},
{
"docid": "579536fe3f52f4ed244f06210a9c2cd1",
"text": "OBJECTIVE\nThis review integrates recent advances in attachment theory, affective neuroscience, developmental stress research, and infant psychiatry in order to delineate the developmental precursors of posttraumatic stress disorder.\n\n\nMETHOD\nExisting attachment, stress physiology, trauma, and neuroscience literatures were collected using Index Medicus/Medline and Psychological Abstracts. This converging interdisciplinary data was used as a theoretical base for modelling the effects of early relational trauma on the developing central and autonomic nervous system activities that drive attachment functions.\n\n\nRESULTS\nCurrent trends that integrate neuropsychiatry, infant psychiatry, and clinical psychiatry are generating more powerful models of the early genesis of a predisposition to psychiatric disorders, including PTSD. Data are presented which suggest that traumatic attachments, expressed in episodes of hyperarousal and dissociation, are imprinted into the developing limbic and autonomic nervous systems of the early maturing right brain. These enduring structural changes lead to the inefficient stress coping mechanisms that lie at the core of infant, child, and adult posttraumatic stress disorders.\n\n\nCONCLUSIONS\nDisorganised-disoriented insecure attachment, a pattern common in infants abused in the first 2 years of life, is psychologically manifest as an inability to generate a coherent strategy for coping with relational stress. Early abuse negatively impacts the developmental trajectory of the right brain, dominant for attachment, affect regulation, and stress modulation, thereby setting a template for the coping deficits of both mind and body that characterise PTSD symptomatology. These data suggest that early intervention programs can significantly alter the intergenerational transmission of posttraumatic stress disorders.",
"title": ""
},
{
"docid": "793d41551a918a113f52481ff3df087e",
"text": "In this paper, we propose a novel deep captioning framework called Attention-based multimodal recurrent neural network with Visual Concept Transfer Mechanism (A-VCTM). There are three advantages of the proposed A-VCTM. (1) A multimodal layer is used to integrate the visual representation and context representation together, building a bridge that connects context information with visual information directly. (2) An attention mechanism is introduced to lead the model to focus on the regions corresponding to the next word to be generated (3) We propose a visual concept transfer mechanism to generate novel visual concepts and enrich the description sentences. Qualitative and quantitative results on two standard benchmarks, MSCOCO and Flickr30K show the effectiveness and practicability of the proposed A-VCTM framework.",
"title": ""
},
{
"docid": "ba75caedb1c9e65f14c2764157682bdf",
"text": "Data augmentation is usually adopted to increase the amount of training data, prevent overfitting and improve the performance of deep models. However, in practice, the effect of regular data augmentation, such as random image crop, is limited since it might introduce much uncontrolled background noise. In this paper, we propose WeaklySupervised Data Augmentation Network (WS-DAN) to explore the potential of data augmentation. Specifically, for each training image, we first generate attention maps to represent the object’s discriminative parts by weakly supervised Learning. Next, we randomly choose one attention map to augment this image, including attention crop and attention drop. Weakly-supervised data augmentation network improves the classification accuracy in two folds. On the one hand, images can be seen better since multiple object parts can be activated. On the other hand, attention regions provide spatial information of objects, which can make images be looked closer to further improve the performance. Comprehensive experiments in common fine-grained visual classification datasets show that our method surpasses the state-of-the-art methods by a large margin, which demonstrated the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "5ca75490c015685a1fc670b2ee5103ff",
"text": "The motion of the hand is the result of a complex interaction of extrinsic and intrinsic muscles of the forearm and hand. Whereas the origin of the extrinsic hand muscles is mainly located in the forearm, the origin (and insertion) of the intrinsic muscles is located within the hand itself. The intrinsic muscles of the hand include the lumbrical muscles I to IV, the dorsal and palmar interosseous muscles, the muscles of the thenar eminence (the flexor pollicis brevis, the abductor pollicis brevis, the adductor pollicis, and the opponens pollicis), as well as the hypothenar muscles (the abductor digiti minimi, flexor digiti minimi, and opponens digiti minimi). The thenar muscles control the motion of the thumb, and the hypothenar muscles control the motion of the little finger.1,2 The intrinsic muscles of the hand have not received much attention in the radiologic literature, despite their importance in moving the hand.3–7 Prospective studies on magnetic resonance (MR) imaging of the intrinsic muscles of the hand are rare, especially with a focus on new imaging techniques.6–8 However, similar to the other skeletal muscles, the intrinsic muscles of the hand can be affected by many conditions with resultant alterations in MR signal intensity ormorphology (e.g., with congenital abnormalities, inflammation, infection, trauma, neurologic disorders, and neoplastic conditions).1,9–12 MR imaging plays an important role in the evaluation of skeletal muscle disorders. Considered the most reliable diagnostic imaging tool, it can show subtle changes of signal and morphology, allow reliable detection and documentation of abnormalities, as well as provide a clear baseline for follow-up studies.13 It is also observer independent and allows second-opinion evaluation that is sometimes necessary, for example before a multidisciplinary discussion. Few studies exist on the clinical impact of MR imaging of the intrinsic muscles of the hand. A study by Andreisek et al in 19 patients with clinically evident or suspected intrinsic hand muscle abnormalities showed that MR imaging of the hand is useful and correlates well with clinical findings in patients with posttraumatic syndromes, peripheral neuropathies, myositis, and tumorous lesions, as well as congenital abnormalities.14,15 Because there is sparse literature on the intrinsic muscles of the hand, this review article offers a comprehensive review of muscle function and anatomy, describes normal MR imaging anatomy, and shows a spectrum of abnormal imaging findings.",
"title": ""
},
{
"docid": "c3b6d46a9e1490c720056682328586d5",
"text": "BACKGROUND\nBirth preparedness and complication preparedness (BPACR) is a key component of globally accepted safe motherhood programs, which helps ensure women to reach professional delivery care when labor begins and to reduce delays that occur when mothers in labor experience obstetric complications.\n\n\nOBJECTIVE\nThis study was conducted to assess practice and factors associated with BPACR among pregnant women in Aleta Wondo district in Sidama Zone, South Ethiopia.\n\n\nMETHODS\nA community based cross sectional study was conducted in 2007, on a sample of 812 pregnant women. Data were collected using pre-tested and structured questionnaire. The collected data were analyzed by SPSS for windows version 12.0.1. The women were asked whether they followed the desired five steps while pregnant: identified a trained birth attendant, identified a health facility, arranged for transport, identified blood donor and saved money for emergency. Taking at least two steps was considered being well-prepared.\n\n\nRESULTS\nAmong 743 pregnant women only a quarter (20.5%) of pregnant women identified skilled provider. Only 8.1% identified health facility for delivery and/or for obstetric emergencies. Preparedness for transportation was found to be very low (7.7%). Considerable (34.5%) number of families saved money for incurred costs of delivery and emergency if needed. Only few (2.3%) identified potential blood donor in case of emergency. Majority (87.9%) of the respondents reported that they intended to deliver at home, and only 60(8%) planned to deliver at health facilities. Overall only 17% of pregnant women were well prepared. The adjusted multivariate model showed that significant predictors for being well-prepared were maternal availing of antenatal services (OR = 1.91 95% CI; 1.21-3.01) and being pregnant for the first time (OR = 6.82, 95% CI; 1.27-36.55).\n\n\nCONCLUSION\nBPACR practice in the study area was found to be low. Effort to increase BPACR should focus on availing antenatal care services.",
"title": ""
},
{
"docid": "d8b2294b650274fc0269545296504432",
"text": "The multidisciplinary nature of information privacy research poses great challenges, since many concepts of information privacy have only been considered and developed through the lens of a particular discipline. It was our goal to conduct a multidisciplinary literature review. Following the three-stage approach proposed by Webster and Watson (2002), our methodology for identifying information privacy publications proceeded in three stages.",
"title": ""
},
{
"docid": "52ebf28afd8ae56816fb81c19e8890b6",
"text": "In this paper we aim to model the relationship between the text of a political blog post and the comment volume—that is, the total amount of response—that a post will receive. We seek to accurately identify which posts will attract a high-volume response, and also to gain insight about the community of readers and their interests. We design and evaluate variations on a latentvariable topic model that links text to comment volume. Introduction What makes a blog post noteworthy? One measure of the popularity or breadth of interest of a blog post is the extent to which readers of the blog are inspired to leave comments on the post. In this paper, we study the relationship between the text contents of a blog post and the volume of response it will receive from blog readers. Modeling this relationship has the potential to reveal the interests of a blog’s readership community to its authors, readers, advertisers, and scientists studying the blogosphere, but it may also be useful in improving technologies for blog search, recommendation, summarization, and so on. There are many ways to define “popularity” in blogging. In this study, we focus exclusively on the aggregate volume of comments. Commenting is an important activity in the political blogosphere, giving a blog site the potential to become a discussion forum. For a given blog post, we treat comment volume as a target output variable, and use generative probabilistic models to learn from past data the relationship between a blog post’s text contents and its comment volume. While many clues might be useful in predicting comment volume (e.g., the post’s author, the time the post appears, the length of the post, etc.) here we focus solely on the text contents of the post. We first describe the data and experimental framework, including a simple baseline. We then explore how latentvariable topic models can be used to make better predictions about comment volume. These models reveal that part of the variation in comment volume can be explained by the topic of the blog post, and elucidate the relative degrees to which readers find each topic comment-worthy. ∗The authors acknowledge research support from HP Labs and helpful comments from the reviewers and Jacob Eisenstein. Copyright c © 2010, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Predicting Comment Volume Our goal is to predict some measure of the volume of comments on a new blog post.1 Volume might be measured as the number of words in the comment section, the number of comments, the number of distinct users who leave comments, or a variety of other ways. Any of these can be affected by uninteresting factors—the time of day the post appears, a side conversation, a surge in spammer activity—but these quantities are easily measured. In research on blog data, comments are often ignored, and it is easy to see why: comments are very noisy, full of non-standard grammar and spelling, usually unedited, often cryptic and uninformative, at least to those outside the blog’s community. A few studies have focused on information in comments. Mishe and Glance (2006) showed the value of comments in characterizing the social repercussions of a post, including popularity and controversy. Their largescale user study correlated popularity and comment activity. Yano et al. 
(2009) sought to predict which members of blog’s community would leave comments, and in some cases used the text contents of the comments themselves to discover topics related to both words and user comment behavior. This work is similar, but we seek to predict the aggregate behavior of the blog post’s readers: given a new blog post, how much will the community comment on it?",
"title": ""
},
{
"docid": "40479536efec6311cd735f2bd34605d7",
"text": "The vast quantity of information brought by big data as well as the evolving computer hardware encourages success stories in the machine learning community. In the meanwhile, it poses challenges for the Gaussian process (GP), a well-known non-parametric and interpretable Bayesian model, which suffers from cubic complexity to training size. To improve the scalability while retaining the desirable prediction quality, a variety of scalable GPs have been presented. But they have not yet been comprehensively reviewed and discussed in a unifying way in order to be well understood by both academia and industry. To this end, this paper devotes to reviewing state-of-theart scalable GPs involving two main categories: global approximations which distillate the entire data and local approximations which divide the data for subspace learning. Particularly, for global approximations, we mainly focus on sparse approximations comprising prior approximations which modify the prior but perform exact inference, and posterior approximations which retain exact prior but perform approximate inference; for local approximations, we highlight the mixture/product of experts that conducts model averaging from multiple local experts to boost predictions. To present a complete review, recent advances for improving the scalability and model capability of scalable GPs are reviewed. Finally, the extensions and open issues regarding the implementation of scalable GPs in various scenarios are reviewed and discussed to inspire novel ideas for future research avenues.",
"title": ""
},
{
"docid": "68d8834770c34450adc96ed96299ae48",
"text": "This thesis presents a current-mode CMOS image sensor using lateral bipolar phototransistors (LPTs). The objective of this design is to improve the photosensitivity of the image sensor, and to provide photocurrent amplification at the circuit level. Lateral bipolar phototransistors can be implemented using a standard CMOS technology with no process modification. Under illumination, photogenerated carriers contribute to the base current, and the output emitter current is amplified through the transistor action of the bipolar device. Our analysis and simulation results suggest that the LPT output characteristics are strongly dependent on process parameters including base and emitter doping concentrations, as well as the device geometry such as the base width. For high current gain, a minimized base width is desired. The 2D effect of current crowding has also been discussed. Photocurrent can be further increased using amplifying current mirrors in the pixel and column structures. A prototype image sensor has been designed and fabricated in a standard 0.18μm CMOS technology. This design includes a photodiode image array and a LPT image array, each 70× 48 in dimension. For both arrays, amplifying current mirrors are included in the pixel readout structure and at the column level. Test results show improvements in both photosensitivity and conversion efficiency. The LPT also exhibits a better spectral response in the red region of the spectrum, because of the nwell/p-substrate depletion region. On the other hand, dark current, fixed pattern noise (FPN), and power consumption also increase due to current amplification. This thesis has demonstrated that the use of lateral bipolar phototransistors and amplifying current mirrors can help to overcome low photosensitivity and other deterioration imposed by technology scaling. The current-mode readout scheme with LPT-based photodetectors can be used as a front end to additional image processing circuits.",
"title": ""
},
{
"docid": "335220bbad7798a19403d393bcbbf7fb",
"text": "In today’s computerized and information-based society, text data is rich but messy. People are soaked with vast amounts of natural-language text data, ranging from news articles, social media post, advertisements, to a wide range of textual information from various domains (medical records, corporate reports). To turn such massive unstructured text data into actionable knowledge, one of the grand challenges is to gain an understanding of the factual information (e.g., entities, attributes, relations, events) in the text. In this tutorial, we introduce data-driven methods to construct structured information networks (where nodes are different types of entities attached with attributes, and edges are different relations between entities) for text corpora of different kinds (especially for massive, domain-specific text corpora) to represent their factual information. We focus on methods that are minimally-supervised, domain-independent, and languageindependent for fast network construction across various application domains (news, web, biomedical, reviews). We demonstrate on real datasets including news articles, scientific publications, tweets and reviews how these constructed networks aid in text analytics and knowledge discovery at a large scale.",
"title": ""
},
{
"docid": "d8eab1f244bd5f9e05eb706bb814d299",
"text": "Private participation in road projects is increasing around the world. The most popular franchising mechanism is a concession contract, which allows a private firm to charge tolls to road users during a pre-determined period in order to recover its investments. Concessionaires are usually selected through auctions at which candidates submit bids for tolls, payments to the government, or minimum term to hold the contract. This paper discusses, in the context of road franchising, how this mechanism does not generally yield optimal outcomes and it induces the frequent contract renegotiations observed in road projects. A new franchising mechanism is proposed, based on flexible-term contracts and auctions with bids for total net revenue and maintenance costs. This new mechanism improves outcomes compared to fixed-term concessions, by eliminating traffic risk and promoting the selection of efficient concessionaires.",
"title": ""
},
{
"docid": "155de33977b33d2f785fd86af0aa334f",
"text": "Model-based analysis tools, built on assumptions and simplifications, are difficult to handle smart grids with data characterized by volume, velocity, variety, and veracity (i.e., 4Vs data). This paper, using random matrix theory (RMT), motivates data-driven tools to perceive the complex grids in high-dimension; meanwhile, an architecture with detailed procedures is proposed. In algorithm perspective, the architecture performs a high-dimensional analysis and compares the findings with RMT predictions to conduct anomaly detections. Mean spectral radius (MSR), as a statistical indicator, is defined to reflect the correlations of system data in different dimensions. In management mode perspective, a group-work mode is discussed for smart grids operation. This mode breaks through regional limitations for energy flows and data flows, and makes advanced big data analyses possible. For a specific large-scale zone-dividing system with multiple connected utilities, each site, operating under the group-work mode, is able to work out the regional MSR only with its own measured/simulated data. The large-scale interconnected system, in this way, is naturally decoupled from statistical parameters perspective, rather than from engineering models perspective. Furthermore, a comparative analysis of these distributed MSRs, even with imperceptible different raw data, will produce a contour line to detect the event and locate the source. It demonstrates that the architecture is compatible with the block calculation only using the regional small database; beyond that, this architecture, as a data-driven solution, is sensitive to system situation awareness, and practical for real large-scale interconnected systems. Five case studies and their visualizations validate the designed architecture in various fields of power systems. To our best knowledge, this paper is the first attempt to apply big data technology into smart grids.",
"title": ""
},
{
"docid": "e75f830b902ca7d0e8d9e9fa03a62440",
"text": "Changes in synaptic connections are considered essential for learning and memory formation. However, it is unknown how neural circuits undergo continuous synaptic changes during learning while maintaining lifelong memories. Here we show, by following postsynaptic dendritic spines over time in the mouse cortex, that learning and novel sensory experience lead to spine formation and elimination by a protracted process. The extent of spine remodelling correlates with behavioural improvement after learning, suggesting a crucial role of synaptic structural plasticity in memory formation. Importantly, a small fraction of new spines induced by novel experience, together with most spines formed early during development and surviving experience-dependent elimination, are preserved and provide a structural basis for memory retention throughout the entire life of an animal. These studies indicate that learning and daily sensory experience leave minute but permanent marks on cortical connections and suggest that lifelong memories are stored in largely stably connected synaptic networks.",
"title": ""
},
{
"docid": "f96098449988c433fe8af20be0c468a5",
"text": "Programmatic assessment is an integral approach to the design of an assessment program with the intent to optimise its learning function, its decision-making function and its curriculum quality-assurance function. Individual methods of assessment, purposefully chosen for their alignment with the curriculum outcomes and their information value for the learner, the teacher and the organisation, are seen as individual data points. The information value of these individual data points is maximised by giving feedback to the learner. There is a decoupling of assessment moment and decision moment. Intermediate and high-stakes decisions are based on multiple data points after a meaningful aggregation of information and supported by rigorous organisational procedures to ensure their dependability. Self-regulation of learning, through analysis of the assessment information and the attainment of the ensuing learning goals, is scaffolded by a mentoring system. Programmatic assessment-for-learning can be applied to any part of the training continuum, provided that the underlying learning conception is constructivist. This paper provides concrete recommendations for implementation of programmatic assessment.",
"title": ""
},
{
"docid": "546296aecaee9963ee7495c9fbf76fd4",
"text": "In this paper, we propose text summarization method that creates text summary by definition of the relevance score of each sentence and extracting sentences from the original documents. While summarization this method takes into account weight of each sentence in the document. The essence of the method suggested is in preliminary identification of every sentence in the document with characteristic vector of words, which appear in the document, and calculation of relevance score for each sentence. The relevance score of sentence is determined through its comparison with all the other sentences in the document and with the document title by cosine measure. Prior to application of this method the scope of features is defined and then the weight of each word in the sentence is calculated with account of those features. The weights of features, influencing relevance of words, are determined using genetic algorithms.",
"title": ""
}
] | scidocsrr |
7a4fcb24bbaec04b6699f8dd33a65836 | Mental Health Problems in University Students : A Prevalence Study | [
{
"docid": "1497e47ada570797e879bbc4aba432a1",
"text": "The mental health of university students is an area of increasing concern worldwide. The objective of this study is to examine the prevalence of depression, anxiety and stress among a group of Turkish university students. Depression Anxiety and Stress Scale (DASS-42) completed anonymously in the students’ respective classrooms by 1,617 students. Depression, anxiety and stress levels of moderate severity or above were found in 27.1, 47.1 and 27% of our respondents, respectively. Anxiety and stress scores were higher among female students. First- and second-year students had higher depression, anxiety and stress scores than the others. Students who were satisfied with their education had lower depression, anxiety and stress scores than those who were not satisfied. The high prevalence of depression, anxiety and stress symptoms among university students is alarming. This shows the need for primary and secondary prevention measures, with the development of adequate and appropriate support services for this group.",
"title": ""
}
] | [
{
"docid": "0ef6e54d7190dde80ee7a30c5ecae0c3",
"text": "Games have been an important tool for motivating undergraduate students majoring in computer science and engineering. However, it is difficult to build an entire game for education from scratch, because the task requires high-level programming skills and expertise to understand the graphics and physics. Recently, there have been many different game artificial intelligence (AI) competitions, ranging from board games to the state-of-the-art video games (car racing, mobile games, first-person shooting games, real-time strategy games, and so on). The competitions have been designed such that participants develop their own AI module on top of public/commercial games. Because the materials are open to the public, it is quite useful to adopt them for an undergraduate course project. In this paper, we report our experiences using the Angry Birds AI Competition for such a project-based course. In the course, teams of students consider computer vision, strategic decision-making, resource management, and bug-free coding for their outcome. To promote understanding of game contents generation and extensive testing on the generalization abilities of the student's AI program, we developed software to help them create user-created levels. Students actively participated in the project and the final outcome was comparable with that of successful entries in the 2013 International Angry Birds AI Competition. Furthermore, it leads to the development of a new parallelized Angry Birds AI Competition platform with undergraduate students aiming to use advanced optimization algorithms for their controllers.",
"title": ""
},
{
"docid": "0fba05a38cb601a1b08e6105e6b949c1",
"text": "This paper discusses how to implement Paillier homomorphic encryption (HE) scheme in Java as an API. We first analyze existing Pailler HE libraries and discuss their limitations. We then design a comparatively accomplished and efficient Pailler HE Java library. As a proof of concept, we applied our Pailler HE library in an electronic voting system that allows the voting server to sum up the candidates' votes in the encrypted form with voters remain anonymous. Our library records an average of only 2766ms for each vote placement through HTTP POST request.",
"title": ""
},
{
"docid": "f1df8b69dfec944b474b9b26de135f55",
"text": "Background:There are currently two million cancer survivors in the United Kingdom, and in recent years this number has grown by 3% per annum. The aim of this paper is to provide long-term projections of cancer prevalence in the United Kingdom.Methods:National cancer registry data for England were used to estimate cancer prevalence in the United Kingdom in 2009. Using a model of prevalence as a function of incidence, survival and population demographics, projections were made to 2040. Different scenarios of future incidence and survival, and their effects on cancer prevalence, were also considered. Colorectal, lung, prostate, female breast and all cancers combined (excluding non-melanoma skin cancer) were analysed separately.Results:Assuming that existing trends in incidence and survival continue, the number of cancer survivors in the United Kingdom is projected to increase by approximately one million per decade from 2010 to 2040. Particularly large increases are anticipated in the oldest age groups, and in the number of long-term survivors. By 2040, almost a quarter of people aged at least 65 will be cancer survivors.Conclusion:Increasing cancer survival and the growing/ageing population of the United Kingdom mean that the population of survivors is likely to grow substantially in the coming decades, as are the related demands upon the health service. Plans must, therefore, be laid to ensure that the varied needs of cancer survivors can be met in the future.",
"title": ""
},
{
"docid": "28d19824a598ae20039f2ed5d8885234",
"text": "Soft-tissue augmentation of the face is an increasingly popular cosmetic procedure. In recent years, the number of available filling agents has also increased dramatically, improving the range of options available to physicians and patients. Understanding the different characteristics, capabilities, risks, and limitations of the available dermal and subdermal fillers can help physicians improve patient outcomes and reduce the risk of complications. The most popular fillers are those made from cross-linked hyaluronic acid (HA). A major and unique advantage of HA fillers is that they can be quickly and easily reversed by the injection of hyaluronidase into areas in which elimination of the filler is desired, either because there is excess HA in the area or to accelerate the resolution of an adverse reaction to treatment or to the product. In general, a lower incidence of complications (especially late-occurring or long-lasting effects) has been reported with HA fillers compared with the semi-permanent and permanent fillers. The implantation of nonreversible fillers requires more and different expertise on the part of the physician than does injection of HA fillers, and may produce effects and complications that are more difficult or impossible to manage even by the use of corrective surgery. Most practitioners use HA fillers as the foundation of their filler practices because they have found that HA fillers produce excellent aesthetic outcomes with high patient satisfaction, and a low incidence and severity of complications. Only limited subsets of physicians and patients have been able to justify the higher complexity and risks associated with the use of nonreversible fillers.",
"title": ""
},
{
"docid": "a574355d46c6e26efe67aefe2869a0cb",
"text": "The continuously increasing cost of the US healthcare system has received significant attention. Central to the ideas aimed at curbing this trend is the use of technology in the form of the mandate to implement electronic health records (EHRs). EHRs consist of patient information such as demographics, medications, laboratory test results, diagnosis codes, and procedures. Mining EHRs could lead to improvement in patient health management as EHRs contain detailed information related to disease prognosis for large patient populations. In this article, we provide a structured and comprehensive overview of data mining techniques for modeling EHRs. We first provide a detailed understanding of the major application areas to which EHR mining has been applied and then discuss the nature of EHR data and its accompanying challenges. Next, we describe major approaches used for EHR mining, the metrics associated with EHRs, and the various study designs. With this foundation, we then provide a systematic and methodological organization of existing data mining techniques used to model EHRs and discuss ideas for future research.",
"title": ""
},
{
"docid": "02e63f2279dbd980c6689bec5ea18411",
"text": "Reflection photoplethysmography (PPG) using 530 nm (green) wavelength light has the potential to be a superior method for monitoring heart rate (HR) during normal daily life due to its relative freedom from artifacts. However, little is known about the accuracy of pulse rate (PR) measured by 530 nm light PPG during motion. Therefore, we compared the HR measured by electrocadiography (ECG) as a reference with PR measured by 530, 645 (red), and 470 nm (blue) wavelength light PPG during baseline and while performing hand waving in 12 participants. In addition, we examined the change of signal-to-noise ratio (SNR) by motion for each of the three wavelengths used for the PPG. The results showed that the limit of agreement in Bland-Altman plots between the HR measured by ECG and PR measured by 530 nm light PPG (±0.61 bpm) was smaller than that achieved when using 645 and 470 nm light PPG (±3.20 bpm and ±2.23 bpm, respectively). The ΔSNR (the difference between baseline and task values) of 530 and 470nm light PPG was significantly smaller than ΔSNR for red light PPG. In conclusion, 530 nm light PPG could be a more suitable method than 645 and 470nm light PPG for monitoring HR in normal daily life.",
"title": ""
},
{
"docid": "5ccf0b3f871f8362fccd4dbd35a05555",
"text": "Recent evidence suggests a positive impact of bilingualism on cognition, including later onset of dementia. However, monolinguals and bilinguals might have different baseline cognitive ability. We present the first study examining the effect of bilingualism on later-life cognition controlling for childhood intelligence. We studied 853 participants, first tested in 1947 (age = 11 years), and retested in 2008-2010. Bilinguals performed significantly better than predicted from their baseline cognitive abilities, with strongest effects on general intelligence and reading. Our results suggest a positive effect of bilingualism on later-life cognition, including in those who acquired their second language in adulthood.",
"title": ""
},
{
"docid": "736ee2bed70510d77b1f9bb13b3bee68",
"text": "Yes, they do. This work investigates a perspective for deep learning: whether different normalization layers in a ConvNet require different normalizers. This is the first step towards understanding this phenomenon. We allow each convolutional layer to be stacked before a switchable normalization (SN) that learns to choose a normalizer from a pool of normalization methods. Through systematic experiments in ImageNet, COCO, Cityscapes, and ADE20K, we answer three questions: (a) Is it useful to allow each normalization layer to select its own normalizer? (b) What impacts the choices of normalizers? (c) Do different tasks and datasets prefer different normalizers? Our results suggest that (1) using distinct normalizers improves both learning and generalization of a ConvNet; (2) the choices of normalizers are more related to depth and batch size, but less relevant to parameter initialization, learning rate decay, and solver; (3) different tasks and datasets have different behaviors when learning to select normalizers.",
"title": ""
},
{
"docid": "c60c83c93577377bad43ed1972079603",
"text": "In this contribution, a set of robust GaN MMIC T/R switches and low-noise amplifiers, all based on the same GaN process, is presented. The target operating bandwidths are the X-band and the 2-18 GHz bandwidth. Several robustness tests on the fabricated MMICs demonstrate state-ofthe-art survivability to CW input power levels. The development of high-power amplifiers, robust low-noise amplifiers and T/R switches on the same GaN monolithic process will bring to the next generation of fully-integrated T/R module",
"title": ""
},
{
"docid": "57e9467bfbc4e891acd00dcdac498e0e",
"text": "Cross-cultural perspectives have brought renewed interest in the social aspects of the self and the extent to which individuals define themselves in terms of their relationships to others and to social groups. This article provides a conceptual review of research and theory of the social self, arguing that the personal, relational, and collective levels of self-definition represent distinct forms of selfrepresentation with different origins, sources of self-worth, and social motivations. A set of 3 experiments illustrates haw priming of the interpersonal or collective \"we\" can alter spontaneous judgments of similarity and self-descriptions.",
"title": ""
},
{
"docid": "e50c921d664f970daa8050bad282e066",
"text": "In the complex decision-environments that characterize e-business settings, it is important to permit decision-makers to proactively manage data quality. In this paper we propose a decision-support framework that permits decision-makers to gauge quality both in an objective (context-independent) and in a context-dependent manner. The framework is based on the information product approach and uses the Information Product Map (IPMAP). We illustrate its application in evaluating data quality using completeness—a data quality dimension that is acknowledged as important. A decision-support tool (IPView) for managing data quality that incorporates the proposed framework is also described. D 2005 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "01c267fbce494fcfabeabd38f18c19a3",
"text": "New insights in the programming physics of silicided polysilicon fuses integrated in 90 nm CMOS have led to a programming time of 100 ns, while achieving a resistance increase of 107. This is an order of magnitude better than any previously published result for the programming time and resistance increase individually. Simple calculations and TEM-analyses substantiate the proposed programming mechanism. The advantage of a rectangular fuse head over a tapered fuse head is shown and explained",
"title": ""
},
{
"docid": "e875d4a88e73984e37f5ce9ffe543791",
"text": "A set of face stimuli called the NimStim Set of Facial Expressions is described. The goal in creating this set was to provide facial expressions that untrained individuals, characteristic of research participants, would recognize. This set is large in number, multiracial, and available to the scientific community online. The results of psychometric evaluations of these stimuli are presented. The results lend empirical support for the validity and reliability of this set of facial expressions as determined by accurate identification of expressions and high intra-participant agreement across two testing sessions, respectively.",
"title": ""
},
{
"docid": "829eafadf393a66308db452eeef617d5",
"text": "The goal of creating non-biological intelligence has been with us for a long time, predating the nominal 1956 establishment of the field of artificial intelligence by centuries or, under some definitions, even by millennia. For much of this history it was reasonable to recast the goal of “creating” intelligence as that of “designing” intelligence. For example, it would have been reasonable in the 17th century, as Leibnitz was writing about reasoning as a form of calculation, to think that the process of creating artificial intelligence would have to be something like the process of creating a waterwheel or a pocket watch: first understand the principles, then use human intelligence to devise a design based on the principles, and finally build a system in accordance with the design. At the dawn of the 19th century William Paley made such assumptions explicit, arguing that intelligent designers are necessary for the production of complex adaptive systems. And then, of course, Paley was soundly refuted by Charles Darwin in 1859. Darwin showed how complex and adaptive systems can arise naturally from a process of selection acting on random variation. That is, he showed that complex and adaptive design could be created without an intelligent designer. On the basis of evidence from paleontology, molecular biology, and evolutionary theory we now understand that nearly all of the interesting features of biological agents, including intelligence, have arisen through roughly Darwinian evolutionary processes (with a few important refinements, some of which are mentioned below). But there are still some holdouts for the pre-Darwinian view. A recent survey in the United States found that 42% of respondents expressed a belief that “Life on Earth has existed in its present form since the beginning of time” [7], and these views are supported by powerful political forces including a stridently anti-science President. These shocking political realities are, however, beyond the scope of the present essay. This essay addresses a more subtle form of pre-Darwinian thinking that occurs even among the scientifically literate, and indeed even among highly trained scientists conducting advanced AI research. Those who engage in this form of pre-Darwinian thinking accept the evidence for the evolution of terrestrial life but ignore or even explicitly deny the power of evolutionary processes to produce adaptive complexity in other contexts. Within the artificial intelligence research community those who engage in this form of thinking ignore or deny the power of evolutionary processes to create machine intelligence. Before exploring this complaint further it is worth asking whether an evolved artificial intelligence would even serve the broader goals of AI as a field. Every AI text opens by defining the field, and some of the proffered definitions are explicitly oriented toward design—presumably design by intelligent humans. For example Dean et al. define AI as “the design and study of computer programs that behave intelligently” [2, p. 1]. Would the field, so defined, be served by the demonstration of an evolved artificial intelligence? It would insofar as we could study the evolved system and particularly if we could use our resulting understanding as the basis for future designs. So even the most design-oriented AI researchers should be interested in evolved artificial intelligence if it can in fact be created.",
"title": ""
},
{
"docid": "8d176debd26505d424dcbf8f5cfdb4d1",
"text": "We present a system for training deep neural networks for object detection using synthetic images. To handle the variability in real-world data, the system relies upon the technique of domain randomization, in which the parameters of the simulator-such as lighting, pose, object textures, etc.-are randomized in non-realistic ways to force the neural network to learn the essential features of the object of interest. We explore the importance of these parameters, showing that it is possible to produce a network with compelling performance using only non-artistically-generated synthetic data. With additional fine-tuning on real data, the network yields better performance than using real data alone. This result opens up the possibility of using inexpensive synthetic data for training neural networks while avoiding the need to collect large amounts of hand-annotated real-world data or to generate high-fidelity synthetic worlds-both of which remain bottlenecks for many applications. The approach is evaluated on bounding box detection of cars on the KITTI dataset.",
"title": ""
},
{
"docid": "97b578720957155514ca9fbe68c03eed",
"text": "Autonomous navigation in unstructured environments like forest or country roads with dynamic objects remains a challenging task, particularly with respect to the perception of the environment using multiple different sensors.",
"title": ""
},
{
"docid": "52c1300a818340065ca16d02343f13fe",
"text": "Article history: Received 9 September 2014 Received in revised form 25 January 2015 Accepted 9 February 2015 Available online xxxx",
"title": ""
},
{
"docid": "419499ced8902a00909c32db352ea7f5",
"text": "Software defined networks provide new opportunities for automating the process of network debugging. Many tools have been developed to verify the correctness of network configurations on the control plane. However, due to software bugs and hardware faults of switches, the correctness of control plane may not readily translate into that of data plane. To bridge this gap, we present VeriDP, which can monitor \"whether actual forwarding behaviors are complying with network configurations\". Given that policies are well-configured, operators can leverage VeriDP to monitor the correctness of the network data plane. In a nutshell, VeriDP lets switches tag packets that they forward, and report tags together with headers to the verification server before the packets leave the network. The verification server pre-computes all header-to-tag mappings based on the configuration, and checks whether the reported tags agree with the mappings. We prototype VeriDP with both software and hardware OpenFlow switches, and use emulation to show that VeriDP can detect common data plane fault including black holes and access violations, with a minimal impact on the data plane.",
"title": ""
},
{
"docid": "186d9fc899fdd92c7e74615a2a054a03",
"text": "In this paper, we propose an illumination-robust face recognition system via local directional pattern images. Usually, local pattern descriptors including local binary pattern and local directional pattern have been used in the field of the face recognition and facial expression recognition, since local pattern descriptors have important properties to be robust against the illumination changes and computational simplicity. Thus, this paper represents the face recognition approach that employs the local directional pattern descriptor and twodimensional principal analysis algorithms to achieve enhanced recognition accuracy. In particular, we propose a novel methodology that utilizes the transformed image obtained from local directional pattern descriptor as the direct input image of two-dimensional principal analysis algorithms, unlike that most of previous works employed the local pattern descriptors to acquire the histogram features. The performance evaluation of proposed system was performed using well-known approaches such as principal component analysis and Gabor-wavelets based on local binary pattern, and publicly available databases including the Yale B database and the CMU-PIE database were employed. Through experimental results, the proposed system showed the best recognition accuracy compared to different approaches, and we confirmed the effectiveness of the proposed method under varying lighting conditions.",
"title": ""
},
{
"docid": "6fc870c703611e07519ce5fe956c15d1",
"text": "Severe weather conditions such as rain and snow adversely affect the visual quality of images captured under such conditions thus rendering them useless for further usage and sharing. In addition, such degraded images drastically affect performance of vision systems. Hence, it is important to solve the problem of single image de-raining/de-snowing. However, this is a difficult problem to solve due to its inherent ill-posed nature. Existing approaches attempt to introduce prior information to convert it into a well-posed problem. In this paper, we investigate a new point of view in addressing the single image de-raining problem. Instead of focusing only on deciding what is a good prior or a good framework to achieve good quantitative and qualitative performance, we also ensure that the de-rained image itself does not degrade the performance of a given computer vision algorithm such as detection and classification. In other words, the de-rained result should be indistinguishable from its corresponding clear image to a given discriminator. This criterion can be directly incorporated into the optimization framework by using the recently introduced conditional generative adversarial networks (GANs). To minimize artifacts introduced by GANs and ensure better visual quality, a new refined loss function is introduced. Based on this, we propose a novel single image de-raining method called Image De-raining Conditional General Adversarial Network (ID-CGAN), which considers quantitative, visual and also discriminative performance into the objective function. Experiments evaluated on synthetic images and real images show that the proposed method outperforms many recent state-of-the-art single image de-raining methods in terms of quantitative and visual performance.",
"title": ""
}
] | scidocsrr |
04836cd980c5022b30d361d29baf4097 | A wearable system that knows who wears it | [
{
"docid": "ed9e22167d3e9e695f67e208b891b698",
"text": "ÐIn k-means clustering, we are given a set of n data points in d-dimensional space R and an integer k and the problem is to determine a set of k points in R, called centers, so as to minimize the mean squared distance from each data point to its nearest center. A popular heuristic for k-means clustering is Lloyd's algorithm. In this paper, we present a simple and efficient implementation of Lloyd's k-means clustering algorithm, which we call the filtering algorithm. This algorithm is easy to implement, requiring a kd-tree as the only major data structure. We establish the practical efficiency of the filtering algorithm in two ways. First, we present a data-sensitive analysis of the algorithm's running time, which shows that the algorithm runs faster as the separation between clusters increases. Second, we present a number of empirical studies both on synthetically generated data and on real data sets from applications in color quantization, data compression, and image segmentation. Index TermsÐPattern recognition, machine learning, data mining, k-means clustering, nearest-neighbor searching, k-d tree, computational geometry, knowledge discovery.",
"title": ""
},
{
"docid": "b7aca26bc09bbc9376fefd1befec2b28",
"text": "Wearable sensor systems have been used in the ubiquitous computing community and elsewhere for applications such as activity and gesture recognition, health and wellness monitoring, and elder care. Although the power consumption of accelerometers has already been highly optimized, this work introduces a novel sensing approach which lowers the power requirement for motion sensing by orders of magnitude. We present an ultra-low-power method for passively sensing body motion using static electric fields by measuring the voltage at any single location on the body. We present the feasibility of using this sensing approach to infer the amount and type of body motion anywhere on the body and demonstrate an ultra-low-power motion detector used to wake up more power-hungry sensors. The sensing hardware consumes only 3.3 μW, and wake-up detection is done using an additional 3.3 μW (6.6 μW total).",
"title": ""
},
{
"docid": "c3525081c0f4eec01069dd4bd5ef12ab",
"text": "More than twelve years have elapsed since the first public release of WEKA. In that time, the software has been rewritten entirely from scratch, evolved substantially and now accompanies a text on data mining [35]. These days, WEKA enjoys widespread acceptance in both academia and business, has an active community, and has been downloaded more than 1.4 million times since being placed on Source-Forge in April 2000. This paper provides an introduction to the WEKA workbench, reviews the history of the project, and, in light of the recent 3.6 stable release, briefly discusses what has been added since the last stable version (Weka 3.4) released in 2003.",
"title": ""
}
] | [
{
"docid": "19a47559acfc6ee0ebb0c8e224090e28",
"text": "Learning from streams of evolving and unbounded data is an important problem, for example in visual surveillance or internet scale data. For such large and evolving real-world data, exhaustive supervision is impractical, particularly so when the full space of classes is not known in advance therefore joint class discovery (exploration) and boundary learning (exploitation) becomes critical. Active learning has shown promise in jointly optimising exploration-exploitation with minimal human supervision. However, existing active learning methods either rely on heuristic multi-criteria weighting or are limited to batch processing. In this paper, we present a new unified framework for joint exploration-exploitation active learning in streams without any heuristic weighting. Extensive evaluation on classification of various image and surveillance video datasets demonstrates the superiority of our framework over existing methods.",
"title": ""
},
{
"docid": "8d2d3b326c246bde95b360c9dcf6540f",
"text": "A field experiment was carried out at the Shenyang Experimental Station of Ecology (CAS) in order to study the effects of slow-release urea fertilizers high polymer-coated urea (SRU1), SRU1 mixed with dicyandiamide DCD (SRU2), and SRU1 mixed with calcium carbide CaC2 (SRU3) on urease activity, microbial biomass C and N, and nematode communities in an aquic brown soil during the maize growth period. The results demonstrated that the application of slow-release urea fertilizers inhibits soil urease activity and increases the soil NH4 +-N content. Soil available N increment could promote its immobilization by microorganisms. Determination of soil microbial biomass N indicated that a combined application of coated urea and nitrification inhibitors increased the soil active N pool. The population of predators/omnivores indicated that treatment with SRU2 could provide enough soil NH4 +-N to promote maize growth and increased the food resource for the soil fauna compared with the other treatments.",
"title": ""
},
{
"docid": "d337f149d3e52079c56731f4f3d8ea3e",
"text": "Contextual word representations derived from pre-trained bidirectional language models (biLMs) have recently been shown to provide significant improvements to the state of the art for a wide range of NLP tasks. However, many questions remain as to how and why these models are so effective. In this paper, we present a detailed empirical study of how the choice of neural architecture (e.g. LSTM, CNN, or self attention) influences both end task accuracy and qualitative properties of the representations that are learned. We show there is a tradeoff between speed and accuracy, but all architectures learn high quality contextual representations that outperform word embeddings for four challenging NLP tasks. Additionally, all architectures learn representations that vary with network depth, from exclusively morphological based at the word embedding layer through local syntax based in the lower contextual layers to longer range semantics such coreference at the upper layers. Together, these results suggest that unsupervised biLMs, independent of architecture, are learning much more about the structure of language than previously appreciated.",
"title": ""
},
{
"docid": "26befbb36d5d64ff0c075b38cde32d6f",
"text": "This study deals with the problems related to the translation of political texts in the theoretical framework elaborated by the researchers working in the field of translation studies and reflects on the terminological peculiarities of the special language used for this text type . Consideration of the theoretical framework is followed by the analysis of a specific text spoken then written in English and translated into Hungarian and Romanian. The conclusions are intended to highlight the fact that there are no recipes for translating a political speech, because translation is not only a technical process that uses translation procedures and applies transfer operations, but also a matter of understanding cultural, historical and political situations and their significance.",
"title": ""
},
{
"docid": "8492ba0660b06ca35ab3f4e96f3a33c3",
"text": "Young men who have sex with men (YMSM) are increasingly using mobile smartphone applications (“apps”), such as Grindr, to meet sex partners. A probability sample of 195 Grindr-using YMSM in Southern California were administered an anonymous online survey to assess patterns of and motivations for Grindr use in order to inform development and tailoring of smartphone-based HIV prevention for YMSM. The number one reason for using Grindr (29 %) was to meet “hook ups.” Among those participants who used both Grindr and online dating sites, a statistically significantly greater percentage used online dating sites for “hook ups” (42 %) compared to Grindr (30 %). Seventy percent of YMSM expressed a willingness to participate in a smartphone app-based HIV prevention program. Development and testing of smartphone apps for HIV prevention delivery has the potential to engage YMSM in HIV prevention programming, which can be tailored based on use patterns and motivations for use. Los hombres que mantienen relaciones sexuales con hombres (YMSM por las siglas en inglés de Young Men Who Have Sex with Men) están utilizando más y más aplicaciones para teléfonos inteligentes (smartphones), como Grindr, para encontrar parejas sexuales. En el Sur de California, se administró de forma anónima un sondeo en internet a una muestra de probabilidad de 195 YMSM usuarios de Grindr, para evaluar los patrones y motivaciones del uso de Grindr, con el fin de utilizar esta información para el desarrollo y personalización de prevención del VIH entre YMSM con base en teléfonos inteligentes. La principal razón para utilizar Grindr (29 %) es para buscar encuentros sexuales casuales (hook-ups). Entre los participantes que utilizan tanto Grindr como otro sitios de citas online, un mayor porcentaje estadísticamente significativo utilizó los sitios de citas online para encuentros casuales sexuales (42 %) comparado con Grindr (30 %). Un setenta porciento de los YMSM expresó su disposición para participar en programas de prevención del VIH con base en teléfonos inteligentes. El desarrollo y evaluación de aplicaciones para teléfonos inteligentes para el suministro de prevención del VIH tiene el potencial de involucrar a los YMSM en la programación de la prevención del VIH, que puede ser adaptada según los patrones y motivaciones de uso.",
"title": ""
},
{
"docid": "ddb77ec8a722c50c28059d03919fb299",
"text": "Among the smart cities applications, optimizing lottery games is one of the urgent needs to ensure their fairness and transparency. The emerging blockchain technology shows a glimpse of solutions to fairness and transparency issues faced by lottery industries. This paper presents the design of a blockchain-based lottery system for smart cities applications. We adopt the smart contracts of blockchain technology and the cryptograph blockchain model, Hawk [8], to design the blockchain-based lottery system, FairLotto, for future smart cities applications. Fairness, transparency, and privacy of the proposed blockchain-based lottery system are discussed and ensured.",
"title": ""
},
{
"docid": "cfadfcbc3929b5552119a4f8cb211b33",
"text": "The production and dissemination of semantic 3D city models is rapidly increasing benefiting a growing number of use cases. However, their availability in multiple LODs and in the CityGML format is still problematic in practice. This hinders applications and experiments where multi-LOD datasets are required as input, for instance, to determine the performance of different LODs in a spatial analysis. An alternative approach to obtain 3D city models is to generate them with procedural modelling, which is—as we discuss in this paper— well suited as a method to source multi-LOD datasets useful for a number of applications. However, procedural modelling has not yet been employed for this purpose. Therefore, we have developed RANDOM3DCITY, an experimental procedural modelling engine for generating synthetic datasets of buildings and other urban features. The engine is designed to produce models in CityGML and does so in multiple LODs. Besides the generation of multiple geometric LODs, we implement the realisation of multiple levels of spatiosemantic coherence, geometric reference variants, and indoor representations. As a result of their permutations, each building can be generated in 392 different CityGML representations, an unprecedented number of modelling variants of the same feature. The datasets produced by RANDOM3DCITY are suited for several applications, as we show in this paper with documented uses. The developed engine is available under an open-source licence at Github at http://github.com/tudelft3d/Random3Dcity.",
"title": ""
},
{
"docid": "58b4320c2cf52c658275eaa4748dede5",
"text": "Backing-out and heading-out maneuvers in perpendicular or angle parking lots are one of the most dangerous maneuvers, especially in cases where side parked cars block the driver view of the potential traffic flow. In this paper, a new vision-based Advanced Driver Assistance System (ADAS) is proposed to automatically warn the driver in such scenarios. A monocular grayscale camera was installed at the back-right side of a vehicle. A Finite State Machine (FSM) defined according to three CAN Bus variables and a manual signal provided by the user is used to handle the activation/deactivation of the detection module. The proposed oncoming traffic detection module computes spatio-temporal images from a set of predefined scan-lines which are related to the position of the road. A novel spatio-temporal motion descriptor is proposed (STHOL) accounting for the number of lines, their orientation and length of the spatio-temporal images. Some parameters of the proposed descriptor are adapted for nighttime conditions. A Bayesian framework is then used to trigger the warning signal using multivariate normal density functions. Experiments are conducted on image data captured from a vehicle parked at different location of an urban environment, including both daytime and nighttime lighting conditions. We demonstrate that the proposed approach provides robust results maintaining processing rates close to real time.",
"title": ""
},
{
"docid": "9a2b499cf1ed10403a55f2557c00dedf",
"text": "Polar codes are a recently discovered family of capacity-achieving codes that are seen as a major breakthrough in coding theory. Motivated by the recent rapid progress in the theory of polar codes, we propose a semi-parallel architecture for the implementation of successive cancellation decoding. We take advantage of the recursive structure of polar codes to make efficient use of processing resources. The derived architecture has a very low processing complexity while the memory complexity remains similar to that of previous architectures. This drastic reduction in processing complexity allows very large polar code decoders to be implemented in hardware. An N=217 polar code successive cancellation decoder is implemented in an FPGA. We also report synthesis results for ASIC.",
"title": ""
},
{
"docid": "9def5ba1b4b262b8eb71123023c00e36",
"text": "OBJECTIVE\nThe primary objective of this study was to compare clinically and radiographically the efficacy of autologous platelet rich fibrin (PRF) and autogenous bone graft (ABG) obtained using bone scrapper in the treatment of intrabony periodontal defects.\n\n\nMATERIALS AND METHODS\nThirty-eight intrabony defects (IBDs) were treated with either open flap debridement (OFD) with PRF or OFD with ABG. Clinical parameters were recorded at baseline and 6 months postoperatively. The defect-fill and defect resolution at baseline and 6 months were calculated radiographically (intraoral periapical radiographs [IOPA] and orthopantomogram [OPG]).\n\n\nRESULTS\nSignificant probing pocket depth (PPD) reduction, clinical attachment level (CAL) gain, defect fill and defect resolution at both PRF and ABG treated sites with OFD was observed. However, inter-group comparison was non-significant (P > 0.05). The bivariate correlation results revealed that any of the two radiographic techniques (IOPA and OPG) can be used for analysis of the regenerative therapy in IBDs.\n\n\nCONCLUSION\nThe use of either PRF or ABG were effective in the treatment of three wall IBDs with an uneventful healing of the sites.",
"title": ""
},
{
"docid": "b4910e355c44077eb27c62a0c8237204",
"text": "Our proof is built on Perron-Frobenius theorem, a seminal work in matrix theory (Meyer 2000). By Perron-Frobenius theorem, the power iteration algorithm for predicting top K persuaders converges to a unique C and this convergence is independent of the initialization of C if the persuasion probability matrix P is nonnegative, irreducible, and aperiodic (Heath 2002). We first show that P is nonnegative. Each component of the right hand side of Equation (10) is positive except nD $ 0; thus, persuasion probability pij estimated with Equation (10) is positive, for all i, j = 1, 2, ..., n and i ... j. Because all diagonal elements of P are equal to zero and all non-diagonal elements of P are positive persuasion probabilities, P is nonnegative.",
"title": ""
},
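A minimal sketch of the power iteration the proof above refers to, run on a small hand-made persuasion matrix with zero diagonal and positive off-diagonal entries. The matrix values, the 1-norm renormalization, and the stopping tolerance are assumptions for illustration; the sketch only shows that the iterate converges regardless of its (positive) initialization.

```python
import numpy as np

# Small nonnegative persuasion matrix: zero diagonal, positive off-diagonal.
P = np.array([[0.0, 0.3, 0.6],
              [0.2, 0.0, 0.4],
              [0.5, 0.1, 0.0]])

c = np.ones(P.shape[0]) / P.shape[0]   # any positive start vector works
for _ in range(500):
    c_next = P @ c
    c_next /= np.abs(c_next).sum()     # renormalize to avoid over/underflow
    if np.abs(c_next - c).sum() < 1e-12:
        break
    c = c_next

print(c)  # dominant eigenvector of P, independent of the (positive) initialization
```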
{
"docid": "d972e23eb49c15488d2159a9137efb07",
"text": "One of the main challenges of the solid-state transformer (SST) lies in the implementation of the dc–dc stage. In this paper, a quadruple-active-bridge (QAB) dc–dc converter is investigated to be used as a basic module of a modular three-stage SST. Besides the feature of high power density and soft-switching operation (also found in others converters), the QAB converter provides a solution with reduced number of high-frequency transformers, since more bridges are connected to the same multiwinding transformer. To ensure soft switching for the entire operation range of the QAB converter, the triangular current-mode modulation strategy, previously adopted for the dual-active-bridge converter, is extended to the QAB converter. The theoretical analysis is developed considering balanced (equal power processed by the medium-voltage (MV) cells) and unbalanced (unequal power processed by the MV cells) conditions. In order to validate the theoretical analysis developed in the paper, a 2-kW prototype is built and experimented.",
"title": ""
},
{
"docid": "d4cd0dabcf4caa22ad92fab40844c786",
"text": "NA",
"title": ""
},
{
"docid": "4a0756bffc50e11a0bcc2ab88502e1a2",
"text": "The interest in attribute weighting for soft subspace clustering have been increasing in the last years. However, most of the proposed approaches are designed for dealing only with numeric data. In this paper, our focus is on soft subspace clustering for categorical data. In soft subspace clustering, the attribute weighting approach plays a crucial role. Due to this, we propose an entropy-based approach for measuring the relevance of each categorical attribute in each cluster. Besides that, we propose the EBK-modes (entropy-based k-modes), an extension of the basic k-modes that uses our approach for attribute weighting. We performed experiments on five real-world datasets, comparing the performance of our algorithms with four state-of-the-art algorithms, using three well-known evaluation metrics: accuracy, f-measure and adjusted Rand index. According to the experiments, the EBK-modes outperforms the algorithms that were considered in the evaluation, regarding the considered metrics.",
"title": ""
},
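A small sketch of the idea behind entropy-based attribute weighting for categorical clusters: an attribute whose values are nearly constant within a cluster (low entropy) is treated as more relevant there. The inverse-entropy normalization below is an assumed stand-in, not the actual EBK-modes weighting formula.

```python
import numpy as np
from collections import Counter

def attribute_entropy(values):
    # Shannon entropy of one categorical attribute within one cluster.
    counts = np.array(list(Counter(values).values()), dtype=float)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log(p)))

def entropy_weights(cluster_rows):
    # Lower entropy (more homogeneous attribute) -> larger weight in this cluster.
    n_attrs = len(cluster_rows[0])
    ent = np.array([attribute_entropy([row[a] for row in cluster_rows])
                    for a in range(n_attrs)])
    inv = 1.0 / (1.0 + ent)
    return inv / inv.sum()

cluster = [("red", "small", "round"),
           ("red", "large", "round"),
           ("red", "small", "square")]
print(entropy_weights(cluster))  # the constant first attribute gets the largest weight
```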
{
"docid": "3be38e070678e358e23cb81432033062",
"text": "W ireless integrated network sensors (WINS) provide distributed network and Internet access to sensors, controls, and processors deeply embedded in equipment, facilities, and the environment. The WINS network represents a new monitoring and control capability for applications in such industries as transportation, manufacturing, health care, environmental oversight, and safety and security. WINS combine microsensor technology and low-power signal processing, computation, and low-cost wireless networking in a compact system. Recent advances in integrated circuit technology have enabled construction of far more capable yet inexpensive sensors, radios, and processors, allowing mass production of sophisticated systems linking the physical world to digital data networks [2–5]. Scales range from local to global for applications in medicine, security, factory automation, environmental monitoring, and condition-based maintenance. Compact geometry and low cost allow WINS to be embedded and distributed at a fraction of the cost of conventional wireline sensor and actuator systems. WINS opportunities depend on development of a scalable, low-cost, sensor-network architecture. Such applications require delivery of sensor information to the user at a low bit rate through low-power transceivers. Continuous sensor signal processing enables the constant monitoring of events in an environment in which short message packets would suffice. Future applications of distributed embedded processors and sensors will require vast numbers of devices. Conventional methods of sensor networking represent an impractical demand on cable installation and network bandwidth. Processing at the source would drastically reduce the financial, computational, and management burden on communication system",
"title": ""
},
{
"docid": "1dc07b02a70821fdbaa9911755d1e4b0",
"text": "The AROMA project is exploring the kind of awareness that people effortless are able to maintain about other beings who are located physically close. We are designing technology that attempts to mediate a similar kind of awareness among people who are geographically dispersed but want to stay better in touch. AROMA technology can be thought of as a stand-alone communication device or -more likely -an augmentation of existing technologies such as the telephone or full-blown media spaces. Our approach differs from other recent designs for awareness (a) by choosing pure abstract representations on the display site, (b) by possibly remapping the signal across media between capture and display, and, finally, (c) by explicitly extending the application domain to include more than the working life, to embrace social interaction in general. We are building a series of prototypes to learn if abstract representation of activity data does indeed convey a sense of remote presence and does so in a sutTiciently subdued manner to allow the user to concentrate on his or her main activity. We have done some initial testing of the technical feasibility of our designs. What still remains is an extensive effort of designing a symbolic language of remote presence, done in parallel with studies of how people will connect and communicate through such a language as they live with the AROMA system.",
"title": ""
},
{
"docid": "ae0ef7702fca274bd4ee8a2a30479275",
"text": "This paper describes the drawbacks related to the iron in the classical electrodynamic loudspeaker structure. Then it describes loudspeaker motors without any iron, which are only made of permanent magnets. They are associated to a piston like moving part which glides on ferrofluid seals. Furthermore, the coil is short and the suspension is wholly pneumatic. Several types of magnet assemblies are described and discussed. Indeed, their properties regarding the force factor and the ferrofluid seal shape depend on their structure. Eventually, the capacity of the seals is evaluated.",
"title": ""
},
{
"docid": "89b54aa0009598a4cb159b196f3749ee",
"text": "Several methods and techniques are potentially useful for the preparation of microparticles in the field of controlled drug delivery. The type and the size of the microparticles, the entrapment, release characteristics and stability of drug in microparticles in the formulations are dependent on the method used. One of the most common methods of preparing microparticles is the single emulsion technique. Poorly soluble, lipophilic drugs are successfully retained within the microparticles prepared by this method. However, the encapsulation of highly water soluble compounds including protein and peptides presents formidable challenges to the researchers. The successful encapsulation of such compounds requires high drug loading in the microparticles, prevention of protein and peptide degradation by the encapsulation method involved and predictable release, both rate and extent, of the drug compound from the microparticles. The above mentioned problems can be overcome by using the double emulsion technique, alternatively called as multiple emulsion technique. Aiming to achieve this various techniques have been examined to prepare stable formulations utilizing w/o/w, s/o/w, w/o/o, and s/o/o type double emulsion methods. This article reviews the current state of the art in double emulsion based technologies for the preparation of microparticles including the investigation of various classes of substances that are pharmaceutically and biopharmaceutically active.",
"title": ""
},
{
"docid": "6fd9793e9f44b726028f8c879157f1f7",
"text": "Modeling, simulation and implementation of Voltage Source Inverter (VSI) fed closed loop control of 3-phase induction motor drive is presented in this paper. A mathematical model of the drive system is developed and is used for the simulation study. Simulation is carried out using Scilab/Scicos, which is free and open source software. The above said drive system is implemented in laboratory using a PC and an add-on card. In this study the air gap flux of the machine is kept constant by maintaining Volt/Hertz (v/f) ratio constant. The experimental transient responses of the drive system obtained for change in speed under no load as well as under load conditions are presented.",
"title": ""
},
{
"docid": "cb19facb61dae863c566f5fafd9f8b20",
"text": "This paper describes our solution for the 2 YouTube-8M video understanding challenge organized by Google AI. Unlike the video recognition benchmarks, such as Kinetics and Moments, the YouTube8M challenge provides pre-extracted visual and audio features instead of raw videos. In this challenge, the submitted model is restricted to 1GB, which encourages participants focus on constructing one powerful single model rather than incorporating of the results from a bunch of models. Our system fuses six different sub-models into one single computational graph, which are categorized into three families. More specifically, the most effective family is the model with non-local operations following the NetVLAD encoding. The other two family models are Soft-BoF and GRU, respectively. In order to further boost single models performance, the model parameters of different checkpoints are averaged. Experimental results demonstrate that our proposed system can effectively perform the video classification task, achieving 0.88763 on the public test set and 0.88704 on the private set in terms of GAP@20, respectively. We finally ranked at the fourth place in the YouTube-8M video understanding challenge.",
"title": ""
}
] | scidocsrr |
7e7272379f6c262e43cf408524551964 | Steady-State Mean-Square Error Analysis for Adaptive Filtering under the Maximum Correntropy Criterion | [
{
"docid": "7a7e0363ca4ad5c83a571449f53834ca",
"text": "Principal component analysis (PCA) minimizes the mean square error (MSE) and is sensitive to outliers. In this paper, we present a new rotational-invariant PCA based on maximum correntropy criterion (MCC). A half-quadratic optimization algorithm is adopted to compute the correntropy objective. At each iteration, the complex optimization problem is reduced to a quadratic problem that can be efficiently solved by a standard optimization method. The proposed method exhibits the following benefits: 1) it is robust to outliers through the mechanism of MCC which can be more theoretically solid than a heuristic rule based on MSE; 2) it requires no assumption about the zero-mean of data for processing and can estimate data mean during optimization; and 3) its optimal solution consists of principal eigenvectors of a robust covariance matrix corresponding to the largest eigenvalues. In addition, kernel techniques are further introduced in the proposed method to deal with nonlinearly distributed data. Numerical results demonstrate that the proposed method can outperform robust rotational-invariant PCAs based on L1 norm when outliers occur.",
"title": ""
}
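To make the maximum correntropy criterion concrete, the sketch below estimates the correntropy of an error signal with a Gaussian kernel and runs a commonly used MCC stochastic-gradient (LMS-style) weight update on synthetic data with heavy-tailed noise. The step size, kernel width, and data model are assumptions for illustration and are not taken from either paper.

```python
import numpy as np

def correntropy(e, sigma=1.0):
    # Sample estimate of correntropy of an error signal with a Gaussian kernel.
    return float(np.mean(np.exp(-(e ** 2) / (2.0 * sigma ** 2))))

def mcc_lms_step(w, x, d, mu=0.01, sigma=1.0):
    # One stochastic-gradient ascent step on the correntropy of e = d - w^T x.
    e = d - w @ x
    return w + mu * np.exp(-(e ** 2) / (2.0 * sigma ** 2)) * e * x

rng = np.random.default_rng(0)
w_true = np.array([0.5, -1.0, 2.0])
w = np.zeros(3)
for _ in range(20000):
    x = rng.normal(size=3)
    d = w_true @ x + 0.1 * rng.standard_t(df=2)   # heavy-tailed (impulsive) noise
    w = mcc_lms_step(w, x, d)
print(w)  # should approach w_true despite the impulsive noise
```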
] | [
{
"docid": "a14ac26274448e0a7ecafdecae4830f9",
"text": "Humans and animals have the ability to continually acquire, fine-tune, and transfer knowledge and skills throughout their lifespan. This ability, referred to as lifelong learning, is mediated by a rich set of neurocognitive mechanisms that together contribute to the development and specialization of our sensorimotor skills as well as to long-term memory consolidation and retrieval. Consequently, lifelong learning capabilities are crucial for computational systems and autonomous agents interacting in the real world and processing continuous streams of information. However, lifelong learning remains a long-standing challenge for machine learning and neural network models since the continual acquisition of incrementally available information from non-stationary data distributions generally leads to catastrophic forgetting or interference. This limitation represents a major drawback for state-of-the-art deep neural network models that typically learn representations from stationary batches of training data, thus without accounting for situations in which information becomes incrementally available over time. In this review, we critically summarize the main challenges linked to lifelong learning for artificial learning systems and compare existing neural network approaches that alleviate, to different extents, catastrophic forgetting. Although significant advances have been made in domain-specific learning with neural networks, extensive research efforts are required for the development of robust lifelong learning on autonomous agents and robots. We discuss well-established and emerging research motivated by lifelong learning factors in biological systems such as structural plasticity, memory replay, curriculum and transfer learning, intrinsic motivation, and multisensory integration.",
"title": ""
},
{
"docid": "dc8ffc5fd84b3af4cc88d75f7bc88f77",
"text": "Digital crimes is big problem due to large numbers of data access and insufficient attack analysis techniques so there is the need for improvements in existing digital forensics techniques. With growing size of storage capacity these digital forensic investigations are getting more difficult. Visualization allows for displaying large amounts of data at once. Integrated visualization of data distribution bars and rules, visualization of behaviour and comprehensive analysis, maps allow user to analyze different rules and data at different level, with any kind of anomaly in data. Data mining techniques helps to improve the process of visualization. These papers give comprehensive review on various visualization techniques with various anomaly detection techniques.",
"title": ""
},
{
"docid": "315af705427ee4363fe4614dc72eb7a7",
"text": "The 2007 Nobel Prize in Physics can be understood as a global recognition to the rapid development of the Giant Magnetoresistance (GMR), from both the physics and engineering points of view. Behind the utilization of GMR structures as read heads for massive storage magnetic hard disks, important applications as solid state magnetic sensors have emerged. Low cost, compatibility with standard CMOS technologies and high sensitivity are common advantages of these sensors. This way, they have been successfully applied in a lot different environments. In this work, we are trying to collect the Spanish contributions to the progress of the research related to the GMR based sensors covering, among other subjects, the applications, the sensor design, the modelling and the electronic interfaces, focusing on electrical current sensing applications.",
"title": ""
},
{
"docid": "5006770c9f7a6fb171a060ad3d444095",
"text": "We developed a 56-GHz-bandwidth 2.0-Vppd linear MZM driver in 65-nm CMOS. It consumes only 180 mW for driving a 50-Ω impedance. We demonstrated the feasibility of drivers with less than 1 W for dual-polarization IQ modulation in 400-Gb/s systems.",
"title": ""
},
{
"docid": "7d84e574d2a6349a9fc2669fdbe08bba",
"text": "Domain-specific languages (DSLs) provide high-level and domain-specific abstractions that allow expressive and concise algorithm descriptions. Since the description in a DSL hides also the properties of the target hardware, DSLs are a promising path to target different parallel and heterogeneous hardware from the same algorithm description. In theory, the DSL description can capture all characteristics of the algorithm that are required to generate highly efficient parallel implementations. However, most frameworks do not make use of this knowledge and the performance cannot reach that of optimized library implementations. In this article, we present the HIPAcc framework, a DSL and source-to-source compiler for image processing. We show that domain knowledge can be captured in the language and that this knowledge enables us to generate tailored implementations for a given target architecture. Back ends for CUDA, OpenCL, and Renderscript allow us to target discrete graphics processing units (GPUs) as well as mobile, embedded GPUs. Exploiting the captured domain knowledge, we can generate specialized algorithm variants that reach the maximal achievable performance due to the peak memory bandwidth. These implementations outperform state-of-the-art domain-specific languages and libraries significantly.",
"title": ""
},
{
"docid": "6838cf1310f0321cd524bb1120f35057",
"text": "One of the most compelling visions of future robots is that of the robot butler. An entity dedicated to fulfilling your every need. This obviously has its benefits, but there could be a flipside to this vision. To fulfill the needs of its users, it must first be aware of them, and so it could potentially amass a huge amount of personal data regarding its user, data which may or may not be safe from accidental or intentional disclosure to a third party. How may prospective owners of a personal robot feel about the data that might be collected about them? In order to investigate this issue experimentally, we conducted an exploratory study where 12 participants were exposed to an HRI scenario in which disclosure of personal information became an issue. Despite the small sample size interesting results emerged from this study, indicating how future owners of personal robots feel regarding what the robot will know about them, and what safeguards they believe should be in place to protect owners from unwanted disclosure of private information.",
"title": ""
},
{
"docid": "8f978ac84eea44a593e9f18a4314342c",
"text": "There is clear evidence that interpersonal social support impacts stress levels and, in turn, degree of physical illness and psychological well-being. This study examines whether mediated social networks serve the same palliative function. A survey of 401 undergraduate Facebook users revealed that, as predicted, number of Facebook friends associated with stronger perceptions of social support, which in turn associated with reduced stress, and in turn less physical illness and greater well-being. This effect was minimized when interpersonal network size was taken into consideration. However, for those who have experienced many objective life stressors, the number of Facebook friends emerged as the stronger predictor of perceived social support. The \"more-friends-the-better\" heuristic is proposed as the most likely explanation for these findings.",
"title": ""
},
{
"docid": "4dc302fc2001dda1d24d830bb43f9cfa",
"text": "Discussions of qualitative research interviews have centered on promoting an ideal interactional style and articulating the researcher behaviors by which this might be realized. Although examining what researchers do in an interview continues to be valuable, this focus obscures the reflexive engagement of all participants in the exchange and the potential for a variety of possible styles of interacting. The author presents her analyses of participants’ accounts of past research interviews and explores the implications of this for researchers’ orientation to qualitative research inter-",
"title": ""
},
{
"docid": "2031114bd1dc1a3ca94bdd8a13ad3a86",
"text": "Crude extracts of curcuminoids and essential oil of Curcuma longa varieties Kasur, Faisalabad and Bannu were studied for their antibacterial activity against 4 bacterial strains viz., Bacillus subtilis, Bacillus macerans, Bacillus licheniformis and Azotobacter using agar well diffusion method. Solvents used to determine antibacterial activity were ethanol and methanol. Ethanol was used for the extraction of curcuminoids. Essential oil was extracted by hydrodistillation and diluted in methanol by serial dilution method. Both Curcuminoids and oil showed zone of inhibition against all tested strains of bacteria. Among all the three turmeric varieties, Kasur variety had the most inhibitory effect on the growth of all bacterial strains tested as compared to Faisalabad and Bannu varieties. Among all the bacterial strains B. subtilis was the most sensitive to turmeric extracts of curcuminoids and oil. The MIC value for different strains and varieties ranged from 3.0 to 20.6 mm in diameter.",
"title": ""
},
{
"docid": "1b802879e554140e677020e379b866c1",
"text": "This study investigated vertical versus shared leadership as predictors of the effectiveness of 71 change management teams. Vertical leadership stems from an appointed or formal leader of a team, whereas shared leadership (C. L. Pearce, 1997; C. L. Pearce & J. A. Conger, in press; C. L. Pearce & H. P. Sims, 2000) is a group process in which leadership is distributed among, and stems from, team members. Team effectiveness was measured approximately 6 months after the assessment of leadership and was also measured from the viewpoints of managers, internal customers, and team members. Using multiple regression, the authors found both vertical and shared leadership to be significantly related to team effectiveness ( p .05), although shared leadership appears to be a more useful predictor of team effectiveness than vertical leadership.",
"title": ""
},
{
"docid": "ae70b9ef5eeb6316b5b022662191cc4f",
"text": "The total harmonic distortion (THD) is an important performance criterion for almost any communication device. In most cases, the THD of a periodic signal, which has been processed in some way, is either measured directly or roughly estimated numerically, while analytic methods are employed only in a limited number of simple cases. However, the knowledge of the theoretical THD may be quite important for the conception and design of the communication equipment (e.g. transmitters, power amplifiers). The aim of this paper is to present a general theoretic approach, which permits to obtain an analytic closed-form expression for the THD. It is also shown that in some cases, an approximate analytic method, having good precision and being less sophisticated, may be developed. Finally, the mathematical technique, on which the proposed method is based, is described in the appendix.",
"title": ""
},
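For reference, the textbook total harmonic distortion formula that analytic and numerical THD estimates are usually compared against: the RMS of the harmonic amplitudes divided by the fundamental amplitude. The example amplitudes are made up; this is not the paper's closed-form derivation.

```python
import numpy as np

def thd(harmonic_amplitudes):
    # harmonic_amplitudes = [V1, V2, V3, ...]: fundamental first, then harmonics.
    v = np.asarray(harmonic_amplitudes, dtype=float)
    return float(np.sqrt(np.sum(v[1:] ** 2)) / v[0])

# Fundamental of 1.0 with 5% third-harmonic and 2% fifth-harmonic content.
print(thd([1.0, 0.0, 0.05, 0.0, 0.02]))  # about 0.054, i.e. 5.4% THD
```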
{
"docid": "4463a242a313f82527c4bdfff3d3c13c",
"text": "This paper examines the impact of capital structure on financial performance of Nigerian firms using a sample of thirty non-financial firms listed on the Nigerian Stock Exchange during the seven year period, 2004 – 2010. Panel data for the selected firms were generated and analyzed using ordinary least squares (OLS) as a method of estimation. The result shows that a firm’s capita structure surrogated by Debt Ratio, Dr has a significantly negative impact on the firm’s financial measures (Return on Asset, ROA, and Return on Equity, ROE). The study of these findings, indicate consistency with prior empirical studies and provide evidence in support of Agency cost theory.",
"title": ""
},
{
"docid": "a90dd405d9bd2ed912cacee098c0f9db",
"text": "Many telecommunication companies today have actively started to transform the way they do business, going beyond communication infrastructure providers are repositioning themselves as data-driven service providers to create new revenue streams. In this paper, we present a novel industrial application where a scalable Big data approach combined with deep learning is used successfully to classify massive mobile web log data, to get new aggregated insights on customer web behaviors that could be applied to various industry verticals.",
"title": ""
},
{
"docid": "9bb88b82789d43e48b1e8a10701d39bd",
"text": "Building intelligent systems that are capable of extracting high-level representations from high-dimensional sensory data lies at the core of solving many artificial intelligence–related tasks, including object recognition, speech perception, and language understanding. Theoretical and biological arguments strongly suggest that building such systems requires models with deep architectures that involve many layers of nonlinear processing. In this article, we review several popular deep learning models, including deep belief networks and deep Boltzmann machines. We show that (a) these deep generative models, which contain many layers of latent variables and millions of parameters, can be learned efficiently, and (b) the learned high-level feature representations can be successfully applied in many application domains, including visual object recognition, information retrieval, classification, and regression tasks.",
"title": ""
},
{
"docid": "584e84ac1a061f1bf7945ab4cf54d950",
"text": "Paul White, PhD, MD§ Acupuncture has been used in China and other Asian countries for the past 3000 yr. Recently, this technique has been gaining increased popularity among physicians and patients in the United States. Even though acupuncture-induced analgesia is being used in many pain management programs in the United States, the mechanism of action remains unclear. Studies suggest that acupuncture and related techniques trigger a sequence of events that include the release of neurotransmitters, endogenous opioid-like substances, and activation of c-fos within the central nervous system. Recent developments in central nervous system imaging techniques allow scientists to better evaluate the chain of events that occur after acupuncture-induced stimulation. In this review article we examine current biophysiological and imaging studies that explore the mechanisms of acupuncture analgesia.",
"title": ""
},
{
"docid": "fce8f5ee730e2bbb63f4d1ef003ce830",
"text": "In this paper, we introduce an approach for constructing uncertainty sets for robust optimization using new deviation measures for random variables termed the forward and backward deviations. These deviation measures capture distributional asymmetry and lead to better approximations of chance constraints. Using a linear decision rule, we also propose a tractable approximation approach for solving a class of multistage chance-constrained stochastic linear optimization problems. An attractive feature of the framework is that we convert the original model into a second-order cone program, which is computationally tractable both in theory and in practice. We demonstrate the framework through an application of a project management problem with uncertain activity completion time.",
"title": ""
},
{
"docid": "3573fb077b151af3c83f7cd6a421dc9c",
"text": "Let G = (V, E) be a directed graph with a distinguished source vertex s. The single-source path expression problem is to find, for each vertex v, a regular expression P(s, v) which represents the set of all paths in G from s to v A solution to this problem can be used to solve shortest path problems, solve sparse systems of linear equations, and carry out global flow analysis. A method is described for computing path expressions by dwidmg G mto components, computing path expressions on the components by Gaussian elimination, and combining the solutions This method requires O(ma(m, n)) time on a reducible flow graph, where n Is the number of vertices m G, m is the number of edges in G, and a is a functional inverse of Ackermann's function The method makes use of an algonthm for evaluating functions defined on paths in trees. A smapllfied version of the algorithm, which runs in O(m log n) time on reducible flow graphs, is quite easy to implement and efficient m practice",
"title": ""
},
{
"docid": "b8e921733ef4ab77abcb48b0a1f04dbb",
"text": "This paper demonstrates the efficiency of kinematic redundancy used to increase the useable workspace of planar parallel mechanisms. As examples, we propose kinematically redundant schemes of the well known planar 3RRR and 3RPR mechanisms denoted as 3(P)RRR and 3(P)RPR. In both cases, a prismatic actuator is added allowing a usually fixed base joint to move linearly. Hence, reconfigurations can be performed selectively in order to avoid singularities and to affect the mechanisms' performance directly. Using an interval-based method the useable workspace, i.e. the singularity-free workspace guaranteeing a desired performance, is obtained. Due to the interval analysis any uncertainties can be implemented within the algorithm leading to practical and realistic results. It is shown that due to the additional prismatic actuator the useable workspace increases significantly. Several analysis examples clarify the efficiency of the proposed kinematically redundant mechanisms.",
"title": ""
},
{
"docid": "ba10de4e7613307d08b46cf001cbeb3b",
"text": "This paper builds on a general typology of textual communication (Aarseth 1997) and tries to establish a model for classifying the genre of “games in virtual environments”— that is, games that take place in some kind of simulated world, as opposed to purely abstract games like poker or blackjack. The aim of the model is to identify the main differences between games in a rigorous, analytical way, in order to come up with genres that are more specific and less ad hoc than those used by the industry and the popular gaming press. The model consists of a number of basic “dimensions”, such as Space, Perspective, Time, Teleology, etc, each of which has several variate values, (e.g. Teleology: finite (Half-Life) or infinite (EverQuest. Ideally, the multivariate model can be used to predict games that do not yet exist, but could be invented by combining the existing elements in new ways.",
"title": ""
},
{
"docid": "8188bcd3b95952dbf2818cad6fc2c36c",
"text": "Semi-supervised learning is by no means an unfamiliar concept to natural language processing researchers. Labeled data has been used to improve unsupervised parameter estimation procedures such as the EM algorithm and its variants since the beginning of the statistical revolution in NLP (e.g., Pereira and Schabes (1992)). Unlabeled data has also been used to improve supervised learning procedures, the most notable examples being the successful applications of self-training and co-training to word sense disambiguation (Yarowsky 1995) and named entity classification (Collins and Singer 1999). Despite its increasing importance, semi-supervised learning is not a topic that is typically discussed in introductory machine learning texts (e.g., Mitchell (1997), Alpaydin (2004)) or NLP texts (e.g., Manning and Schütze (1999), Jurafsky andMartin (2000)). Consequently, to learn about semi-supervised learning research, one has to consult the machine-learning literature. This can be a daunting task for NLP researchers who have little background in machine learning. Steven Abney’s book Semisupervised Learning for Computational Linguistics is targeted precisely at such researchers, aiming to provide them with a “broad and accessible presentation” of topics in semi-supervised learning. According to the preamble, the reader is assumed to have taken only an introductory course in NLP “that include statistical methods — concretely the material contained in Jurafsky andMartin (2000) andManning and Schütze (1999).”Nonetheless, I agreewith the author that any NLP researcher who has a solid background in machine learning is ready to “tackle the primary literature on semisupervised learning, and will probably not find this book particularly useful” (page 11). As the author promises, the book is self-contained and quite accessible to those who have little background in machine learning. In particular, of the 12 chapters in the book, three are devoted to preparatory material, including: a brief introduction to machine learning, basic unconstrained and constrained optimization techniques (e.g., gradient descent and the method of Lagrange multipliers), and relevant linear-algebra concepts (e.g., eigenvalues, eigenvectors, matrix and vector norms, diagonalization). The remaining chapters focus roughly on six types of semi-supervised learning methods:",
"title": ""
}
] | scidocsrr |
e1640b20b57f2db83b41db76947416dc | Data Mining in the Dark : Darknet Intelligence Automation | [
{
"docid": "22bdd2c36ef72da312eb992b17302fbe",
"text": "In this paper, we present an operational system for cyber threat intelligence gathering from various social platforms on the Internet particularly sites on the darknet and deepnet. We focus our attention to collecting information from hacker forum discussions and marketplaces offering products and services focusing on malicious hacking. We have developed an operational system for obtaining information from these sites for the purposes of identifying emerging cyber threats. Currently, this system collects on average 305 high-quality cyber threat warnings each week. These threat warnings include information on newly developed malware and exploits that have not yet been deployed in a cyber-attack. This provides a significant service to cyber-defenders. The system is significantly augmented through the use of various data mining and machine learning techniques. With the use of machine learning models, we are able to recall 92% of products in marketplaces and 80% of discussions on forums relating to malicious hacking with high precision. We perform preliminary analysis on the data collected, demonstrating its application to aid a security expert for better threat analysis.",
"title": ""
},
{
"docid": "6d31ee4b0ad91e6500c5b8c7e3eaa0ca",
"text": "A host of tools and techniques are now available for data mining on the Internet. The explosion in social media usage and people reporting brings a new range of problems related to trust and credibility. Traditional media monitoring systems have now reached such sophistication that real time situation monitoring is possible. The challenge though is deciding what reports to believe, how to index them and how to process the data. Vested interests allow groups to exploit both social media and traditional media reports for propaganda purposes. The importance of collecting reports from all sides in a conflict and of balancing claims and counter-claims becomes more important as ease of publishing increases. Today the challenge is no longer accessing open source information but in the tagging, indexing, archiving and analysis of the information. This requires the development of general-purpose and domain specific knowledge bases. Intelligence tools are needed which allow an analyst to rapidly access relevant data covering an evolving situation, ranking sources covering both facts and opinions.",
"title": ""
}
] | [
{
"docid": "a854ee8cf82c4bd107e93ed0e70ee543",
"text": "Although the memorial benefits of testing are well established empirically, the mechanisms underlying this benefit are not well understood. The authors evaluated the mediator shift hypothesis, which states that test-restudy practice is beneficial for memory because retrieval failures during practice allow individuals to evaluate the effectiveness of mediators and to shift from less effective to more effective mediators. Across a series of experiments, participants used a keyword encoding strategy to learn word pairs with test-restudy practice or restudy only. Robust testing effects were obtained in all experiments, and results supported predictions of the mediator shift hypothesis. First, a greater proportion of keyword shifts occurred during test-restudy practice versus restudy practice. Second, a greater proportion of keyword shifts occurred after retrieval failure trials versus retrieval success trials during test-restudy practice. Third, a greater proportion of keywords were recalled on a final keyword recall test after test-restudy versus restudy practice.",
"title": ""
},
{
"docid": "bc6877a5a83531a794ac1c8f7a4c7362",
"text": "A number of times when using cross-validation (CV) while trying to do classification/probability estimation we have observed surprisingly low AUC's on real data with very few positive examples. AUC is the area under the ROC and measures the ranking ability and corresponds to the probability that a positive example receives a higher model score than a negative example. Intuition seems to suggest that no reasonable methodology should ever result in a model with an AUC significantly below 0.5. The focus of this paper is not on the estimator properties of CV (bias/variance/significance), but rather on the properties of the 'holdout' predictions based on which the CV performance of a model is calculated. We show that CV creates predictions that have an 'inverse' ranking with AUC well below 0.25 using features that were initially entirely unpredictive and models that can only perform monotonic transformations. In the extreme, combining CV with bagging (repeated averaging of out-of-sample predictions) generates 'holdout' predictions with perfectly opposite rankings on random data. While this would raise immediate suspicion upon inspection, we would like to caution the data mining community against using CV for stacking or in currently popular ensemble methods. They can reverse the predictions by assigning negative weights and produce in the end a model that appears to have close to perfect predictability while in reality the data was random.",
"title": ""
},
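The effect described in the entry above is easy to reproduce: on purely random features with very few positives, the out-of-fold ("holdout") predictions from cross-validation can rank the classes worse than chance. The sketch below uses scikit-learn's cross_val_predict; the sample sizes, the classifier, and the random seed are arbitrary choices, so the exact AUC will vary from run to run.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                   # purely random, unpredictive features
y = np.zeros(200, dtype=int)
y[rng.choice(200, size=8, replace=False)] = 1    # very few positive examples

# Out-of-fold probability estimates, i.e. the CV "holdout" predictions.
scores = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                           cv=5, method="predict_proba")[:, 1]
print(roc_auc_score(y, scores))  # often well below 0.5 in this setting
```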
{
"docid": "a33486dfec199cd51e885d6163082a96",
"text": "In this study, the aim is to examine the most popular eSport applications at a global scale. In this context, the App Store and Google Play Store application platforms which have the highest number of users at a global scale were focused on. For this reason, the eSport applications included in these two platforms constituted the sampling of the present study. A data collection form was developed by the researcher of the study in order to collect the data in the study. This form included the number of the countries, the popularity ratings of the application, the name of the application, the type of it, the age limit, the rating of the likes, the company that developed it, the version and the first appearance date. The study was conducted with the Qualitative Research Method, and the Case Study design was made use of in this process; and the Descriptive Analysis Method was used to analyze the data. As a result of the study, it was determined that the most popular eSport applications at a global scale were football, which ranked the first, basketball, billiards, badminton, skateboarding, golf and dart. It was also determined that the popularity of the mobile eSport applications changed according to countries and according to being free or paid. It was determined that the popularity of these applications differed according to the individuals using the App Store and Google Play Store application markets. As a result, it is possible to claim that mobile eSport applications have a wide usage area at a global scale and are accepted widely. In addition, it was observed that the interest in eSport applications was similar to that in traditional sports. However, in the present study, a certain date was set, and the interest in mobile eSport applications was analyzed according to this specific date. In future studies, different dates and different fields like educational sciences may be set to analyze the interest in mobile eSport applications. In this way, findings may be obtained on the change of the interest in mobile eSport applications according to time. The findings of the present study and similar studies may have the quality of guiding researchers and system/software developers in terms of showing the present status of the topic and revealing the relevant needs.",
"title": ""
},
{
"docid": "7394f3000da8af0d4a2b33fed4f05264",
"text": "We often base our decisions on uncertain data - for instance, when consulting the weather forecast before deciding what to wear. Due to their uncertainty, such forecasts can differ by provider. To make an informed decision, many people compare several forecasts, which is a time-consuming and cumbersome task. To facilitate comparison, we identified three aggregation mechanisms for forecasts: manual comparison and two mechanisms of computational aggregation. In a survey, we compared the mechanisms using different representations. We then developed a weather application to evaluate the most promising candidates in a real-world study. Our results show that aggregation increases users' confidence in uncertain data, independent of the type of representation. Further, we find that for daily events, users prefer to use computationally aggregated forecasts. However, for high-stakes events, they prefer manual comparison. We discuss how our findings inform the design of improved interfaces for comparison of uncertain data, including non-weather purposes.",
"title": ""
},
{
"docid": "2216f853543186e73b1149bb5a0de297",
"text": "Scaffolds have been utilized in tissue regeneration to facilitate the formation and maturation of new tissues or organs where a balance between temporary mechanical support and mass transport (degradation and cell growth) is ideally achieved. Polymers have been widely chosen as tissue scaffolding material having a good combination of biodegradability, biocompatibility, and porous structure. Metals that can degrade in physiological environment, namely, biodegradable metals, are proposed as potential materials for hard tissue scaffolding where biodegradable polymers are often considered as having poor mechanical properties. Biodegradable metal scaffolds have showed interesting mechanical property that was close to that of human bone with tailored degradation behaviour. The current promising fabrication technique for making scaffolds, such as computation-aided solid free-form method, can be easily applied to metals. With further optimization in topologically ordered porosity design exploiting material property and fabrication technique, porous biodegradable metals could be the potential materials for making hard tissue scaffolds.",
"title": ""
},
{
"docid": "501f9cb511e820c881c389171487f0b4",
"text": "An omnidirectional circularly polarized (CP) antenna array is proposed. The antenna array is composed of four identical CP antenna elements and one parallel strip-line feeding network. Each of CP antenna elements comprises a dipole and a zero-phase-shift (ZPS) line loop. The in-phase fed dipole and the ZPS line loop generate vertically and horizontally polarized omnidirectional radiation, respectively. Furthermore, the vertically polarized dipole is positioned in the center of the horizontally polarized ZPS line loop. The size of the loop is designed such that a 90° phase difference is realized between the two orthogonal components because of the spatial difference and, therefore, generates CP omnidirectional radiation. A 1 × 4 antenna array at 900 MHz is prototyped and targeted to ultra-high frequency (UHF) radio frequency identification (RFID) applications. The measurement results show that the antenna array achieves a 10-dB return loss over a frequency range of 900-935 MHz and 3-dB axial-ratio (AR) from 890 to 930 MHz. At the frequency of 915 MHz, the measured maximum AR of 1.53 dB, maximum gain of 5.4 dBic, and an omnidirectionality of ±1 dB are achieved.",
"title": ""
},
{
"docid": "58d19a5460ce1f830f7a5e2cb1c5ebca",
"text": "In multi-source sequence-to-sequence tasks, the attention mechanism can be modeled in several ways. This topic has been thoroughly studied on recurrent architectures. In this paper, we extend the previous work to the encoder-decoder attention in the Transformer architecture. We propose four different input combination strategies for the encoderdecoder attention: serial, parallel, flat, and hierarchical. We evaluate our methods on tasks of multimodal translation and translation with multiple source languages. The experiments show that the models are able to use multiple sources and improve over single source baselines.",
"title": ""
},
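A toy numpy sketch of how two source representations can be combined in encoder-decoder attention, following the entry above. The "parallel", "serial", and "flat" variants below are one loose reading of those strategy names with a single query vector and made-up dimensions; they are not the paper's Transformer implementation, and the hierarchical variant is omitted.

```python
import numpy as np

def attention(q, keys, values):
    # Scaled dot-product attention for a single query vector (toy sizes).
    scores = keys @ q / np.sqrt(q.shape[-1])
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ values

rng = np.random.default_rng(0)
q = rng.normal(size=8)                                       # decoder query
ka, va = rng.normal(size=(5, 8)), rng.normal(size=(5, 8))    # source A (e.g. text)
kb, vb = rng.normal(size=(3, 8)), rng.normal(size=(3, 8))    # source B (e.g. image)

# Parallel: attend to each source independently, then sum the contexts.
parallel_ctx = attention(q, ka, va) + attention(q, kb, vb)

# Serial: the context from source A becomes the query for source B.
serial_ctx = attention(attention(q, ka, va), kb, vb)

# Flat: concatenate both sources and attend over them jointly.
flat_ctx = attention(q, np.vstack([ka, kb]), np.vstack([va, vb]))

print(parallel_ctx.shape, serial_ctx.shape, flat_ctx.shape)  # all (8,)
```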
{
"docid": "54bdabea83e86d21213801c990c60f4d",
"text": "A method of depicting crew climate using a group diagram based on behavioral ratings is described. Behavioral ratings were made of twelve three-person professional airline cockpit crews in full-mission simulations. These crews had been part of an earlier study in which captains had been had been grouped into three personality types, based on pencil and paper pre-tests. We found that low error rates were related to group climate variables as well as positive captain behaviors.",
"title": ""
},
{
"docid": "b5babae9b9bcae4f87f5fe02459936de",
"text": "The study evaluated the effects of formocresol (FC), ferric sulphate (FS), calcium hydroxide (Ca[OH](2)), and mineral trioxide aggregate (MTA) as pulp dressing agents in pulpotomized primary molars. Sixteen children each with at least four primary molars requiring pulpotomy were selected. Eighty selected teeth were divided into four groups and treated with one of the pulpotomy agent. The children were recalled for clinical and radiographic examination every 6 months during 2 years of follow-up. Eleven children with 56 teeth arrived for clinical and radiographic follow-up evaluation at 24 months. The follow-up evaluations revealed that the success rate was 76.9% for FC, 73.3% for FS, 46.1% for Ca(OH)(2), and 66.6% for MTA. In conclusion, Ca(OH)(2)is less appropriate for primary teeth pulpotomies than the other pulpotomy agents. FC and FS appeared to be superior to the other agents. However, there was no statistically significant difference between the groups.",
"title": ""
},
{
"docid": "19b8acf4e5c68842a02e3250c346d09b",
"text": "A dual-band dual-polarized microstrip antenna array for an advanced multi-function radio function concept (AMRFC) radar application operating at S and X-bands is proposed. Two stacked planar arrays with three different thin substrates (RT/Duroid 5880 substrates with εr=2.2 and three different thicknesses of 0.253 mm, 0.508 mm and 0.762 mm) are integrated to provide simultaneous operation at S band (3~3.3 GHz) and X band (9~11 GHz). To allow similar scan ranges for both bands, the S-band elements are selected as perforated patches to enable the placement of the X-band elements within them. Square patches are used as the radiating elements for the X-band. Good agreement exists between the simulated and the measured results. The measured impedance bandwidth (VSWR≤2) of the prototype array reaches 9.5 % and 25 % for the Sand X-bands, respectively. The measured isolation between the two orthogonal polarizations for both bands is better than 15 dB. The measured cross-polarization level is ≤—21 dB for the S-band and ≤—20 dB for the X-band.",
"title": ""
},
{
"docid": "fe903498e0c3345d7e5ebc8bf3407c2f",
"text": "This paper describes a general continuous-time framework for visual-inertial simultaneous localization and mapping and calibration. We show how to use a spline parameterization that closely matches the torque-minimal motion of the sensor. Compared to traditional discrete-time solutions, the continuous-time formulation is particularly useful for solving problems with high-frame rate sensors and multiple unsynchronized devices. We demonstrate the applicability of the method for multi-sensor visual-inertial SLAM and calibration by accurately establishing the relative pose and internal parameters of multiple unsynchronized devices. We also show the advantages of the approach through evaluation and uniform treatment of both global and rolling shutter cameras within visual and visual-inertial SLAM systems.",
"title": ""
},
{
"docid": "07a6de40826f4c5bab4a8b8c51aba080",
"text": "Prior studies on alternative work schedules have focused primarily on the main effects of compressed work weeks and shift work on individual outcomes. This study explores the combined effects of alternative and preferred work schedules on nurses' satisfaction with their work schedules, perceived patient care quality, and interferences with their personal lives.",
"title": ""
},
{
"docid": "62ff5888ad0c8065097603da8ff79cd6",
"text": "Modern Internet systems often combine different applications (e.g., DNS, web, and database), span different administrative domains, and function in the context of network mechanisms like tunnels, VPNs, NATs, and overlays. Diagnosing these complex systems is a daunting challenge. Although many diagnostic tools exist, they are typically designed for a specific layer (e.g., traceroute) or application, and there is currently no tool for reconstructing a comprehensive view of service behavior. In this paper we propose X-Trace, a tracing framework that provides such a comprehensive view for systems that adopt it. We have implemented X-Trace in several protocols and software systems, and we discuss how it works in three deployed scenarios: DNS resolution, a three-tiered photo-hosting website, and a service accessed through an overlay network.",
"title": ""
},
{
"docid": "3910a3317ea9ff4ea6c621e562b1accc",
"text": "Compaction of agricultural soils is a concern for many agricultural soil scientists and farmers since soil compaction, due to heavy field traffic, has resulted in yield reduction of most agronomic crops throughout the world. Soil compaction is a physical form of soil degradation that alters soil structure, limits water and air infiltration, and reduces root penetration in the soil. Consequences of soil compaction are still underestimated. A complete understanding of processes involved in soil compaction is necessary to meet the future global challenge of food security. We review here the advances in understanding, quantification, and prediction of the effects of soil compaction. We found the following major points: (1) When a soil is exposed to a vehicular traffic load, soil water contents, soil texture and structure, and soil organic matter are the three main factors which determine the degree of compactness in that soil. (2) Soil compaction has direct effects on soil physical properties such as bulk density, strength, and porosity; therefore, these parameters can be used to quantify the soil compactness. (3) Modified soil physical properties due to soil compaction can alter elements mobility and change nitrogen and carbon cycles in favour of more emissions of greenhouse gases under wet conditions. (4) Severe soil compaction induces root deformation, stunted shoot growth, late germination, low germination rate, and high mortality rate. (5) Soil compaction decreases soil biodiversity by decreasing microbial biomass, enzymatic activity, soil fauna, and ground flora. (6) Boussinesq equations and finite element method models, that predict the effects of the soil compaction, are restricted to elastic domain and do not consider existence of preferential paths of stress propagation and localization of deformation in compacted soils. (7) Recent advances in physics of granular media and soil mechanics relevant to soil compaction should be used to progress in modelling soil compaction.",
"title": ""
},
{
"docid": "263c04402cfe80649b1d3f4a8578e99b",
"text": "This paper presents M3Express (Modular-Mobile-Multirobot), a new design for a low-cost modular robot. The robot is self-mobile, with three independently driven wheels that also serve as connectors. The new connectors can be automatically operated, and are based on stationary magnets coupled to mechanically actuated ferromagnetic yoke pieces. Extensive use is made of plastic castings, laser cut plastic sheets, and low-cost motors and electronic components. Modules interface with a host PC via Bluetooth® radio. An off-board camera, along with a set of modules and a control PC form a convenient, low-cost system for rapidly developing and testing control algorithms for modular reconfigurable robots. Experimental results demonstrate mechanical docking, connector strength, and accuracy of dead reckoning locomotion.",
"title": ""
},
{
"docid": "06755f8680ee8b43e0b3d512b4435de4",
"text": "Stacked autoencoders (SAEs), as part of the deep learning (DL) framework, have been recently proposed for feature extraction in hyperspectral remote sensing. With the help of hidden nodes in deep layers, a high-level abstraction is achieved for data reduction whilst maintaining the key information of the data. As hidden nodes in SAEs have to deal simultaneously with hundreds of features from hypercubes as inputs, this increases the complexity of the process and leads to limited abstraction and performance. As such, segmented SAE (S-SAE) is proposed by confronting the original features into smaller data segments, which are separately processed by different smaller SAEs. This has resulted in reduced complexity but improved efficacy of data abstraction and accuracy of data classification.",
"title": ""
},
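The core data-handling step of the segmented approach described above is a split of the spectral feature vector into smaller contiguous blocks, each fed to its own small autoencoder. The equal-width split and the segment count below are assumptions for illustration; the autoencoders themselves are not shown.

```python
import numpy as np

def segment_features(X, n_segments):
    # Split hyperspectral feature vectors (rows of X) into contiguous band
    # segments so that each smaller autoencoder sees only one segment.
    return np.array_split(X, n_segments, axis=1)

X = np.random.default_rng(0).normal(size=(100, 200))   # 100 pixels, 200 bands
segments = segment_features(X, n_segments=4)
print([s.shape for s in segments])  # four (100, 50) blocks, one per small SAE
```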
{
"docid": "cc9f566eb8ef891d76c1c4eee7e22d47",
"text": "In this study, a hybrid artificial intelligent (AI) system integrating neural network and expert system is proposed to support foreign exchange (forex) trading decisions. In this system, a neural network is used to predict the forex price in terms of quantitative data, while an expert system is used to handle qualitative factor and to provide forex trading decision suggestions for traders incorporating experts' knowledge and the neural network's results. The effectiveness of the proposed hybrid AI system is illustrated by simulation experiments",
"title": ""
},
{
"docid": "3b5340113d583b138834119614046151",
"text": "This paper presents the recent advancements in the control of multiple-degree-of-freedom hydraulic robotic manipulators. A literature review is performed on their control, covering both free-space and constrained motions of serial and parallel manipulators. Stability-guaranteed control system design is the primary requirement for all control systems. Thus, this paper pays special attention to such systems. An objective evaluation of the effectiveness of different methods and the state of the art in a given field is one of the cornerstones of scientific research and progress. For this purpose, the maximum position tracking error <inline-formula><tex-math notation=\"LaTeX\">$|e|_{\\rm max}$</tex-math></inline-formula> and a performance indicator <inline-formula><tex-math notation=\"LaTeX\">$\\rho$ </tex-math></inline-formula> (the ratio of <inline-formula><tex-math notation=\"LaTeX\">$|e|_{\\rm max}$</tex-math> </inline-formula> with respect to the maximum velocity) are used to evaluate and benchmark different free-space control methods in the literature. These indicators showed that stability-guaranteed nonlinear model based control designs have resulted in the most advanced control performance. In addition to stable closed-loop control, lack of energy efficiency is another significant challenge in hydraulic robotic systems. This paper pays special attention to these challenges in hydraulic robotic systems and discusses their reciprocal contradiction. Potential solutions to improve the system energy efficiency without control performance deterioration are discussed. Finally, for hydraulic robotic systems, open problems are defined and future trends are projected.",
"title": ""
},
{
"docid": "3ea021309fd2e729ffced7657e3a6038",
"text": "Physiological and pharmacological research undertaken on sloths during the past 30 years is comprehensively reviewed. This includes the numerous studies carried out upon the respiratory and cardiovascular systems, anesthesia, blood chemistry, neuromuscular responses, the brain and spinal cord, vision, sleeping and waking, water balance and kidney function and reproduction. Similarities and differences between the physiology of sloths and that of other mammals are discussed in detail.",
"title": ""
},
{
"docid": "637e73416c1a6412eeeae63e1c73c2c3",
"text": "Disgust, an emotion related to avoiding harmful substances, has been linked to moral judgments in many behavioral studies. However, the fact that participants report feelings of disgust when thinking about feces and a heinous crime does not necessarily indicate that the same mechanisms mediate these reactions. Humans might instead have separate neural and physiological systems guiding aversive behaviors and judgments across different domains. The present interdisciplinary study used functional magnetic resonance imaging (n = 50) and behavioral assessment to investigate the biological homology of pathogen-related and moral disgust. We provide evidence that pathogen-related and sociomoral acts entrain many common as well as unique brain networks. We also investigated whether morality itself is composed of distinct neural and behavioral subdomains. We provide evidence that, despite their tendency to elicit similar ratings of moral wrongness, incestuous and nonsexual immoral acts entrain dramatically separate, while still overlapping, brain networks. These results (i) provide support for the view that the biological response of disgust is intimately tied to immorality, (ii) demonstrate that there are at least three separate domains of disgust, and (iii) suggest strongly that morality, like disgust, is not a unified psychological or neurological phenomenon.",
"title": ""
}
] | scidocsrr |
f3ffbaafd9085526f906a7fb90ac3558 | Fast camera calibration for the analysis of sport sequences | [
{
"docid": "cfadde3d2e6e1d6004e6440df8f12b5a",
"text": "We propose an automatic camera calibration algorithm for court sports. The obtained camera calibration parameters are required for applications that need to convert positions in the video frame to real-world coordinates or vice versa. Our algorithm uses the line markings of the court for calibration and it can be applied to a variety of different sports since the geometric model of the court can be specified by the user. The algorithm starts with a model initialization step which locates the court in the image without any user assistance or a-priori knowledge about the most probable position. Image pixels are classified as court line pixels if they pass several tests including color and local texture restrictions. A Hough transform is applied to extract line elements, forming a set of court line candidates. The subsequent combinatorial search establishes correspondences between lines in the input image and lines from the court model. For the following input frames, an abbreviated calibration algorithm is used, which predicts the camera parameters for the new image and optimizes the parameters using a gradient descent algorithm. We have conducted experiments on a variety of sport videos (tennis, volleyball, and goal area sequences of soccer games). Video scenes with considerable difficulties were selected to test the robustness of the algorithm. Results show that the algorithm is very robust to occlusions, partial court views, bad lighting conditions, or shadows.",
"title": ""
}
] | [
{
"docid": "36759b5da620f3b1c870c65e16aa2b44",
"text": "Frama-C is a source code analysis platform that aims at conducting verification of industrial-size C programs. It provides its users with a collection of plug-ins that perform static analysis, deductive verification, and testing, for safety- and security-critical software. Collaborative verification across cooperating plug-ins is enabled by their integration on top of a shared kernel and datastructures, and their compliance to a common specification language. This foundational article presents a consolidated view of the platform, its main and composite analyses, and some of its industrial achievements.",
"title": ""
},
{
"docid": "76cedf5536bd886b5838c2a5e027de79",
"text": "This article reports a meta-analysis of personality-academic performance relationships, based on the 5-factor model, in which cumulative sample sizes ranged to over 70,000. Most analyzed studies came from the tertiary level of education, but there were similar aggregate samples from secondary and tertiary education. There was a comparatively smaller sample derived from studies at the primary level. Academic performance was found to correlate significantly with Agreeableness, Conscientiousness, and Openness. Where tested, correlations between Conscientiousness and academic performance were largely independent of intelligence. When secondary academic performance was controlled for, Conscientiousness added as much to the prediction of tertiary academic performance as did intelligence. Strong evidence was found for moderators of correlations. Academic level (primary, secondary, or tertiary), average age of participant, and the interaction between academic level and age significantly moderated correlations with academic performance. Possible explanations for these moderator effects are discussed, and recommendations for future research are provided.",
"title": ""
},
{
"docid": "d5f43b7405e08627b7f0930cc1ddd99e",
"text": "Source code duplication, commonly known as code cloning, is considered an obstacle to software maintenance because changes to a cloned region often require consistent changes to other regions of the source code. Research has provided evidence that the elimination of clones may not always be practical, feasible, or cost-effective. We present a clone management approach that describes clone regions in a robust way that is independent from the exact text of clone regions or their location in a file, and that provides support for tracking clones in evolving software. Our technique relies on the concept of abstract clone region descriptors (CRDs), which describe clone regions using a combination of their syntactic, structural, and lexical information. We present our definition of CRDs, and describe a clone tracking system capable of producing CRDs from the output of different clone detection tools, notifying developers of modifications to clone regions, and supporting updates to the documented clone relationships. We evaluated the performance and usefulness of our approach across three clone detection tools and five subject systems, and the results indicate that CRDs are a practical and robust representation for tracking code clones in evolving software.",
"title": ""
},
{
"docid": "c2334008c6a07cbd3b3d89dc01ddc02d",
"text": "Four Cucumber mosaic virus (CMV) (CMV-HM 1–4) and nine Tomato mosaic virus (ToMV) (ToMV AH 1–9) isolates detected in tomato samples collected from different governorates in Egypt during 2014, were here characterized. According to the coat protein gene sequence and to the complete nucleotide sequence of total genomic RNA1, RNA2 and RNA3 of CMV-HM3 the new Egyptian isolates are related to members of the CMV subgroup IB. The nine ToMV Egyptian isolates were characterized by sequence analysis of the coat protein and the movement protein genes. All isolates were grouped within the same branch and showed high relatedness to all considered isolates (98–99%). Complete nucleotide sequence of total genomic RNA of ToMV AH4 isolate was obtained and its comparison showed a closer degree of relatedness to isolate 99–1 from the USA (99%). To our knowledge, this is the first report of CMV isolates from subgroup IB in Egypt and the first full length sequencing of an ToMV Egyptian isolate.",
"title": ""
},
{
"docid": "0ce9e025b0728adc245759580330e7f5",
"text": "We present a unified framework for dense correspondence estimation, called Homography flow, to handle large photometric and geometric deformations in an efficient manner. Our algorithm is inspired by recent successes of the sparse to dense framework. The main intuition is that dense flows located in same plane can be represented as a single geometric transform. Tailored to dense correspondence task, the Homography flow differs from previous methods in the flow domain clustering and the trilateral interpolation. By estimating and propagating sparsely estimated transforms, dense flow field is estimated with very low computation time. The Homography flow highly improves the performance of dense correspondences, especially in flow discontinuous area. Experimental results on challenging image pairs show that our approach suppresses the state-of-the-art algorithms in both accuracy and computation time.",
"title": ""
},
{
"docid": "d94a4f07939c0f420787b099336f426b",
"text": "A next generation of AESA antennas will be challenged with the need for lower size, weight, power and cost (SWAP-C). This leads to enhanced demands especially with regard to the integration density of the RF-part inside a T/R module. The semiconductor material GaN has proven its capacity for high power amplifiers, robust receive components as well as switch components for separation of transmit and receive mode. This paper will describe the design and measurement results of a GaN-based single-chip T/R module frontend (HPA, LNA and SPDT) using UMS GH25 technology and covering the frequency range from 8 GHz to 12 GHz. Key performance parameters of the frontend are 13 W minimum transmit (TX) output power over the whole frequency range with peak power up to 17 W. The frontend in receive (RX) mode has a noise figure below 3.2 dB over the whole frequency range, and can survive more than 5 W input power. The large signal insertion loss of the used SPDT is below 0.9 dB at 43 dBm input power level.",
"title": ""
},
{
"docid": "13cfc33bd8611b3baaa9be37ea9d627e",
"text": "Some of the more difficult to define aspects of the therapeutic process (empathy, compassion, presence) remain some of the most important. Teaching them presents a challenge for therapist trainees and educators alike. In this study, we examine our beginning practicum students' experience of learning mindfulness meditation as a way to help them develop therapeutic presence. Through thematic analysis of their journal entries a variety of themes emerged, including the effects of meditation practice, the ability to be present, balancing being and doing modes in therapy, and the development of acceptance and compassion for themselves and for their clients. Our findings suggest that mindfulness meditation may be a useful addition to clinical training.",
"title": ""
},
{
"docid": "03625364ccde0155f2c061b47e3a00b8",
"text": "The computation of selectional preferences, the admissible argument values for a relation, is a well-known NLP task with broad applicability. We present LDA-SP, which utilizes LinkLDA (Erosheva et al., 2004) to model selectional preferences. By simultaneously inferring latent topics and topic distributions over relations, LDA-SP combines the benefits of previous approaches: like traditional classbased approaches, it produces humaninterpretable classes describing each relation’s preferences, but it is competitive with non-class-based methods in predictive power. We compare LDA-SP to several state-ofthe-art methods achieving an 85% increase in recall at 0.9 precision over mutual information (Erk, 2007). We also evaluate LDA-SP’s effectiveness at filtering improper applications of inference rules, where we show substantial improvement over Pantel et al.’s system (Pantel et al., 2007).",
"title": ""
},
{
"docid": "779fba8ff7f59d3571cfe4c1803671e3",
"text": "This paper describes the design of an indirect current feedback Instrumentation Amplifier (IA). Transistor sizing plays a major role in achieving the desired gain, the Common Mode Rejection Ratio (CMRR) and the bandwidth of the Instrumentation Amplifier. A gm/ID based design methodology is employed to design the functional blocks of the IA. It links the design variables of each functional block to its target specifications and is used to develop design charts that are used to accurately size the transistors. The IA thus designed achieves a voltage gain of 31dB with a bandwidth 1.2MHz and a CMRR of 87dB at 1MHz. The circuit design is carried out using 0.18μm CMOS process.",
"title": ""
},
{
"docid": "b1a508ecaa6fef0583b430fc0074af74",
"text": "Recent past has seen a lot of developments in the field of image-based dietary assessment. Food image classification and recognition are crucial steps for dietary assessment. In the last couple of years, advancements in the deep learning and convolutional neural networks proved to be a boon for the image classification and recognition tasks, specifically for food recognition because of the wide variety of food items. In this paper, we report experiments on food/non-food classification and food recognition using a GoogLeNet model based on deep convolutional neural network. The experiments were conducted on two image datasets created by our own, where the images were collected from existing image datasets, social media, and imaging devices such as smart phone and wearable cameras. Experimental results show a high accuracy of 99.2% on the food/non-food classification and 83.6% on the food category recognition.",
"title": ""
},
{
"docid": "755820a345dea56c4631ee14467e2e41",
"text": "This paper presents a novel six-axis force/torque (F/T) sensor for robotic applications that is self-contained, rugged, and inexpensive. Six capacitive sensor cells are adopted to detect three normal and three shear forces. Six sensor cell readings are converted to F/T information via calibrations and transformation. To simplify the manufacturing processes, a sensor design with parallel and orthogonal arrangements of sensing cells is proposed, which achieves the large improvement of the sensitivity. Also, the signal processing is realized with a single printed circuit board and a ground plate, and thus, we make it possible to build a lightweight six-axis F/T sensor with simple manufacturing processes at extremely low cost. The sensor is manufactured and its performances are validated by comparing them with a commercial six-axis F/T sensor.",
"title": ""
},
{
"docid": "a07338beeb3246954815e0389c59ae29",
"text": "We have proposed gate-all-around Silicon nanowire MOSFET (SNWFET) on bulk Si as an ultimate transistor. Well controlled processes are used to achieve gate length (LG) of sub-10nm and narrow nanowire widths. Excellent performance with reasonable VTH and short channel immunity are achieved owing to thin nanowire channel, self-aligned gate, and GAA structure. Transistor performance with gate length of 10nm has been demonstrated and nanowire size (DNW) dependency of various electrical characteristics has been investigated. Random telegraph noise (RTN) in SNWFET is studied as well.",
"title": ""
},
{
"docid": "17f0fbd3ab3b773b5ef9d636700b5af6",
"text": "Motor sequence learning is a process whereby a series of elementary movements is re-coded into an efficient representation for the entire sequence. Here we show that human subjects learn a visuomotor sequence by spontaneously chunking the elementary movements, while each chunk acts as a single memory unit. The subjects learned to press a sequence of 10 sets of two buttons through trial and error. By examining the temporal patterns with which subjects performed a visuomotor sequence, we found that the subjects performed the 10 sets as several clusters of sets, which were separated by long time gaps. While the overall performance time decreased by repeating the same sequence, the clusters became clearer and more consistent. The cluster pattern was uncorrelated with the distance of hand movements and was different across subjects who learned the same sequence. We then split a learned sequence into three segments, while preserving or destroying the clusters in the learned sequence, and shuffled the segments. The performance on the shuffled sequence was more accurate and quicker when the clusters in the original sequence were preserved than when they were destroyed. The results suggest that each cluster is processed as a single memory unit, a chunk, and is necessary for efficient sequence processing. A learned visuomotor sequence is hierarchically represented as chunks that contain several elementary movements. We also found that the temporal patterns of sequence performance transferred from the nondominant to dominant hand, but not vice versa. This may suggest a role of the dominant hemisphere in storage of learned chunks. Together with our previous unit-recording and imaging studies that used the same learning paradigm, we predict specific roles of the dominant parietal area, basal ganglia, and presupplementary motor area in the chunking.",
"title": ""
},
{
"docid": "2ab8c692ef55d2501ff61f487f91da9c",
"text": "A common discussion subject for the male part of the population in particular, is the prediction of next weekend’s soccer matches, especially for the local team. Knowledge of offensive and defensive skills is valuable in the decision process before making a bet at a bookmaker. In this article we take an applied statistician’s approach to the problem, suggesting a Bayesian dynamic generalised linear model to estimate the time dependent skills of all teams in a league, and to predict next weekend’s soccer matches. The problem is more intricate than it may appear at first glance, as we need to estimate the skills of all teams simultaneously as they are dependent. It is now possible to deal with such inference problems using the iterative simulation technique known as Markov Chain Monte Carlo. We will show various applications of the proposed model based on the English Premier League and Division 1 1997-98; Prediction with application to betting, retrospective analysis of the final ranking, detection of surprising matches and how each team’s properties vary during the season.",
"title": ""
},
{
"docid": "84e71d32b1f40eb59d63a0ec6324d79b",
"text": "Typically a classifier trained on a given dataset (source domain) does not performs well if it is tested on data acquired in a different setting (target domain). This is the problem that domain adaptation (DA) tries to overcome and, while it is a well explored topic in computer vision, it is largely ignored in robotic vision where usually visual classification methods are trained and tested in the same domain. Robots should be able to deal with unknown environments, recognize objects and use them in the correct way, so it is important to explore the domain adaptation scenario also in this context. The goal of the project is to define a benchmark and a protocol for multimodal domain adaptation that is valuable for the robot vision community. With this purpose some of the state-of-the-art DA methods are selected: Deep Adaptation Network (DAN), Domain Adversarial Training of Neural Network (DANN), Automatic Domain Alignment Layers (AutoDIAL) and Adversarial Discriminative Domain Adaptation (ADDA). Evaluations have been done using different data types: RGB only, depth only and RGB-D over the following datasets, designed for the robotic community: RGB-D Object Dataset (ROD), Web Object Dataset (WOD), Autonomous Robot Indoor Dataset (ARID), Big Berkeley Instance Recognition Dataset (BigBIRD) and Active Vision Dataset. Although progresses have been made on the formulation of effective adaptation algorithms and more realistic object datasets are available, the results obtained show that, training a sufficiently good object classifier, especially in the domain adaptation scenario, is still an unsolved problem. Also the best way to combine depth with RGB informations to improve the performance is a point that needs to be investigated more.",
"title": ""
},
{
"docid": "37d353f5b8f0034209f75a3848580642",
"text": "(NR) is the first interactive data repository with a web-based platform for visual interactive analytics. Unlike other data repositories (e.g., UCI ML Data Repository, and SNAP), the network data repository (networkrepository.com) allows users to not only download, but to interactively analyze and visualize such data using our web-based interactive graph analytics platform. Users can in real-time analyze, visualize, compare, and explore data along many different dimensions. The aim of NR is to make it easy to discover key insights into the data extremely fast with little effort while also providing a medium for users to share data, visualizations, and insights. Other key factors that differentiate NR from the current data repositories is the number of graph datasets, their size, and variety. While other data repositories are static, they also lack a means for users to collaboratively discuss a particular dataset, corrections, or challenges with using the data for certain applications. In contrast, NR incorporates many social and collaborative aspects that facilitate scientific research, e.g., users can discuss each graph, post observations, and visualizations.",
"title": ""
},
{
"docid": "4c9313e27c290ccc41f3874108593bf6",
"text": "Very few standards exist for fitting products to people. Footwear is a noteworthy example. This study is an attempt to evaluate the quality of footwear fit using two-dimensional foot outlines. Twenty Hong Kong Chinese students participated in an experiment that involved three pairs of dress shoes and one pair of athletic shoes. The participants' feet were scanned using a commercial laser scanner, and each participant wore and rated the fit of each region of each shoe. The shoe lasts were also scanned and were used to match the foot scans with the last scans. The ANOVA showed significant (p < 0.05) differences among the four pairs of shoes for the overall, fore-foot and rear-foot fit ratings. There were no significant differences among shoes for mid-foot fit rating. These perceived differences were further analysed after matching the 2D outlines of both last and feet. The point-wise dimensional difference between foot and shoe outlines were computed and analysed after normalizing with foot perimeter. The dimensional difference (DD) plots along the foot perimeter showed that fore-foot fit was strongly correlated (R(2) > 0.8) with two of the minimums in the DD-plot while mid-foot fit was strongly correlated (R(2) > 0.9) with the dimensional difference around the arch region and a point on the lateral side of the foot. The DD-plots allow the designer to determine the critical locations that may affect footwear fit in addition to quantifying the nature of misfit so that design changes to shape and material may be possible.",
"title": ""
},
{
"docid": "e2bdc37afbe20e8281aaae302ed4cd7e",
"text": "Some obtained results related to an ongoing project which aims at providing a comprehensive approach for implementation of Internet of Things concept into the military domain are presented. A comprehensive approach to fault diagnosis within the Internet of Military Things was outlined. Particularly a method of fault detection which is based on a network partitioning into clusters was proposed. Also, some solutions proposed for the experimentally constructed network called EFTSN was conducted.",
"title": ""
},
{
"docid": "112931102c7c68e6e1e056f18593dbbc",
"text": "Graphical passwords were proposed as an alternative to overcome the inherent limitations of text-based passwords, inspired by research that shows that the graphical memory of humans is particularly well developed. A graphical password scheme that has been widely adopted is the Android Unlock Pattern, a special case of the Pass-Go scheme with grid size restricted to 3x3 points and restricted stroke count.\n In this paper, we study the security of Android unlock patterns. By performing a large-scale user study, we measure actual user choices of patterns instead of theoretical considerations on password spaces. From this data we construct a model based on Markov chains that enables us to quantify the strength of Android unlock patterns. We found empirically that there is a high bias in the pattern selection process, e.g., the upper left corner and three-point long straight lines are very typical selection strategies. Consequently, the entropy of patterns is rather low, and our results indicate that the security offered by the scheme is less than the security of only three digit randomly-assigned PINs for guessing 20% of all passwords (i.e., we estimate a partial guessing entropy G_0.2 of 9.10 bit).\n Based on these insights, we systematically improve the scheme by finding a small, but still effective change in the pattern layout that makes graphical user logins substantially more secure. By means of another user study, we show that some changes improve the security by more than doubling the space of actually used passwords (i.e., increasing the partial guessing entropy G_0.2 to 10.81 bit).",
"title": ""
},
{
"docid": "ef3598b448179b7a788444193bc77d62",
"text": "The human visual system has the remarkably ability to be able to effortlessly learn novel concepts from only a few examples. Mimicking the same behavior on machine learning vision systems is an interesting and very challenging research problem with many practical advantages on real world vision applications. In this context, the goal of our work is to devise a few-shot visual learning system that during test time it will be able to efficiently learn novel categories from only a few training data while at the same time it will not forget the initial categories on which it was trained (here called base categories). To achieve that goal we propose (a) to extend an object recognition system with an attention based few-shot classification weight generator, and (b) to redesign the classifier of a ConvNet model as the cosine similarity function between feature representations and classification weight vectors. The latter, apart from unifying the recognition of both novel and base categories, it also leads to feature representations that generalize better on \"unseen\" categories. We extensively evaluate our approach on Mini-ImageNet where we manage to improve the prior state-of-the-art on few-shot recognition (i.e., we achieve 56.20% and 73.00% on the 1-shot and 5-shot settings respectively) while at the same time we do not sacrifice any accuracy on the base categories, which is a characteristic that most prior approaches lack. Finally, we apply our approach on the recently introduced few-shot benchmark of Bharath and Girshick [4] where we also achieve state-of-the-art results.",
"title": ""
}
] | scidocsrr |
94e2c515da44e97d8b7db8821ebcb2e4 | Two systems for empathy: a double dissociation between emotional and cognitive empathy in inferior frontal gyrus versus ventromedial prefrontal lesions. | [
{
"docid": "ad2655aaed8a4f3379cb206c6e405f16",
"text": "Lesions of the orbital frontal lobe, particularly its medial sectors, are known to cause deficits in empathic ability, whereas the role of this region in theory of mind processing is the subject of some controversy. In a functional magnetic resonance imaging study with healthy participants, emotional perspective-taking was contrasted with cognitive perspective-taking in order to examine the role of the orbital frontal lobe in subcomponents of theory of mind processing. Subjects responded to a series of scenarios presented visually in three conditions: emotional perspective-taking, cognitive perspective-taking and a control condition that required inferential reasoning, but not perspective-taking. Group results demonstrated that the medial orbitofrontal lobe, defined as Brodmann's areas 11 and 25, was preferentially involved in emotional as compared to cognitive perspective-taking. This finding is both consistent with the lesion literature, and resolves the inconsistency of orbital frontal findings in the theory of mind literature.",
"title": ""
},
{
"docid": "6a4437fa8a5a764d99ed5471401f5ce4",
"text": "There is disagreement in the literature about the exact nature of the phenomenon of empathy. There are emotional, cognitive, and conditioning views, applying in varying degrees across species. An adequate description of the ultimate and proximate mechanism can integrate these views. Proximately, the perception of an object's state activates the subject's corresponding representations, which in turn activate somatic and autonomic responses. This mechanism supports basic behaviors (e.g., alarm, social facilitation, vicariousness of emotions, mother-infant responsiveness, and the modeling of competitors and predators) that are crucial for the reproductive success of animals living in groups. The Perception-Action Model (PAM), together with an understanding of how representations change with experience, can explain the major empirical effects in the literature (similarity, familiarity, past experience, explicit teaching, and salience). It can also predict a variety of empathy disorders. The interaction between the PAM and prefrontal functioning can also explain different levels of empathy across species and age groups. This view can advance our evolutionary understanding of empathy beyond inclusive fitness and reciprocal altruism and can explain different levels of empathy across individuals, species, stages of development, and situations.",
"title": ""
}
] | [
{
"docid": "a338df86cf504d246000c42512473f93",
"text": "Natural Language Processing (NLP) has emerged with a wide scope of research in the area. The Burmese language, also called the Myanmar Language is a resource scarce, tonal, analytical, syllable-timed and principally monosyllabic language with Subject-Object-Verb (SOV) ordering. NLP of Burmese language is also challenged by the fact that it has no white spaces and word boundaries. Keeping these facts in view, the current paper is a first formal attempt to present a bibliography of research works pertinent to NLP tasks in Burmese language. Instead of presenting mere catalogue, the current work is also specifically elaborated by annotations as well as classifications of NLP task research works in NLP related categories. The paper presents the state-of-the-art of Burmese NLP tasks. Both annotations and classifications of NLP tasks of Burmese language are useful to the scientific community as it shows where the field of research in Burmese NLP is going. In fact, to the best of author’s knowledge, this is first work of its kind worldwide for any language. For a period spanning more than 25 years, the paper discusses Burmese language Word Identification, Segmentation, Disambiguation, Collation, Semantic Parsing and Tokenization followed by Part-Of-Speech (POS) Tagging, Machine Translation Systems (MTS), Text Keying/Input, Recognition and Text Display Methods. Burmese language WordNet, Search Engine and influence of other languages on Burmese language are also discussed.",
"title": ""
},
{
"docid": "3ea6de664a7ac43a1602b03b46790f0a",
"text": "After reviewing the design of a class of lowpass recursive digital filters having integer multiplier and linear phase characteristics, the possibilities for extending the class to include high pass, bandpass, and bandstop (‘notch’) filters are described. Experience with a PDP 11 computer has shown that these filters may be programmed simply using machine code, and that online operation at sampling rates up to about 8 kHz is possible. The practical application of such filters is illustrated by using a notch desgin to remove mains-frequency interference from an e.c.g. waveform. Après avoir passé en revue la conception d'un type de filtres digitaux récurrents passe-bas à multiplicateurs incorporés et à caractéristiques de phase linéaires, cet article décrit les possibilités d'extension de ce type aux filtres, passe-haut, passe-bande et à élimination de bande. Une expérience menée avec un ordinateur PDP 11 a indiqué que ces filtres peuvent être programmés de manière simple avec un code machine, et qu'il est possible d'effectuer des opérations en ligne avec des taux d'échantillonnage jusqu'à environ 8 kHz. L'application pratique de tels filtres est illustrée par un exemple dans lequel un filtre à élimination de bande est utilisé pour éliminer les interférences due à la fréquence du courant d'alimentation dans un tracé d'e.c.g. Nach einer Untersuchung der Konstruktion einer Gruppe von Rekursivdigitalfiltern mit niedrigem Durchlässigkeitsbereich und mit ganzzahligen Multipliziereinrichtungen und Linearphaseneigenschaften werden die Möglichkeiten beschrieben, die Gruppe so zu erweitern, daß sie Hochfilter, Bandpaßfilter und Bandstopfilter (“Kerbfilter”) einschließt. Erfahrungen mit einem PDP 11-Computer haben gezeigt, daß diese Filter auf einfache Weise unter Verwendung von Maschinenkode programmiert werden können und daß On-Line-Betrieb bei Entnahmegeschwindigkeiten von bis zu 8 kHz möglich ist. Die praktische Anwendung solcher Filter wird durch Verwendung einer Kerbkonstruktion zur Ausscheidung von Netzfrequenzstörungen von einer ECG-Wellenform illustriert.",
"title": ""
},
{
"docid": "6a6238bb56eacc7d8ecc8f15f753b745",
"text": "Privacy-preservation has emerged to be a major concern in devising a data mining system. But, protecting the privacy of data mining input does not guarantee a privacy-preserved output. This paper focuses on preserving the privacy of data mining output and particularly the output of classification task. Further, instead of static datasets, we consider the classification of continuously arriving data streams: a rapidly growing research area. Due to the challenges of data stream classification such as vast volume, a mixture of labeled and unlabeled instances throughout the stream and timely classifier publication, enforcing privacy-preservation techniques becomes even more challenging. In order to achieve this goal, we propose a systematic method for preserving output-privacy in data stream classification that addresses several applications like loan approval, credit card fraud detection, disease outbreak or biological attack detection. Specifically, we propose an algorithm named Diverse and k-Anonymized HOeffding Tree (DAHOT) that is an amalgamation of popular data stream classification algorithm Hoeffding tree and a variant of k-anonymity and l-diversity principles. The empirical results on real and synthetic data streams verify the effectiveness of DAHOT as compared to its bedrock Hoeffding tree and two other techniques, one that learns sanitized decision trees from sampled data stream and other technique that uses ensemble-based classification. DAHOT guarantees to preserve the private patterns while classifying the data streams accurately.",
"title": ""
},
{
"docid": "d7aac1208aa2ef63ed9a4ef5b67d8017",
"text": "We contrast two theoretical approaches to social influence, one stressing interpersonal dependence, conceptualized as normative and informational influence (Deutsch & Gerard, 1955), and the other stressing group membership, conceptualized as self-categorization and referent informational influence (Turner, Hogg, Oakes, Reicher & Wetherell, 1987). We argue that both social comparisons to reduce uncertainty and the existence of normative pressure to comply depend on perceiving the source of influence as belonging to one's own category. This study tested these two approaches using three influence paradigms. First we demonstrate that, in Sherif's (1936) autokinetic effect paradigm, the impact of confederates on the formation of a norm decreases as their membership of a different category is made more salient to subjects. Second, in the Asch (1956) conformity paradigm, surveillance effectively exerts normative pressure if done by an in-group but not by an out-group. In-group influence decreases and out-group influence increases when subjects respond privately. Self-report data indicate that in-group confederates create more subjective uncertainty than out-group confederates and public responding seems to increase cohesiveness with in-group - but decrease it with out-group - sources of influence. In our third experiment we use the group polarization paradigm (e.g. Burnstein & Vinokur, 1973) to demonstrate that, when categorical differences between two subgroups within a discussion group are made salient, convergence of opinion between the subgroups is inhibited. Taken together the experiments show that self-categorization can be a crucial determining factor in social influence.",
"title": ""
},
{
"docid": "efae02feebc4a2efe2cf98ab4d19cd34",
"text": "User behavior on the Web changes over time. For example, the queries that people issue to search engines, and the underlying informational goals behind the queries vary over time. In this paper, we examine how to model and predict this temporal user behavior. We develop a temporal modeling framework adapted from physics and signal processing that can be used to predict time-varying user behavior using smoothing and trends. We also explore other dynamics of Web behaviors, such as the detection of periodicities and surprises. We develop a learning procedure that can be used to construct models of users' activities based on features of current and historical behaviors. The results of experiments indicate that by using our framework to predict user behavior, we can achieve significant improvements in prediction compared to baseline models that weight historical evidence the same for all queries. We also develop a novel learning algorithm that explicitly learns when to apply a given prediction model among a set of such models. Our improved temporal modeling of user behavior can be used to enhance query suggestions, crawling policies, and result ranking.",
"title": ""
},
{
"docid": "9cdc7b6b382ce24362274b75da727183",
"text": "Collaborative spectrum sensing is subject to the attack of malicious secondary user(s), which may send false reports. Therefore, it is necessary to detect potential attacker(s) and then exclude the attacker's report for spectrum sensing. Many existing attacker-detection schemes are based on the knowledge of the attacker's strategy and thus apply the Bayesian attacker detection. However, in practical cognitive radio systems the data fusion center typically does not know the attacker's strategy. To alleviate the problem of the unknown strategy of attacker(s), an abnormality-detection approach, based on the abnormality detection in data mining, is proposed. The performance of the attacker detection in the single-attacker scenario is analyzed explicitly. For the case in which the attacker does not know the reports of honest secondary users (called independent attack), it is shown that the attacker can always be detected as the number of spectrum sensing rounds tends to infinity. For the case in which the attacker knows all the reports of other secondary users, based on which the attacker sends its report (called dependent attack), an approach for the attacker to perfectly avoid being detected is found, provided that the attacker has perfect information about the miss-detection and false-alarm probabilities. This motivates cognitive radio networks to protect the reports of secondary users. The performance of attacker detection in the general case of multiple attackers is demonstrated using numerical simulations.",
"title": ""
},
{
"docid": "6e8d1b5c2183ce09aadb09e4ff215241",
"text": "The widely used ChestX-ray14 dataset addresses an important medical image classification problem and has the following caveats: 1) many lung pathologies are visually similar, 2) a variant of diseases including lung cancer, tuberculosis, and pneumonia are present in a single scan, i.e. multiple labels and 3) The incidence of healthy images is much larger than diseased samples, creating imbalanced data. These properties are common in medical domain. Existing literature uses stateof-the-art DensetNet/Resnet models being transfer learned where output neurons of the networks are trained for individual diseases to cater for multiple diseases labels in each image. However, most of them don’t consider relationship between multiple classes. In this work we have proposed a novel error function, Multi-label Softmax Loss (MSML), to specifically address the properties of multiple labels and imbalanced data. Moreover, we have designed deep network architecture based on fine-grained classification concept that incorporates MSML. We have evaluated our proposed method on various network backbones and showed consistent performance improvements of AUC-ROC scores on the ChestX-ray14 dataset. The proposed error function provides a new method to gain improved performance across wider medical datasets.",
"title": ""
},
{
"docid": "0fd48f6f0f5ef1e68c2a157c16713e86",
"text": "Location distinction is the ability to determine when a device has changed its position. We explore the opportunity to use sophisticated PHY-layer measurements in wireless networking systems for location distinction. We first compare two existing location distinction methods - one based on channel gains of multi-tonal probes, and another on channel impulse response. Next, we combine the benefits of these two methods to develop a new link measurement that we call the complex temporal signature. We use a 2.4 GHz link measurement data set, obtained from CRAWDAD [10], to evaluate the three location distinction methods. We find that the complex temporal signature method performs significantly better compared to the existing methods. We also perform new measurements to understand and model the temporal behavior of link signatures over time. We integrate our model in our location distinction mechanism and significantly reduce the probability of false alarms due to temporal variations of link signatures.",
"title": ""
},
{
"docid": "37dbfc84d3b04b990d8b3b31d2013f77",
"text": "Large projects such as kernels, drivers and libraries follow a code style, and have recurring patterns. In this project, we explore learning based code recommendation, to use the project context and give meaningful suggestions. Using word vectors to model code tokens, and neural network based learning techniques, we are able to capture interesting patterns, and predict code that that cannot be predicted by a simple grammar and syntax based approach as in conventional IDEs. We achieve a total prediction accuracy of 56.0% on Linux kernel, a C project, and 40.6% on Twisted, a Python networking library.",
"title": ""
},
{
"docid": "eb7ccd69c0bbb4e421b8db3b265f5ba6",
"text": "The discovery of Novoselov et al. (2004) of a simple method to transfer a single atomic layer of carbon from the c-face of graphite to a substrate suitable for the measurement of its electrical and optical properties has led to a renewed interest in what was considered to be before that time a prototypical, yet theoretical, two-dimensional system. Indeed, recent theoretical studies of graphene reveal that the linear electronic band dispersion near the Brillouin zone corners gives rise to electrons and holes that propagate as if they were massless fermions and anomalous quantum transport was experimentally observed. Recent calculations and experimental determination of the optical phonons of graphene reveal Kohn anomalies at high-symmetry points in the Brillouin zone. They also show that the Born– Oppenheimer principle breaks down for doped graphene. Since a carbon nanotube can be viewed as a rolled-up sheet of graphene, these recent theoretical and experimental results on graphene should be important to researchers working on carbon nanotubes. The goal of this contribution is to review the exciting news about the electronic and phonon states of graphene and to suggest how these discoveries help understand the properties of carbon nanotubes.",
"title": ""
},
{
"docid": "f7e14c5e8a54e01c3b8f64e08f30a500",
"text": "As a subsystem of an Intelligent Transportation System (ITS), an Advanced Traveller Information System (ATIS) disseminates real-time traffic information to travellers. This paper analyses traffic flows data, describes methodology of traffic flows data processing and visualization in digital ArcGIS online maps. Calculation based on real time traffic data from equipped traffic sensors in Vilnius city streets. The paper also discusses about traffic conditions and impacts for Vilnius streets network from the point of traffic flows view. Furthermore, a comprehensive traffic flow GIS modelling procedure is presented, which relates traffic flows data from sensors to street network segments and updates traffic flow data to GIS database. GIS maps examples and traffic flows analysis possibilities in this paper presented as well.",
"title": ""
},
{
"docid": "a1bb09726327d73cf73c1aa9b0a2c39d",
"text": "Advances in neural network language models have demonstrated that these models can effectively learn representations of words meaning. In this paper, we explore a variation of neural language models that can learn on concepts taken from structured ontologies and extracted from free-text, rather than directly from terms in free-text.\n This model is employed for the task of measuring semantic similarity between medical concepts, a task that is central to a number of techniques in medical informatics and information retrieval. The model is built with two medical corpora (journal abstracts and patient records) and empirically validated on two ground-truth datasets of human-judged concept pairs assessed by medical professionals. Empirically, our approach correlates closely with expert human assessors (≈0.9) and outperforms a number of state-of-the-art benchmarks for medical semantic similarity.\n The demonstrated superiority of this model for providing an effective semantic similarity measure is promising in that this may translate into effectiveness gains for techniques in medical information retrieval and medical informatics (e.g., query expansion and literature-based discovery).",
"title": ""
},
{
"docid": "c1978e4936ed5bda4e51863dea7e93ee",
"text": "In needle-based medical procedures, beveled-tip flexible needles are steered inside soft tissue with the aim of reaching pre-defined target locations. The efficiency of needle-based interventions depends on accurate control of the needle tip. This paper presents a comprehensive mechanics-based model for simulation of planar needle insertion in soft tissue. The proposed model for needle deflection is based on beam theory, works in real-time, and accepts the insertion velocity as an input that can later be used as a control command for needle steering. The model takes into account the effects of tissue deformation, needle-tissue friction, tissue cutting force, and needle bevel angle on needle deflection. Using a robot that inserts a flexible needle into a phantom tissue, various experiments are conducted to separately identify different subsets of the model parameters. The validity of the proposed model is verified by comparing the simulation results to the empirical data. The results demonstrate the accuracy of the proposed model in predicting the needle tip deflection for different insertion velocities.",
"title": ""
},
{
"docid": "0245101fac73b247fb2750413aad3915",
"text": "State evaluation and opponent modelling are important areas to consider when designing game-playing Artificial Intelligence. This paper presents a model for predicting which player will win in the real-time strategy game StarCraft. Model weights are learned from replays using logistic regression. We also present some metrics for estimating player skill which can be used a features in the predictive model, including using a battle simulation as a baseline to compare player performance against.",
"title": ""
},
{
"docid": "ba39b85859548caa2d3f1d51a7763482",
"text": "A new antenna structure of internal LTE/WWAN laptop computer antenna formed by a coupled-fed loop antenna connected with two branch radiators is presented. The two branch radiators consist of one longer strip and one shorter strip, both contributing multi-resonant modes to enhance the bandwidth of the antenna. The antenna's lower band is formed by a dual-resonant mode mainly contributed by the longer branch strip, while the upper band is formed by three resonant modes contributed respectively by one higher-order resonant mode of the longer branch strip, one resonant mode of the coupled-fed loop antenna alone, and one resonant mode of the shorter branch strip. The antenna's lower and upper bands can therefore cover the desired 698~960 and 1710~2690 MHz bands, respectively. The proposed antenna is suitable to be mounted at the top shielding metal wall of the display ground of the laptop computer and occupies a small volume of 4 × 10 × 75 mm3 above the top shielding metal wall, which makes it promising to be embedded inside the casing of the laptop computer as an internal antenna.",
"title": ""
},
{
"docid": "dca65464cc8a3bb59f2544ef9a09e388",
"text": "Some authors clearly showed that faking reduces the construct validity of personality questionnaires, whilst many others found no such effect. A possible explanation for mixed results could be searched for in a variety of methodological strategies in forming comparison groups supposed to differ in the level of faking: candidates vs. non-candidates; groups of individuals with \"high\" vs. \"low\" social desirability score; and groups given instructions to respond honestly vs. instructions to \"fake good\". All three strategies may be criticized for addressing the faking problem indirectly – assuming that comparison groups really differ in the level of response distortion, which might not be true. Therefore, in a within-subject design study we examined how faking affects the construct validity of personality inventories using a direct measure of faking. The results suggest that faking reduces the construct validity of personality questionnaires gradually – the effect was stronger in the subsample of participants who distorted their responses to a greater extent.",
"title": ""
},
{
"docid": "4cf669d93a62c480f4f6795f47744bc8",
"text": "We present an estimate of an upper bound of 1.75 bits for the entropy of characters in printed English, obtained by constructing a word trigram model and then computing the cross-entropy between this model and a balanced sample of English text. We suggest the well-known and widely available Brown Corpus of printed English as a standard against which to measure progress in language modeling and offer our bound as the first of what we hope will be a series of steadily decreasing bounds.",
"title": ""
},
{
"docid": "b70d795f7f1bdbc18be034e1d3f20f8e",
"text": "Technical universities, especially in Europe, are facing an important challenge in attracting more diverse groups of students, and in keeping the students they attract motivated and engaged in the curriculum. We describe our experience with gamification, which we loosely define as a teaching technique that uses social gaming elements to deliver higher education. Over the past three years, we have applied gamification to undergraduate and graduate courses in a leading technical university in the Netherlands and in Europe. Ours is one of the first long-running attempts to show that gamification can be used to teach technically challenging courses. The two gamification-based courses, the first-year B.Sc. course Computer Organization and an M.Sc.-level course on the emerging technology of Cloud Computing, have been cumulatively followed by over 450 students and passed by over 75% of them, at the first attempt. We find that gamification is correlated with an increase in the percentage of passing students, and in the participation in voluntary activities and challenging assignments. Gamification seems to also foster interaction in the classroom and trigger students to pay more attention to the design of the course. We also observe very positive student assessments and volunteered testimonials, and a Teacher of the Year award.",
"title": ""
},
{
"docid": "4040c04a9a3cfebe850229cc78233f8c",
"text": "Utility computing delivers compute and storage resources to applications as an 'on-demand utility', much like electricity, from a distributed collection of computing resources. There is great interest in running database applications on utility resources (e.g., Oracle's Grid initiative) due to reduced infrastructure and management costs, higher resource utilization, and the ability to handle sudden load surges. Virtual Machine (VM) technology offers powerful mechanisms to manage a utility resource infrastructure. However, provisioning VMs for applications to meet system performance goals, e.g., to meet service level agreements (SLAs), is an open problem. We are building two systems at Duke - Shirako and NIMO - that collectively address this problem.\n Shirako is a toolkit for leasing VMs to an application from a utility resource infrastructure. NIMO learns application performance models using novel techniques based on active learning, and uses these models to guide VM provisioning in Shirako. We will demonstrate: (a) how NIMO learns performance models in an online and automatic fashion using active learning; and (b) how NIMO uses these models to do automated and on-demand provisioning of VMs in Shirako for two classes of database applications - multi-tier web services and computational science workflows.",
"title": ""
},
{
"docid": "7809fdedaf075955523b51b429638501",
"text": "PM10 prediction has attracted special legislative and scientific attention due to its harmful effects on human health. Statistical techniques have the potential for high-accuracy PM10 prediction and accordingly, previous studies on statistical methods for temporal, spatial and spatio-temporal prediction of PM10 are reviewed and discussed in this paper. A review of previous studies demonstrates that Support Vector Machines, Artificial Neural Networks and hybrid techniques show promise for suitable temporal PM10 prediction. A review of the spatial predictions of PM10 shows that the LUR (Land Use Regression) approach has been successfully utilized for spatial prediction of PM10 in urban areas. Of the six introduced approaches for spatio-temporal prediction of PM10, only one approach is suitable for high-resolved prediction (Spatial resolution < 100 m; Temporal resolution ď 24 h). In this approach, based upon the LUR modeling method, short-term dynamic input variables are employed as explanatory variables alongside typical non-dynamic input variables in a non-linear modeling procedure.",
"title": ""
}
] | scidocsrr |
02ce80dc277237d28e5b16de1f8a14d3 | Mobile-D: an agile approach for mobile application development | [
{
"docid": "67d704317471c71842a1dfe74ddd324a",
"text": "Agile software development methods have caught the attention of software engineers and researchers worldwide. Scientific research is yet scarce. This paper reports results from a study, which aims to organize, analyze and make sense out of the dispersed field of agile software development methods. The comparative analysis is performed using the method's life-cycle coverage, project management support, type of practical guidance, fitness-for-use and empirical evidence as the analytical lenses. The results show that agile software development methods, without rationalization, cover certain/different phases of the software development life-cycle and most of the them do not offer adequate support for project management. Yet, many methods still attempt to strive for universal solutions (as opposed to situation appropriate) and the empirical evidence is still very limited Based on the results, new directions are suggested In principal it is suggested to place emphasis on methodological quality -- not method quantity.",
"title": ""
}
] | [
{
"docid": "3f06fc0b50a1de5efd7682b4ae9f5a46",
"text": "We present ShadowDraw, a system for guiding the freeform drawing of objects. As the user draws, ShadowDraw dynamically updates a shadow image underlying the user's strokes. The shadows are suggestive of object contours that guide the user as they continue drawing. This paradigm is similar to tracing, with two major differences. First, we do not provide a single image from which the user can trace; rather ShadowDraw automatically blends relevant images from a large database to construct the shadows. Second, the system dynamically adapts to the user's drawings in real-time and produces suggestions accordingly. ShadowDraw works by efficiently matching local edge patches between the query, constructed from the current drawing, and a database of images. A hashing technique enforces both local and global similarity and provides sufficient speed for interactive feedback. Shadows are created by aggregating the edge maps from the best database matches, spatially weighted by their match scores. We test our approach with human subjects and show comparisons between the drawings that were produced with and without the system. The results show that our system produces more realistically proportioned line drawings.",
"title": ""
},
{
"docid": "74972989924aef7d8923d3297d221e23",
"text": "Emerging evidence suggests that a traumatic brain injury (TBI) in childhood may disrupt the ability to abstract the central meaning or gist-based memory from connected language (discourse). The current study adopts a novel approach to elucidate the role of immediate and working memory processes in producing a cohesive and coherent gist-based text in the form of a summary in children with mild and severe TBI as compared to typically developing children, ages 8-14 years at test. Both TBI groups showed decreased performance on a summary production task as well as retrieval of specific content from a long narrative. Working memory on n-back tasks was also impaired in children with severe TBI, whereas immediate memory performance for recall of a simple word list in both TBI groups was comparable to controls. Interestingly, working memory, but not simple immediate memory for a word list, was significantly correlated with summarization ability and ability to recall discourse content.",
"title": ""
},
{
"docid": "54df0e1a435d673053f9264a4c58e602",
"text": "Next location prediction anticipates a person’s movement based on the history of previous sojourns. It is useful for proactive actions taken to assist the person in an ubiquitous environment. This paper evaluates next location prediction methods: dynamic Bayesian network, multi-layer perceptron, Elman net, Markov predictor, and state predictor. For the Markov and state predictor we use additionally an optimization, the confidence counter. The criterions for the comparison are the prediction accuracy, the quantity of useful predictions, the stability, the learning, the relearning, the memory and computing costs, the modelling costs, the expandability, and the ability to predict the time of entering the next location. For evaluation we use the same benchmarks containing movement sequences of real persons within an office building.",
"title": ""
},
{
"docid": "919d86270951a89a14398ee796b4e542",
"text": "The role of the circadian clock in skin and the identity of genes participating in its chronobiology remain largely unknown, leading us to define the circadian transcriptome of mouse skin at two different stages of the hair cycle, telogen and anagen. The circadian transcriptomes of telogen and anagen skin are largely distinct, with the former dominated by genes involved in cell proliferation and metabolism. The expression of many metabolic genes is antiphasic to cell cycle-related genes, the former peaking during the day and the latter at night. Consistently, accumulation of reactive oxygen species, a byproduct of oxidative phosphorylation, and S-phase are antiphasic to each other in telogen skin. Furthermore, the circadian variation in S-phase is controlled by BMAL1 intrinsic to keratinocytes, because keratinocyte-specific deletion of Bmal1 obliterates time-of-day-dependent synchronicity of cell division in the epidermis leading to a constitutively elevated cell proliferation. In agreement with higher cellular susceptibility to UV-induced DNA damage during S-phase, we found that mice are most sensitive to UVB-induced DNA damage in the epidermis at night. Because in the human epidermis maximum numbers of keratinocytes go through S-phase in the late afternoon, we speculate that in humans the circadian clock imposes regulation of epidermal cell proliferation so that skin is at a particularly vulnerable stage during times of maximum UV exposure, thus contributing to the high incidence of human skin cancers.",
"title": ""
},
{
"docid": "0cfac94bf56f39386802571ecd45cd3b",
"text": "Cloud Computing provides functionality for managing information data in a distributed, ubiquitous and pervasive manner supporting several platforms, systems and applications. This work presents the implementation of a mobile system that enables electronic healthcare data storage, update and retrieval using Cloud Computing. The mobile application is developed using Google's Android operating system and provides management of patient health records and medical images (supporting DICOM format and JPEG2000 coding). The developed system has been evaluated using the Amazon's S3 cloud service. This article summarizes the implementation details and presents initial results of the system in practice.",
"title": ""
},
{
"docid": "76b081d26dc339218652cd6d7e0dfe4c",
"text": "Software developers working on change tasks commonly experience a broad range of emotions, ranging from happiness all the way to frustration and anger. Research, primarily in psychology, has shown that for certain kinds of tasks, emotions correlate with progress and that biometric measures, such as electro-dermal activity and electroencephalography data, might be used to distinguish between emotions. In our research, we are building on this work and investigate developers' emotions, progress and the use of biometric measures to classify them in the context of software change tasks. We conducted a lab study with 17 participants working on two change tasks each. Participants were wearing three biometric sensors and had to periodically assess their emotions and progress. The results show that the wide range of emotions experienced by developers is correlated with their perceived progress on the change tasks. Our analysis also shows that we can build a classifier to distinguish between positive and negative emotions in 71.36% and between low and high progress in 67.70% of all cases. These results open up opportunities for improving a developer's productivity. For instance, one could use such a classifier for providing recommendations at opportune moments when a developer is stuck and making no progress.",
"title": ""
},
{
"docid": "abd026e3f71c7e2a2b8d4fc8900b800f",
"text": "Text Summarization aims to generate concise and compressed form of original documents. The techniques used for text summarization may be categorized as extractive summarization and abstractive summarization. We consider extractive techniques which are based on selection of important sentences within a document. A major issue in extractive summarization is how to select important sentences, i.e., what criteria should be defined for selection of sentences which are eventually part of the summary. We examine this issue using rough sets notion of reducts. A reduct is an attribute subset which essentially contains the same information as the original attribute set. In particular, we defined and examined three types of matrices based on an information table, namely, discernibility matrix, indiscernibility matrix and equal to one matrix. Each of these matrices represents a certain type of relationship between the objects of an information table. Three types of reducts are determined based on these matrices. The reducts are used to select sentences and consequently generate text summaries. Experimental results and comparisons with existing approaches advocates for the use of the proposed approach in generating text summaries.",
"title": ""
},
{
"docid": "7cf8e2555cfccc1fc091272559ad78d7",
"text": "This paper presents a multimodal emotion recognition method that uses a feature-level combination of three-dimensional (3D) geometric features (coordinates, distance and angle of joints), kinematic features such as velocity and displacement of joints, and features extracted from daily behavioral patterns such as frequency of head nod, hand wave, and body gestures that represent specific emotions. Head, face, hand, body, and speech data were captured from 15 participants using an infrared sensor (Microsoft Kinect). The 3D geometric and kinematic features were developed using raw feature data from the visual channel. Human emotional behavior-based features were developed using inter-annotator agreement and commonly observed expressions, movements and postures associated to specific emotions. The features from each modality and the behavioral pattern-based features (head shake, arm retraction, body forward movement depicting anger) were combined to train the multimodal classifier for the emotion recognition system. The classifier was trained using 10-fold cross validation and support vector machine (SVM) to predict six basic emotions. The results showed improvement in emotion recognition accuracy (The precision increased by 3.28% and the recall rate by 3.17%) when the 3D geometric, kinematic, and human behavioral pattern-based features were combined for multimodal emotion recognition using supervised classification.",
"title": ""
},
{
"docid": "bf2f9a0387de2b2aa3136a2879a07e83",
"text": "Rich representations in reinforcement learning have been studied for the purpose of enabling generalization and making learning feasible in large state spaces. We introduce Object-Oriented MDPs (OO-MDPs), a representation based on objects and their interactions, which is a natural way of modeling environments and offers important generalization opportunities. We introduce a learning algorithm for deterministic OO-MDPs and prove a polynomial bound on its sample complexity. We illustrate the performance gains of our representation and algorithm in the well-known Taxi domain, plus a real-life videogame.",
"title": ""
},
{
"docid": "25c80c2fe20576ca6f94d5abac795521",
"text": "BACKGROUND\nIntelligence theory research has illustrated that people hold either \"fixed\" (intelligence is immutable) or \"growth\" (intelligence can be improved) mindsets and that these views may affect how people learn throughout their lifetime. Little is known about the mindsets of physicians, and how mindset may affect their lifetime learning and integration of feedback. Our objective was to determine if pediatric physicians are of the \"fixed\" or \"growth\" mindset and whether individual mindset affects perception of medical error reporting. \n\n\nMETHODS\nWe sent an anonymous electronic survey to pediatric residents and attending pediatricians at a tertiary care pediatric hospital. Respondents completed the \"Theories of Intelligence Inventory\" which classifies individuals on a 6-point scale ranging from 1 (Fixed Mindset) to 6 (Growth Mindset). Subsequent questions collected data on respondents' recall of medical errors by self or others.\n\n\nRESULTS\nWe received 176/349 responses (50 %). Participants were equally distributed between mindsets with 84 (49 %) classified as \"fixed\" and 86 (51 %) as \"growth\". Residents, fellows and attendings did not differ in terms of mindset. Mindset did not correlate with the small number of reported medical errors.\n\n\nCONCLUSIONS\nThere is no dominant theory of intelligence (mindset) amongst pediatric physicians. The distribution is similar to that seen in the general population. Mindset did not correlate with error reports.",
"title": ""
},
{
"docid": "082a077db6f8b0d41c613f9a50934239",
"text": "Traceability is recognized to be important for supporting agile development processes. However, after analyzing many of the existing traceability approaches it can be concluded that they strongly depend on traditional development process characteristics. Within this paper it is justified that this is a drawback to support adequately agile processes. As it is discussed, some concepts do not have the same semantics for traditional and agile methodologies. This paper proposes three features that traceability models should support to be less dependent on a specific development process: (1) user-definable traceability links, (2) roles, and (3) linkage rules. To present how these features can be applied, an emerging traceability metamodel (TmM) will be used within this paper. TmM supports the definition of traceability methodologies adapted to the needs of each project. As it is shown, after introducing these three features into traceability models, two main advantages are obtained: 1) the support they can provide to agile process stakeholders is significantly more extensive, and 2) it will be possible to achieve a higher degree of automation. In this sense it will be feasible to have a methodical trace acquisition and maintenance process adapted to agile processes.",
"title": ""
},
{
"docid": "2d7963a209ec1c7f38c206a0945a1a7e",
"text": "We present a system which enables a user to remove a le from both the le system and all the backup tapes on which the le is stored. The ability to remove les from all backup tapes is desirable in many cases. Our system erases information from the backup tape without actually writing on the tape. This is achieved by applying cryptography in a new way: a block cipher is used to enable the system to \\forget\" information rather than protect it. Our system is easy to install and is transparent to the end user. Further, it introduces no slowdown in system performance and little slowdown in the backup procedure.",
"title": ""
},
{
"docid": "de8045598fe808788aca455eee4a1126",
"text": "This paper presents an efficient and practical approach for automatic, unsupervised object detection and segmentation in two-texture images based on the concept of Gabor filter optimization. The entire process occurs within a hierarchical framework and consists of the steps of detection, coarse segmentation, and fine segmentation. In the object detection step, the image is first processed using a Gabor filter bank. Then, the histograms of the filtered responses are analyzed using the scale-space approach to predict the presence/absence of an object in the target image. If the presence of an object is reported, the proposed approach proceeds to the coarse segmentation stage, wherein the best Gabor filter (among the bank of filters) is automatically chosen, and used to segment the image into two distinct regions. Finally, in the fine segmentation step, the coefficients of the best Gabor filter (output from the previous stage) are iteratively refined in order to further fine-tune and improve the segmentation map produced by the coarse segmentation step. In the validation study, the proposed approach is applied as part of a machine vision scheme with the goal of quantifying the stain-release property of fabrics. To that end, the presented hierarchical scheme is used to detect and segment stains on a sizeable set of digitized fabric images, and the performance evaluation of the detection, coarse segmentation, and fine segmentation steps is conducted using appropriate metrics. The promising nature of these results bears testimony to the efficacy of the proposed approach.",
"title": ""
},
{
"docid": "72d75ebfc728d3b287bcaf429a6b2ee5",
"text": "We present a fully integrated 7nm CMOS platform featuring a 3rd generation finFET architecture, SAQP for fin formation, and SADP for BEOL metallization. This technology reflects an improvement of 2.8X routed logic density and >40% performance over the 14nm reference technology described in [1-3]. A full range of Vts is enabled on-chip through a unique multi-workfunction process. This enables both excellent low voltage SRAM response and highly scaled memory area simultaneously. The HD 6-T bitcell size is 0.0269um2. This 7nm technology is fully enabled by immersion lithography and advanced optical patterning techniques (like SAQP and SADP). However, the technology platform is also designed to leverage EUV insertion for specific multi-patterned (MP) levels for cycle time benefit and manufacturing efficiency. A complete set of foundation and complex IP is available in this advanced CMOS platform to enable both High Performance Compute (HPC) and mobile applications.",
"title": ""
},
{
"docid": "83637dc7109acc342d50366f498c141a",
"text": "With the further development of computer technology, the software development process has some new goals and requirements. In order to adapt to these changes, people has optimized and improved the previous method. At the same time, some of the traditional software development methods have been unable to adapt to the requirements of people. Therefore, in recent years there have been some new lightweight software process development methods, That is agile software development, which is widely used and promoted. In this paper the author will firstly introduces the background and development about agile software development, as well as comparison to the traditional software development. Then the second chapter gives the definition of agile software development and characteristics, principles and values. In the third chapter the author will highlight several different agile software development methods, and characteristics of each method. In the fourth chapter the author will cite a specific example, how agile software development is applied in specific areas.Finally the author will conclude his opinion. This article aims to give readers a overview of agile software development and how people use it in practice.",
"title": ""
},
{
"docid": "deedf390faeef304bf0479a844297113",
"text": "A compact 24-GHz Doppler radar module is developed in this paper for non-contact human vital-sign detection. The 24-GHz radar transceiver chip, transmitting and receiving antennas, baseband circuits, microcontroller, and Bluetooth transmission module have been integrated and implemented on a printed circuit board. For a measurement range of 1.5 m, the developed radar module can successfully detect the respiration and heartbeat of a human adult.",
"title": ""
},
{
"docid": "f15a7d48f3c42ccc97480204dc5c8622",
"text": "We have developed a wearable upper limb support system (ULSS) for support during heavy overhead tasks. The purpose of this study is to develop the voluntary motion support algorithm for the ULSS, and to confirm the effectiveness of the ULSS with the developed algorithm through dynamic evaluation experiments. The algorithm estimates the motor intention of the wearer based on a bioelectrical signal (BES). The ULSS measures the BES via electrodes attached onto the triceps brachii, deltoid, and clavicle. The BES changes in synchronization with the motion of the wearer's upper limbs. The algorithm changes a control phase by comparing the BES and threshold values. The algorithm achieves voluntary motion support for dynamic tasks by changing support torques of the ULSS in synchronization with the control phase. Five healthy adult males moved heavy loads vertically overhead in the evaluation experiments. In a random instruction experiment, the volunteers moved in synchronization with random instructions, and we confirmed that the control phase changes in synchronization with the random instructions. In a motion support experiment, we confirmed that the average number of the vertical motion with the ULSS increased 2.3 times compared to the average number without the ULSS. As a result, the ULSS with the algorithm supports the motion voluntarily, and it has a positive effect on the support. In conclusion, we could develop the novel voluntary motion support algorithm of the ULSS.",
"title": ""
},
{
"docid": "b2470ecd83971aa877d8a38a5b88a6dc",
"text": "In this paper, we improve the attention or alignment accuracy of neural machine translation by utilizing the alignments of training sentence pairs. We simply compute the distance between the machine attentions and the “true” alignments, and minimize this cost in the training procedure. Our experiments on large-scale Chinese-to-English task show that our model improves both translation and alignment qualities significantly over the large-vocabulary neural machine translation system, and even beats a state-of-the-art traditional syntax-based system.",
"title": ""
},
{
"docid": "e9b3ddc114998e25932819e3281e2e0c",
"text": "We study the problem of jointly aligning sentence constituents and predicting their similarities. While extensive sentence similarity data exists, manually generating reference alignments and labeling the similarities of the aligned chunks is comparatively onerous. This prompts the natural question of whether we can exploit easy-to-create sentence level data to train better aligners. In this paper, we present a model that learns to jointly align constituents of two sentences and also predict their similarities. By taking advantage of both sentence and constituent level data, we show that our model achieves state-of-the-art performance at predicting alignments and constituent similarities.",
"title": ""
},
{
"docid": "bffbc725b52468b41c53b156f6eadedb",
"text": "This paper presents the design and experimental evaluation of an underwater robot that is propelled by a pair of lateral undulatory fins, inspired by the locomotion of rays and cuttlefish. Each fin mechanism is comprised of three individually actuated fin rays, which are interconnected by an elastic membrane. An on-board microcontroller generates the rays’ motion pattern that result in the fins’ undulations, through which propulsion is generated. The prototype, which is fully untethered and energetically autonomous, also integrates an Inertial Measurement Unit for navigation purposes, a wireless communication module, and a video camera for recording underwater footage. Due to its small size and low manufacturing cost, the developed prototype can also serve as an educational platform for underwater robotics.",
"title": ""
}
] | scidocsrr |
9e94a07f70d58bc9c62a0aa9cd109816 | Next-Generation Machine Learning for Biological Networks | [
{
"docid": "3bb905351ce1ea2150f37059ed256a90",
"text": "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.",
"title": ""
}
] | [
{
"docid": "7c057b63c525a03ad2f40f625b6157e3",
"text": "As the selection of products and services becomes profuse in the technology market, it is often the delighting user experience (UX) that differentiates a successful product from the competitors. Product development is no longer about implementing features and testing their usability, but understanding users' daily lives and evaluating if a product resonates with the in-depth user needs. Although UX is a widely adopted term in industry, the tools for evaluating UX in product development are still inadequate. Based on industrial case studies and the latest research on UX evaluation, this workshop forms a model for aligning the used UX evaluation methods to product development processes. The results can be used to advance the state of \"putting UX evaluation into practice\".",
"title": ""
},
{
"docid": "96a79bc015e34db18e32a31bfaaace36",
"text": "We consider social media as a promising tool for public health, focusing on the use of Twitter posts to build predictive models about the forthcoming influence of childbirth on the behavior and mood of new mothers. Using Twitter posts, we quantify postpartum changes in 376 mothers along dimensions of social engagement, emotion, social network, and linguistic style. We then construct statistical models from a training set of observations of these measures before and after the reported childbirth, to forecast significant postpartum changes in mothers. The predictive models can classify mothers who will change significantly following childbirth with an accuracy of 71%, using observations about their prenatal behavior, and as accurately as 80-83% when additionally leveraging the initial 2-3 weeks of postnatal data. The study is motivated by the opportunity to use social media to identify mothers at risk of postpartum depression, an underreported health concern among large populations, and to inform the design of low-cost, privacy-sensitive early-warning systems and intervention programs aimed at promoting wellness postpartum.",
"title": ""
},
{
"docid": "14d68a45e54b07efb15ef950ba92d7bc",
"text": "We propose a novel hierarchical approach for text-to-image synthesis by inferring semantic layout. Instead of learning a direct mapping from text to image, our algorithm decomposes the generation process into multiple steps, in which it first constructs a semantic layout from the text by the layout generator and converts the layout to an image by the image generator. The proposed layout generator progressively constructs a semantic layout in a coarse-to-fine manner by generating object bounding boxes and refining each box by estimating object shapes inside the box. The image generator synthesizes an image conditioned on the inferred semantic layout, which provides a useful semantic structure of an image matching with the text description. Our model not only generates semantically more meaningful images, but also allows automatic annotation of generated images and user-controlled generation process by modifying the generated scene layout. We demonstrate the capability of the proposed model on challenging MS-COCO dataset and show that the model can substantially improve the image quality, interpretability of output and semantic alignment to input text over existing approaches.",
"title": ""
},
{
"docid": "6aa4b1064833af0c91d16af28136e7e4",
"text": "Recently, supervised classification has been shown to work well for the task of speech separation. We perform an in-depth evaluation of such techniques as a front-end for noise-robust automatic speech recognition (ASR). The proposed separation front-end consists of two stages. The first stage removes additive noise via time-frequency masking. The second stage addresses channel mismatch and the distortions introduced by the first stage; a non-linear function is learned that maps the masked spectral features to their clean counterpart. Results show that the proposed front-end substantially improves ASR performance when the acoustic models are trained in clean conditions. We also propose a diagonal feature discriminant linear regression (dFDLR) adaptation that can be performed on a per-utterance basis for ASR systems employing deep neural networks and HMM. Results show that dFDLR consistently improves performance in all test conditions. Surprisingly, the best average results are obtained when dFDLR is applied to models trained using noisy log-Mel spectral features from the multi-condition training set. With no channel mismatch, the best results are obtained when the proposed speech separation front-end is used along with multi-condition training using log-Mel features followed by dFDLR adaptation. Both these results are among the best on the Aurora-4 dataset.",
"title": ""
},
{
"docid": "d88ce9c09fdfa0c1ea023ce08183f39b",
"text": "The development of the Internet in recent years has made it possible and useful to access many different information systems anywhere in the world to obtain information. While there is much research on the integration of heterogeneous information systems, most commercial systems stop short of the actual integration of available data. Data fusion is the process of fusing multiple records representing the same real-world object into a single, consistent, and clean representation.\n This article places data fusion into the greater context of data integration, precisely defines the goals of data fusion, namely, complete, concise, and consistent data, and highlights the challenges of data fusion, namely, uncertain and conflicting data values. We give an overview and classification of different ways of fusing data and present several techniques based on standard and advanced operators of the relational algebra and SQL. Finally, the article features a comprehensive survey of data integration systems from academia and industry, showing if and how data fusion is performed in each.",
"title": ""
},
{
"docid": "6c106d560d8894d941851386d96afe2b",
"text": "Cooperative vehicular networks require the exchange of positioning and basic status information between neighboring nodes to support higher layer protocols and applications, including active safety applications. The information exchange is based on the periodic transmission/reception of 1-hop broadcast messages on the so called control channel. The dynamic adaptation of the transmission parameters of such messages will be key for the reliable and efficient operation of the system. On one hand, congestion control protocols need to be applied to control the channel load, typically through the adaptation of the transmission parameters based on certain channel load metrics. On the other hand, awareness control protocols are also required to adequately support cooperative vehicular applications. Such protocols typically adapt the transmission parameters of periodic broadcast messages to ensure each vehicle's capacity to detect, and possibly communicate, with the relevant vehicles and infrastructure nodes present in its local neighborhood. To date, congestion and awareness control protocols have been normally designed and evaluated separately, although both will be required for the reliable and efficient operation of the system. To this aim, this paper proposes and evaluates INTERN, a new control protocol that integrates two congestion and awareness control processes. The simulation results obtained demonstrate that INTERN is able to satisfy the application's requirements of all vehicles, while effectively controlling the channel load.",
"title": ""
},
{
"docid": "5dac8ef81c7a6c508c603b3fd6a87581",
"text": "In this paper, we present a novel benchmark for the evaluation of RGB-D SLAM systems. We recorded a large set of image sequences from a Microsoft Kinect with highly accurate and time-synchronized ground truth camera poses from a motion capture system. The sequences contain both the color and depth images in full sensor resolution (640 × 480) at video frame rate (30 Hz). The ground-truth trajectory was obtained from a motion-capture system with eight high-speed tracking cameras (100 Hz). The dataset consists of 39 sequences that were recorded in an office environment and an industrial hall. The dataset covers a large variety of scenes and camera motions. We provide sequences for debugging with slow motions as well as longer trajectories with and without loop closures. Most sequences were recorded from a handheld Kinect with unconstrained 6-DOF motions but we also provide sequences from a Kinect mounted on a Pioneer 3 robot that was manually navigated through a cluttered indoor environment. To stimulate the comparison of different approaches, we provide automatic evaluation tools both for the evaluation of drift of visual odometry systems and the global pose error of SLAM systems. The benchmark website [1] contains all data, detailed descriptions of the scenes, specifications of the data formats, sample code, and evaluation tools.",
"title": ""
},
{
"docid": "31346876446c21b92f088b852c0201b2",
"text": "In this paper, the closed-form design method of an Nway dual-band Wilkinson hybrid power divider is proposed. This symmetric structure including N groups of two sections of transmission lines and two isolated resistors is described which can split a signal into N equiphase equiamplitude parts at two arbitrary frequencies (dual-band) simultaneously, where N can be odd or even. Based on the rigorous evenand odd-mode analysis, the closed-form design equations are derived. For verification, various numerical examples are designed, calculated and compared while two practical examples including two ways and three ways dual-band microstrip power dividers are fabricated and measured. It is very interesting that this generalized power divider with analytical design equations can be designed for wideband applications when the frequency-ratio is relatively small. In addition, it is found that the conventional N-way hybrid Wilkinson power divider for single-band applications is a special case (the frequency-ratio equals to 3) of this generalized power divider.",
"title": ""
},
{
"docid": "ca1aeb2730eb11844d0dde46cf15de4e",
"text": "Knowledge of the bio-impedance and its equivalent circuit model at the electrode-electrolyte/tissue interface is important in the application of functional electrical stimulation. Impedance can be used as a merit to evaluate the proximity between electrodes and targeted tissues. Understanding the equivalent circuit parameters of the electrode can further be leveraged to set a safe boundary for stimulus parameters in order not to exceed the water window of electrodes. In this paper, we present an impedance characterization technique and implement a proof-of-concept system using an implantable neural stimulator and an off-the-shelf microcontroller. The proposed technique yields the parameters of the equivalent circuit of an electrode through large signal analysis by injecting a single low-intensity biphasic current stimulus with deliberately inserted inter-pulse delay and by acquiring the transient electrode voltage at three well-specified timings. Using low-intensity stimulus allows the derivation of electrode double layer capacitance since capacitive charge-injection dominates when electrode overpotential is small. Insertion of the inter-pulse delay creates a controlled discharge time to estimate the Faradic resistance. The proposed method has been validated by measuring the impedance of a) an emulated Randles cells made of discrete circuit components and b) a custom-made platinum electrode array in-vitro, and comparing estimated parameters with the results derived from an impedance analyzer. The proposed technique can be integrated into implantable or commercial neural stimulator system at low extra power consumption, low extra-hardware cost, and light computation.",
"title": ""
},
{
"docid": "135f4008d9c7edc3d7ab8c7f9eb0c85e",
"text": "Organizations deploy gamification in CSCW systems to enhance motivation and behavioral outcomes of users. However, gamification approaches often cause competition between users, which might be inappropriate for working environments that seek cooperation. Drawing on the social interdependence theory, this paper provides a classification for gamification features and insights about the design of cooperative gamification. Using the example of an innova-tion community of a German engineering company, we present the design of a cooperative gamification approach and results from a first experimental evaluation. The findings indicate that the developed gamification approach has positive effects on perceived enjoyment and the intention towards knowledge sharing in the considered innovation community. Besides our conceptual contribu-tion, our findings suggest that cooperative gamification may be beneficial for cooperative working environments and represents a promising field for future research.",
"title": ""
},
{
"docid": "3ff330ab15962b09584e1636de7503ea",
"text": "By diverting funds away from legitimate partners (a.k.a publishers), click fraud represents a serious drain on advertising budgets and can seriously harm the viability of the internet advertising market. As such, fraud detection algorithms which can identify fraudulent behavior based on user click patterns are extremely valuable. Based on the BuzzCity dataset, we propose a novel approach for click fraud detection which is based on a set of new features derived from existing attributes. The proposed model is evaluated in terms of the resulting precision, recall and the area under the ROC curve. A final ensemble model based on 6 different learning algorithms proved to be stable with respect to all 3 performance indicators. Our final model shows improved results on training, validation and test datasets, thus demonstrating its generalizability to different datasets.",
"title": ""
},
{
"docid": "76156cea2ef1d49179d35fd8f333b011",
"text": "Climate change, pollution, and energy insecurity are among the greatest problems of our time. Addressing them requires major changes in our energy infrastructure. Here, we analyze the feasibility of providing worldwide energy for all purposes (electric power, transportation, heating/cooling, etc.) from wind, water, and sunlight (WWS). In Part I, we discuss WWS energy system characteristics, current and future energy demand, availability of WWS resources, numbers of WWS devices, and area and material requirements. In Part II, we address variability, economics, and policy of WWS energy. We estimate that !3,800,000 5 MW wind turbines, !49,000 300 MW concentrated solar plants, !40,000 300 MW solar PV power plants, !1.7 billion 3 kW rooftop PV systems, !5350 100 MWgeothermal power plants, !270 new 1300 MW hydroelectric power plants, !720,000 0.75 MWwave devices, and !490,000 1 MW tidal turbines can power a 2030 WWS world that uses electricity and electrolytic hydrogen for all purposes. Such a WWS infrastructure reduces world power demand by 30% and requires only !0.41% and !0.59% more of the world’s land for footprint and spacing, respectively. We suggest producing all new energy withWWSby 2030 and replacing the pre-existing energy by 2050. Barriers to the plan are primarily social and political, not technological or economic. The energy cost in a WWS world should be similar to",
"title": ""
},
{
"docid": "1527601285eb1b2ef2de040154e3d4fb",
"text": "This paper exploits the context of natural dynamic scenes for human action recognition in video. Human actions are frequently constrained by the purpose and the physical properties of scenes and demonstrate high correlation with particular scene classes. For example, eating often happens in a kitchen while running is more common outdoors. The contribution of this paper is three-fold: (a) we automatically discover relevant scene classes and their correlation with human actions, (b) we show how to learn selected scene classes from video without manual supervision and (c) we develop a joint framework for action and scene recognition and demonstrate improved recognition of both in natural video. We use movie scripts as a means of automatic supervision for training. For selected action classes we identify correlated scene classes in text and then retrieve video samples of actions and scenes for training using script-to-video alignment. Our visual models for scenes and actions are formulated within the bag-of-features framework and are combined in a joint scene-action SVM-based classifier. We report experimental results and validate the method on a new large dataset with twelve action classes and ten scene classes acquired from 69 movies.",
"title": ""
},
{
"docid": "5116ac47f91a798b9ddb6bc3da737c70",
"text": "Mobile brokerage services represent an emerging application of mobile commerce in the brokerage industry. Compared with telephone-based trading services and online brokerage services, they have advantages such as ubiquity, convenience, and privacy. However, the number of investors using mobile brokerage services to conduct brokerage transactions is far smaller than those using other trading methods. A plausible reason for this is that investors lack initial trust in mobile brokerage services, which affects their acceptance of them. This research examines trust transfer as a means of establishing initial trust in mobile brokerage services. We analyze how an investor’s trust in the online brokerage services of a brokerage firm affects her cognitive beliefs about the mobile brokerage services of the firm and what other key factors influence the formation of initial trust in mobile brokerage services. We develop and empirically test a theoretical model of trust transfer from the online to the mobile channels. Our results indicate that trust in online brokerage services not only has a direct effect on initial trust but also has an indirect effect through other variables. This study provides useful suggestions and implications for academics and practitioners. 2011 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "b761b12bf2f7d9652fdfd7e7cd4f3ef3",
"text": "Knowledge graphs represent concepts (e.g., people, places, events) and their semantic relationships. As a data structure, they underpin a digital information system, support users in resource discovery and retrieval, and are useful for navigation and visualization purposes. Within the libaries and humanities domain, knowledge graphs are typically rooted in knowledge organization systems, which have a century-old tradition and have undergone their digital transformation with the advent of the Web and Linked Data. Being exposed to the Web, metadata and concept definitions are now forming an interconnected and decentralized global knowledge network that can be curated and enriched by community-driven editorial processes. In the future, knowledge graphs could be vehicles for formalizing and connecting findings and insights derived from the analysis of possibly large-scale corpora in the libraries and digital humanities domain.",
"title": ""
},
{
"docid": "763b8982d13b0637a17347b2c557f1f8",
"text": "This paper describes an application of Case-Based Reasonin g to the problem of reducing the number of final-line fraud investigation s i the credit approval process. The performance of a suite of algorithms whi ch are applied in combination to determine a diagnosis from a set of retriev ed cases is reported. An adaptive diagnosis algorithm combining several neighbourhoodbased and probabilistic algorithms was found to have the bes t performance, and these results indicate that an adaptive solution can pro vide fraud filtering and case ordering functions for reducing the number of fin al-li e fraud investigations necessary.",
"title": ""
},
{
"docid": "5ca36b7877ebd3d05e48d3230f2dceb0",
"text": "BACKGROUND\nThe frontal branch has a defined course along the Pitanguy line from tragus to lateral brow, although its depth along this line is controversial. The high-superficial musculoaponeurotic system (SMAS) face-lift technique divides the SMAS above the arch, which conflicts with previous descriptions of the frontal nerve depth. This anatomical study defines the depth and fascial boundaries of the frontal branch of the facial nerve over the zygomatic arch.\n\n\nMETHODS\nEight fresh cadaver heads were included in the study, with bilateral facial nerves studied (n = 16). The proximal frontal branches were isolated and then sectioned in full-thickness tissue blocks over a 5-cm distance over the zygomatic arch. The tissue blocks were evaluated histologically for the depth and fascial planes surrounding the frontal nerve. A dissection video accompanies this article.\n\n\nRESULTS\nThe frontal branch of the facial nerve was identified in each tissue section and its fascial boundaries were easily identified using epidermis and periosteum as reference points. The frontal branch coursed under a separate fascial plane, the parotid-temporal fascia, which was deep to the SMAS as it coursed to the zygomatic arch and remained within this deep fascia over the arch. The frontal branch was intact and protected by the parotid-temporal fascia after a high-SMAS face lift.\n\n\nCONCLUSIONS\nThe frontal branch of the facial nerve is protected by a deep layer of fascia, termed the parotid-temporal fascia, which is separate from the SMAS as it travels over the zygomatic arch. Division of the SMAS above the arch in a high-SMAS face lift is safe using the technique described in this study.",
"title": ""
},
{
"docid": "95babe8b0bd1674ece34cb311db37835",
"text": "We aim at estimating the fundamental matrix in two views from five correspondences of rotation invariant features obtained by e.g. the SIFT detector. The proposed minimal solver1 first estimates a homography from three correspondences assuming that they are co-planar and exploiting their rotational components. Then the fundamental matrix is obtained from the homography and two additional point pairs in general position. The proposed approach, combined with robust estimators like Graph-Cut RANSAC, is superior to other state-of-the-art algorithms both in terms of accuracy and number of iterations required. This is validated on synthesized data and 561 real image pairs. Moreover, the tests show that requiring three points on a plane is not too restrictive in urban environment and locally optimized robust estimators lead to accurate estimates even if the points are not entirely co-planar. As a potential application, we show that using the proposed method makes two-view multi-motion estimation more accurate.",
"title": ""
},
{
"docid": "f519e878b3aae2f0024978489db77425",
"text": "In this paper, we propose a new halftoning scheme that preserves the structure and tone similarities of images while maintaining the simplicity of Floyd-Steinberg error diffusion. Our algorithm is based on the Floyd-Steinberg error diffusion algorithm, but the threshold modulation part is modified to improve the over-blurring issue of the Floyd-Steinberg error diffusion algorithm. By adding some structural information on images obtained using the Laplacian operator to the quantizer thresholds, the structural details in the textured region can be preserved. The visual artifacts of the original error diffusion that is usually visible in the uniform region is greatly reduced by adding noise to the thresholds. This is especially true for the low contrast region because most existing error diffusion algorithms cannot preserve structural details but our algorithm preserves them clearly using threshold modulation. Our algorithm has been evaluated using various types of images including some with the low contrast region and assessed numerically using the MSSIM measure with other existing state-of-art halftoning algorithms. The results show that our method performs better than existing approaches both in the textured region and in the uniform region with the faster computation speed.",
"title": ""
},
{
"docid": "5f6b248776b3b7ad7a840ac5224587be",
"text": "We present in this paper a superpixel segmentation algorithm called Linear Spectral Clustering (LSC), which produces compact and uniform superpixels with low computational costs. Basically, a normalized cuts formulation of the superpixel segmentation is adopted based on a similarity metric that measures the color similarity and space proximity between image pixels. However, instead of using the traditional eigen-based algorithm, we approximate the similarity metric using a kernel function leading to an explicitly mapping of pixel values and coordinates into a high dimensional feature space. We revisit the conclusion that by appropriately weighting each point in this feature space, the objective functions of weighted K-means and normalized cuts share the same optimum point. As such, it is possible to optimize the cost function of normalized cuts by iteratively applying simple K-means clustering in the proposed feature space. LSC is of linear computational complexity and high memory efficiency and is able to preserve global properties of images. Experimental results show that LSC performs equally well or better than state of the art superpixel segmentation algorithms in terms of several commonly used evaluation metrics in image segmentation.",
"title": ""
}
] | scidocsrr |
80d7567d1d8943c76e6a979ffd1cfa0c | Real fuzzy PID control of the UAV AR.Drone 2.0 for hovering under disturbances in known environments | [
{
"docid": "7e884438ee8459a441cbe1500f1bac88",
"text": "We consider the problem of autonomously flying Miniature Aerial Vehicles (MAVs) in indoor environments such as home and office buildings. The primary long range sensor in these MAVs is a miniature camera. While previous approaches first try to build a 3D model in order to do planning and control, our method neither attempts to build nor requires a 3D model. Instead, our method first classifies the type of indoor environment the MAV is in, and then uses vision algorithms based on perspective cues to estimate the desired direction to fly. We test our method on two MAV platforms: a co-axial miniature helicopter and a toy quadrotor. Our experiments show that our vision algorithms are quite reliable, and they enable our MAVs to fly in a variety of corridors and staircases.",
"title": ""
},
{
"docid": "c12d534d219e3d249ba3da1c0956c540",
"text": "Within the research on Micro Aerial Vehicles (MAVs), the field on flight control and autonomous mission execution is one of the most active. A crucial point is the localization of the vehicle, which is especially difficult in unknown, GPS-denied environments. This paper presents a novel vision based approach, where the vehicle is localized using a downward looking monocular camera. A state-of-the-art visual SLAM algorithm tracks the pose of the camera, while, simultaneously, building an incremental map of the surrounding region. Based on this pose estimation a LQG/LTR based controller stabilizes the vehicle at a desired setpoint, making simple maneuvers possible like take-off, hovering, setpoint following or landing. Experimental data show that this approach efficiently controls a helicopter while navigating through an unknown and unstructured environment. To the best of our knowledge, this is the first work describing a micro aerial vehicle able to navigate through an unexplored environment (independently of any external aid like GPS or artificial beacons), which uses a single camera as only exteroceptive sensor.",
"title": ""
}
] | [
{
"docid": "c78ef06693d0b8ae37989b5574938c90",
"text": "Relational databases have been around for many decades and are the database technology of choice for most traditional data-intensive storage and retrieval applications. Retrievals are usually accomplished using SQL, a declarative query language. Relational database systems are generally efficient unless the data contains many relationships requiring joins of large tables. Recently there has been much interest in data stores that do not use SQL exclusively, the so-called NoSQL movement. Examples are Google's BigTable and Facebook's Cassandra. This paper reports on a comparison of one such NoSQL graph database called Neo4j with a common relational database system, MySQL, for use as the underlying technology in the development of a software system to record and query data provenance information.",
"title": ""
},
{
"docid": "b2b4e5162b3d7d99a482f9b82820d59e",
"text": "Modern Internet-enabled smart lights promise energy efficiency and many additional capabilities over traditional lamps. However, these connected lights create a new attack surface, which can be maliciously used to violate users’ privacy and security. In this paper, we design and evaluate novel attacks that take advantage of light emitted by modern smart bulbs in order to infer users’ private data and preferences. The first two attacks are designed to infer users’ audio and video playback by a systematic observation and analysis of the multimediavisualization functionality of smart light bulbs. The third attack utilizes the infrared capabilities of such smart light bulbs to create a covert-channel, which can be used as a gateway to exfiltrate user’s private data out of their secured home or office network. A comprehensive evaluation of these attacks in various real-life settings confirms their feasibility and affirms the need for new privacy protection mechanisms.",
"title": ""
},
{
"docid": "bb98b9a825a4c7d0f3d4b06fafb8ff37",
"text": "The tremendous evolution of programmable graphics hardware has made high-quality real-time volume graphics a reality. In addition to the traditional application of rendering volume data in scientific visualization, the interest in applying these techniques for real-time rendering of atmospheric phenomena and participating media such as fire, smoke, and clouds is growing rapidly. This course covers both applications in scientific visualization, e.g., medical volume data, and real-time rendering, such as advanced effects and illumination in computer games, in detail. Course participants will learn techniques for harnessing the power of consumer graphics hardware and high-level shading languages for real-time rendering of volumetric data and effects. Beginning with basic texture-based approaches including hardware ray casting, the algorithms are improved and expanded incrementally, covering local and global illumination, scattering, pre-integration, implicit surfaces and non-polygonal isosurfaces, transfer function design, volume animation and deformation, dealing with large volumes, high-quality volume clipping, rendering segmented volumes, higher-order filtering, and non-photorealistic volume rendering. Course participants are provided with documented source code covering details usually omitted in publications.",
"title": ""
},
{
"docid": "c71ada1231703f2ecb2c2872ef7d5632",
"text": "We present a spatial multiplex optical transmission system named the “Smart Light” (See Figure 1), which provides multiple data streams to multiple points simultaneously. This system consists of a projector and some devices along with a photo-detector. The projector projects images with invisible information to the devices, and devices receive some data. In this system, the data stream is expandable to a positionbased audio or video stream by using DMDs (Digital Micro-mirror Device) or LEDs (Light Emitting Diode) with unperceivable space-time modulation. First, in a preliminary experiment, we confirmed with a commercially produced XGA grade projector transmitting a million points that the data rate of its path is a few bits per second. Detached devices can receive relative position data and other properties from the projector. Second, we made an LED type high-speed projector to transmit audio streams using modulated light on an object and confirmed the transmission of positionbased audio stream data.",
"title": ""
},
{
"docid": "b9c0ccebb8f7339830daccb235338d4a",
"text": "ÐA problem gaining interest in pattern recognition applied to data mining is that of selecting a small representative subset from a very large data set. In this article, a nonparametric data reduction scheme is suggested. It attempts to represent the density underlying the data. The algorithm selects representative points in a multiscale fashion which is novel from existing density-based approaches. The accuracy of representation by the condensed set is measured in terms of the error in density estimates of the original and reduced sets. Experimental studies on several real life data sets show that the multiscale approach is superior to several related condensation methods both in terms of condensation ratio and estimation error. The condensed set obtained was also experimentally shown to be effective for some important data mining tasks like classification, clustering, and rule generation on large data sets. Moreover, it is empirically found that the algorithm is efficient in terms of sample complexity. Index TermsÐData mining, multiscale condensation, scalability, density estimation, convergence in probability, instance learning.",
"title": ""
},
{
"docid": "888e8f68486c08ffe538c46ba76de85c",
"text": "Neural ranking models for information retrieval (IR) use shallow or deep neural networks to rank search results in response to a query. Traditional learning to rank models employ machine learning techniques over hand-crafted IR features. By contrast, neural models learn representations of language from raw text that can bridge the gap between query and document vocabulary. Unlike classical IR models, these new machine learning based approaches are data-hungry, requiring large scale training data before they can be deployed. This tutorial introduces basic concepts and intuitions behind neural IR models, and places them in the context of traditional retrieval models. We begin by introducing fundamental concepts of IR and different neural and non-neural approaches to learning vector representations of text. We then review shallow neural IR methods that employ pre-trained neural term embeddings without learning the IR task end-to-end. We introduce deep neural networks next, discussing popular deep architectures. Finally, we review the current DNN models for information retrieval. We conclude with a discussion on potential future directions for neural IR.",
"title": ""
},
{
"docid": "b2d334cc7d79d2e3ebd573bbeaa2dfbe",
"text": "Objectives\nTo measure the occurrence and levels of depression, anxiety and stress in undergraduate dental students using the Depression, Anxiety and Stress Scale (DASS-21).\n\n\nMethods\nThis cross-sectional study was conducted in November and December of 2014. A total of 289 dental students were invited to participate, and 277 responded, resulting in a response rate of 96%. The final sample included 247 participants. Eligible participants were surveyed via a self-reported questionnaire that included the validated DASS-21 scale as the assessment tool and questions about demographic characteristics and methods for managing stress.\n\n\nResults\nAbnormal levels of depression, anxiety and stress were identified in 55.9%, 66.8% and 54.7% of the study participants, respectively. A multiple linear regression analysis revealed multiple predictors: gender (for anxiety b=-3.589, p=.016 and stress b=-4.099, p=.008), satisfaction with faculty relationships (for depression b=-2.318, p=.007; anxiety b=-2.213, p=.004; and stress b=-2.854, p<.001), satisfaction with peer relationships (for depression b=-3.527, p<.001; anxiety b=-2.213, p=.004; and stress b=-2.854, p<.001), and dentistry as the first choice for field of study (for stress b=-2.648, p=.045). The standardized coefficients demonstrated the relationship and strength of the predictors for each subscale. To cope with stress, students engaged in various activities such as reading, watching television and seeking emotional support from others.\n\n\nConclusions\nThe high occurrence of depression, anxiety and stress among dental students highlights the importance of providing support programs and implementing preventive measures to help students, particularly those who are most susceptible to higher levels of these psychological conditions.",
"title": ""
},
{
"docid": "8cd52cdc44c18214c471716745e3c00f",
"text": "The design of electric vehicles require a complete paradigm shift in terms of embedded systems architectures and software design techniques that are followed within the conventional automotive systems domain. It is increasingly being realized that the evolutionary approach of replacing the engine of a car by an electric engine will not be able to address issues like acceptable vehicle range, battery lifetime performance, battery management techniques, costs and weight, which are the core issues for the success of electric vehicles. While battery technology has crucial importance in the domain of electric vehicles, how these batteries are used and managed pose new problems in the area of embedded systems architecture and software for electric vehicles. At the same time, the communication and computation design challenges in electric vehicles also have to be addressed appropriately. This paper discusses some of these research challenges.",
"title": ""
},
{
"docid": "9df5329fcf5e5dd6394f76040d8d8402",
"text": "Federated learning poses new statistical and systems challenges in training machine learning models over distributed networks of devices. In this work, we show that multi-task learning is naturally suited to handle the statistical challenges of this setting, and propose a novel systems-aware optimization method, MOCHA, that is robust to practical systems issues. Our method and theory for the first time consider issues of high communication cost, stragglers, and fault tolerance for distributed multi-task learning. The resulting method achieves significant speedups compared to alternatives in the federated setting, as we demonstrate through simulations on real-world federated datasets.",
"title": ""
},
{
"docid": "962ab9e871dc06c3cd290787dc7e71aa",
"text": "The conventional digital hardware computational blocks with different structures are designed to compute the precise results of the assigned calculations. The main contribution of our proposed Bio-inspired Imprecise Computational blocks (BICs) is that they are designed to provide an applicable estimation of the result instead of its precise value at a lower cost. These novel structures are more efficient in terms of area, speed, and power consumption with respect to their precise rivals. Complete descriptions of sample BIC adder and multiplier structures as well as their error behaviors and synthesis results are introduced in this paper. It is then shown that these BIC structures can be exploited to efficiently implement a three-layer face recognition neural network and the hardware defuzzification block of a fuzzy processor.",
"title": ""
},
{
"docid": "7208a2b257c7ba7122fd2e278dd1bf4a",
"text": "Abstract—This paper shows in detail the mathematical model of direct and inverse kinematics for a robot manipulator (welding type) with four degrees of freedom. Using the D-H parameters, screw theory, numerical, geometric and interpolation methods, the theoretical and practical values of the position of robot were determined using an optimized algorithm for inverse kinematics obtaining the values of the particular joints in order to determine the virtual paths in a relatively short time.",
"title": ""
},
{
"docid": "02fd763f6e15b07187e3cbe0fd3d0e18",
"text": "The Batcher`s bitonic sorting algorithm is a parallel sorting algorithm, which is used for sorting the numbers in modern parallel machines. There are various parallel sorting algorithms such as radix sort, bitonic sort, etc. It is one of the efficient parallel sorting algorithm because of load balancing property. It is widely used in various scientific and engineering applications. However, Various researches have worked on a bitonic sorting algorithm in order to improve up the performance of original batcher`s bitonic sorting algorithm. In this paper, tried to review the contribution made by these researchers.",
"title": ""
},
{
"docid": "1203f22bfdfc9ecd211dbd79a2043a6a",
"text": "After a short introduction to classic cryptography we explain thoroughly how quantum cryptography works. We present then an elegant experimental realization based on a self-balanced interferometer with Faraday mirrors. This phase-coding setup needs no alignment of the interferometer nor polarization control, and therefore considerably facilitates the experiment. Moreover it features excellent fringe visibility. Next, we estimate the practical limits of quantum cryptography. The importance of the detector noise is illustrated and means of reducing it are presented. With present-day technologies maximum distances of about 70 kmwith bit rates of 100 Hzare achievable. PACS: 03.67.Dd; 85.60; 42.25; 33.55.A Cryptography is the art of hiding information in a string of bits meaningless to any unauthorized party. To achieve this goal, one uses encryption: a message is combined according to an algorithm with some additional secret information – the key – to produce a cryptogram. In the traditional terminology, Alice is the party encrypting and transmitting the message, Bob the one receiving it, and Eve the malevolent eavesdropper. For a crypto-system to be considered secure, it should be impossible to unlock the cryptogram without Bob’s key. In practice, this demand is often softened, and one requires only that the system is sufficiently difficult to crack. The idea is that the message should remain protected as long as the information it contains is valuable. There are two main classes of crypto-systems, the publickey and the secret-key crypto-systems: Public key systems are based on so-called one-way functions: given a certainx, it is easy to computef(x), but difficult to do the inverse, i.e. compute x from f(x). “Difficult” means that the task shall take a time that grows exponentially with the number of bits of the input. The RSA (Rivest, Shamir, Adleman) crypto-system for example is based on the factorizing of large integers. Anyone can compute 137 ×53 in a few seconds, but it may take a while to find the prime factors of 28 907. To transmit a message Bob chooses a private key (based on two large prime numbers) and computes from it a public key (based on the product of these numbers) which he discloses publicly. Now Alice can encrypt her message using this public key and transmit it to Bob, who decrypts it with the private key. Public key systems are very convenient and became very popular over the last 20 years, however, they suffer from two potential major flaws. To date, nobody knows for sure whether or not factorizing is indeed difficult. For known algorithms, the time for calculation increases exponentially with the number of input bits, and one can easily improve the safety of RSA by choosing a longer key. However, a fast algorithm for factorization would immediately annihilate the security of the RSA system. Although it has not been published yet, there is no guarantee that such an algorithm does not exist. Second, problems that are difficult for a classical computer could become easy for a quantum computer. With the recent developments in the theory of quantum computation, there are reasons to fear that building these machines will eventually become possible. If one of these two possibilities came true, RSA would become obsolete. One would then have no choice, but to turn to secret-key cryptosystems. Very convenient and broadly used are crypto-systems based on a public algorithm and a relatively short secret key. 
The DES (Data Encryption Standard, 1977) for example uses a 56-bit key and the same algorithm for coding and decoding. The secrecy of the cryptogram, however, depends again on the calculating power and the time of the eavesdropper. The only crypto-system providing proven, perfect secrecy is the “one-time pad” proposed by Vernam in 1935. With this scheme, a message is encrypted using a random key of equal length, by simply “adding” each bit of the message to the corresponding bit of the key. The scrambled text can then be sent to Bob, who decrypts the message by “subtracting” the same key. The bits of the ciphertext are as random as those of the key and consequently do not contain any information. Although perfectly secure, the problem with this system is that it is essential for Alice and Bob to share a common secret key, at least as long as the message they want to exchange, and use it only for a single encryption. This key must be transmitted by some trusted means or personal meeting, which turns out to be complex and expensive.",
"title": ""
},
{
"docid": "4a6c7b68ea23f910f0edc35f4542e5cb",
"text": "Microgrids have been proposed in order to handle the impacts of Distributed Generators (DGs) and make conventional grids suitable for large scale deployments of distributed generation. However, the introduction of microgrids brings some challenges. Protection of a microgrid and its entities is one of them. Due to the existence of generators at all levels of the distribution system and two distinct operating modes, i.e. Grid Connected and Islanded modes, the fault currents in a system vary substantially. Consequently, the traditional fixed current relay protection schemes need to be improved. This paper presents a conceptual design of a microgrid protection system which utilizes extensive communication to monitor the microgrid and update relay fault currents according to the variations in the system. The proposed system is designed so that it can respond to dynamic changes in the system such as connection/disconnection of DGs.",
"title": ""
},
{
"docid": "9afdd51ba034e9580c52f0aba50dfa4b",
"text": "Advances in field programmable gate arrays (FPGAs), which are the platform of choice for reconfigurable computing, have made it possible to use FPGAs in increasingly ma ny areas of computing, including complex scientific applicati ons. These applications demand high performance and high-preci s on, floating-point arithmetic. Until now, most of the research has not focussed on compliance with IEEE standard 754, focusing ins tead upon custom formats and bitwidths. In this paper, we present double-precision floating-point cores that are parameteri zed by their degree of pipelining and the features of IEEE standard754 that they implement. We then analyze the effects of supporti ng the standard when these cores are used in an FPGA-based accelerator for Lennard-Jones force and potential calculations that are part of molecular dynamics (MD) simulations.",
"title": ""
},
{
"docid": "2431ee8fb0dcfd84c61e60ee41a95edb",
"text": "Web applications have become a very popular means of developing software. This is because of many advantages of web applications like no need of installation on each client machine, centralized data, reduction in business cost etc. With the increase in this trend web applications are becoming vulnerable for attacks. Cross site scripting (XSS) is the major threat for web application as it is the most basic attack on web application. It provides the surface for other types of attacks like Cross Site Request Forgery, Session Hijacking etc. There are three types of XSS attacks i.e. non-persistent (or reflected) XSS, persistent (or stored) XSS and DOM-based vulnerabilities. There is one more type that is not as common as those three types, induced XSS. In this work we aim to study and consolidate the understanding of XSS and their origin, manifestation, kinds of dangers and mitigation efforts for XSS. Different approaches proposed by researchers are presented here and an analysis of these approaches is performed. Finally the conclusion is drawn at the end of the work.",
"title": ""
},
{
"docid": "cc6895789b42f7ae779c2236cde4636a",
"text": "Modern day social media search and recommender systems require complex query formulation that incorporates both user context and their explicit search queries. Users expect these systems to be fast and provide relevant results to their query and context. With millions of documents to choose from, these systems utilize a multi-pass scoring function to narrow the results and provide the most relevant ones to users. Candidate selection is required to sift through all the documents in the index and select a relevant few to be ranked by subsequent scoring functions. It becomes crucial to narrow down the document set while maintaining relevant ones in resulting set. In this tutorial we survey various candidate selection techniques and deep dive into case studies on a large scale social media platform. In the later half we provide hands-on tutorial where we explore building these candidate selection models on a real world dataset and see how to balance the tradeoff between relevance and latency.",
"title": ""
},
{
"docid": "18b0f6712396476dc4171128ff08a355",
"text": "Heterogeneous multicore architectures have the potential for high performance and energy efficiency. These architectures may be composed of small power-efficient cores, large high-performance cores, and/or specialized cores that accelerate the performance of a particular class of computation. Architects have explored multiple dimensions of heterogeneity, both in terms of micro-architecture and specialization. While early work constrained the cores to share a single ISA, this work shows that allowing heterogeneous ISAs further extends the effectiveness of such architectures\n This work exploits the diversity offered by three modern ISAs: Thumb, x86-64, and Alpha. This architecture has the potential to outperform the best single-ISA heterogeneous architecture by as much as 21%, with 23% energy savings and a reduction of 32% in Energy Delay Product.",
"title": ""
},
{
"docid": "033b05d21f5b8fb5ce05db33f1cedcde",
"text": "Seasonal occurrence of the common cutworm Spodoptera litura (Fab.) (Lepidoptera: Noctuidae) moths captured in synthetic sex pheromone traps and associated field population of eggs and larvae in soybean were examined in India from 2009 to 2011. Male moths of S. litura first appeared in late July or early August and continued through October. Peak male trap catches occurred during the second fortnight of September, which was within soybean reproductive stages. Similarly, the first appearance of S. litura egg masses and larval populations were observed after the first appearance of male moths in early to mid-August, and were present in the growing season up to late September to mid-October. The peak appearance of egg masses and larval populations always corresponded with the peak activity of male moths recorded during mid-September in all years. Correlation studies showed that weekly mean trap catches were linearly and positively correlated with egg masses and larval populations during the entire growing season of soybean. Seasonal means of male moth catches in pheromone traps during the 2010 and 2011 seasons were significantly lower than the catches during the 2009 season. However, seasonal means of the egg masses and larval populations were not significantly different between years. Pheromone traps may be useful indicators of the onset of numbers of S. litura eggs and larvae in soybean fields.",
"title": ""
},
{
"docid": "20c6da8e705ba063d139d4adba7bcde2",
"text": "Copyright © 2010 American Heart Association. All rights reserved. Print ISSN: 0009-7322. Online 72514 Circulation is published by the American Heart Association. 7272 Greenville Avenue, Dallas, TX DOI: 10.1161/CIR.0b013e3181f9a223 published online Oct 11, 2010; Circulation Care, Perioperative and Resuscitation Critical Association Council on Clinical Cardiology and Council on Cardiopulmonary, Parshall, Gary S. Francis, Mihai Gheorghiade and on behalf of the American Heart Anderson, Cynthia Arslanian-Engoren, W. Brian Gibler, James K. McCord, Mark B. Neal L. Weintraub, Sean P. Collins, Peter S. Pang, Phillip D. Levy, Allen S. Statement From the American Heart Association Treatment, and Disposition: Current Approaches and Future Aims. A Scientific Acute Heart Failure Syndromes: Emergency Department Presentation, http://circ.ahajournals.org located on the World Wide Web at: The online version of this article, along with updated information and services, is",
"title": ""
}
] | scidocsrr |
6e3a1a74ece7e0c49866c42f870f1d8d | Data Integration: The Current Status and the Way Forward | [
{
"docid": "d95cd76008dd65d5d7f00c82bad013d3",
"text": "Though data analysis tools continue to improve, analysts still expend an inordinate amount of time and effort manipulating data and assessing data quality issues. Such \"data wrangling\" regularly involves reformatting data values or layout, correcting erroneous or missing values, and integrating multiple data sources. These transforms are often difficult to specify and difficult to reuse across analysis tasks, teams, and tools. In response, we introduce Wrangler, an interactive system for creating data transformations. Wrangler combines direct manipulation of visualized data with automatic inference of relevant transforms, enabling analysts to iteratively explore the space of applicable operations and preview their effects. Wrangler leverages semantic data types (e.g., geographic locations, dates, classification codes) to aid validation and type conversion. Interactive histories support review, refinement, and annotation of transformation scripts. User study results show that Wrangler significantly reduces specification time and promotes the use of robust, auditable transforms instead of manual editing.",
"title": ""
},
{
"docid": "c6abeae6e9287f04b472595a47e974ad",
"text": "Data curation is the act of discovering a data source(s) of interest, cleaning and transforming the new data, semantically integrating it with other local data sources, and deduplicating the resulting composite. There has been much research on the various components of curation (especially data integration and deduplication). However, there has been little work on collecting all of the curation components into an integrated end-to-end system. In addition, most of the previous work will not scale to the sizes of problems that we are finding in the field. For example, one web aggregator requires the curation of 80,000 URLs and a second biotech company has the problem of curating 8000 spreadsheets. At this scale, data curation cannot be a manual (human) effort, but must entail machine learning approaches with a human assist only when necessary. This paper describes Data Tamer, an end-to-end curation system we have built at M.I.T. Brandeis, and Qatar Computing Research Institute (QCRI). It expects as input a sequence of data sources to add to a composite being constructed over time. A new source is subjected to machine learning algorithms to perform attribute identification, grouping of attributes into tables, transformation of incoming data and deduplication. When necessary, a human can be asked for guidance. Also, Data Tamer includes a data visualization component so a human can examine a data source at will and specify manual transformations. We have run Data Tamer on three real world enterprise curation problems, and it has been shown to lower curation cost by about 90%, relative to the currently deployed production software. This article is published under a Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0/), which permits distribution and reproduction in any medium as well allowing derivative works, provided that you attribute the original work to the author(s) and CIDR 2013. 6th Biennial Conference on Innovative Data Systems Research (CIDR ’13) January 6-9, 2013, Asilomar, California, USA.",
"title": ""
}
] | [
{
"docid": "0f3cad05c9c267f11c4cebd634a12c59",
"text": "The recent, exponential rise in adoption of the most disparate Internet of Things (IoT) devices and technologies has reached also Agriculture and Food (Agri-Food) supply chains, drumming up substantial research and innovation interest towards developing reliable, auditable and transparent traceability systems. Current IoT-based traceability and provenance systems for Agri-Food supply chains are built on top of centralized infrastructures and this leaves room for unsolved issues and major concerns, including data integrity, tampering and single points of failure. Blockchains, the distributed ledger technology underpinning cryptocurrencies such as Bitcoin, represent a new and innovative technological approach to realizing decentralized trustless systems. Indeed, the inherent properties of this digital technology provide fault-tolerance, immutability, transparency and full traceability of the stored transaction records, as well as coherent digital representations of physical assets and autonomous transaction executions. This paper presents AgriBlockIoT, a fully decentralized, blockchain-based traceability solution for Agri-Food supply chain management, able to seamless integrate IoT devices producing and consuming digital data along the chain. To effectively assess AgriBlockIoT, first, we defined a classical use-case within the given vertical domain, namely from-farm-to-fork. Then, we developed and deployed such use-case, achieving traceability using two different blockchain implementations, namely Ethereum and Hyperledger Sawtooth. Finally, we evaluated and compared the performance of both the deployments, in terms of latency, CPU, and network usage, also highlighting their main pros and cons.",
"title": ""
},
{
"docid": "6858c559b78c6f2b5000c22e2fef892b",
"text": "Graph clustering is one of the key techniques for understanding the structures present in graphs. Besides cluster detection, identifying hubs and outliers is also a key task, since they have important roles to play in graph data mining. The structural clustering algorithm SCAN, proposed by Xu et al., is successfully used in many application because it not only detects densely connected nodes as clusters but also identifies sparsely connected nodes as hubs or outliers. However, it is difficult to apply SCAN to large-scale graphs due to its high time complexity. This is because it evaluates the density for all adjacent nodes included in the given graphs. In this paper, we propose a novel graph clustering algorithm named SCAN++. In order to reduce time complexity, we introduce new data structure of directly two-hop-away reachable node set (DTAR). DTAR is the set of two-hop-away nodes from a given node that are likely to be in the same cluster as the given node. SCAN++ employs two approaches for efficient clustering by using DTARs without sacrificing clustering quality. First, it reduces the number of the density evaluations by computing the density only for the adjacent nodes such as indicated by DTARs. Second, by sharing a part of the density evaluations for DTARs, it offers efficient density evaluations of adjacent nodes. As a result, SCAN++ detects exactly the same clusters, hubs, and outliers from large-scale graphs as SCAN with much shorter computation time. Extensive experiments on both real-world and synthetic graphs demonstrate the performance superiority of SCAN++ over existing approaches.",
"title": ""
},
{
"docid": "86ededf9b452bbc51117f5a117247b51",
"text": "An approach to high field control, particularly in the areas near the high voltage (HV) and ground terminals of an outdoor insulator, is proposed using a nonlinear grading material; Zinc Oxide (ZnO) microvaristors compounded with other polymeric materials to obtain the required properties and allow easy application. The electrical properties of the microvaristor compounds are characterised by a nonlinear field-dependent conductivity. This paper describes the principles of the proposed field-control solution and demonstrates the effectiveness of the proposed approach in controlling the electric field along insulator profiles. A case study is carried out for a typical 11 kV polymeric insulator design to highlight the merits of the grading approach. Analysis of electric potential and field distributions on the insulator surface is described under dry clean and uniformly contaminated surface conditions for both standard and microvaristor-graded insulators. The grading and optimisation principles to allow better performance are investigated to improve the performance of the insulator both under steady state operation and under surge conditions. Furthermore, the dissipated power and associated heat are derived to examine surface heating and losses in the grading regions and for the complete insulator. Preliminary tests on inhouse prototype insulators have confirmed better flashover performance of the proposed graded insulator with a 21 % increase in flashover voltage.",
"title": ""
},
{
"docid": "831b153045d9afc8f92336b3ba8019c6",
"text": "The progress in the field of electronics and technology as well as the processing of signals coupled with advance in the use of computer technology has given the opportunity to record and analyze the bio-electric signals from the human body in real time that requires dealing with many challenges according to the nature of the signal and its frequency. This could be up to 1 kHz, in addition to the need to transfer data from more than one channel at the same time. Moreover, another challenge is a high sensitivity and low noise measurements of the acquired bio-electric signals which may be tens of micro volts in amplitude. For these reasons, a low power wireless Electromyography (EMG) data transfer system is designed in order to meet these challenging demands. In this work, we are able to develop an EMG analogue signal processing hardware, along with computer based supporting software. In the development of the EMG analogue signal processing hardware, many important issues have been addressed. Some of these issues include noise and artifact problems, as well as the bias DC current. The computer based software enables the user to analyze the collected EMG data and plot them on graphs for visual decision making. The work accomplished in this study enables users to use the surface EMG device for recording EMG signals for various purposes in movement analysis in medical diagnosis, rehabilitation sports medicine and ergonomics. Results revealed that the proposed system transmit and receive the signal without any losing in the information of signals.",
"title": ""
},
{
"docid": "835b7a2b3d9c457a962e6b432665c7ce",
"text": "In this paper we investigate the feasibility of using synthetic data to augment face datasets. In particular, we propose a novel generative adversarial network (GAN) that can disentangle identity-related attributes from non-identity-related attributes. This is done by training an embedding network that maps discrete identity labels to an identity latent space that follows a simple prior distribution, and training a GAN conditioned on samples from that distribution. Our proposed GAN allows us to augment face datasets by generating both synthetic images of subjects in the training set and synthetic images of new subjects not in the training set. By using recent advances in GAN training, we show that the synthetic images generated by our model are photo-realistic, and that training with augmented datasets can indeed increase the accuracy of face recognition models as compared with models trained with real images alone.",
"title": ""
},
{
"docid": "495be81dda82d3e4d90a34b6716acf39",
"text": "Botnets such as Conficker and Torpig utilize high entropy domains for fluxing and evasion. Bots may query a large number of domains, some of which may fail. In this paper, we present techniques where the failed domain queries (NXDOMAIN) may be utilized for: (i) Speeding up the present detection strategies which rely only on successful DNS domains. (ii) Detecting Command and Control (C&C) server addresses through features such as temporal correlation and information entropy of both successful and failed domains. We apply our technique to a Tier-1 ISP dataset obtained from South Asia, and a campus DNS trace, and thus validate our methods by detecting Conficker botnet IPs and other anomalies with a false positive rate as low as 0.02%. Our technique can be applied at the edge of an autonomous system for real-time detection.",
"title": ""
},
{
"docid": "6fdeeea1714d484c596468aea053848f",
"text": "Standard slow start does not work well under large bandwidthdelay product (BDP) networks. We find two causes of this problem in existing three popular operating systems, Linux, FreeBSD and Windows XP. The first cause is that because of the exponential increase of cwnd during standard slow start, heavy packet losses occur. Recovering from heavy packet losses puts extremely high load on end systems which renders the end systems completely unresponsive for a long time, resulting in a long blackout period of no transmission. This problem commonly occurs with the three operating systems. The second cause is that some of proprietary protocol optimizations applied for slow start by these operating systems to relieve the system load happen to slow down the loss recovery followed by slow start. To remedy this problem, we propose a new slow start algorithm, called Hybrid Start (HyStart) that finds a “safe” exit point of slow start at which slow start can finish and safely move to congestion avoidance without causing any heavy packet losses. HyStart uses ACK trains and RTT delay samples to detect whether (1) the forward path is congested or (2) the current size of congestion window has reached the available capacity of the forward path. HyStart is a plug-in to the TCP sender and does not require any change in TCP receivers. We implemented HyStart for TCP-NewReno and TCP-SACK in Linux and compare its performance with five different slow start schemes with the TCP receivers of the three different operating systems in the Internet and also in the lab testbeds. Our results indicate that HyStart works consistently well under diverse network environments including asymmetric links and high and low BDP networks. Especially with different operating system receivers (Windows XP and FreeBSD), HyStart improves the start-up throughput of TCP more than 2 to 3 times.",
"title": ""
},
{
"docid": "4e85039497c60f8241d598628790f543",
"text": "Knowledge management (KM) is a dominant theme in the behavior of contemporary organizations. While KM has been extensively studied in developed economies, it is much less well understood in developing economies, notably those that are characterized by different social and cultural traditions to the mainstream of Western societies. This is notably the case in China. This chapter develops and tests a theoretical model that explains the impact of leadership style and interpersonal trust on the intention of information and knowledge workers in China to share their knowledge with their peers. All the hypotheses are supported, showing that both initiating structure and consideration have a significant effect on employees’ intention to share knowledge through trust building: 28.2% of the variance in employees’ intention to share knowledge is explained. The authors discuss the theoretical contributions of the chapter, identify future research opportunities, and highlight the implications for practicing managers. DOI: 10.4018/978-1-60566-920-5.ch009",
"title": ""
},
{
"docid": "da45568bf2ec4bfe32f927eb54e78816",
"text": "We explore controller input mappings for games using a deformable prototype that combines deformation gestures with standard button input. In study one, we tested discrete gestures using three simple games. We categorized the control schemes as binary (button only), action, and navigation, the latter two named based on the game mechanics mapped to the gestures. We found that the binary scheme performed the best, but gesture-based control schemes are stimulating and appealing. Results also suggest that the deformation gestures are best mapped to simple and natural tasks. In study two, we tested continuous gestures in a 3D racing game using the same control scheme categorization. Results were mostly consistent with study one but showed an improvement in performance and preference for the action control scheme.",
"title": ""
},
{
"docid": "0df2ca944dcdf79369ef5a7424bf3ffe",
"text": "This article first presents two theories representing distinct approaches to the field of stress research: Selye's theory of `systemic stress' based in physiology and psychobiology, and the `psychological stress' model developed by Lazarus. In the second part, the concept of coping is described. Coping theories may be classified according to two independent parameters: traitoriented versus state-oriented, and microanalytic versus macroanalytic approaches. The multitude of theoretical conceptions is based on the macroanalytic, trait-oriented approach. Examples of this approach that are presented in this article are `repression–sensitization,' `monitoringblunting,' and the `model of coping modes.' The article closes with a brief outline of future perspectives in stress and coping research.",
"title": ""
},
{
"docid": "375766c4ae473312c73e0487ab57acc8",
"text": "There are three reasons why the asymmetric crooked nose is one of the greatest challenges in rhinoplasty surgery. First, the complexity of the problem is not appreciated by the patient nor understood by the surgeon. Patients often see the obvious deviation of the nose, but not the distinct differences between the right and left sides. Surgeons fail to understand and to emphasize to the patient that each component of the nose is asymmetric. Second, these deformities can be improved, but rarely made flawless. For this reason, patients are told that the result will be all \"-er words,\" better, straighter, cuter, but no \"t-words,\" there is no perfect nor straight. Most surgeons fail to realize that these cases represent asymmetric noses on asymmetric faces with the variable of ipsilateral and contralateral deviations. Third, these cases demand a wide range of sophisticated surgical techniques, some of which have a minimal margin of error. This article offers an in-depth look at analysis, preoperative planning, and surgical techniques available for dealing with the asymmetric crooked nose.",
"title": ""
},
{
"docid": "5e6175d56150485d559d0c1a963e12b8",
"text": "High-resolution depth map can be inferred from a lowresolution one with the guidance of an additional highresolution texture map of the same scene. Recently, deep neural networks with large receptive fields are shown to benefit applications such as image completion. Our insight is that super resolution is similar to image completion, where only parts of the depth values are precisely known. In this paper, we present a joint convolutional neural pyramid model with large receptive fields for joint depth map super-resolution. Our model consists of three sub-networks, two convolutional neural pyramids concatenated by a normal convolutional neural network. The convolutional neural pyramids extract information from large receptive fields of the depth map and guidance map, while the convolutional neural network effectively transfers useful structures of the guidance image to the depth image. Experimental results show that our model outperforms existing state-of-the-art algorithms not only on data pairs of RGB/depth images, but also on other data pairs like color/saliency and color-scribbles/colorized images.",
"title": ""
},
{
"docid": "571a4de4ac93b26d55252dab86e2a0d3",
"text": "Amnestic mild cognitive impairment (MCI) is a degenerative neurological disorder at the early stage of Alzheimer’s disease (AD). This work is a pilot study aimed at developing a simple scalp-EEG-based method for screening and monitoring MCI and AD. Specifically, the use of graphical analysis of inter-channel coherence of resting EEG for the detection of MCI and AD at early stages is explored. Resting EEG records from 48 age-matched subjects (mean age 75.7 years)—15 normal controls (NC), 16 with early-stage MCI, and 17 with early-stage AD—are examined. Network graphs are constructed using pairwise inter-channel coherence measures for delta–theta, alpha, beta, and gamma band frequencies. Network features are computed and used in a support vector machine model to discriminate among the three groups. Leave-one-out cross-validation discrimination accuracies of 93.6% for MCI vs. NC (p < 0.0003), 93.8% for AD vs. NC (p < 0.0003), and 97.0% for MCI vs. AD (p < 0.0003) are achieved. These results suggest the potential for graphical analysis of resting EEG inter-channel coherence as an efficacious method for noninvasive screening for MCI and early AD.",
"title": ""
},
{
"docid": "97b212bb8fde4859e368941a4e84ba90",
"text": "What appears to be a simple pattern of results—distributed-study opportunities usually produce bettermemory thanmassed-study opportunities—turns out to be quite complicated.Many ‘‘impostor’’ effects such as rehearsal borrowing, strategy changes during study, recency effects, and item skipping complicate the interpretation of spacing experiments. We suggest some best practices for future experiments that diverge from the typical spacing experiments in the literature. Next, we outline themajor theories that have been advanced to account for spacing studies while highlighting the critical experimental evidence that a theory of spacingmust explain. We then propose a tentative verbal theory based on the SAM/REMmodel that utilizes contextual variability and study-phase retrieval to explain the major findings, as well as predict some novel results. Next, we outline the major phenomena supporting testing as superior to restudy on long-term retention tests, and review theories of the testing phenomenon, along with some possible boundary conditions. Finally, we suggest some ways that spacing and testing can be integrated into the classroom, and ask to what extent educators already capitalize on these phenomena. Along the way, we present several new experiments that shed light on various facets of the spacing and testing effects.",
"title": ""
},
{
"docid": "af0df66f001ffd9601ac3c89edf6af0f",
"text": "State-of-the-art speech recognition systems rely on fixed, handcrafted features such as mel-filterbanks to preprocess the waveform before the training pipeline. In this paper, we study end-toend systems trained directly from the raw waveform, building on two alternatives for trainable replacements of mel-filterbanks that use a convolutional architecture. The first one is inspired by gammatone filterbanks (Hoshen et al., 2015; Sainath et al, 2015), and the second one by the scattering transform (Zeghidour et al., 2017). We propose two modifications to these architectures and systematically compare them to mel-filterbanks, on the Wall Street Journal dataset. The first modification is the addition of an instance normalization layer, which greatly improves on the gammatone-based trainable filterbanks and speeds up the training of the scattering-based filterbanks. The second one relates to the low-pass filter used in these approaches. These modifications consistently improve performances for both approaches, and remove the need for a careful initialization in scattering-based trainable filterbanks. In particular, we show a consistent improvement in word error rate of the trainable filterbanks relatively to comparable mel-filterbanks. It is the first time end-to-end models trained from the raw signal significantly outperform mel-filterbanks on a large vocabulary task under clean recording conditions.",
"title": ""
},
{
"docid": "a2f4005c681554cc422b11a6f5087793",
"text": "Emerged as salient in the recent home appliance consumer market is a new generation of home cleaning robot featuring the capability of Simultaneous Localization and Mapping (SLAM). SLAM allows a cleaning robot not only to selfoptimize its work paths for efficiency but also to self-recover from kidnappings for user convenience. By kidnapping, we mean that a robot is displaced, in the middle of cleaning, without its SLAM aware of where it moves to. This paper presents a vision-based kidnap recovery with SLAM for home cleaning robots, the first of its kind, using a wheel drop switch and an upwardlooking camera for low-cost applications. In particular, a camera with a wide-angle lens is adopted for a kidnapped robot to be able to recover its pose on a global map with only a single image. First, the kidnapping situation is effectively detected based on a wheel drop switch. Then, for S. Lee · S. Lee (B) School of Information and Communication Engineering and Department of Interaction Science, Sungkyunkwan University, Suwon, South Korea e-mail: [email protected] S. Lee e-mail: [email protected] S. Lee · S. Baek Future IT Laboratory, LG Electronics Inc., Seoul, South Korea e-mail: [email protected] an efficient kidnap recovery, a coarse-to-fine approach to matching the image features detected with those associated with a large number of robot poses or nodes, built as a map in graph representation, is adopted. The pose ambiguity, e.g., due to symmetry is taken care of, if any. The final robot pose is obtained with high accuracy from the fine level of the coarse-to-fine hierarchy by fusing poses estimated from a chosen set of matching nodes. The proposed method was implemented as an embedded system with an ARM11 processor on a real commercial home cleaning robot and tested extensively. Experimental results show that the proposed method works well even in the situation in which the cleaning robot is suddenly kidnapped during the map building process.",
"title": ""
},
{
"docid": "b5b7bef8ec2d38bb2821dc380a3a49bf",
"text": "Maternal uniparental disomy (UPD) 7 is found in approximately 5% of patients with Silver-Russell syndrome. By a descriptive and comparative clinical analysis of all published cases (more than 60 to date) their phenotype is updated and compared with the clinical findings in patients with Sliver-Russell syndrome (SRS) of either unexplained etiology or epimutations of the imprinting center region 1 (ICR1) on 11p15. The higher frequency of relative macrocephaly and high forehead/frontal bossing makes the face of patients with epimutations of the ICR1 on 11p15 more distinctive than the face of cases with SRS of unexplained etiology or maternal UPD 7. Because of the distinct micrognathia in the latter, their triangular facial gestalt is more pronounced than in the other groups. However, solely by clinical findings patients with maternal UPD 7 cannot be discriminated unambiguously from patients with epimutations of the ICR1 on 11p15 or SRS of unexplained etiology. Therefore, both loss of methylation of the ICR1 on 11p15 and maternal UPD 7 should be investigated for if SRS is suspected.",
"title": ""
},
{
"docid": "cf8cdd70dde3f55ed097972be1d2fde7",
"text": "BACKGROUND\nText-based patient medical records are a vital resource in medical research. In order to preserve patient confidentiality, however, the U.S. Health Insurance Portability and Accountability Act (HIPAA) requires that protected health information (PHI) be removed from medical records before they can be disseminated. Manual de-identification of large medical record databases is prohibitively expensive, time-consuming and prone to error, necessitating automatic methods for large-scale, automated de-identification.\n\n\nMETHODS\nWe describe an automated Perl-based de-identification software package that is generally usable on most free-text medical records, e.g., nursing notes, discharge summaries, X-ray reports, etc. The software uses lexical look-up tables, regular expressions, and simple heuristics to locate both HIPAA PHI, and an extended PHI set that includes doctors' names and years of dates. To develop the de-identification approach, we assembled a gold standard corpus of re-identified nursing notes with real PHI replaced by realistic surrogate information. This corpus consists of 2,434 nursing notes containing 334,000 words and a total of 1,779 instances of PHI taken from 163 randomly selected patient records. This gold standard corpus was used to refine the algorithm and measure its sensitivity. To test the algorithm on data not used in its development, we constructed a second test corpus of 1,836 nursing notes containing 296,400 words. The algorithm's false negative rate was evaluated using this test corpus.\n\n\nRESULTS\nPerformance evaluation of the de-identification software on the development corpus yielded an overall recall of 0.967, precision value of 0.749, and fallout value of approximately 0.002. On the test corpus, a total of 90 instances of false negatives were found, or 27 per 100,000 word count, with an estimated recall of 0.943. Only one full date and one age over 89 were missed. No patient names were missed in either corpus.\n\n\nCONCLUSION\nWe have developed a pattern-matching de-identification system based on dictionary look-ups, regular expressions, and heuristics. Evaluation based on two different sets of nursing notes collected from a U.S. hospital suggests that, in terms of recall, the software out-performs a single human de-identifier (0.81) and performs at least as well as a consensus of two human de-identifiers (0.94). The system is currently tuned to de-identify PHI in nursing notes and discharge summaries but is sufficiently generalized and can be customized to handle text files of any format. Although the accuracy of the algorithm is high, it is probably insufficient to be used to publicly disseminate medical data. The open-source de-identification software and the gold standard re-identified corpus of medical records have therefore been made available to researchers via the PhysioNet website to encourage improvements in the algorithm.",
"title": ""
},
{
"docid": "1b647a09085a41e66f8c1e3031793fed",
"text": "In this paper we apply distributional semantic information to document-level machine translation. We train monolingual and bilingual word vector models on large corpora and we evaluate them first in a cross-lingual lexical substitution task and then on the final translation task. For translation, we incorporate the semantic information in a statistical document-level decoder (Docent), by enforcing translation choices that are semantically similar to the context. As expected, the bilingual word vector models are more appropriate for the purpose of translation. The final document-level translator incorporating the semantic model outperforms the basic Docent (without semantics) and also performs slightly over a standard sentencelevel SMT system in terms of ULC (the average of a set of standard automatic evaluation metrics for MT). Finally, we also present some manual analysis of the translations of some concrete documents.",
"title": ""
},
{
"docid": "7f2403a849690fb12a184ec67b0a2872",
"text": "Deep reinforcement learning achieves superhuman performance in a range of video game environments, but requires that a designer manually specify a reward function. It is often easier to provide demonstrations of a target behavior than to design a reward function describing that behavior. Inverse reinforcement learning (IRL) algorithms can infer a reward from demonstrations in low-dimensional continuous control environments, but there has been little work on applying IRL to high-dimensional video games. In our CNN-AIRL baseline, we modify the state-of-the-art adversarial IRL (AIRL) algorithm to use CNNs for the generator and discriminator. To stabilize training, we normalize the reward and increase the size of the discriminator training dataset. We additionally learn a low-dimensional state representation using a novel autoencoder architecture tuned for video game environments. This embedding is used as input to the reward network, improving the sample efficiency of expert demonstrations. Our method achieves high-level performance on the simple Catcher video game, substantially outperforming the CNN-AIRL baseline. We also score points on the Enduro Atari racing game, but do not match expert performance, highlighting the need for further work.",
"title": ""
}
] | scidocsrr |
9901f05894b9deb977fd2f8ab00096ad | Analysis of the antecedents of knowledge sharing and its implication for SMEs internationalization | [
{
"docid": "d5464818af641aae509549f586c5526d",
"text": "The learning and knowledge that we have, is, at the most, but little compared with that of which we are ignorant. Plato Knowledge management (KM) is a vital and complex topic of current interest to so many in business, government and the community in general, that there is an urgent need to expand the role of empirical research to inform knowledge management practice. However, one of the most striking aspects of knowledge management is the diversity of the field and the lack of universally accepted definitions of the term itself and its derivatives, knowledge and management. As a consequence of the multidisciplinary nature of KM, the terms inevitably hold a difference in meaning and emphasis for different people. The initial chapter of this book addresses the challenges brought about by these differences. This chapter begins with a critical assessment of some diverse frameworks for knowledge management that have been appearing in the international academic literature of many disciplines for some time. Then follows a description of ways that these have led to some holistic and integrated frameworks currently being developed by KM researchers in Australia.",
"title": ""
},
{
"docid": "5e04372f08336da5b8ab4d41d69d3533",
"text": "Purpose – This research aims at investigating the role of certain factors in organizational culture in the success of knowledge sharing. Such factors as interpersonal trust, communication between staff, information systems, rewards and organization structure play an important role in defining the relationships between staff and in turn, providing possibilities to break obstacles to knowledge sharing. This research is intended to contribute in helping businesses understand the essential role of organizational culture in nourishing knowledge and spreading it in order to become leaders in utilizing their know-how and enjoying prosperity thereafter. Design/methodology/approach – The conclusions of this study are based on interpreting the results of a survey and a number of interviews with staff from various organizations in Bahrain from the public and private sectors. Findings – The research findings indicate that trust, communication, information systems, rewards and organization structure are positively related to knowledge sharing in organizations. Research limitations/implications – The authors believe that further research is required to address governmental sector institutions, where organizational politics dominate a role in hoarding knowledge, through such methods as case studies and observation. Originality/value – Previous research indicated that the Bahraini society is influenced by traditions of household, tribe, and especially religion of the Arab and Islamic world. These factors define people’s beliefs and behaviours, and thus exercise strong influence in the performance of business organizations. This study is motivated by the desire to explore the role of the national organizational culture on knowledge sharing, which may be different from previous studies conducted abroad.",
"title": ""
}
] | [
{
"docid": "72e1c5690f20c47a63ebbb1dd3fc7f2c",
"text": "In edge-cloud computing, a set of edge servers are deployed near the mobile devices such that these devices can offload jobs to the servers with low latency. One fundamental and critical problem in edge-cloud systems is how to dispatch and schedule the jobs so that the job response time (defined as the interval between the release of a job and the arrival of the computation result at its device) is minimized. In this paper, we propose a general model for this problem, where the jobs are generated in arbitrary order and times at the mobile devices and offloaded to servers with both upload and download delays. Our goal is to minimize the total weighted response time over all the jobs. The weight is set based on how latency sensitive the job is. We derive the first online job dispatching and scheduling algorithm in edge-clouds, called OnDisc, which is scalable in the speed augmentation model; that is, OnDisc is (1 + ε)-speed O(1/ε)-competitive for any constant ε ∊ (0,1). Moreover, OnDisc can be easily implemented in distributed systems. Extensive simulations on a real-world data-trace from Google show that OnDisc can reduce the total weighted response time dramatically compared with heuristic algorithms.",
"title": ""
},
{
"docid": "affc663476dc4d5299de5f89f67e5f5a",
"text": "Many machine learning algorithms, such as K Nearest Neighbor (KNN), heavily rely on the distance metric for the input data patterns. Distance Metric learning is to learn a distance metric for the input space of data from a given collection of pair of similar/dissimilar points that preserves the distance relation among the training data. In recent years, many studies have demonstrated, both empirically and theoretically, that a learned metric can significantly improve the performance in classification, clustering and retrieval tasks. This paper surveys the field of distance metric learning from a principle perspective, and includes a broad selection of recent work. In particular, distance metric learning is reviewed under different learning conditions: supervised learning versus unsupervised learning, learning in a global sense versus in a local sense; and the distance matrix based on linear kernel versus nonlinear kernel. In addition, this paper discusses a number of techniques that is central to distance metric learning, including convex programming, positive semi-definite programming, kernel learning, dimension reduction, K Nearest Neighbor, large margin classification, and graph-based approaches.",
"title": ""
},
{
"docid": "20a90ed3aa2b428b19e85aceddadce90",
"text": "Deep learning has been a groundbreaking technology in various fields as well as in communications systems. In spite of the notable advancements of deep neural network (DNN) based technologies in recent years, the high computational complexity has been a major obstacle to apply DNN in practical communications systems which require real-time operation. In this sense, challenges regarding practical implementation must be addressed before the proliferation of DNN-based intelligent communications becomes a reality. To the best of the authors’ knowledge, for the first time, this article presents an efficient learning architecture and design strategies including link level verification through digital circuit implementations using hardware description language (HDL) to mitigate this challenge and to deduce feasibility and potential of DNN for communications systems. In particular, DNN is applied for an encoder and a decoder to enable flexible adaptation with respect to the system environments without needing any domain specific information. Extensive investigations and interdisciplinary design considerations including the DNN-based autoencoder structure, learning framework, and low-complexity digital circuit implementations for real-time operation are taken into account by the authors which ascertains the use of DNN-based communications in practice.",
"title": ""
},
{
"docid": "6e848928859248e0597124cee0560e43",
"text": "The scaling of microchip technologies has enabled large scale systems-on-chip (SoC). Network-on-chip (NoC) research addresses global communication in SoC, involving (i) a move from computation-centric to communication-centric design and (ii) the implementation of scalable communication structures. This survey presents a perspective on existing NoC research. We define the following abstractions: system, network adapter, network, and link to explain and structure the fundamental concepts. First, research relating to the actual network design is reviewed. Then system level design and modeling are discussed. We also evaluate performance analysis techniques. The research shows that NoC constitutes a unification of current trends of intrachip communication rather than an explicit new alternative.",
"title": ""
},
{
"docid": "be43b90cce9638b0af1c3143b6d65221",
"text": "Reasoning on provenance information and property propagation is of significant importance in e-science since it helps scientists manage derived metadata in order to understand the source of an object, reproduce results of processes and facilitate quality control of results and processes. In this paper we introduce a simple, yet powerful reasoning mechanism based on property propagation along the transitive part-of and derivation chains, in order to trace the provenance of an object and to carry useful inferences. We apply our reasoning in semantic repositories using the CIDOC-CRM conceptual schema and its extension CRMdig, which has been develop for representing the digital and empirical provenance of digi-",
"title": ""
},
{
"docid": "ea544ffc7eeee772388541d0d01812a7",
"text": "Despite the fact that MRI has evolved to become the standard method for diagnosis and monitoring of patients with brain tumours, conventional MRI sequences have two key limitations: the inability to show the full extent of the tumour and the inability to differentiate neoplastic tissue from nonspecific, treatment-related changes after surgery, radiotherapy, chemotherapy or immunotherapy. In the past decade, PET involving the use of radiolabelled amino acids has developed into an important diagnostic tool to overcome some of the shortcomings of conventional MRI. The Response Assessment in Neuro-Oncology working group — an international effort to develop new standardized response criteria for clinical trials in brain tumours — has recommended the additional use of amino acid PET imaging for brain tumour management. Concurrently, a number of advanced MRI techniques such as magnetic resonance spectroscopic imaging and perfusion weighted imaging are under clinical evaluation to target the same diagnostic problems. This Review summarizes the clinical role of amino acid PET in relation to advanced MRI techniques for differential diagnosis of brain tumours; delineation of tumour extent for treatment planning and biopsy guidance; post-treatment differentiation between tumour progression or recurrence versus treatment-related changes; and monitoring response to therapy. An outlook for future developments in PET and MRI techniques is also presented.",
"title": ""
},
{
"docid": "3ba65ec924fff2d246197bb2302fb86e",
"text": "Guidelines for evaluating the levels of evidence based on quantitative research are well established. However, the same cannot be said for the evaluation of qualitative research. This article discusses a process members of an evidence-based clinical practice guideline development team with the Association of Women's Health, Obstetric and Neonatal Nurses used to create a scoring system to determine the strength of qualitative research evidence. A brief history of evidence-based clinical practice guideline development is provided, followed by discussion of the development of the Nursing Management of the Second Stage of Labor evidence-based clinical practice guideline. The development of the qualitative scoring system is explicated, and implications for nursing are proposed.",
"title": ""
},
{
"docid": "46ff38a51f766cd5849a537cc0632660",
"text": "BACKGROUND\nLinear IgA bullous dermatosis (LABD) is an acquired autoimmune sub-epidermal vesiculobullous disease characterized by continuous linear IgA deposit on the basement membrane zone, as visualized on direct immunofluorescence microscopy. LABD can affect both adults and children. The disease is very uncommon, with a still unknown incidence in the South American population.\n\n\nMATERIALS AND METHODS\nAll confirmed cases of LABD by histological and immunofluorescence in our hospital were studied.\n\n\nRESULTS\nThe confirmed cases were three females and two males, aged from 8 to 87 years. Precipitant events associated with LABD were drug consumption (non-steroid inflammatory agents in two cases) and ulcerative colitis (one case). Most of our patients were treated with dapsone, resulting in remission.\n\n\nDISCUSSION\nOur series confirms the heterogeneous clinical features of this uncommon disease in concordance with a larger series of patients reported in the literature.",
"title": ""
},
{
"docid": "7970ec4bd6e17d70913d88e07a39f82d",
"text": "This thesis deals with Chinese characters (Hanzi): their key characteristics and how they could be used as a kind of knowledge resource in the (Chinese) NLP. Part 1 deals with basic issues. In Chapter 1, the motivation and the reasons for reconsidering the writing system will be presented, and a short introduction to Chinese and its writing system will be given in Chapter 2. Part 2 provides a critical review of the current, ongoing debate about Chinese characters. Chapter 3 outlines some important linguistic insights from the vantage point of indigenous scriptological and Western linguistic traditions, as well as a new theoretical framework in contemporary studies of Chinese characters. The focus of Chapter 4 concerns the search for appropriate mathematical descriptions with regard to the systematic knowledge information hidden in characters. The subject matter of mathematical formalization of the shape structure of Chinese characters is depicted as well. Part 3 illustrates the representation issues. Chapter 5 addresses the design and construction of the HanziNet, an enriched conceptual network of Chinese characters. Topics that are covered in this chapter include the ideas, architecture, methods and ontology design. In Part 4, a case study based on the above mentioned ideas will be launched. Chapter 6 presents an experiment exploring the character-triggered semantic class of Chinese unknown words. Finally, Chapter 7 summarizes the major findings of this thesis. Next, it depicts some potential avenues in the future, and assesses the theoretical implications of these findings for computational linguistic theory.",
"title": ""
},
{
"docid": "09085fc15308a96cd9441bb0e23e6c1a",
"text": "Convolutional neural networks (CNNs) are able to model local stationary structures in natural images in a multi-scale fashion, when learning all model parameters with supervision. While excellent performance was achieved for image classification when large amounts of labeled visual data are available, their success for unsupervised tasks such as image retrieval has been moderate so far.Our paper focuses on this latter setting and explores several methods for learning patch descriptors without supervision with application to matching and instance-level retrieval. To that effect, we propose a new family of patch representations, based on the recently introduced convolutional kernel networks. We show that our descriptor, named Patch-CKN, performs better than SIFT as well as other convolutional networks learned by artificially introducing supervision and is significantly faster to train. To demonstrate its effectiveness, we perform an extensive evaluation on standard benchmarks for patch and image retrieval where we obtain state-of-the-art results. We also introduce a new dataset called RomePatches, which allows to simultaneously study descriptor performance for patch and image retrieval.",
"title": ""
},
{
"docid": "a017ab9f310f9f36f88bf488ac833f05",
"text": "Wireless data communication technology has eliminated wired connections for data transfer to portable devices. Wireless power technology offers the possibility of eliminating the remaining wired connection: the power cord. For ventricular assist devices (VADs), wireless power technology will eliminate the complications and infections caused by the percutaneous wired power connection. Integrating wireless power technology into VADs will enable VAD implants to become a more viable option for heart failure patients (of which there are 80 000 in the United States each year) than heart transplants. Previous transcutaneous energy transfer systems (TETS) have attempted to wirelessly power VADs ; however, TETS-based technologies are limited in range to a few millimeters, do not tolerate angular misalignment, and suffer from poor efficiency. The free-range resonant electrical delivery (FREE-D) wireless power system aims to use magnetically coupled resonators to efficiently transfer power across a distance to a VAD implanted in the human body, and to provide robustness to geometric changes. Multiple resonator configurations are implemented to improve the range and efficiency of wireless power transmission to both a commercially available axial pump and a VentrAssist centrifugal pump [3]. An adaptive frequency tuning method allows for maximum power transfer efficiency for nearly any angular orientation over a range of separation distances. Additionally, laboratory results show the continuous operation of both pumps using the FREE-D system with a wireless power transfer efficiency upwards of 90%.",
"title": ""
},
{
"docid": "819f5df03cebf534a51eb133cd44cb0d",
"text": "Although DBP (di-n-butyl phthalate) is commonly encountered as an artificially-synthesized plasticizer with potential to impair fertility, we confirm that it can also be biosynthesized as microbial secondary metabolites from naturally occurring filamentous fungi strains cultured either in an artificial medium or natural water. Using the excreted crude enzyme from the fungi for catalyzing a variety of substrates, we found that the fungal generation of DBP was largely through shikimic acid pathway, which was assembled by phthalic acid with butyl alcohol through esterification. The DBP production ability of the fungi was primarily influenced by fungal spore density and incubation temperature. This study indicates an important alternative natural waterborne source of DBP in addition to artificial synthesis, which implied fungal contribution must be highlighted for future source control and risk management of DBP.",
"title": ""
},
{
"docid": "225b834e820b616e0ccfed7259499fd6",
"text": "Introduction: Actinic cheilitis (AC) is a lesion potentially malignant that affects the lips after prolonged exposure to solar ultraviolet (UV) radiation. The present study aimed to assess and describe the proliferative cell activity, using silver-stained nucleolar organizer region (AgNOR) quantification proteins, and to investigate the potential associations between AgNORs and the clinical aspects of AC lesions. Materials and methods: Cases diagnosed with AC were selected and reviewed from Center of Histopathological Diagnosis of the Institute of Biological Sciences, Passo Fundo University, Brazil. Clinical data including clinical presentation of the patients affected with AC were collected. The AgNOR techniques were performed in all recovered cases. The different microscopic areas of interest were printed with magnification of *1000, and in each case, 200 epithelial cell nuclei were randomly selected. The mean quantity in each nucleus for NORs was recorded. One-way analysis of variance was used for statistical analysis. Results: A total of 22 cases of AC were diagnosed. The patients were aged between 46 and 75 years (mean age: 55 years). Most of the patients affected were males presenting asymptomatic white plaque lesions in the lower lip. The mean value quantified for AgNORs was 2.4 ± 0.63, ranging between 1.49 and 3.82. No statistically significant difference was observed associating the quantity of AgNORs with the clinical aspects collected from the patients (p > 0.05). Conclusion: The present study reports the lack of association between the proliferative cell activity and the clinical aspects observed in patients affected by AC through the quantification of AgNORs. Clinical significance: Knowing the potential relation between the clinical aspects of AC and the proliferative cell activity quantified by AgNORs could play a significant role toward the early diagnosis of malignant lesions in the clinical practice. Keywords: Actinic cheilitis, Proliferative cell activity, Silver-stained nucleolar organizer regions.",
"title": ""
},
{
"docid": "be41d072e3897506fad111549e7bf862",
"text": "Handing unbalanced data and noise are two important issues in the field of machine learning. This paper proposed a complete framework of fuzzy relevance vector machine by weighting the punishment terms of error in Bayesian inference process of relevance vector machine (RVM). Above problems can be learned within this framework with different kinds of fuzzy membership functions. Experiments on both synthetic data and real world data demonstrate that fuzzy relevance vector machine (FRVM) is effective in dealing with unbalanced data and reducing the effects of noises or outliers. 2008 Published by Elsevier B.V.",
"title": ""
},
{
"docid": "b851cf64be0684f63e63e7317aaada5c",
"text": "With the increasing popularity of cloud-based data services, data owners are highly motivated to store their huge amount of potentially sensitive personal data files on remote servers in encrypted form. Clients later can query over the encrypted database to retrieve files while protecting privacy of both the queries and the database, by allowing some reasonable leakage information. To this end, the notion of searchable symmetric encryption (SSE) was proposed. Meanwhile, recent literature has shown that most dynamic SSE solutions leaking information on updated keywords are vulnerable to devastating file-injection attacks. The only way to thwart these attacks is to design forward-private schemes. In this paper, we investigate new privacy-preserving indexing and query processing protocols which meet a number of desirable properties, including the multi-keyword query processing with conjunction and disjunction logic queries, practically high privacy guarantees with adaptive chosen keyword attack (CKA2) security and forward privacy, the support of dynamic data operations, and so on. Compared with previous schemes, our solutions are highly compact, practical, and flexible. Their performance and security are carefully characterized by rigorous analysis. Experimental evaluations conducted over a large representative data set demonstrate that our solutions can achieve modest search time efficiency, and they are practical for use in large-scale encrypted database systems.",
"title": ""
},
{
"docid": "124729483d5db255b60690e2facbfe45",
"text": "Human social intelligence depends on a diverse array of perceptual, cognitive, and motivational capacities. Some of these capacities depend on neural systems that may have evolved through modification of ancestral systems with non-social or more limited social functions (evolutionary repurposing). Social intelligence, in turn, enables new forms of repurposing within the lifetime of an individual (cultural and instrumental repurposing), which entail innovating over and exploiting pre-existing circuitry to meet problems our brains did not evolve to solve. Considering these repurposing processes can provide insight into the computations that brain regions contribute to social information processing, generate testable predictions that usefully constrain social neuroscience theory, and reveal biologically imposed constraints on cultural inventions and our ability to respond beneficially to contemporary challenges.",
"title": ""
},
{
"docid": "c5e078cb9835db450be894aee477d00c",
"text": "I would like to jump on the blockchain bandwagon. I would like to be able to say that blockchain is the solution to the longstanding problem of secure identity on the Internet. I would like to say that everyone in the world will soon have a digital identity. Put yourself on the blockchain and never again ask yourself, Who am I? - you are your blockchain address.",
"title": ""
},
{
"docid": "762d6e9a8f0061e3a2f1b1c0eeba2802",
"text": "A new prior is proposed for representation learning, which can be combined with other priors in order to help disentangling abstract factors from each other. It is inspired by the phenomenon of consciousness seen as the formation of a low-dimensional combination of a few concepts constituting a conscious thought, i.e., consciousness as awareness at a particular time instant. This provides a powerful constraint on the representation in that such low-dimensional thought vectors can correspond to statements about reality which are true, highly probable, or very useful for taking decisions. The fact that a few elements of the current state can be combined into such a predictive or useful statement is a strong constraint and deviates considerably from the maximum likelihood approaches to modelling data and how states unfold in the future based on an agent's actions. Instead of making predictions in the sensory (e.g. pixel) space, the consciousness prior allows the agent to make predictions in the abstract space, with only a few dimensions of that space being involved in each of these predictions. The consciousness prior also makes it natural to map conscious states to natural language utterances or to express classical AI knowledge in the form of facts and rules, although the conscious states may be richer than what can be expressed easily in the form of a sentence, a fact or a rule.",
"title": ""
},
{
"docid": "57e2adea74edb5eaf5b2af00ab3c625e",
"text": "Although scholars agree that moral emotions are critical for deterring unethical and antisocial behavior, there is disagreement about how 2 prototypical moral emotions--guilt and shame--should be defined, differentiated, and measured. We addressed these issues by developing a new assessment--the Guilt and Shame Proneness scale (GASP)--that measures individual differences in the propensity to experience guilt and shame across a range of personal transgressions. The GASP contains 2 guilt subscales that assess negative behavior-evaluations and repair action tendencies following private transgressions and 2 shame subscales that assess negative self-evaluations (NSEs) and withdrawal action tendencies following publically exposed transgressions. Both guilt subscales were highly correlated with one another and negatively correlated with unethical decision making. Although both shame subscales were associated with relatively poor psychological functioning (e.g., neuroticism, personal distress, low self-esteem), they were only weakly correlated with one another, and their relationships with unethical decision making diverged. Whereas shame-NSE constrained unethical decision making, shame-withdraw did not. Our findings suggest that differentiating the tendency to make NSEs following publically exposed transgressions from the tendency to hide or withdraw from public view is critically important for understanding and measuring dispositional shame proneness. The GASP's ability to distinguish these 2 classes of responses represents an important advantage of the scale over existing assessments. Although further validation research is required, the present studies are promising in that they suggest the GASP has the potential to be an important measurement tool for detecting individuals susceptible to corruption and unethical behavior.",
"title": ""
},
{
"docid": "1d3b2a5906d7db650db042db9ececed1",
"text": "Music consists of precisely patterned sequences of both movement and sound that engage the mind in a multitude of experiences. We move in response to music and we move in order to make music. Because of the intimate coupling between perception and action, music provides a panoramic window through which we can examine the neural organization of complex behaviors that are at the core of human nature. Although the cognitive neuroscience of music is still in its infancy, a considerable behavioral and neuroimaging literature has amassed that pertains to neural mechanisms that underlie musical experience. Here we review neuroimaging studies of explicit sequence learning and temporal production—findings that ultimately lay the groundwork for understanding how more complex musical sequences are represented and produced by the brain. These studies are also brought into an existing framework concerning the interaction of attention and time-keeping mechanisms in perceiving complex patterns of information that are distributed in time, such as those that occur in music.",
"title": ""
}
] | scidocsrr |
89ec42167ac8e1243fca82dc5a7df1ae | RGBD-camera based get-up event detection for hospital fall prevention | [
{
"docid": "b9a893fb526955b5131860a1402e2f7c",
"text": "A common trend in object recognition is to detect and leverage the use of sparse, informative feature points. The use of such features makes the problem more manageable while providing increased robustness to noise and pose variation. In this work we develop an extension of these ideas to the spatio-temporal case. For this purpose, we show that the direct 3D counterparts to commonly used 2D interest point detectors are inadequate, and we propose an alternative. Anchoring off of these interest points, we devise a recognition algorithm based on spatio-temporally windowed data. We present recognition results on a variety of datasets including both human and rodent behavior.",
"title": ""
}
] | [
{
"docid": "d90954eaae0c9d84e261c6d0794bbf76",
"text": "The index case of the Ebola virus disease epidemic in West Africa is believed to have originated in Guinea. By June 2014, Guinea, Liberia, and Sierra Leone were in the midst of a full-blown and complex global health emergency. The devastating effects of this Ebola epidemic in West Africa put the global health response in acute focus for urgent international interventions. Accordingly, in October 2014, a World Health Organization high-level meeting endorsed the concept of a phase 2/3 clinical trial in Liberia to study Ebola vaccines. As a follow-up to the global response, in November 2014, the Government of Liberia and the US Government signed an agreement to form a research partnership to investigate Ebola and to assess intervention strategies for treating, controlling, and preventing the disease in Liberia. This agreement led to the establishment of the Joint Liberia-US Partnership for Research on Ebola Virus in Liberia as the beginning of a long-term collaborative partnership in clinical research between the two countries. In this article, we discuss the methodology and related challenges associated with the implementation of the Ebola vaccines clinical trial, based on a double-blinded randomized controlled trial, in Liberia.",
"title": ""
},
{
"docid": "3f8ed9f5b015f50989ebde22329e6e7c",
"text": "In this paper we present a survey of results concerning algorithms, complexity, and applications of the maximum clique problem. We discuss enumerative and exact algorithms, heuristics, and a variety of other proposed methods. An up to date bibliography on the maximum clique and related problems is also provided.",
"title": ""
},
{
"docid": "af598c452d9a6589e45abe702c7cab58",
"text": "This paper proposes the concept of “liveaction virtual reality games” as a new genre of digital games based on an innovative combination of live-action, mixed-reality, context-awareness, and interaction paradigms that comprise tangible objects, context-aware input devices, and embedded/embodied interactions. Live-action virtual reality games are “live-action games” because a player physically acts out (using his/her real body and senses) his/her “avatar” (his/her virtual representation) in the game stage – the mixed-reality environment where the game happens. The game stage is a kind of “augmented virtuality” – a mixedreality where the virtual world is augmented with real-world information. In live-action virtual reality games, players wear HMD devices and see a virtual world that is constructed using the physical world architecture as the basic geometry and context information. Physical objects that reside in the physical world are also mapped to virtual elements. Liveaction virtual reality games keeps the virtual and real-worlds superimposed, requiring players to physically move in the environment and to use different interaction paradigms (such as tangible and embodied interaction) to complete game activities. This setup enables the players to touch physical architectural elements (such as walls) and other objects, “feeling” the game stage. Players have free movement and may interact with physical objects placed in the game stage, implicitly and explicitly. Live-action virtual reality games differ from similar game concepts because they sense and use contextual information to create unpredictable game experiences, giving rise to emergent gameplay.",
"title": ""
},
{
"docid": "c1bfef951e9775f6ffc949c5110e1bd1",
"text": "In the interest of more systematically documenting the early signs of autism, and of testing specific hypotheses regarding their underlying neurodevelopmental substrates, we have initiated a longitudinal study of high-risk infants, all of whom have an older sibling diagnosed with an autistic spectrum disorder. Our sample currently includes 150 infant siblings, including 65 who have been followed to age 24 months, who are the focus of this paper. We have also followed a comparison group of low-risk infants. Our measures include a novel observational scale (the first, to our knowledge, that is designed to assess autism-specific behavior in infants), a computerized visual orienting task, and standardized measures of temperament, cognitive and language development. Our preliminary results indicate that by 12 months of age, siblings who are later diagnosed with autism may be distinguished from other siblings and low-risk controls on the basis of: (1) several specific behavioral markers, including atypicalities in eye contact, visual tracking, disengagement of visual attention, orienting to name, imitation, social smiling, reactivity, social interest and affect, and sensory-oriented behaviors; (2) prolonged latency to disengage visual attention; (3) a characteristic pattern of early temperament, with marked passivity and decreased activity level at 6 months, followed by extreme distress reactions, a tendency to fixate on particular objects in the environment, and decreased expression of positive affect by 12 months; and (4) delayed expressive and receptive language. We discuss these findings in the context of various neural networks thought to underlie neurodevelopmental abnormalities in autism, including poor visual orienting. Over time, as we are able to prospectively study larger numbers and to examine interrelationships among both early-developing behaviors and biological indices of interest, we hope this work will advance current understanding of the neurodevelopmental origins of autism.",
"title": ""
},
{
"docid": "80c7a60035f08fcefc6f5e0ba1c82405",
"text": "This paper deals with word length in twenty of Jane Austen's letters and is part of a research project performed in Göttingen. Word length in English has so far only been studied in the context of contemporary texts (Hasse & Weinbrenner, 1995; Riedemann, 1994) and in the English dictionary (Rothschild, 1986). It has been ascertained that word length in texts abides by a law having the form of the mixed Poisson distribution -an assumption which in a language like English can easily be justified. However, in special texts other regularities can arise. Individual or genre-like factors can induce a systematic deviation in one or more frequency classes. We say that the phenomenon is on the way to another attractor. The first remedy in such cases is a local modification of the given frequency classes; the last remedy is the search for another model. THE DATA Letters were examined because it can be assumed that they are written down without interruption, and hence revised versions or the conscious use of stylistic means are the exception. The assumed natural rhythm governing word length in writing is thus believed to have remained mostly uninfluenced and constant. The length of the selected letters is between 126 and 494 words. They date from 1796 to 1817 and are partly businesslike and partly private. The letters to Jane Austen's sister Cassandra above all are written in an 'informal' style. In general, however, the letters are on a high stylistic level, which is not only characteristic of the use of language at that time, but also a main feature of Jane Austen's personal style. Thus contractions such as don't, can't, wouldn 't etc. do not occur. word depends on the number of vowels or diphthongs. Diphthongs and triphthongs can also be differentiated, both of these would count as one syllable. This paper only deals with diphthongs. The number of syllables of abbreviations is counted according to its fully spoken form. Thus addresses and titles such as 'Mrs', 'Mr', 'Md' and 'Capt' consist of two syllables; 'Lieut' consists of three syllables. The same holds for figures and for the abbreviations of months. MS is the common short form for 'Manuscript'; 'comps' (complements), 'G.Mama' (Grandmama), 'morn' (morning), 'c ' (could), 'w ' (would) or 'rec' (received) seem to be the writer's idiosyncratic abbreviations. In all cases length is determined by the spoken form. The analysis is based on the 'received pronunciation' of British English. Only the running text without address, date, or place has been considered. ANALYSING THE DATA General Criteria Length is determined by the number of syllables in each word. \"Word\" is defined as an orthographic unit. The number of syllables in a Findings As ascertained by the software tool 'AltmannFitter' (1994) the best model was found to be the positive Singh-Poisson distribution (= inflated zero truncated Poisson distribution), which has the following formula: *Address correspondence to: J. Frischen, Brüder-Grimm-Allee 2, 37075 Göttingen, Germany. D ow nl oa de d by [ K or ea U ni ve rs ity ] at 0 4: 53 1 0 Ja nu ar y 20 15 WORD LENGTH JANE AUSTEN'S LETTERS 81 aae' Table 3. Letter 16, Austen, 1798, to Cassandra Austen. fx NPx aae~ x\\(l-e-)' x=2,3,... Distributions modified in this way indicate that the author tends to leave the basic model (in the case of English, the Poisson distribution) by local modification of the shortest class (here x 188 57 15 4 1 187.79 56.53 16.38 3.561 0.74J",
"title": ""
},
{
"docid": "34f6603912c9775fc48329e596467107",
"text": "Turbo generator with evaporative cooling stator and air cooling rotor possesses many excellent qualities for mid unit. The stator bars and core are immerged in evaporative coolant, which could be cooled fully. The rotor bars are cooled by air inner cooling mode, and the cooling effect compared with hydrogen and water cooling mode is limited. So an effective ventilation system has to been employed to insure the reliability of rotor. This paper presents the comparisons of stator temperature distribution between evaporative cooling mode and air cooling mode, and the designing of rotor ventilation system combined with evaporative cooling stator.",
"title": ""
},
{
"docid": "91d0f12e9303b93521146d4d650a63df",
"text": "We utilize the state-of-the-art in deep learning to show that we can learn by example what constitutes humor in the context of a Yelp review. To the best of the authors knowledge, no systematic study of deep learning for humor exists – thus, we construct a scaffolded study. First, we use “shallow” methods such as Random Forests and Linear Discriminants built on top of bag-of-words and word vector features. Then, we build deep feedforward networks on top of these features – in some sense, measuring how much of an effect basic feedforward nets help. Then, we use recurrent neural networks and convolutional neural networks to more accurately model the sequential nature of a review.",
"title": ""
},
{
"docid": "402bf66ab180944e8f3068bef64fbc77",
"text": "EvolView is a web application for visualizing, annotating and managing phylogenetic trees. First, EvolView is a phylogenetic tree viewer and customization tool; it visualizes trees in various formats, customizes them through built-in functions that can link information from external datasets, and exports the customized results to publication-ready figures. Second, EvolView is a tree and dataset management tool: users can easily organize related trees into distinct projects, add new datasets to trees and edit and manage existing trees and datasets. To make EvolView easy to use, it is equipped with an intuitive user interface. With a free account, users can save data and manipulations on the EvolView server. EvolView is freely available at: http://www.evolgenius.info/evolview.html.",
"title": ""
},
{
"docid": "0c67bd1867014053a5bec3869f3b4f8c",
"text": "BACKGROUND AND PURPOSE\nConstraint-induced movement therapy (CI therapy) has previously been shown to produce large improvements in actual amount of use of a more affected upper extremity in the \"real-world\" environment in patients with chronic stroke (ie, >1 year after the event). This work was carried out in an American laboratory. Our aim was to determine whether these results could be replicated in another laboratory located in Germany, operating within the context of a healthcare system in which administration of conventional types of physical therapy is generally more extensive than in the United States.\n\n\nMETHODS\nFifteen chronic stroke patients were given CI therapy, involving restriction of movement of the intact upper extremity by placing it in a sling for 90% of waking hours for 12 days and training (by shaping) of the more affected extremity for 7 hours on the 8 weekdays during that period.\n\n\nRESULTS\nPatients showed a significant and very large degree of improvement from before to after treatment on a laboratory motor test and on a test assessing amount of use of the affected extremity in activities of daily living in the life setting (effect sizes, 0.9 and 2.2, respectively), with no decrement in performance at 6-month follow-up. During a pretreatment control test-retest interval, there were no significant changes on these tests.\n\n\nCONCLUSIONS\nResults replicate in Germany the findings with CI therapy in an American laboratory, suggesting that the intervention has general applicability.",
"title": ""
},
{
"docid": "077162116799dffe986cb488dda2ee56",
"text": "We present hybrid concolic testing, an algorithm that interleaves random testing with concolic execution to obtain both a deep and a wide exploration of program state space. Our algorithm generates test inputs automatically by interleaving random testing until saturation with bounded exhaustive symbolic exploration of program points. It thus combines the ability of random search to reach deep program states quickly together with the ability of concolic testing to explore states in a neighborhood exhaustively. We have implemented our algorithm on top of CUTE and applied it to obtain better branch coverage for an editor implementation (VIM 5.7, 150K lines of code) as well as a data structure implementation in C. Our experiments suggest that hybrid concolic testing can handle large programs and provide, for the same testing budget, almost 4× the branch coverage than random testing and almost 2× that of concolic testing.",
"title": ""
},
{
"docid": "01e53610e746555afadfc9387a66ce05",
"text": "This paper presents a survey of the autopilot systems for small or micro unmanned aerial vehicles (UAVs). The objective is to provide a summary of the current commercial, open source and research autopilot systems for convenience of potential small UAV users. The UAV flight control basics are introduced first. The radio control system and autopilot control system are then explained from both the hardware and software viewpoints. Several typical off-the-shelf autopilot packages are compared in terms of sensor packages, observation approaches and controller strengths. Afterwards some open source autopilot systems are introduced. Conclusion is made with a summary of the current autopilot market and a remark on the future development.",
"title": ""
},
{
"docid": "c7f0856c282d1039e44ba6ef50948d32",
"text": "This paper presents the analysis and operation of a three-phase pulsewidth modulation rectifier system formed by the star-connection of three single-phase boost rectifier modules (Y-rectifier) without a mains neutral point connection. The current forming operation of the Y-rectifier is analyzed and it is shown that the phase current has the same high quality and low ripple as the Vienna rectifier. The isolated star point of Y-rectifier results in a mutual coupling of the individual phase module outputs and has to be considered for control of the module dc link voltages. An analytical expression for the coupling coefficients of the Y-rectifier phase modules is derived. Based on this expression, a control concept with reduced calculation effort is designed and it provides symmetric loading of the phase modules and solves the balancing problem of the dc link voltages. The analysis also provides insight that enables the derivation of a control concept for two phase operation, such as in the case of a mains phase failure. The theoretical and simulated results are proved by experimental analysis on a fully digitally controlled, 5.4-kW prototype.",
"title": ""
},
{
"docid": "d7e7cdc9ac55d5af199395becfe02d73",
"text": "Text recognition in images is a research area which attempts to develop a computer system with the ability to automatically read the text from images. These days there is a huge demand in storing the information available in paper documents format in to a computer storage disk and then later reusing this information by searching process. One simple way to store information from these paper documents in to computer system is to first scan the documents and then store them as images. But to reuse this information it is very difficult to read the individual contents and searching the contents form these documents line-by-line and word-by-word. The challenges involved in this the font characteristics of the characters in paper documents and quality of images. Due to these challenges, computer is unable to recognize the characters while reading them. Thus there is a need of character recognition mechanisms to perform Document Image Analysis (DIA) which transforms documents in paper format to electronic format. In this paper we have discuss method for text recognition from images. The objective of this paper is to recognition of text from image for better understanding of the reader by using particular sequence of different processing module.",
"title": ""
},
{
"docid": "8f570416ceecf87310b7780ec935d814",
"text": "BACKGROUND\nInguinal lymph node involvement is an important prognostic factor in penile cancer. Inguinal lymph node dissection allows staging and treatment of inguinal nodal disease. However, it causes morbidity and is associated with complications, such as lymphocele, skin loss and infection. Video Endoscopic Inguinal Lymphadenectomy (VEIL) is an endoscopic procedure, and it seems to be a new and attractive approach duplicating the standard open procedure with less morbidity. We present here a critical perioperative assessment with points of technique.\n\n\nMETHODS\nTen patients with moderate to high grade penile carcinoma with clinically negative inguinal lymph nodes were subjected to elective VEIL. VEIL was done in standard surgical steps. Perioperative parameters were assessed that is - duration of the surgery, lymph-related complications, time until drain removal, lymph node yield, surgical emphysema and histopathological positivity of lymph nodes.\n\n\nRESULTS\nOperative time for VEIL was 120 to 180 minutes. Lymph node yield was 7 to 12 lymph nodes. No skin related complications were seen with VEIL. Lymph related complications, that is, lymphocele, were seen in only two patients. The suction drain was removed after four to eight days (mean 5.1). Overall morbidity was 20% with VEIL.\n\n\nCONCLUSION\nIn our early experience, VEIL was a safe and feasible technique in patients with penile carcinoma with non palpable inguinal lymph nodes. It allows the removal of inguinal lymph nodes within the same limits as in conventional surgical dissection and potentially reduces surgical morbidity.",
"title": ""
},
{
"docid": "72ddcb7a55918a328576a811a89d245b",
"text": "Among all new emerging RNA species, microRNAs (miRNAs) have attracted the interest of the scientific community due to their implications as biomarkers of prognostic value, disease progression, or diagnosis, because of defining features as robust association with the disease, or stable presence in easily accessible human biofluids. This field of research has been established twenty years ago, and the development has been considerable. The regulatory nature of miRNAs makes them great candidates for the treatment of infectious diseases, and a successful example in the field is currently being translated to clinical practice. This review will present a general outline of miRNAmolecules, as well as successful stories of translational significance which are getting us closer from the basic bench studies into clinical practice.",
"title": ""
},
{
"docid": "9b0ed9c60666c36f8cf33631f791687d",
"text": "The central notion of Role-Based Access Control (RBAC) is that users do not have discretionary access to enterprise objects. Instead, access permissions are administratively associated with roles, and users are administratively made members of appropriate roles. This idea greatly simplifies management of authorization while providing an opportunity for great flexibility in specifying and enforcing enterprisespecific protection policies. Users can be made members of roles as determined by their responsibilities and qualifications and can be easily reassigned from one role to another without modifying the underlying access structure. Roles can be granted new permissions as new applications and actions are incorporated, and permissions can be revoked from roles as needed. Some users and vendors have recognized the potential benefits of RBAC without a precise definition of what RBAC constitutes. Some RBAC features have been implemented in commercial products without a frame of reference as to the functional makeup and virtues of RBAC [1]. This lack of definition makes it difficult for consumers to compare products and for vendors to get credit for the effectiveness of their products in addressing known security problems. To correct these deficiencies, a number of government sponsored research efforts are underway to define RBAC precisely in terms of its features and the benefits it affords. This research includes: surveys to better understand the security needs of commercial and government users [2], the development of a formal RBAC model, architecture, prototype, and demonstrations to validate its use and feasibility. As a result of these efforts, RBAC systems are now beginning to emerge. The purpose of this paper is to provide additional insight as to the motivations and functionality that might go behind the official RBAC name.",
"title": ""
},
{
"docid": "644f61bc267d3dcb915f8c36c1584605",
"text": "This paper discusses the design and development of an experimental tabletop robot called \"Haru\" based on design thinking methodology. Right from the very beginning of the design process, we have brought an interdisciplinary team that includes animators, performers and sketch artists to help create the first iteration of a distinctive anthropomorphic robot design based on a concept that leverages form factor with functionality. Its unassuming physical affordance is intended to keep human expectation grounded while its actual interactive potential stokes human interest. The meticulous combination of both subtle and pronounced mechanical movements together with its stunning visual displays, highlight its affective affordance. As a result, we have developed the first iteration of our tabletop robot rich in affective potential for use in different research fields involving long-term human-robot interaction.",
"title": ""
},
{
"docid": "86820c43e63066930120fa5725b5b56d",
"text": "We introduce Wiktionary as an emerging lexical semantic resource that can be used as a substitute for expert-made resources in AI applications. We evaluate Wiktionary on the pervasive task of computing semantic relatedness for English and German by means of correlation with human rankings and solving word choice problems. For the first time, we apply a concept vector based measure to a set of different concept representations like Wiktionary pseudo glosses, the first paragraph of Wikipedia articles, English WordNet glosses, and GermaNet pseudo glosses. We show that: (i) Wiktionary is the best lexical semantic resource in the ranking task and performs comparably to other resources in the word choice task, and (ii) the concept vector based approach yields the best results on all datasets in both evaluations.",
"title": ""
}
] | scidocsrr |
539b8778fa5e2573c9d6a1c3627ba881 | The development of reading in children who speak English as a second language. | [
{
"docid": "4272b4a73ecd9d2b60e0c60de0469f17",
"text": "Suggesting that empirical work in the field of reading has advanced sufficiently to allow substantial agreed-upon results and conclusions, this literature review cuts through the detail of partially convergent, sometimes discrepant research findings to provide an integrated picture of how reading develops and how reading instruction should proceed. The focus of the review is prevention. Sketched is a picture of the conditions under which reading is most likely to develop easily--conditions that include stimulating preschool environments, excellent reading instruction, and the absence of any of a wide array of risk factors. It also provides recommendations for practice as well as recommendations for further research. After a preface and executive summary, chapters are (1) Introduction; (2) The Process of Learning to Read; (3) Who Has Reading Difficulties; (4) Predic:ors of Success and Failure in Reading; (5) Preventing Reading Difficulties before Kindergarten; (6) Instructional Strategies for Kindergarten and the Primary Grades; (7) Organizational Strategies for Kindergarten and the Primary Grades; (8) Helping Children with Reading Difficulties in Grades 1 to 3; (9) The Agents of Change; and (10) Recommendations for Practice and Research. Contains biographical sketches of the committee members and an index. Contains approximately 800 references.",
"title": ""
}
] | [
{
"docid": "ed06666ec688b6a57b2f3eaa57853dcd",
"text": "Sensor fusion is indispensable to improve accuracy and robustness in an autonomous navigation setting. However, in the space of end-to-end sensorimotor control, this multimodal outlook has received limited attention. In this work, we propose a novel stochastic regularization technique, called Sensor Dropout, to robustify multimodal sensor policy learning outcomes. We also introduce an auxiliary loss on policy network along with the standard DRL loss in order to reduce variance in actions of the multimodal sensor policy. Through extensive empirical testing, we demonstrate that our proposed policy can 1) operate with minimal performance drop in noisy environments and 2) remain functional even in the face of a sensor subset failure. Finally, through the visualization of gradients, we show that the learned policies are conditioned on the same latent input distribution despite having multiple and diverse observations spaces a hallmark of true sensorfusion. This efficacy of a multimodal sensor policy is shown through simulations on TORCS, a popular open-source racing car game. A demo video can be seen here: https://youtu.be/HC3TcJjXf3Q.",
"title": ""
},
{
"docid": "5325138fcbb52c61903e7bb9bd1c890b",
"text": "To simulate an efficient Intrusion Detection System (IDS) model, enormous amount of data are required to train and testing the model. To improve the accuracy and efficiency of the model, it is essential to infer the statistical properties from the observable elements of th e dataset. In this work, we have proposed some data preprocessing techniques such as filling the missing values, removing redundant samples, reduce the dimension, selecting most relevant features and finally, normalize the samples. After data preprocessing, we have simulated and tested the dataset by applying various data mining algorithms such as Support Vector Machine (SVM), Decision Tree, K nearest neighbor, K-Mean and Fuzzy C-Mean Clustering which provides better result in less computational time.",
"title": ""
},
{
"docid": "51a9180623be4ddaf514377074edc379",
"text": "Breast region measurements are important for research, but they may also become significant in the legal field as a quantitative tool for preoperative and postoperative evaluation. Direct anthropometric measurements can be taken in clinical practice. The aim of this study was to compare direct breast anthropometric measurements taken with a tape measure and a compass. Forty women, aged 18–60 years, were evaluated. They had 14 anatomical landmarks marked on the breast region and arms. The union of these points formed eight linear segments and one angle for each side of the body. The volunteers were evaluated by direct anthropometry in a standardized way, using a tape measure and a compass. Differences were found between the tape measure and the compass measurements for all segments analyzed (p > 0.05). Measurements obtained by tape measure and compass are not identical. Therefore, once the measurement tool is chosen, it should be used for the pre- and postoperative measurements in a standardized way. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .",
"title": ""
},
{
"docid": "39f5413a937587b3afc9bbd9ee4b735f",
"text": "examples in learning math. Science, 320(5875), 454–455. doi: 10.1126/science.1154659 Kaminski, J. A., Sloutsky, V. M., & Heckler, A. (2009). Transfer of mathematical knowledge: The portability of generic instantiations. Child Development Perspectives, 3(3), 151–155. doi:10.1111/j.1750-8606",
"title": ""
},
{
"docid": "5f42f43bf4f46b821dac3b0d0be2f63a",
"text": "The autonomous overtaking maneuver is a valuable technology in unmanned vehicle field. However, overtaking is always perplexed by its security and time cost. Now, an autonomous overtaking decision making method based on deep Q-learning network is proposed in this paper, which employs a deep neural network(DNN) to learn Q function from action chosen to state transition. Based on the trained DNN, appropriate action is adopted in different environments for higher reward state. A series of experiments are performed to verify the effectiveness and robustness of our proposed approach for overtaking decision making based on deep Q-learning method. The results support that our approach achieves better security and lower time cost compared with traditional reinforcement learning methods.",
"title": ""
},
{
"docid": "9ed5fdb991edd5de57ffa7f13121f047",
"text": "We analyze the increasing threats against IoT devices. We show that Telnet-based attacks that target IoT devices have rocketed since 2014. Based on this observation, we propose an IoT honeypot and sandbox, which attracts and analyzes Telnet-based attacks against various IoT devices running on different CPU architectures such as ARM, MIPS, and PPC. By analyzing the observation results of our honeypot and captured malware samples, we show that there are currently at least 5 distinct DDoS malware families targeting Telnet-enabled IoT devices and one of the families has quickly evolved to target more devices with as many as 9 different CPU architectures.",
"title": ""
},
{
"docid": "a30c2a8d3db81ae121e62af5994d3128",
"text": "Recent advances in the fields of robotics, cyborg development, moral psychology, trust, multi agent-based systems and socionics have raised the need for a better understanding of ethics, moral reasoning, judgment and decision-making within the system of man and machines. Here we seek to understand key research questions concerning the interplay of ethical trust at the individual level and the social moral norms at the collective end. We review salient works in the fields of trust and machine ethics research, underscore the importance and the need for a deeper understanding of ethical trust at the individual level and the development of collective social moral norms. Drawing upon the recent findings from neural sciences on mirror-neuron system (MNS) and social cognition, we present a bio-inspired Computational Model of Ethical Trust (CMET) to allow investigations of the interplay of ethical trust and social moral norms.",
"title": ""
},
{
"docid": "7aa07ba3e04a79cf51dfc9c42b415628",
"text": "A model is presented that permits the calculation of densities of 60-Hz magnetic fields throughout a residence from only a few measurements. We assume that residential magnetic fields are produced by sources external to the house and by the residential grounding circuit. The field from external sources is measured with a single probe. The field produced by the grounding circuit is calculated from the current flowing in the circuit and its geometry. The two fields are combined to give a prediction of the total field at any point in the house. A data-acquisition system was built to record the magnitude and phase of the grounding current and the field from external sources. The model's predictions were compared with measurements of the total magnetic field at a single location in 23 houses; a correlation coefficient of .87 was obtained, indicating that the model has good predictive capability. A more detailed study that was carried out in one house permitted comparisons of measurements with the model's predictions at locations throughout the house. Again, quite reasonable agreement was found. We also investigated the temporal variability of field readings in this house. Daily magnetic field averages were found to be considerably more stable than hourly averages. Finally, we demonstrate the use of the model in creating a profile of the magnetic fields in a home.",
"title": ""
},
{
"docid": "4a9913930e2e07b867cc701b07e88eaa",
"text": "There is little doubt that the incidence of depression in Britain is increasing. According to research at the Universities of London and Warwick, the incidence of depression among young people has doubled in the past 12 years. However, whether young or old, the question is why and what can be done? There are those who argue that the increasingly common phenomenon of depression is primarily psychological, and best dealt with by counselling. There are others who consider depression as a biochemical phenomenon, best dealt with by antidepressant medication. However, there is a third aspect to the onset and treatment of depression that is given little heed: nutrition. Why would nutrition have anything to do with depression? Firstly, we have seen a significant decline in fruit and vegetable intake (rich in folic acid), in fish intake (rich in essential fats) and an increase in sugar consumption, from 2 lb a year in the 1940s to 150 lb a year in many of today’s teenagers. Each of these nutrients is strongly linked to depression and could, theoretically, contribute to increasing rates of depression. Secondly, if depression is a biochemical imbalance it makes sense to explore how the brain normalises its own biochemistry, using nutrients as the precursors for key neurotransmitters such as serotonin. Thirdly, if 21st century living is extra-stressful, it would be logical to assume that increasing psychological demands would also increase nutritional requirements since the brain is structurally and functionally completely dependent on nutrients. So, what evidence is there to support suboptimal nutrition as a potential contributor to depression? These are the common imbalances connected to nutrition that are known to worsen your mood and motivation:",
"title": ""
},
{
"docid": "d42bbb6fe8d99239993ed01aa44c32ef",
"text": "Chemical communication plays a very important role in the lives of many social insects. Several different types of pheromones (species-specific chemical messengers) of ants have been described, particularly those involved in recruitment, recognition, territorial and alarm behaviours. Properties of pheromones include activity in minute quantities (thus requiring sensitive methods for chemical analysis) and specificity (which can have chemotaxonomic uses). Ants produce pheromones in various exocrine glands, such as the Dufour, poison, pygidial and mandibular glands. A wide range of substances have been identified from these glands.",
"title": ""
},
{
"docid": "82ef80d6257c5787dcf9201183735497",
"text": "Big data is becoming a research focus in intelligent transportation systems (ITS), which can be seen in many projects around the world. Intelligent transportation systems will produce a large amount of data. The produced big data will have profound impacts on the design and application of intelligent transportation systems, which makes ITS safer, more efficient, and profitable. Studying big data analytics in ITS is a flourishing field. This paper first reviews the history and characteristics of big data and intelligent transportation systems. The framework of conducting big data analytics in ITS is discussed next, where the data source and collection methods, data analytics methods and platforms, and big data analytics application categories are summarized. Several case studies of big data analytics applications in intelligent transportation systems, including road traffic accidents analysis, road traffic flow prediction, public transportation service plan, personal travel route plan, rail transportation management and control, and assets maintenance are introduced. Finally, this paper discusses some open challenges of using big data analytics in ITS.",
"title": ""
},
{
"docid": "2528b23554f934a67b3ed66f7df9d79e",
"text": "In this paper, we implemented an approach to predict final exam scores from early course assessments of the students during the semester. We used a linear regression model to check which part of the evaluation of the course assessment affects final exam score the most. In addition, we explained the origins of data mining and data mining in education. After preprocessing and preparing data for the task in hand, we implemented the linear regression model. The results of our work show that quizzes are most accurate predictors of final exam scores compared to other kinds of assessments.",
"title": ""
},
{
"docid": "6d4cd80341c429ecaaccc164b1bde5f9",
"text": "One hundred and two olive RAPD profiles were sampled from all around the Mediterranean Basin. Twenty four clusters of RAPD profiles were shown in the dendrogram based on the Ward’s minimum variance algorithm using chi-square distances. Factorial discriminant analyses showed that RAPD profiles were correlated with the use of the fruits and the country or region of origin of the cultivars. This suggests that cultivar selection has occurred in different genetic pools and in different areas. Mitochondrial DNA RFLP analyses were also performed. These mitotypes supported the conclusion also that multilocal olive selection has occurred. This prediction for the use of cultivars will help olive growers to choose new foreign cultivars for testing them before an eventual introduction if they are well adapted to local conditions.",
"title": ""
},
{
"docid": "e910310c5cc8357c570c6c4110c4e94f",
"text": "Epistemic planning can be used for decision making in multi-agent situations with distributed knowledge and capabilities. Dynamic Epistemic Logic (DEL) has been shown to provide a very natural and expressive framework for epistemic planning. In this paper, we aim to give an accessible introduction to DEL-based epistemic planning. The paper starts with the most classical framework for planning, STRIPS, and then moves towards epistemic planning in a number of smaller steps, where each step is motivated by the need to be able to model more complex planning scenarios.",
"title": ""
},
{
"docid": "eaae33cb97b799eff093a7a527143346",
"text": "RGB Video now is one of the major data sources of traffic surveillance applications. In order to detect the possible traffic events in the video, traffic-related objects, such as vehicles and pedestrians, should be first detected and recognized. However, due to the 2D nature of the RGB videos, there are technical difficulties in efficiently detecting and recognizing traffic-related objects from them. For instance, the traffic-related objects cannot be efficiently detected in separation while parts of them overlap, and complex background will influence the accuracy of the object detection. In this paper, we propose a robust RGB-D data based traffic scene understanding algorithm. By integrating depth information, we can calculate more discriminative object features and spatial information can be used to separate the objects in the scene efficiently. Experimental results show that integrating depth data can improve the accuracy of object detection and recognition. We also show that the analyzed object information plus depth data facilitate two important traffic event detection applications: overtaking warning and collision",
"title": ""
},
{
"docid": "c57d4b7ea0e5f7126329626408f1da2d",
"text": "Educational Data Mining (EDM) is an interdisciplinary ingenuous research area that handles the development of methods to explore data arising in a scholastic fields. Computational approaches used by EDM is to examine scholastic data in order to study educational questions. As a result, it provides intrinsic knowledge of teaching and learning process for effective education planning. This paper conducts a comprehensive study on the recent and relevant studies put through in this field to date. The study focuses on methods of analysing educational data to develop models for improving academic performances and improving institutional effectiveness. This paper accumulates and relegates literature, identifies consequential work and mediates it to computing educators and professional bodies. We identify research that gives well-fortified advice to amend edifying and invigorate the more impuissant segment students in the institution. The results of these studies give insight into techniques for ameliorating pedagogical process, presaging student performance, compare the precision of data mining algorithms, and demonstrate the maturity of open source implements.",
"title": ""
},
{
"docid": "5f5c78b74e1e576dd48690b903bf4de4",
"text": "Automatic facial expression recognition has been an active topic in computer science for over two decades, in particular facial action coding system action unit (AU) detection and classification of a number of discrete emotion states from facial expressive imagery. Standardization and comparability have received some attention; for instance, there exist a number of commonly used facial expression databases. However, lack of a commonly accepted evaluation protocol and, typically, lack of sufficient details needed to reproduce the reported individual results make it difficult to compare systems. This, in turn, hinders the progress of the field. A periodical challenge in facial expression recognition would allow such a comparison on a level playing field. It would provide an insight on how far the field has come and would allow researchers to identify new goals, challenges, and targets. This paper presents a meta-analysis of the first such challenge in automatic recognition of facial expressions, held during the IEEE conference on Face and Gesture Recognition 2011. It details the challenge data, evaluation protocol, and the results attained in two subchallenges: AU detection and classification of facial expression imagery in terms of a number of discrete emotion categories. We also summarize the lessons learned and reflect on the future of the field of facial expression recognition in general and on possible future challenges in particular.",
"title": ""
},
{
"docid": "7fc10687c97d2219ce8555dd92baf57c",
"text": "The wind-induced response of tall buildings is inherently sensitive to structural dynamic properties like frequency and damping ratio. The latter parameter in particular is fraught with uncertainty in the design stage and may result in a built structure whose acceleration levels exceed design predictions. This reality has motivated the need to monitor tall buildings in full-scale. This paper chronicles the authors’ experiences in the analysis of full-scale dynamic response data from tall buildings around the world, including full-scale datasets from high rises in Boston, Chicago, and Seoul. In particular, this study focuses on the effects of coupling, beat phenomenon, amplitude dependence, and structural system type on dynamic properties, as well as correlating observed periods of vibration against fi nite element predictions. The fi ndings suggest the need for time–frequency analyses to identify coalescing modes and the mechanisms spurring them. The study also highlighted the effect of this phenomenon on damping values, the overestimates that can result due to amplitude dependence, as well as the comparatively larger degree of energy dissipation experienced by buildings dominated by frame action. Copyright © 2007 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "f6b4ab40746d0c8c7e2b0113402667a9",
"text": "This paper presents a method for measuring the semantic similarity between concepts in Knowledge Graphs (KGs) such as WordNet and DBpedia. Previous work on semantic similarity methods have focused on either the structure of the semantic network between concepts (e.g., path length and depth), or only on the Information Content (IC) of concepts. We propose a semantic similarity method, namely wpath, to combine these two approaches, using IC to weight the shortest path length between concepts. Conventional corpus-based IC is computed from the distributions of concepts over textual corpus, which is required to prepare a domain corpus containing annotated concepts and has high computational cost. As instances are already extracted from textual corpus and annotated by concepts in KGs, graph-based IC is proposed to compute IC based on the distributions of concepts over instances. Through experiments performed on well known word similarity datasets, we show that the wpath semantic similarity method has produced a statistically significant improvement over other semantic similarity methods. Moreover, in a real category classification evaluation, the wpath method has shown the best performance in terms of accuracy and F score.",
"title": ""
},
{
"docid": "72c79181572c836cb92aac8fe7a14c5d",
"text": "When automatic plagiarism detection is carried out considering a reference corpus, a suspicious text is compared to a set of original documents in order to relate the plagiarised text fragments to their potential source. One of the biggest difficulties in this task is to locate plagiarised fragments that have been modified (by rewording, insertion or deletion, for example) from the source text. The definition of proper text chunks as comparison units of the suspicious and original texts is crucial for the success of this kind of applications. Our experiments with the METER corpus show that the best results are obtained when considering low level word n-grams comparisons (n = {2, 3}).",
"title": ""
}
] | scidocsrr |
a28e7cdf3a39ff608c0d62daf4268019 | Grounding Topic Models with Knowledge Bases | [
{
"docid": "8d8dc05c2de34440eb313503226f7e99",
"text": "Disambiguating entity references by annotating them with unique ids from a catalog is a critical step in the enrichment of unstructured content. In this paper, we show that topic models, such as Latent Dirichlet Allocation (LDA) and its hierarchical variants, form a natural class of models for learning accurate entity disambiguation models from crowd-sourced knowledge bases such as Wikipedia. Our main contribution is a semi-supervised hierarchical model called Wikipedia-based Pachinko Allocation Model} (WPAM) that exploits: (1) All words in the Wikipedia corpus to learn word-entity associations (unlike existing approaches that only use words in a small fixed window around annotated entity references in Wikipedia pages), (2) Wikipedia annotations to appropriately bias the assignment of entity labels to annotated (and co-occurring unannotated) words during model learning, and (3) Wikipedia's category hierarchy to capture co-occurrence patterns among entities. We also propose a scheme for pruning spurious nodes from Wikipedia's crowd-sourced category hierarchy. In our experiments with multiple real-life datasets, we show that WPAM outperforms state-of-the-art baselines by as much as 16% in terms of disambiguation accuracy.",
"title": ""
},
{
"docid": "f6121f69419a074b657bb4a0324bae4a",
"text": "Latent Dirichlet allocation (LDA) is a popular topic modeling technique for exploring hidden topics in text corpora. Increasingly, topic modeling needs to scale to larger topic spaces and use richer forms of prior knowledge, such as word correlations or document labels. However, inference is cumbersome for LDA models with prior knowledge. As a result, LDA models that use prior knowledge only work in small-scale scenarios. In this work, we propose a factor graph framework, Sparse Constrained LDA (SC-LDA), for efficiently incorporating prior knowledge into LDA. We evaluate SC-LDA’s ability to incorporate word correlation knowledge and document label knowledge on three benchmark datasets. Compared to several baseline methods, SC-LDA achieves comparable performance but is significantly faster. 1 Challenge: Leveraging Prior Knowledge in Large-scale Topic Models Topic models, such as Latent Dirichlet Allocation (Blei et al., 2003, LDA), have been successfully used for discovering hidden topics in text collections. LDA is an unsupervised model—it requires no annotation—and discovers, without any supervision, the thematic trends in a text collection. However, LDA’s lack of supervision can lead to disappointing results. Often, the hidden topics learned by LDA fail to make sense to end users. Part of the problem is that the objective function of topic models does not always correlate with human judgments of topic quality (Chang et al., 2009). Therefore, it’s often necessary to incorporate prior knowledge into topic models to improve the model’s performance. Recent work has also shown that by interactive human feedback can improve the quality and stability of topics (Hu and Boyd-Graber, 2012; Yang et al., 2015). Information about documents (Ramage et al., 2009) or words (Boyd-Graber et al., 2007) can improve LDA’s topics. In addition to its occasional inscrutability, scalability can also hamper LDA’s adoption. Conventional Gibbs sampling—the most widely used inference for LDA—scales linearly with the number of topics. Moreover, accurate training usually takes many sampling passes over the dataset. Therefore, for large datasets with millions or even billions of tokens, conventional Gibbs sampling takes too long to finish. For standard LDA, recently introduced fast sampling methods (Yao et al., 2009; Li et al., 2014; Yuan et al., 2015) enable industrial applications of topic modeling to search engines and online advertising, where capturing the “long tail” of infrequently used topics requires large topic spaces. For example, while typical LDA models in academic papers have up to 103 topics, industrial applications with 105–106 topics are common (Wang et al., 2014). Moreover, scaling topic models to many topics can also reveal the hierarchical structure of topics (Downey et al., 2015). Thus, there is a need for topic models that can both benefit from rich prior information and that can scale to large datasets. However, existing methods for improving scalability focus on topic models without prior information. To rectify this, we propose a factor graph model that encodes a potential function over the hidden topic variables, encouraging topics consistent with prior knowledge. The factor model representation admits an efficient sampling algorithm that takes advantage of the model’s sparsity. We show that our method achieves comparable performance but runs significantly faster than baseline methods, enabling models to discover models with many topics enriched by prior knowledge. 
2 Efficient Algorithm for Incorporating Knowledge into LDA In this section, we introduce the factor model for incorporating prior knowledge and show how to efficiently use Gibbs sampling for inference. 2.1 Background: LDA and SparseLDA A statistical topic model represents words in documents in a collection D as mixtures of T topics, which are multinomials over a vocabulary of size V. In LDA, each document d is associated with a multinomial distribution over topics, θ_d. The probability of a word type w given topic z is φ_{w|z}. The multinomial distributions θ_d and φ_z are drawn from Dirichlet distributions: α and β are the hyperparameters for θ and φ. We represent the document collection D as a sequence of words w, and topic assignments as z. We use symmetric priors α and β in the model and experiment, but asymmetric priors are easily encoded in the models (Wallach et al., 2009). Discovering the latent topic assignments z from observed words w requires inferring the posterior distribution P(z|w). Griffiths and Steyvers (2004) propose using collapsed Gibbs sampling. The probability of a topic assignment z = t in document d given an observed word type w and the other topic assignments z_{-} is P(z = t | z_{-}, w) ∝ (n_{d,t} + α)(n_{w,t} + β)/(n_t + Vβ).",
"title": ""
},
{
"docid": "ef31d8b3cd83aeb109f62fde4cd8bc8a",
"text": "Many existing knowledge bases (KBs), including Freebase, Yago, and NELL, rely on a fixed ontology, given as an input to the system, which defines the data to be cataloged in the KB, i.e., a hierarchy of categories and relations between them. The system then extracts facts that match the predefined ontology. We propose an unsupervised model that jointly learns a latent ontological structure of an input corpus, and identifies facts from the corpus that match the learned structure. Our approach combines mixed membership stochastic block models and topic models to infer a structure by jointly modeling text, a latent concept hierarchy, and latent semantic relationships among the entities mentioned in the text. As a case study, we apply the model to a corpus of Web documents from the software domain, and evaluate the accuracy of the various components of the learned ontology.",
"title": ""
}
] | [
{
"docid": "814aa0089ce9c5839d028d2e5aca450d",
"text": "Espresso is a document-oriented distributed data serving platform that has been built to address LinkedIn's requirements for a scalable, performant, source-of-truth primary store. It provides a hierarchical document model, transactional support for modifications to related documents, real-time secondary indexing, on-the-fly schema evolution and provides a timeline consistent change capture stream. This paper describes the motivation and design principles involved in building Espresso, the data model and capabilities exposed to clients, details of the replication and secondary indexing implementation and presents a set of experimental results that characterize the performance of the system along various dimensions.\n When we set out to build Espresso, we chose to apply best practices in industry, already published works in research and our own internal experience with different consistency models. Along the way, we built a novel generic distributed cluster management framework, a partition-aware change- capture pipeline and a high-performance inverted index implementation.",
"title": ""
},
{
"docid": "75b2f12152526a0fbc5648261faca1cc",
"text": "Traditional automated essay scoring systems rely on carefully designed features to evaluate and score essays. The performance of such systems is tightly bound to the quality of the underlying features. However, it is laborious to manually design the most informative features for such a system. In this paper, we develop an approach based on recurrent neural networks to learn the relation between an essay and its assigned score, without any feature engineering. We explore several neural network models for the task of automated essay scoring and perform some analysis to get some insights of the models. The results show that our best system, which is based on long short-term memory networks, outperforms a strong baseline by 5.6% in terms of quadratic weighted Kappa, without requiring any feature engineering.",
"title": ""
},
{
"docid": "44e135418dc6480366bb5679b62bc4f9",
"text": "There is growing interest regarding the role of the right inferior frontal gyrus (RIFG) during a particular form of executive control referred to as response inhibition. However, tasks used to examine neural activity at the point of response inhibition have rarely controlled for the potentially confounding effects of attentional demand. In particular, it is unclear whether the RIFG is specifically involved in inhibitory control, or is involved more generally in the detection of salient or task relevant cues. The current fMRI study sought to clarify the role of the RIFG in executive control by holding the stimulus conditions of one of the most popular response inhibition tasks-the Stop Signal Task-constant, whilst varying the response that was required on reception of the stop signal cue. Our results reveal that the RIFG is recruited when important cues are detected, regardless of whether that detection is followed by the inhibition of a motor response, the generation of a motor response, or no external response at all.",
"title": ""
},
{
"docid": "bc35d87706c66350f4cec54befc9acc2",
"text": "This paper presents a new improved term frequency/inverse document frequency (TF-IDF) approach which uses confidence, support and characteristic words to enhance the recall and precision of text classification. Synonyms defined by a lexicon are processed in the improved TF-IDF approach. We detailedly discuss and analyze the relationship among confidence, recall and precision. The experiments based on science and technology gave promising results that the new TF-IDF approach improves the precision and recall of text classification compared with the conventional TF-IDF approach.",
"title": ""
},
{
"docid": "88a4ab49e7d3263d5d6470d123b6e74b",
"text": "Graph databases have gained renewed interest in the last years, due to its applications in areas such as the Semantic Web and Social Networks Analysis. We study the problem of querying graph databases, and, in particular, the expressiveness and complexity of evaluation for several general-purpose query languages, such as the regular path queries and its extensions with conjunctions and inverses. We distinguish between two semantics for these languages. The first one, based on simple paths, easily leads to intractability, while the second one, based on arbitrary paths, allows tractable evaluation for an expressive family of languages.\n We also study two recent extensions of these languages that have been motivated by modern applications of graph databases. The first one allows to treat paths as first-class citizens, while the second one permits to express queries that combine the topology of the graph with its underlying data.",
"title": ""
},
{
"docid": "625b96d21cb9ff05785aa34c98c567ff",
"text": "The number of mitoses per tissue area gives an important aggressiveness indication of the invasive breast carcinoma. However, automatic mitosis detection in histology images remains a challenging problem. Traditional methods either employ hand-crafted features to discriminate mitoses from other cells or construct a pixel-wise classifier to label every pixel in a sliding window way. While the former suffers from the large shape variation of mitoses and the existence of many mimics with similar appearance, the slow speed of the later prohibits its use in clinical practice. In order to overcome these shortcomings, we propose a fast and accurate method to detect mitosis by designing a novel deep cascaded convolutional neural network, which is composed of two components. First, by leveraging the fully convolutional neural network, we propose a coarse retrieval model to identify and locate the candidates of mitosis while preserving a high sensitivity. Based on these candidates, a fine discrimination model utilizing knowledge transferred from cross-domain is developed to further single out mitoses from hard mimics. Our approach outperformed other methods by a large margin in 2014 ICPR MITOS-ATYPIA challenge in terms of detection accuracy. When compared with the state-of-the-art methods on the 2012 ICPR MITOSIS data (a smaller and less challenging dataset), our method achieved comparable or better results with a roughly 60 times faster speed.",
"title": ""
},
{
"docid": "10646c29afc4cc5c0a36ca508aabb41a",
"text": "As high-resolution fingerprint images are becoming more common, the pores have been found to be one of the promising candidates in improving the performance of automated fingerprint identification systems (AFIS). This paper proposes a deep learning approach towards pore extraction. It exploits the feature learning and classification capability of convolutional neural networks (CNNs) to detect pores on fingerprints. Besides, this paper also presents a unique affine Fourier moment-matching (AFMM) method of matching and fusing the scores obtained for three different fingerprint features to deal with both local and global linear distortions. Combining the two aforementioned contributions, an EER of 3.66% can be observed from the experimental results.",
"title": ""
},
{
"docid": "0a0ca1f866a4be1a3f264c6e3c888adc",
"text": "Printed circuit board (PCB) windings are convenient for many applications given their ease of manufacture, high repeatability, and low profile. In many cases, the use of multistranded litz wires is appropriate due to the rated power, frequency range, and efficiency constraints. This paper proposes a manufacturing technique and a semianalytical loss model for PCB windings using planar litz structure to obtain a similar ac loss reduction to that of conventional windings of round wires with litz structure. Different coil prototypes have been tested in several configurations to validate the proposal.",
"title": ""
},
{
"docid": "c77042cb1a8255ac99ebfbc74979c3c6",
"text": "Machine translation systems require semantic knowledge and grammatical understanding. Neural machine translation (NMT) systems often assume this information is captured by an attention mechanism and a decoder that ensures fluency. Recent work has shown that incorporating explicit syntax alleviates the burden of modeling both types of knowledge. However, requiring parses is expensive and does not explore the question of what syntax a model needs during translation. To address both of these issues we introduce a model that simultaneously translates while inducing dependency trees. In this way, we leverage the benefits of structure while investigating what syntax NMT must induce to maximize performance. We show that our dependency trees are 1. language pair dependent and 2. improve translation quality.",
"title": ""
},
{
"docid": "ecfb05d557ebe524e3821fcf6ce0f985",
"text": "This paper presents a novel active-source-pump (ASP) circuit technique to significantly lower the ESD sensitivity of ultrathin gate inputs in advanced sub-90nm CMOS technologies. As demonstrated by detailed experimental analysis, an ESD design window expansion of more than 100% can be achieved. This revives conventional ESD solutions for ultrasensitive input protection also enabling low-capacitance RF protection schemes with a high ESD design flexibility at IC-level. ASP IC application examples, and the impact of ASP on normal RF operation performance, are discussed.",
"title": ""
},
{
"docid": "3301a0cf26af8d4d8c7b2b9d56cec292",
"text": "Reading comprehension (RC)—in contrast to information retrieval—requires integrating information and reasoning about events, entities, and their relations across a full document. Question answering is conventionally used to assess RC ability, in both artificial agents and children learning to read. However, existing RC datasets and tasks are dominated by questions that can be solved by selecting answers using superficial information (e.g., local context similarity or global term frequency); they thus fail to test for the essential integrative aspect of RC. To encourage progress on deeper comprehension of language, we present a new dataset and set of tasks in which the reader must answer questions about stories by reading entire books or movie scripts. These tasks are designed so that successfully answering their questions requires understanding the underlying narrative rather than relying on shallow pattern matching or salience. We show that although humans solve the tasks easily, standard RC models struggle on the tasks presented here. We provide an analysis of the dataset and the challenges it presents.",
"title": ""
},
{
"docid": "7beeea42e8f5d0f21ea418aa7f433ab9",
"text": "This application note describes principles and uses for continuous ST segment monitoring. It also provides a detailed description of the ST Analysis algorithm implemented in the multi-lead ST/AR (ST and Arrhythmia) algorithm, and an assessment of the ST analysis algorithm's performance.",
"title": ""
},
{
"docid": "d540250c51e97622a10bcb29f8fde956",
"text": "With many advantages of rectangular waveguide and microstrip lines, substrate integrated waveguide (SIW) can be used for design of planar waveguide-like slot antenna. However, the bandwidth of this kind of antenna structure is limited. In this work, a parasitic dipole is introduced and coupled with the SIW radiate slot. The results have indicated that the proposed technique can enhance the bandwidth of the SIW slot antenna significantly. The measured bandwidth of fabricated antenna prototype is about 19%, indicating about 115% bandwidth enhancement than the ridged substrate integrated waveguide (RSIW) slot antenna.",
"title": ""
},
{
"docid": "d35bc5ef2ea3ce24bbba87f65ae93a25",
"text": "Fog computing, complementary to cloud computing, has recently emerged as a new paradigm that extends the computing infrastructure from the center to the edge of the network. This article explores the design of a fog computing orchestration framework to support IoT applications. In particular, we focus on how the widely adopted cloud computing orchestration framework can be customized to fog computing systems. We first identify the major challenges in this procedure that arise due to the distinct features of fog computing. Then we discuss the necessary adaptations of the orchestration framework to accommodate these challenges.",
"title": ""
},
{
"docid": "4e2bed31e5406e30ae59981fa8395d5b",
"text": "Asynchronous Learning Networks (ALNs) make the process of collaboration more transparent, because a transcript of conference messages can be used to assess individual roles and contributions and the collaborative process itself. This study considers three aspects of ALNs: the design; the quality of the resulting knowledge construction process; and cohesion, role and power network structures. The design is evaluated according to the Social Interdependence Theory of Cooperative Learning. The quality of the knowledge construction process is evaluated through Content Analysis; and the network structures are analyzed using Social Network Analysis of the response relations among participants during online discussions. In this research we analyze data from two three-monthlong ALN academic university courses: a formal, structured, closed forum and an informal, nonstructured, open forum. We found that in the structured ALN, the knowledge construction process reached a very high phase of critical thinking and developed cohesive cliques. The students took on bridging and triggering roles, while the tutor had relatively little power. In the non-structured ALN, the knowledge construction process reached a low phase of cognitive activity; few cliques were constructed; most of the students took on the passive role of teacher-followers; and the tutor was at the center of activity. These differences are statistically significant. We conclude that a well-designed ALN develops significant, distinct cohesion, and role and power structures lead the knowledge construction process to high phases of critical thinking.",
"title": ""
},
{
"docid": "7d5215dc3213b13748f97aa21898e86e",
"text": "Several tasks in computer vision and machine learning can be modeled as MRF-MAP inference problems. Using higher order potentials to model complex dependencies can significantly improve the performance. The problem can often be modeled as minimizing a sum of submodular (SoS) functions. Since sum of submodular functions is also submodular, existing submodular function minimization (SFM) techniques can be employed for optimal inference in polynomial time [1], [2]. These techniques, though oblivious to the clique sizes, have limited scalability in the number of pixels. On the other hand, state of the art algorithms in computer vision [3], [47] can handle problems with a large number of pixels but fail to scale to large clique sizes. In this paper, we adapt two SFM algorithms [1], [5], to exploit the sum of submodular structure, thereby helping them scale to large number of pixels while maintaining scalability with large clique sizes. Our ideas are general enough and can be extended to adapt other existing SFM algorithms as well. Our experiments on computer vision problems demonstrate that our approach can easily scale up to clique sizes of 300, thereby unlocking the usage of really large sized cliques for MRF-MAP inference problems.",
"title": ""
},
{
"docid": "07e03419430b7ea8ca3c7b02f9340d46",
"text": "Recently, [2] presented a security attack on the privacy-preserving outsourcing scheme for biometric identification proposed in [1]. In [2], the author claims that the scheme CloudBI-II proposed in [1] can be broken under the collusion case. That is, when the cloud server acts as a user to submit a number of identification requests, CloudBI-II is no longer secure. In this technical report, we will explicitly show that the attack method proposed in [2] doesn’t work in fact.",
"title": ""
},
{
"docid": "b97c9e8238f74539e8a17dcffecdd35f",
"text": "This paper presents a novel approach to the task of automatic music genre classification which is based on multiple feature vectors and ensemble of classifiers. Multiple feature vectors are extracted from a single music piece. First, three 30-second music segments, one from the beginning, one from the middle and one from end part of a music piece are selected and feature vectors are extracted from each segment. Individual classifiers are trained to account for each feature vector extracted from each music segment. At the classification, the outputs provided by each individual classifier are combined through simple combination rules such as majority vote, max, sum and product rules, with the aim of improving music genre classification accuracy. Experiments carried out on a large dataset containing more than 3,000 music samples from ten different Latin music genres have shown that for the task of automatic music genre classification, the features extracted from the middle part of the music provide better results than using the segments from the beginning or end part of the music. Furthermore, the proposed ensemble approach, which combines the multiple feature vectors, provides better accuracy than using single classifiers and any individual music segment.",
"title": ""
},
{
"docid": "ef0625150b0eb6ae68a214256e3db50d",
"text": "Undergraduate engineering students require a practical application of theoretical concepts learned in classrooms in order to appropriate a complete management of them. Our aim is to assist students to learn control systems theory in an engineering context, through the design and implementation of a simple and low cost ball and plate plant. Students are able to apply mathematical and computational modelling tools, control systems design, and real-time software-hardware implementation while solving a position regulation problem. The whole project development is presented and may be assumed as a guide for replicate results or as a basis for a new design approach. In both cases, we end up in a tool available to implement and assess control strategies experimentally.",
"title": ""
},
{
"docid": "72fec6dc287b0aa9aea97a22268c1125",
"text": "Given a symmetric matrix what is the nearest correlation matrix, that is, the nearest symmetric positive semidefinite matrix with unit diagonal? This problem arises in the finance industry, where the correlations are between stocks. For distance measured in two weighted Frobenius norms we characterize the solution using convex analysis. We show how the modified alternating projections method can be used to compute the solution for the more commonly used of the weighted Frobenius norms. In the finance application the original matrix has many zero or negative eigenvalues; we show that for a certain class of weights the nearest correlation matrix has correspondingly many zero eigenvalues and that this fact can be exploited in the computation.",
"title": ""
}
] | scidocsrr |
c8f59650002f716fa244065bdee10466 | A Sarcasm Extraction Method Based on Patterns of Evaluation Expressions | [
{
"docid": "65b34f78e3b8d54ad75d32cdef487dac",
"text": "Recognizing polarity requires a list of polar words and phrases. For the purpose of building such lexicon automatically, a lot of studies have investigated (semi-) unsupervised method of learning polarity of words and phrases. In this paper, we explore to use structural clues that can extract polar sentences from Japanese HTML documents, and build lexicon from the extracted polar sentences. The key idea is to develop the structural clues so that it achieves extremely high precision at the cost of recall. In order to compensate for the low recall, we used massive collection of HTML documents. Thus, we could prepare enough polar sentence corpus.",
"title": ""
},
{
"docid": "b485b27da4b17469a5c519538f4dcf1b",
"text": "The research described in this work focuses on identifying key components for the task of irony detection. By means of analyzing a set of customer reviews, which are considered as ironic both in social and mass media, we try to find hints about how to deal with this task from a computational point of view. Our objective is to gather a set of discriminating elements to represent irony. In particular, the kind of irony expressed in such reviews. To this end, we built a freely available data set with ironic reviews collected from Amazon. Such reviews were posted on the basis of an online viral effect; i.e. contents whose effect triggers a chain reaction on people. The findings were assessed employing three classifiers. The results show interesting hints regarding the patterns and, especially, regarding the implications for sentiment analysis.",
"title": ""
}
] | [
{
"docid": "fac476744429cacfe1c07ec19ee295eb",
"text": "One effort to protect the network from the threats of hackers, crackers and security experts is to build the Intrusion Detection System (IDS) on the network. The problem arises when new attacks emerge in a relatively fast, so a network administrator must create their own signature and keep updated on new types of attacks that appear. In this paper, it will be made an Intelligence Intrusion Detection System (IIDS) where the Hierarchical Clustering algorithm as an artificial intelligence is used as pattern recognition and implemented on the Snort IDS. Hierarchical clustering applied to the training data to determine the number of desired clusters. Labeling cluster is then performed; there are three labels of cluster, namely Normal, High Risk and Critical. Centroid Linkage Method used for the test data of new attacks. Output system is used to update the Snort rule database. This research is expected to help the Network Administrator to monitor and learn some new types of attacks. From the result, this system is already quite good to recognize certain types of attacks like exploit, buffer overflow, DoS and IP Spoofing. Accuracy performance of this system for the mentioned above type of attacks above is 90%.",
"title": ""
},
{
"docid": "b5215ddc7768f75fe72cdaaad9e3cdb8",
"text": "Visual saliency analysis detects salient regions/objects that attract human attention in natural scenes. It has attracted intensive research in different fields such as computer vision, computer graphics, and multimedia. While many such computational models exist, the focused study of what and how applications can be beneficial is still lacking. In this article, our ultimate goal is thus to provide a comprehensive review of the applications using saliency cues, the so-called attentive systems. We would like to provide a broad vision about saliency applications and what visual saliency can do. We categorize the vast amount of applications into different areas such as computer vision, computer graphics, and multimedia. Intensively covering 200+ publications we survey (1) key application trends, (2) the role of visual saliency, and (3) the usability of saliency into different tasks.",
"title": ""
},
{
"docid": "2833dbe3c3e576a3ba8f175a755b6964",
"text": "The accuracy and granularity of network flow measurement play a critical role in many network management tasks, especially for anomaly detection. Despite its important, traffic monitoring often introduces overhead to the network, thus, operators have to employ sampling and aggregation to avoid overloading the infrastructure. However, such sampled and aggregated information may affect the accuracy of traffic anomaly detection. In this work, we propose a novel method that performs adaptive zooming in the aggregation of flows to be measured. In order to better balance the monitoring overhead and the anomaly detection accuracy, we propose a prediction based algorithm that dynamically change the granularity of measurement along both the spatial and the temporal dimensions. To control the load on each individual switch, we carefully delegate monitoring rules in the network wide. Using real-world data and three simple anomaly detectors, we show that the adaptive based counting can detect anomalies more accurately with less overhead.",
"title": ""
},
{
"docid": "2a76205b80c90ff9a4ca3ccb0434bb03",
"text": "Finding out which e-shops offer a specific product is a central challenge for building integrated product catalogs and comparison shopping portals. Determining whether two offers refer to the same product involves extracting a set of features (product attributes) from the web pages containing the offers and comparing these features using a matching function. The existing gold standards for product matching have two shortcomings: (i) they only contain offers from a small number of e-shops and thus do not properly cover the heterogeneity that is found on the Web. (ii) they only provide a small number of generic product attributes and therefore cannot be used to evaluate whether detailed product attributes have been correctly extracted from textual product descriptions. To overcome these shortcomings, we have created two public gold standards: The WDC Product Feature Extraction Gold Standard consists of over 500 product web pages originating from 32 different websites on which we have annotated all product attributes (338 distinct attributes) which appear in product titles, product descriptions, as well as tables and lists. The WDC Product Matching Gold Standard consists of over 75 000 correspondences between 150 products (mobile phones, TVs, and headphones) in a central catalog and offers for these products on the 32 web sites. To verify that the gold standards are challenging enough, we ran several baseline feature extraction and matching methods, resulting in F-score values in the range 0.39 to 0.67. In addition to the gold standards, we also provide a corpus consisting of 13 million product pages from the same websites which might be useful as background knowledge for training feature extraction and matching methods.",
"title": ""
},
{
"docid": "14724ca410a07d97857bf730624644a5",
"text": "We introduce a highly scalable approach for open-domain question answering with no dependence on any data set for surface form to logical form mapping or any linguistic analytic tool such as POS tagger or named entity recognizer. We define our approach under the Constrained Conditional Models framework which lets us scale up to a full knowledge graph with no limitation on the size. On a standard benchmark, we obtained near 4 percent improvement over the state-of-the-art in open-domain question answering task.",
"title": ""
},
{
"docid": "86f93e5facbcf5ac96ba68a8d91dda63",
"text": "Lawvere theories and monads have been the two main category theoretic formulations of universal algebra, Lawvere theories arising in 1963 and the connection with monads being established a few years later. Monads, although mathematically the less direct and less malleable formulation, rapidly gained precedence. A generation later, the definition of monad began to appear extensively in theoretical computer science in order to model computational effects, without reference to universal algebra. But since then, the relevance of universal algebra to computational effects has been recognised, leading to renewed prominence of the notion of Lawvere theory, now in a computational setting. This development has formed a major part of Gordon Plotkin’s mature work, and we study its history here, in particular asking why Lawvere theories were eclipsed by monads in the 1960’s, and how the renewed interest in them in a computer science setting might develop in future.",
"title": ""
},
{
"docid": "6224f4f3541e9cd340498e92a380ad3f",
"text": "A personal story: From philosophy to software.",
"title": ""
},
{
"docid": "5931169b6433d77496dfc638988399eb",
"text": "Image annotation has been an important task for visual information retrieval. It usually involves a multi-class multi-label classification problem. To solve this problem, many researches have been conducted during last two decades, although most of the proposed methods rely on the training data with the ground truth. To prepare such a ground truth is an expensive and laborious task that cannot be easily scaled, and “semantic gaps” between low-level visual features and high-level semantics still remain. In this paper, we propose a novel approach, ontology based supervised learning for multi-label image annotation, where classifiers' training is conducted using easily gathered Web data. Moreover, it takes advantage of both low-level visual features and high-level semantic information of given images. Experimental results using 0.507 million Web images database show effectiveness of the proposed framework over existing method.",
"title": ""
},
{
"docid": "58eebe0e55f038fea268b6a7a6960939",
"text": "The classic answer to what makes a decision good concerns outcomes. A good decision has high outcome benefits (it is worthwhile) and low outcome costs (it is worth it). I propose that, independent of outcomes or value from worth, people experience a regulatory fit when they use goal pursuit means that fit their regulatory orientation, and this regulatory fit increases the value of what they are doing. The following postulates of this value from fit proposal are examined: (a) People will be more inclined toward goal means that have higher regulatory fit, (b) people's motivation during goal pursuit will be stronger when regulatory fit is higher, (c) people's (prospective) feelings about a choice they might make will be more positive for a desirable choice and more negative for an undesirable choice when regulatory fit is higher, (d) people's (retrospective) evaluations of past decisions or goal pursuits will be more positive when regulatory fit was higher, and (e) people will assign higher value to an object that was chosen with higher regulatory fit. Studies testing each of these postulates support the value-from-fit proposal. How value from fit can enhance or diminish the value of goal pursuits and the quality of life itself is discussed.",
"title": ""
},
{
"docid": "025932fa63b24d65f3b61e07864342b7",
"text": "The realization of the Internet of Things (IoT) paradigm relies on the implementation of systems of cooperative intelligent objects with key interoperability capabilities. One of these interoperability features concerns the cooperation among nodes towards a collaborative deployment of applications taking into account the available resources, such as electrical energy, memory, processing, and object capability to perform a given task, which are",
"title": ""
},
{
"docid": "c075c26fcfad81865c58a284013c0d33",
"text": "A novel pulse compression technique is developed that improves the axial resolution of an ultrasonic imaging system and provides a boost in the echo signal-to-noise ratio (eSNR). The new technique, called the resolution enhancement compression (REC) technique, was validated with simulations and experimental measurements. Image quality was examined in terms of three metrics: the cSNR, the bandwidth, and the axial resolution through the modulation transfer function (MTF). Simulations were conducted with a weakly-focused, single-element ultrasound source with a center frequency of 2.25 MHz. Experimental measurements were carried out with a single-element transducer (f/3) with a center frequency of 2.25 MHz from a planar reflector and wire targets. In simulations, axial resolution of the ultrasonic imaging system was almost doubled using the REC technique (0.29 mm) versus conventional pulsing techniques (0.60 mm). The -3 dB pulse/echo bandwidth was more than doubled from 48% to 97%, and maximum range sidelobes were -40 dB. Experimental measurements revealed an improvement in axial resolution using the REC technique (0.31 mm) versus conventional pulsing (0.44 mm). The -3 dB pulse/echo bandwidth was doubled from 56% to 113%, and maximum range sidelobes were observed at -45 dB. In addition, a significant gain in eSNR (9 to 16.2 dB) was achieved",
"title": ""
},
{
"docid": "405bae0d413aa4b5fef0ac8b8c639235",
"text": "Leukocyte adhesion deficiency (LAD) type III is a rare syndrome characterized by severe recurrent infections, leukocytosis, and increased bleeding tendency. All integrins are normally expressed yet a defect in their activation leads to the observed clinical manifestations. Less than 20 patients have been reported world wide and the primary genetic defect was identified in some of them. Here we describe the clinical features of patients in whom a mutation in the calcium and diacylglycerol-regulated guanine nucleotide exchange factor 1 (CalDAG GEF1) was found and compare them to other cases of LAD III and to animal models harboring a mutation in the CalDAG GEF1 gene. The hallmarks of the syndrome are recurrent infections accompanied by severe bleeding episodes distinguished by osteopetrosis like bone abnormalities and neurodevelopmental defects.",
"title": ""
},
{
"docid": "27b5e0594305a81c6fad15567ba1f3b9",
"text": "A novel approach to the design of series-fed antenna arrays has been presented, in which a modified three-way slot power divider is applied. In the proposed coupler, the power division is adjusted by changing the slot inclination with respect to the transmission line, whereas coupled transmission lines are perpendicular. The proposed modification reduces electrical length of the feeding line to <formula formulatype=\"inline\"><tex Notation=\"TeX\">$1 \\lambda$</tex></formula>, hence results in dissipation losses' reduction. The theoretical analysis and measurement results of the 2<formula formulatype=\"inline\"> <tex Notation=\"TeX\">$\\, \\times \\,$</tex></formula>8 microstrip antenna array operating within 10.5-GHz frequency range are shown in the letter, proving the novel inclined-slot power divider's capability to provide appropriate power distribution and its potential application in the large antenna arrays.",
"title": ""
},
{
"docid": "491ad4b4ab179db2efd54f3149d08db5",
"text": "In robotics, Air Muscle is used as the analogy of the biological motor for locomotion or manipulation. It has advantages like the passive Damping, good power-weight ratio and usage in rough environments. An experimental test set up is designed to test both contraction and volume trapped in Air Muscle. This paper gives the characteristics of Air Muscle in terms of contraction of Air Muscle with variation of pressure at different loads and also in terms of volume of air trapped in it with variation in pressure at different loads. Braid structure of the Muscle has been described and its theoretical and experimental aspects of the characteristics of an Air Muscle are analysed.",
"title": ""
},
{
"docid": "9d9086fbdfa46ded883b14152df7f5a5",
"text": "This paper presents a low power continuous time 2nd order Low Pass Butterworth filter operating at power supply of 0.5V suitably designed for biomedical applications. A 3-dB bandwidth of 100 Hz using technology node of 0.18μm is achieved. The operational transconductance amplifier is a significant building block in continuous time filter design. To achieve necessary voltage headroom a pseudo-differential architecture is used to design bulk driven transconductor. In contrast, to the gate-driven OTA bulk-driven have the ability to operate over a wide input range. The output common mode voltage of the transconductor is set by a Common Mode Feedback (CMFB) circuit. The simulation results show that the filter has a peak-to-peak signal swing of 150mV (differential) for 1% THD, a dynamic range of 74.62 dB and consumes a total power of 0.225μW when operating at a supply voltage of 0.5V. The Figure of Merit (FOM) achieved by the filter is 0.055 fJ, lowest among similar low-voltage filters found in the literature.",
"title": ""
},
{
"docid": "4f222d326bdbf006c3d8e54d2d97ba3f",
"text": "Designing autonomous vehicles for urban environments remains an unresolved problem. One major dilemma faced by autonomous cars is understanding the intention of other road users and communicating with them. To investigate one aspect of this, specifically pedestrian crossing behavior, we have collected a large dataset of pedestrian samples at crosswalks under various conditions (e.g., weather) and in different types of roads. Using the data, we analyzed pedestrian behavior from two different perspectives: the way they communicate with drivers prior to crossing and the factors that influence their behavior. Our study shows that changes in head orientation in the form of looking or glancing at the traffic is a strong indicator of crossing intention. We also found that context in the form of the properties of a crosswalk (e.g., its width), traffic dynamics (e.g., speed of the vehicles) as well as pedestrian demographics can alter pedestrian behavior after the initial intention of crossing has been displayed. Our findings suggest that the contextual elements can be interrelated, meaning that the presence of one factor may increase/decrease the influence of other factors. Overall, our work formulates the problem of pedestrian-driver interaction and sheds light on its complexity in typical traffic scenarios.",
"title": ""
},
{
"docid": "51e6db842735ae89419612bf831fce54",
"text": "In this work, we focus on automatically recognizing social conversational strategies that in human conversation contribute to building, maintaining or sometimes destroying a budding relationship. These conversational strategies include self-disclosure, reference to shared experience, praise and violation of social norms. By including rich contextual features drawn from verbal, visual and vocal modalities of the speaker and interlocutor in the current and previous turn, we can successfully recognize these dialog phenomena with an accuracy of over 80% and kappa ranging from 60-80%. Our findings have been successfully integrated into an end-to-end socially aware dialog system, with implications for virtual agents that can use rapport between user and system to improve task-oriented assistance.",
"title": ""
},
{
"docid": "71c7c98b55b2b2a9c475d4522310cfaa",
"text": "This paper studies an active underground economy which spec ializes in the commoditization of activities such as credit car d fraud, identity theft, spamming, phishing, online credential the ft, and the sale of compromised hosts. Using a seven month trace of logs c ollected from an active underground market operating on publi c Internet chat networks, we measure how the shift from “hacking for fun” to “hacking for profit” has given birth to a societal subs trate mature enough to steal wealth into the millions of dollars in less than one year.",
"title": ""
},
{
"docid": "f7f6f01e2858e03ae9a1313e0bb7b25f",
"text": "This paper addresses the problem of planning under uncertainty in large Markov Decision Processes (MDPs). Factored MDPs represent a complex state space using state variables and the transition model using a dynamic Bayesian network. This representation often allows an exponential reduction in the representation size of structured MDPs, but the complexity of exact solution algorithms for such MDPs can grow exponentially in the representation size. In this paper, we present two approximate solution algorithms that exploit structure in factored MDPs. Both use an approximate value function represented as a linear combination of basis functions, where each basis function involves only a small subset of the domain variables. A key contribution of this paper is that it shows how the basic operations of both algorithms can be performed efficiently in closed form, by exploiting both additive and context-specific structure in a factored MDP. A central element of our algorithms is a novel linear program decomposition technique, analogous to variable elimination in Bayesian networks, which reduces an exponentially large LP to a provably equivalent, polynomial-sized one. One algorithm uses approximate linear programming, and the second approximate dynamic programming. Our dynamic programming algorithm is novel in that it uses an approximation based on max-norm, a technique that more directly minimizes the terms that appear in error bounds for approximate MDP algorithms. We provide experimental results on problems with over 10 states, demonstrating a promising indication of the scalability of our approach, and compare our algorithm to an existing state-of-the-art approach, showing, in some problems, exponential gains in computation time.",
"title": ""
},
{
"docid": "fe11fc1282a7efc34a9efe0e81fb21d6",
"text": "Increased complexity in modern embedded systems has presented various important challenges with regard to side-channel attacks. In particular, it is common to deploy SoC-based target devices with high clock frequencies in security-critical scenarios; understanding how such features align with techniques more often deployed against simpler devices is vital from both destructive (i.e., attack) and constructive (i.e., evaluation and/or countermeasure) perspectives. In this paper, we investigate electromagnetic-based leakage from three different means of executing cryptographic workloads (including the general purpose ARM core, an on-chip co-processor, and the NEON core) on the AM335x SoC. Our conclusion is that addressing challenges of the type above is feasible, and that key recovery attacks can be conducted with modest resources.",
"title": ""
}
] | scidocsrr |
18148f5dc3b0b61ca640477c84dcd70e | Algorithms for Quantum Computers | [
{
"docid": "8eac34d73a2bcb4fa98793499d193067",
"text": "We review here the recent success in quantum annealing, i.e., optimization of the cost or energy functions of complex systems utilizing quantum fluctuations. The concept is introduced in successive steps through the studies of mapping of such computationally hard problems to the classical spin glass problems. The quantum spin glass problems arise with the introduction of quantum fluctuations, and the annealing behavior of the systems as these fluctuations are reduced slowly to zero. This provides a general framework for realizing analog quantum computation.",
"title": ""
}
] | [
{
"docid": "6d825778d5d2cb935aab35c60482a267",
"text": "As the workforce ages rapidly in industrialized countries, a phenomenon known as the graying of the workforce, new challenges arise for firms as they have to juggle this dramatic demographical change (Trend 1) in conjunction with the proliferation of increasingly modern information and communication technologies (ICTs) (Trend 2). Although these two important workplace trends are pervasive, their interdependencies have remained largely unexplored. While Information Systems (IS) research has established the pertinence of age to IS phenomena from an empirical perspective, it has tended to model the concept merely as a control variable with limited understanding of its conceptual nature. In fact, even the few IS studies that used the concept of age as a substantive variable have mostly relied on stereotypical accounts alone to justify their age-related hypotheses. Further, most of these studies have examined the role of age in the same phenomenon (i.e., initial adoption of ICTs), implying a marked lack of diversity with respect to the phenomena under investigation. Overall, IS research has yielded only limited insight into the role of age in phenomena involving ICTs. In this essay, we argue for the importance of studying agerelated impacts more carefully and across various IS phenomena, and we enable such research by providing a research agenda that IS scholars can use. In doing so, we hope that future research will further both our empirical and conceptual understanding of the managerial challenges arising from the interplay of a graying workforce and rapidly evolving ICTs. 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "514afc7846a1d9c3ce60c2ae392b3e43",
"text": "Scientific workflows facilitate automation, reuse, and reproducibility of scientific data management and analysis tasks. Scientific workflows are often modeled as dataflow networks, chaining together processing components (called actors) that query, transform, analyse, and visualize scientific datasets. Semantic annotations relate data and actor schemas with conceptual information from a shared ontology, to support scientific workflow design, discovery, reuse, and validation in the presence of thousands of potentially useful actors and datasets. However, the creation of semantic annotations is complex and time-consuming. We present a calculus and two inference algorithms to automatically propagate semantic annotations through workflow actors described by relational queries. Given an input annotation α and a query q, forward propagation computes an output annotation α′; conversely, backward propagation infers α from q and α′.",
"title": ""
},
{
"docid": "c7f0a749e38b3b7eba871fca80df9464",
"text": "This paper presents QurAna: a large corpus created from the original Quranic text, where personal pronouns are tagged with their antecedence. These antecedents are maintained as an ontological list of concepts, which has proved helpful for information retrieval tasks. QurAna is characterized by: (a) comparatively large number of pronouns tagged with antecedent information (over 24,500 pronouns), and (b) maintenance of an ontological concept list out of these antecedents. We have shown useful applications of this corpus. This corpus is the first of its kind covering Classical Arabic text, and could be used for interesting applications for Modern Standard Arabic as well. This corpus will enable researchers to obtain empirical patterns and rules to build new anaphora resolution approaches. Also, this corpus can be used to train, optimize and evaluate existing approaches.",
"title": ""
},
{
"docid": "8be33fad66b25a9d3a4b05dbfc1aac5d",
"text": "A question-answering system needs to be able to reason about unobserved causes in order to answer questions of the sort that people face in everyday conversations. Recent neural network models that incorporate explicit memory and attention mechanisms have taken steps towards this capability. However, these models have not been tested in scenarios for which reasoning about the unobservable mental states of other agents is necessary to answer a question. We propose a new set of tasks inspired by the well-known false-belief test to examine how a recent question-answering model performs in situations that require reasoning about latent mental states. We find that the model is only successful when the training and test data bear substantial similarity, as it memorizes how to answer specific questions and cannot reason about the causal relationship between actions and latent mental states. We introduce an extension to the model that explicitly simulates the mental representations of different participants in a reasoning task, and show that this capacity increases the model’s performance on our theory of mind test.",
"title": ""
},
{
"docid": "8b57c1f4c865c0a414b2e919d19959ce",
"text": "A microstrip HPF with sharp attenuation by using cross-coupling is proposed in this paper. The HPF consists of parallel plate- and gap type- capacitors and inductor lines. The one block of the HPF has two sections of a constant K filter in the bridge T configuration. Thus the one block HPF is first coarsely designed and the performance is optimized by circuit simulator. With the gap capacitor adjusted the proposed HPF illustrates the sharp attenuation characteristics near the cut-off frequency made by cross-coupling between the inductor lines. In order to improve the stopband performance, the cascaded two block HPF is examined. Its measured results show the good agreement with the simulated ones giving the sharper attenuation slope.",
"title": ""
},
{
"docid": "288f8a2dab0c32f85c313f5a145e47a5",
"text": "Neural networks have a smooth initial inductive bias, such that small changes in input do not lead to large changes in output. However, in reinforcement learning domains with sparse rewards, value functions have non-smooth structure with a characteristic asymmetric discontinuity whenever rewards arrive. We propose a mechanism that learns an interpolation between a direct value estimate and a projected value estimate computed from the encountered reward and the previous estimate. This reduces the need to learn about discontinuities, and thus improves the value function approximation. Furthermore, as the interpolation is learned and state-dependent, our method can deal with heterogeneous observability. We demonstrate that this one change leads to significant improvements on multiple Atari games, when applied to the state-of-the-art A3C algorithm. 1 Motivation The central problem of reinforcement learning is value function approximation: how to accurately estimate the total future reward from a given state. Recent successes have used deep neural networks to approximate the value function, resulting in state-of-the-art performance in a variety of challenging domains [9]. Neural networks are most effective when the desired target function is smooth. However, value functions are, by their very nature, discontinuous functions with sharp variations over time. In this paper we introduce a representation of value that matches the natural temporal structure of value functions. A value function represents the expected sum of future discounted rewards. If non-zero rewards occur infrequently but reliably, then an accurate prediction of the cumulative discounted reward rises as such rewarding moments approach and drops immediately after. This is depicted schematically with the dashed black line in Figure 1. The true value function is quite smooth, except immediately after receiving a reward when there is a sharp drop. This is a pervasive scenario because many domains associate positive or negative reinforcements to salient events (like picking up an object, hitting a wall, or reaching a goal position). The problem is that the agent’s observations tend to be smooth in time, so learning an accurate value estimate near those sharp drops puts strain on the function approximator – especially when employing differentiable function approximators such as neural networks that naturally make smooth maps from observations to outputs. To address this problem, we incorporate the temporal structure of cumulative discounted rewards into the value function itself. The main idea is that, by default, the value function can respect the reward sequence. If no reward is observed, then the next value smoothly matches the previous value, but 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. Figure 1: After the same amount of training, our proposed method (red) produces much more accurate estimates of the true value function (dashed black), compared to the baseline (blue). The main plot shows discounted future returns as a function of the step in a sequence of states; the inset plot shows the RMSE when training on this data, as a function of network updates. See section 4 for details. becomes a little larger due to the discount. If a reward is observed, it should be subtracted out from the previous value: in other words a reward that was expected has now been consumed. 
The natural value approximator (NVA) combines the previous value with the observed rewards and discounts, which makes this sequence of values easy to represent by a smooth function approximator such as a neural network. Natural value approximators may also be helpful in partially observed environments. Consider a situation in which an agent stands on a hill top. The goal is to predict, at each step, how many steps it will take until the agent has crossed a valley to another hill top in the distance. There is fog in the valley, which means that if the agent’s state is a single observation from the valley it will not be able to accurately predict how many steps remain. In contrast, the value estimate from the initial hill top may be much better, because the observation is richer. This case is depicted schematically in Figure 2. Natural value approximators may be effective in these situations, since they represent the current value in terms of previous value estimates. 2 Problem definition We consider the typical scenario studied in reinforcement learning, in which an agent interacts with an environment at discrete time intervals: at each time step t the agent selects an action as a function of the current state, which results in a transition to the next state and a reward. The goal of the agent is to maximize the discounted sum of rewards collected in the long run from a set of initial states [12]. The interaction between the agent and the environment is modelled as a Markov Decision Process (MDP). An MDP is a tuple (S, A, R, γ, P) where S is a state space, A is an action space, R : S × A × S → D(R) is a reward function that defines a distribution over the reals for each combination of state, action, and subsequent state, P : S × A → D(S) defines a distribution over subsequent states for each state and action, and γ_t ∈ [0, 1] is a scalar, possibly time-dependent, discount factor. One common goal is to make accurate predictions under a behaviour policy π : S → D(A) of the value v_π(s) ≡ E[R_1 + γ_1 R_2 + γ_1 γ_2 R_3 + ... | S_0 = s]. (1) The expectation is over the random variables A_t ∼ π(S_t), S_{t+1} ∼ P(S_t, A_t), and R_{t+1} ∼ R(S_t, A_t, S_{t+1}), ∀ t ∈ N. For instance, the agent can repeatedly use these predictions to improve its policy. The values satisfy the recursive Bellman equation [2] v_π(s) = E[R_{t+1} + γ_{t+1} v_π(S_{t+1}) | S_t = s]. We consider the common setting where the MDP is not known, and so the predictions must be learned from samples. The predictions are made by an approximate value function v(s; θ), where θ are parameters that are learned. The approximation of the true value function can be formed by temporal difference (TD) learning [10], where the estimate at time t is updated towards Z^1_t ≡ R_{t+1} + γ_{t+1} v(S_{t+1}; θ) or Z^n_t ≡ Σ_{i=1}^{n} (Π_{k=1}^{i−1} γ_{t+k}) R_{t+i} + (Π_{k=1}^{n} γ_{t+k}) v(S_{t+n}; θ), (2) where Z^n_t is the n-step bootstrap target, and the TD-error is δ^n_t ≡ Z^n_t − v(S_t; θ). 3 Proposed solution: Natural value approximators The conventional approach to value function approximation produces a value estimate from features associated with the current state. In states where the value approximation is poor, it can be better to rely more on a combination of the observed sequence of rewards and older but more reliable value estimates that are projected forward in time. Combining these estimates can potentially be more accurate than using one alone. These ideas lead to an algorithm that produces three estimates of the value at time t. The first estimate, V_t ≡ v(S_t; θ), is a conventional value function estimate at time t. The second estimate, G^p_t ≡ (G^β_{t−1} − R_t) / γ_t if γ_t > 0 and t > 0, (3) is a projected value estimate computed from the previous value estimate, the observed reward, and the observed discount for time t. The third estimate, G^β_t ≡ β_t G^p_t + (1 − β_t) V_t = (1 − β_t) V_t + β_t (G^β_{t−1} − R_t) / γ_t, (4) is a convex combination of the first two estimates (footnote 1) formed by a time-dependent blending coefficient β_t. This coefficient is a learned function of state β(·; θ) : S → [0, 1], over the same parameters θ, and we denote β_t ≡ β(S_t; θ). We call G^β_t the natural value estimate at time t and we call the overall approach natural value approximators (NVA). Ideally, the natural value estimate will become more accurate than either of its constituents from training. The value is learned by minimizing the sum of two losses. The first loss captures the difference between the conventional value estimate V_t and the target Z_t, weighted by how much it is used in the natural value estimate, J_V ≡ E[ [[1 − β_t]] ([[Z_t]] − V_t)^2 ], (5) where we introduce the stop-gradient identity function [[x]] = x that is defined to have a zero gradient everywhere, that is, gradients are not back-propagated through this function. The second loss captures the difference between the natural value estimate and the target, but it provides gradients only through the coefficient β_t, J_β ≡ E[ ([[Z_t]] − (β_t [[G^p_t]] + (1 − β_t) [[V_t]]))^2 ]. (6) These two losses are summed into a joint loss, J = J_V + c_β J_β, (7) where c_β is a scalar trade-off parameter. When conventional stochastic gradient descent is applied to minimize this loss, the parameters of V_t are adapted with the first loss and parameters of β_t are adapted with the second loss. When bootstrapping on future values, the most accurate value estimate is best, so using G^β_t instead of V_t leads to refined prediction targets Z^β_t ≡ R_{t+1} + γ_{t+1} G^β_{t+1} or Z^{β,n}_t ≡ Σ_{i=1}^{n} (Π_{k=1}^{i−1} γ_{t+k}) R_{t+i} + (Π_{k=1}^{n} γ_{t+k}) G^β_{t+n}. (8) 4 Illustrative Examples We now provide some examples of situations where natural value approximations are useful. In both examples, the value function is difficult to estimate well uniformly in all states we might care about, and the accuracy can be improved by using the natural value estimate G^β_t instead of the direct value estimate V_t. (Footnote 1: note the mixed recursion in the definition, G^β depends on G^p, and vice-versa.) Sparse rewards Figure 1 shows an example of value function approximation. To separate concerns, this is a supervised learning setup (regression) with the true value targets provided (dashed black line). Each point 0 ≤ t ≤ 100 on the horizontal axis corresponds to one state S_t in a single sequence. The shape of the target values stems from a handful of reward events, and discounting with γ = 0.9. We mimic observations that smoothly vary across time by 4 equally spaced radial basis functions, so S_t ∈ R^4. The approximators v(s) and β(s) are two small neural networks with one hidden layer of 32 ReLU units each, and a single linear or sigmoid output unit, respectively. The input",
"title": ""
},
{
"docid": "3cb0bddb1ed916cffdff3624e61d49cd",
"text": "Thh paper presents a new method for computing the configuration-space map of obstacles that is used in motion-planning algorithms. The method derives from the observation that, when the robot is a rigid object that can only translate, the configuration space is a convolution of the workspace and the robot. This convolution is computed with the use of the Fast Fourier Transform (FFT) algorithm. The method is particularly promising for workspaces with many andlor complicated obstacles, or when the shape of the robot is not simple. It is an inherently parallel method that can significantly benefit from existing experience and hardware on the FFT.",
"title": ""
},
{
"docid": "6989ae9a7e6be738d0d2e8261251a842",
"text": "A single-feed reconfigurable square-ring patch antenna with pattern diversity is presented. The antenna structure has four shorting walls placed respectively at each edge of the square-ring patch, in which two shorting walls are directly connected to the patch and the others are connected to the patch via pin diodes. By controlling the states of the pin diodes, the antenna can be operated at two different modes: monopolar plat-patch and normal patch modes; moreover, the 10 dB impedance bandwidths of the two modes are overlapped. Consequently, the proposed antenna allows its radiation pattern to be switched electrically between conical and broadside radiations at a fixed frequency. Detailed design considerations of the proposed antenna are described. Experimental and simulated results are also shown and discussed",
"title": ""
},
{
"docid": "a408e25435dded29744cf2af0f7da1e5",
"text": "Using cloud storage to automatically back up content changes when editing documents is an everyday scenario. We demonstrate that current cloud storage services can cause unnecessary bandwidth consumption, especially for office suite documents, in this common scenario. Specifically, even with incremental synchronization approach in place, existing cloud storage services still incur whole-file transmission every time when the document file is synchronized. We analyze the problem causes in depth, and propose EdgeCourier, a system to address the problem. We also propose the concept of edge-hosed personal service (EPS), which has many benefits, such as helping deploy EdgeCourier easily in practice. We have prototyped the EdgeCourier system, deployed it in the form of EPS in a lab environment, and performed extensive experiments for evaluation. Evaluation results suggest that our prototype system can effectively reduce document synchronization bandwidth with negligible overheads.",
"title": ""
},
{
"docid": "a67f7593ea049be1e2785108b6181f7d",
"text": "This paper describes torque characteristics of the interior permanent magnet synchronous motor (IPMSM) using the inexpensive ferrite magnets. IPMSM model used in this study has the spoke and the axial type magnets in the rotor, and torque characteristics are analyzed by the three-dimensional finite element method (3D-FEM). As a result, torque characteristics can be improved by using both the spoke type magnets and the axial type magnets in the rotor.",
"title": ""
},
{
"docid": "241542e915e51ce1505c7d24641e4e0b",
"text": "Over the past decade, research has increased our understanding of the effects of physical activity at opposite ends of the spectrum. Sedentary behaviour—too much sitting—has been shown to increase risk of chronic disease, particularly diabetes and cardiovascular disease. There is now a clear need to reduce prolonged sitting. Secondly, evidence on the potential of high intensity interval training inmanaging the same chronic diseases, as well as reducing indices of cardiometabolic risk in healthy adults, has emerged. This vigorous training typically comprises multiple 3-4 minute bouts of high intensity exercise interspersed with several minutes of low intensity recovery, three times a week. Between these two extremes of the activity spectrum is the mainstream public health recommendation for aerobic exercise, which is similar in many developed countries. The suggested target for older adults (≥65) is the same as for other adults (18-64): 150 minutes a week of moderate intensity activity in bouts of 10 minutes or more. It is often expressed as 30 minutes of brisk walking or equivalent activity five days a week, although 75 minutes of vigorous intensity activity spread across the week, or a combination of moderate and vigorous activity are sometimes suggested. Physical activity to improve strength should also be done at least two days a week. The 150 minute target is widely disseminated to health professionals and the public. However, many people, especially in older age groups, find it hard to achieve this level of activity. We argue that when advising patients on exercise doctors should encourage people to increase their level of activity by small amounts rather than focus on the recommended levels. The 150 minute target, although warranted, may overshadow other less concrete elements of guidelines. These include finding ways to do more lower intensity lifestyle activity. As people get older, activity may become more relevant for sustaining the strength, flexibility, and balance required for independent living in addition to the strong associations with hypertension, coronary heart disease, stroke, diabetes, breast cancer, and colon cancer. Observational data have confirmed associations between increased physical activity and reduction in musculoskeletal conditions such as arthritis, osteoporosis, and sarcopenia, and better cognitive acuity and mental health. Although these links may be modest and some lack evidence of causality, they may provide sufficient incentives for many people to be more active. Research into physical activity",
"title": ""
},
{
"docid": "ca19a74fde1b9e3a0ab76995de8b0f36",
"text": "Sensors on (or attached to) mobile phones can enable attractive sensing applications in different domains, such as environmental monitoring, social networking, healthcare, transportation, etc. We introduce a new concept, sensing as a service (S2aaS), i.e., providing sensing services using mobile phones via a cloud computing system. An S2aaS cloud needs to meet the following requirements: 1) it must be able to support various mobile phone sensing applications on different smartphone platforms; 2) it must be energy-efficient; and 3) it must have effective incentive mechanisms that can be used to attract mobile users to participate in sensing activities. In this vision paper, we identify unique challenges of designing and implementing an S2aaS cloud, review existing systems and methods, present viable solutions, and point out future research directions.",
"title": ""
},
{
"docid": "361e874cccb263b202155ef92e502af3",
"text": "String similarity join is an important operation in data integration and cleansing that finds similar string pairs from two collections of strings. More than ten algorithms have been proposed to address this problem in the recent two decades. However, existing algorithms have not been thoroughly compared under the same experimental framework. For example, some algorithms are tested only on specific datasets. This makes it rather difficult for practitioners to decide which algorithms should be used for various scenarios. To address this problem, in this paper we provide a comprehensive survey on a wide spectrum of existing string similarity join algorithms, classify them into different categories based on their main techniques, and compare them through extensive experiments on a variety of real-world datasets with different characteristics. We also report comprehensive findings obtained from the experiments and provide new insights about the strengths and weaknesses of existing similarity join algorithms which can guide practitioners to select appropriate algorithms for various scenarios.",
"title": ""
},
{
"docid": "88660d823f1c20cf0b75b665c66af696",
"text": "A pectus index can be derived from dividing the transverse diameter of the chest by the anterior-posterior diameter on a simple CT scan. In a preliminary report, all patients who required operative correction for pectus excavatum had a pectus index greater than 3.25 while matched normal controls were all less than 3.25. A simple CT scan may be a useful adjunct in objective evaluation of children and teenagers for surgery of pectus excavatum.",
"title": ""
},
{
"docid": "65bea826c88408b87ce2e2c17944835c",
"text": "The broad spectrum of clinical signs in canine cutaneous epitheliotropic T-cell lymphoma mimics many inflammatory skin diseases and is a diagnostic challenge. A 13-year-old-male castrated golden retriever crossbred dog presented with multifocal flaccid bullae evolving into deep erosions. A shearing force applied to the skin at the periphery of the erosions caused the epidermis to further slide off the dermis suggesting intraepidermal or subepidermal separation. Systemic signs consisted of profound weight loss and marked respiratory distress. Histologically, the superficial and deep dermis were infiltrated by large, CD3-positive neoplastic lymphocytes and mild epitheliotropism involved the deep epidermis, hair follicle walls and epitrichial sweat glands. There was partial loss of the stratum basale. Bullous lesions consisted of large dermoepidermal and intraepidermal clefts that contained loose accumulations of neutrophils mixed with fewer neoplastic cells in proteinaceous fluid. The lifted epidermis was often devitalized and bordered by hydropic degeneration and partial epidermal collapse. Similar neoplastic lymphocytes formed small masses in the lungs associated with broncho-invasion. Clonal rearrangement analysis of antigen receptor genes in samples from skin and lung lesions using primers specific for canine T-cell receptor gamma (TCRgamma) produced a single-sized amplicon of identical sequence, indicating that both lesions resulted from the expansion of the same neoplastic T-cell population. Macroscopic vesiculobullous lesions with devitalization of the lesional epidermis should be included in the broad spectrum of clinical signs presented by canine cutaneous epitheliotropic T-cell lymphoma.",
"title": ""
},
{
"docid": "ead6596d7f368da713f36f572c79bf94",
"text": "The total variation (TV) model is a classical and effective model in image denoising, but the weighted total variation (WTV) model has not attracted much attention. In this paper, we propose a new constrained WTV model for image denoising. A fast denoising dual method for the new constrained WTV model is also proposed. To achieve this task, we combines the well known gradient projection (GP) and the fast gradient projection (FGP) methods on the dual approach for the image denoising problem. Experimental results show that the proposed method outperforms currently known GP andFGP methods, and canbe applicable to both the isotropic and anisotropic WTV functions.",
"title": ""
},
{
"docid": "89aa13fe76bf48c982e44b03acb0dd3d",
"text": "Stock trading strategy plays a crucial role in investment companies. However, it is challenging to obtain optimal strategy in the complex and dynamic stock market. We explore the potential of deep reinforcement learning to optimize stock trading strategy and thus maximize investment return. 30 stocks are selected as our trading stocks and their daily prices are used as the training and trading market environment. We train a deep reinforcement learning agent and obtain an adaptive trading strategy. The agent’s performance is evaluated and compared with Dow Jones Industrial Average and the traditional min-variance portfolio allocation strategy. The proposed deep reinforcement learning approach is shown to outperform the two baselines in terms of both the Sharpe ratio and cumulative returns.",
"title": ""
},
{
"docid": "04fe2706a8da54365e4125867613748b",
"text": "We consider a sequence of multinomial data for which the probabilities associated with the categories are subject to abrupt changes of unknown magnitudes at unknown locations. When the number of categories is comparable to or even larger than the number of subjects allocated to these categories, conventional methods such as the classical Pearson’s chi-squared test and the deviance test may not work well. Motivated by high-dimensional homogeneity tests, we propose a novel change-point detection procedure that allows the number of categories to tend to infinity. The null distribution of our test statistic is asymptotically normal and the test performs well with finite samples. The number of change-points is determined by minimizing a penalized objective function based on segmentation, and the locations of the change-points are estimated by minimizing the objective function with the dynamic programming algorithm. Under some mild conditions, the consistency of the estimators of multiple change-points is established. Simulation studies show that the proposed method performs satisfactorily for identifying change-points in terms of power and estimation accuracy, and it is illustrated with an analysis of a real data set.",
"title": ""
},
{
"docid": "2cebd2fd12160d2a3a541989293f10be",
"text": "A compact Vivaldi antenna array printed on thick substrate and fed by a Substrate Integrated Waveguides (SIW) structure has been developed. The antenna array utilizes a compact SIW binary divider to significantly minimize the feed structure insertion losses. The low-loss SIW binary divider has a common novel Grounded Coplanar Waveguide (GCPW) feed to provide a wideband transition to the SIW and to sustain a good input match while preventing higher order modes excitation. The antenna array was designed, fabricated, and thoroughly investigated. Detailed simulations of the antenna and its feed, in addition to its relevant measurements, will be presented in this paper.",
"title": ""
},
{
"docid": "6f94a57f7ae1a818c3bd5e7f6f2cea0f",
"text": "We propose a novel hybrid metric learning approach to combine multiple heterogenous statistics for robust image set classification. Specifically, we represent each set with multiple statistics – mean, covariance matrix and Gaussian distribution, which generally complement each other for set modeling. However, it is not trivial to fuse them since the mean vector with d-dimension often lies in Euclidean space R, whereas the covariance matrix typically resides on Riemannian manifold Sym+d . Besides, according to information geometry, the space of Gaussian distribution can be embedded into another Riemannian manifold Sym+d+1. To fuse these statistics from heterogeneous spaces, we propose a Hybrid Euclidean-and-Riemannian Metric Learning (HERML) method to exploit both Euclidean and Riemannian metrics for embedding their original spaces into high dimensional Hilbert spaces and then jointly learn hybrid metrics with discriminant constraint. The proposed method is evaluated on two tasks: set-based object categorization and video-based face recognition. Extensive experimental results demonstrate that our method has a clear superiority over the state-of-the-art methods.",
"title": ""
}
] | scidocsrr |
de58318e961209968774fcda1d76bc73 | Forecasting of ozone concentration in smart city using deep learning | [
{
"docid": "961348dd7afbc1802d179256606bdbb8",
"text": "Class imbalance is among the most persistent complications which may confront the traditional supervised learning task in real-world applications. The problem occurs, in the binary case, when the number of instances in one class significantly outnumbers the number of instances in the other class. This situation is a handicap when trying to identify the minority class, as the learning algorithms are not usually adapted to such characteristics. The approaches to deal with the problem of imbalanced datasets fall into two major categories: data sampling and algorithmic modification. Cost-sensitive learning solutions incorporating both the data and algorithm level approaches assume higher misclassification costs with samples in the minority class and seek to minimize high cost errors. Nevertheless, there is not a full exhaustive comparison between those models which can help us to determine the most appropriate one under different scenarios. The main objective of this work is to analyze the performance of data level proposals against algorithm level proposals focusing in cost-sensitive models and versus a hybrid procedure that combines those two approaches. We will show, by means of a statistical comparative analysis, that we cannot highlight an unique approach among the rest. This will lead to a discussion about the data intrinsic characteristics of the imbalanced classification problem which will help to follow new paths that can lead to the improvement of current models mainly focusing on class overlap and dataset shift in imbalanced classification. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
}
] | [
{
"docid": "4e9b1776436950ed25353a8731eda76a",
"text": "This paper presents the design and implementation of VibeBin, a low-cost, non-intrusive and easy-to-install waste bin level detection system. Recent popularity of Internet-of-Things (IoT) sensors has brought us unprecedented opportunities to enable a variety of new services for monitoring and controlling smart buildings. Indoor waste management is crucial to a healthy environment in smart buildings. Measuring the waste bin fill-level helps building operators schedule garbage collection more responsively and optimize the quantity and location of waste bins. Existing systems focus on directly and intrusively measuring the physical quantities of the garbage (weight, height, volume, etc.) or its appearance (image), and therefore require careful installation, laborious calibration or labeling, and can be costly. Our system indirectly measures fill-level by sensing the changes in motor-induced vibration characteristics on the outside surface of waste bins. VibeBin exploits the physical nature of vibration resonance of the waste bin and the garbage within, and learns the vibration features of different fill-levels through a few garbage collection (emptying) cycles in a completely unsupervised manner. VibeBin identifies vibration features of different fill-levels by clustering historical vibration samples based on a custom distance metric which measures the dissimilarity between two samples. We deploy our system on eight waste bins of different types and sizes, and show that under normal usage and real waste, it can deliver accurate level measurements after just 3 garbage collection cycles. The average F-score (harmonic mean of precision and recall) of measuring empty, half, and full levels achieves 0.912. A two-week deployment also shows that the false positive and false negative events are satisfactorily rare.",
"title": ""
},
{
"docid": "91a56dbdefc08d28ff74883ec10a5d6e",
"text": "A truly autonomous guided vehicle (AGV) must sense its surrounding environment and react accordingly. In order to maneuver an AGV autonomously, it has to overcome navigational and collision avoidance problems. Previous AGV control systems have relied on hand-coded algorithms for processing sensor information. An intelligent distributed fuzzy logic control system (IDFLCS) has been implemented in a mecanum wheeled AGV system in order to achieve improved reliability and to reduce complexity of the development of control systems. Fuzzy logic controllers have been used to achieve robust control of mechatronic systems by fusing multiple signals from noisy sensors, integrating the representation of human knowledge and implementing behaviour-based control using if-then rules. This paper presents an intelligent distributed controller that implements fuzzy logic on an AGV that uses four independently driven mecanum wheels, incorporating laser, inertial and ultrasound sensors. Distributed control system, fuzzy control strategy, navigation and motion control of such an AGV are presented.",
"title": ""
},
{
"docid": "1c94dec13517bedf7a8140e207e0a6d9",
"text": "Art and anatomy were particularly closely intertwined during the Renaissance period and numerous painters and sculptors expressed themselves in both fields. Among them was Michelangelo Buonarroti (1475-1564), who is renowned for having produced some of the most famous of all works of art, the frescoes on the ceiling and on the wall behind the altar of the Sistine Chapel in Rome. Recently, a unique association was discovered between one of Michelangelo's most celebrated works (The Creation of Adam fresco) and the Divine Proportion/Golden Ratio (GR) (1.6). The GR can be found not only in natural phenomena but also in a variety of human-made objects and works of art. Here, using Image-Pro Plus 6.0 software, we present mathematical evidence that Michelangelo also used the GR when he painted Saint Bartholomew in the fresco of The Last Judgment, which is on the wall behind the altar. This discovery will add a new dimension to understanding the great works of Michelangelo Buonarroti.",
"title": ""
},
{
"docid": "a1f93bedbddefb63cd7ab7d030b4f3ee",
"text": "This paper presents a novel fitness and preventive health care system with a flexible and easy to deploy platform. By using embedded wearable sensors in combination with a smartphone as an aggregator, both daily activities as well as specific gym exercises and their counts are recognized and logged. The detection is achieved with minimal impact on the system’s resources through the use of customized 3D inertial sensors embedded in fitness accessories with built-in pre-processing of the initial 100Hz data. It provides a flexible re-training of the classifiers on the phone which allows deploying the system swiftly. A set of evaluations shows a classification performance that is comparable to that of state of the art activity recognition, and that the whole setup is suitable for daily usage with minimal impact on the phone’s resources.",
"title": ""
},
{
"docid": "ddb66de70b76427f30fae713f176bc64",
"text": "Identifying whether an utterance is a statement, question, greeting, and so forth is integral to effective automatic understanding of natural dialog. Little is known, however, about how such dialog acts (DAs) can be automatically classified in truly natural conversation. This study asks whether current approaches, which use mainly word information, could be improved by adding prosodic information. The study is based on more than 1000 conversations from the Switchboard corpus. DAs were hand-annotated, and prosodic features (duration, pause, F0, energy, and speaking rate) were automatically extracted for each DA. In training, decision trees based on these features were inferred; trees were then applied to unseen test data to evaluate performance. Performance was evaluated for prosody models alone, and after combining the prosody models with word information--either from true words or from the output of an automatic speech recognizer. For an overall classification task, as well as three subtasks, prosody made significant contributions to classification. Feature-specific analyses further revealed that although canonical features (such as F0 for questions) were important, less obvious features could compensate if canonical features were removed. Finally, in each task, integrating the prosodic model with a DA-specific statistical language model improved performance over that of the language model alone, especially for the case of recognized words. Results suggest that DAs are redundantly marked in natural conversation, and that a variety of automatically extractable prosodic features could aid dialog processing in speech applications.",
"title": ""
},
{
"docid": "a774567d957ed0ea209b470b8eced563",
"text": "The vulnerability of the nervous system to advancing age is all too often manifest in neurodegenerative disorders such as Alzheimer's and Parkinson's diseases. In this review article we describe evidence suggesting that two dietary interventions, caloric restriction (CR) and intermittent fasting (IF), can prolong the health-span of the nervous system by impinging upon fundamental metabolic and cellular signaling pathways that regulate life-span. CR and IF affect energy and oxygen radical metabolism, and cellular stress response systems, in ways that protect neurons against genetic and environmental factors to which they would otherwise succumb during aging. There are multiple interactive pathways and molecular mechanisms by which CR and IF benefit neurons including those involving insulin-like signaling, FoxO transcription factors, sirtuins and peroxisome proliferator-activated receptors. These pathways stimulate the production of protein chaperones, neurotrophic factors and antioxidant enzymes, all of which help cells cope with stress and resist disease. A better understanding of the impact of CR and IF on the aging nervous system will likely lead to novel approaches for preventing and treating neurodegenerative disorders.",
"title": ""
},
{
"docid": "d8253659de704969cd9c30b3ea7543c5",
"text": "Frequent itemset mining is an important step of association rules mining. Traditional frequent itemset mining algorithms have certain limitations. For example Apriori algorithm has to scan the input data repeatedly, which leads to high I/O load and low performance, and the FP-Growth algorithm is limited by the capacity of computer's inner stores because it needs to build a FP-tree and mine frequent itemset on the basis of the FP-tree in memory. With the coming of the Big Data era, these limitations are becoming more prominent when confronted with mining large-scale data. In this paper, DPBM, a distributed matrix-based pruning algorithm based on Spark, is proposed to deal with frequent itemset mining. DPBM can greatly reduce the amount of candidate itemset by introducing a novel pruning technique for matrix-based frequent itemset mining algorithm, an improved Apriori algorithm which only needs to scan the input data once. In addition, each computer node reduces greatly the memory usage by implementing DPBM under a latest distributed environment-Spark, which is a lightning-fast distributed computing. The experimental results show that DPBM have better performance than MapReduce-based algorithms for frequent itemset mining in terms of speed and scalability.",
"title": ""
},
{
"docid": "d8c64128c89f3a291b410eefbf00dab2",
"text": "We review the prospects of using yeasts and microalgae as sources of cheap oils that could be used for biodiesel. We conclude that yeast oils, the cheapest of the oils producible by heterotrophic microorganisms, are too expensive to be viable alternatives to the major commodity plant oils. Algal oils are similarly unlikely to be economic; the cheapest form of cultivation is in open ponds which then requires a robust, fast-growing alga that can withstand adventitious predatory protozoa or contaminating bacteria and, at the same time, attain an oil content of at least 40% of the biomass. No such alga has yet been identified. However, we note that if the prices of the major plant oils and crude oil continue to rise in the future, as they have done over the past 12 months, then algal lipids might just become a realistic alternative within the next 10 to 15 years. Better prospects would, however, be to focus on algae as sources of polyunsaturated fatty acids.",
"title": ""
},
{
"docid": "227d8ad4000e6e1d9fd1aa6bff8ed64c",
"text": "Recently, speed sensorless control of Induction Motor (IM) drives received great attention to avoid the different problems associated with direct speed sensors. Among different rotor speed estimation techniques, Model Reference Adaptive System (MRAS) schemes are the most common strategies employed due to their relative simplicity and low computational effort. In this paper a novel adaptation mechanism is proposed which replaces normally used conventional Proportional-Integral (PI) controller in MRAS adaptation mechanism by a Fractional Order PI (FOPI) controller. The performance of two adaptation mechanism controllers has been verified through simulation results using MATLAB/SIMULINK software. It is seen that the performance of the induction motor has improved when FOPI controller is used in place of classical PI controller.",
"title": ""
},
{
"docid": "4a4a868d64a653fac864b5a7a531f404",
"text": "Metropolitan areas have come under intense pressure to respond to federal mandates to link planning of land use, transportation, and environmental quality; and from citizen concerns about managing the side effects of growth such as sprawl, congestion, housing affordability, and loss of open space. The planning models used by Metropolitan Planning Organizations (MPOs) were generally not designed to address these questions, creating a gap in the ability of planners to systematically assess these issues. UrbanSim is a new model system that has been developed to respond to these emerging requirements, and has now been applied in three metropolitan areas. This paper describes the model system and its application to Eugene-Springfield, Oregon.",
"title": ""
},
{
"docid": "2d78a4c914c844a3f28e8f3b9f65339f",
"text": "The availability of abundant data posts a challenge to integrate static customer data and longitudinal behavioral data to improve performance in customer churn prediction. Usually, longitudinal behavioral data are transformed into static data before being included in a prediction model. In this study, a framework with ensemble techniques is presented for customer churn prediction directly using longitudinal behavioral data. A novel approach called the hierarchical multiple kernel support vector machine (H-MK-SVM) is formulated. A three phase training algorithm for the H-MK-SVM is developed, implemented and tested. The H-MK-SVM constructs a classification function by estimating the coefficients of both static and longitudinal behavioral variables in the training process without transformation of the longitudinal behavioral data. The training process of the H-MK-SVM is also a feature selection and time subsequence selection process because the sparse non-zero coefficients correspond to the variables selected. Computational experiments using three real-world databases were conducted. Computational results using multiple criteria measuring performance show that the H-MK-SVM directly using longitudinal behavioral data performs better than currently available classifiers.",
"title": ""
},
{
"docid": "ce9345c367db70de1dec07cad0343f71",
"text": "Techniques for digital image tampering are becoming widespread for the availability of low cost technology in which the image could be easily manipulated. Copy-move forgery is one of the tampering techniques that are frequently used and has recently received significant attention. But the existing methods, including block-matching and key point matching based methods, are not able to be used to solve the problem of detecting image forgery in both flat region and non-flat region. In this paper, combining the thinking of these two types of methods, we develop a SURF-based method to tackle this problem. In addition to the determination of forgeries in non-flat region through key point features, our method can be used to detect flat region in images in an effective way, and extract FMT features after blocking the region. By using matching algorithms of similar blocked images, image forgeries in flat region can be determined, which results in the completing of the entire image tamper detection. Experimental results are presented to demonstrate the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "ffe6edef11daef1db0c4aac77bed7a23",
"text": "MPI is a well-established technology that is used widely in high-performance computing environment. However, setting up an MPI cluster can be challenging and time-consuming. This paper tackles this challenge by using modern containerization technology, which is Docker, and container orchestration technology, which is Docker Swarm mode, to automate the MPI cluster setup and deployment. We created a ready-to-use solution for developing and deploying MPI programs in a cluster of Docker containers running on multiple machines, orchestrated with Docker Swarm mode, to perform high computation tasks. We explain the considerations when creating Docker image that will be instantiated as MPI nodes, and we describe the steps needed to set up a fully connected MPI cluster as Docker containers running in a Docker Swarm mode. Our goal is to give the rationale behind our solution so that others can adapt to different system requirements. All pre-built Docker images, source code, documentation, and screencasts are publicly available.",
"title": ""
},
{
"docid": "02ad9bef7d38af14c01ceb6efec8078b",
"text": "Weakness of the will may lead to ineffective goal striving in the sense that people lacking willpower fail to get started, to stay on track, to select instrumental means, and to act efficiently. However, using a simple self-regulation strategy (i.e., forming implementation intentions or making if–then plans) can get around this problem by drastically improving goal striving on the spot. After an overview of research investigating how implementation intentions work, I will discuss how people can use implementation intentions to overcome potential hindrances to successful goal attainment. Extensive empirical research shows that implementation intentions help people to meet their goals no matter whether these hindrances originate from within (e.g., lack of cognitive capabilities) or outside the person (i.e., difficult social situations). Moreover, I will report recent research demonstrating that implementation intentions can even be used to control impulsive cognitive, affective, and behavioral responses that interfere with one’s focal goal striving. In ending, I will present various new lines of implementation intention research, and raise a host of open questions that still deserve further empirical and theoretical analysis.",
"title": ""
},
{
"docid": "aa70864ca9d2285eebe5b46f7c283ebe",
"text": "The centerpiece of this thesis is a new processing paradigm for exploiting instruction level parallelism. This paradigm, called the multiscalar paradigm, splits the program into many smaller tasks, and exploits fine-grain parallelism by executing multiple, possibly (control and/or data) dependent tasks in parallel using multiple processing elements. Splitting the instruction stream at statically determined boundaries allows the compiler to pass substantial information about the tasks to the hardware. The processing paradigm can be viewed as extensions of the superscalar and multiprocessing paradigms, and shares a number of properties of the sequential processing model and the dataflow processing model. The multiscalar paradigm is easily realizable, and we describe an implementation of the multiscalar paradigm, called the multiscalar processor. The central idea here is to connect multiple sequential processors, in a decoupled and decentralized manner, to achieve overall multiple issue. The multiscalar processor supports speculative execution, allows arbitrary dynamic code motion (facilitated by an efficient hardware memory disambiguation mechanism), exploits communication localities, and does all of these with hardware that is fairly straightforward to build. Other desirable aspects of the implementation include decentralization of the critical resources, absence of wide associative searches, and absence of wide interconnection/data paths.",
"title": ""
},
{
"docid": "000652922defcc1d500a604d43c8f77b",
"text": "The problem of object recognition has not yet been solved in its general form. The most successful approach to it so far relies on object models obtained by training a statistical method on visual features obtained from camera images. The images must necessarily come from huge visual datasets, in order to circumvent all problems related to changing illumination, point of view, etc. We hereby propose to also consider, in an object model, a simple model of how a human being would grasp that object (its affordance). This knowledge is represented as a function mapping visual features of an object to the kinematic features of a hand while grasping it. The function is practically enforced via regression on a human grasping database. After describing the database (which is publicly available) and the proposed method, we experimentally evaluate it, showing that a standard object classifier working on both sets of features (visual and motor) has a significantly better recognition rate than that of a visual-only classifier.",
"title": ""
},
{
"docid": "6162ad3612b885add014bd09baa5f07a",
"text": "The Neural Bag-of-Words (NBOW) model performs classification with an average of the input word vectors and achieves an impressive performance. While the NBOW model learns word vectors targeted for the classification task it does not explicitly model which words are important for given task. In this paper we propose an improved NBOW model with this ability to learn task specific word importance weights. The word importance weights are learned by introducing a new weighted sum composition of the word vectors. With experiments on standard topic and sentiment classification tasks, we show that (a) our proposed model learns meaningful word importance for a given task (b) our model gives best accuracies among the BOW approaches. We also show that the learned word importance weights are comparable to tf-idf based word weights when used as features in a BOW SVM classifier.",
"title": ""
},
{
"docid": "29d1502c7edea13ce67aa1e283dc8488",
"text": "An explosive growth in the volume, velocity, and variety of the data available on the Internet has been witnessed recently. The data originated frommultiple types of sources including mobile devices, sensors, individual archives, social networks, Internet of Things, enterprises, cameras, software logs, health data has led to one of the most challenging research issues of the big data era. In this paper, Knowle—an online news management system upon semantic link network model is introduced. Knowle is a news event centrality data management system. The core elements of Knowle are news events on the Web, which are linked by their semantic relations. Knowle is a hierarchical data system, which has three different layers including the bottom layer (concepts), the middle layer (resources), and the top layer (events). The basic blocks of the Knowle system—news collection, resources representation, semantic relations mining, semantic linking news events are given. Knowle does not require data providers to follow semantic standards such as RDF or OWL, which is a semantics-rich self-organized network. It reflects various semantic relations of concepts, news, and events. Moreover, in the case study, Knowle is used for organizing andmining health news, which shows the potential on forming the basis of designing and developing big data analytics based innovation framework in the health domain. © 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "b16407fc67058110b334b047bcfea9ac",
"text": "In Educational Psychology (1997/1926), Vygotsky pleaded for a realistic approach to children’s literature. He is, among other things, critical of Chukovsky’s story “Crocodile” and maintains that this story deals with nonsense and gibberish, without social relevance. This approach Vygotsky would leave soon, and, in Psychology of Art (1971/1925), in which he develops his theory of art, he talks about connections between nursery rhymes and children’s play, exactly as the story of Chukovsky had done with the following argument: By dragging a child into a topsy-turvy world, we help his intellect work and his perception of reality. In his book Imagination and Creativity in Childhood (1995/1930), Vygotsky goes further and develops his theory of creativity. The book describes how Vygotsky regards the creative process of the human consciousness, the link between emotion and thought, and the role of the imagination. To Vygotsky, this brings to the fore the issue of the link between reality and imagination, and he discusses the issue of reproduction and creativity, both of which relate to the entire scope of human activity. Interpretations of Vygotsky in the 1990s have stressed the role of literature and the development of a cultural approach to psychology and education. It has been overlooked that Vygotsky started his career with work on the psychology of art. In this article, I want to describe Vygotsky’s theory of creativity and how he developed it. He started with a realistic approach to imagination, and he ended with a dialectical attitude to imagination. Criticism of Chukovsky’s “Crocodile” In 1928, the “Crocodile” story was forbidden. It was written by Korney Chukovsky (1882–1969). In his book From Two to Five Years, there is a chapter with the title “Struggle for the Fairy-Tale,” in which he attacks his antagonists, the pedologists, whom he described as a miserable group of theoreticans who studied children’s reading and maintained that the children of the proletarians needed neither “fairy-tales nor toys, or songs” (Chukovsky, 1975, p. 129). He describes how the pedologists let the word imagination become an abuse and how several stories were forbidden, for example, “Crocodile.” One of the slogans of the antagonists of fantasy literature was chukovskies, a term meaning of anthropomorphism and being bourgeois. In 1928, Krupskaja criticized Chukovky, the same year as Stalin was in power. Krupskaja maintained that the content of children’s literature ought to be concrete and realistic to inspire the children to be conscious communists. As an atheist, she was against everything that smelled of mysticism and religion. She pointed out, in an article in Pravda, that “Crocodile” did not live up to the demands that one could make on children’s literature. Many authors, however, came to Chukovsky’s defense, among them A. Tolstoy (Chukovsky, 1975). Ten years earlier in 1918, only a few months after the October Revolution, the first demands were made that children’s literature should be put in the service of communist ideology. It was necessary to replace old bourgeois books, and new writers were needed. In the first attempts to create a new children’s literature, a significant role was played by Maksim Gorky. His ideal was realistic literature with such moral ideals as heroism and optimism. Creativity Research Journal Copyright 2003 by 2003, Vol. 15, Nos. 2 & 3, 245–251 Lawrence Erlbaum Associates, Inc. 
"title": ""
},
{
"docid": "c684de3eb8a370e3444aee3a37319b46",
"text": "We present an extended version of our work on the design and implementation of a reference model of the human body, the Master Motor Map (MMM) which should serve as a unifying framework for capturing human motions, their representation in standard data structures and formats as well as their reproduction on humanoid robots. The MMM combines the definition of a comprehensive kinematics and dynamics model of the human body with 104 DoF including hands and feet with procedures and tools for unified capturing of human motions. We present online motion converters for the mapping of human and object motions to the MMM model while taking into account subject specific anthropométrie data as well as for the mapping of MMM motion to a target robot kinematics. Experimental evaluation of the approach performed on VICON motion recordings demonstrate the benefits of the MMM as an important step towards standardized human motion representation and mapping to humanoid robots.",
"title": ""
}
] | scidocsrr |
0edabeebbf0365b18eeacd6d81e02853 | A Stress Sensor Based on Galvanic Skin Response (GSR) Controlled by ZigBee | [
{
"docid": "1d51506f851a8b125edd7edcd8c6bd1b",
"text": "A stress-detection system is proposed based on physiological signals. Concretely, galvanic skin response (GSR) and heart rate (HR) are proposed to provide information on the state of mind of an individual, due to their nonintrusiveness and noninvasiveness. Furthermore, specific psychological experiments were designed to induce properly stress on individuals in order to acquire a database for training, validating, and testing the proposed system. Such system is based on fuzzy logic, and it described the behavior of an individual under stressing stimuli in terms of HR and GSR. The stress-detection accuracy obtained is 99.5% by acquiring HR and GSR during a period of 10 s, and what is more, rates over 90% of success are achieved by decreasing that acquisition period to 3-5 s. Finally, this paper comes up with a proposal that an accurate stress detection only requires two physiological signals, namely, HR and GSR, and the fact that the proposed stress-detection system is suitable for real-time applications.",
"title": ""
},
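The abstract above describes a fuzzy-logic stress detector driven by heart rate (HR) and galvanic skin response (GSR) over a short acquisition window. A minimal Python sketch of that idea follows; the triangular membership functions, the two rules, and the 0.5 decision threshold are illustrative assumptions, not the values or rule base used by the authors.

```python
# Minimal fuzzy-style stress score from HR and GSR (illustrative only).
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def stress_score(hr_bpm, gsr_norm):
    # Fuzzify inputs: degree to which HR and GSR look "elevated" vs "calm".
    hr_high  = tri(hr_bpm, 70, 110, 150)
    hr_calm  = tri(hr_bpm, 40, 65, 90)
    gsr_high = tri(gsr_norm, 0.4, 1.0, 1.6)
    gsr_calm = tri(gsr_norm, -0.2, 0.0, 0.5)
    # Two toy rules: (HR high AND GSR high) -> stressed, (HR calm AND GSR calm) -> relaxed.
    stressed = min(hr_high, gsr_high)
    relaxed  = min(hr_calm, gsr_calm)
    if stressed + relaxed == 0:
        return 0.5          # no rule fires: undecided
    # Defuzzify as a weighted vote between the two rule outputs (1 = stressed, 0 = relaxed).
    return stressed / (stressed + relaxed)

if __name__ == "__main__":
    # Mean HR (bpm) and z-scored GSR over an assumed 10-second acquisition window.
    for hr, gsr in [(62, 0.05), (95, 0.8), (120, 1.3)]:
        s = stress_score(hr, gsr)
        print(f"HR={hr:5.1f} GSR={gsr:4.2f} -> stress score {s:.2f}",
              "(stressed)" if s > 0.5 else "(calm)")
```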
{
"docid": "963eb2a6225a1f320489a504f8010e94",
"text": "A method for recognizing the emotion states of subjects based on 30 features extracted from their Galvanic Skin Response (GSR) signals was proposed. GSR signals were acquired by means of experiments attended by those subjects. Next the data was normalized with the calm signal of the same subject after being de-noised. Then the normalized data were extracted features before the step of feature selection. Immune Hybrid Particle Swarm Optimization (IH-PSO) was proposed to select the feature subsets of different emotions. Classifier for feature selection was evaluated on the correct recognition as well as number of the selected features. At last, this paper verified the effectiveness of the feature subsets selected with another new data. All performed in this paper illustrate that IH-PSO can achieve much effective results, and further more, demonstrate that there is significant emotion information in GSR signal.",
"title": ""
}
] | [
{
"docid": "3b07476ebb8b1d22949ec32fc42d2d05",
"text": "We provide a systematic review of the adaptive comanagement (ACM) literature to (i) investigate how the concept of governance is considered and (ii) examine what insights ACM offers with reference to six key concerns in environmental governance literature: accountability and legitimacy; actors and roles; fit, interplay, and scale; adaptiveness, flexibility, and learning; evaluation and monitoring; and, knowledge. Findings from the systematic review uncover a complicated relationship with evidence of conceptual closeness as well as relational ambiguities. The findings also reveal several specific contributions from the ACM literature to each of the six key environmental governance concerns, including applied strategies for sharing power and responsibility and value of systems approaches in understanding problems of fit. More broadly, the research suggests a dissolving or fuzzy boundary between ACM and governance, with implications for understanding emerging approaches to navigate social-ecological system change. Future research opportunities may be found at the confluence of ACM and environmental governance scholarship, such as identifying ways to build adaptive capacity and encouraging the development of more flexible governance arrangements.",
"title": ""
},
{
"docid": "dbfb89ae6abef4d3dd9fa7591f0c57b1",
"text": "While everyday document search is done by keyword-based queries to search engines, we have situations that need deep search of documents such as scrutinies of patents, legal documents, and so on. In such cases, using document queries, instead of keyword-based queries, can be more helpful because it exploits more information from the query document. This paper studies a scheme of document search based on document queries. In particular, it uses centrality vectors, instead of tf-idf vectors, to represent query documents, combined with the Word2vec method to capture the semantic similarity in contained words. This scheme improves the performance of document search and provides a way to find documents not only lexically, but semantically close to a query document.",
"title": ""
},
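The abstract above ranks documents against a whole-document query by combining document-level vectors with Word2vec embeddings. A minimal sketch of the general idea follows, with each document represented as the plain average of its word embeddings; the paper instead weights words by a centrality measure, so the uniform averaging, the 100-dimensional random stand-in vectors, and the toy corpus here are assumptions made for illustration.

```python
# Document-as-query retrieval with embedding centroids (simplified sketch).
import numpy as np

def doc_vector(tokens, word_vecs, dim=100):
    vecs = [word_vecs[t] for t in tokens if t in word_vecs]
    if not vecs:
        return np.zeros(dim)
    return np.mean(vecs, axis=0)

def rank_by_similarity(query_tokens, corpus, word_vecs, dim=100):
    q = doc_vector(query_tokens, word_vecs, dim)
    scores = []
    for doc_id, tokens in corpus.items():
        d = doc_vector(tokens, word_vecs, dim)
        denom = np.linalg.norm(q) * np.linalg.norm(d)
        scores.append((doc_id, float(q @ d / denom) if denom else 0.0))
    return sorted(scores, key=lambda s: s[1], reverse=True)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    vocab = ["court", "judge", "patent", "claim", "football", "goal"]
    word_vecs = {w: rng.normal(size=100) for w in vocab}   # stand-in for trained word2vec
    corpus = {
        "legal_doc":  ["judge", "court", "claim"],
        "sports_doc": ["football", "goal"],
    }
    print(rank_by_similarity(["patent", "claim", "court"], corpus, word_vecs))
```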
{
"docid": "01ea2d3c28382459aafa064e70e582d3",
"text": "* In recent decades, an intriguing view of human cognition has garnered increasing support. According to this view, which I will call 'the hypothesis of extended cognition' ('HEC', hereafter), human cognitive processing literally extends into the environment surrounding the organism, and human cognitive states literally comprise—as wholes do their proper parts— elements in that environment; in consequence, while the skin and scalp may encase the human organism, they do not delimit the thinking subject. 1 The hypothesis of extended cognition should provoke our critical interest. Acceptance of HEC would alter our approach to research and theorizing in cognitive science and, it would seem, significantly change our conception of persons. Thus, if HEC faces substantive difficulties, these should be brought to light; this paper is meant to do just that, exposing some of the problems HEC must overcome if it is to stand among leading views of the nature of human cognition. The essay unfolds as follows: The first section consists of preliminary remarks, mostly about the scope and content of HEC as I will construe it. Sections II and III clarify HEC by situating it with respect to related theses one finds in the literature—the hypothesis of embedded cognition Association. I would like to express my appreciation to members of all three audiences for their useful feedback (especially William Lycan at the Mountain-Plains and David Chalmers at the APA), as well as to my conference commentators, Robert Welshon and Tadeusz Zawidzki. I also benefited from discussing extended cognition with 2 and content-externalism. The remaining sections develop a series of objections to HEC and the arguments that have been offered in its support. The first objection appeals to common sense: HEC implies highly counterintuitive attributions of belief. Of course, HEC-theorists can take, and have taken, a naturalistic stand. They claim that HEC need not be responsive to commonsense objections, for HEC is being offered as a theoretical postulate of cognitive science; whether we should accept HEC depends, they say, on the value of the empirical work premised upon it. Thus, I consider a series of arguments meant to show that HEC is a promising causal-explanatory hypothesis, concluding that these arguments fail and that, ultimately, HEC appears to be of marginal interest as part of a philosophical foundation for cognitive science. If the cases canvassed here are any indication, adopting HEC results in a significant loss of explanatory power or, at the …",
"title": ""
},
{
"docid": "a4b57037235e306034211e07e8500399",
"text": "As wireless devices boom and bandwidth-hungry applications (e.g., video and cloud uploading) get popular, today's wireless local area networks (WLANs) become not only crowded but also stressed at throughput. Multiuser multiple-input-multiple-output (MU-MIMO), an advanced form of MIMO, has gained attention due to its huge potential in improving the performance of WLANs. This paper surveys random access-based medium access control (MAC) protocols for MU-MIMO-enabled WLANs. It first provides background information about the evolution and the fundamental MAC schemes of IEEE 802.11 Standards and Amendments, and then identifies the key requirements of designing MU-MIMO MAC protocols for WLANs. After this, the most representative MU-MIMO MAC proposals in the literature are overviewed by benchmarking their MAC procedures and examining the key components, such as the channel state information acquisition, decoding/precoding, and scheduling schemes. Classifications and discussions on important findings of the surveyed MAC protocols are provided, based on which, the research challenges for designing effective MU-MIMO MAC protocols, as well as the envisaged MAC's role in the future heterogeneous networks, are highlighted.",
"title": ""
},
{
"docid": "16db60e96604f65f8b6f4f70e79b8ae5",
"text": "Yahoo! Answers is currently one of the most popular question answering systems. We claim however that its user experience could be significantly improved if it could route the \"right question\" to the \"right user.\" Indeed, while some users would rush answering a question such as \"what should I wear at the prom?,\" others would be upset simply being exposed to it. We argue here that Community Question Answering sites in general and Yahoo! Answers in particular, need a mechanism that would expose users to questions they can relate to and possibly answer.\n We propose here to address this need via a multi-channel recommender system technology for associating questions with potential answerers on Yahoo! Answers. One novel aspect of our approach is exploiting a wide variety of content and social signals users regularly provide to the system and organizing them into channels. Content signals relate mostly to the text and categories of questions and associated answers, while social signals capture the various user interactions with questions, such as asking, answering, voting, etc. We fuse and generalize known recommendation approaches within a single symmetric framework, which incorporates and properly balances multiple types of signals according to channels. Tested on a large scale dataset, our model exhibits good performance, clearly outperforming standard baselines.",
"title": ""
},
{
"docid": "6a9e30fd08b568ef6607158cab4f82b2",
"text": "Expertise with unfamiliar objects (‘greebles’) recruits face-selective areas in the fusiform gyrus (FFA) and occipital lobe (OFA). Here we extend this finding to other homogeneous categories. Bird and car experts were tested with functional magnetic resonance imaging during tasks with faces, familiar objects, cars and birds. Homogeneous categories activated the FFA more than familiar objects. Moreover, the right FFA and OFA showed significant expertise effects. An independent behavioral test of expertise predicted relative activation in the right FFA for birds versus cars within each group. The results suggest that level of categorization and expertise, rather than superficial properties of objects, determine the specialization of the FFA.",
"title": ""
},
{
"docid": "1cde5c2c4e4fe5d791242da86d4dd06d",
"text": "Recent years have seen an increasing interest in micro aerial vehicles (MAVs) and flapping flight in connection to that. The Delft University of Technology has developed a flapping wing MAV, “DelFly II”, which relies on a flapping bi-plane wing configuration for thrust and lift. The ultimate aim of the present research is to improve the flight performance of the DelFly II from both an aerodynamic and constructional perspective. This is pursued by a parametric wing geometry study in combination with a detailed aerodynamic and aeroelastic investigation. In the geometry study an improved wing geometry was found, where stiffeners are placed more outboard for a more rigid in-flight wing shape. The improved wing shows a 10% increase in the thrust-to-power ratio. Investigations into the swirling strength around the DelFly wing in hovering flight show a leading edge vortex (LEV) during the inand out-stroke. The LEV appears to be less stable than in insect flight, since some shedding of LEV is present. Nomenclature Symbol Description Unit f Wing flapping frequency Hz P Power W R DelFly wing length (semi-span) mm T Thrust N λci Positive imaginary part of eigenvalue τ Dimensionless time Abbreviations LEV Leading Edge Vortex MAV Micro Aerial Vehicle UAV Unmanned Aerial Vehicle",
"title": ""
},
{
"docid": "358adb9e7fb3507d8cfe8af85e028686",
"text": "An under-recognized inflammatory dermatosis characterized by an evolution of distinctive clinicopathological features\" (2016).",
"title": ""
},
{
"docid": "968ea2dcfd30492a81a71be25f16e350",
"text": "Tree-structured data are becoming ubiquitous nowadays and manipulating them based on similarity is essential for many applications. The generally accepted similarity measure for trees is the edit distance. Although similarity search has been extensively studied, searching for similar trees is still an open problem due to the high complexity of computing the tree edit distance. In this paper, we propose to transform tree-structured data into an approximate numerical multidimensional vector which encodes the original structure information. We prove that the L1 distance of the corresponding vectors, whose computational complexity is O(|T1| + |T2|), forms a lower bound for the edit distance between trees. Based on the theoretical analysis, we describe a novel algorithm which embeds the proposed distance into a filter-and-refine framework to process similarity search on tree-structured data. The experimental results show that our algorithm reduces dramatically the distance computation cost. Our method is especially suitable for accelerating similarity query processing on large trees in massive datasets.",
"title": ""
},
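The abstract above maps each tree to a numerical vector whose L1 distance lower-bounds the tree edit distance, enabling a filter-and-refine search. The toy sketch below uses a much simpler embedding than the paper's (a histogram of node labels): for unit-cost edits, each insert, delete, or relabel changes the histogram's L1 norm by at most 2, so half the L1 distance is still a valid lower bound that can prune candidates before the expensive exact computation. The Node class and example trees are assumptions for illustration.

```python
# Filter-and-refine candidate pruning for tree similarity search (toy embedding).
from collections import Counter

class Node:
    def __init__(self, label, children=()):
        self.label, self.children = label, list(children)

def label_histogram(root):
    hist, stack = Counter(), [root]
    while stack:
        n = stack.pop()
        hist[n.label] += 1
        stack.extend(n.children)
    return hist

def l1_lower_bound(t1, t2):
    h1, h2 = label_histogram(t1), label_histogram(t2)
    l1 = sum(abs(h1[k] - h2[k]) for k in set(h1) | set(h2))
    return l1 / 2   # lower bound on the unit-cost tree edit distance

def filter_candidates(query, trees, tau):
    """Keep only trees whose edit distance to `query` could still be <= tau."""
    return [t for t in trees if l1_lower_bound(query, t) <= tau]

if __name__ == "__main__":
    q  = Node("a", [Node("b"), Node("c")])
    t1 = Node("a", [Node("b"), Node("c"), Node("d")])     # close to q, survives the filter
    t2 = Node("x", [Node("y", [Node("z")]), Node("w")])   # far from q, pruned
    survivors = filter_candidates(q, [t1, t2], tau=1)
    print([label_histogram(t) for t in survivors])
```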
{
"docid": "4c563b09a10ce0b444edb645ce411d42",
"text": "Privacy and security are two important but seemingly contradictory objectives in a pervasive computing environment (PCE). On one hand, service providers want to authenticate legitimate users and make sure they are accessing their authorized services in a legal way. On the other hand, users want to maintain the necessary privacy without being tracked down for wherever they are and whatever they are doing. In this paper, a novel privacy preserving authentication and access control scheme to secure the interactions between mobile users and services in PCEs is proposed. The proposed scheme seamlessly integrates two underlying cryptographic primitives, namely blind signature and hash chain, into a highly flexible and lightweight authentication and key establishment protocol. The scheme provides explicit mutual authentication between a user and a service while allowing the user to anonymously interact with the service. Differentiated service access control is also enabled in the proposed scheme by classifying mobile users into different service groups. The correctness of the proposed authentication and key establishment protocol is formally verified based on Burrows-Abadi-Needham logic",
"title": ""
},
{
"docid": "1a819d090746e83676b0fc3ee94fd526",
"text": "Brain-computer interfaces (BCIs) use signals recorded from the brain to operate robotic or prosthetic devices. Both invasive and noninvasive approaches have proven effective. Achieving the speed, accuracy, and reliability necessary for real-world applications remains the major challenge for BCI-based robotic control.",
"title": ""
},
{
"docid": "b05fc1f939ff50dc07dbbc170cd28478",
"text": "A compact multiresonant antenna for octaband LTE/WWAN operation in the internal smartphone applications is proposed and discussed in this letter. With a small volume of 15×25×4 mm3, the presented antenna comprises two direct feeding strips and a chip-inductor-loaded two-branch shorted strip. The two direct feeding strips can provide two resonant modes at around 1750 and 2650 MHz, and the two-branch shorted strip can generate a double-resonance mode at about 725 and 812 MHz. Moreover, a three-element bandstop matching circuit is designed to generate an additional resonance for bandwidth enhancement of the lower band. Ultimately, up to five resonances are achieved to cover the desired 704-960- and 1710-2690-MHz bands. Simulated and measured results are presented to demonstrate the validity of the proposed antenna.",
"title": ""
},
{
"docid": "c497964a942cc4187ab5dd8c8ea1c6d4",
"text": "De novo sequencing is an important task in proteomics to identify novel peptide sequences. Traditionally, only one MS/MS spectrum is used for the sequencing of a peptide; however, the use of multiple spectra of the same peptide with different types of fragmentation has the potential to significantly increase the accuracy and practicality of de novo sequencing. Research into the use of multiple spectra is in a nascent stage. We propose a general framework to combine the two different types of MS/MS data. Experiments demonstrate that our method significantly improves the de novo sequencing of existing software.",
"title": ""
},
{
"docid": "f6826b5983bc4af466e42e149ac19ba8",
"text": "Automatic violence detection from video is a hot topic for many video surveillance applications. However, there has been little success in developing an algorithm that can detect violence in surveillance videos with high performance. In this paper, following our recently proposed idea of motion Weber local descriptor (WLD), we make two major improvements and propose a more effective and efficient algorithm for detecting violence from motion images. First, we propose an improved WLD (IWLD) to better depict low-level image appearance information, and then extend the spatial descriptor IWLD by adding a temporal component to capture local motion information and hence form the motion IWLD. Second, we propose a modified sparse-representation-based classification model to both control the reconstruction error of coding coefficients and minimize the classification error. Based on the proposed sparse model, a class-specific dictionary containing dictionary atoms corresponding to the class labels is learned using class labels of training samples. With this learned dictionary, not only the representation residual but also the representation coefficients become discriminative. A classification scheme integrating the modified sparse model is developed to exploit such discriminative information. The experimental results on three benchmark data sets have demonstrated the superior performance of the proposed approach over the state of the arts.",
"title": ""
},
{
"docid": "b850d522f3283e638a5995242ebe2b08",
"text": "Agile methods may produce software faster but we also need to know how they meet our quality requirements. In this paper we compare the waterfall model with agile processes to show how agile methods achieve software quality under time pressure and in an unstable requirements environment, i.e. we analyze agile software quality assurance. We present a detailed waterfall model showing its software quality support processes. We then show the quality practices that agile methods have integrated into their processes. This allows us to answer the question \"can agile methods ensure quality even though they develop software faster and can handle unstable requirements?\".",
"title": ""
},
{
"docid": "b23230f0386f185b7d5eb191034d58ec",
"text": "Risk management in global information technology (IT) projects is becoming a critical area of concern for practitioners. Global IT projects usually span multiple locations involving various culturally diverse groups that use multiple standards and technologies. These multiplicities cause dynamic risks through interactions among internal (i.e., people, process, and technology) and external elements (i.e., business and natural environments) of global IT projects. This study proposes an agile risk-management framework for global IT project settings. By analyzing the dynamic interactions among multiplicities (e.g., multi-locations, multi-cultures, multi-groups, and multi-interests) embedded in the project elements, we identify the dynamic risks threatening the success of a global IT project. Adopting the principles of service-oriented architecture (SOA), we further propose a set of agile management strategies for mitigating the dynamic risks. The mitigation strategies are conceptually validated. The proposed framework will help practitioners understand the potential risks in their global IT projects and resolve their complex situations when certain types of dynamic risks arise.",
"title": ""
},
{
"docid": "b91204ac8a118fcde9a774e925f24a7e",
"text": "Document clustering has been recognized as a central problem in text data management. Such a problem becomes particularly challenging when document contents are characterized by subtopical discussions that are not necessarily relevant to each other. Existing methods for document clustering have traditionally assumed that a document is an indivisible unit for text representation and similarity computation, which may not be appropriate to handle documents with multiple topics. In this paper, we address the problem of multi-topic document clustering by leveraging the natural composition of documents in text segments that are coherent with respect to the underlying subtopics. We propose a novel document clustering framework that is designed to induce a document organization from the identification of cohesive groups of segment-based portions of the original documents. We empirically give evidence of the significance of our segment-based approach on large collections of multi-topic documents, and we compare it to conventional methods for document clustering.",
"title": ""
},
{
"docid": "95d8b83eadde6d6da202341c0b9238c8",
"text": "Numerous studies have demonstrated that water-based compost preparations, referred to as compost tea and compost-water extract, can suppress phytopathogens and plant diseases. Despite its potential, compost tea has generally been considered as inadequate for use as a biocontrol agent in conventional cropping systems but important to organic producers who have limited disease control options. The major impediments to the use of compost tea have been the lessthan-desirable and inconsistent levels of plant disease suppression as influenced by compost tea production and application factors including compost source and maturity, brewing time and aeration, dilution and application rate and application frequency. Although the mechanisms involved in disease suppression are not fully understood, sterilization of compost tea has generally resulted in a loss in disease suppressiveness. This indicates that the mechanisms of suppression are often, or predominantly, biological, although physico-chemical factors have also been implicated. Increasing the use of molecular approaches, such as metagenomics, metaproteomics, metatranscriptomics and metaproteogenomics should prove useful in better understanding the relationships between microbial abundance, diversity, functions and disease suppressive efficacy of compost tea. Such investigations are crucial in developing protocols for optimizing the compost tea production process so as to maximize disease suppressive effect without exposing the manufacturer or user to the risk of human pathogens. To this end, it is recommended that compost tea be used as part of an integrated disease management system.",
"title": ""
},
{
"docid": "72a86b52797d61bf631d75cd7109e9d9",
"text": "We introduce Olympus, a freely available framework for research in conversational interfaces. Olympus’ open, transparent, flexible, modular and scalable nature facilitates the development of large-scale, real-world systems, and enables research leading to technological and scientific advances in conversational spoken language interfaces. In this paper, we describe the overall architecture, several systems spanning different domains, and a number of current research efforts supported by Olympus.",
"title": ""
},
{
"docid": "3b302ce4b5b8b42a61c7c4c25c0f3cbf",
"text": "This paper describes quorum leases, a new technique that allows Paxos-based systems to perform reads with high throughput and low latency. Quorum leases do not sacrifice consistency and have only a small impact on system availability and write latency. Quorum leases allow a majority of replicas to perform strongly consistent local reads, which substantially reduces read latency at those replicas (e.g., by two orders of magnitude in wide-area scenarios). Previous techniques for performing local reads in Paxos systems either (a) sacrifice consistency; (b) allow only one replica to read locally; or (c) decrease the availability of the system and increase the latency of all updates by requiring all replicas to be notified synchronously. We describe the design of quorum leases and evaluate their benefits compared to previous approaches through an implementation running in five geo-distributed Amazon EC2 datacenters.",
"title": ""
}
] | scidocsrr |
80e9309b3e9bb8f29e81d26f3cb8606b | The Incredible ELK | [
{
"docid": "87af3cf22afaf5903a521e653f693e6c",
"text": "Finding the justifications of an entailment (that is, all the minimal set of axioms sufficient to produce an entailment) has emerged as a key inference service for the Web Ontology Language (OWL). Justifications are essential for debugging unsatisfiable classes and contradictions. The availability of justifications as explanations of entailments improves the understandability of large and complex ontologies. In this paper, we present several algorithms for computing all the justifications of an entailment in an OWL-DL Ontology and show, by an empirical evaluation, that even a reasoner independent approach works well on real ontologies.",
"title": ""
},
{
"docid": "9814af3a2c855717806ad7496d21f40e",
"text": "This chapter gives an extended introduction to the lightweight profiles OWL EL, OWL QL, and OWL RL of the Web Ontology Language OWL. The three ontology language standards are sublanguages of OWL DL that are restricted in ways that significantly simplify ontological reasoning. Compared to OWL DL as a whole, reasoning algorithms for the OWL profiles show higher performance, are easier to implement, and can scale to larger amounts of data. Since ontological reasoning is of great importance for designing and deploying OWL ontologies, the profiles are highly attractive for many applications. These advantages come at a price: various modelling features of OWL are not available in all or some of the OWL profiles. Moreover, the profiles are mutually incomparable in the sense that each of them offers a combination of features that is available in none of the others. This chapter provides an overview of these differences and explains why some of them are essential to retain the desired properties. To this end, we recall the relationship between OWL and description logics (DLs), and show how each of the profiles is typically treated in reasoning algorithms.",
"title": ""
}
] | [
{
"docid": "69a11f89a92051631e1c07f2af475843",
"text": "Animal-assisted therapy (AAT) has been practiced for many years and there is now increasing interest in demonstrating its efficacy through research. To date, no known quantitative review of AAT studies has been published; our study sought to fill this gap. We conducted a comprehensive search of articles reporting on AAT in which we reviewed 250 studies, 49 of which met our inclusion criteria and were submitted to meta-analytic procedures. Overall, AAT was associated with moderate effect sizes in improving outcomes in four areas: Autism-spectrum symptoms, medical difficulties, behavioral problems, and emotional well-being. Contrary to expectations, characteristics of participants and studies did not produce differential outcomes. AAT shows promise as an additive to established interventions and future research should investigate the conditions under which AAT can be most helpful.",
"title": ""
},
{
"docid": "115fb4dcd7d5a1240691e430cd107dce",
"text": "Human motion capture data, which are used to animate animation characters, have been widely used in many areas. To satisfy the high-precision requirement, human motion data are captured with a high frequency (120 frames/s) by a high-precision capture system. However, the high frequency and nonlinear structure make the storage, retrieval, and browsing of motion data challenging problems, which can be solved by keyframe extraction. Current keyframe extraction methods do not properly model two important characteristics of motion data, i.e., sparseness and Riemannian manifold structure. Therefore, we propose a new model called joint kernel sparse representation (SR), which is in marked contrast to all current keyframe extraction methods for motion data and can simultaneously model the sparseness and the Riemannian manifold structure. The proposed model completes the SR in a kernel-induced space with a geodesic exponential kernel, whereas the traditional SR cannot model the nonlinear structure of motion data in the Euclidean space. Meanwhile, because of several important modifications to traditional SR, our model can also exploit the relations between joints and solve two problems, i.e., the unreasonable distribution and redundancy of extracted keyframes, which current methods do not solve. Extensive experiments demonstrate the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "b7dcd24f098965ff757b7ce5f183662b",
"text": "We give an overview of a complex systems approach to large blackouts of electric power transmission systems caused by cascading failure. Instead of looking at the details of particular blackouts, we study the statistics and dynamics of series of blackouts with approximate global models. Blackout data from several countries suggest that the frequency of large blackouts is governed by a power law. The power law makes the risk of large blackouts consequential and is consistent with the power system being a complex system designed and operated near a critical point. Power system overall loading or stress relative to operating limits is a key factor affecting the risk of cascading failure. Power system blackout models and abstract models of cascading failure show critical points with power law behavior as load is increased. To explain why the power system is operated near these critical points and inspired by concepts from self-organized criticality, we suggest that power system operating margins evolve slowly to near a critical point and confirm this idea using a power system model. The slow evolution of the power system is driven by a steady increase in electric loading, economic pressures to maximize the use of the grid, and the engineering responses to blackouts that upgrade the system. Mitigation of blackout risk should account for dynamical effects in complex self-organized critical systems. For example, some methods of suppressing small blackouts could ultimately increase the risk of large blackouts.",
"title": ""
},
{
"docid": "4cbf8dc762813225048edc555a28a0c4",
"text": "The Semantic Web and Linked Data gained traction in the last years. However, the majority of information still is contained in unstructured documents. This can also not be expected to change, since text, images and videos are the natural way how humans interact with information. Semantic structuring on the other hand enables the (semi-)automatic integration, repurposing, rearrangement of information. NLP technologies and formalisms for the integrated representation of unstructured and semantic content (such as RDFa and Microdata) aim at bridging this semantic gap. However, in order for humans to truly benefit from this integration, we need ways to author, visualize and explore unstructured and semantically enriched content in an integrated manner. In this paper, we present the WYSIWYM (What You See is What You Mean) concept, which addresses this issue and formalizes the binding between semantic representation models and UI elements for authoring, visualizing and exploration. With RDFaCE and Pharmer we present and evaluate two complementary showcases implementing the WYSIWYM concept for different application domains.",
"title": ""
},
{
"docid": "d25a3d1a921d78c4e447c8e010647351",
"text": "In the TREC 2005 Spam Evaluation Track, a number of popular spam filters – all owing their heritage to Graham’s A Plan for Spam – did quite well. Machine learning techniques reported elsewhere to perform well were hardly represented in the participating filters, and not represented at all in the better results. A non-traditional technique Prediction by Partial Matching (PPM) – performed exceptionally well, at or near the top of every test. Are the TREC results an anomaly? Is PPM really the best method for spam filtering? How are these results to be reconciled with others showing that methods like Support Vector Machines (SVM) are superior? We address these issues by testing implementations of five different classification methods on the TREC public corpus using the online evaluation methodology introduced in TREC. These results are complemented with cross validation experiments, which facilitate a comparison of the methods considered in the study under different evaluation schemes, and also give insight into the nature and utility of the evaluation regimens themselves. For comparison with previously published results, we also conducted cross validation experiments on the Ling-Spam and PU1 datasets. These tests reveal substantial differences attributable to different test assumptions, in particular batch vs. on-line training and testing, the order of classification, and the method of tokenization. Notwithstanding these differences, the methods that perform well at TREC also perform well using established test methods and corpora. Two previously untested methods – one based on Dynamic Markov Compression and one using logistic regression – compare favorably with competing approaches.",
"title": ""
},
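One of the methods the abstract above evaluates is logistic regression trained and tested in an on-line fashion (classify each message, then learn from it). The sketch below is a generic version of such a filter, not the implementation used in the study; the hashed character 4-gram features, feature-table size, learning rate, and example messages are all illustrative assumptions.

```python
# On-line logistic regression spam filter over hashed character 4-grams (sketch).
import math

DIM, LR = 2 ** 18, 0.1
weights = [0.0] * DIM

def features(text):
    text = text.lower()
    return [hash(text[i:i + 4]) % DIM for i in range(max(len(text) - 3, 1))]

def predict(text):
    z = sum(weights[f] for f in features(text))
    return 1.0 / (1.0 + math.exp(-z))            # probability of spam

def train(text, is_spam):
    p, y = predict(text), 1.0 if is_spam else 0.0
    g = p - y                                     # gradient of the log-loss w.r.t. z
    for f in features(text):
        weights[f] -= LR * g

if __name__ == "__main__":
    stream = [("cheap meds, buy now!!!", True),
              ("meeting moved to 3pm, see agenda", False),
              ("you won a free prize, click here", True),
              ("draft of the report attached", False)]
    for text, label in stream:                    # classify first, then learn (on-line order)
        print(f"{predict(text):.2f}  spam={label}  {text}")
        train(text, label)
```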
{
"docid": "1f52a93eff0c020564acc986b2fef0e7",
"text": "The performance of a predictive model is overestimated when simply determined on the sample of subjects that was used to construct the model. Several internal validation methods are available that aim to provide a more accurate estimate of model performance in new subjects. We evaluated several variants of split-sample, cross-validation and bootstrapping methods with a logistic regression model that included eight predictors for 30-day mortality after an acute myocardial infarction. Random samples with a size between n = 572 and n = 9165 were drawn from a large data set (GUSTO-I; n = 40,830; 2851 deaths) to reflect modeling in data sets with between 5 and 80 events per variable. Independent performance was determined on the remaining subjects. Performance measures included discriminative ability, calibration and overall accuracy. We found that split-sample analyses gave overly pessimistic estimates of performance, with large variability. Cross-validation on 10% of the sample had low bias and low variability, but was not suitable for all performance measures. Internal validity could best be estimated with bootstrapping, which provided stable estimates with low bias. We conclude that split-sample validation is inefficient, and recommend bootstrapping for estimation of internal validity of a predictive logistic regression model.",
"title": ""
},
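The abstract above recommends bootstrapping for internal validation of a predictive logistic regression model. The sketch below shows the standard optimism-correction recipe that this conclusion refers to: refit the model on each bootstrap sample, measure how much performance drops from the bootstrap sample to the original data, and subtract the average drop from the apparent performance. The synthetic data, eight predictors, AUC as the metric, and 200 replicates are illustrative choices, not the GUSTO-I setup.

```python
# Bootstrap (optimism-corrected) internal validation of a logistic regression model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def optimism_corrected_auc(X, y, n_boot=200, seed=0):
    rng = np.random.default_rng(seed)
    model = LogisticRegression(max_iter=1000).fit(X, y)
    apparent = roc_auc_score(y, model.predict_proba(X)[:, 1])
    optimism = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))        # resample with replacement
        Xb, yb = X[idx], y[idx]
        if len(np.unique(yb)) < 2:
            continue                                  # skip degenerate resamples
        m = LogisticRegression(max_iter=1000).fit(Xb, yb)
        auc_boot = roc_auc_score(yb, m.predict_proba(Xb)[:, 1])   # performance on bootstrap sample
        auc_orig = roc_auc_score(y, m.predict_proba(X)[:, 1])     # performance on original data
        optimism.append(auc_boot - auc_orig)
    return apparent, apparent - float(np.mean(optimism))

if __name__ == "__main__":
    X, y = make_classification(n_samples=300, n_features=8, n_informative=5,
                               random_state=0)
    apparent, corrected = optimism_corrected_auc(X, y)
    print(f"apparent AUC {apparent:.3f}  optimism-corrected AUC {corrected:.3f}")
```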
{
"docid": "946517ff7728e321804b36c43e3a0da2",
"text": "We are creating an environment for investigating the role of advanced AI in interactive, story-based computer games. This environment is based on the Unreal Tournament (UT) game engine and the Soar AI engine. Unreal provides a 3D virtual environment, while Soar provides a flexible architecture for developing complex AI characters. This paper describes our progress to date, starting with our game, Haunt 2, which is designed so that complex AI characters will be critical to the success (or failure) of the game. It addresses design issues with constructing a plot for an interactive storytelling environment, creating synthetic characters for that environment, and using a story director agent to tell the story with those characters.",
"title": ""
},
{
"docid": "cf9fe52efd734c536d0a7daaf59a9bcd",
"text": "Image-based sequence recognition has been a long-standing research topic in computer vision. In this paper, we investigate the problem of scene text recognition, which is among the most important and challenging tasks in image-based sequence recognition. A novel neural network architecture, which integrates feature extraction, sequence modeling and transcription into a unified framework, is proposed. Compared with previous systems for scene text recognition, the proposed architecture possesses four distinctive properties: (1) It is end-to-end trainable, in contrast to most of the existing algorithms whose components are separately trained and tuned. (2) It naturally handles sequences in arbitrary lengths, involving no character segmentation or horizontal scale normalization. (3) It is not confined to any predefined lexicon and achieves remarkable performances in both lexicon-free and lexicon-based scene text recognition tasks. (4) It generates an effective yet much smaller model, which is more practical for real-world application scenarios. The experiments on standard benchmarks, including the IIIT-5K, Street View Text and ICDAR datasets, demonstrate the superiority of the proposed algorithm over the prior arts. Moreover, the proposed algorithm performs well in the task of image-based music score recognition, which evidently verifies the generality of it.",
"title": ""
},
{
"docid": "dd0f335262aab9aa5adb0ad7d25b80bf",
"text": "We present a framework for adaptive news access, based on machine learning techniques specifically designed for this task. First, we focus on the system's general functionality and system architecture. We then describe the interface and design of two deployed news agents that are part of the described architecture. While the first agent provides personalized news through a web-based interface, the second system is geared towards wireless information devices such as PDAs (personal digital assistants) and cell phones. Based on implicit and explicit user feedback, our agents use a machine learning algorithm to induce individual user models. Motivated by general shortcomings of other user modeling systems for Information Retrieval applications, as well as the specific requirements of news classification, we propose the induction of hybrid user models that consist of separate models for short-term and long-term interests. Furthermore, we illustrate how the described algorithm can be used to address an important issue that has thus far received little attention in the Information Retrieval community: a user's information need changes as a direct result of interaction with information. We empirically evaluate the system's performance based on data collected from regular system users. The goal of the evaluation is not only to understand the performance contributions of the algorithm's individual components, but also to assess the overall utility of the proposed user modeling techniques from a user perspective. Our results provide empirical evidence for the utility of the hybrid user model, and suggest that effective personalization can be achieved without requiring any extra effort from the user.",
"title": ""
},
{
"docid": "5546f93f4c10681edb0fdfe3bf52809c",
"text": "The current applications of neural networks to in vivo medical imaging and signal processing are reviewed. As is evident from the literature neural networks have already been used for a wide variety of tasks within medicine. As this trend is expected to continue this review contains a description of recent studies to provide an appreciation of the problems associated with implementing neural networks for medical imaging and signal processing.",
"title": ""
},
{
"docid": "598fd1fc1d1d6cba7a838c17efe9481b",
"text": "The tens of thousands of high-quality open source software projects on the Internet raise the exciting possibility of studying software development by finding patterns across truly large source code repositories. This could enable new tools for developing code, encouraging reuse, and navigating large projects. In this paper, we build the first giga-token probabilistic language model of source code, based on 352 million lines of Java. This is 100 times the scale of the pioneering work by Hindle et al. The giga-token model is significantly better at the code suggestion task than previous models. More broadly, our approach provides a new “lens” for analyzing software projects, enabling new complexity metrics based on statistical analysis of large corpora. We call these metrics data-driven complexity metrics. We propose new metrics that measure the complexity of a code module and the topical centrality of a module to a software project. In particular, it is possible to distinguish reusable utility classes from classes that are part of a program's core logic based solely on general information theoretic criteria.",
"title": ""
},
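The abstract above builds a token-level n-gram language model over a large code corpus and derives complexity metrics from it. A toy version of that idea is sketched below: a trigram model with add-one smoothing scores a snippet by its cross-entropy in bits per token, so code that looks unlike the corpus scores higher. The tokenizer, the smoothing scheme, and the two-snippet corpus are simplifications chosen for illustration.

```python
# Token-level trigram language model scoring code by cross-entropy (toy scale).
import math
import re
from collections import Counter

TOKEN = re.compile(r"[A-Za-z_]\w*|\d+|\S")

def tokens(code):
    return ["<s>", "<s>"] + TOKEN.findall(code) + ["</s>"]

class TrigramLM:
    def __init__(self):
        self.tri, self.bi, self.vocab = Counter(), Counter(), set()

    def train(self, code):
        t = tokens(code)
        self.vocab.update(t)
        for i in range(2, len(t)):
            self.tri[(t[i - 2], t[i - 1], t[i])] += 1
            self.bi[(t[i - 2], t[i - 1])] += 1

    def cross_entropy(self, code):
        t, V = tokens(code), len(self.vocab) + 1
        logps = []
        for i in range(2, len(t)):
            num = self.tri[(t[i - 2], t[i - 1], t[i])] + 1   # add-one smoothing
            den = self.bi[(t[i - 2], t[i - 1])] + V
            logps.append(math.log2(num / den))
        return -sum(logps) / len(logps)                      # bits per token

if __name__ == "__main__":
    lm = TrigramLM()
    corpus = ["for (int i = 0; i < n; i++) { sum += a[i]; }",
              "for (int j = 0; j < m; j++) { total += b[j]; }"]
    for snippet in corpus:
        lm.train(snippet)
    familiar = "for (int k = 0; k < n; k++) { sum += a[k]; }"
    unusual  = "volatile synchronized strictfp transient x ;;;"
    print(f"familiar: {lm.cross_entropy(familiar):.2f} bits/token")
    print(f"unusual : {lm.cross_entropy(unusual):.2f} bits/token")
```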
{
"docid": "f151c89fecb41e10c6b19ceb659eb163",
"text": "Most organizations have some kind of process-oriented information system that keeps track of business events. Process Mining starts from event logs extracted from these systems in order to discover, analyze, diagnose and improve processes, organizational, social and data structures. Notwithstanding the large number of contributions to the process mining literature over the last decade, the number of studies actually demonstrating the applicability and value of these techniques in practice has been limited. As a consequence, there is a need for real-life case studies suggesting methodologies to conduct process mining analysis and to show the benefits of its application in real-life environments. In this paper we present a methodological framework for a multi-faceted analysis of real-life event logs based on Process Mining. As such, we demonstrate the usefulness and flexibility of process mining techniques to expose organizational inefficiencies in a real-life case study that is centered on the back office process of a large Belgian insurance company. Our analysis shows that process mining techniques constitute an ideal means to tackle organizational challenges by suggesting process improvements and creating a companywide process awareness. 2012 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "f81ea919846bce6bae4298d8780f9123",
"text": "AIMS AND OBJECTIVES\nTo evaluate the effectiveness of an accessibility-enhanced multimedia informational educational programme in reducing anxiety and increasing satisfaction with the information and materials received by patients undergoing cardiac catheterisation.\n\n\nBACKGROUND\nCardiac catheterisation is one of the most anxiety-provoking invasive procedures for patients. However, informational education using multimedia to inform patients undergoing cardiac catheterisation has not been extensively explored.\n\n\nDESIGN\nA randomised experimental design with three-cohort prospective comparisons.\n\n\nMETHODS\nIn total, 123 consecutive patients were randomly assigned to one of three groups: regular education; (group 1), accessibility-enhanced multimedia informational education (group 2) and instructional digital videodisc education (group 3). Anxiety was measured with Spielberger's State Anxiety Inventory, which was administered at four time intervals: before education (T0), immediately after education (T1), before cardiac catheterisation (T2) and one day after cardiac catheterisation (T3). A satisfaction questionnaire was administrated one day after cardiac catheterisation. Data were collected from May 2009-September 2010 and analysed using descriptive statistics, chi-squared tests, one-way analysis of variance, Scheffe's post hoc test and generalised estimating equations.\n\n\nRESULTS\nAll patients experienced moderate anxiety at T0 to low anxiety at T3. Accessibility-enhanced multimedia informational education patients had significantly lower anxiety levels and felt the most satisfied with the information and materials received compared with patients in groups 1 and 3. A statistically significant difference in anxiety levels was only found at T2 among the three groups (p = 0·004).\n\n\nCONCLUSIONS\nThe findings demonstrate that the accessibility-enhanced multimedia informational education was the most effective informational educational module for informing patients about their upcoming cardiac catheterisation, to reduce anxiety and improve satisfaction with the information and materials received compared with the regular education and instructional digital videodisc education.\n\n\nRELEVANCE TO CLINICAL PRACTICE\nAs the accessibility-enhanced multimedia informational education reduced patient anxiety and improved satisfaction with the information and materials received, it can be adapted to complement patient education in future regular cardiac care.",
"title": ""
},
{
"docid": "ac657141ed547f870ad35d8c8b2ba8f5",
"text": "Induced by “big data,” “topic modeling” has become an attractive alternative to mapping cowords in terms of co-occurrences and co-absences using network techniques. Does topic modeling provide an alternative for co-word mapping in research practices using moderately sized document collections? We return to the word/document matrix using first a single text with a strong argument (“The Leiden Manifesto”) and then upscale to a sample of moderate size (n = 687) to study the pros and cons of the two approaches in terms of the resulting possibilities for making semantic maps that can serve an argument. The results from co-word mapping (using two different routines) versus topic modeling are significantly uncorrelated. Whereas components in the co-word maps can easily be designated, the topic models provide sets of words that are very differently organized. In these samples, the topic models seem to reveal similarities other than semantic ones (e.g., linguistic ones). In other words, topic modeling does not replace co-word mapping in small and medium-sized sets; but the paper leaves open the possibility that topic modeling would work well for the semantic mapping of large sets.",
"title": ""
},
{
"docid": "cb00162e49af450c3e355088fe7817ac",
"text": "The new sensing applications need enhanced computing capabilities to handle the requirements of complex and huge data processing. The Internet of Things (IoT) concept brings processing and communication features to devices. In addition, the Cloud Computing paradigm provides resources and infrastructures for performing the computations and outsourcing the work from the IoT devices. This scenario opens new opportunities for designing advanced IoT-based applications, however, there is still much research to be done to properly gear all the systems for working together. This work proposes a collaborative model and an architecture to take advantage of the available computing resources. The resulting architecture involves a novel network design with different levels which combines sensing and processing capabilities based on the Mobile Cloud Computing (MCC) paradigm. An experiment is included to demonstrate that this approach can be used in diverse real applications. The results show the flexibility of the architecture to perform complex computational tasks of advanced applications.",
"title": ""
},
{
"docid": "f066cb3e2fc5ee543e0cc76919b261eb",
"text": "Eco-labels are part of a new wave of environmental policy that emphasizes information disclosure as a tool to induce environmentally friendly behavior by both firms and consumers. Little consensus exists as to whether eco-certified products are actually better than their conventional counterparts. This paper seeks to understand the link between eco-certification and product quality. We use data from three leading wine rating publications (Wine Advocate, Wine Enthusiast, and Wine Spectator) to assess quality for 74,148 wines produced in California between 1998 and 2009. Our results indicate that eco-certification is associated with a statistically significant increase in wine quality rating.",
"title": ""
},
{
"docid": "55ec669a67b88ff0b6b88f1fa6408df9",
"text": "This paper proposes low overhead training techniques for a wireless communication system equipped with a Multifunctional Reconfigurable Antenna (MRA) capable of dynamically changing beamwidth and beam directions. A novel microelectromechanical system (MEMS) MRA antenna is presented with radiation patterns (generated using complete electromagnetic full-wave analysis) which are used to quantify the communication link performance gains. In particular, it is shown that using the proposed Exhaustive Training at Reduced Frequency (ETRF) consistently results in a reduction in training overhead. It is also demonstrated that further reduction in training overhead is possible using statistical or MUSIC-based training schemes. Bit Error Rate (BER) and capacity simulations are carried out using an MRA, which can tilt its radiation beam into one of Ndir = 4 or 8 directions with variable beamwidth (≈2π/Ndir). The performance of each training scheme is quantified for OFDM systems operating in frequency selective channels with and without Line of Sight (LoS). We observe 6 dB of gain at BER = 10-4 and 6 dB improvement in capacity (at capacity = 6 bits/sec/subcarrier) are achievable for an MRA with Ndir= 8 as compared to omni directional antennas using ETRF scheme in a LoS environment.",
"title": ""
},
{
"docid": "b06f1e94f0ba22828044030c3a1fe691",
"text": "BACKGROUND\nThe use of opioids for chronic non-cancer pain has increased in the United States since state laws were relaxed in the late 1990s. These policy changes occurred despite scanty scientific evidence that chronic use of opioids was safe and effective.\n\n\nMETHODS\nWe examined opiate prescriptions and dosing patterns (from computerized databases, 1996 to 2002), and accidental poisoning deaths attributable to opioid use (from death certificates, 1995 to 2002), in the Washington State workers' compensation system.\n\n\nRESULTS\nOpioid prescriptions increased only modestly between 1996 and 2002. However, prescriptions for the most potent opioids (Schedule II), as a percentage of all scheduled opioid prescriptions (II, III, and IV), increased from 19.3% in 1996 to 37.2% in 2002. Among long-acting opioids, the average daily morphine equivalent dose increased by 50%, to 132 mg/day. Thirty-two deaths were definitely or probably related to accidental overdose of opioids. The majority of deaths involved men (84%) and smokers (69%).\n\n\nCONCLUSIONS\nThe reasons for escalating doses of the most potent opioids are unknown, but it is possible that tolerance or opioid-induced abnormal pain sensitivity may be occurring in some workers who use opioids for chronic pain. Opioid-related deaths in this population may be preventable through use of prudent guidelines regarding opioid use for chronic pain.",
"title": ""
},
{
"docid": "3668a5a14ea32471bd34a55ff87b45b5",
"text": "This paper proposes a method to separate polyphonic music signal into signals of each musical instrument by NMF: Non-negative Matrix Factorization based on preservation of spectrum envelope. Sound source separation is taken as a fundamental issue in music signal processing and NMF is becoming common to solve it because of its versatility and compatibility with music signal processing. Our method bases on a common feature of harmonic signal: spectrum envelopes of musical signal in close pitches played by the harmonic music instrument would be similar. We estimate power spectrums of each instrument by NMF with restriction to synchronize spectrum envelope of bases which are allocated to all possible center frequencies of each instrument. This manipulation means separation of components which refers to tones of each instrument and realizes both of separation without pre-training and separation of signal including harmonic and non-harmonic sound. We had an experiment to decompose mixture sound signal of MIDI instruments into each instrument and evaluated the result by SNR of single MIDI instrument sound signals and separated signals. As a result, SNR of lead guitar and drums approximately marked 3.6 and 6.0 dB and showed significance of our method.",
"title": ""
},
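The abstract above separates instruments with NMF under a constraint that preserves the spectrum envelope across nearby pitches. The sketch below shows only plain, unconstrained NMF on a synthetic magnitude spectrogram, factorizing V ≈ WH and recovering each source with a soft mask; the envelope-tying constraint that is the paper's main contribution is not modelled, and the synthetic spectral templates are assumptions for illustration.

```python
# Plain NMF separation of a toy two-source magnitude spectrogram (sketch only).
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_freq, n_frames = 64, 200

# Two synthetic sources: a "low" and a "high" spectral template with random gains.
w_low  = np.exp(-0.5 * ((np.arange(n_freq) - 10) / 3.0) ** 2)
w_high = np.exp(-0.5 * ((np.arange(n_freq) - 45) / 3.0) ** 2)
h_low  = rng.random(n_frames)
h_high = rng.random(n_frames)
V = np.outer(w_low, h_low) + np.outer(w_high, h_high) + 1e-3

model = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(V)      # (n_freq, 2) spectral bases
H = model.components_           # (2, n_frames) activations
V_hat = W @ H

# Wiener-style soft mask per component, then a per-source reconstruction.
for k in range(2):
    mask = np.outer(W[:, k], H[k]) / np.maximum(V_hat, 1e-12)
    source_k = mask * V
    corr_low = np.corrcoef(source_k.sum(axis=0), h_low)[0, 1]
    print(f"component {k}: correlation with the low-source activations = {corr_low:.2f}")
```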
{
"docid": "12d565f0aaa6960e793b96f1c26cb103",
"text": "The new western Mode 5 IFF (Identification Foe or Friend) system is introduced. Based on analysis of signal features and format characteristics of Mode 5, a new signal detection method using phase and Amplitude correlation is put forward. This method utilizes odd and even channels to separate the signal, and then the separated signals are performed correlation with predefined mask. Through detecting preamble, the detection of Mode 5 signal is implemented. Finally, simulation results show the validity of the proposed method.",
"title": ""
}
] | scidocsrr |
5df68dcfb86b34f85a01916e74852a7b | Attending to the present: mindfulness meditation reveals distinct neural modes of self-reference. | [
{
"docid": "c6e1c8aa6633ec4f05240de1a3793912",
"text": "Medial prefrontal cortex (MPFC) is among those brain regions having the highest baseline metabolic activity at rest and one that exhibits decreases from this baseline across a wide variety of goal-directed behaviors in functional imaging studies. This high metabolic rate and this behavior suggest the existence of an organized mode of default brain function, elements of which may be either attenuated or enhanced. Extant data suggest that these MPFC regions may contribute to the neural instantiation of aspects of the multifaceted \"self.\" We explore this important concept by targeting and manipulating elements of MPFC default state activity. In this functional magnetic resonance imaging (fMRI) study, subjects made two judgments, one self-referential, the other not, in response to affectively normed pictures: pleasant vs. unpleasant (an internally cued condition, ICC) and indoors vs. outdoors (an externally cued condition, ECC). The ICC was preferentially associated with activity increases along the dorsal MPFC. These increases were accompanied by decreases in both active task conditions in ventral MPFC. These results support the view that dorsal and ventral MPFC are differentially influenced by attentiondemanding tasks and explicitly self-referential tasks. The presence of self-referential mental activity appears to be associated with increases from the baseline in dorsal MPFC. Reductions in ventral MPFC occurred consistent with the fact that attention-demanding tasks attenuate emotional processing. We posit that both self-referential mental activity and emotional processing represent elements of the default state as represented by activity in MPFC. We suggest that a useful way to explore the neurobiology of the self is to explore the nature of default state activity.",
"title": ""
},
{
"docid": "a55eed627afaf39ee308cc9e0e10a698",
"text": "Perspective-taking is a complex cognitive process involved in social cognition. This positron emission tomography (PET) study investigated by means of a factorial design the interaction between the emotional and the perspective factors. Participants were asked to adopt either their own (first person) perspective or the (third person) perspective of their mothers in response to situations involving social emotions or to neutral situations. The main effect of third-person versus first-person perspective resulted in hemodynamic increase in the medial part of the superior frontal gyrus, the left superior temporal sulcus, the left temporal pole, the posterior cingulate gyrus, and the right inferior parietal lobe. A cluster in the postcentral gyrus was detected in the reverse comparison. The amygdala was selectively activated when subjects were processing social emotions, both related to self and other. Interaction effects were identified in the left temporal pole and in the right postcentral gyrus. These results support our prediction that the frontopolar, the somatosensory cortex, and the right inferior parietal lobe are crucial in the process of self/ other distinction. In addition, this study provides important building blocks in our understanding of social emotion processing and human empathy.",
"title": ""
},
{
"docid": "4b284736c51435f9ab6f52f174dc7def",
"text": "Recognition of emotion draws on a distributed set of structures that include the occipitotemporal neocortex, amygdala, orbitofrontal cortex and right frontoparietal cortices. Recognition of fear may draw especially on the amygdala and the detection of disgust may rely on the insula and basal ganglia. Two important mechanisms for recognition of emotions are the construction of a simulation of the observed emotion in the perceiver, and the modulation of sensory cortices via top-down influences.",
"title": ""
},
{
"docid": "34257e8924d8f9deec3171589b0b86f2",
"text": "The topics treated in The brain and emotion include the definition, nature, and functions of emotion (Ch. 3); the neural bases of emotion (Ch. 4); reward, punishment, and emotion in brain design (Ch. 10); a theory of consciousness and its application to understanding emotion and pleasure (Ch. 9); and neural networks and emotion-related learning (Appendix). The approach is that emotions can be considered as states elicited by reinforcers (rewards and punishers). This approach helps with understanding the functions of emotion, with classifying different emotions, and in understanding what information-processing systems in the brain are involved in emotion, and how they are involved. The hypothesis is developed that brains are designed around reward- and punishment-evaluation systems, because this is the way that genes can build a complex system that will produce appropriate but flexible behavior to increase fitness (Ch. 10). By specifying goals rather than particular behavioral patterns of responses, genes leave much more open the possible behavioral strategies that might be required to increase fitness. The importance of reward and punishment systems in brain design also provides a basis for understanding the brain mechanisms of motivation, as described in Chapters 2 for appetite and feeding, 5 for brain-stimulation reward, 6 for addiction, 7 for thirst, and 8 for sexual behavior.",
"title": ""
}
] | [
{
"docid": "5e453defd762bb4ecfae5dcd13182b4a",
"text": "We present a comprehensive lifetime prediction methodology for both intrinsic and extrinsic Time-Dependent Dielectric Breakdown (TDDB) failures to provide adequate Design-for-Reliability. For intrinsic failures, we propose applying the √E model and estimating the Weibull slope using dedicated single-via test structures. This effectively prevents lifetime underestimation, and thus relaxes design restrictions. For extrinsic failures, we propose applying the thinning model and Critical Area Analysis (CAA). In the thinning model, random defects reduce effective spaces between interconnects, causing TDDB failures. We can quantify the failure probabilities by using CAA for any design layouts of various LSI products.",
"title": ""
},
{
"docid": "9ff76c8500a15d1c9b4a980b37bca505",
"text": "The thesis is about linear genetic programming (LGP), a machine learning approach that evolves computer programs as sequences of imperative instructions. Two fundamental differences to the more common tree-based variant (TGP) may be identified. These are the graph-based functional structure of linear genetic programs, on the one hand, and the existence of structurally noneffective code, on the other hand. The two major objectives of this work comprise (1) the development of more advanced methods and variation operators to produce better and more compact program solutions and (2) the analysis of general EA/GP phenomena in linear GP, including intron code, neutral variations, and code growth, among others. First, we introduce efficient algorithms for extracting features of the imperative and functional structure of linear genetic programs. In doing so, especially the detection and elimination of noneffective code during runtime will turn out as a powerful tool to accelerate the time-consuming step of fitness evaluation in GP. Variation operators are discussed systematically for the linear program representation. We will demonstrate that so called effective instruction mutations achieve the best performance in terms of solution quality. These mutations operate only on the (structurally) effective code and restrict the mutation step size to one instruction. One possibility to further improve their performance is to explicitly increase the probability of neutral variations. As a second, more time-efficient alternative we explicitly control the mutation step size on the effective code (effective step size). Minimum steps do not allow more than one effective instruction to change its effectiveness status. That is, only a single node may be connected to or disconnected from the effective graph component. It is an interesting phenomenon that, to some extent, the effective code becomes more robust against destructions over the generations already implicitly. A special concern of this thesis is to convince the reader that there are some serious arguments for using a linear representation. In a crossover-based comparison LGP has been found superior to TGP over a set of benchmark problems. Furthermore, linear solutions turned out to be more compact than tree solutions due to (1) multiple usage of subgraph results and (2) implicit parsimony pressure by structurally noneffective code. The phenomenon of code growth is analyzed for different linear genetic operators. When applying instruction mutations exclusively almost only neutral variations may be held responsible for the emergence and propagation of intron code. It is noteworthy that linear genetic programs may not grow if all neutral variation effects are rejected and if the variation step size is minimum. For the same reasons effective instruction mutations realize an implicit complexity control in linear GP which reduces a possible negative effect of code growth to a minimum. Another noteworthy result in this context is that program size is strongly increased by crossover while it is hardly influenced by mutation even if step sizes are not explicitly restricted.",
"title": ""
},
{
"docid": "664b9bb1f132a87e2f579945a31852b7",
"text": "Major efforts have been conducted on ontology learning, that is, semiautomatic processes for the construction of domain ontologies from diverse sources of information. In the past few years, a research trend has focused on the construction of educational ontologies, that is, ontologies to be used for educational purposes. The identification of the terminology is crucial to build ontologies. Term extraction techniques allow the identification of the domain-related terms from electronic resources. This paper presents LiTeWi, a novel method that combines current unsupervised term extraction approaches for creating educational ontologies for technology supported learning systems from electronic textbooks. LiTeWi uses Wikipedia as an additional information source. Wikipedia contains more than 30 million articles covering the terminology of nearly every domain in 288 languages, which makes it an appropriate generic corpus for term extraction. Furthermore, given that its content is available in several languages, it promotes both domain and language independence. LiTeWi is aimed at being used by teachers, who usually develop their didactic material from textbooks. To evaluate its performance, LiTeWi was tuned up using a textbook on object oriented programming and then tested with two textbooks of different domains—astronomy and molecular biology. Introduction",
"title": ""
},
{
"docid": "ddff0a3c6ed2dc036cf5d6b93d2da481",
"text": "Dense video captioning is a newly emerging task that aims at both localizing and describing all events in a video. We identify and tackle two challenges on this task, namely, (1) how to utilize both past and future contexts for accurate event proposal predictions, and (2) how to construct informative input to the decoder for generating natural event descriptions. First, previous works predominantly generate temporal event proposals in the forward direction, which neglects future video context. We propose a bidirectional proposal method that effectively exploits both past and future contexts to make proposal predictions. Second, different events ending at (nearly) the same time are indistinguishable in the previous works, resulting in the same captions. We solve this problem by representing each event with an attentive fusion of hidden states from the proposal module and video contents (e.g., C3D features). We further propose a novel context gating mechanism to balance the contributions from the current event and its surrounding contexts dynamically. We empirically show that our attentively fused event representation is superior to the proposal hidden states or video contents alone. By coupling proposal and captioning modules into one unified framework, our model outperforms the state-of-the-arts on the ActivityNet Captions dataset with a relative gain of over 100% (Meteor score increases from 4.82 to 9.65).",
"title": ""
},
{
"docid": "89dbc16a2510e3b0e4a248f428a9ffc0",
"text": "Complex networks are ubiquitous in our daily life, with the World Wide Web, social networks, and academic citation networks being some of the common examples. It is well understood that modeling and understanding the network structure is of crucial importance to revealing the network functions. One important problem, known as community detection, is to detect and extract the community structure of networks. More recently, the focus in this research topic has been switched to the detection of overlapping communities. In this paper, based on the matrix factorization approach, we propose a method called bounded nonnegative matrix tri-factorization (BNMTF). Using three factors in the factorization, we can explicitly model and learn the community membership of each node as well as the interaction among communities. Based on a unified formulation for both directed and undirected networks, the optimization problem underlying BNMTF can use either the squared loss or the generalized KL-divergence as its loss function. In addition, to address the sparsity problem as a result of missing edges, we also propose another setting in which the loss function is defined only on the observed edges. We report some experiments on real-world datasets to demonstrate the superiority of BNMTF over other related matrix factorization methods.",
"title": ""
},
{
"docid": "fd0defe3aaabd2e27c7f9d3af47dd635",
"text": "A fast test for triangle-triangle intersection by computing signed vertex-plane distances (sufficient if one triangle is wholly to one side of the other) and signed line-line distances of selected edges (otherwise) is presented. This algorithm is faster than previously published algorithms and the code is available online.",
"title": ""
},
{
"docid": "0e600cedfbd143fe68165e20317c46d4",
"text": "We propose an efficient real-time automatic license plate recognition (ALPR) framework, particularly designed to work on CCTV video footage obtained from cameras that are not dedicated to the use in ALPR. At present, in license plate detection, tracking and recognition are reasonably well-tackled problems with many successful commercial solutions being available. However, the existing ALPR algorithms are based on the assumption that the input video will be obtained via a dedicated, high-resolution, high-speed camera and is/or supported by a controlled capture environment, with appropriate camera height, focus, exposure/shutter speed and lighting settings. However, typical video forensic applications may require searching for a vehicle having a particular number plate on noisy CCTV video footage obtained via non-dedicated, medium-to-low resolution cameras, working under poor illumination conditions. ALPR in such video content faces severe challenges in license plate localization, tracking and recognition stages. This paper proposes a novel approach for efficient localization of license plates in video sequence and the use of a revised version of an existing technique for tracking and recognition. A special feature of the proposed approach is that it is intelligent enough to automatically adjust for varying camera distances and diverse lighting conditions, a requirement for a video forensic tool that may operate on videos obtained by a diverse set of unspecified, distributed CCTV cameras.",
"title": ""
},
{
"docid": "75952b1d2c9c2f358c4c2e3401a00245",
"text": "This book is an outstanding contribution to the philosophical study of language and mind, by one of the most influential thinkers of our time. In a series of penetrating essays, Noam Chomsky cuts through the confusion and prejudice which has infected the study of language and mind, bringing new solutions to traditional philosophical puzzles and fresh perspectives on issues of general interest, ranging from the mind–body problem to the unification of science. Using a range of imaginative and deceptively simple linguistic analyses, Chomsky argues that there is no coherent notion of “language” external to the human mind, and that the study of language should take as its focus the mental construct which constitutes our knowledge of language. Human language is therefore a psychological, ultimately a “biological object,” and should be analysed using the methodology of the natural sciences. His examples and analyses come together in this book to give a unique and compelling perspective on language and the mind.",
"title": ""
},
{
"docid": "3bff3136e5e2823d0cca2f864fe9e512",
"text": "Cloud computing provides variety of services with the growth of their offerings. Due to efficient services, it faces numerous challenges. It is based on virtualization, which provides users a plethora computing resources by internet without managing any infrastructure of Virtual Machine (VM). With network virtualization, Virtual Machine Manager (VMM) gives isolation among different VMs. But, sometimes the levels of abstraction involved in virtualization have been reducing the workload performance which is also a concern when implementing virtualization to the Cloud computing domain. In this paper, it has been explored how the vendors in cloud environment are using Containers for hosting their applications and also the performance of VM deployments. It also compares VM and Linux Containers with respect to the quality of service, network performance and security evaluation.",
"title": ""
},
{
"docid": "7c3b5470398a219875ba1a6443119c8e",
"text": "Semantic role labeling (SRL) identifies the predicate-argument structure in text with semantic labels. It plays a key role in understanding natural language. In this paper, we present POLYGLOT, a multilingual semantic role labeling system capable of semantically parsing sentences in 9 different languages from 4 different language groups. The core of POLYGLOT are SRL models for individual languages trained with automatically generated Proposition Banks (Akbik et al., 2015). The key feature of the system is that it treats the semantic labels of the English Proposition Bank as “universal semantic labels”: Given a sentence in any of the supported languages, POLYGLOT applies the corresponding SRL and predicts English PropBank frame and role annotation. The results are then visualized to facilitate the understanding of multilingual SRL with this unified semantic representation.",
"title": ""
},
{
"docid": "5bca58cbd1ef80ebf040529578d2a72a",
"text": "In this letter, a printable chipless tag with electromagnetic code using split ring resonators is proposed. A 4 b chipless tag that can be applied to paper/plastic-based items such as ID cards, tickets, banknotes and security documents is designed. The chipless tag generates distinct electromagnetic characteristics by various combinations of a split ring resonator. Furthermore, a reader system is proposed to digitize electromagnetic characteristics and convert chipless tag to electromagnetic code.",
"title": ""
},
{
"docid": "b2c03d8e54a2a6840f6688ab9682e24b",
"text": "Path following and follow-the-leader motion is particularly desirable for minimally-invasive surgery in confined spaces which can only be reached using tortuous paths, e.g. through natural orifices. While path following and followthe- leader motion can be achieved by hyper-redundant snake robots, their size is usually not applicable for medical applications. Continuum robots, such as tendon-driven or concentric tube mechanisms, fulfill the size requirements for minimally invasive surgery, but yet follow-the-leader motion is not inherently provided. In fact, parameters of the manipulator's section curvatures and translation have to be chosen wisely a priori. In this paper, we consider a tendon-driven continuum robot with extensible sections. After reformulating the forward kinematics model, we formulate prerequisites for follow-the-leader motion and present a general approach to determine a sequence of robot configurations to achieve follow-the-leader motion along a given 3D path. We evaluate our approach in a series of simulations with 3D paths composed of constant curvature arcs and general 3D paths described by B-spline curves. Our results show that mean path errors <;0.4mm and mean tip errors <;1.6mm can theoretically be achieved for constant curvature paths and <;2mm and <;3.1mm for general B-spline curves respectively.",
"title": ""
},
{
"docid": "25bcbb44c843d71b7422905e9dbe1340",
"text": "INTRODUCTION\nThe purpose of this study was to evaluate the effect of using the transverse analysis developed at Case Western Reserve University (CWRU) in Cleveland, Ohio. The hypotheses were based on the following: (1) Does following CWRU's transverse analysis improve the orthodontic results? (2) Does following CWRU's transverse analysis minimize the active treatment duration?\n\n\nMETHODS\nA retrospective cohort research study was conducted on a randomly selected sample of 100 subjects. The sample had CWRU's analysis performed retrospectively, and the sample was divided according to whether the subjects followed what CWRU's transverse analysis would have suggested. The American Board of Orthodontics discrepancy index was used to assess the pretreatment records, and quality of the result was evaluated using the American Board of Orthodontics cast/radiograph evaluation. The Mann-Whitney test was used for the comparison.\n\n\nRESULTS\nCWRU's transverse analysis significantly improved the total cast/radiograph evaluation scores (P = 0.041), especially the buccolingual inclination component (P = 0.001). However, it did not significantly affect treatment duration (P = 0.106).\n\n\nCONCLUSIONS\nCWRU's transverse analysis significantly improves the orthodontic results but does not have significant effects on treatment duration.",
"title": ""
},
{
"docid": "e81f1caa398de7f56a70cc4db18d58db",
"text": "UNLABELLED\nThis study aimed to investigate the association of facial proportion and its relation to the golden ratio with the evaluation of facial appearance among Malaysian population. This was a cross-sectional study with 286 randomly selected from Universiti Sains Malaysia (USM) Health Campus students (150 females and 136 males; 100 Malaysian Chinese, 100 Malaysian Malay and 86 Malaysian Indian), with the mean age of 21.54 ± 1.56 (Age range, 18-25). Facial indices obtained from direct facial measurements were used for the classification of facial shape into short, ideal and long. A validated structured questionnaire was used to assess subjects' evaluation of their own facial appearance. The mean facial indices of Malaysian Indian (MI), Malaysian Chinese (MC) and Malaysian Malay (MM) were 1.59 ± 0.19, 1.57 ± 0.25 and 1.54 ± 0.23 respectively. Only MC showed significant sexual dimorphism in facial index (P = 0.047; P<0.05) but no significant difference was found between races. Out of the 286 subjects, 49 (17.1%) were of ideal facial shape, 156 (54.5%) short and 81 (28.3%) long. The facial evaluation questionnaire showed that MC had the lowest satisfaction with mean score of 2.18 ± 0.97 for overall impression and 2.15 ± 1.04 for facial parts, compared to MM and MI, with mean score of 1.80 ± 0.97 and 1.64 ± 0.74 respectively for overall impression; 1.75 ± 0.95 and 1.70 ± 0.83 respectively for facial parts.\n\n\nIN CONCLUSION\n1) Only 17.1% of Malaysian facial proportion conformed to the golden ratio, with majority of the population having short face (54.5%); 2) Facial index did not depend significantly on races; 3) Significant sexual dimorphism was shown among Malaysian Chinese; 4) All three races are generally satisfied with their own facial appearance; 5) No significant association was found between golden ratio and facial evaluation score among Malaysian population.",
"title": ""
},
{
"docid": "31cd031708856490f756d4399d7709d5",
"text": "Inspecting objects in the industry aims to guarantee product quality allowing problems to be corrected and damaged products to be discarded. Inspection is also widely used in railway maintenance, where wagon components need to be checked due to efficiency and safety concerns. In some organizations, hundreds of wagons are inspected visually by a human inspector, which leads to quality issues and safety risks for the inspectors. This paper describes a wagon component inspection approach using Deep Learning techniques to detect a particular damaged component: the shear pad. We compared our approach for convolutional neural networks with the state of art classification methods to distinguish among three shear pads conditions: absent, damaged, and undamaged shear pad. Our results are very encouraging showing empirical evidence that our approach has better performance than other classification techniques.",
"title": ""
},
{
"docid": "a697f85ad09699ddb38994bd69b11103",
"text": "We show how to perform sparse approximate Gaussian elimination for Laplacian matrices. We present a simple, nearly linear time algorithm that approximates a Laplacian by the product of a sparse lower triangular matrix with its transpose. This gives the first nearly linear time solver for Laplacian systems that is based purely on random sampling, and does not use any graph theoretic constructions such as low-stretch trees, sparsifiers, or expanders. Our algorithm performs a subsampled Cholesky factorization, which we analyze using matrix martingales. As part of the analysis, we give a proof of a concentration inequality for matrix martingales where the differences are sums of conditionally independent variables.",
"title": ""
},
{
"docid": "d8f8931af18f3e0a6424916dfac717ee",
"text": "Twitter data have brought new opportunities to know what happens in the world in real-time, and conduct studies on the human subjectivity on a diversity of issues and topics at large scale, which would not be feasible using traditional methods. However, as well as these data represent a valuable source, a vast amount of noise can be found in them. Because of the brevity of texts and the widespread use of mobile devices, non-standard word forms abound in tweets, which degrade the performance of Natural Language Processing tools. In this paper, a lexical normalization system of tweets written in Spanish is presented. The system suggests normalization candidates for out-of-vocabulary (OOV) words based on similarity of graphemes or phonemes. Using contextual information, the best correction candidate for a word is selected. Experimental results show that the system correctly detects OOV words and the most of cases suggests the proper corrections. Together with this, results indicate a room for improvement in the correction candidate selection. Compared with other methods, the overall performance of the system is above-average and competitive to different approaches in the literature.",
"title": ""
},
{
"docid": "da5c1445453853e23477bfea79fd4605",
"text": "This paper presents an 8-bit column-driver IC with improved deviation of voltage output (DVO) for thin-film-transistor (TFT) liquid crystal displays (LCDs). The various DVO results contributed by the output buffer of a column driver are predicted by using Monte Carlo simulation under different variation conditions. Relying on this prediction, a better compromise can be achieved between DVO and chip size. This work was implemented using 0.35-μm CMOS technology and the measured maximum DVO is only 6.2 mV.",
"title": ""
},
{
"docid": "f598677e19789c92c31936440e709c4d",
"text": "Temporal datasets, in which data evolves continuously, exist in a wide variety of applications, and identifying anomalous or outlying objects from temporal datasets is an important and challenging task. Different from traditional outlier detection, which detects objects that have quite different behavior compared with the other objects, temporal outlier detection tries to identify objects that have different evolutionary behavior compared with other objects. Usually objects form multiple communities, and most of the objects belonging to the same community follow similar patterns of evolution. However, there are some objects which evolve in a very different way relative to other community members, and we define such objects as evolutionary community outliers. This definition represents a novel type of outliers considering both temporal dimension and community patterns. We investigate the problem of identifying evolutionary community outliers given the discovered communities from two snapshots of an evolving dataset. To tackle the challenges of community evolution and outlier detection, we propose an integrated optimization framework which conducts outlier-aware community matching across snapshots and identification of evolutionary outliers in a tightly coupled way. A coordinate descent algorithm is proposed to improve community matching and outlier detection performance iteratively. Experimental results on both synthetic and real datasets show that the proposed approach is highly effective in discovering interesting evolutionary community outliers.",
"title": ""
},
{
"docid": "04271124470c613da4dd4136ceb61a18",
"text": "In this paper, we propose the deep reinforcement relevance network (DRRN), a novel deep architecture, for handling an unbounded action space with applications to language understanding for text-based games. For a particular class of games, a user must choose among a variable number of actions described by text, with the goal of maximizing long-term reward. In these games, the best action is typically that which fits the best to the current situation (modeled as a state in the DRRN), also described by text. Because of the exponential complexity of natural language with respect to sentence length, there is typically an unbounded set of unique actions. Therefore, it is very difficult to pre-define the action set as in the deep Q-network (DQN). To address this challenge, the DRRN extracts high-level embedding vectors from the texts that describe states and actions, respectively, and computes the inner products between the state and action embedding vectors to approximate the Q-function. We evaluate the DRRN on two popular text games, showing superior performance over the DQN.",
"title": ""
}
] | scidocsrr |
a1ede71923b1a94dff46f1c8d67dfb20 | Real-Time Bidding by Reinforcement Learning in Display Advertising | [
{
"docid": "d8982dd146a28c7d2779c781f7110ed5",
"text": "We consider the budget optimization problem faced by an advertiser participating in repeated sponsored search auctions, seeking to maximize the number of clicks attained under that budget. We cast the budget optimization problem as a Markov Decision Process (MDP) with censored observations, and propose a learning algorithm based on the wellknown Kaplan-Meier or product-limit estimator. We validate the performance of this algorithm by comparing it to several others on a large set of search auction data from Microsoft adCenter, demonstrating fast convergence to optimal performance.",
"title": ""
},
{
"docid": "e9eefe7d683a8b02a8456cc5ff0ebe9d",
"text": "The real-time bidding (RTB), aka programmatic buying, has recently become the fastest growing area in online advertising. Instead of bulking buying and inventory-centric buying, RTB mimics stock exchanges and utilises computer algorithms to automatically buy and sell ads in real-time; It uses per impression context and targets the ads to specific people based on data about them, and hence dramatically increases the effectiveness of display advertising. In this paper, we provide an empirical analysis and measurement of a production ad exchange. Using the data sampled from both demand and supply side, we aim to provide first-hand insights into the emerging new impression selling infrastructure and its bidding behaviours, and help identifying research and design issues in such systems. From our study, we observed that periodic patterns occur in various statistics including impressions, clicks, bids, and conversion rates (both post-view and post-click), which suggest time-dependent models would be appropriate for capturing the repeated patterns in RTB. We also found that despite the claimed second price auction, the first price payment in fact is accounted for 55.4% of total cost due to the arrangement of the soft floor price. As such, we argue that the setting of soft floor price in the current RTB systems puts advertisers in a less favourable position. Furthermore, our analysis on the conversation rates shows that the current bidding strategy is far less optimal, indicating the significant needs for optimisation algorithms incorporating the facts such as the temporal behaviours, the frequency and recency of the ad displays, which have not been well considered in the past.",
"title": ""
}
] | [
{
"docid": "548ca7ecd778bc64e4a3812acd73dcfb",
"text": "Inference algorithms of latent Dirichlet allocation (LDA), either for small or big data, can be broadly categorized into expectation-maximization (EM), variational Bayes (VB) and collapsed Gibbs sampling (GS). Looking for a unified understanding of these different inference algorithms is currently an important open problem. In this paper, we revisit these three algorithms from the entropy perspective, and show that EM can achieve the best predictive perplexity (a standard performance metric for LDA accuracy) by minimizing directly the cross entropy between the observed word distribution and LDA's predictive distribution. Moreover, EM can change the entropy of LDA's predictive distribution through tuning priors of LDA, such as the Dirichlet hyperparameters and the number of topics, to minimize the cross entropy with the observed word distribution. Finally, we propose the adaptive EM (AEM) algorithm that converges faster and more accurate than the current state-of-the-art SparseLDA [20] and AliasLDA [12] from small to big data and LDA models. The core idea is that the number of active topics, measured by the residuals between E-steps at successive iterations, decreases significantly, leading to the amortized σ(1) time complexity in terms of the number of topics. The open source code of AEM is available at GitHub.",
"title": ""
},
{
"docid": "759a4737f3774c1487670597f5e011d1",
"text": "Indoor positioning systems (IPS) based on Wi-Fi signals are gaining popularity recently. IPS based on Received Signal Strength Indicator (RSSI) could only achieve a precision of several meters due to the strong temporal and spatial variation of indoor environment. On the other hand, IPS based on Channel State Information (CSI) drive the precision into the sub-meter regime with several access points (AP). However, the performance degrades with fewer APs mainly due to the limit of bandwidth. In this paper, we propose a Wi-Fi-based time-reversal indoor positioning system (WiFi-TRIPS) using the location-specific fingerprints generated by CSIs with a total bandwidth of 1 GHz. WiFi-TRIPS consists of an offline phase and an online phase. In the offline phase, CSIs are collected in different 10 MHz bands from each location-of-interest and the timing and frequency synchronization errors are compensated. We perform a bandwidth concatenation to combine CSIs in different bands into a single fingerprint of 1 GHz. In the online phase, we evaluate the time-reversal resonating strength using the fingerprint from an unknown location and those in the database for location estimation. Extensive experiment results demonstrate a perfect 5cm precision in an 20cm × 70cm area in a non-line-of-sight office environment with one link measurement.",
"title": ""
},
{
"docid": "48544ec3225799c82732db7b3215833b",
"text": "Christian M Jones Laura Scholes Daniel Johnson Mary Katsikitis Michelle C. Carras University of the Sunshine Coast University of the Sunshine Coast Queensland University of Technology University of the Sunshine Coast Johns Hopkins University Queensland, Australia Queensland, Australia Queensland, Australia Queensland, Australia Baltimore, MD, USA [email protected] [email protected] [email protected] [email protected] [email protected]",
"title": ""
},
{
"docid": "65580dfc9bdf73ef72b6a133ab19ccdd",
"text": "A rotary piezoelectric motor design with simple structural components and the potential for miniaturization using a pretwisted beam stator is demonstrated in this paper. The beam acts as a vibration converter to transform axial vibration input from a piezoelectric element into combined axial-torsional vibration. The axial vibration of the stator modulates the torsional friction forces transmitted to the rotor. Prototype stators measuring 6.5 times 6.5 times 67.5 mm were constructed using aluminum (2024-T6) twisted beams with rectangular cross-section and multilayer piezoelectric actuators. The stall torque and no-load speed attained for a rectangular beam with an aspect ratio of 1.44 and pretwist helix angle of 17.7deg were 0.17 mNm and 840 rpm with inputs of 184.4 kHz and 149 mW, respectively. Operation in both clockwise and counterclockwise directions was obtained by choosing either 70.37 or 184.4 kHz for the operating frequency. The effects of rotor preload and power input on motor performance were investigated experimentally. The results suggest that motor efficiency is higher at low power input, and that efficiency increases with preload to a maximum beyond which it begins to drop.",
"title": ""
},
{
"docid": "610629d3891c10442fe5065e07d33736",
"text": "We investigate in this paper deep learning (DL) solutions for prediction of driver's cognitive states (drowsy or alert) using EEG data. We discussed the novel channel-wise convolutional neural network (CCNN) and CCNN-R which is a CCNN variation that uses Restricted Boltzmann Machine in order to replace the convolutional filter. We also consider bagging classifiers based on DL hidden units as an alternative to the conventional DL solutions. To test the performance of the proposed methods, a large EEG dataset from 3 studies of driver's fatigue that includes 70 sessions from 37 subjects is assembled. All proposed methods are tested on both raw EEG and Independent Component Analysis (ICA)-transformed data for cross-session predictions. The results show that CCNN and CCNN-R outperform deep neural networks (DNN) and convolutional neural networks (CNN) as well as other non-DL algorithms and DL with raw EEG inputs achieves better performance than ICA features.",
"title": ""
},
{
"docid": "b3a9ad04e7df1b2250f0a7b625509efd",
"text": "Emotions are very important in human-human communication but are usually ignored in human-computer interaction. Recent work focuses on recognition and generation of emotions as well as emotion driven behavior. Our work focuses on the use of emotions in dialogue systems that can be used with speech input or as well in multi-modal environments.This paper describes a framework for using emotional cues in a dialogue system and their informational characterization. We describe emotion models that can be integrated into the dialogue system and can be used in different domains and tasks. Our application of the dialogue system is planned to model multi-modal human-computer-interaction with a humanoid robotic system.",
"title": ""
},
{
"docid": "1d5624ab9e2e69cd7a96619b25db3e1c",
"text": "Face detection is a fundamental problem in computer vision. It is still a challenging task in unconstrained conditions due to significant variations in scale, pose, expressions, and occlusion. In this paper, we propose a multi-branch fully convolutional network (MB-FCN) for face detection, which considers both efficiency and effectiveness in the design process. Our MB-FCN detector can deal with faces at all scale ranges with only a single pass through the backbone network. As such, our MB-FCN model saves computation and thus is more efficient, compared to previous methods that make multiple passes. For each branch, the specific skip connections of the convolutional feature maps at different layers are exploited to represent faces in specific scale ranges. Specifically, small faces can be represented with both shallow fine-grained and deep powerful coarse features. With this representation, superior improvement in performance is registered for the task of detecting small faces. We test our MB-FCN detector on two public face detection benchmarks, including FDDB and WIDER FACE. Extensive experiments show that our detector outperforms state-of-the-art methods on all these datasets in general and by a substantial margin on the most challenging among them (e.g. WIDER FACE Hard subset). Also, MB-FCN runs at 15 FPS on a GPU for images of size 640 × 480 with no assumption on the minimum detectable face size.",
"title": ""
},
{
"docid": "3f1161fa81b19a15b0d4ff882b99b60a",
"text": "INTRODUCTION\nDupilumab is a fully human IgG4 monoclonal antibody directed against the α subunit of the interleukin (IL)-4 receptor (IL-4Rα). Since the activation of IL-4Rα is utilized by both IL-4 and IL-13 to mediate their pathophysiological effects, dupilumab behaves as a dual antagonist of these two sister cytokines, which blocks IL-4/IL-13-dependent signal transduction. Areas covered: Herein, the authors review the cellular and molecular pathways activated by IL-4 and IL-13, which are relevant to asthma pathobiology. They also review: the mechanism of action of dupilumab, the phase I, II and III studies evaluating the pharmacokinetics as well as the safety, tolerability and clinical efficacy of dupilumab in asthma therapy. Expert opinion: Supported by a strategic mechanism of action, as well as by convincing preliminary clinical results, dupilumab currently appears to be a very promising biological drug for the treatment of severe uncontrolled asthma. It also may have benefits to comorbidities of asthma including atopic dermatitis, chronic sinusitis and nasal polyposis.",
"title": ""
},
{
"docid": "254f437f82e14d889fe6ba15df8369ad",
"text": "In academia, scientific research achievements would be inconceivable without academic collaboration and cooperation among researchers. Previous studies have discovered that productive scholars tend to be more collaborative. However, it is often difficult and time-consuming for researchers to find the most valuable collaborators (MVCs) from a large volume of big scholarly data. In this paper, we present MVCWalker, an innovative method that stands on the shoulders of random walk with restart (RWR) for recommending collaborators to scholars. Three academic factors, i.e., coauthor order, latest collaboration time, and times of collaboration, are exploited to define link importance in academic social networks for the sake of recommendation quality. We conducted extensive experiments on DBLP data set in order to compare MVCWalker to the basic model of RWR and the common neighbor-based model friend of friends in various aspects, including, e.g., the impact of critical parameters and academic factors. Our experimental results show that incorporating the above factors into random walk model can improve the precision, recall rate, and coverage rate of academic collaboration recommendations.",
"title": ""
},
{
"docid": "69ced55a44876f7cc4e57f597fcd5654",
"text": "A wideband circularly polarized (CP) antenna with a conical radiation pattern is investigated. It consists of a feeding probe and parasitic dielectric parallelepiped elements that surround the probe. Since the structure of the antenna looks like a bird nest, it is named as bird-nest antenna. The probe, which protrudes from a circular ground plane, operates in its fundamental monopole mode that generates omnidirectional linearly polarized (LP) fields. The dielectric parallelepipeds constitute a wave polarizer that converts omnidirectional LP fields of the probe into omnidirectional CP fields. To verify the design, a prototype operating in C band was fabricated and measured. The reflection coefficient, axial ratio (AR), radiation pattern, and antenna gain are studied, and reasonable agreement between the measured and simulated results is observed. The prototype has a 10-dB impedance bandwidth of 41.0% and a 3-dB AR bandwidth of as wide as 54.9%. A parametric study was carried out to characterize the proposed antenna. Also, a design guideline is given to facilitate designs of the antenna.",
"title": ""
},
{
"docid": "64f4ee1e5397b1a5dd35f7908ead0429",
"text": "Online user feedback is principally used as an information source for evaluating customers’ satisfaction for a given goods, service or software application. The increasing attitude of people towards sharing comments through the social media is making online user feedback a resource containing different types of valuable information. The huge amount of available user feedback has drawn the attention of researchers from different fields. For instance, data mining techniques have been developed to enable information extraction for different purposes, or the use of social techniques for involving users in the innovation of services and processes. Specifically, current research and technological efforts are put into the definition of platforms to gather and/or analyze multi-modal feedback. But we believe that the understanding of the type of concepts instantiated as information contained in user feedback would be beneficial to define new methods for its better exploitation. In our research, we focus on online explicit user feedback that can be considered as a powerful means for user-driven evolution of software services and applications. Up to our knowledge, a conceptualization of user feedback is still missing. With the purpose of contributing to fill up this gap we propose an ontology, for explicit online user feedback that is founded on a foundational ontology and has been proposed to describe artifacts and processes in software engineering. Our contribution in this paper concerns a novel user feedback ontology founded on a Unified Foundational Ontology (UFO) that supports the description of analysis processes of user feedback in software engineering. We describe the ontology together with an evaluation of its quality, and discuss some application scenarios.",
"title": ""
},
{
"docid": "5940949b1fd6f6b8ab2c45dcb1ece016",
"text": "Despite significant work on the problem of inferring a Twitter user’s gender from her online content, no systematic investigation has been made into leveraging the most obvious signal of a user’s gender: first name. In this paper, we perform a thorough investigation of the link between gender and first name in English tweets. Our work makes several important contributions. The first and most central contribution is two different strategies for incorporating the user’s self-reported name into a gender classifier. We find that this yields a 20% increase in accuracy over a standard baseline classifier. These classifiers are the most accurate gender inference methods for Twitter data developed to date. In order to evaluate our classifiers, we developed a novel way of obtaining gender-labels for Twitter users that does not require analysis of the user’s profile or textual content. This is our second contribution. Our approach eliminates the troubling issue of a label being somehow derived from the same text that a classifier will use to",
"title": ""
},
{
"docid": "27034289da290734ec5136656573ca11",
"text": "Iris recognition as a reliable method for personal identification has been well-studied with the objective to assign the class label of each iris image to a unique subject. In contrast, iris image classification aims to classify an iris image to an application specific category, e.g., iris liveness detection (classification of genuine and fake iris images), race classification (e.g., classification of iris images of Asian and non-Asian subjects), coarse-to-fine iris identification (classification of all iris images in the central database into multiple categories). This paper proposes a general framework for iris image classification based on texture analysis. A novel texture pattern representation method called Hierarchical Visual Codebook (HVC) is proposed to encode the texture primitives of iris images. The proposed HVC method is an integration of two existing Bag-of-Words models, namely Vocabulary Tree (VT), and Locality-constrained Linear Coding (LLC). The HVC adopts a coarse-to-fine visual coding strategy and takes advantages of both VT and LLC for accurate and sparse representation of iris texture. Extensive experimental results demonstrate that the proposed iris image classification method achieves state-of-the-art performance for iris liveness detection, race classification, and coarse-to-fine iris identification. A comprehensive fake iris image database simulating four types of iris spoof attacks is developed as the benchmark for research of iris liveness detection.",
"title": ""
},
{
"docid": "f81dd0c86a7b45e743e4be117b4030c2",
"text": "Stock market prediction is of great importance for financial analysis. Traditionally, many studies only use the news or numerical data for the stock market prediction. In the recent years, in order to explore their complementary, some studies have been conducted to equally treat dual sources of information. However, numerical data often play a much more important role compared with the news. In addition, the existing simple combination cannot exploit their complementarity. In this paper, we propose a numerical-based attention (NBA) method for dual sources stock market prediction. Our major contributions are summarized as follows. First, we propose an attention-based method to effectively exploit the complementarity between news and numerical data in predicting the stock prices. The stock trend information hidden in the news is transformed into the importance distribution of numerical data. Consequently, the news is encoded to guide the selection of numerical data. Our method can effectively filter the noise and make full use of the trend information in news. Then, in order to evaluate our NBA model, we collect news corpus and numerical data to build three datasets from two sources: the China Security Index 300 (CSI300) and the Standard & Poor’s 500 (S&P500). Extensive experiments are conducted, showing that our NBA is superior to previous models in dual sources stock price prediction.",
"title": ""
},
{
"docid": "ddb2fb53f0ead327d064d9b34af9b335",
"text": "We seek to automate the design of molecules based on specific chemical properties. In computational terms, this task involves continuous embedding and generation of molecular graphs. Our primary contribution is the direct realization of molecular graphs, a task previously approached by generating linear SMILES strings instead of graphs. Our junction tree variational autoencoder generates molecular graphs in two phases, by first generating a tree-structured scaffold over chemical substructures, and then combining them into a molecule with a graph message passing network. This approach allows us to incrementally expand molecules while maintaining chemical validity at every step. We evaluate our model on multiple tasks ranging from molecular generation to optimization. Across these tasks, our model outperforms previous state-of-the-art baselines by a significant margin.",
"title": ""
},
{
"docid": "87c33e325d074d8baefd56f6396f1c7a",
"text": "We present a recurrent model for semantic instance segmentation that sequentially generates binary masks and their associated class probabilities for every object in an image. Our proposed system is trainable end-to-end from an input image to a sequence of labeled masks and, compared to methods relying on object proposals, does not require postprocessing steps on its output. We study the suitability of our recurrent model on three different instance segmentation benchmarks, namely Pascal VOC 2012, CVPPP Plant Leaf Segmentation and Cityscapes. Further, we analyze the object sorting patterns generated by our model and observe that it learns to follow a consistent pattern, which correlates with the activations learned in the encoder part of our network.",
"title": ""
},
{
"docid": "1c576cf604526b448f0264f2c39f705a",
"text": "This paper introduces a high-security post-quantum stateless hash-based signature scheme that signs hundreds of messages per second on a modern 4-core 3.5GHz Intel CPU. Signatures are 41 KB, public keys are 1 KB, and private keys are 1 KB. The signature scheme is designed to provide long-term 2 security even against attackers equipped with quantum computers. Unlike most hash-based designs, this signature scheme is stateless, allowing it to be a drop-in replacement for current signature schemes.",
"title": ""
},
{
"docid": "c474df285da8106b211dc7fe62733423",
"text": "In this paper, we propose an effective method to recognize human actions using 3D skeleton joints recovered from 3D depth data of RGBD cameras. We design a new action feature descriptor for action recognition based on differences of skeleton joints, i.e., EigenJoints which combine action information including static posture, motion property, and overall dynamics. Accumulated Motion Energy (AME) is then proposed to perform informative frame selection, which is able to remove noisy frames and reduce computational cost. We employ non-parametric Naïve-Bayes-Nearest-Neighbor (NBNN) to classify multiple actions. The experimental results on several challenging datasets demonstrate that our approach outperforms the state-of-the-art methods. In addition, we investigate how many frames are necessary for our method to perform classification in the scenario of online action recognition. We observe that the first 30% to 40% frames are sufficient to achieve comparable results to that using the entire video sequences on the MSR Action3D dataset.",
"title": ""
},
{
"docid": "9d175a211ec3b0ee7db667d39c240e1c",
"text": "In recent years, there has been an increased effort to introduce coding and computational thinking in early childhood education. In accordance with the international trend, programming has become an increasingly growing focus in European education. With over 9.5 million iOS downloads, ScratchJr is the most popular freely available introductory programming language for young children (ages 5-7). This paper provides an overview of ScratchJr, and the powerful ideas from computer science it is designed to teach. In addition, data analytics are presented to show trends of usage in Europe and and how it compares to the rest of the world. Data reveals that countries with robust computer science initiatives such as the UK and the Nordic countries have high usage of ScratchJr.",
"title": ""
},
{
"docid": "d464711e6e07b61896ba6efe2bbfa5e4",
"text": "This paper presents a simple model for body-shadowing in off-body and body-to-body channels. The model is based on a body shadowing pattern associated with the on-body antenna, represented by a cosine function whose amplitude parameter is calculated from measurements. This parameter, i.e the maximum body-shadowing loss, is found to be linearly dependent on distance. The model was evaluated against a set of off-body channel measurements at 2.45 GHz in an indoor office environment, showing a good fit. The coefficient of determination obtained for the linear model of the maximum body-shadowing loss is greater than 0.6 in all considered scenarios, being higher than 0.8 for the ones with a static user.",
"title": ""
}
] | scidocsrr |
ae6eae748436bd9099d1b047c04e39c4 | EDGE DETECTION TECHNIQUES FOR IMAGE SEGMENTATION | [
{
"docid": "68990d2cb2ed45e1c8d30b2d7cb45926",
"text": "Methods for histogram thresholding based on the minimization of a threshold-dependent criterion function might not work well for images having multimodal histograms. We propose an approach to threshold the histogram according to the similarity between gray levels. Such a similarity is assessed through a fuzzy measure. In this way, we overcome the local minima that affect most of the conventional methods. The experimental results demonstrate the effectiveness of the proposed approach for both bimodal and multimodal histograms.",
"title": ""
},
{
"docid": "e14234696124c47d1860301c873f6685",
"text": "We propose a novel image segmentation technique using the robust, adaptive least k-th order squares (ALKS) estimator which minimizes the k-th order statistics of the squared of residuals. The optimal value of k is determined from the data and the procedure detects the homogeneous surface patch representing the relative majority of the pixels. The ALKS shows a better tolerance to structured outliers than other recently proposed similar techniques: Minimize the Probability of Randomness (MINPRAN) and Residual Consensus (RESC). The performance of the new, fully autonomous, range image segmentation algorithm is compared to several other methods. Index Terms|robust methods, range image segmentation, surface tting",
"title": ""
},
{
"docid": "6a96e3680d3d25fc8bcffe3b7e70968f",
"text": "All rights reserved. No part of this book may be reproduced or transmitted in any form or by any means, without permission in writing from the publisher. The author and publisher of this book have used their best efforts in preparing this book. These efforts include the development, research, and testing of the theories and programs to determine their effectiveness. The author and publisher shall not be liable in any event for incidental or consequential damages with, or arising out of, the furnishing, performance, or use of these programs. 1 1 Introduction Preview Digital image processing is an area characterized by the need for extensive experimental work to establish the viability of proposed solutions to a given problem. In this chapter we outline how a theoretical base and state-of-the-art software can be integrated into a prototyping environment whose objective is to provide a set of well-supported tools for the solution of a broad class of problems in digital image processing. Background An important characteristic underlying the design of image processing systems is the significant level of testing and experimentation that normally is required before arriving at an acceptable solution. This characteristic implies that the ability to formulate approaches and quickly prototype candidate solutions generally plays a major role in reducing the cost and time required to arrive at a viable system implementation. Little has been written in the way of instructional material to bridge the gap between theory and application in a well-supported software environment. The main objective of this book is to integrate under one cover a broad base of theoretical concepts with the knowledge required to implement those concepts using state-of-the-art image processing software tools. The theoretical underpinnings of the material in the following chapters are mainly from the leading textbook in the field: Digital Image Processing, by Gonzalez and Woods, published by Prentice Hall. The software code and supporting tools are based on the leading software package in the field: The MATLAB Image Processing Toolbox, † 1.1 † In the following discussion and in subsequent chapters we sometimes refer to Digital Image Processing by Gonzalez and Woods as \" the Gonzalez-Woods book, \" and to the Image Processing Toolbox as \" IPT \" or simply as the \" toolbox. \" 2 Chapter 1 I Introduction from The MathWorks, Inc. (see Section 1.3). The material in the present book shares the same design, notation, and style of presentation …",
"title": ""
}
] | [
{
"docid": "6bb1914cbbaf0ba27a8ab52dbec2152a",
"text": "This paper presents a novel local feature for 3D range image data called `the line image'. It is designed to be highly viewpoint invariant by exploiting the range image to efficiently detect 3D occupancy, producing a representation of the surface, occlusions and empty spaces. We also propose a strategy for defining keypoints with stable orientations which define regions of interest in the scan for feature computation. The feature is applied to the task of object classification on sparse urban data taken with a Velodyne laser scanner, producing good results.",
"title": ""
},
{
"docid": "7be0d43664c4ebb3c66f58c485a517ce",
"text": "We consider problems requiring to allocate a set of rectangular items to larger rectangular standardized units by minimizing the waste. In two-dimensional bin packing problems these units are finite rectangles, and the objective is to pack all the items into the minimum number of units, while in two-dimensional strip packing problems there is a single standardized unit of given width, and the objective is to pack all the items within the minimum height. We discuss mathematical models, and survey lower bounds, classical approximation algorithms, recent heuristic and metaheuristic methods and exact enumerative approaches. The relevant special cases where the items have to be packed into rows forming levels are also discussed in detail. 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "ed41127bf43b4f792f8cbe1ec652f7b2",
"text": "Today, more than 100 blockchain projects created to transform government systems are being conducted in more than 30 countries. What leads countries rapidly initiate blockchain projects? I argue that it is because blockchain is a technology directly related to social organization; Unlike other technologies, a consensus mechanism form the core of blockchain. Traditionally, consensus is not the domain of machines but rather humankind. However, blockchain operates through a consensus algorithm with human intervention; once that consensus is made, it cannot be modified or forged. Through utilization of Lawrence Lessig’s proposition that “Code is law,” I suggest that blockchain creates “absolute law” that cannot be violated. This characteristic of blockchain makes it possible to implement social technology that can replace existing social apparatuses including bureaucracy. In addition, there are three close similarities between blockchain and bureaucracy. First, both of them are defined by the rules and execute predetermined rules. Second, both of them work as information processing machines for society. Third, both of them work as trust machines for society. Therefore, I posit that it is possible and moreover unavoidable to replace bureaucracy with blockchain systems. In conclusion, I suggest five principles that should be adhered to when we replace bureaucracy with the blockchain system: 1) introducing Blockchain Statute law; 2) transparent disclosure of data and source code; 3) implementing autonomous executing administration; 4) building a governance system based on direct democracy and 5) making Distributed Autonomous Government(DAG).",
"title": ""
},
{
"docid": "7e8976250bd67e07fb71c6dd8b5be414",
"text": "With the rapid growth of product review forums, discussion groups, and Blogs, it is almost impossible for a customer to make an informed purchase decision. Different and possibly contradictory opinions written by different reviewers can even make customers more confused. In the last few years, mining customer reviews (opinion mining) has emerged as an interesting new research direction to address this need. One of the interesting problem in opinion mining is Opinion Question Answering (Opinion QA). While traditional QA can only answer factual questions, opinion QA aims to find the authors' sentimental opinions on a specific target. Current opinion QA systems suffers from several weaknesses. The main cause of these weaknesses is that these methods can only answer a question if they find a content similar to the given question in the given documents. As a result, they cannot answer majority questions like \"What is the best digital camera?\" nor comparative questions, e.g. \"Does SamsungY work better than CanonX?\". In this paper we address the problem of opinion question answering to answer opinion questions about products by using reviewers' opinions. Our proposed method, called Aspect-based Opinion Question Answering (AQA), support answering of opinion-based questions while improving the weaknesses of current techniques. AQA contains five phases: question analysis, question expansion, high quality review retrieval, subjective sentence extraction, and answer grouping. AQA adopts an opinion mining technique in the preprocessing phase to identify target aspects and estimate their quality. Target aspects are attributes or components of the target product that have been commented on in the review, e.g. 'zoom' and 'battery life' for a digital camera. We conduct experiments on a real life dataset, Epinions.com, demonstrating the improved effectiveness of the AQA in terms of the accuracy of the retrieved answers.",
"title": ""
},
{
"docid": "85908a576c13755e792d52d02947f8b3",
"text": "Quick Response Code has been widely used in the automatic identification fields. In order to adapting various sizes, a little dirty or damaged, and various lighting conditions of bar code image, this paper proposes a novel implementation of real-time Quick Response Code recognition using mobile, which is an efficient technology used for data transferring. An image processing system based on mobile is described to be able to binarize, locate, segment, and decode the QR Code. Our experimental results indicate that these algorithms are robust to real world scene image.",
"title": ""
},
{
"docid": "18b3328725661770be1f408f37c7eb64",
"text": "Researchers have proposed various machine learning algorithms for traffic sign recognition, which is a supervised multicategory classification problem with unbalanced class frequencies and various appearances. We present a novel graph embedding algorithm that strikes a balance between local manifold structures and global discriminative information. A novel graph structure is designed to depict explicitly the local manifold structures of traffic signs with various appearances and to intuitively model between-class discriminative information. Through this graph structure, our algorithm effectively learns a compact and discriminative subspace. Moreover, by using L2, 1-norm, the proposed algorithm can preserve the sparse representation property in the original space after graph embedding, thereby generating a more accurate projection matrix. Experiments demonstrate that the proposed algorithm exhibits better performance than the recent state-of-the-art methods.",
"title": ""
},
{
"docid": "511c4a62c32b32eb74761b0585564fe4",
"text": "In the previous chapters, we proposed several features for writer identification, historical manuscript dating and localization separately. In this chapter, we present a summarization of the proposed features for different applications by proposing a joint feature distribution (JFD) principle to design novel discriminative features which could be the joint distribution of features on adjacent positions or the joint distribution of different features on the same location. Following the proposed JFD principle, we introduce seventeen features, including twelve textural-based and five grapheme-based features. We evaluate these features for different applications from four different perspectives to understand handwritten documents beyond OCR, by writer identification, script recognition, historical manuscript dating and localization.",
"title": ""
},
{
"docid": "bd125a32cba00b4071c87aa42e7f3236",
"text": "With the advent of affordable depth sensors, 3D capture becomes more and more ubiquitous and already has made its way into commercial products. Yet, capturing the geometry or complete shapes of everyday objects using scanning devices (eg. Kinect) still comes with several challenges that result in noise or even incomplete shapes. Recent success in deep learning has shown how to learn complex shape distributions in a data-driven way from large scale 3D CAD Model collections and to utilize them for 3D processing on volumetric representations and thereby circumventing problems of topology and tessellation. Prior work has shown encouraging results on problems ranging from shape completion to recognition. We provide an analysis of such approaches and discover that training as well as the resulting representation are strongly and unnecessarily tied to the notion of object labels. Furthermore, deep learning research argues [1] that learning representation with over-complete model are more prone to overfitting compared to the approach that learns from noisy data. Thus, we investigate a full convolutional volumetric denoising auto encoder that is trained in a unsupervised fashion. It outperforms prior work on recognition as well as more challenging tasks like denoising and shape completion. In addition, our approach is atleast two order of magnitude faster at test time and thus, provides a path to scaling up 3D deep learning.",
"title": ""
},
{
"docid": "a6a2c027b809a98430ad80b837fa8090",
"text": "This paper presents a 60-GHz CMOS direct-conversion Doppler radar RF sensor with a clutter canceller for single-antenna noncontact human vital-signs detection. A high isolation quasi-circulator (QC) is designed to reduce the transmitting (Tx) power leakage (to the receiver). The clutter canceller performs cancellation for the Tx leakage power (from the QC) and the stationary background reflection clutter to enhance the detection sensitivity of weak vital signals. The integration of the 60-GHz RF sensor consists of the voltage-controlled oscillator, divided-by-2 frequency divider, power amplifier, QC, clutter canceller (consisting of variable-gain amplifier and 360 ° phase shifter), low-noise amplifier, in-phase/quadrature-phase sub-harmonic mixer, and three couplers. In the human vital-signs detection experimental measurement, at a distance of 75 cm, the detected heartbeat (1-1.3 Hz) and respiratory (0.35-0.45 Hz) signals can be clearly observed with a 60-GHz 17-dBi patch-array antenna. The RF sensor is fabricated in 90-nm CMOS technology with a chip size of 2 mm×2 mm and a consuming power of 217 mW.",
"title": ""
},
{
"docid": "78ccfdac121daaae3abe3f8f7c73482b",
"text": "We present a method for constructing smooth n-direction fields (line fields, cross fields, etc.) on surfaces that is an order of magnitude faster than state-of-the-art methods, while still producing fields of equal or better quality. Fields produced by the method are globally optimal in the sense that they minimize a simple, well-defined quadratic smoothness energy over all possible configurations of singularities (number, location, and index). The method is fully automatic and can optionally produce fields aligned with a given guidance field such as principal curvature directions. Computationally the smoothest field is found via a sparse eigenvalue problem involving a matrix similar to the cotan-Laplacian. When a guidance field is present, finding the optimal field amounts to solving a single linear system.",
"title": ""
},
{
"docid": "e3739a934ecd7b99f2d35a19f2aed5cf",
"text": "We consider distributed algorithms for solving dynamic programming problems whereby several processors participate simultaneously in the computation while maintaining coordination by information exchange via communication links. A model of asynchronous distributed computation is developed which requires very weak assumptions on the ordering of computations, the timing of information exchange, the amount of local information needed at each computation node, and the initial conditions for the algorithm. The class of problems considered is very broad and includes shortest path problems, and finite and infinite horizon stochastic optimal control problems. When specialized to a shortest path problem the algorithm reduces to the algorithm originally implemented for routing of messages in the ARPANET.",
"title": ""
},
{
"docid": "bbb4f7b90ade0ffbf7ba3e598c18a78f",
"text": "In this paper, an analysis of the resistance of multi-track coils in printed circuit board (PCB) implementations, where the conductors have rectangular cross-section, for spiral planar coils is carried out. For this purpose, different analytical losses models for the mentioned conductors have been reviewed. From this review, we conclude that for the range of frequencies, the coil dimensions and the planar configuration typically used in domestic induction heating, the application in which we focus, these analysis are unsatisfactory. Therefore, in this work the resistance of multi-track winding has been calculated by means of finite element analysis (FEA) tool. These simulations provide us some design guidelines that allow us to optimize the design of multi-track coils for domestic induction heating. Furthermore, several prototypes are used to verify the simulated results, both single-turn coils and multi-turn coils.",
"title": ""
},
{
"docid": "96bd149346554dac9e3889f0b1569be7",
"text": "BACKGROUND\nFlight related low back pain (LBP) among helicopter pilots is frequent and may influence flight performance. Prolonged confined sitting during flights seems to weaken lumbar trunk (LT) muscles with associated secondary transient pain. Aim of the study was to investigate if structured training could improve muscular function and thus improve LBP related to flying.\n\n\nMETHODS\n39 helicopter pilots (35 men and 4 women), who reported flying related LBP on at least 1 of 3 missions last month, were allocated to two training programs over a 3-month period. Program A consisted of 10 exercises recommended for general LBP. Program B consisted of 4 exercises designed specifically to improve LT muscular endurance. The pilots were examined before and after the training using questionnaires for pain, function, quality of health and tests of LT muscular endurance as well as ultrasound measurements of the contractility of the lumbar multifidus muscle (LMM).\n\n\nRESULTS\nApproximately half of the participants performed the training per-protocol. Participants in this subset group had comparable baseline characteristics as the total study sample. Pre and post analysis of all pilots included, showed participants had marked improvement in endurance and contractility of the LMM following training. Similarly, participants had improvement in function and quality of health. Participants in program B had significant improvement in pain, function and quality of health.\n\n\nCONCLUSIONS\nThis study indicates that participants who performed a three months exercise program had improved muscle endurance at the end of the program. The helicopter pilots also experienced improved function and quality of health.\n\n\nTRIAL REGISTRATION\nIdentifier: NCT01788111 Registration date; February 5th, 2013, verified April 2016.",
"title": ""
},
{
"docid": "31dbedbcdb930ead1f8274ff2c181fcb",
"text": "This paper sums up lessons learned from a sequence of cooperative design workshops where end users were enabled to design mobile systems through scenario building, role playing, and low-fidelity prototyping. We present a resulting fixed workshop structure with well-chosen constraints that allows for end users to explore and design new technology and work practices. In these workshops, the systems developers get input to design from observing how users stage and act out current and future use scenarios and improvise new technology to fit their needs. A theoretical framework is presented to explain the creative processes involved and the workshop as a user-centered design method. Our findings encourage us to recommend the presented workshop structure for design projects involving mobility and computer-mediated communication, in particular project where the future use of the resulting products and services also needs to be designed.",
"title": ""
},
{
"docid": "0048b244bd55a724f9bcf4dbf5e551a8",
"text": "In the research reported here, we investigated the debiasing effect of mindfulness meditation on the sunk-cost bias. We conducted four studies (one correlational and three experimental); the results suggest that increased mindfulness reduces the tendency to allow unrecoverable prior costs to influence current decisions. Study 1 served as an initial correlational demonstration of the positive relationship between trait mindfulness and resistance to the sunk-cost bias. Studies 2a and 2b were laboratory experiments examining the effect of a mindfulness-meditation induction on increased resistance to the sunk-cost bias. In Study 3, we examined the mediating mechanisms of temporal focus and negative affect, and we found that the sunk-cost bias was attenuated by drawing one's temporal focus away from the future and past and by reducing state negative affect, both of which were accomplished through mindfulness meditation.",
"title": ""
},
{
"docid": "f83d8a69a4078baf4048b207324e505f",
"text": "Low-dose computed tomography (LDCT) has attracted major attention in the medical imaging field, since CT-associated X-ray radiation carries health risks for patients. The reduction of the CT radiation dose, however, compromises the signal-to-noise ratio, which affects image quality and diagnostic performance. Recently, deep-learning-based algorithms have achieved promising results in LDCT denoising, especially convolutional neural network (CNN) and generative adversarial network (GAN) architectures. This paper introduces a conveying path-based convolutional encoder-decoder (CPCE) network in 2-D and 3-D configurations within the GAN framework for LDCT denoising. A novel feature of this approach is that an initial 3-D CPCE denoising model can be directly obtained by extending a trained 2-D CNN, which is then fine-tuned to incorporate 3-D spatial information from adjacent slices. Based on the transfer learning from 2-D to 3-D, the 3-D network converges faster and achieves a better denoising performance when compared with a training from scratch. By comparing the CPCE network with recently published work based on the simulated Mayo data set and the real MGH data set, we demonstrate that the 3-D CPCE denoising model has a better performance in that it suppresses image noise and preserves subtle structures.",
"title": ""
},
{
"docid": "b16407fc67058110b334b047bcfea9ac",
"text": "In Educational Psychology (1997/1926), Vygotsky pleaded for a realistic approach to children’s literature. He is, among other things, critical of Chukovsky’s story “Crocodile” and maintains that this story deals with nonsense and gibberish, without social relevance. This approach Vygotsky would leave soon, and, in Psychology of Art (1971/1925), in which he develops his theory of art, he talks about connections between nursery rhymes and children’s play, exactly as the story of Chukovsky had done with the following argument: By dragging a child into a topsy-turvy world, we help his intellect work and his perception of reality. In his book Imagination and Creativity in Childhood (1995/1930), Vygotsky goes further and develops his theory of creativity. The book describes how Vygotsky regards the creative process of the human consciousness, the link between emotion and thought, and the role of the imagination. To Vygotsky, this brings to the fore the issue of the link between reality and imagination, and he discusses the issue of reproduction and creativity, both of which relate to the entire scope of human activity. Interpretations of Vygotsky in the 1990s have stressed the role of literature and the development of a cultural approach to psychology and education. It has been overlooked that Vygotsky started his career with work on the psychology of art. In this article, I want to describe Vygotsky’s theory of creativity and how he developed it. He started with a realistic approach to imagination, and he ended with a dialectical attitude to imagination. Criticism of Chukovsky’s “Crocodile” In 1928, the “Crocodile” story was forbidden. It was written by Korney Chukovsky (1882–1969). In his book From Two to Five Years, there is a chapter with the title “Struggle for the Fairy-Tale,” in which he attacks his antagonists, the pedologists, whom he described as a miserable group of theoreticans who studied children’s reading and maintained that the children of the proletarians needed neither “fairy-tales nor toys, or songs” (Chukovsky, 1975, p. 129). He describes how the pedologists let the word imagination become an abuse and how several stories were forbidden, for example, “Crocodile.” One of the slogans of the antagonists of fantasy literature was chukovskies, a term meaning of anthropomorphism and being bourgeois. In 1928, Krupskaja criticized Chukovky, the same year as Stalin was in power. Krupskaja maintained that the content of children’s literature ought to be concrete and realistic to inspire the children to be conscious communists. As an atheist, she was against everything that smelled of mysticism and religion. She pointed out, in an article in Pravda, that “Crocodile” did not live up to the demands that one could make on children’s literature. Many authors, however, came to Chukovsky’s defense, among them A. Tolstoy (Chukovsky, 1975). Ten years earlier in 1918, only a few months after the October Revolution, the first demands were made that children’s literature should be put in the service of communist ideology. It was necessary to replace old bourgeois books, and new writers were needed. In the first attempts to create a new children’s literature, a significant role was played by Maksim Gorky. His ideal was realistic literature with such moral ideals as heroism and optimism. Creativity Research Journal Copyright 2003 by 2003, Vol. 15, Nos. 2 & 3, 245–251 Lawrence Erlbaum Associates, Inc. 
",
"title": ""
},
{
"docid": "c84d41e54b12cca847135dfc2e9e13f8",
"text": "PURPOSE\nBaseline restraint prevalence for surgical step-down unit was 5.08%, and for surgical intensive care unit, it was 25.93%, greater than the National Database of Nursing Quality Indicators (NDNQI) mean. Project goal was sustained restraint reduction below the NDNQI mean and maintaining patient safety.\n\n\nBACKGROUND/RATIONALE\nSoft wrist restraints are utilized for falls reduction and preventing device removal but are not universally effective and may put patients at risk of injury. Decreasing use of restrictive devices enhances patient safety and decreases risk of injury.\n\n\nDESCRIPTION\nPhase 1 consisted of advanced practice nurse-facilitated restraint rounds on each restrained patient including multidisciplinary assessment and critical thinking with bedside clinicians including reevaluation for treatable causes of agitation and restraint indications. Phase 2 evaluated less restrictive mitts, padded belts, and elbow splint devices. Following a 4-month trial, phase 3 expanded the restraint initiative including critical care requiring education and collaboration among advanced practice nurses, physician team members, and nurse champions.\n\n\nEVALUATION AND OUTCOMES\nPhase 1 decreased surgical step-down unit restraint prevalence from 5.08% to 3.57%. Phase 2 decreased restraint prevalence from 3.57% to 1.67%, less than the NDNQI mean. Phase 3 expansion in surgical intensive care units resulted in wrist restraint prevalence from 18.19% to 7.12% within the first year, maintained less than the NDNQI benchmarks while preserving patient safety.\n\n\nINTERPRETATION/CONCLUSION\nThe initiative produced sustained reduction in acute/critical care well below the NDNQI mean without corresponding increase in patient medical device removal.\n\n\nIMPLICATIONS\nBy managing causes of agitation, need for restraints is decreased, protecting patients from injury and increasing patient satisfaction. Follow-up research may explore patient experiences with and without restrictive device use.",
"title": ""
},
{
"docid": "41cfa1840ef8b6f35865b220c087302b",
"text": "Ultra-high voltage (>10 kV) power devices based on SiC are gaining significant attentions since Si power devices are typically at lower voltage levels. In this paper, a world record 22kV Silicon Carbide (SiC) p-type ETO thyristor is developed and reported as a promising candidate for ultra-high voltage applications. The device is based on a 2cm2 22kV p type gate turn off thyristor (p-GTO) structure. Its static as well as dynamic performances are analyzed, including the anode to cathode blocking characteristics, forward conduction characteristics at different temperatures, turn-on and turn-off dynamic performances. The turn-off energy at 6kV, 7kV and 8kV respectively is also presented. In addition, theoretical boundary of the reverse biased safe operation area (RBSOA) of the 22kV SiC ETO is obtained by simulations and the experimental test also demonstrated a wide RBSOA.",
"title": ""
},
{
"docid": "945bf7690169b5f2e615324fb133bc19",
"text": "Exponential growth in the number of scientific publications yields the need for effective automatic analysis of rhetorical aspects of scientific writing. Acknowledging the argumentative nature of scientific text, in this work we investigate the link between the argumentative structure of scientific publications and rhetorical aspects such as discourse categories or citation contexts. To this end, we (1) augment a corpus of scientific publications annotated with four layers of rhetoric annotations with argumentation annotations and (2) investigate neural multi-task learning architectures combining argument extraction with a set of rhetorical classification tasks. By coupling rhetorical classifiers with the extraction of argumentative components in a joint multi-task learning setting, we obtain significant performance gains for different rhetorical analysis tasks.",
"title": ""
}
] | scidocsrr |
10e2cbfa32f8e2e6759561c28dfd1938 | Constructing Thai Opinion Mining Resource: A Case Study on Hotel Reviews | [
{
"docid": "8a2586b1059534c5a23bac9c1cc59906",
"text": "The web contains a wealth of product reviews, but sifting through them is a daunting task. Ideally, an opinion mining tool would process a set of search results for a given item, generating a list of product attributes (quality, features, etc.) and aggregating opinions about each of them (poor, mixed, good). We begin by identifying the unique properties of this problem and develop a method for automatically distinguishing between positive and negative reviews. Our classifier draws on information retrieval techniques for feature extraction and scoring, and the results for various metrics and heuristics vary depending on the testing situation. The best methods work as well as or better than traditional machine learning. When operating on individual sentences collected from web searches, performance is limited due to noise and ambiguity. But in the context of a complete web-based tool and aided by a simple method for grouping sentences into attributes, the results are qualitatively quite useful.",
"title": ""
}
] | [
{
"docid": "b1bb5751e409d0fe44754624a4145e70",
"text": "Capacity planning determines the optimal product mix based on the available tool sets and allocates production capacity according to the forecasted demands for the next few months. MaxIt is the previous capacity planning system for Intel's Flash Product Group (FPG) Assembly & Test Manufacturing (ATM). It only applied to single product family scenarios with simple process routing. However, new Celluar Handhold Group (CHG) products need to go through flexible and reentrant ATM routes. In this paper, we introduce MaxItPlus, which is an enhanced MaxIt using MILP (mixed integer linear programming) to conduct capacity planning of multiple product families with mixed process routes in a multifactory ATM environment. We also present the detailed mathematical formulation, the system architecture, and implementation results. The project will help Intel global Flash ATM to achieve a single and efficient capacity planning process for all FPG and CHG products and gain $10 M in marginal profit (as determined by the finance department)",
"title": ""
},
{
"docid": "dd11d7291d8f0ee2313b74dc5498acfa",
"text": "Going further At this point, the theorem is proved. While for every summarizer σ there exists at least one tuple (θ,O), in practice there exist multiple tuples, and the one proposed by the proof would not be useful to rank models of summary quality. We can formulate an algorithm which constructs θ from σ and which yields an ordering of candidate summaries. Let σD\\{s1,...,sn} be the summarizer σ which still uses D as initial document collection, but which is not allowed to output sentences from {s1, . . . , sn} in the final summary. For a given summary S to score, let Rσ,S be the smallest set of sentences {s1, . . . , sn} that one has to remove fromD such that σD\\R outputs S. Then the definition of θσ follows:",
"title": ""
},
{
"docid": "11b20602fc9d6e97a5bcc857da7902d0",
"text": "This research investigates the Quality of Service (QoS) interaction at the edge of differentiated service (DiffServ) domain, denoted by video gateway (VG). VG is responsible for coordinating the QoS mapping between video applications and DiffServ enabled network. To accomplish the goal of achieving economical and high-quality end-to-end video streaming, which utilizes its awareness of relative service differentiation, the proposed QoS control framework includes the following three components: 1) the relative priority based indexing and categorization of streaming video content at sender, 2) the differentiated QoS levels with load variation in DiffServ networks, and 3) the feedforward and feedback mechanisms assisting QoS mapping of categorized index to DS level at the proposed VG. Especially, we focus on building a framework for dynamic QoS mapping, which intends to overcome both the QoS demand variations of CM applications (e.g., varying priorities from aggregated/categorized packets) and the QoS supply variations of DiffServ network (e.g., varying loss/delay due to fluctuating network loads). Thus, with the proposed QoS controls in both feedforward and feedback fashion, enhanced quality provisioning for CM applications (especially video streaming) is investigated under the given pricing model (e.g., DS level differentiated price/packet).",
"title": ""
},
{
"docid": "c4f9c924963cadc658ad9c97560ea252",
"text": "A novel broadband circularly polarized (CP) antenna is proposed. The operating principle of this CP antenna is different from those of conventional CP antennas. An off-center-fed dipole is introduced to achieve the 90° phase difference required for circular polarization. The new CP antenna consists of two off-center-fed dipoles. Combining such two new CP antennas leads to a bandwidth enhancement for circular polarization. A T-shaped microstrip probe is used to excite the broadband CP antenna, featuring a simple planar configuration. It is shown that the new broadband CP antenna achieves an axial ratio (AR) bandwidth of 55% (1.69-3.0 GHz) for AR <; 3 dB, an impedance bandwidth of 60% (1.7-3.14 GHz) for return loss (RL) > 15 dB, and an antenna gain of 6-9 dBi. The new mechanism for circular polarization is described and an experimental verification is presented.",
"title": ""
},
{
"docid": "5268fd63c99f43d1a155c0078b2e5df5",
"text": "With Docker gaining widespread popularity in the recent years, the container scheduler becomes a crucial role for the exploding containerized applications and services. In this work, the container host energy conservation, the container image pulling costs from the image registry to the container hosts and the workload network transition costs from the clients to the container hosts are evaluated in combination. By modeling the scheduling problem as an integer linear programming, an effective and adaptive scheduler is proposed. Impressive cost savings were achieved compared to Docker Swarm scheduler. Moreover, it can be easily integrated into the open-source container orchestration frameworks.",
"title": ""
},
{
"docid": "4645d0d7b1dfae80657f75d3751ef72a",
"text": "Machine learning approaches are increasingly successful in image-based diagnosis, disease prognosis, and risk assessment. This paper highlights new research directions and discusses three main challenges related to machine learning in medical imaging: coping with variation in imaging protocols, learning from weak labels, and interpretation and evaluation of results.",
"title": ""
},
{
"docid": "203312195c3df688a594d0c05be72b5a",
"text": "Convolutional Neural Networks (CNNs) have been recently introduced in the domain of session-based next item recommendation. An ordered collection of past items the user has interacted with in a session (or sequence) are embedded into a 2-dimensional latent matrix, and treated as an image. The convolution and pooling operations are then applied to the mapped item embeddings. In this paper, we first examine the typical session-based CNN recommender and show that both the generative model and network architecture are suboptimal when modeling long-range dependencies in the item sequence. To address the issues, we introduce a simple, but very effective generative model that is capable of learning high-level representation from both short- and long-range item dependencies. The network architecture of the proposed model is formed of a stack of holed convolutional layers, which can efficiently increase the receptive fields without relying on the pooling operation. Another contribution is the effective use of residual block structure in recommender systems, which can ease the optimization for much deeper networks. The proposed generative model attains state-of-the-art accuracy with less training time in the next item recommendation task. It accordingly can be used as a powerful recommendation baseline to beat in future, especially when there are long sequences of user feedback.",
"title": ""
},
{
"docid": "4ab58e47f1f523ba3f48c37bc918696e",
"text": "In this work, we design a neural network for recognizing emotions in speech, using the standard IEMOCAP dataset. Following the latest advances in audio analysis, we use an architecture involving both convolutional layers, for extracting highlevel features from raw spectrograms, and recurrent ones for aggregating long-term dependencies. Applying techniques of data augmentation, layerwise learning rate adjustment and batch normalization, we obtain highly competitive results, with 64.5% weighted accuracy and 61.7% unweighted accuracy on four emotions. Moreover, we show that the model performance is strongly correlated with the labeling confidence, which highlights a fundamental difficulty in emotion recognition.",
"title": ""
},
{
"docid": "858a5ed092f02d057437885ad1387c9f",
"text": "The current state-of-the-art singledocument summarization method generates a summary by solving a Tree Knapsack Problem (TKP), which is the problem of finding the optimal rooted subtree of the dependency-based discourse tree (DEP-DT) of a document. We can obtain a gold DEP-DT by transforming a gold Rhetorical Structure Theory-based discourse tree (RST-DT). However, there is still a large difference between the ROUGE scores of a system with a gold DEP-DT and a system with a DEP-DT obtained from an automatically parsed RST-DT. To improve the ROUGE score, we propose a novel discourse parser that directly generates the DEP-DT. The evaluation results showed that the TKP with our parser outperformed that with the state-of-the-art RST-DT parser, and achieved almost equivalent ROUGE scores to the TKP with the gold DEP-DT.",
"title": ""
},
{
"docid": "ef95b5b3a0ff0ab0907565305d597a9d",
"text": "Control flow defenses against ROP either use strict, expensive, but strong protection against redirected RET instructions with shadow stacks, or much faster but weaker protections without. In this work we study the inherent overheads of shadow stack schemes. We find that the overhead is roughly 10% for a traditional shadow stack. We then design a new scheme, the parallel shadow stack, and show that its performance cost is significantly less: 3.5%. Our measurements suggest it will not be easy to improve performance on current x86 processors further, due to inherent costs associated with RET and memory load/store instructions. We conclude with a discussion of the design decisions in our shadow stack instrumentation, and possible lighter-weight alternatives.",
"title": ""
},
{
"docid": "64306a76b61bbc754e124da7f61a4fbe",
"text": "For over 50 years, electron beams have been an important modality for providing an accurate dose of radiation to superficial cancers and disease and for limiting the dose to underlying normal tissues and structures. This review looks at many of the important contributions of physics and dosimetry to the development and utilization of electron beam therapy, including electron treatment machines, dose specification and calibration, dose measurement, electron transport calculations, treatment and treatment-planning tools, and clinical utilization, including special procedures. Also, future changes in the practice of electron therapy resulting from challenges to its utilization and from potential future technology are discussed.",
"title": ""
},
{
"docid": "9d2a73c8eac64ed2e1af58a5883229c3",
"text": "Tetyana Sydorenko Michigan State University This study examines the effect of input modality (video, audio, and captions, i.e., onscreen text in the same language as audio) on (a) the learning of written and aural word forms, (b) overall vocabulary gains, (c) attention to input, and (d) vocabulary learning strategies of beginning L2 learners. Twenty-six second-semester learners of Russian participated in this study. Group one (N = 8) saw video with audio and captions (VAC); group two (N = 9) saw video with audio (VA); group three (N = 9) saw video with captions (VC). All participants completed written and aural vocabulary tests and a final questionnaire.",
"title": ""
},
{
"docid": "236d3cb8566d4ae72add4a4b8b1f1fcc",
"text": "SAP HANA is a pioneering, and one of the best performing, data platform designed from the grounds up to heavily exploit modern hardware capabilities, including SIMD, and large memory and CPU footprints. As a comprehensive data management solution, SAP HANA supports the complete data life cycle encompassing modeling, provisioning, and consumption. This extended abstract outlines the vision and planned next step of the SAP HANA evolution growing from a core data platform into an innovative enterprise application platform as the foundation for current as well as novel business applications in both on-premise and on-demand scenarios. We argue that only a holistic system design rigorously applying co-design at di↵erent levels may yield a highly optimized and sustainable platform for modern enterprise applications. 1. THE BEGINNING: SAP HANA DATA PLATFORM A comprehensive data management solution has become one of the most critical assets in large enterprises. Modern data management solutions must cover a wide spectrum of additional data structures ranging from simple keyvalues models to complex graph structured data sets and document-centric data stores. Complex query and manipulation patterns are issued against the database reflecting the algorithmic side of complex enterprise applications. Additionally, data consumption activities with analytical query patterns are no longer reserved for decision makers or specialized data scientists but are increasingly becoming an integral part of complex operational business processes requiring support for analytical as well as transactional workloads managed within the same system [4]. Dealing with these challenges [5] demanded a complete re-thinking of traditional database architectures and data management approaches now made possible by advances in hardware architectures. The development of SAP HANA accepted this challenge head on and started a new generation Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Articles from this volume were invited to present their results at The 39th International Conference on Very Large Data Bases, August 26th 30th 2013, Riva del Garda, Trento, Italy. Proceedings of the VLDB Endowment, Vol. 6, No. 11 Copyright 2013 VLDB Endowment 2150-8097/13/09... $ 10.00. Figure 1: The SAP HANA platform of database system design. The SAP HANA database server now comprises a centrally, and tightly, orchestrated collection of di↵erent processing capabilities, e.g., an in-memory columnar relational store, a graph engine, native support for text processing, comprehensive spatial support, etc., all running within a single system environment and, therefore, within a single transactional sphere of control without the need for data replication and synchronization [2]. Secondly, and most importantly, SAP HANA has triggered a major shift in the database industry from the classical disk-centric database system design to a ground breaking main-memory centric system design [3]. 
The mainstream availability of very large main memory and CPU core footprints within single compute nodes, combined with SIMD architectures and sophisticated cluster systems based on high speed interconnects, was and remains, the central design guideline of the SAP HANA database server. SAP HANA was the first commercial system to systematically reflect, and exploit, the shift in memory hierarchies and CPU architectures in order to optimize data structures and access paths. As a result, SAP HANA has yielded orders of magnitude performance gains thereby opening up completely novel application opportunities. Most of the core design advances behind SAP HANA are now finding their way into mainstream database system research and development, thereby reflecting its pioneering role. As a foundational tenet, we see rigorous application of Hardware/Database co-design principles as the main success factor to systematically exploit the underlying hardware platform: Literally every core SAP HANA data structure and routine has been systematically inspected, redesigned",
"title": ""
},
{
"docid": "23583b155fc8ec3301cfef805f568e57",
"text": "We address the problem of covering an environment with robots equipped with sensors. The robots are heterogeneous in that the sensor footprints are different. Our work uses the location optimization framework in with three significant extensions. First, we consider robots with different sensor footprints, allowing, for example, aerial and ground vehicles to collaborate. We allow for finite size robots which enables implementation on real robotic systems. Lastly, we extend the previous work allowing for deployment in non convex environments.",
"title": ""
},
{
"docid": "cf0b98dfd188b7612577c975e08b0c92",
"text": "Depression is a major cause of disability world-wide. The present paper reports on the results of our participation to the depression sub-challenge of the sixth Audio/Visual Emotion Challenge (AVEC 2016), which was designed to compare feature modalities (audio, visual, interview transcript-based) in gender-based and gender-independent modes using a variety of classification algorithms. In our approach, both high and low level features were assessed in each modality. Audio features were extracted from the low-level descriptors provided by the challenge organizers. Several visual features were extracted and assessed including dynamic characteristics of facial elements (using Landmark Motion History Histograms and Landmark Motion Magnitude), global head motion, and eye blinks. These features were combined with statistically derived features from pre-extracted features (emotions, action units, gaze, and pose). Both speech rate and word-level semantic content were also evaluated. Classification results are reported using four different classification schemes: i) gender-based models for each individual modality, ii) the feature fusion model, ii) the decision fusion model, and iv) the posterior probability classification model. Proposed approaches outperforming the reference classification accuracy include the one utilizing statistical descriptors of low-level audio features. This approach achieved f1-scores of 0.59 for identifying depressed and 0.87 for identifying not-depressed individuals on the development set and 0.52/0.81, respectively for the test set.",
"title": ""
},
{
"docid": "cbc6bd586889561cc38696f758ad97d2",
"text": "Introducing a new hobby for other people may inspire them to join with you. Reading, as one of mutual hobby, is considered as the very easy hobby to do. But, many people are not interested in this hobby. Why? Boring is the reason of why. However, this feel actually can deal with the book and time of you reading. Yeah, one that we will refer to break the boredom in reading is choosing design of experiments statistical principles of research design and analysis as the reading material.",
"title": ""
},
{
"docid": "88285b058e6b93c2b31e9b1b8d6b657e",
"text": "Corporate incubators for technology development are a recent phenomenon whose functioning and implications are not yet well understood. The resource-based view can offer an explanatory model on how corporate incubators function as specialised corporate units that hatch new businesses. While tangible resources, such as the financial, physical and even explicit knowledge flow, are all visible, and therefore easy to measure, intangible resources such as tacit knowledge and branding flow are harder to detect and localise. Managing the resource flow requires the initial allocation of resources to the corporate incubator during its set-up as well as a continuous resource flow to the technology venture and, during the harvest phase, also from it. Two levels of analysis need to be distinguished: (1) the resource flow between the corporate incubator and the technology venture and (2) the resource flow interface between the corporate incubator and the technology venture. Our empirical findings are based on two phases: First, in-depth case studies of 22 companies through 47 semi-structured interviews that were conducted with managers of large technology-intensive corporations’ corporate incubators in Europe and the U.S., and second, an analysis of the European Commission’s benchmarking survey of 77 incubators.",
"title": ""
},
{
"docid": "e6e6eb1f1c0613a291c62064144ff0ba",
"text": "Mobile phones have become the most popular way to communicate with other individuals. While cell phones have become less of a status symbol and more of a fashion statement, they have created an unspoken social dependency. Adolescents and young adults are more likely to engage in SMS messing, making phone calls, accessing the internet from their phone or playing a mobile driven game. Once pervaded by boredom, teenagers resort to instant connection, to someone, somewhere. Sensation seeking behavior has also linked adolescents and young adults to have the desire to take risks with relationships, rules and roles. Individuals seek out entertainment and avoid boredom at all times be it appropriate or inappropriate. Cell phones are used for entertainment, information and social connectivity. It has been demonstrated that individuals with low self – esteem use cell phones to form and maintain social relationships. They form an attachment with cell phone which molded their mind that they cannot function without their cell phone on a day-to-day basis. In this context, the study attempts to examine the extent of use of mobile phone and its influence on the academic performance of the students. A face to face survey using structured questionnaire was the method used to elicit the opinions of students between the age group of 18-25 years in three cities covering all the three regions the State of Andhra Pradesh in India. The survey was administered among 1200 young adults through two stage random sampling to select the colleges and respondents from the selected colleges, with 400 from each city. In Hyderabad, 201 males and 199 females participated in the survey. In Visakhapatnam, 192 males and 208 females participated. In Tirupati, 220 males and 180 females completed the survey. Two criteria were taken into consideration while choosing the participants for the survey. The participants are college-going and were mobile phone users. Each of the survey responses was entered and analyzed using SPSS software. The Statistical Package for Social Sciences (SPSS 16) had been used to work out the distribution of samples in terms of percentages for each specified parameter.",
"title": ""
},
{
"docid": "4b3c69e446dcf1d237db63eb4f106dd7",
"text": "Creating linguistic annotations requires more than just a reliable annotation scheme. Annotation can be a complex endeavour potentially involving many people, stages, and tools. This chapter outlines the process of creating end-toend linguistic annotations, identifying specific tasks that researchers often perform. Because tool support is so central to achieving high quality, reusable annotations with low cost, the focus is on identifying capabilities that are necessary or useful for annotation tools, as well as common problems these tools present that reduce their utility. Although examples of specific tools are provided in many cases, this chapter concentrates more on abstract capabilities and problems because new tools appear continuously, while old tools disappear into disuse or disrepair. The two core capabilities tools must have are support for the chosen annotation scheme and the ability to work on the language under study. Additional capabilities are organized into three categories: those that are widely provided; those that often useful but found in only a few tools; and those that have as yet little or no available tool support. 1 Annotation: More than just a scheme Creating manually annotated linguistic corpora requires more than just a reliable annotation scheme. A reliable scheme, of course, is a central ingredient to successful annotation; but even the most carefully designed scheme will not answer a number of practical questions about how to actually create the annotations, progressing from raw linguistic data to annotated linguistic artifacts that can be used to answer interesting questions or do interesting things. Annotation, especially high-quality annotation of large language datasets, can be a complex process potentially involving many people, stages, and tools, and the scheme only specifies the conceptual content of the annotation. By way of example, the following questions are relevant to a text annotation project and are not answered by a scheme: How should linguistic artifacts be prepared? Will the originals be annotated directly, or will their textual content be extracted into separate files for annotation? In the latter case, what layout or formatting will be kept (lines, paragraphs page breaks, section headings, highlighted text)? What file format will be used? How will typographical errors be handled? Will typos be ignored, changed in the original, changed in extracted content, or encoded as an additional annotation? Who will be allowed to make corrections: the annotators themselves, adjudicators, or perhaps only the project manager? How will annotators be provided artifacts to annotate? How will the order of annotation be specified (if at all), and how will this order be enforced? How will the project manager ensure that each document is annotated the appropriate number of times (e.g., by two different people for double annotation). What inter-annotator agreement measures (IAAs) will be measured, and when? Will IAAs be measured continuously, on batches, or on other subsets of the corpus? How will their measurement at the right time be enforced? Will IAAs be used to track annotator training? If so, what level of IAA will be considered to indicate that training has succeeded? These questions are only a small selection of those that arise during the practical process of conducting annotation. 
The first goal of this chapter is to give an overview of the process of annotation from start to finish, pointing out these sorts of questions and subtasks for each stage. We will start with a known conceptual framework for the annotation process, the MATTER framework (Pustejovsky & Stubbs, 2013) and expand upon it. Our expanded framework is not guaranteed to be complete, but it will give a reader a very strong flavor of the kind of issues that arise so that they can start to anticipate them in the design of their own annotation project. The second goal is to explore the capabilities required by annotation tools. Tool support is central to effecting high quality, reusable annotations with low cost. The focus will be on identifying capabilities that are necessary or useful for annotation tools. Again, this list will not be exhaustive but it will be fairly representative, as the majority of it was generated by surveying a number of annotation experts about their opinions of available tools. Also listed are common problems that reduce tool utility (gathered during the same survey). Although specific examples of tools will be provided in many cases, the focus will be on more abstract capabilities and problems because new tools appear all the time while old tools disappear into disuse or disrepair. Before beginning, it is well to first introduce a few terms. By linguistic artifact, or just artifact, we mean the object to which annotations are being applied. These could be newspaper articles, web pages, novels, poems, TV 2 Mark A. Finlayson and Tomaž Erjavec shows, radio broadcasts, images, movies, or something else that involves language being captured in a semipermanent form. When we use the term document we will generally mean textual linguistic artifacts such as books, articles, transcripts, and the like. By annotation scheme, or just scheme, we follow the terminology as given in the early chapters of this volume, where a scheme comprises a linguistic theory, a derived model of a phenomenon of interest, a specification that defines the actual physical format of the annotation, and the guidelines that explain to an annotator how to apply the specification to linguistic artifacts. (citation to Chapter III by Ide et al.) By computing platform, or just platform, we mean any computational system on which an annotation tool can be run; classically this has meant personal computers, either desktops or laptops, but recently the range of potential computing platforms has expanded dramatically, to include on the one hand things like web browsers and mobile devices, and, on the other, internet-connected annotation servers and service oriented architectures. Choice of computing platform is driven by many things, including the identity of the annotators and their level of sophistication. We will speak of the annotation process or just process within an annotation project. By process, we mean any procedure or activity, at any level of granularity, involved in the production of annotation. This potentially encompasses everything from generating the initial idea, applying the annotation to the artifacts, to archiving the annotated documents for distribution. Although traditionally not considered part of annotation per se, we might also include here writing academic papers about the results of the annotation, as these activities also sometimes require annotation-focused tool support. We will also speak of annotation tools. 
By tool we mean any piece of computer software that runs on a computing platform that can be used to implement or carry out a process in the annotation project. Classically conceived annotation tools include software such as the Alembic workbench, Callisto, or brat (Day et al., 1997; Day, McHenry, Kozierok, & Riek, 2004; Stenetorp et al., 2012), but tools can also include software like Microsoft Word or Excel, Apache Tomcat (to run web servers), Subversion or Git (for document revision control), or mobile applications (apps). Tools usually have user interfaces (UIs), but they are not always graphical, fully functional, or even all that helpful. There is a useful distinction between a tool and a component (also called an NLP component, or an NLP algorithm; in UIMA (Apache, 2014) called an annotator), which are pieces of software that are intended to be integrated as libraries into software and can often be strung together in annotation pipelines for applying automatic annotations to linguistic artifacts. Software like tokenizers, part of speech taggers, parsers (Manning et al., 2014), multiword expression detectors (Kulkarni & Finlayson, 2011) or coreference resolvers (Pradhan et al., 2011) are all components. Sometimes the distinction between a tool and a component is not especially clear cut, but it is a useful one nonetheless. The main reason a chapter like this one is needed is that there is no one tool that does everything. There are multiple stages and tasks within every annotation project, typically requiring some degree of customization, and no tool does it all. That is why one needs multiple tools in annotation, and why a detailed consideration of the tool capabilities and problems is needed. 2 Overview of the Annotation Process The first step in an annotation project is, naturally, defining the scheme, but many other tasks must be executed to go from an annotation scheme to an actual set of cleanly annotated files useful for other tasks. 2.1 MATTER & MAMA A good starting place for organizing our conception of the various stages of the process of annotation is the MATTER cycle, proposed by Pustejovsky & Stubbs (2013). This framework outlines six major stages to annotation, corresponding to each letter in the word, defined as follows: M = Model: In this stage, the first of the process, the project leaders set up the conceptual framework for the project. Subtasks may include: Search background work to understand existing theories of the phenomena Create or adopt an abstract model of the phenomenon Define an annotation scheme based on the model Overview of Annotation Creation: Processes & Tools 3 Search libraries, the web, and online repositories for potential linguistic artifacts Create corpus artifacts if appropriate artifacts do not exist Measure overall characteristics of artifacts to ground estimates of representativeness and balance Collect the artifacts on which the annotation will be performed Track artifact licenses Measure various statistics of the collected corpus Choose an annotation specification language Build an annotation specification that disti",
"title": ""
},
{
"docid": "9d5c258e4a2d315d3e462ab333f3a6df",
"text": "The modern smart phone and car concepts provide a fertile ground for new location-aware applications, ranging from traffic management to social services. While the functionality is partly implemented at the mobile terminal, there is a rising need for efficient backend processing of high-volume, high update rate location streams. It is in this environment that geofencing, the detection of objects traversing virtual fences, is becoming a universal primitive required by an ever-growing number of applications. To satisfy the functionality and performance requirements of large-scale geofencing applications, we present in this work a backend system for indexing massive quantities of mobile objects and geofences. Our system runs on a cluster of servers, achieving a throughput of location updates that scales linearly with number of machines. The key ingredients to achieve a high performance are a specialized spatial index, a dynamic caching mechanism, and a load-sharing principle that reduces communication overhead to a minimum and enables a shared-nothing architecture. The throughput of the spatial index as well as the performance of the overall system are demonstrated by experiments using simulations of large-scale geofencing applications.",
"title": ""
}
] | scidocsrr |
c210f30c1e3255ffe2487adf19bfd6b0 | ICDAR 2003 robust reading competitions: entries, results, and future directions | [
{
"docid": "f3d86ca456bb9e97b090ea68a82be93b",
"text": "Many images—especially those used for page design on web pages—as well as videos contain visible text. If these text occurrences could be detected, segmented, and recognized automatically, they would be a valuable source of high-level semantics for indexing and retrieval. In this paper, we propose a novel method for localizing and segmenting text in complex images and videos. Text lines are identified by using a complex-valued multilayer feed-forward network trained to detect text at a fixed scale and position. The network’s output at all scales and positions is integrated into a single text-saliency map, serving as a starting point for candidate text lines. In the case of video, these candidate text lines are refined by exploiting the temporal redundancy of text in video. Localized text lines are then scaled to a fixed height of 100 pixels and segmented into a binary image with black characters on white background. For videos, temporal redundancy is exploited to improve segmentation performance. Input images and videos can be of any size due to a true multiresolution approach. Moreover, the system is not only able to locate and segment text occurrences into large binary images, but is also able to track each text line with sub-pixel accuracy over the entire occurrence in a video, so that one text bitmap is created for all instances of that text line. Therefore, our text segmentation results can also be used for object-based video encoding such as that enabled by MPEG-4.",
"title": ""
}
] | [
{
"docid": "dddec8d72a4ed68ee47c0cc7f4f31dbd",
"text": "Probabilistic topic modeling of text collections is a powerful tool for statistical text analysis. In this tutorial we introduce a novel non-Bayesian approach, called Additive Regularization of Topic Models. ARTM is free of redundant probabilistic assumptions and provides a simple inference for many combined and multi-objective topic models.",
"title": ""
},
{
"docid": "8775af6029924a390cfb51aa17f99a2a",
"text": "Machine learning is increasingly used to make sense of the physical world yet may suffer from adversarial manipulation. We examine the Viola-Jones 2D face detection algorithm to study whether images can be created that humans do not notice as faces yet the algorithm detects as faces. We show that it is possible to construct images that Viola-Jones recognizes as containing faces yet no human would consider a face. Moreover, we show that it is possible to construct images that fool facial detection even when they are printed and then photographed.",
"title": ""
},
{
"docid": "44a84af55421c88347034d6dc14e4e30",
"text": "Anomaly detection plays an important role in protecting computer systems from unforeseen attack by automatically recognizing and filter atypical inputs. However, it can be difficult to balance the sensitivity of a detector – an aggressive system can filter too many benign inputs while a conservative system can fail to catch anomalies. Accordingly, it is important to rigorously test anomaly detectors to evaluate potential error rates before deployment. However, principled systems for doing so have not been studied – testing is typically ad hoc, making it difficult to reproduce results or formally compare detectors. To address this issue we present a technique and implemented system, Fortuna, for obtaining probabilistic bounds on false positive rates for anomaly detectors that process Internet data. Using a probability distribution based on PageRank and an efficient algorithm to draw samples from the distribution, Fortuna computes an estimated false positive rate and a probabilistic bound on the estimate’s accuracy. By drawing test samples from a well defined distribution that correlates well with data seen in practice, Fortuna improves on ad hoc methods for estimating false positive rate, giving bounds that are reproducible, comparable across different anomaly detectors, and theoretically sound. Experimental evaluations of three anomaly detectors (SIFT, SOAP, and JSAND) show that Fortuna is efficient enough to use in practice — it can sample enough inputs to obtain tight false positive rate bounds in less than 10 hours for all three detectors. These results indicate that Fortuna can, in practice, help place anomaly detection on a stronger theoretical foundation and help practitioners better understand the behavior and consequences of the anomaly detectors that they deploy. As part of our work, we obtain a theoretical result that may be of independent interest: We give a simple analysis of the convergence rate of the random surfer process defining PageRank that guarantees the same rate as the standard, second-eigenvalue analysis, but does not rely on any assumptions about the link structure of the web.",
"title": ""
},
{
"docid": "e19d53b7ebccb3a1354bb6411182b1d3",
"text": "ERP implementation projects affect large parts of an implementing organization and lead to changes in the way an organization performs its tasks. The costs needed for the effort to implement these systems are hard to estimate. Research indicates that the size of an ERP project can be a useful measurement for predicting the effort required to complete an ERP implementation project. However, such a metric does not yet exist. Therefore research should be carried out to find a set of variables which can define the size of an ERP project. This paper describes a first step in such a project. It shows 21 logical clusters of ERP implementation project activities based on 405 ERP implementation project activities retrieved from literature. Logical clusters of ERP project activities can be used in further research to find variables for defining the size of an ERP project. IntroductIon Globalization has put pressure on organizations to perform as efficiently and effectively as possible in order to compete in the market. Structuring their internal processes and making them most efficient by integrated information systems is very important for that reason. In the 1990s, organizations started implementing ERP systems in order to replace their legacy systems and improve their business processes. This change is still being implemented. ERP is a key ingredient for gaining competitive advantage, streamlining operations, and having “lean” manufacturing (Mabert, Soni, & Venkataramanan, 2003). A study of Hendricks indicates that research shows some evidence of improvements in profitability after implementing ERP systems (Hendricks, Singhal, & Stratman, 2006). Forecasters predict a growth in the ERP market. 1847 Copyright © 2011, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited. 1848 Sizing ERP Implementation Projects Several researchers also indicate that much research is still being carried out in this area ( Møller, Kræmmergaard, & Rikhardsson, 2004; Botta-Genoulaz, Millet, & Grabot, 2005). Although the research area is rather clearly defined, many topics still have to be researched and the usefulness of results for actual projects has to be designed. ERP projects are large and risky projects for organizations, because they affect great parts of the implementing organization and lead to changes in the way the organization performs its tasks. The costs needed for the effort to implement these systems are usually very high and also very hard to estimate. Many cases are documented where the actual required time and costs exceeded the budget, that is to say the estimated costs, many times. There are even cases where ERP implementation projects led to bankruptcy (Holland & Light, 1999; Scott, 1999). Francalanci states that software costs only represent a fraction of the overall cost of ERP projects within the total costs of the implementation project, that is to say, less than 10% over a 5-year period (Francalanci, 2001). In addition, Willis states that consultants alone can cost as much as or more than five times the cost of the software (Willis, Willis-Brown, & McMillan, 2001). This is confirmed by von Arb, who indicates that consultancy costs can be 2 to 4 times as much as software license costs (Arb, 1997). This indicates that the effort required for implementing an ERP system largely consists of effort-related costs. 
Von Arb also argues that license and hardware costs are fairly constant and predictable and that only a focus on reducing these effort-related costs is realistic. The conclusion is legitimate that the total effort is the most important and difficult factor to estimate in an ERP implementation project. Therefore, the main research of the authors only focuses on the estimation of the total effort required for implementing an ERP system. In every project there is a great uncertainty at the start, while at the end there is only a minor uncertainty (Meredith & Mantel, 2003). In the planning phase, the most important decisions are made that will affect the future of the organization as a whole. As described earlier, a failure to implement an ERP system can seriously affect the health of an organization and even lead to bankruptcy. This means that it would be of great help if a method would exist that could predict the effort required for implementing the ERP system within reasonable boundaries. The method should not be too complex and should be quick. Its outcomes should support the rough estimation of the project and serve as a starting point for the detailed planning in the set-up phase of the project phase and for the first allocation of the resources. Moreover, if conditions greatly change during a project, the method could be used to estimate the consequences for the remaining effort required for implementing the ERP system. The aim of this article is to answer which activities exist in ERP projects according to literature and how these can be clustered as a basis for defining the size of an ERP project. In the article, the approach and main goal of our research will first be described, followed by a literature review on ERP project activities. After that, we will present the clustering approach and results followed by conclusions and discussion.",
"title": ""
},
{
"docid": "b11a161588bd1a3d4d7cd78ecce4aa64",
"text": "This article analyses different types of reference models applicable to support the set up and (re)configuration of Virtual Enterprises (VEs). Reference models are models capturing concepts common to VEs aiming to convert the task of setting up a VE into a configuration task, and hence reducing the time needed for VE creation. The reference models are analysed through a mapping onto the Virtual Enterprise Reference Architecture (VERA) based upon GERAM and created in the IMS GLOBEMEN project.",
"title": ""
},
{
"docid": "691cdea5cf3fae2713c721c1cfa8c132",
"text": "of the Dissertation Addressing the Challenges of Underspecification in Web Search",
"title": ""
},
{
"docid": "d40ac2e9a896e13ece11d7429fab3d80",
"text": "We present our recent work (ICS 2011) on dynamic environments in which computational nodes, or decision makers, follow simple and unsophisticated rules of behavior (e.g., repeatedly \"best replying\" to others' actions, and minimizing \"regret\") that have been extensively studied in game theory and economics. We aim to understand when convergence of the resulting dynamics to an equilibrium point is guaranteed if nodes' interaction is not synchronized (e.g., as in Internet protocols and large-scale markets). We take the first steps of this research agenda. We exhibit a general non-convergence result and consider its implications across a wide variety of interesting and timely applications: routing, congestion control, game theory, social networks and circuit design. We also consider the relationship between classical nontermination results in distributed computing theory and our result, explore the impact of scheduling on convergence, study the computational and communication complexity of asynchronous dynamics and present some basic observations regarding the effects of asynchrony on no-regret dynamics.",
"title": ""
},
{
"docid": "043306203de8365bd1930a9c0b4138c7",
"text": "In this paper, we compare two different methods for automatic Arabic speech recognition for isolated words and sentences. Isolated word/sentence recognition was performed using cepstral feature extraction by linear predictive coding, as well as Hidden Markov Models (HMM) for pattern training and classification. We implemented a new pattern classification method, where we used Neural Networks trained using the Al-Alaoui Algorithm. This new method gave comparable results to the already implemented HMM method for the recognition of words, and it has overcome HMM in the recognition of sentences. The speech recognition system implemented is part of the Teaching and Learning Using Information Technology (TLIT) project which would implement a set of reading lessons to assist adult illiterates in developing better reading capabilities.",
"title": ""
},
{
"docid": "a7f046dcc5e15ccfbe748fa2af400c98",
"text": "INTRODUCTION\nSmoking and alcohol use (beyond social norms) by health sciences students are behaviors contradictory to the social function they will perform as health promoters in their eventual professions.\n\n\nOBJECTIVES\nIdentify prevalence of tobacco and alcohol use in health sciences students in Mexico and Cuba, in order to support educational interventions to promote healthy lifestyles and development of professional competencies to help reduce the harmful impact of these legal drugs in both countries.\n\n\nMETHODS\nA descriptive cross-sectional study was conducted using quantitative and qualitative techniques. Data were collected from health sciences students on a voluntary basis in both countries using the same anonymous self-administered questionnaire, followed by an in-depth interview.\n\n\nRESULTS\nPrevalence of tobacco use was 56.4% among Mexican students and 37% among Cuban. It was higher among men in both cases, but substantial levels were observed in women as well. The majority of both groups were regularly exposed to environmental tobacco smoke. Prevalence of alcohol use was 76.9% in Mexican students, among whom 44.4% were classified as at-risk users. Prevalence of alcohol use in Cuban students was 74.1%, with 3.7% classified as at risk.\n\n\nCONCLUSIONS\nThe high prevalence of tobacco and alcohol use in these health sciences students is cause for concern, with consequences not only for their individual health, but also for their professional effectiveness in helping reduce these drugs' impact in both countries.",
"title": ""
},
{
"docid": "c5731d7290f1ab073c12bf67101a386a",
"text": "Convolutional neural networks have emerged as the leading method for the classification and segmentation of images. In some cases, it is desirable to focus the attention of the net on a specific region in the image; one such case is the recognition of the contents of transparent vessels, where the vessel region in the image is already known. This work presents a valve filter approach for focusing the attention of the net on a region of interest (ROI). In this approach, the ROI is inserted into the net as a binary map. The net uses a different set of convolution filters for the ROI and background image regions, resulting in a different set of features being extracted from each region. More accurately, for each filter used on the image, a corresponding valve filter exists that acts on the ROI map and determines the regions in which the corresponding image filter will be used. This valve filter effectively acts as a valve that inhibits specific features in different image regions according to the ROI map. In addition, a new data set for images of materials in glassware vessels in a chemistry laboratory setting is presented. This data set contains a thousand images with pixel-wise annotation according to categories ranging from filled and empty to the exact phase of the material inside the vessel. The results of the valve filter approach and fully convolutional neural nets (FCN) with no ROI input are compared based on this data set.",
"title": ""
},
{
"docid": "e1e1fcc7a732e5b2835c5a137722b3ee",
"text": "Regular expression matching is a crucial task in several networking applications. Current implementations are based on one of two types of finite state machines. Non-deterministic finite automata (NFAs) have minimal storage demand but have high memory bandwidth requirements. Deterministic finite automata (DFAs) exhibit low and deterministic memory bandwidth requirements at the cost of increased memory space. It has already been shown how the presence of wildcards and repetitions of large character classes can render DFAs and NFAs impractical. Additionally, recent security-oriented rule-sets include patterns with advanced features, namely back-references, which add to the expressive power of traditional regular expressions and cannot therefore be supported through classical finite automata.\n In this work, we propose and evaluate an extended finite automaton designed to address these shortcomings. First, the automaton provides an alternative approach to handle character repetitions that limits memory space and bandwidth requirements. Second, it supports back-references without the need for back-tracking in the input string. In our discussion of this proposal, we address practical implementation issues and evaluate the automaton on real-world rule-sets. To our knowledge, this is the first high-speed automaton that can accommodate all the Perl-compatible regular expressions present in the Snort network intrusion and detection system.",
"title": ""
},
{
"docid": "7875910ad044232b4631ecacfec65656",
"text": "In this study, a questionnaire (Cyberbullying Questionnaire, CBQ) was developed to assess the prevalence of numerous modalities of cyberbullying (CB) in adolescents. The association of CB with the use of other forms of violence, exposure to violence, acceptance and rejection by peers was also examined. In the study, participants were 1431 adolescents, aged between 12 and17 years (726 girls and 682 boys). The adolescents responded to the CBQ, measures of reactive and proactive aggression, exposure to violence, justification of the use of violence, and perceived social support of peers. Sociometric measures were also used to assess the use of direct and relational aggression and the degree of acceptance and rejection by peers. The results revealed excellent psychometric properties for the CBQ. Of the adolescents, 44.1% responded affirmatively to at least one act of CB. Boys used CB to greater extent than girls. Lastly, CB was significantly associated with the use of proactive aggression, justification of violence, exposure to violence, and less perceived social support of friends. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "3300e4e29d160fb28861ac58740834b5",
"text": "To facilitate proactive fault management in large-scale systems such as IBM Blue Gene/P, online failure prediction is of paramount importance. While many techniques have been presented for online failure prediction, questions arise regarding two commonly used approaches: period-based and event-driven. Which one has better accuracy? What is the best observation window (i.e., the time interval used to collect evidence before making a prediction)? How does the lead time (i.e., the time interval from the prediction to the failure occurrence) impact prediction arruracy? To answer these questions, we analyze and compare period-based and event-driven prediction approaches via a Bayesian prediction model. We evaluate these prediction approaches, under a variety of testing parameters, by means of RAS logs collected from a production supercomputer at Argonne National Laboratory. Experimental results show that the period-based Bayesian model and the event-driven Bayesian model can achieve up to 65.0% and 83.8% prediction accuracy, respectively. Furthermore, our sensitivity study indicates that the event-driven approach seems more suitable for proactive fault management in large-scale systems like Blue Gene/P.",
"title": ""
},
{
"docid": "807b1a6a389788d598c5c0ec11b336ab",
"text": "One of the distinguishing aspects of human language is its compositionality, which allows us to describe complex environments with limited vocabulary. Previously, it has been shown that neural network agents can learn to communicate in a highly structured, possibly compositional language based on disentangled input (e.g. handengineered features). Humans, however, do not learn to communicate based on well-summarized features. In this work, we train neural agents to simultaneously develop visual perception from raw image pixels, and learn to communicate with a sequence of discrete symbols. The agents play an image description game where the image contains factors such as colors and shapes. We train the agents using the obverter technique where an agent introspects to generate messages that maximize its own understanding. Through qualitative analysis, visualization and a zero-shot test, we show that the agents can develop, out of raw image pixels, a language with compositional properties, given a proper pressure from the environment.",
"title": ""
},
{
"docid": "0879399fcb38c103a0e574d6d9010215",
"text": "We present a content-based method for recommending citations in an academic paper draft. We embed a given query document into a vector space, then use its nearest neighbors as candidates, and rerank the candidates using a discriminative model trained to distinguish between observed and unobserved citations. Unlike previous work, our method does not require metadata such as author names which can be missing, e.g., during the peer review process. Without using metadata, our method outperforms the best reported results on PubMed and DBLP datasets with relative improvements of over 18% in F1@20 and over 22% in MRR. We show empirically that, although adding metadata improves the performance on standard metrics, it favors selfcitations which are less useful in a citation recommendation setup. We release an online portal for citation recommendation based on our method,1 and a new dataset OpenCorpus of 7 million research articles to facilitate future research on this task.",
"title": ""
},
{
"docid": "42cf4bd800000aed5e0599cba52ba317",
"text": "There is a significant amount of controversy related to the optimal amount of dietary carbohydrate. This review summarizes the health-related positives and negatives associated with carbohydrate restriction. On the positive side, there is substantive evidence that for many individuals, low-carbohydrate, high-protein diets can effectively promote weight loss. Low-carbohydrate diets (LCDs) also can lead to favorable changes in blood lipids (i.e., decreased triacylglycerols, increased high-density lipoprotein cholesterol) and decrease the severity of hypertension. These positives should be balanced by consideration of the likelihood that LCDs often lead to decreased intakes of phytochemicals (which could increase predisposition to cardiovascular disease and cancer) and nondigestible carbohydrates (which could increase risk for disorders of the lower gastrointestinal tract). Diets restricted in carbohydrates also are likely to lead to decreased glycogen stores, which could compromise an individual's ability to maintain high levels of physical activity. LCDs that are high in saturated fat appear to raise low-density lipoprotein cholesterol and may exacerbate endothelial dysfunction. However, for the significant percentage of the population with insulin resistance or those classified as having metabolic syndrome or prediabetes, there is much experimental support for consumption of a moderately restricted carbohydrate diet (i.e., one providing approximately 26%-44 % of calories from carbohydrate) that emphasizes high-quality carbohydrate sources. This type of dietary pattern would likely lead to favorable changes in the aforementioned cardiovascular disease risk factors, while minimizing the potential negatives associated with consumption of the more restrictive LCDs.",
"title": ""
},
{
"docid": "aefa758e6b5681c213150ed674eae915",
"text": "This paper presents a solution to automatically recognize the correct left/right and upright/upside-down orientation of iris images. This solution can be used to counter spoofing attacks directed to generate fake identities by rotating an iris image or the iris sensor during the acquisition. Two approaches are compared on the same data, using the same evaluation protocol: 1) feature engineering, using hand-crafted features classified by a support vector machine (SVM) and 2) feature learning, using data-driven features learned and classified by a convolutional neural network (CNN). A data set of 20 750 iris images, acquired for 103 subjects using four sensors, was used for development. An additional subject-disjoint data set of 1,939 images, from 32 additional subjects, was used for testing purposes. Both same-sensor and cross-sensor tests were carried out to investigate how the classification approaches generalize to unknown hardware. The SVM-based approach achieved an average correct classification rate above 95% (89%) for recognition of left/right (upright/upside-down) orientation when tested on subject-disjoint data and camera-disjoint data, and 99% (97%) if the images were acquired by the same sensor. The CNN-based approach performed better for same-sensor experiments, and presented slightly worse generalization capabilities to unknown sensors when compared with the SVM. We are not aware of any other papers on the automatic recognition of upright/upside-down orientation of iris images, or studying both hand-crafted and data-driven features in same-sensor and cross-sensor subject-disjoint experiments. The data sets used in this paper, along with random splits of the data used in cross-validation, are being made available.",
"title": ""
},
{
"docid": "26db4ecbc2ad4b8db0805b06b55fe27d",
"text": "The advent of high voltage (HV) wide band-gap power semiconductor devices has enabled the medium voltage (MV) grid tied operation of non-cascaded neutral point clamped (NPC) converters. This results in increased power density, efficiency as well as lesser control complexity. The multi-chip 15 kV/40 A SiC IGBT and 15 kV/20 A SiC MOSFET are two such devices which have gained attention for MV grid interface applications. Such converters based on these devices find application in active power filters, STATCOM or as active front end converters for solid state transformers. This paper presents an experimental comparative evaluation of these two SiC devices for 3-phase grid connected applications using a 3-level NPC converter as reference. The IGBTs are generally used for high power applications due to their lower conduction loss while MOSFETs are used for high frequency applications due to their lower switching loss. The thermal performance of these devices are compared based on device loss characteristics, device heat-run tests, 3-level pole heat-run tests, PLECS thermal simulation based loss comparison and MV experiments on developed hardware prototypes. The impact of switching frequency on the harmonic control of the grid connected converter is also discussed and suitable device is selected for better grid current THD.",
"title": ""
},
{
"docid": "d9160f2cc337de729af34562d77a042e",
"text": "Ontologies proliferate with the progress of the Semantic Web. Ontology matching is an important way of establishing interoperability between (Semantic) Web applications that use different but related ontologies. Due to their sizes and monolithic nature, large ontologies regarding real world domains bring a new challenge to the state of the art ontology matching technology. In this paper, we propose a divide-and-conquer approach to matching large ontologies. We develop a structure-based partitioning algorithm, which partitions entities of each ontology into a set of small clusters and constructs blocks by assigning RDF Sentences to those clusters. Then, the blocks from different ontologies are matched based on precalculated anchors, and the block mappings holding high similarities are selected. Finally, two powerful matchers, V-DOC and GMO, are employed to discover alignments in the block mappings. Comprehensive evaluation on both synthetic and real world data sets demonstrates that our approach both solves the scalability problem and achieves good precision and recall with significant reduction of execution time. 2008 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "4cff5279110ff2e45060f3ccec7d51ba",
"text": "Web site usability is a critical metric for assessing the quality of a firm’s Web presence. A measure of usability must not only provide a global rating for a specific Web site, ideally it should also illuminate specific strengths and weaknesses associated with site design. In this paper, we describe a heuristic evaluation procedure for examining the usability of Web sites. The procedure utilizes a comprehensive set of usability guidelines developed by Microsoft. We present the categories and subcategories comprising these guidelines, and discuss the development of an instrument that operationalizes the measurement of usability. The proposed instrument was tested in a heuristic evaluation study where 1,475 users rated multiple Web sites from four different industry sectors: airlines, online bookstores, automobile manufacturers, and car rental agencies. To enhance the external validity of the study, users were asked to assume the role of a consumer or an investor when assessing usability. Empirical results suggest that the evaluation procedure, the instrument, as well as the usability metric exhibit good properties. Implications of the findings for researchers, for Web site designers, and for heuristic evaluation methods in usability testing are offered. (Usability; Heuristic Evaluation; Microsoft Usability Guidelines; Human-Computer Interaction; Web Interface)",
"title": ""
}
] | scidocsrr |
d26016066331715339a082414469a654 | GUI Design for IDE Command Recommendations | [
{
"docid": "ef598ba4f9a4df1f42debc0eabd1ead8",
"text": "Software developers interact with the development environments they use by issuing commands that execute various programming tools, from source code formatters to build tools. However, developers often only use a small subset of the commands offered by modern development environments, reducing their overall development fluency. In this paper, we use several existing command recommender algorithms to suggest new commands to developers based on their existing command usage history, and also introduce several new algorithms. By running these algorithms on data submitted by several thousand Eclipse users, we describe two studies that explore the feasibility of automatically recommending commands to software developers. The results suggest that, while recommendation is more difficult in development environments than in other domains, it is still feasible to automatically recommend commands to developers based on their usage history, and that using patterns of past discovery is a useful way to do so.",
"title": ""
}
] | [
{
"docid": "c41259069ff779cf727ee4cfcf317cee",
"text": "Trends in miniaturization have resulted in an explosion of small, low power devices with network connectivity. Welcome to the era of Internet of Things (IoT), wearable devices, and automated home and industrial systems. These devices are loaded with sensors, collect information from their surroundings, process it, and relay it to remote locations for further analysis. Pervasive and seeminly harmless, this new breed of devices raise security and privacy concerns. In this chapter, we evaluate the security of these devices from an industry point of view, concentrating on the design flow, and catalogue the types of vulnerabilities we have found. We also present an in-depth evaluation of the Google Nest Thermostat, the Nike+ Fuelband SE Fitness Tracker, the Haier SmartCare home automation system, and the Itron Centron CL200 electric meter. We study and present an analysis of the effects of these compromised devices in an every day setting. We then finish by discussing design flow enhancements, with security mechanisms that can be efficiently added into a device in a comparative way.",
"title": ""
},
{
"docid": "bf08d673b40109d6d6101947258684fd",
"text": "More and more medicinal mushrooms have been widely used as a miraculous herb for health promotion, especially by cancer patients. Here we report screening thirteen mushrooms for anti-cancer cell activities in eleven different cell lines. Of the herbal products tested, we found that the extract of Amauroderma rude exerted the highest activity in killing most of these cancer cell lines. Amauroderma rude is a fungus belonging to the Ganodermataceae family. The Amauroderma genus contains approximately 30 species widespread throughout the tropical areas. Since the biological function of Amauroderma rude is unknown, we examined its anti-cancer effect on breast carcinoma cell lines. We compared the anti-cancer activity of Amauroderma rude and Ganoderma lucidum, the most well-known medicinal mushrooms with anti-cancer activity and found that Amauroderma rude had significantly higher activity in killing cancer cells than Ganoderma lucidum. We then examined the effect of Amauroderma rude on breast cancer cells and found that at low concentrations, Amauroderma rude could inhibit cancer cell survival and induce apoptosis. Treated cancer cells also formed fewer and smaller colonies than the untreated cells. When nude mice bearing tumors were injected with Amauroderma rude extract, the tumors grew at a slower rate than the control. Examination of these tumors revealed extensive cell death, decreased proliferation rate as stained by Ki67, and increased apoptosis as stained by TUNEL. Suppression of c-myc expression appeared to be associated with these effects. Taken together, Amauroderma rude represented a powerful medicinal mushroom with anti-cancer activities.",
"title": ""
},
{
"docid": "a0c36cccd31a1bf0a1e7c9baa78dd3fa",
"text": "Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function (“avoiding side effects” and “avoiding reward hacking”), an objective function that is too expensive to evaluate frequently (“scalable supervision”), or undesirable behavior during the learning process (“safe exploration” and “distributional shift”). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking",
"title": ""
},
{
"docid": "0ce46853852a20e5e0ab9aacd3ec20c1",
"text": "In immunocompromised subjects, Epstein-Barr virus (EBV) infection of terminally differentiated oral keratinocytes may result in subclinical productive infection of the virus in the stratum spinosum and in the stratum granulosum with shedding of infectious virions into the oral fluid in the desquamating cells. In a minority of cases this productive infection with dysregulation of the cell cycle of terminally differentiated epithelial cells may manifest as oral hairy leukoplakia. This is a white, hyperkeratotic, benign lesion of low morbidity, affecting primarily the lateral border of the tongue. Factors that determine whether productive EBV replication within the oral epithelium will cause oral hairy leukoplakia include the fitness of local immune responses, the profile of EBV gene expression, and local environmental factors.",
"title": ""
},
{
"docid": "be7662e67b3cff4991ae7249e8f8cde2",
"text": "The kernelized correlation filter (KCF) is one of the state-of-the-art object trackers. However, it does not reasonably model the distribution of correlation response during tracking process, which might cause the drifting problem, especially when targets undergo significant appearance changes due to occlusion, camera shaking, and/or deformation. In this paper, we propose an output constraint transfer (OCT) method that by modeling the distribution of correlation response in a Bayesian optimization framework is able to mitigate the drifting problem. OCT builds upon the reasonable assumption that the correlation response to the target image follows a Gaussian distribution, which we exploit to select training samples and reduce model uncertainty. OCT is rooted in a new theory which transfers data distribution to a constraint of the optimized variable, leading to an efficient framework to calculate correlation filters. Extensive experiments on a commonly used tracking benchmark show that the proposed method significantly improves KCF, and achieves better performance than other state-of-the-art trackers. To encourage further developments, the source code is made available.",
"title": ""
},
{
"docid": "4560e1b7318013be0688b8e73692fda4",
"text": "This paper introduces a new real-time object detection approach named Yes-Net. It realizes the prediction of bounding boxes and class via single neural network like YOLOv2 and SSD, but owns more efficient and outstanding features. It combines local information with global information by adding the RNN architecture as a packed unit in CNN model to form the basic feature extractor. Independent anchor boxes coming from full-dimension kmeans is also applied in Yes-Net, it brings better average IOU than grid anchor box. In addition, instead of NMS, YesNet uses RNN as a filter to get the final boxes, which is more efficient. For 416 × 416 input, Yes-Net achieves 74.3% mAP on VOC2007 test at 39 FPS on an Nvidia Titan X Pascal.",
"title": ""
},
{
"docid": "8a7a8de5cae191a4493e5a0e4f34bbf1",
"text": "B-spline surfaces, although widely used, are incapable of describing surfaces of arbitrary topology. It is not possible to model a general closed surface or a surface with handles as a single non-degenerate B-spline. In practice such surfaces are often needed. In this paper, we present generalizations of biquadratic and bicubic B-spline surfaces that are capable of capturing surfaces of arbitrary topology (although restrictions are placed on the connectivity of the control mesh). These results are obtained by relaxing the sufficient but not necessary smoothness constraints imposed by B-splines and through the use of an n-sided generalization of Bézier surfaces called S-patches.",
"title": ""
},
{
"docid": "bb4001c4cb5fde8d34fd48ee50eb053c",
"text": "We consider the problem of identifying the causal direction between two discrete random variables using observational data. Unlike previous work, we keep the most general functional model but make an assumption on the unobserved exogenous variable: Inspired by Occam’s razor, we assume that the exogenous variable is simple in the true causal direction. We quantify simplicity using Rényi entropy. Our main result is that, under natural assumptions, if the exogenous variable has lowH0 entropy (cardinality) in the true direction, it must have high H0 entropy in the wrong direction. We establish several algorithmic hardness results about estimating the minimum entropy exogenous variable. We show that the problem of finding the exogenous variable with minimum H1 entropy (Shannon Entropy) is equivalent to the problem of finding minimum joint entropy given n marginal distributions, also known as minimum entropy coupling problem. We propose an efficient greedy algorithm for the minimum entropy coupling problem, that for n = 2 provably finds a local optimum. This gives a greedy algorithm for finding the exogenous variable with minimum Shannon entropy. Our greedy entropy-based causal inference algorithm has similar performance to the state of the art additive noise models in real datasets. One advantage of our approach is that we make no use of the values of random variables but only their distributions. Our method can therefore be used for causal inference for both ordinal and also categorical data, unlike additive noise models.",
"title": ""
},
{
"docid": "3cde70842ee80663cbdc04db6a871d46",
"text": "Artificial perception, in the context of autonomous driving, is the process by which an intelligent system translates sensory data into an effective model of the environment surrounding a vehicle. In this paper, and considering data from a 3D-LIDAR mounted onboard an intelligent vehicle, a 3D perception system based on voxels and planes is proposed for ground modeling and obstacle detection in urban environments. The system, which incorporates time-dependent data, is composed of two main modules: (i) an effective ground surface estimation using a piecewise plane fitting algorithm and RANSAC-method, and (ii) a voxel-grid model for static and moving obstacles detection using discriminative analysis and ego-motion information. This perception system has direct application in safety systems for intelligent vehicles, particularly in collision avoidance and vulnerable road users detection, namely pedestrians and cyclists. Experiments, using point-cloud data from a Velodyne LIDAR and localization data from an Inertial Navigation System were conducted for both a quantitative and a qualitative assessment of the static/moving obstacle detection module and for the surface estimation approach. Reported results, from experiments using the KITTI database, demonstrate the applicability and efficiency of the proposed approach in urban scenarios.",
"title": ""
},
{
"docid": "4f37b872c44c2bda3ff62e3e8ebf4391",
"text": "This paper proposes a method based on conditional random fields to incorporate sentence structure (syntax and semantics) and context information to identify sentiments of sentences within a document. It also proposes and evaluates two different active learning strategies for labeling sentiment data. The experiments with the proposed approach demonstrate a 5-15% improvement in accuracy on Amazon customer reviews compared to existing supervised learning and rule-based methods.",
"title": ""
},
{
"docid": "b4e9cfc0dbac4a5d7f76001e73e8973d",
"text": "Style transfer aims to apply the style of an exemplar model to a target one, while retaining the target’s structure. The main challenge in this process is to algorithmically distinguish style from structure, a high-level, potentially ill-posed cognitive task. Inspired by cognitive science research we recast style transfer in terms of shape analogies. In IQ testing, shape analogy queries present the subject with three shapes: source, target and exemplar, and ask them to select an output such that the transformation, or analogy, from the exemplar to the output is similar to that from the source to the target. The logical process involved in identifying the source-to-target analogies implicitly detects the structural differences between the source and target and can be used effectively to facilitate style transfer. Since the exemplar has a similar structure to the source, applying the analogy to the exemplar will provide the output we seek. The main technical challenge we address is to compute the source to target analogies, consistent with human logic. We observe that the typical analogies we look for consist of a small set of simple transformations, which when applied to the exemplar generate a continuous, seamless output model. To assemble a shape analogy, we compute an optimal set of source-to-target transformations, such that the assembled analogy best fits these criteria. The assembled analogy is then applied to the exemplar shape to produce the desired output model. We use the proposed framework to seamlessly transfer a variety of style properties between 2D and 3D objects and demonstrate significant improvements over the state of the art in style transfer. We further show that our framework can be used to successfully complete partial scans with the help of a user provided structural template, coherently propagating scan style across the completed surfaces.",
"title": ""
},
{
"docid": "5e8154a99b4b0cc544cab604b680ebd2",
"text": "This work presents performance of robust wearable antennas intended to operate in Wireless Body Area Networks (W-BAN) in UHF, TETRAPOL communication band, 380-400 MHz. We propose a Planar Inverted F Antenna (PIFA) as reliable antenna type for UHF W-BAN applications. In order to satisfy the robustness requirements of the UHF band, both from communication and mechanical aspect, a new technology for building these antennas was proposed. The antennas are built out of flexible conductive sheets encapsulated inside a silicone based elastomer, Polydimethylsiloxane (PDMS). The proposed antennas are resistive to washing, bending and perforating. From the communication point of view, opting for a PIFA antenna type we solve the problem of coupling to the wearer and thus improve the overall communication performance of the antenna. Several different tests and comparisons were performed in order to check the stability of the proposed antennas when they are placed on the wearer or left in a common everyday environ- ment, on the ground, table etc. S11 deviations are observed and compared with the commercially available wearable antennas. As a final check, the antennas were tested in the frame of an existing UHF TETRAPOL communication system. All the measurements were performed in a real university campus scenario, showing reliable and good performance of the proposed PIFA antennas.",
"title": ""
},
{
"docid": "5f01e9cd6dc2f9bd051e172b3108f06d",
"text": "Head pose estimation is recently a more and more popular area of research. For the last three decades new approaches have constantly been developed, and steadily better accuracy was achieved. Unsurprisingly, a very broad range of methods was explored statistical, geometrical and tracking-based to name a few. This paper presents a brief summary of the evolution of head pose estimation and a glimpse at the current state-of-the-art in this eld.",
"title": ""
},
{
"docid": "4fa9db557f53fa3099862af87337cfa9",
"text": "With the rapid development of E-commerce, recent years have witnessed the booming of online advertising industry, which raises extensive concerns of both academic and business circles. Among all the issues, the task of Click-through rates (CTR) prediction plays a central role, as it may influence the ranking and pricing of online ads. To deal with this task, the Factorization Machines (FM) model is designed for better revealing proper combinations of basic features. However, the sparsity of ads transaction data, i.e., a large proportion of zero elements, may severely disturb the performance of FM models. To address this problem, in this paper, we propose a novel Sparse Factorization Machines (SFM) model, in which the Laplace distribution is introduced instead of traditional Gaussian distribution to model the parameters, as Laplace distribution could better fit the sparse data with higher ratio of zero elements. Along this line, it will be beneficial to select the most important features or conjunctions with the proposed SFM model. Furthermore, we develop a distributed implementation of our SFM model on Spark platform to support the prediction task on mass dataset in practice. Comprehensive experiments on two large-scale real-world datasets clearly validate both the effectiveness and efficiency of our SFM model compared with several state-of-the-art baselines, which also proves our assumption that Laplace distribution could be more suitable to describe the online ads transaction data.",
"title": ""
},
{
"docid": "1fc9a4a769c7ff6d6ddeff7e5df7986b",
"text": "This paper describes a model of problem solving for use in collaborative agents. It is intended as a practical model for use in implemented systems, rather than a study of the theoretical underpinnings of collaborative action. The model is based on our experience in building a series of interactive systems in different domains, including route planning, emergency management, and medical advising. It is currently being used in an implemented, end-to- end spoken dialogue system in which the system assists a person in managing their medications. While we are primarily focussed on human-machine collaboration, we believe that the model will equally well apply to interactions between sophisticated software agents that need to coordinate their activities.",
"title": ""
},
{
"docid": "937de8ba80bd92084f9c2886a28874d1",
"text": "Android security has been a hot spot recently in both academic research and public concerns due to numerous instances of security attacks and privacy leakage on Android platform. Android security has been built upon a permission based mechanism which restricts accesses of third-party Android applications to critical resources on an Android device. Such permission based mechanism is widely criticized for its coarse-grained control of application permissions and difficult management of permissions by developers, marketers, and end-users. In this paper, we investigate the arising issues in Android security, including coarse granularity of permissions, incompetent permission administration, insufficient permission documentation, over-claim of permissions, permission escalation attack, and TOCTOU (Time of Check to Time of Use) attack. We illustrate the relationships among these issues, and investigate the existing countermeasures to address these issues. In particular, we provide a systematic review on the development of these countermeasures, and compare them according to their technical features. Finally, we propose several methods to further mitigate the risk in Android security. a 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "b0b2e50ea9020f6dd6419fbb0520cdfd",
"text": "Social interactions, such as an aggressive encounter between two conspecific males or a mating encounter between a male and a female, typically progress from an initial appetitive or motivational phase, to a final consummatory phase. This progression involves both changes in the intensity of the animals' internal state of arousal or motivation and sequential changes in their behavior. How are these internal states, and their escalating intensity, encoded in the brain? Does this escalation drive the progression from the appetitive/motivational to the consummatory phase of a social interaction and, if so, how are appropriate behaviors chosen during this progression? Recent work on social behaviors in flies and mice suggests possible ways in which changes in internal state intensity during a social encounter may be encoded and coupled to appropriate behavioral decisions at appropriate phases of the interaction. These studies may have relevance to understanding how emotion states influence cognitive behavioral decisions at higher levels of brain function.",
"title": ""
},
{
"docid": "a0d49d0f2dd9ef4fabf98d36f0180347",
"text": "This study draws on the work/family border theory to investigate the role of information communication technology (ICT) use at home in shaping the characteristics of work/family borders (i.e. flexibility and permeability) and consequently influencing individuals’ perceived work-family conflict, technostress, and level of telecommuting. Data were collected from a probability sample of 509 information workers in Hong Kong who were not selfemployed. The results showed that the more that people used ICT to do their work at home, the greater they perceived their work/family borders flexible and permeable. Interestingly, low flexibility and high permeability, rather than the use of ICT at home, had much stronger influences on increasing, in particular, family-to-work conflict. As expected, work-tofamily conflict was significantly and positively associated with technostress. Results also showed that the telecommuters tended to be older, had lower family incomes, used ICT frequently at home, and had a permeable boundary that allowed work to penetrate their home domain. The theoretical and practical implications are discussed. 2016 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "0a7f93e98e1d256ea6a4400f33753d6a",
"text": "In this paper, we investigate safe and efficient map-building strategies for a mobile robot with imperfect control and sensing. In the implementation, a robot equipped with a range sensor builds a polygonal map (layout) of a previously unknown indoor environment. The robot explores the environment and builds the map concurrently by patching together the local models acquired by the sensor into a global map. A well-studied and related problem is the simultaneous localization and mapping (SLAM) problem, where the goal is to integrate the information collected during navigation into the most accurate map possible. However, SLAM does not address the sensorplacement portion of the map-building task. That is, given the map built so far, where should the robot go next? This is the main question addressed in this paper. Concretely, an algorithm is proposed to guide the robot through a series of “good” positions, where “good” refers to the expected amount and quality of the information that will be revealed at each new location. This is similar to the nextbest-view (NBV) problem studied in computer vision and graphics. However, in mobile robotics the problem is complicated by several issues, two of which are particularly crucial. One is to achieve safe navigation despite an incomplete knowledge of the environment and sensor limitations (e.g., in range and incidence). The other issue is the need to ensure sufficient overlap between each new local model and the current map, in order to allow registration of successive views under positioning uncertainties inherent to mobile robots. To address both issues in a coherent framework, in this paper we introduce the concept of a safe region, defined as the largest region that is guaranteed to be free of obstacles given the sensor readings made so far. The construction of a safe region takes sensor limitations into account. In this paper we also describe an NBV algorithm that uses the safe-region concept to select the next robot position at each step. The International Journal of Robotics Research Vol. 21, No. 10–11, October-November 2002, pp. 829-848, ©2002 Sage Publications The new position is chosen within the safe region in order to maximize the expected gain of information under the constraint that the local model at this new position must have a minimal overlap with the current global map. In the future, NBV and SLAM algorithms should reinforce each other. While a SLAM algorithm builds a map by making the best use of the available sensory data, an NBV algorithm, such as that proposed here, guides the navigation of the robot through positions selected to provide the best sensory inputs. KEY WORDS—next-best view, safe region, online exploration, incidence constraints, map building",
"title": ""
},
{
"docid": "dfde48aa79ac10382fe4b9a312662cd9",
"text": "221 Abstract— Due to rapid advances and availabilities of powerful image processing software's, it is easy to manipulate and modify digital images. So it is very difficult for a viewer to judge the authenticity of a given image. Nowadays, it is possible to add or remove important features from an image without leaving any obvious traces of tampering. As digital cameras and video cameras replace their analog counterparts, the need for authenticating digital images, validating their content and detecting forgeries will only increase. For digital photographs to be used as evidence in law issues or to be circulated in mass media, it is necessary to check the authenticity of the image. So In this paper, describes an Image forgery detection method based on SIFT. In particular, we focus on detection of a special type of digital forgery – the copy-move attack, in a copy-move image forgery method; a part of an image is copied and then pasted on a different location within the same image. In this approach an improved algorithm based on scale invariant features transform (SIFT) is used to detect such cloning forgery, In this technique Transform is applied to the input image to yield a reduced dimensional representation, After that Apply key point detection and feature descriptor along with a matching over all the key points. Such a method allows us to both understand if a copy–move attack has occurred and, also furthermore gives output by applying clustering over matched points.",
"title": ""
}
] | scidocsrr |
eef6fdb81d07ee3c02cb0d082b02b290 | A multiple-camera system calibration toolbox using a feature descriptor-based calibration pattern | [
{
"docid": "641f8ac3567d543dd5df40a21629fbd7",
"text": "Virtual immersive environments or telepresence setups often consist of multiple cameras that have to be calibrated. We present a convenient method for doing this. The minimum is three cameras, but there is no upper limit. The method is fully automatic and a freely moving bright spot is the only calibration object. A set of virtual 3D points is made by waving the bright spot through the working volume. Its projections are found with subpixel precision and verified by a robust RANSAC analysis. The cameras do not have to see all points; only reasonable overlap between camera subgroups is necessary. Projective structures are computed via rank-4 factorization and the Euclidean stratification is done by imposing geometric constraints. This linear estimate initializes a postprocessing computation of nonlinear distortion, which is also fully automatic. We suggest a trick on how to use a very ordinary laser pointer as the calibration object. We show that it is possible to calibrate an immersive virtual environment with 16 cameras in less than 60 minutes reaching about 1/5 pixel reprojection error. The method has been successfully tested on numerous multicamera environments using varying numbers of cameras of varying quality.",
"title": ""
}
] | [
{
"docid": "668e72cfb7f1dca5b097ba7df01008b0",
"text": "Detecting PE malware files is now commonly approached using statistical and machine learning models. While these models commonly use features extracted from the structure of PE files, we propose that icons from these files can also help better predict malware. We propose a new machine learning approach to extract information from icons. Our proposed approach consists of two steps: 1) extracting icon features using summary statics, a histogram of gradients (HOG), and a convolutional autoencoder, 2) clustering icons based on the extracted icon features. Using publicly available data and by using machine learning experiments, we show our proposed icon clusters significantly boost the efficacy of malware prediction models. In particular, our experiments show an average accuracy increase of 10 percent when icon clusters are used in the prediction model.",
"title": ""
},
{
"docid": "c4f706ff9ceb514e101641a816ba7662",
"text": "Open set recognition problems exist in many domains. For example in security, new malware classes emerge regularly; therefore malware classication systems need to identify instances from unknown classes in addition to discriminating between known classes. In this paper we present a neural network based representation for addressing the open set recognition problem. In this representation instances from the same class are close to each other while instances from dierent classes are further apart, resulting in statistically signicant improvement when compared to other approaches on three datasets from two dierent domains.",
"title": ""
},
{
"docid": "613f0bf05fb9467facd2e58b70d2b09e",
"text": "The gold standard for improving sensory, motor and or cognitive abilities is long-term training and practicing. Recent work, however, suggests that intensive training may not be necessary. Improved performance can be effectively acquired by a complementary approach in which the learning occurs in response to mere exposure to repetitive sensory stimulation. Such training-independent sensory learning (TISL), which has been intensively studied in the somatosensory system, induces in humans lasting changes in perception and neural processing, without any explicit task training. It has been suggested that the effectiveness of this form of learning stems from the fact that the stimulation protocols used are optimized to alter synaptic transmission and efficacy. TISL provides novel ways to investigate in humans the relation between learning processes and underlying cellular and molecular mechanisms, and to explore alternative strategies for intervention and therapy.",
"title": ""
},
{
"docid": "7190c91917d1e1280010c66139837568",
"text": "GPUs and accelerators have become ubiquitous in modern supercomputing systems. Scientific applications from a wide range of fields are being modified to take advantage of their compute power. However, data movement continues to be a critical bottleneck in harnessing the full potential of a GPU. Data in the GPU memory has to be moved into the host memory before it can be sent over the network. MPI libraries like MVAPICH2 have provided solutions to alleviate this bottleneck using techniques like pipelining. GPUDirect RDMA is a feature introduced in CUDA 5.0, that allows third party devices like network adapters to directly access data in GPU device memory, over the PCIe bus. NVIDIA has partnered with Mellanox to make this solution available for InfiniBand clusters. In this paper, we evaluate the first version of GPUDirect RDMA for InfiniBand and propose designs in MVAPICH2 MPI library to efficiently take advantage of this feature. We highlight the limitations posed by current generation architectures in effectively using GPUDirect RDMA and address these issues through novel designs in MVAPICH2. To the best of our knowledge, this is the first work to demonstrate a solution for internode GPU-to-GPU MPI communication using GPUDirect RDMA. Results show that the proposed designs improve the latency of internode GPU-to-GPU communication using MPI Send/MPI Recv by 69% and 32% for 4Byte and 128KByte messages, respectively. The designs boost the uni-directional bandwidth achieved using 4KByte and 64KByte messages by 2x and 35%, respectively. We demonstrate the impact of the proposed designs using two end-applications: LBMGPU and AWP-ODC. They improve the communication times in these applications by up to 35% and 40%, respectively.",
"title": ""
},
{
"docid": "64cefd949f61afe81fbbb9ca1159dd4a",
"text": "Single carrier frequency division multiple access (SC-FDMA), which utilizes single carrier modulation and frequency domain equalization is a technique that has similar performance and essentially the same overall complexity as those of OFDM, in which high peak-to-average power ratio (PAPR) is a major drawback. An outstanding advantage of SC-FDMA is its lower PAPR due to its single carrier structure. In this paper, we analyze the PAPR of SC-FDMA signals with pulse shaping. We analytically derive the time domain SC-FDMA signals and numerically compare PAPR characteristics using the complementary cumulative distribution function (CCDF) of PAPR. The results show that SC-FDMA signals indeed have lower PAPR compared to those of OFDMA. Comparing the two forms of SC-FDMA, we find that localized FDMA (LFDMA) has higher PAPR than interleaved FDMA (IFDMA) but somewhat lower PAPR than OFDMA. Also noticeable is the fact that pulse shaping increases PAPR",
"title": ""
},
{
"docid": "419f6e534c04e169a998865f71ee9488",
"text": "Stroma in the tumor microenvironment plays a critical role in cancer progression, but how it promotes metastasis is poorly understood. Exosomes are small vesicles secreted by many cell types and enable a potent mode of intercellular communication. Here, we report that fibroblast-secreted exosomes promote breast cancer cell (BCC) protrusive activity and motility via Wnt-planar cell polarity (PCP) signaling. We show that exosome-stimulated BCC protrusions display mutually exclusive localization of the core PCP complexes, Fzd-Dvl and Vangl-Pk. In orthotopic mouse models of breast cancer, coinjection of BCCs with fibroblasts dramatically enhances metastasis that is dependent on PCP signaling in BCCs and the exosome component, Cd81 in fibroblasts. Moreover, we demonstrate that trafficking in BCCs promotes tethering of autocrine Wnt11 to fibroblast-derived exosomes. This work reveals an intercellular communication pathway whereby fibroblast exosomes mobilize autocrine Wnt-PCP signaling to drive BCC invasive behavior.",
"title": ""
},
{
"docid": "b6303ae2b77ac5c187694d5320ef65ff",
"text": "Mechanisms for continuously changing or shifting a system's attack surface are emerging as game-changers in cyber security. In this paper, we propose a novel defense mechanism for protecting the identity of nodes in Mobile Ad Hoc Networks and defeat the attacker's reconnaissance efforts. The proposed mechanism turns a classical attack mechanism - Sybil - into an effective defense mechanism, with legitimate nodes periodically changing their virtual identity in order to increase the uncertainty for the attacker. To preserve communication among legitimate nodes, we modify the network layer by introducing (i) a translation service for mapping virtual identities to real identities; (ii) a protocol for propagating updates of a node's virtual identity to all legitimate nodes; and (iii) a mechanism for legitimate nodes to securely join the network. We show that the proposed approach is robust to different types of attacks, and also show that the overhead introduced by the update protocol can be controlled by tuning the update frequency.",
"title": ""
},
{
"docid": "7a8979f96411ef37c079d85c77c03bac",
"text": "Ankle-foot orthoses (AFOs) are orthotic devices that support the movement of the ankles of disabled people, for example, those suffering from hemiplegia or peroneal nerve palsy. We have developed an intelligently controllable AFO (i-AFO) in which the ankle torque is controlled by a compact magnetorheological fluid brake. Gait-control tests with the i-AFO were performed for a patient with flaccid paralysis of the ankles, who has difficulty in voluntary movement of the peripheral part of the inferior limb, and physical limitations on his ankles. By using the i-AFO, his gait control was improved by prevention of drop foot in the swing phase and by forward promotion in the stance phase.",
"title": ""
},
{
"docid": "2c69eb4be7bc2bed32cfbbbe3bc41a5d",
"text": "The Sapienza University Networking framework for underwater Simulation Emulation and real-life Testing (SUNSET) is a toolkit for the implementation and testing of protocols for underwater sensor networks. SUNSET enables a radical new way of performing experimental research on underwater communications. It allows protocol designers and implementors to easily realize their solutions and to evaluate their performance through simulation, in-lab emulation and trials at sea in a direct and transparent way, and independently of specific underwater hardware platforms. SUNSET provides a complete toolchain of predeployment and deployment time tools able to identify risks, malfunctioning and under-performing solutions before incurring the expense of going to sea. Novel underwater systems can therefore be rapidly and easily investigated. Heterogeneous underwater communication technologies from different vendors can be used, allowing the evaluation of the impact of different combinations of hardware and software on the overall system performance. Using SUNSET, underwater devices can be reconfigured and controlled remotely in real time, using acoustic links. This allows the performance investigation of underwater systems under different settings and configurations and significantly reduces the cost and complexity of at-sea trials. This paper describes the architectural concept of SUNSET and presents some exemplary results of its use in the field. The SUNSET framework has been extensively validated during more than fifteen at-sea experimental campaigns in the past four years. Several of these have been conducted jointly with the NATO STO Centre for Maritime Research and Experimentation (CMRE) under a collaboration between the University of Rome and CMRE.",
"title": ""
},
{
"docid": "2dc261ab24914dd3f865b8ede5b71be9",
"text": "Twitter has become as much of a news media as a social network, and much research has turned to analyzing its content for tracking real-world events, from politics to sports and natural disasters. This paper describes the techniques we employed for the SNOW Data Challenge 2014, described in [16]. We show that aggressive filtering of tweets based on length and structure, combined with hierarchical clustering of tweets and ranking of the resulting clusters, achieves encouraging results. We present empirical results and discussion for two different Twitter streams focusing on the US presidential elections in 2012 and the recent events about Ukraine, Syria and the Bitcoin, in February 2014.",
"title": ""
},
{
"docid": "4804b3e0b8c2633ab0949bd98f900bb5",
"text": "Secure Simple Pairing (SSP), a characteristic of the Bluetooth Core Version 2.1 specification was build to address two foremost concerns amongst the Bluetooth user community: security and simplicity of the pairing process. It utilizes Elliptic Curve Diffie-Hellmen (ECDH) protocol for generating keys for the first time in Bluetooth pairing. It provides the security properties known session key security, forward security, resistance to key-compromise impersonation attack and to unknown key-share attack, key control. This paper presents the simulation and security analysis of Bluetooth pairing protocol for numeric comparison using ECDH in NS2. The protocol also employs SAGEMATH for cryptographic functions.",
"title": ""
},
{
"docid": "1499fd10ee703afd1d5b3ec35defa26b",
"text": "It is challenging to analyze the aerial locomotion of bats because of the complicated and intricate relationship between their morphology and flight capabilities. Developing a biologically inspired bat robot would yield insight into how bats control their body attitude and position through the complex interaction of nonlinear forces (e.g., aerodynamic) and their intricate musculoskeletal mechanism. The current work introduces a biologically inspired soft robot called Bat Bot (B2). The overall system is a flapping machine with 5 Degrees of Actuation (DoA). This work reports on some of the preliminary untethered flights of B2. B2 has a nontrivial morphology and it has been designed after examining several biological bats. Key DoAs, which contribute significantly to bat flight, are picked and incorporated in B2's flight mechanism design. These DoAs are: 1) forelimb flapping motion, 2) forelimb mediolateral motion (folding and unfolding) and 3) hindlimb dorsoventral motion (upward and downward movement).",
"title": ""
},
{
"docid": "f9ee82dcf1cce6d41a7f106436ee3a7d",
"text": "The Automatic Identification System (AIS) is based on VHF radio transmissions of ships' identity, position, speed and heading, in addition to other key parameters. In 2004, the Norwegian Defence Research Establishment (FFI) undertook studies to evaluate if the AIS signals could be detected in low Earth orbit. Since then, the interest in Space-Based AIS reception has grown significantly, and both public and private sector organizations have established programs to study the issue, and demonstrate such a capability in orbit. FFI is conducting two such programs. The objective of the first program was to launch a nano-satellite equipped with an AIS receiver into a near polar orbit, to demonstrate Space-Based AIS reception at high latitudes. The satellite was launched from India 12th July 2010. Even though the satellite has not finished commissioning, the receiver is operated with real-time transmission of received AIS data to the Norwegian Coastal Administration. The second program is an ESA-funded project to operate an AIS receiver on the European Columbus module of the International Space Station. Mounting of the equipment, the NORAIS receiver, was completed in April 2010. Currently, the AIS receiver has operated for more than three months, picking up several million AIS messages from more than 60 000 ship identities. In this paper, we will present experience gained with the space-based AIS systems, highlight aspects of tracking ships throughout their voyage, and comment on possible contributions to port security.",
"title": ""
},
{
"docid": "b954fa908229bdc0e514b2e21246b064",
"text": "The study of small-size animal models, such as the roundworm C. elegans, has provided great insight into several in vivo biological processes, extending from cell apoptosis to neural network computing. The physical manipulation of this micron-sized worm has always been a challenging task. Here, we discuss the applications, capabilities and future directions of a new family of worm manipulation tools, the 'worm chips'. Worm chips are microfabricated devices capable of precisely manipulating single worms or a population of worms and their environment. Worm chips pose a paradigm shift in current methodologies as they are capable of handling live worms in an automated fashion, opening up a new direction in in vivo small-size organism studies.",
"title": ""
},
{
"docid": "94c47638f35abc67c366ceb871898b86",
"text": "The past few years have seen a growing interest in the application\" of three-dimensional image processing. With the increasing demand for 3-D spatial information for tasks of passive navigation[7,12], automatic surveillance[9], aerial cartography\\l0,l3], and inspection in industrial automation, the importance of effective stereo analysis has been made quite clear. A particular challenge is to provide reliable and accurate depth data for input to object or terrain modelling systems (such as [5]. This paper describes an algorithm for such stereo sensing It uses an edge-based line-by-line stereo correlation scheme, and appears to be fast, robust, and parallel implementable. The processing consists of extracting edge descriptions for a stereo pair of images, linking these edges to their nearest neighbors to obtain the edge connectivity structure, correlating the edge descriptions on the basis of local edge properties, then cooperatively removmg those edge correspondences determined to be in error those which violate the connectivity structure of the two images. A further correlation process, using a technique similar to that used for the edges, is applied to the image intensity values over intervals defined by the previous correlation The result of the processing is a full image array disparity map of the scene viewed. Mechanism and Constraints Edge-based stereo uses operators to reduce an image to a depiction of its intensity boundaries, which are then correlated. Area-based stereo uses area windowing mechanisms to measure local statistical properties of the intensities, which can then be correlated. The system described here deals, initially, with the former, edges, because of the: a) reduced combinatorics (there are fewer edges than pixels), b) greater accuracy (edges can be positioned to sub-pixel precision, while area positioning precision is inversely proportional to window size, and considerably poorer), and c) more realistic in variance assumptions (area-based analysis presupposes that the photometric properties of a scene arc invariant to viewing position, while edge-based analysis works with the assumption that it is the geometric properties that are invariant to viewing position). Edges are found by a convolution operator They are located at positions in the image where a change in sign of second difference in intensity occurs. A particular operator, the one described here being 1 by 7 pixels in size, measures the directional first difference in intensity at each pixel' Second differences are computed from these, and changes in sign of these second differences are used to interpolate sero crossings (i.e. peaks in first difference). Certain local properties other than position are measured and associated with each edge contrast, image slope, and intensity to either side and links are kept to nearest neighbours above, below, and to the sides. It is these properties that define an edge and provide the basis for the correlation (see the discussions in [1,2]). The correlation is & search for edge correspondence between images Fig. 2 shows the edges found in the two images of fig. 
1 with the second difference operator (note, all stereo pairs in this paper are drawn for cross-eyed viewing) Although the operator works in both horizontal and vertical directions, it only allows correlation on edges whose horizontal gradient lies above the noise one standard deviation of the first difference in intensity With no prior knowledge of the viewing situation, one could have any edge in one image matching any edge in the other. By constraining the geometry of the cameras during picture taking one can vastly limit the computation that is required in determining corresponding edges in the two images. Consider fig. 3. If two balanced, equal focal length cameras are arranged with axes parallel, then they can be conceived of as sharing a single common image plane. Any point in the scene will project to two points on that joint image plane (one through each of the two lens centers), the connection of which will produce a line parallel to the baseline between the cameras. Thus corresponding edges in the two images must lie along the tame line in the joint image plane This line is termed an epipolar line. If the baseline between the two cameras happens to be parallel to an axis of the cameras, then the correlation only need consider edges lying along corresponding lines parallel to that axis in the two images. Fig. 3 indicates this camera geometry a geometry which produces rectified The edge operator is simple, basically one dimensional, and is noteworthy only in that it it fast and fairly effective.",
"title": ""
},
{
"docid": "26fef7add5f873aa7ec08bff979ef77c",
"text": "Citation: Nermin Kamal., et al. “Restorability of Teeth: A Numerical Simplified Restorative Decision-Making Chart”. EC Dental Science 17.6 (2018): 961-967. Abstract A decision to extract or to keep a tooth was always a debatable matter in dentistry. Each dental specialty has its own perspective in that regards. Although, real life in the dental clinic showed that the decision is always multi-disciplinary, and that full awareness of all aspects should be there in order to reach to a reliable outcome. This article presents a simple evidence-based clinical chart for the judgment of restorability of teeth for better treatment planning.",
"title": ""
},
{
"docid": "8cff1a60fd0eeb60924333be5641ca83",
"text": "Since Wireless Sensor Networks (WSNs) are composed of a set of sensor nodes that limit resource constraints such as energy constraints, energy consumption in WSNs is one of the challenges of these networks. One of the solutions to reduce energy consumption in WSNs is to use clustering. In clustering, cluster members send their data to their Cluster Head (CH), and the CH after collecting the data, sends them to the Base Station (BS). In clustering, choosing CHs is very important; so many methods have proposed to choose the CH. In this study, a hesitant fuzzy method with three input parameters namely, remaining energy, distance to the BS, distance to the center of cluster is proposed for efficient cluster head selection in WSNs. We define different scenarios and simulate them, then investigate the results of simulation.",
"title": ""
},
{
"docid": "9c74b77e79217602bb21a36a5787ed59",
"text": "Ship detection on spaceborne images has attracted great interest in the applications of maritime security and traffic control. Optical images stand out from other remote sensing images in object detection due to their higher resolution and more visualized contents. However, most of the popular techniques for ship detection from optical spaceborne images have two shortcomings: 1) Compared with infrared and synthetic aperture radar images, their results are affected by weather conditions, like clouds and ocean waves, and 2) the higher resolution results in larger data volume, which makes processing more difficult. Most of the previous works mainly focus on solving the first problem by improving segmentation or classification with complicated algorithms. These methods face difficulty in efficiently balancing performance and complexity. In this paper, we propose a ship detection approach to solving the aforementioned two issues using wavelet coefficients extracted from JPEG2000 compressed domain combined with deep neural network (DNN) and extreme learning machine (ELM). Compressed domain is adopted for fast ship candidate extraction, DNN is exploited for high-level feature representation and classification, and ELM is used for efficient feature pooling and decision making. Extensive experiments demonstrate that, in comparison with the existing relevant state-of-the-art approaches, the proposed method requires less detection time and achieves higher detection accuracy.",
"title": ""
},
{
"docid": "1e25480ef6bd5974fcd806aac7169298",
"text": "Alphabetical ciphers are being used since centuries for inducing confusion in messages, but there are some drawbacks that are associated with Classical alphabetic techniques like concealment of key and plaintext. Here in this paper we will suggest an encryption technique that is a blend of both classical encryption as well as modern technique, this hybrid technique will be superior in terms of security than average Classical ciphers.",
"title": ""
},
{
"docid": "e0eded1237c635af3c762f6bbe5d1b26",
"text": "Locating boundaries between coherent and/or repetitive segments of a time series is a challenging problem pervading many scientific domains. In this paper we propose an unsupervised method for boundary detection, combining three basic principles: novelty, homogeneity, and repetition. In particular, the method uses what we call structure features, a representation encapsulating both local and global properties of a time series. We demonstrate the usefulness of our approach in detecting music structure boundaries, a task that has received much attention in recent years and for which exist several benchmark datasets and publicly available annotations. We find our method to significantly outperform the best accuracies published so far. Importantly, our boundary approach is generic, thus being applicable to a wide range of time series beyond the music and audio domains.",
"title": ""
}
] | scidocsrr |
bbc4986971a6a5b4daf955c0991530a2 | A Survey on Deep Learning Toolkits and Libraries for Intelligent User Interfaces | [
{
"docid": "d5faccc7187a185f6e287a7cc29f0878",
"text": "The revival of deep neural networks and the availability of ImageNet laid the foundation for recent success in highly complex recognition tasks. However, ImageNet does not cover all visual concepts of all possible application scenarios. Hence, application experts still record new data constantly and expect the data to be used upon its availability. In this paper, we follow this observation and apply the classical concept of fine-tuning deep neural networks to scenarios where data from known or completely new classes is continuously added. Besides a straightforward realization of continuous fine-tuning, we empirically analyze how computational burdens of training can be further reduced. Finally, we visualize how the network’s attention maps evolve over time which allows for visually investigating what the network learned during continuous fine-tuning.",
"title": ""
},
{
"docid": "d053f8b728f94679cd73bc91193f0ba6",
"text": "Deep learning is an important new area of machine learning which encompasses a wide range of neural network architectures designed to complete various tasks. In the medical imaging domain, example tasks include organ segmentation, lesion detection, and tumor classification. The most popular network architecture for deep learning for images is the convolutional neural network (CNN). Whereas traditional machine learning requires determination and calculation of features from which the algorithm learns, deep learning approaches learn the important features as well as the proper weighting of those features to make predictions for new data. In this paper, we will describe some of the libraries and tools that are available to aid in the construction and efficient execution of deep learning as applied to medical images.",
"title": ""
}
] | [
{
"docid": "e602ab2a2d93a8912869ae8af0925299",
"text": "Software-based MMU emulation lies at the heart of outof-VM live memory introspection, an important technique in the cloud setting that applications such as live forensics and intrusion detection depend on. Due to the emulation, the software-based approach is much slower compared to native memory access by the guest VM. The slowness not only results in undetected transient malicious behavior, but also inconsistent memory view with the guest; both undermine the effectiveness of introspection. We propose the immersive execution environment (ImEE) with which the guest memory is accessed at native speed without any emulation. Meanwhile, the address mappings used within the ImEE are ensured to be consistent with the guest throughout the introspection session. We have implemented a prototype of the ImEE on Linux KVM. The experiment results show that ImEE-based introspection enjoys a remarkable speed up, performing several hundred times faster than the legacy method. Hence, this design is especially useful for realtime monitoring, incident response and high-intensity introspection.",
"title": ""
},
{
"docid": "902a60b23d65c644877b350c63b86ba8",
"text": "The Internet of Things (IoT) is set to occupy a substantial component of future Internet. The IoT connects sensors and devices that record physical observations to applications and services of the Internet[1]. As a successor to technologies such as RFID and Wireless Sensor Networks (WSN), the IoT has stumbled into vertical silos of proprietary systems, providing little or no interoperability with similar systems. As the IoT represents future state of the Internet, an intelligent and scalable architecture is required to provide connectivity between these silos, enabling discovery of physical sensors and interpretation of messages between the things. This paper proposes a gateway and Semantic Web enabled IoT architecture to provide interoperability between systems, which utilizes established communication and data standards. The Semantic Gateway as Service (SGS) allows translation between messaging protocols such as XMPP, CoAP and MQTT via a multi-protocol proxy architecture. Utilization of broadly accepted specifications such as W3Cs Semantic Sensor Network (SSN) ontology for semantic annotations of sensor data provide semantic interoperability between messages and support semantic reasoning to obtain higher-level actionable knowledge from low-level sensor data.",
"title": ""
},
{
"docid": "e6f8fcdf69ccde7528a3dc60ee0b9907",
"text": "This work provides a forensic analysis method for a directory index in NTFS file system. NTFS employed B-tree indexing for providing efficient storage of many files and fast lookups, which changes in a structure of the directory index when files are operated. As a forensic view point, we observe behaviors of the B-tree to analyze files that once existed in the directory. However, it is difficult to analyze the allocated index entry when the file commands are executed. So, this work treats a forensic method for a directory index, especially when there are a large number of files in the directory. The index entry records are naturally expanded, then we examine how the index entry records are configured in the index tree. And we provide information that how the directory index nodes are changed and how the index entries remain traces in the index entry record with a computer forensic point of view when the files are deleted.",
"title": ""
},
{
"docid": "51dcb89aa02a09a15d41d10a2af0315e",
"text": "In order to combat a variety of pests, pesticides are widely used in fruits. Several extraction procedures (liquid extraction, single drop microextraction, microwave-assisted extraction, pressurized liquid extraction, supercritical fluid extraction, solid-phase extraction, solid-phase microextraction, matrix solid-phase dispersion, and stir bar sorptive extraction) have been reported to determine pesticide residues in fruits and fruit juices. The significant change in recent years is the introduction of the Quick, Easy, Cheap, Effective, Rugged, and Safe (QuEChERS) methods in these matrices analysis. A combination of techniques reported the use of new extraction methods and chromatography to provide better quantitative recoveries at low levels. The use of mass spectrometric detectors in combination with liquid and gas chromatography has played a vital role to solve many problems related to food safety. The main attention in this review is on the achievements that have been possible because of the progress in extraction methods and the latest advances and novelties in mass spectrometry, and how these progresses have influenced the best control of food, allowing for an increase in the food safety and quality standards.",
"title": ""
},
{
"docid": "114e2a9d3b502164ad06cbde59b682b6",
"text": "As the emerging field of machine learning, deep learning shows excellent ability in solving complex learning problems. However, the size of the networks becomes increasingly large scale due to the demands of the practical applications, which poses significant challenge to construct a high performance implementations of deep learning neural networks. In order to improve the performance as well as to maintain the low power cost, in this paper we design deep learning accelerator unit (DLAU), which is a scalable accelerator architecture for large-scale deep learning networks using field-programmable gate array (FPGA) as the hardware prototype. The DLAU accelerator employs three pipelined processing units to improve the throughput and utilizes tile techniques to explore locality for deep learning applications. Experimental results on the state-of-the-art Xilinx FPGA board demonstrate that the DLAU accelerator is able to achieve up to $36.1 {\\times }$ speedup comparing to the Intel Core2 processors, with the power consumption at 234 mW.",
"title": ""
},
{
"docid": "233427420d0ff900736ca0692b281ed5",
"text": "Machine learning is useful for grid-based crime prediction. Many previous studies have examined factors including time, space, and type of crime, but the geographic characteristics of the grid are rarely discussed, leaving prediction models unable to predict crime displacement. This study incorporates the concept of a criminal environment in grid-based crime prediction modeling, and establishes a range of spatial-temporal features based on 84 types of geographic information by applying the Google Places API to theft data for Taoyuan City, Taiwan. The best model was found to be Deep Neural Networks, which outperforms the popular Random Decision Forest, Support Vector Machine, and K-Near Neighbor algorithms. After tuning, compared to our design’s baseline 11-month moving average, the F1 score improves about 7% on 100-by-100 grids. Experiments demonstrate the importance of the geographic feature design for improving performance and explanatory ability. In addition, testing for crime displacement also shows that our model design outperforms the baseline.",
"title": ""
},
{
"docid": "f1212fec5368307451fc6513eadb43ba",
"text": "The finding that very large networks can be trained efficiently and reliably has led to a paradigm shift in computer vision from engineered solutions to learning formulations. As a result, the research challenge shifts from devising algorithms to creating suitable and abundant training data for supervised learning. How to efficiently create such training data? The dominant data acquisition method in visual recognition is based on web data and manual annotation. Yet, for many computer vision problems, such as stereo or optical flow estimation, this approach is not feasible because humans cannot manually enter a pixel-accurate flow field. In this paper, we promote the use of synthetically generated data for the purpose of training deep networks on such tasks. We suggest multiple ways to generate such data and evaluate the influence of dataset properties on the performance and generalization properties of the resulting networks. We also demonstrate the benefit of learning schedules that use different types of data at selected stages of the training process.",
"title": ""
},
{
"docid": "eadba0f4aa52b20b0a512cc3d869146d",
"text": "This paper first describes the phenomenon of Gaussian pulse spread due to numerical dispersion in the finite-difference time-domain (FDTD) method for electromagnetic computation. This effect is undesired, as it reduces the precision with which multipath pulses can be resolved in the time domain. The quantification of the pulse spread is thus useful to evaluate the accuracy of pulsed FDTD simulations. Then, using a linear approximation to the numerical phase delay, a formula to predict the pulse duration is developed. Later, this formula is used to design a Gaussian source that keeps the spread of numerical pulses bounded in wideband FDTD. Finally, the developed model and the approximation are validated via simulations.",
"title": ""
},
{
"docid": "d4f15a40e12d823a943097e08368fec1",
"text": "Wearable or attachable health monitoring smart systems are considered to be the next generation of personal portable devices for remote medicine practices. Smart flexible sensing electronics are components crucial in endowing health monitoring systems with the capability of real-time tracking of physiological signals. These signals are closely associated with body conditions, such as heart rate, wrist pulse, body temperature, blood/intraocular pressure and blood/sweat bio-information. Monitoring such physiological signals provides a convenient and non-invasive way for disease diagnoses and health assessments. This Review summarizes the recent progress of flexible sensing electronics for their use in wearable/attachable health monitoring systems. Meanwhile, we present an overview of different materials and configurations for flexible sensors, including piezo-resistive, piezo-electrical, capacitive, and field effect transistor based devices, and analyze the working principles in monitoring physiological signals. In addition, the future perspectives of wearable healthcare systems and the technical demands on their commercialization are briefly discussed.",
"title": ""
},
{
"docid": "461d0b9ca1d0f1395d98cb18b2f45a0f",
"text": "Semantic maps augment metric-topological maps with meta-information, i.e. semantic knowledge aimed at the planning and execution of high-level robotic tasks. Semantic knowledge typically encodes human-like concepts, like types of objects and rooms, which are connected to sensory data when symbolic representations of percepts from the robot workspace are grounded to those concepts. This symbol grounding is usually carried out by algorithms that individually categorize each symbol and provide a crispy outcome – a symbol is either a member of a category or not. Such approach is valid for a variety of tasks, but it fails at: (i) dealing with the uncertainty inherent to the grounding process, and (ii) jointly exploiting the contextual relations among concepts (e.g. microwaves are usually in kitchens). This work provides a solution for probabilistic symbol grounding that overcomes these limitations. Concretely, we rely on Conditional Random Fields (CRFs) to model and exploit contextual relations, and to provide measurements about the uncertainty coming from the possible groundings in the form of beliefs (e.g. an object can be categorized (grounded) as a microwave or as a nightstand with beliefs 0.6 and 0.4, respectively). Our solution is integrated into a novel semantic map representation called Multiversal Semantic Map (MvSmap ), which keeps the different groundings, or universes, as instances of ontologies annotated with the obtained beliefs for their posterior exploitation. The suitability of our proposal has been proven with the Robot@Home dataset, a repository that contains challenging multi-modal sensory information gathered by a mobile robot in home environments.",
"title": ""
},
{
"docid": "5b0e088e2bddd0535bc9d2dfbfeb0298",
"text": "We had previously shown that regularization principles lead to approximation schemes that are equivalent to networks with one layer of hidden units, called regularization networks. In particular, standard smoothness functionals lead to a subclass of regularization networks, the well known radial basis functions approximation schemes. This paper shows that regularization networks encompass a much broader range of approximation schemes, including many of the popular general additive models and some of the neural networks. In particular, we introduce new classes of smoothness functionals that lead to different classes of basis functions. Additive splines as well as some tensor product splines can be obtained from appropriate classes of smoothness functionals. Furthermore, the same generalization that extends radial basis functions (RBF) to hyper basis functions (HBF) also leads from additive models to ridge approximation models, containing as special cases Breiman's hinge functions, some forms of projection pursuit regression, and several types of neural networks. We propose to use the term generalized regularization networks for this broad class of approximation schemes that follow from an extension of regularization. In the probabilistic interpretation of regularization, the different classes of basis functions correspond to different classes of prior probabilities on the approximating function spaces, and therefore to different types of smoothness assumptions. In summary, different multilayer networks with one hidden layer, which we collectively call generalized regularization networks, correspond to different classes of priors and associated smoothness functionals in a classical regularization principle. Three broad classes are (1) radial basis functions that can be generalized to hyper basis functions, (2) some tensor product splines, and (3) additive splines that can be generalized to schemes of the type of ridge approximation, hinge functions, and several perceptron-like neural networks with one hidden layer.",
"title": ""
},
{
"docid": "3130e666076d119983ac77c5d77d0aed",
"text": "of Ph.D. dissertation, University of Haifa, Israel.",
"title": ""
},
{
"docid": "057a6fc7c761006d49cceea9a05e35e5",
"text": "For large global enterprises, providing adequate resources for organizational acculturation, the process in which employees learn about an organization's culture, remains a challenge. We present results from a survey of 802 users from an enterprise social networking site that identifies two groups of employees (new to the company and geographically distant from headquarters) that perceive higher benefit from using a SNS to learn about the organization's values and beliefs. In addition, we observe regional differences in viewing behaviors between two groups of new employees. These results suggest that a SNS can also potentially contribute to the information-seeking and sense-making activities that underlie organization acculturation.",
"title": ""
},
{
"docid": "357e09114978fc0ac1fb5838b700e6ca",
"text": "Instance level video object segmentation is an important technique for video editing and compression. To capture the temporal coherence, in this paper, we develop MaskRNN, a recurrent neural net approach which fuses in each frame the output of two deep nets for each object instance — a binary segmentation net providing a mask and a localization net providing a bounding box. Due to the recurrent component and the localization component, our method is able to take advantage of long-term temporal structures of the video data as well as rejecting outliers. We validate the proposed algorithm on three challenging benchmark datasets, the DAVIS-2016 dataset, the DAVIS-2017 dataset, and the Segtrack v2 dataset, achieving state-of-the-art performance on all of them.",
"title": ""
},
{
"docid": "a7456ecf7af7e447cdde61f371128965",
"text": "For most deep learning practitioners, sequence modeling is synonymous with recurrent networks. Yet recent results indicate that convolutional architectures can outperform recurrent networks on tasks such as audio synthesis and machine translation. Given a new sequence modeling task or dataset, which architecture should one use? We conduct a systematic evaluation of generic convolutional and recurrent architectures for sequence modeling. The models are evaluated across a broad range of standard tasks that are commonly used to benchmark recurrent networks. Our results indicate that a simple convolutional architecture outperforms canonical recurrent networks such as LSTMs across a diverse range of tasks and datasets, while demonstrating longer effective memory. We conclude that the common association between sequence modeling and recurrent networks should be reconsidered, and convolutional networks should be regarded as a natural starting point for sequence modeling tasks. To assist related work, we have made code available at http://github.com/locuslab/TCN.",
"title": ""
},
{
"docid": "8aca118a1171c2c3fd7057468adc84b2",
"text": "Automatically constructing a complete documentary or educational film from scattered pieces of images and knowledge is a significant challenge. Even when this information is provided in an annotated format, the problems of ordering, structuring and animating sequences of images, and producing natural language descriptions that correspond to those images within multiple constraints, are each individually difficult tasks. This paper describes an approach for tackling these problems through a combination of rhetorical structures with narrative and film theory to produce movie-like visual animations from still images along with natural language generation techniques needed to produce text descriptions of what is being seen in the animations. The use of rhetorical structures from NLG is used to integrate separate components for video creation and script generation. We further describe an implementation, named GLAMOUR, that produces actual, short video documentaries, focusing on a cultural heritage domain, and that have been evaluated by professional filmmakers. 2005 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "10ca113b333bf891beff38bd84914324",
"text": "In multi-agent, multi-user environments, users as well as agents should have a means of establishing who is talking to whom. In this paper, we present an experiment aimed at evaluating whether gaze directional cues of users could be used for this purpose. Using an eye tracker, we measured subject gaze at the faces of conversational partners during four-person conversations. Results indicate that when someone is listening or speaking to individuals, there is indeed a high probability that the person looked at is the person listened (p=88%) or spoken to (p=77%). We conclude that gaze is an excellent predictor of conversational attention in multiparty conversations. As such, it may form a reliable source of input for conversational systems that need to establish whom the user is speaking or listening to. We implemented our findings in FRED, a multi-agent conversational system that uses eye input to gauge which agent the user is listening or speaking to.",
"title": ""
},
{
"docid": "9d7e520928aa2fdeab7fbfe4fe2258ed",
"text": "Psychomotor stimulants and neuroleptics exert multiple effects on dopaminergic signaling and produce the dopamine (DA)-related behaviors of motor activation and catalepsy, respectively. However, a clear relationship between dopaminergic activity and behavior has been very difficult to demonstrate in the awake animal, thus challenging existing notions about the mechanism of these drugs. The present study examined whether the drug-induced behaviors are linked to a presynaptic site of action, the DA transporter (DAT) for psychomotor stimulants and the DA autoreceptor for neuroleptics. Doses of nomifensine (7 mg/kg i.p.), a DA uptake inhibitor, and haloperidol (0.5 mg/kg i.p.), a dopaminergic antagonist, were selected to examine characteristic behavioral patterns for each drug: stimulant-induced motor activation in the case of nomifensine and neuroleptic-induced catalepsy in the case of haloperidol. Presynaptic mechanisms were quantified in situ from extracellular DA dynamics evoked by electrical stimulation and recorded by voltammetry in the freely moving animal. In the first experiment, the maximal concentration of electrically evoked DA ([DA](max)) measured in the caudate-putamen was found to reflect the local, instantaneous change in presynaptic DAT or DA autoreceptor activity according to the ascribed action of the drug injected. A positive temporal association was found between [DA](max) and motor activation following nomifensine (r=0.99) and a negative correlation was found between [DA](max) and catalepsy following haloperidol (r=-0.96) in the second experiment. Taken together, the results suggest that a dopaminergic presynaptic site is a target of systemically applied psychomotor stimulants and regulates the postsynaptic action of neuroleptics during behavior. This finding was made possible by a voltammetric microprobe with millisecond temporal resolution and its use in the awake animal to assess release and uptake, two key mechanisms of dopaminergic neurotransmission. Moreover, the results indicate that presynaptic mechanisms may play a more important role in DA-behavior relationships than is currently thought.",
"title": ""
},
{
"docid": "8f494ce7965747ab0f90c1543dd3c02e",
"text": "The world is becoming urban. The UN predicts that the world's urban population will almost double from 3·3 billion in 2007 to 6·3 billion in 2050. Most of this increase will be in developing countries. Exponential urban growth is having a profound effect on global health. Because of international travel and migration, cities are becoming important hubs for the transmission of infectious diseases, as shown by recent pandemics. Physicians in urban environments in developing and developed countries need to be aware of the changes in infectious diseases associated with urbanisation. Furthermore, health should be a major consideration in town planning to ensure urbanisation works to reduce the burden of infectious diseases in the future.",
"title": ""
},
{
"docid": "b008f4477ec7bdb80bc88290a57e5883",
"text": "Artificial Neural networks purport to be biomimetic, but are by definition acyclic computational graphs. As a corollary, neurons in artificial nets fire only once and have no time-dynamics. Both these properties contrast with what neuroscience has taught us about human brain connectivity, especially with regards to object recognition. We therefore propose a way to simulate feedback loops in the brain by unrolling loopy neural networks several timesteps, and investigate the properties of these networks. We compare different variants of loops, including multiplicative composition of inputs and additive composition of inputs. We demonstrate that loopy networks outperform deep feedforward networks with the same number of parameters on the CIFAR-10 dataset, as well as nonloopy versions of the same network, and perform equally well on the MNIST dataset. In order to further understand our models, we visualize neurons in loop layers with guided backprop, demonstrating that the same filters behave increasingly nonlinearly at higher unrolling levels. Furthermore, we interpret loops as attention mechanisms, and demonstrate that the composition of the loop output with the input image produces images that look qualitatively like attention maps.",
"title": ""
}
] | scidocsrr |
d329a8777725e85d84e5ef4d16d84a8c | Modelling Competitive Sports: Bradley-Terry-Élő Models for Supervised and On-Line Learning of Paired Competition Outcomes | [
{
"docid": "8c043576bd1a73b783890cdba3a5e544",
"text": "We present a novel approach to collaborative prediction, using low-norm instead of low-rank factorizations. The approach is inspired by, and has strong connections to, large-margin linear discrimination. We show how to learn low-norm factorizations by solving a semi-definite program, and discuss generalization error bounds for them.",
"title": ""
}
] | [
{
"docid": "bceb4e66638fba85a5b5d94e8546e4ee",
"text": "Data grows at the impressive rate of 50% per year, and 75% of the digital world is a copy! Although keeping multiple copies of data is necessary to guarantee their availability and long term durability, in many situations the amount of data redundancy is immoderate. By keeping a single copy of repeated data, data deduplication is considered as one of the most promising solutions to reduce the storage costs, and improve users experience by saving network bandwidth and reducing backup time. However, this solution must now solve many security issues to be completely satisfying. In this paper we target the attacks from malicious clients that are based on the manipulation of data identifiers and those based on backup time and network traffic observation. We present a deduplication scheme mixing an intraand an inter-user deduplication in order to build a storage system that is secure against the aforementioned type of attacks by controlling the correspondence between files and their identifiers, and making the inter-user deduplication unnoticeable to clients using deduplication proxies. Our method provides global storage space savings, per-client bandwidth network savings between clients and deduplication proxies, and global network bandwidth savings between deduplication proxies and the storage server. The evaluation of our solution compared to a classic system shows that the overhead introduced by our scheme is mostly due to data encryption which is necessary to ensure confidentiality.",
"title": ""
},
{
"docid": "f95e568513847369eba15e154461a3c1",
"text": "We address the problem of identifying the domain of onlinedatabases. More precisely, given a set F of Web forms automaticallygathered by a focused crawler and an online databasedomain D, our goal is to select from F only the formsthat are entry points to databases in D. Having a set ofWebforms that serve as entry points to similar online databasesis a requirement for many applications and techniques thataim to extract and integrate hidden-Web information, suchas meta-searchers, online database directories, hidden-Webcrawlers, and form-schema matching and merging.We propose a new strategy that automatically and accuratelyclassifies online databases based on features that canbe easily extracted from Web forms. By judiciously partitioningthe space of form features, this strategy allows theuse of simpler classifiers that can be constructed using learningtechniques that are better suited for the features of eachpartition. Experiments using real Web data in a representativeset of domains show that the use of different classifiersleads to high accuracy, precision and recall. This indicatesthat our modular classifier composition provides an effectiveand scalable solution for classifying online databases.",
"title": ""
},
{
"docid": "fb46f67ba94cb4d7dd7620e2bdf5f00e",
"text": "We design and implement TwinsCoin, the first cryptocurrency based on a provably secure and scalable public blockchain design using both proof-of-work and proof-of-stake mechanisms. Different from the proof-of-work based Bitcoin, our construction uses two types of resources, computing power and coins (i.e., stake). The blockchain in our system is more robust than that in a pure proof-of-work based system; even if the adversary controls the majority of mining power, we can still have the chance to secure the system by relying on honest stake. In contrast, Bitcoin blockchain will be insecure if the adversary controls more than 50% of mining power.\n Our design follows a recent provably secure proof-of-work/proof-of-stake hybrid blockchain[11]. In order to make our construction practical, we considerably enhance its design. In particular, we introduce a new strategy for difficulty adjustment in the hybrid blockchain and provide a theoretical analysis of it. We also show how to construct a light client for proof-of-stake cryptocurrencies and evaluate the proposal practically.\n We implement our new design. Our implementation uses a recent modular development framework for blockchains, called Scorex. It allows us to change only certain parts of an application leaving other codebase intact. In addition to the blockchain implementation, a testnet is deployed. Source code is publicly available.",
"title": ""
},
{
"docid": "44368062de68f6faed57d43b8e691e35",
"text": "In this paper we explore one of the key aspects in building an emotion recognition system: generating suitable feature representations. We generate feature representations from both acoustic and lexical levels. At the acoustic level, we first extract low-level features such as intensity, F0, jitter, shimmer and spectral contours etc. We then generate different acoustic feature representations based on these low-level features, including statistics over these features, a new representation derived from a set of low-level acoustic codewords, and a new representation from Gaussian Supervectors. At the lexical level, we propose a new feature representation named emotion vector (eVector). We also use the traditional Bag-of-Words (BoW) feature. We apply these feature representations for emotion recognition and compare their performance on the USC-IEMOCAP database. We also combine these different feature representations via early fusion and late fusion. Our experimental results show that late fusion of both acoustic and lexical features achieves four-class emotion recognition accuracy of 69.2%.",
"title": ""
},
{
"docid": "b66a2ce976a145827b5b9a5dd2ad2495",
"text": "Compared to previous head-mounted displays, the compact and low-cost Oculus Rift has claimed to offer improved virtual reality experiences. However, how and what kinds of user experiences are encountered by people when using the Rift in actual gameplay has not been examined. We present an exploration of 10 participants' experiences of playing a first-person shooter game using the Rift. Despite cybersickness and a lack of control, participants experienced heightened experiences, a richer engagement with passive game elements, a higher degree of flow and a deeper immersion on the Rift than on a desktop setup. Overly demanding movements, such as the large range of head motion required to navigate the game environment were found to adversely affect gaming experiences. Based on these and other findings, we also present some insights for designing games for the Rift.",
"title": ""
},
{
"docid": "bb770a0cb686fbbb4ea1adb6b4194967",
"text": "Parental refusal of vaccines is a growing a concern for the increased occurrence of vaccine preventable diseases in children. A number of studies have looked into the reasons that parents refuse, delay, or are hesitant to vaccinate their child(ren). These reasons vary widely between parents, but they can be encompassed in 4 overarching categories. The 4 categories are religious reasons, personal beliefs or philosophical reasons, safety concerns, and a desire for more information from healthcare providers. Parental concerns about vaccines in each category lead to a wide spectrum of decisions varying from parents completely refusing all vaccinations to only delaying vaccinations so that they are more spread out. A large subset of parents admits to having concerns and questions about childhood vaccinations. For this reason, it can be helpful for pharmacists and other healthcare providers to understand the cited reasons for hesitancy so they are better prepared to educate their patients' families. Education is a key player in equipping parents with the necessary information so that they can make responsible immunization decisions for their children.",
"title": ""
},
{
"docid": "3b06ce783d353cff3cdbd9a60037162e",
"text": "The ability to abstract principles or rules from direct experience allows behaviour to extend beyond specific circumstances to general situations. For example, we learn the ‘rules’ for restaurant dining from specific experiences and can then apply them in new restaurants. The use of such rules is thought to depend on the prefrontal cortex (PFC) because its damage often results in difficulty in following rules. Here we explore its neural basis by recording from single neurons in the PFC of monkeys trained to use two abstract rules. They were required to indicate whether two successively presented pictures were the same or different depending on which rule was currently in effect. The monkeys performed this task with new pictures, thus showing that they had learned two general principles that could be applied to stimuli that they had not yet experienced. The most prevalent neuronal activity observed in the PFC reflected the coding of these abstract rules.",
"title": ""
},
{
"docid": "d3e35963e85ade6e3e517ace58cb3911",
"text": "In this paper, we present the design and evaluation of PeerDB, a peer-to-peer (P2P) distributed data sharing system. PeerDB distinguishes itself from existing P2P systems in several ways. First, it is a full-fledge data management system that supports fine-grain content-based searching. Second, it facilitates sharing of data without shared schema. Third, it combines the power of mobile agents into P2P systems to perform operations at peers’ sites. Fourth, PeerDB network is self-configurable, i.e., a node can dynamically optimize the set of peers that it can communicate directly with based on some optimization criterion. By keeping peers that provide most information or services in close proximity (i.e, direct communication), the network bandwidth can be better utilized and system performance can be optimized. We implemented and evaluated PeerDB on a cluster of 32 Pentium II PCs. Our experimental results show that PeerDB can effectively exploit P2P technologies for distributed data sharing.",
"title": ""
},
{
"docid": "96ee31337d66b8ccd3876c1575f9b10c",
"text": "Although different modeling techniques have been proposed during the last 300 years, the differential equation formalism proposed by Newton and Leibniz has been the tool of choice for modeling and problem solving Taylor (1996); Wainer (2009). Differential equations provide a formal mathematical method (sometimes also called an analytical method) for studying the entity of interest. Computational methods based on differential equations could not be easily applied in studying human-made dynamic systems (e.g., traffic controllers, robotic arms, automated factories, production plants, computer networks, VLSI circuits). These systems are usually referred to as discrete event systems because their states do not change continuously but, rather, because of the occurrence of events. This makes them asynchronous, inherently concurrent, and highly nonlinear, rendering their modeling and simulation different from that used in traditional approaches. In order to improve the model definition for this class of systems, a number of techniques were introduced, including Petri Nets, Finite State Machines, min-max algebra, Timed Automata, etc. Banks & Nicol. (2005); Cassandras (1993); Cellier & Kofman. (2006); Fishwick (1995); Law & Kelton (2000); Toffoli & Margolus. (1987). Wireless Sensor Network (WSN) is a discrete event system which consists of a network of sensor nodes equipped with sensing, computing, power, and communication modules to monitor certain phenomenon such as environmental data or object tracking Zhao & Guibas (2004). Emerging applications of wireless sensor networks are comprised of asset and warehouse *[email protected] †[email protected] ‡[email protected] 1",
"title": ""
},
{
"docid": "a412f5facafdb2479521996c05143622",
"text": "A temperature and supply independent on-chip reference relaxation oscillator for low voltage design is described. The frequency of oscillation is mainly a function of a PVT robust biasing current. The comparator for the relaxation oscillator is replaced with a high speed common-source stage to eliminate the temperature dependency of the comparator delay. The current sources and voltages are biased by a PVT robust references derived from a bandgap circuitry. This oscillator is designed in TSMC 65 nm CMOS process to operate with a minimum supply voltage of 1.4 V and consumes 100 μW at 157 MHz frequency of oscillation. The oscillator exhibits frequency variation of 1.6% for supply changes from 1.4 V to 1.9 V, and ±1.2% for temperature changes from 20°C to 120°C.",
"title": ""
},
{
"docid": "dc33d2edcfb124af607bcb817589f6e9",
"text": "In this letter, a novel coaxial line to substrate integrated waveguide (SIW) broadband transition is presented. The transition is designed by connecting the inner conductor of a coaxial line to an open-circuited SIW. The configuration directly transforms the TEM mode of a coaxial line to the fundamental TE10 mode of the SIW. A prototype back-to-back transition is fabricated for X-band operation using a 0.508 mm thick RO 4003C substrate with dielectric constant 3.55. Comparison with other reported transitions shows that the present structure provides lower passband insertion loss, wider bandwidth and most compact. The area of each transition is 0.08λg2 where λg is the guided wavelength at passband center frequency of f0 = 10.5 GHz. Measured 15 dB and 20 dB matching bandwidths are over 48% and 20%, respectively, at f0.",
"title": ""
},
{
"docid": "9734f4395c306763e6cc5bf13b0ca961",
"text": "Generating descriptions for videos has many applications including assisting blind people and human-robot interaction. The recent advances in image captioning as well as the release of large-scale movie description datasets such as MPII-MD [28] allow to study this task in more depth. Many of the proposed methods for image captioning rely on pre-trained object classifier CNNs and Long-Short Term Memory recurrent networks (LSTMs) for generating descriptions. While image description focuses on objects, we argue that it is important to distinguish verbs, objects, and places in the challenging setting of movie description. In this work we show how to learn robust visual classifiers from the weak annotations of the sentence descriptions. Based on these visual classifiers we learn how to generate a description using an LSTM. We explore different design choices to build and train the LSTM and achieve the best performance to date on the challenging MPII-MD dataset. We compare and analyze our approach and prior work along various dimensions to better understand the key challenges of the movie description task.",
"title": ""
},
{
"docid": "2575bad473ef55281db460617e0a37c8",
"text": "Automated license plate recognition (ALPR) has been applied to identify vehicles by their license plates and is critical in several important transportation applications. In order to achieve the recognition accuracy levels typically required in the market, it is necessary to obtain properly segmented characters. A standard method, projection-based segmentation, is challenged by substantial variation across the plate in the regions surrounding the characters. In this paper a reinforcement learning (RL) method is adapted to create a segmentation agent that can find appropriate segmentation paths that avoid characters, traversing from the top to the bottom of a cropped license plate image. Then a hybrid approach is proposed, leveraging the speed and simplicity of the projection-based segmentation technique along with the power of the RL method. The results of our experiments show significant improvement over the histogram projection currently used for character segmentation.",
"title": ""
},
{
"docid": "f38554695eb3ca5b6d62b1445d8826b7",
"text": "Recent advances in deep neuroevolution have demonstrated that evolutionary algorithms, such as evolution strategies (ES) and genetic algorithms (GA), can scale to train deep neural networks to solve difficult reinforcement learning (RL) problems. However, it remains a challenge to analyze and interpret the underlying process of neuroevolution in such high dimensions. To begin to address this challenge, this paper presents an interactive data visualization tool called VINE (Visual Inspector for NeuroEvolution) aimed at helping neuroevolution researchers and end-users better understand and explore this family of algorithms. VINE works seamlessly with a breadth of neuroevolution algorithms, including ES and GA, and addresses the difficulty of observing the underlying dynamics of the learning process through an interactive visualization of the evolving agent's behavior characterizations over generations. As neuroevolution scales to neural networks with millions or more connections, visualization tools like VINE that offer fresh insight into the underlying dynamics of evolution become increasingly valuable and important for inspiring new innovations and applications.",
"title": ""
},
{
"docid": "ab2159730f00662ba29e25a0e27d1799",
"text": "This paper proposes a novel and efficient re-ranking technque to solve the person re-identification problem in the surveillance application. Previous methods treat person re-identification as a special object retrieval problem, and compute the retrieval result purely based on a unidirectional matching between the probe and all gallery images. However, the correct matching may be not included in the top-k ranking result due to appearance changes caused by variations in illumination, pose, viewpoint and occlusion. To obtain more accurate re-identification results, we propose to reversely query every gallery person image in a new gallery composed of the original probe person image and other gallery person images, and revise the initial query result according to bidirectional ranking lists. The behind philosophy of our method is that images of the same person should not only have similar visual content, refer to content similarity, but also possess similar k-nearest neighbors, refer to context similarity. Furthermore, the proposed bidirectional re-ranking method can be divided into offline and online parts, where the majority of computation load is accomplished by the offline part and the online computation complexity is only proportional to the size of the gallery data set, which is especially suited to the real-time required video investigation task. Extensive experiments conducted on a series of standard data sets have validated the effectiveness and efficiency of our proposed method.",
"title": ""
},
{
"docid": "e6cae5bec5bb4b82794caca85d3412a2",
"text": "Detection of abusive language in user generated online content has become an issue of increasing importance in recent years. Most current commercial methods make use of blacklists and regular expressions, however these measures fall short when contending with more subtle, less ham-fisted examples of hate speech. In this work, we develop a machine learning based method to detect hate speech on online user comments from two domains which outperforms a state-ofthe-art deep learning approach. We also develop a corpus of user comments annotated for abusive language, the first of its kind. Finally, we use our detection tool to analyze abusive language over time and in different settings to further enhance our knowledge of this behavior.",
"title": ""
},
{
"docid": "d6976361b44aab044c563e75056744d6",
"text": "Five adrenoceptor subtypes are involved in the adrenergic regulation of white and brown fat cell function. The effects on cAMP production and cAMP-related cellular responses are mediated through the control of adenylyl cyclase activity by the stimulatory beta 1-, beta 2-, and beta 3-adrenergic receptors and the inhibitory alpha 2-adrenoceptors. Activation of alpha 1-adrenoceptors stimulates phosphoinositidase C activity leading to inositol 1,4,5-triphosphate and diacylglycerol formation with a consequent mobilization of intracellular Ca2+ stores and protein kinase C activation which trigger cell responsiveness. The balance between the various adrenoceptor subtypes is the point of regulation that determines the final effect of physiological amines on adipocytes in vitro and in vivo. Large species-specific differences exist in brown and white fat cell adrenoceptor distribution and in their relative importance in the control of the fat cell. Functional beta 3-adrenoceptors coexist with beta 1- and beta 2-adrenoceptors in a number of fat cells; they are weakly active in guinea pig, primate, and human fat cells. Physiological hormones and transmitters operate, in fact, through differential recruitment of all these multiple alpha- and beta-adrenoceptors on the basis of their relative affinity for the different subtypes. The affinity of the beta 3-adrenoceptor for catecholamines is less than that of the classical beta 1- and beta 2-adrenoceptors. Conversely, epinephrine and norepinephrine have a higher affinity for the alpha 2-adrenoceptors than for beta 1-, 2-, or 3-adrenoceptors. Antagonistic actions exist between alpha 2- and beta-adrenoceptor-mediated effects in white fat cells while positive cooperation has been revealed between alpha 1- and beta-adrenoceptors in brown fat cells. Homologous down-regulation of beta 1- and beta 2-adrenoceptors is observed after administration of physiological amines and beta-agonists. Conversely, beta 3- and alpha 2-adrenoceptors are much more resistant to agonist-induced desensitization and down-regulation. Heterologous regulation of beta-adrenoceptors was reported with glucocorticoids while sex-steroid hormones were shown to regulate alpha 2-adrenoceptor expression (androgens) and to alter adenylyl cyclase activity (estrogens).",
"title": ""
},
{
"docid": "5b57eb0b695a1c85d77db01e94904fb1",
"text": "Depth map super-resolution is an emerging topic due to the increasing needs and applications using RGB-D sensors. Together with the color image, the corresponding range data provides additional information and makes visual analysis tasks more tractable. However, since the depth maps captured by such sensors are typically with limited resolution, it is preferable to enhance its resolution for improved recognition. In this paper, we present a novel joint trilateral filtering (JTF) algorithm for solving depth map super-resolution (SR) problems. Inspired by bilateral filtering, our JTF utilizes and preserves edge information from the associated high-resolution (HR) image by taking spatial and range information of local pixels. Our proposed further integrates local gradient information of the depth map when synthesizing its HR output, which alleviates textural artifacts like edge discontinuities. Quantitative and qualitative experimental results demonstrate the effectiveness and robustness of our approach over prior depth map upsampling works.",
"title": ""
},
{
"docid": "61ba52f205c8b497062995498816b60f",
"text": "The past century experienced a proliferation of retail formats in the marketplace. However, as a new century begins, these retail formats are being threatened by the emergence of a new kind of store, the online or Internet store. From being almost a novelty in 1995, online retailing sales were expected to reach $7 billion by 2000 [9]. In this increasngly timeconstrained world, Internet stores allow consumers to shop from the convenience of remote locations. Yet most of these Internet stores are losing money [6]. Why is such counterintuitive phenomena prevailing? The explanation may lie in the risks associated with Internet shopping. These risks may arise because consumers are concerned about the security of transmitting credit card information over the Internet. Consumers may also be apprehensive about buying something without touching or feeling it and being unable to return it if it fails to meet their approval. Having said this, however, we must point out that consumers are buying goods on the Internet. This is reflected in the fact that total sales on the Internet are on the increase [8, 11]. Who are the consumers that are patronizing the Internet? Evidently, for them the perception of the risk associated with shopping on the Internet is low or is overshadowed by its relative convenience. This article attempts to determine why certain consumers are drawn to the Internet and why others are not. Since the pioneering research done by Becker [3], it has been accepted that the consumer maximizes his utility subject to not only income constraints but also time constraints. A consumer seeks out his best decision given that he has a limited budget of time and money. While purchasing a product from a store, a consumer has to expend both money and time. Therefore, the consumer patronizes the retail store where his total costs or the money and time spent in the entire process are the least. Since the util-",
"title": ""
},
{
"docid": "1c1988ae64bef3475f36eceaffda0b7d",
"text": "Home Office (Grant number: PTA-033-2005-00028). We gratefully acknowledge the three anonymous reviewers, whose comments and suggestions improved an earlier version of this paper. Criminologists have long contended that neighborhoods are important determinants of how individuals perceive their risk of criminal victimization. Yet, despite the theoretical importance and policy-relevance of these claims, the empirical evidence-base is surprisingly thin and inconsistent. Drawing on data from a national probability sample of individuals, linked to independent measures of neighborhood demographic characteristics, visual signs of physical disorder, and reported crime, we test four hypotheses about the mechanisms through which neighborhoods influence fear of crime. Our large sample size, analytical approach and the independence of our empirical measures enable us to overcome some of the limitations that have hampered much previous research into this question. We find that neighborhood structural characteristics, visual signs of disorder, and recorded crime all have direct and independent effects on individual level fear of crime. Additionally, we demonstrate that individual differences in fear of crime are strongly moderated by neighborhood socioeconomic characteristics; between group differences in expressed fear of crime are both exacerbated and ameliorated by the characteristics of the areas in which people live. interests include criminal statistics, neighborhood effects, missing data problems, and survey methodology. Methods at the University of Southampton. His research interests are in the areas of survey methodology, statistical methods, public opinion, and political behaviour.",
"title": ""
}
] | scidocsrr |
095438f5ab742de58bfbc27df8cef909 | Topology-independent 3D garment fitting for virtual clothing | [
{
"docid": "5b8c43561c322a6e85fcffc6e4ca08db",
"text": "In this paper we address the problem of rapid distance computation between rigid objects and highly deformable objects, which is important in the context of physically based modeling of e.g hair or clothing. Our method is in particular useful when modeling deformable objects with particle systems—the most common approach to simulate such objects. We combine some already known techniques about distance fields into an algorithm for rapid collision detection. Only the rigid objects of an environment are represented by distance fields. In the context of proximity queries, which are essential for proper collision detection, this representation has two main advantages: First, any given boundary representation can be approximated quite easily, no high-degree polynomials or complicated approximation algorithms are needed. Second, the evaluation of distances and normals needed for collision response is extremely fast and independent of the complexity of the object. In the course of the paper we propose a simple, but fast algorithm for partial distance field computation. The sources are triangular meshes. Then, we present our approach for collision detection in detail. Examples from an interactive cloth animation system show the advantages of our approach in practice. We conclude that our method allows real-time animations of complex deformable objects in non-trivial environments on standard PC hardware.",
"title": ""
}
] | [
{
"docid": "962bc645d5a6a6644d28599948a18df0",
"text": "The demand for computer-assisted game study in sports is growing dramatically. This paper presents a practical video analysis system to facilitate semantic content understanding. A physics-based algorithm is designed for ball tracking and 3D trajectory reconstruction in basketball videos and shooting location statistics can be obtained. The 2D-to-3D inference is intrinsically a challenging problem due to the loss of 3D information in projection to 2D frames. One significant contribution of the proposed system lies in the integrated scheme incorporating domain knowledge and physical characteristics of ball motion into object tracking to overcome the problem of 2D-to-3D inference. With the 2D trajectory extracted and the camera parameters calibrated, physical characteristics of ball motion are involved to reconstruct the 3D trajectories and estimate the shooting locations. Our experiments on broadcast basketball videos show promising results. We believe the proposed system will greatly assist intelligence collection and statistics analysis in basketball games. 2008 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "fbc8d5a6a4299eaf0bf13d7d8c580bd1",
"text": "The lexical semantic system is an important component of human language and cognitive processing. One approach to modeling semantic knowledge makes use of hand-constructed networks or trees of interconnected word senses (Miller, Beckwith, Fellbaum, Gross, & Miller, 1990; Jarmasz & Szpakowicz, 2003). An alternative approach seeks to model word meanings as high-dimensional vectors, which are derived from the cooccurrence of words in unlabeled text corpora (Landauer & Dumais, 1997; Burgess & Lund, 1997a). This paper introduces a new vector-space method for deriving word-meanings from large corpora that was inspired by the HAL and LSA models, but which achieves better and more consistent results in predicting human similarity judgments. We explain the new model, known as COALS, and how it relates to prior methods, and then evaluate the various models on a range of tasks, including a novel set of semantic similarity ratings involving both semantically and morphologically related terms.",
"title": ""
},
{
"docid": "e35b4a46ccd73aa79246f09b86e01c24",
"text": "Emotion detection can considerably enhance our understanding of users’ emotional states. Understanding users’ emotions especially in a real-time setting can be pivotal in improving user interactions and understanding their preferences. In this paper, we propose a constraint optimization framework to discover emotions from social media content of the users. Our framework employs several novel constraints such as emotion bindings, topic correlations, along with specialized features proposed by prior work and well-established emotion lexicons. We propose an efficient inference algorithm and report promising empirical results on three diverse datasets.",
"title": ""
},
{
"docid": "6fd1e9896fc1aaa79c769bd600d9eac3",
"text": "In future planetary exploration missions, rovers will be required to autonomously traverse challenging environments. Much of the previous work in robot motion planning cannot be successfully applied to the rough-terrain planning problem. A model-based planning method is presented in this paper that is computationally efficient and takes into account uncertainty in the robot model, terrain model, range sensor data, and rover pathfollowing errors. It is based on rapid path planning through the visible terrain map with a simple graph-search algorithm, followed by a physics-based evaluation of the path with a rover model. Simulation results are presented which demonstrate the method’s effectiveness.",
"title": ""
},
{
"docid": "07f7a4fe69f6c4a1180cc3ca444a363a",
"text": "With the popularization of IoT (Internet of Things) devices and the continuous development of machine learning algorithms, learning-based IoT malicious traffic detection technologies have gradually matured. However, learning-based IoT traffic detection models are usually very vulnerable to adversarial samples. There is a great need for an automated testing framework to help security analysts to detect errors in learning-based IoT traffic detection systems. At present, most methods for generating adversarial samples require training parameters of known models and are only applicable to image data. To address the challenge, we propose a testing framework for learning-based IoT traffic detection systems, TLTD. By introducing genetic algorithms and some technical improvements, TLTD can generate adversarial samples for IoT traffic detection systems and can perform a black-box test on the systems.",
"title": ""
},
{
"docid": "5a5b30b63944b92b168de7c17d5cdc5e",
"text": "We introduce the Densely Segmented Supermarket (D2S) dataset, a novel benchmark for instance-aware semantic segmentation in an industrial domain. It contains 21 000 high-resolution images with pixel-wise labels of all object instances. The objects comprise groceries and everyday products from 60 categories. The benchmark is designed such that it resembles the real-world setting of an automatic checkout, inventory, or warehouse system. The training images only contain objects of a single class on a homogeneous background, while the validation and test sets are much more complex and diverse. To further benchmark the robustness of instance segmentation methods, the scenes are acquired with different lightings, rotations, and backgrounds. We ensure that there are no ambiguities in the labels and that every instance is labeled comprehensively. The annotations are pixel-precise and allow using crops of single instances for articial data augmentation. The dataset covers several challenges highly relevant in the field, such as a limited amount of training data and a high diversity in the test and validation sets. The evaluation of state-of-the-art object detection and instance segmentation methods on D2S reveals significant room for improvement.",
"title": ""
},
{
"docid": "4b1bb1a79d755ea8ccd6f80a8e827b40",
"text": "This paper analyzes the problem of Gaussian process (GP) bandits with deterministic observations. The analysis uses a branch and bound algorithm that is related to the UCB algorithm of (Srinivas et al., 2010). For GPs with Gaussian observation noise, with variance strictly greater than zero, (Srinivas et al., 2010) proved that the regret vanishes at the approximate rate of O ( 1 √ t ) , where t is the number of observations. To complement their result, we attack the deterministic case and attain a much faster exponential convergence rate. Under some regularity assumptions, we show that the regret decreases asymptotically according to O ( e − τt (ln t)d/4 ) with high probability. Here, d is the dimension of the search space and τ is a constant that depends on the behaviour of the objective function near its global maximum.",
"title": ""
},
{
"docid": "902ca8c9a7cd8384143654ee302eca82",
"text": "The Paper presents the outlines of the Field Programmable Gate Array (FPGA) implementation of Real Time speech enhancement by Spectral Subtraction of acoustic noise using Dynamic Moving Average Method. It describes an stand alone algorithm for Speech Enhancement and presents a architecture for the implementation. The traditional Spectral Subtraction method can only suppress stationary acoustic noise from speech by subtracting the spectral noise bias calculated during non-speech activity, while adding the unique option of dynamic moving averaging to it, it can now periodically upgrade the estimation and cope up with changes in noise level. Signal to Noise Ratio (SNR) has been tested at different noisy environment and the improvement in SNR certifies the effectiveness of the algorithm. The FPGA implementation presented in this paper, works on streaming speech signals and can be used in factories, bus terminals, Cellular Phones, or in outdoor conferences where a large number of people have gathered. The Table in the Experimental Result section consolidates our claim of optimum resouce usage.",
"title": ""
},
{
"docid": "06d146f0f44775e05161a90a95f4eca9",
"text": "The authors discuss various filling agents currently available that can be used to augment the lips, correct perioral rhytides, and enhance overall lip appearance. Fillers are compared and information provided about choosing the appropriate agent based on the needs of each patient to achieve the much coveted \"pouty\" look while avoiding hypercorrection. The authors posit that the goal for the upper lip is to create a form that harmonizes with the patient's unique features, taking into account age and ethnicity; the goal for the lower lip is to create bulk, greater prominence, and projection of the vermillion.",
"title": ""
},
{
"docid": "87396c917dd760eddc2d16e27a71e81d",
"text": "We begin by distinguishing computationalism from a number of other theses that are sometimes conflated with it. We also distinguish between several important kinds of computation: computation in a generic sense, digital computation, and analog computation. Then, we defend a weak version of computationalism-neural processes are computations in the generic sense. After that, we reject on empirical grounds the common assimilation of neural computation to either analog or digital computation, concluding that neural computation is sui generis. Analog computation requires continuous signals; digital computation requires strings of digits. But current neuroscientific evidence indicates that typical neural signals, such as spike trains, are graded like continuous signals but are constituted by discrete functional elements (spikes); thus, typical neural signals are neither continuous signals nor strings of digits. It follows that neural computation is sui generis. Finally, we highlight three important consequences of a proper understanding of neural computation for the theory of cognition. First, understanding neural computation requires a specially designed mathematical theory (or theories) rather than the mathematical theories of analog or digital computation. Second, several popular views about neural computation turn out to be incorrect. Third, computational theories of cognition that rely on non-neural notions of computation ought to be replaced or reinterpreted in terms of neural computation.",
"title": ""
},
{
"docid": "3738d3c5d5bf4a3de55aa638adac07bb",
"text": "The term malware stands for malicious software. It is a program installed on a system without the knowledge of owner of the system. It is basically installed by the third party with the intention to steal some private data from the system or simply just to play pranks. This in turn threatens the computer’s security, wherein computer are used by one’s in day-to-day life as to deal with various necessities like education, communication, hospitals, banking, entertainment etc. Different traditional techniques are used to detect and defend these malwares like Antivirus Scanner (AVS), firewalls, etc. But today malware writers are one step forward towards then Malware detectors. Day-by-day they write new malwares, which become a great challenge for malware detectors. This paper focuses on basis study of malwares and various detection techniques which can be used to detect malwares.",
"title": ""
},
{
"docid": "0851caf6599f97bbeaf68b57e49b4da5",
"text": "Improving the quality of end-of-life care for hospitalized patients is a priority for healthcare organizations. Studies have shown that physicians tend to over-estimate prognoses, which in combination with treatment inertia results in a mismatch between patients wishes and actual care at the end of life. We describe a method to address this problem using Deep Learning and Electronic Health Record (EHR) data, which is currently being piloted, with Institutional Review Board approval, at an academic medical center. The EHR data of admitted patients are automatically evaluated by an algorithm, which brings patients who are likely to benefit from palliative care services to the attention of the Palliative Care team. The algorithm is a Deep Neural Network trained on the EHR data from previous years, to predict all-cause 3–12 month mortality of patients as a proxy for patients that could benefit from palliative care. Our predictions enable the Palliative Care team to take a proactive approach in reaching out to such patients, rather than relying on referrals from treating physicians, or conduct time consuming chart reviews of all patients. We also present a novel interpretation technique which we use to provide explanations of the model's predictions.",
"title": ""
},
{
"docid": "69460a225a498b96b2119f07beebbd29",
"text": "To eliminate the problems related to the Internet Protocol (IP) network, Multi-Protocol Label Switching (MPLS) networks the packets, they use label switching technology at the IP core routers to improve the routing mechanism and to make it more efficient. The developed protocol configure the data packet with fixed labels at the start and the at the end of the MPLS domain, it also allows a service provider to provide value added services like Virtual Private Network (VPNs), MPLS is faster than the standard method of routing and switching packets of the data. MPLS traffic engineering (MPLS TE) provides better utilization of network recourses, while MPLS offers VPN implementation and interconnected with other networks to gain secure and reliable communication, MPLS was improved to support routing functionality on conventional service provider IP Network. MPLS permit service providers to provide customer support services, and It naturally supports Quality of Service (QoS) by providing classification and marked package, avoid congestion, congestion management, Improve traffic, and Signaling. MPLS is not complex at all, and there is no need to any changed in the network structure because it uses one Unified Network Infrastructure. Also, no need to run Border Gateway Protocol (BGP) in the core of MPLS network, this will increase the efficiency of Internet Service Provider (ISP). Therefore MPLS provide the reliability of communication while reducing the delays and supporting the speed of the packet transfer. .",
"title": ""
},
{
"docid": "936cdd4b58881275485739518ccb4f85",
"text": "Batch Normalization (BN) is a milestone technique in the development of deep learning, enabling various networks to train. However, normalizing along the batch dimension introduces problems — BN’s error increases rapidly when the batch size becomes smaller, caused by inaccurate batch statistics estimation. This limits BN’s usage for training larger models and transferring features to computer vision tasks including detection, segmentation, and video, which require small batches constrained by memory consumption. In this paper, we present Group Normalization (GN) as a simple alternative to BN. GN divides the channels into groups and computes within each group the mean and variance for normalization. GN’s computation is independent of batch sizes, and its accuracy is stable in a wide range of batch sizes. On ResNet-50 trained in ImageNet, GN has 10.6% lower error than its BN counterpart when using a batch size of 2; when using typical batch sizes, GN is comparably good with BN and outperforms other normalization variants. Moreover, GN can be naturally transferred from pre-training to fine-tuning. GN can outperform its BN-based counterparts for object detection and segmentation in COCO, and for video classification in Kinetics, showing that GN can effectively replace the powerful BN in a variety of tasks. GN can be easily implemented by a few lines of code.",
"title": ""
},
{
"docid": "23c71e8893fceed8c13bf2fc64452bc2",
"text": "Variable stiffness actuators (VSAs) are complex mechatronic devices that are developed to build passively compliant, robust, and dexterous robots. Numerous different hardware designs have been developed in the past two decades to address various demands on their functionality. This review paper gives a guide to the design process from the analysis of the desired tasks identifying the relevant attributes and their influence on the selection of different components such as motors, sensors, and springs. The influence on the performance of different principles to generate the passive compliance and the variation of the stiffness are investigated. Furthermore, the design contradictions during the engineering process are explained in order to find the best suiting solution for the given purpose. With this in mind, the topics of output power, potential energy capacity, stiffness range, efficiency, and accuracy are discussed. Finally, the dependencies of control, models, sensor setup, and sensor quality are addressed.",
"title": ""
},
{
"docid": "7b68933da1bedbc89ebe1fb8b1ca96c4",
"text": "PATIENT\nMale, 0 FINAL DIAGNOSIS: Pallister-Killian syndrome Symptoms: Decidious tooth • flattened nasal bridge • frontal bossing • grooved palate • low-set ears • mid-facial hypoplasia • nuchal fold thickening • right inquinal testis • shortened upper extremities • undescended left intraabdominal testis • widely spaced nipples\n\n\nMEDICATION\n- Clinical Procedure: - Specialty: Pediatrics and Neonatology.\n\n\nOBJECTIVE\nCongenital defects/diseases.\n\n\nBACKGROUND\nPallister-Killian syndrome (PKS) is a rare, sporadic, polydysmorphic condition that often has highly distinctive features. The clinical features are highly variable, ranging from mild to severe intellectual disability and birth defects. We here report the first case of PKS diagnosed at our institution in a patient in the second trimester of pregnancy.\n\n\nCASE REPORT\nA pregnant 43-year-old woman presented for genetic counseling secondary to advanced maternal age and an increased risk for Down syndrome. Ultrasound showed increased fetal nuchal fold thickness, short limbs, polyhydramnios, and a small stomach. The ultrasound evaluation was compromised due to the patient's body habitus. The patient subsequently underwent amniocentesis and the karyotype revealed the presence of an isochromosome in the short arm of chromosome 12 consistent with the diagnosis of Pallister-Killian syndrome. Postnatally, the infant showed frontal bossing, a flattened nasal bridge, mid-facial hypoplasia, low-set ears, a right upper deciduous tooth, grooved palate, nuchal fold thickening, widely spaced nipples, left ulnar polydactyly, simian creases, flexion contractures of the right middle finger, shortened upper extremities, undescended left intraabdominal testis, and right inguinal testis.\n\n\nCONCLUSIONS\nThe occurrence of PKS is sporadic in nature, but prenatal diagnosis is possible.",
"title": ""
},
{
"docid": "c4b4c647e13d0300845bed2b85c13a3c",
"text": "Several end-to-end deep learning approaches have been recently presented which extract either audio or visual features from the input images or audio signals and perform speech recognition. However, research on end-to-end audiovisual models is very limited. In this work, we present an end-to-end audiovisual model based on residual networks and Bidirectional Gated Recurrent Units (BGRUs). To the best of our knowledge, this is the first audiovisual fusion model which simultaneously learns to extract features directly from the image pixels and audio waveforms and performs within-context word recognition on a large publicly available dataset (LRW). The model consists of two streams, one for each modality, which extract features directly from mouth regions and raw waveforms. The temporal dynamics in each stream/modality are modeled by a 2-layer BGRU and the fusion of multiple streams/modalities takes place via another 2-layer BGRU. A slight improvement in the classification rate over an end-to-end audio-only and MFCC-based model is reported in clean audio conditions and low levels of noise. In presence of high levels of noise, the end-to-end audiovisual model significantly outperforms both audio-only models.",
"title": ""
},
{
"docid": "93f89a636828df50dfe48ffa3e868ea6",
"text": "The reparameterization trick enables the optimization of large scale stochastic computation graphs via gradient descent. The essence of the trick is to refactor each stochastic node into a differentiable function of its parameters and a random variable with fixed distribution. After refactoring, the gradients of the loss propagated by the chain rule through the graph are low variance unbiased estimators of the gradients of the expected loss. While many continuous random variables have such reparameterizations, discrete random variables lack continuous reparameterizations due to the discontinuous nature of discrete states. In this work we introduce concrete random variables – continuous relaxations of discrete random variables. The concrete distribution is a new family of distributions with closed form densities and a simple reparameterization. Whenever a discrete stochastic node of a computation graph can be refactored into a one-hot bit representation that is treated continuously, concrete stochastic nodes can be used with automatic differentiation to produce low-variance biased gradients of objectives (including objectives that depend on the log-likelihood of latent stochastic nodes) on the corresponding discrete graph. We demonstrate their effectiveness on density estimation and structured prediction tasks using neural networks.",
"title": ""
}
] | scidocsrr |
42557afb223c11fb89eb19dc57f28634 | AVID: Adversarial Visual Irregularity Detection | [
{
"docid": "54d3d5707e50b979688f7f030770611d",
"text": "In this article, we describe an automatic differentiation module of PyTorch — a library designed to enable rapid research on machine learning models. It builds upon a few projects, most notably Lua Torch, Chainer, and HIPS Autograd [4], and provides a high performance environment with easy access to automatic differentiation of models executed on different devices (CPU and GPU). To make prototyping easier, PyTorch does not follow the symbolic approach used in many other deep learning frameworks, but focuses on differentiation of purely imperative programs, with a focus on extensibility and low overhead. Note that this preprint is a draft of certain sections from an upcoming paper covering all PyTorch features.",
"title": ""
},
{
"docid": "6470b7d1532012e938063d971f3ead29",
"text": "As society continues to accumulate more and more data, demand for machine learning algorithms that can learn from data with limited human intervention only increases. Semi-supervised learning (SSL) methods, which extend supervised learning algorithms by enabling them to use unlabeled data, play an important role in addressing this challenge. In this thesis, a framework unifying the traditional assumptions and approaches to SSL is defined. A synthesis of SSL literature then places a range of contemporary approaches into this common framework. Our focus is on methods which use generative adversarial networks (GANs) to perform SSL. We analyse in detail one particular GAN-based SSL approach. This is shown to be closely related to two preceding approaches. Through synthetic experiments we provide an intuitive understanding and motivate the formulation of our focus approach. We then theoretically analyse potential alternative formulations of its loss function. This analysis motivates a number of research questions that centre on possible improvements to, and experiments to better understand the focus model. While we find support for our hypotheses, our conclusion more broadly is that the focus method is not especially robust.",
"title": ""
}
] | [
{
"docid": "de016ffaace938c937722f8a47cc0275",
"text": "Conventional traffic light detection methods often suffers from false positives in urban environment because of the complex backgrounds. To overcome such limitation, this paper proposes a method that combines a conventional approach, which is fast but weak to false positives, and a DNN, which is not suitable for detecting small objects but a very powerful classifier. Experiments on real data showed promising results.",
"title": ""
},
{
"docid": "39ee9e4c7dad30d875d70e0a41a37034",
"text": "The aim of the present study is to investigate the effect of daily injection of ginger Zingiber officinale extract on the physiological parameters, as well as the histological structure of the l iver of adult rats. Adult male rats were divided into four groups; (G1, G2, G3, and Control groups). The first group received 500 ml/kg b. wt/day of aqueous extract of Zingiber officinale i.p. for four weeks, G2 received 500 ml/kg b wt/day of aqueous extract of Zingiber officinale for three weeks and then received carbon tetrachloride CCl4 0.1ml/150 g b. wt. for one week, G3 received 500 ml/kg body weight/day of aqueous extract of ginger Zingiber officinale i .p. for three weeks and then received CCl4 for one week combined with ginger). The control group (C) received a 500 ml/kg B WT/day of saline water i.p. for four weeks. The results indicated a significant decrease in the total protein and increase in the albumin/globulin ratio in the third group compared with first and second group. Also, the results reported a significant decrease in the body weight in the third and the fourth groups compared with the first and the second groups. A significant decrease in the globulin levels in the third and the fourth groups were detected compared with the first and the second groups. The obtained results showed that treating rats with ginger improved the histopathological changes induced in the liver by CCl4. The study suggests that ginger extract can be used as antioxidant, free radical scavenging and protective action against carbon tetrachloride oxidative damage in the l iver.",
"title": ""
},
{
"docid": "10f2726026dbe1deac859715f57b15b6",
"text": "Monte-Carlo Tree Search, especially UCT and its POMDP version POMCP, have demonstrated excellent performance on many problems. However, to efficiently scale to large domains one should also exploit hierarchical structure if present. In such hierarchical domains, finding rewarded states typically requires to search deeply; covering enough such informative states very far from the root becomes computationally expensive in flat non-hierarchical search approaches. We propose novel, scalable MCTS methods which integrate a task hierarchy into the MCTS framework, specifically leading to hierarchical versions of both, UCT and POMCP. The new method does not need to estimate probabilistic models of each subtask, it instead computes subtask policies purely sample-based. We evaluate the hierarchical MCTS methods on various settings such as a hierarchical MDP, a Bayesian model-based hierarchical RL problem, and a large hierarchi-",
"title": ""
},
{
"docid": "c57fa27a4745e3a5440bd7209cf109a2",
"text": "OBJECTIVES\nWe sought to use natural language processing to develop a suite of language models to capture key symptoms of severe mental illness (SMI) from clinical text, to facilitate the secondary use of mental healthcare data in research.\n\n\nDESIGN\nDevelopment and validation of information extraction applications for ascertaining symptoms of SMI in routine mental health records using the Clinical Record Interactive Search (CRIS) data resource; description of their distribution in a corpus of discharge summaries.\n\n\nSETTING\nElectronic records from a large mental healthcare provider serving a geographic catchment of 1.2 million residents in four boroughs of south London, UK.\n\n\nPARTICIPANTS\nThe distribution of derived symptoms was described in 23 128 discharge summaries from 7962 patients who had received an SMI diagnosis, and 13 496 discharge summaries from 7575 patients who had received a non-SMI diagnosis.\n\n\nOUTCOME MEASURES\nFifty SMI symptoms were identified by a team of psychiatrists for extraction based on salience and linguistic consistency in records, broadly categorised under positive, negative, disorganisation, manic and catatonic subgroups. Text models for each symptom were generated using the TextHunter tool and the CRIS database.\n\n\nRESULTS\nWe extracted data for 46 symptoms with a median F1 score of 0.88. Four symptom models performed poorly and were excluded. From the corpus of discharge summaries, it was possible to extract symptomatology in 87% of patients with SMI and 60% of patients with non-SMI diagnosis.\n\n\nCONCLUSIONS\nThis work demonstrates the possibility of automatically extracting a broad range of SMI symptoms from English text discharge summaries for patients with an SMI diagnosis. Descriptive data also indicated that most symptoms cut across diagnoses, rather than being restricted to particular groups.",
"title": ""
},
{
"docid": "c1538df6d2aa097d5c4a8c4fc7e42d01",
"text": "During the First International EEG Congress, London in 1947, it was recommended that Dr. Herbert H. Jasper study methods to standardize the placement of electrodes used in EEG (Jasper 1958). A report with recommendations was to be presented to the Second International Congress in Paris in 1949. The electrode placement systems in use at various centers were found to be similar, with only minor differences, although their designations, letters and numbers were entirely different. Dr. Jasper established some guidelines which would be established in recommending a speci®c system to the federation and these are listed below.",
"title": ""
},
{
"docid": "adae03c768e3bc72f325075cf22ef7b1",
"text": "The vergence-accommodation conflict (VAC) remains a major problem in head-mounted displays for virtual and augmented reality (VR and AR). In this review, I discuss why this problem is pivotal for nearby tasks in VR and AR, present a comprehensive taxonomy of potential solutions, address advantages and shortfalls of each design, and cover various ways to better evaluate the solutions. The review describes how VAC is addressed in monocular, stereoscopic, and multiscopic HMDs, including retinal scanning and accommodation-free displays. Eye-tracking-based approaches that do not provide natural focal cues-gaze-guided blur and dynamic stereoscopy-are also covered. Promising future research directions in this area are identified.",
"title": ""
},
{
"docid": "e4493c56867bfe62b7a96b33fb171fad",
"text": "In the field of agricultural information, the automatic identification and diagnosis of maize leaf diseases is highly desired. To improve the identification accuracy of maize leaf diseases and reduce the number of network parameters, the improved GoogLeNet and Cifar10 models based on deep learning are proposed for leaf disease recognition in this paper. Two improved models that are used to train and test nine kinds of maize leaf images are obtained by adjusting the parameters, changing the pooling combinations, adding dropout operations and rectified linear unit functions, and reducing the number of classifiers. In addition, the number of parameters of the improved models is significantly smaller than that of the VGG and AlexNet structures. During the recognition of eight kinds of maize leaf diseases, the GoogLeNet model achieves a top - 1 average identification accuracy of 98.9%, and the Cifar10 model achieves an average accuracy of 98.8%. The improved methods are possibly improved the accuracy of maize leaf disease, and reduced the convergence iterations, which can effectively improve the model training and recognition efficiency.",
"title": ""
},
{
"docid": "71ee8396220ce8f3d9c4c6aca650fa42",
"text": "In order to increase our ability to use measurement to support software development practise we need to do more analysis of code. However, empirical studies of code are expensive and their results are difficult to compare. We describe the Qualitas Corpus, a large curated collection of open source Java systems. The corpus reduces the cost of performing large empirical studies of code and supports comparison of measurements of the same artifacts. We discuss its design, organisation, and issues associated with its development.",
"title": ""
},
{
"docid": "23c2ea4422ec6057beb8fa0be12e57b3",
"text": "This study applied logistic regression to model urban growth in the Atlanta Metropolitan Area of Georgia in a GIS environment and to discover the relationship between urban growth and the driving forces. Historical land use/cover data of Atlanta were extracted from the 1987 and 1997 Landsat TM images. Multi-resolution calibration of a series of logistic regression models was conducted from 50 m to 300 m at intervals of 25 m. A fractal analysis pointed to 225 m as the optimal resolution of modeling. The following two groups of factors were found to affect urban growth in different degrees as indicated by odd ratios: (1) population density, distances to nearest urban clusters, activity centers and roads, and high/low density urban uses (all with odds ratios < 1); and (2) distance to the CBD, number of urban cells within a 7 · 7 cell window, bare land, crop/grass land, forest, and UTM northing coordinate (all with odds ratios > 1). A map of urban growth probability was calculated and used to predict future urban patterns. Relative operating characteristic (ROC) value of 0.85 indicates that the probability map is valid. It was concluded that despite logistic regression’s lack of temporal dynamics, it was spatially explicit and suitable for multi-scale analysis, and most importantly, allowed much deeper understanding of the forces driving the growth and the formation of the urban spatial pattern. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "3a9bba31f77f4026490d7a0faf4aeaa4",
"text": "We explore several different document representation models and two query expansion models for the task of recommending blogs to a user in response to a query. Blog relevance ranking differs from traditional document ranking in ad-hoc information retrieval in several ways: (1) the unit of output (the blog) is composed of a collection of documents (the blog posts) rather than a single document, (2) the query represents an ongoing – and typically multifaceted – interest in the topic rather than a passing ad-hoc information need and (3) due to the propensity of spam, splogs, and tangential comments, the blogosphere is particularly challenging to use as a source for high-quality query expansion terms. We address these differences at the document representation level, by comparing retrieval models that view either the blog or its constituent posts as the atomic units of retrieval, and at the query expansion level, by making novel use of the links and anchor text in Wikipedia to expand a user’s initial query. We develop two complementary models of blog retrieval that perform at comparable levels of precision and recall. We also show consistent and significant improvement across all models using our Wikipedia expansion strategy.",
"title": ""
},
{
"docid": "6974bf94292b51fc4efd699c28c90003",
"text": "We just released an Open Source receiver that is able to decode IEEE 802.11a/g/p Orthogonal Frequency Division Multiplexing (OFDM) frames in software. This is the first Software Defined Radio (SDR) based OFDM receiver supporting channel bandwidths up to 20MHz that is not relying on additional FPGA code. Our receiver comprises all layers from the physical up to decoding the MAC packet and extracting the payload of IEEE 802.11a/g/p frames. In our demonstration, visitors can interact live with the receiver while it is decoding frames that are sent over the air. The impact of moving the antennas and changing the settings are displayed live in time and frequency domain. Furthermore, the decoded frames are fed to Wireshark where the WiFi traffic can be further investigated. It is possible to access and visualize the data in every decoding step from the raw samples, the autocorrelation used for frame detection, the subcarriers before and after equalization, up to the decoded MAC packets. The receiver is completely Open Source and represents one step towards experimental research with SDR.",
"title": ""
},
{
"docid": "07db8f037ff720c8b8b242879c14531f",
"text": "PURPOSE\nMatriptase-2 (also known as TMPRSS6) is a critical regulator of the iron-regulatory hormone hepcidin in the liver; matriptase-2 cleaves membrane-bound hemojuvelin and consequently alters bone morphogenetic protein (BMP) signaling. Hemojuvelin and hepcidin are expressed in the retina and play a critical role in retinal iron homeostasis. However, no information on the expression and function of matriptase-2 in the retina is available. The purpose of the present study was to examine the retinal expression of matriptase-2 and its role in retinal iron homeostasis.\n\n\nMETHODS\nRT-PCR, quantitative PCR (qPCR), and immunofluorescence were used to analyze the expression of matriptase-2 and other iron-regulatory proteins in the mouse retina. Polarized localization of matriptase-2 in the RPE was evaluated using markers for the apical and basolateral membranes. Morphometric analysis of retinas from wild-type and matriptase-2 knockout (Tmprss6(msk/msk) ) mice was also performed. Retinal iron status in Tmprss6(msk/msk) mice was evaluated by comparing the expression levels of ferritin and transferrin receptor 1 between wild-type and knockout mice. BMP signaling was monitored by the phosphorylation status of Smads1/5/8 and expression levels of Id1 while interleukin-6 signaling was monitored by the phosphorylation status of STAT3.\n\n\nRESULTS\nMatriptase-2 is expressed in the mouse retina with expression detectable in all retinal cell types. Expression of matriptase-2 is restricted to the apical membrane in the RPE where hemojuvelin, the substrate for matriptase-2, is also present. There is no marked difference in retinal morphology between wild-type mice and Tmprss6(msk/msk) mice, except minor differences in specific retinal layers. The knockout mouse retina is iron-deficient, demonstrable by downregulation of the iron-storage protein ferritin and upregulation of transferrin receptor 1 involved in iron uptake. Hepcidin is upregulated in Tmprss6(msk/msk) mouse retinas, particularly in the neural retina. BMP signaling is downregulated while interleukin-6 signaling is upregulated in Tmprss6(msk/msk) mouse retinas, suggesting that the upregulaton of hepcidin in knockout mouse retinas occurs through interleukin-6 signaling and not through BMP signaling.\n\n\nCONCLUSIONS\nThe iron-regulatory serine protease matriptase-2 is expressed in the retina, and absence of this enzyme leads to iron deficiency and increased expression of hemojuvelin and hepcidin in the retina. The upregulation of hepcidin expression in Tmprss6(msk/msk) mouse retinas does not occur via BMP signaling but likely via the proinflammatory cytokine interleukin-6. We conclude that matriptase-2 is a critical participant in retinal iron homeostasis.",
"title": ""
},
{
"docid": "e9af5e2bfc36dd709ae6feefc4c38976",
"text": "Due to object detection's close relationship with video analysis and image understanding, it has attracted much research attention in recent years. Traditional object detection methods are built on handcrafted features and shallow trainable architectures. Their performance easily stagnates by constructing complex ensembles that combine multiple low-level image features with high-level context from object detectors and scene classifiers. With the rapid development in deep learning, more powerful tools, which are able to learn semantic, high-level, deeper features, are introduced to address the problems existing in traditional architectures. These models behave differently in network architecture, training strategy, and optimization function. In this paper, we provide a review of deep learning-based object detection frameworks. Our review begins with a brief introduction on the history of deep learning and its representative tool, namely, the convolutional neural network. Then, we focus on typical generic object detection architectures along with some modifications and useful tricks to improve detection performance further. As distinct specific detection tasks exhibit different characteristics, we also briefly survey several specific tasks, including salient object detection, face detection, and pedestrian detection. Experimental analyses are also provided to compare various methods and draw some meaningful conclusions. Finally, several promising directions and tasks are provided to serve as guidelines for future work in both object detection and relevant neural network-based learning systems.",
"title": ""
},
{
"docid": "f34b41a7f0dd902119197550b9bcf111",
"text": "Tachyzoites, bradyzoites (in tissue cysts), and sporozoites (in oocysts) are the three infectious stages of Toxoplasma gondii. The prepatent period (time to shedding of oocysts after primary infection) varies with the stage of T. gondii ingested by the cat. The prepatent period (pp) after ingesting bradyzoites is short (3-10 days) while it is long (18 days or longer) after ingesting oocysts or tachyzoites, irrespective of the dose. The conversion of bradyzoites to tachyzoites and tachyzoites to bradyzoites is biologically important in the life cycle of T. gondii. In the present paper, the pp was used to study in vivo conversion of tachyzoites to bradyzoites using two isolates, VEG and TgCkAr23. T. gondii organisms were obtained from the peritoneal exudates (pex) of mice inoculated intraperitoneally (i.p.) with these isolates and administered to cats orally by pouring in the mouth or by a stomach tube. In total, 94 of 151 cats shed oocysts after ingesting pex. The pp after ingesting pex was short (5-10 days) in 50 cats, intermediate (11-17) in 30 cats, and long (18 or higher) in 14 cats. The strain of T. gondii (VEG, TgCKAr23) or the stage (bradyzoite, tachyzoite, and sporozoite) used to initiate infection in mice did not affect the results. In addition, six of eight cats fed mice infected 1-4 days earlier shed oocysts with a short pp; the mice had been inoculated i.p. with bradyzoites of the VEG strain and their whole carcasses were fed to cats 1, 2, 3, or 4 days post-infection. Results indicate that bradyzoites may be formed in the peritoneal cavities of mice inoculated intraperitoneally with T. gondii and some bradyzoites might give rise directly to bradyzoites without converting to tachyzoites.",
"title": ""
},
{
"docid": "343f45efbdbf654c421b99927c076c5d",
"text": "As software engineering educators, it is important for us to realize the increasing domain-specificity of software, and incorporate these changes in our design of teaching material. Bioinformatics software is an example of immensely complex and critical scientific software and this domain provides an excellent illustration of the role of computing in the life sciences. To study bioinformatics from a software engineering standpoint, we conducted an exploratory survey of bioinformatics developers. The survey had a range of questions about people, processes and products. We learned that practices like extreme programming, requirements engineering and documentation. As software engineering educators, we realized that the survey results had important implications for the education of bioinformatics professionals. We also investigated the current status of software engineering education in bioinformatics, by examining the curricula of more than fifty bioinformatics programs and the contents of over fifteen textbooks. We observed that there was no mention of the role and importance of software engineering practices essential for creating dependable software systems. Based on our findings and existing literature we present a set of recommendations for improving software engineering education in bioinformatics.",
"title": ""
},
{
"docid": "cc980260540d9e9ae8e7219ff9424762",
"text": "The persuasive design of e-commerce websites has been shown to support people with online purchases. Therefore, it is important to understand how persuasive applications are used and assimilated into e-commerce website designs. This paper demonstrates how the PSD model’s persuasive features could be used to build a bridge supporting the extraction and evaluation of persuasive features in such e-commerce websites; thus practically explaining how feature implementation can enhance website persuasiveness. To support a deeper understanding of persuasive e-commerce website design, this research, using the Persuasive Systems Design (PSD) model, identifies the distinct persuasive features currently assimilated in ten successful e-commerce websites. The results revealed extensive use of persuasive features; particularly features related to dialogue support, credibility support, and primary task support; thus highlighting weaknesses in the implementation of social support features. In conclusion we suggest possible ways for enhancing persuasive feature implementation via appropriate contextual examples and explanation.",
"title": ""
},
{
"docid": "2f9e5a34137fe7871c9388078c57dc8e",
"text": "This paper presents a new model of measuring semantic similarity in the taxonomy of WordNet. The model takes the path length between two concepts and IC value of each concept as its metric, furthermore, the weight of two metrics can be adapted artificially. In order to evaluate our model, traditional and widely used datasets are used. Firstly, coefficients of correlation between human ratings of similarity and six computational models are calculated, the result shows our new model outperforms their homologues. Then, the distribution graphs of similarity value of 65 word pairs are discussed our model having no faulted zone more centralized than other five methods. So our model can make up the insufficient of other methods which only using one metric(path length or IC value) in their model.",
"title": ""
},
{
"docid": "1056fbe244f25672680ea45d6e8a4c73",
"text": "In this paper, we address the problem of reconstructing an object’s surface from a single image using generative networks. First, we represent a 3D surface with an aggregation of dense point clouds from multiple views. Each point cloud is embedded in a regular 2D grid aligned on an image plane of a viewpoint, making the point cloud convolution-favored and ordered so as to fit into deep network architectures. The point clouds can be easily triangulated by exploiting connectivities of the 2D grids to form mesh-based surfaces. Second, we propose an encoder-decoder network that generates such kind of multiple view-dependent point clouds from a single image by regressing their 3D coordinates and visibilities. We also introduce a novel geometric loss that is able to interpret discrepancy over 3D surfaces as opposed to 2D projective planes, resorting to the surface discretization on the constructed meshes. We demonstrate that the multi-view point regression network outperforms state-of-the-art methods with a significant improvement on challenging datasets.",
"title": ""
},
{
"docid": "f9ba9cb0d10c6e44e40c7a5f06e87b5e",
"text": "Graphomotor impressions are a product of complex cognitive, perceptual and motor skills and are widely used as psychometric tools for the diagnosis of a variety of neuro-psychological disorders. Apparent deformations in these responses are quantified as errors and are used are indicators of various conditions. Contrary to conventional assessment methods where manual analysis of impressions is carried out by trained clinicians, an automated scoring system is marked by several challenges. Prior to analysis, such computerized systems need to extract and recognize individual shapes drawn by subjects on a sheet of paper as an important pre-processing step. The aim of this study is to apply deep learning methods to recognize visual structures of interest produced by subjects. Experiments on figures of Bender Gestalt Test (BGT), a screening test for visuo-spatial and visuo-constructive disorders, produced by 120 subjects, demonstrate that deep feature representation brings significant improvements over classical approaches. The study is intended to be extended to discriminate coherent visual structures between produced figures and expected prototypes.",
"title": ""
}
] | scidocsrr |
e93528333487ee373bd5e04bd8f0ff6b | Automatically Mapping and Integrating Multiple Data Entry Forms into a Database | [
{
"docid": "faf4f549186bffc799ce545bbc3d320e",
"text": "In many applications it is important to find a meaningful relationship between the schemas of a source and target database. This relationship is expressed in terms of declarative logical expressions called schema mappings. The more successful previous solutions have relied on inputs such as simple element correspondences between schemas in addition to local schema constraints such as keys and referential integrity. In this paper, we investigate the use of an alternate source of information about schemas, namely the presumed presence of semantics for each table, expressed in terms of a conceptual model (CM) associated with it. Our approach first compiles each CM into a graph and represents each table's semantics as a subtree in it. We then develop algorithms for discovering subgraphs that are plausible connections between those concepts/nodes in the CM graph that have attributes participating in element correspondences. A conceptual mapping candidate is now a pair of source and target subgraphs which are semantically similar. At the end, these are converted to expressions at the database level. We offer experimental results demonstrating that, for test cases of non-trivial mapping expressions involving schemas from a number of domains, the \"semantic\" approach outperforms the traditional technique in terms of recall and especially precision.",
"title": ""
}
] | [
{
"docid": "14551a9e92dc9ce47e2f80a8fc4dd741",
"text": "We model a simple genetic algorithm as a Markov chain. Our method is both complete (selection, mutation, and crossover are incorporated into an explicitly given transition matrix) and exact; no special assumptions are made which restrict populations or population trajectories. We also consider the asymptotics of the steady state distributions as population size increases.",
"title": ""
},
{
"docid": "a49ea9c9f03aa2d926faa49f4df63b7a",
"text": "Deep stacked RNNs are usually hard to train. Recent studies have shown that shortcut connections across different RNN layers bring substantially faster convergence. However, shortcuts increase the computational complexity of the recurrent computations. To reduce the complexity, we propose the shortcut block, which is a refinement of the shortcut LSTM blocks. Our approach is to replace the self-connected parts (ct) with shortcuts (hl−2 t ) in the internal states. We present extensive empirical experiments showing that this design performs better than the original shortcuts. We evaluate our method on CCG supertagging task, obtaining a 8% relatively improvement over current state-of-the-art results.",
"title": ""
},
{
"docid": "e2cd2edc74d932f1632a858ac124f902",
"text": "Large writes are beneficial both on individual disks and on disk arrays, e.g., RAID-5. The presented design enables large writes of internal B-tree nodes and leaves. It supports both in-place updates and large append-only (“log-structured”) write operations within the same storage volume, within the same B-tree, and even at the same time. The essence of the proposal is to make page migration inexpensive, to migrate pages while writing them, and to make such migration optional rather than mandatory as in log-structured file systems. The inexpensive page migration also aids traditional defragmentation as well as consolidation of free space needed for future large writes. These advantages are achieved with a very limited modification to conventional B-trees that also simplifies other B-tree operations, e.g., key range locking and compression. Prior proposals and prototypes implemented transacted B-tree on top of log-structured file systems and added transaction support to log-structured file systems. Instead, the presented design adds techniques and performance characteristics of log-structured file systems to traditional B-trees and their standard transaction support, notably without adding a layer of indirection for locating B-tree nodes on disk. The result retains fine-granularity locking, full transactional ACID guarantees, fast search performance, etc. expected of a modern B-tree implementation, yet adds efficient transacted page relocation and large, high-bandwidth writes.",
"title": ""
},
{
"docid": "daaa048824f1fa8303a2f4ac95301ccc",
"text": "The Internet of Things (IoT) represents a diverse technology and usage with unprecedented business opportunities and risks. The Internet of Things is changing the dynamics of security industry & reshaping it. It allows data to be transferred seamlessly among physical devices to the Internet. The growth of number of intelligent devices will create a network rich with information that allows supply chains to assemble and communicate in new ways. The technology research firm Gartner predicts that there will be 26 billion installed units on the Internet of Things (IoT) by 2020[1]. This paper explains the concept of Internet of Things (IoT), its characteristics, explain security challenges, technology adoption trends & suggests a reference architecture for E-commerce enterprise.",
"title": ""
},
{
"docid": "6c5c6e201e2ae886908aff554866b9ed",
"text": "HDBSCAN: Hierarchical Density-Based Spatial Clustering of Applications with Noise (Campello, Moulavi, and Sander 2013), (Campello et al. 2015). Performs DBSCAN over varying epsilon values and integrates the result to find a clustering that gives the best stability over epsilon. This allows HDBSCAN to find clusters of varying densities (unlike DBSCAN), and be more robust to parameter selection. The library also includes support for Robust Single Linkage clustering (Chaudhuri et al. 2014), (Chaudhuri and Dasgupta 2010), GLOSH outlier detection (Campello et al. 2015), and tools for visualizing and exploring cluster structures. Finally support for prediction and soft clustering is also available.",
"title": ""
},
{
"docid": "ca9f1a955ad033e43d25533d37f50b88",
"text": "Language in social media is extremely dynamic: new words emerge, trend and disappear, while the meaning of existing words can fluctuate over time. This work addresses several important tasks of visualizing and predicting short term text representation shift, i.e. the change in a word’s contextual semantics. We study the relationship between short-term concept drift and representation shift on a large social media corpus – VKontakte collected during the Russia-Ukraine crisis in 2014 – 2015. We visualize short-term representation shift for example keywords and build predictive models to forecast short-term shifts in meaning from previous meaning as well as from concept drift. We show that short-term representation shift can be accurately predicted up to several weeks in advance and that visualization provides insight into meaning change. Our approach can be used to explore and characterize specific aspects of the streaming corpus during crisis events and potentially improve other downstream classification tasks including real-time event forecasting in social media.",
"title": ""
},
{
"docid": "f8fe22b2801a250a52e3d19ae23804e9",
"text": "Human movements contribute to the transmission of malaria on spatial scales that exceed the limits of mosquito dispersal. Identifying the sources and sinks of imported infections due to human travel and locating high-risk sites of parasite importation could greatly improve malaria control programs. Here, we use spatially explicit mobile phone data and malaria prevalence information from Kenya to identify the dynamics of human carriers that drive parasite importation between regions. Our analysis identifies importation routes that contribute to malaria epidemiology on regional spatial scales.",
"title": ""
},
{
"docid": "3240607824a6dace92925e75df92cc09",
"text": "We propose a framework to model general guillotine restrictions in two-dimensional cutting problems formulated as Mixed Integer Linear Programs (MIP). The modeling framework requires a pseudo-polynomial number of variables and constraints, which can be effectively enumerated for medium-size instances. Our modeling of general guillotine cuts is the first one that, once it is implemented within a state-of-the-art MIP solver, can tackle instances of challenging size. We mainly concentrate our analysis on the Guillotine Two Dimensional Knapsack Problem (G2KP), for which a model, and an exact procedure able to significantly improve the computational performance, are given. We also show how the modeling of general guillotine cuts can be extended to other relevant problems such as the Guillotine Two Dimensional Cutting Stock Problem (G2CSP) and the Guillotine Strip Packing Problem (GSPP). Finally, we conclude the paper discussing an extensive set of computational experiments on G2KP and GSPP benchmark instances from the literature.",
"title": ""
},
{
"docid": "c4d0a1cd8a835dc343b456430791035b",
"text": "Social networks offer an invaluable amount of data from which useful information can be obtained on the major issues in society, among which crime stands out. Research about information extraction of criminal events in Social Networks has been done primarily in English language, while in Spanish, the problem has not been addressed. This paper propose a system for extracting spatio-temporally tagged tweets about crime events in Spanish language. In order to do so, it uses a thesaurus of criminality terms and a NER (named entity recognition) system to process the tweets and extract the relevant information. The NER system is based on the implementation OSU Twitter NLP Tools, which has been enhanced for Spanish language. Our results indicate an improved performance in relation to the most relevant tools such as Standford NER and OSU Twitter NLP Tools, achieving 80.95% precision, 59.65% recall and 68.69% F-measure. The end result shows the crime information broken down by place, date and crime committed through a webservice.",
"title": ""
},
{
"docid": "df48f9d3096d8528e9f517783a044df8",
"text": "We propose a novel generative neural network architecture for Dialogue Act classification. Building upon the Recurrent Neural Network framework, our model incorporates a new attentional technique and a label-to-label connection for sequence learning, akin to Hidden Markov Models. Our experiments show that both of these innovations enable our model to outperform strong baselines for dialogue-act classification on the MapTask and Switchboard corpora. In addition, we analyse empirically the effectiveness of each of these innovations.",
"title": ""
},
{
"docid": "bb9f5ab961668b8aac5f786d33fb7e1f",
"text": "The process that resulted in the diagnostic criteria for posttraumatic stress disorder (PTSD) in the Diagnostic and Statistical Manual of Mental Disorders (5th ed.; DSM-5; American Psychiatric Association; ) was empirically based and rigorous. There was a high threshold for any changes in any DSM-IV diagnostic criterion. The process is described in this article. The rationale is presented that led to the creation of the new chapter, \"Trauma- and Stressor-Related Disorders,\" within the DSM-5 metastructure. Specific issues discussed about the DSM-5 PTSD criteria themselves include a broad versus narrow PTSD construct, the decisions regarding Criterion A, the evidence supporting other PTSD symptom clusters and specifiers, the addition of the dissociative and preschool subtypes, research on the new criteria from both Internet surveys and the DSM-5 field trials, the addition of PTSD subtypes, the noninclusion of complex PTSD, and comparisons between DSM-5 versus the World Health Association's forthcoming International Classification of Diseases (ICD-11) criteria for PTSD. The PTSD construct continues to evolve. In DSM-5, it has moved beyond a narrow fear-based anxiety disorder to include dysphoric/anhedonic and externalizing PTSD phenotypes. The dissociative subtype may open the way to a fresh approach to complex PTSD. The preschool subtype incorporates important developmental factors affecting the expression of PTSD in young children. Finally, the very different approaches taken by DSM-5 and ICD-11 should have a profound effect on future research and practice.",
"title": ""
},
{
"docid": "d8fab661721e70a64fac930343203d20",
"text": "Studies of a range of higher cognitive functions consistently activate a region of anterior cingulate cortex (ACC), typically posterior to the genu and superior to the corpus collosum. In particular, this ACC region appears to be active in task situations where there is a need to override a prepotent response tendency, when responding is underdetermined, and when errors are made. We have hypothesized that the function of this ACC region is to monitor for the presence of crosstalk or competition between incompatible responses. In prior work, we provided initial support for this hypothesis, demonstrating ACC activity in the same region both during error trials and during correct trials in task conditions designed to elicit greater response competition. In the present study, we extend our testing of this hypothesis to task situations involving underdetermined responding. Specifically, 14 healthy control subjects performed a verb-generation task during event-related functional magnetic resonance imaging (fMRI), with the on-line acquisition of overt verbal responses. The results demonstrated that the ACC, and only the ACC, was more active in a series of task conditions that elicited competition among alternative responses. These conditions included a greater ACC response to: (1) Nouns categorized as low vs. high constraint (i.e., during a norming study, multiple verbs were produced with equal frequency vs. a single verb that produced much more frequently than any other); (2) the production of verbs that were weak associates, rather than, strong associates of particular nouns; and (3) the production of verbs that were weak associates for nouns categorized as high constraint. We discuss the implication of these results for understanding the role that the ACC plays in human cognition.",
"title": ""
},
{
"docid": "448f12ead2cae05dbb2a19e3d565a8f5",
"text": "This paper presents a feature extraction technique based on the Hilbert-Huang Transform (HHT) method for emotion recognition from physiological signals. Four kinds of physiological signals were used for analysis: electrocardiogram (ECG), electromyogram (EMG), skin conductivity (SC) and respiration changes (RSP). Each signal is decomposed into a finite set of AM-FM mono components (fission process) by the Empirical Mode Decomposition (EMD) which is the key part of the HHT. The information components of interest are then combined to create feature vectors (fusion process) for the next classification stage. In addition, classification is performed by using Support Vector Machines (SVM). The classification scores show that HHT based methods outperform traditional statistical techniques and provide a promising framework for both analysis and recognition of physiological signals in emotion recognition.",
"title": ""
},
{
"docid": "89d736c68d2befba66a0b7d876e52502",
"text": "The optical properties of human skin, subcutaneous adipose tissue and human mucosa were measured in the wavelength range 400–2000 nm. The measurements were carried out using a commercially available spectrophotometer with an integrating sphere. The inverse adding–doubling method was used to determine the absorption and reduced scattering coefficients from the measurements.",
"title": ""
},
{
"docid": "bed6312dd677fa37c30e72d0383973ed",
"text": " Fig.1にマスタリーラーニングのアウトラインを示す。 初めに教師はカリキュラムや教材をコンセプトやアイディアが重要であるためレビューする必要がある。 次に教師による診断手段や診断プロセスという形式的評価の計画である。また学習エラーを改善するための Corrective Activitiesの計画の主要な援助でもある。 Corrective Activites 矯正活動にはさまざまな形がとられる。Peer Cross-age Tutoring、コンピュータ支援レッスンなど Enrichment Activities 問題解決練習の特別なtutoringであり、刺激的で早熟な学習者に実りのある学習となっている。 Formative Assesment B もしCorrective Activitiesが学習者を改善しているのならばこの2回目の評価では体得を行っている。 この2回目の評価は学習者に改善されていることや良い学習者になっていることを示し、強力なモチベーショ ンのデバイスとなる。最後は累積的試験または評価の開発がある。",
"title": ""
},
{
"docid": "e8d0eab8c5ea4c3186499aa13cc6fc56",
"text": "A new multiple-input dc-dc converter realized from a modified inverse Watkins-Johnson topology is presented and analyzed. Fundamental electrical characteristics are presented and power budget equations are derived. Small signal analysis model of the propose converter is presented and studied. Two possible operation methods to achieve output voltage regulation are presented here. The analysis is verified with simulations and experiments on a prototype circuit.",
"title": ""
},
{
"docid": "0ec7ac1f00fb20854d622982d28f9056",
"text": "The structure of an air-core cylindrical high voltage pulse transformer is relatively simple, but considerable attention is needed to prevent breakdown between transformer windings. Since the thickness of the spiral windings is on the order of sub-millimeter, field enhancement at the edges of the windings is very high. Therefore, it is important to have proper electrical insulations to prevent breakdown at the edges and to make the system compact. Major design parameters of the transformer are primary inductance of 170 nH, and output voltage of about 500 kV. The fabricated transformer is 45 cm in length and 30 cm in diameter. The fabricated transformer is tested up to 450 kV with a Marx generator. In this paper, we will discuss design and fabrication procedures, and preliminary test results of the air-core cylindrical HV pulse transformer",
"title": ""
},
{
"docid": "e56173228f9d5b89e4173bc83e73d3d2",
"text": "The categorization of gender identity variants (GIVs) as \"mental disorders\" in the Diagnostic and Statistical Manual of Mental Disorders (DSM) of the American Psychiatric Association is highly controversial among professionals as well as among persons with GIV. After providing a brief history of GIV categorizations in the DSM, this paper presents some of the major issues of the ongoing debate: GIV as psychopathology versus natural variation; definition of \"impairment\" and \"distress\" for GID; associated psychopathology and its relation to stigma; the stigma impact of the mental-disorder label itself; the unusual character of \"sex reassignment surgery\" as a psychiatric treatment; and the consequences for health and mental-health services if the disorder label is removed. Finally, several categorization options are examined: Retaining the GID category, but possibly modifying its grouping with other syndromes; narrowing the definition to dysphoria and taking \"disorder\" out of the label; categorizing GID as a neurological or medical rather than a psychiatric disorder; removing GID from both the DSM and the International Classification of Diseases (ICD); and creating a special category for GIV in the DSM. I conclude that-as also evident in other DSM categories-the decision on the categorization of GIVs cannot be achieved on a purely scientific basis, and that a consensus for a pragmatic compromise needs to be arrived at that accommodates both scientific considerations and the service needs of persons with GIVs.",
"title": ""
},
{
"docid": "f649a975dcec02ea82bebb95dafd5eab",
"text": "Online games have emerged as popular computer applications and gamer loyalty is vital to game providers, since online gamers frequently switch between games. Online gamers often participate in teams also. This study investigates whether and how team participation improves loyalty. We utilized a cross-sectional design and an online survey, with 546 valid responses from online game subjects. Confirmatory factor analysis was applied to assess measurement reliability and validity directly, and structural equation modeling was utilized to test our hypotheses. The results indicate that participation in teams motivates online gamers to adhere to team norms and satisfies their social needs, also enhancing their loyalty. The contribution of this research is the introduction of social norms to explain online gamer loyalty. 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "3de0b9a5d5241893d8a3de4b723e5140",
"text": "One of the emerging networking standards that gap between the physical world and the cyber one is the Internet of Things. In the Internet of Things, smart objects communicate with each other, data are gathered and certain requests of users are satisfied by different queried data. The development of energy efficient schemes for the IoT is a challenging issue as the IoT becomes more complex due to its large scale the current techniques of wireless sensor networks cannot be applied directly to the IoT. To achieve the green networked IoT, this paper addresses energy efficiency issues by proposing a novel deployment scheme. This scheme, introduces: (1) a hierarchical network design; (2) a model for the energy efficient IoT; (3) a minimum energy consumption transmission algorithm to implement the optimal model. The simulation results show that the new scheme is more energy efficient and flexible than traditional WSN schemes and consequently it can be implemented for efficient communication in the IoT.",
"title": ""
}
] | scidocsrr |
b333dcddc559ebcf28b6f58e4124b6fa | Theoretical Linear Convergence of Unfolded ISTA and Its Practical Weights and Thresholds | [
{
"docid": "634b30b81da7139082927109b4c22d5e",
"text": "Compressive image recovery is a challenging problem that requires fast and accurate algorithms. Recently, neural networks have been applied to this problem with promising results. By exploiting massively parallel GPU processing architectures and oodles of training data, they can run orders of magnitude faster than existing techniques. However, these methods are largely unprincipled black boxes that are difficult to train and often-times specific to a single measurement matrix. It was recently demonstrated that iterative sparse-signal-recovery algorithms can be “unrolled” to form interpretable deep networks. Taking inspiration from this work, we develop a novel neural network architecture that mimics the behavior of the denoising-based approximate message passing (D-AMP) algorithm. We call this new network Learned D-AMP (LDAMP). The LDAMP network is easy to train, can be applied to a variety of different measurement matrices, and comes with a state-evolution heuristic that accurately predicts its performance. Most importantly, it outperforms the state-of-the-art BM3D-AMP and NLR-CS algorithms in terms of both accuracy and run time. At high resolutions, and when used with sensing matrices that have fast implementations, LDAMP runs over 50× faster than BM3D-AMP and hundreds of times faster than NLR-CS.",
"title": ""
},
{
"docid": "59786d8ea951639b8b9a4e60c9d43a06",
"text": "Compressed sensing is a technique to sample compressible signals below the Nyquist rate, whilst still allowing near optimal reconstruction of the signal. In this paper we present a theoretical analysis of the iterative hard thresholding algorithm when applied to the compressed sensing recovery problem. We show that the algorithm has the following properties (made more precise in the main text of the paper) • It gives near-optimal error guarantees. • It is robust to observation noise. • It succeeds with a minimum number of observations. • It can be used with any sampling operator for which the operator and its adjoint can be computed. • The memory requirement is linear in the problem size. Preprint submitted to Elsevier 28 January 2009 • Its computational complexity per iteration is of the same order as the application of the measurement operator or its adjoint. • It requires a fixed number of iterations depending only on the logarithm of a form of signal to noise ratio of the signal. • Its performance guarantees are uniform in that they only depend on properties of the sampling operator and signal sparsity.",
"title": ""
}
] | [
{
"docid": "489aa160c450539b50c63c6c3c6993ab",
"text": "Adequacy of citations is very important for a scientific paper. However, it is not an easy job to find appropriate citations for a given context, especially for citations in different languages. In this paper, we define a novel task of cross-language context-aware citation recommendation, which aims at recommending English citations for a given context of the place where a citation is made in a Chinese paper. This task is very challenging because the contexts and citations are written in different languages and there exists a language gap when matching them. To tackle this problem, we propose the bilingual context-citation embedding algorithm (i.e. BLSRec-I), which can learn a low-dimensional joint embedding space for both contexts and citations. Moreover, two advanced algorithms named BLSRec-II and BLSRec-III are proposed by enhancing BLSRec-I with translation results and abstract information, respectively. We evaluate the proposed methods based on a real dataset that contains Chinese contexts and English citations. The results demonstrate that our proposed algorithms can outperform a few baselines and the BLSRec-II and BLSRec-III methods can outperform the BLSRec-I method.",
"title": ""
},
{
"docid": "5e7a87078f92b7ce145e24a2e7340f1b",
"text": "Unsupervised artificial neural networks are now considered as a likely alternative to classical computing models in many application domains. For example, recent neural models defined by neuro-scientists exhibit interesting properties for an execution in embedded and autonomous systems: distributed computing, unsupervised learning, self-adaptation, self-organisation, tolerance. But these properties only emerge from large scale and fully connected neural maps that result in intensive computation coupled with high synaptic communications. We are interested in deploying these powerful models in the embedded context of an autonomous bio-inspired robot learning its environment in realtime. So we study in this paper in what extent these complex models can be simplified and deployed in hardware accelerators compatible with an embedded integration. Thus we propose a Neural Processing Unit designed as a programmable accelerator implementing recent equations close to self-organizing maps and neural fields. The proposed architecture is validated on FPGA devices and compared to state of the art solutions. The trade-off proposed by this dedicated but programmable neural processing unit allows to achieve significant improvements and makes our architecture adapted to many embedded systems.",
"title": ""
},
{
"docid": "014759efa636aec38aa35287b61e44a4",
"text": "Outlier detection is an important topic in machine learning and has been used in a wide range of applications. In this paper, we approach outlier detection as a binary-classification issue by sampling potential outliers from a uniform reference distribution. However, due to the sparsity of data in high-dimensional space, a limited number of potential outliers may fail to provide sufficient information to assist the classifier in describing a boundary that can separate outliers from normal data effectively. To address this, we propose a novel Single-Objective Generative Adversarial Active Learning (SO-GAAL) method for outlier detection, which can directly generate informative potential outliers based on the mini-max game between a generator and a discriminator. Moreover, to prevent the generator from falling into the mode collapsing problem, the stop node of training should be determined when SO-GAAL is able to provide sufficient information. But without any prior information, it is extremely difficult for SO-GAAL. Therefore, we expand the network structure of SO-GAAL from a single generator to multiple generators with different objectives (MO-GAAL), which can generate a reasonable reference distribution for the whole dataset. We empirically compare the proposed approach with several state-of-the-art outlier detection methods on both synthetic and real-world datasets. The results show that MO-GAAL outperforms its competitors in the majority of cases, especially for datasets with various cluster types or high irrelevant variable ratio. The experiment codes are available at: https://github.com/leibinghe/GAAL-based-outlier-detection",
"title": ""
},
{
"docid": "8476c0832f62e061cf2e63f61e59abf0",
"text": "OBJECTIVE\nThis study examined the effectiveness of using a weighted vest for increasing attention to a fine motor task and decreasing self-stimulatory behaviors in preschool children with pervasive developmental disorders (PDD).\n\n\nMETHOD\nUsing an ABA single-subject design, the duration of attention to task and self-stimulatory behaviors and the number of distractions were measured in five preschool children with PDD over a period of 6 weeks.\n\n\nRESULTS\nDuring the intervention phase, all participants displayed a decrease in the number of distractions and an increase in the duration of focused attention while wearing the weighted vest. All but 1 participant demonstrated a decrease in the duration of self-stimulatory behaviors while wearing a weighted vest; however, the type of self-stimulatory behaviors changed and became less self-abusive for this child while she wore the vest. During the intervention withdrawal phase, 3 participants experienced an increase in the duration of self-stimulatory behaviors, and all participants experienced an increase in the number of distractions and a decrease in the duration of focused attention. The increase or decrease, however, never returned to baseline levels for these behaviors.\n\n\nCONCLUSION\nThe findings suggest that for these 5 children with PDD, the use of a weighted vest resulted in an increase in attention to task and decrease in self-stimulatory behaviors. The most consistent improvement observed was the decreased number of distractions. Additional research is necessary to build consensus about the effectiveness of wearing a weighted vest to increase attention to task and decrease self-stimulatory behaviors for children with PDD.",
"title": ""
},
{
"docid": "b9ec6867c23e5e5ecf53a4159872747c",
"text": "Competition in the wireless telecommunications industry is rampant. To maintain profitability, wireless carriers must control churn, the loss of subscribers who switch from one carrier to another. We explore statistical techniques for churn prediction and, based on these predictions, an optimal policy for identifying customers to whom incentives should be offered to increase retention. Our experiments are based on a data base of nearly 47,000 U.S. domestic subscribers, and includes information about their usage, billing, credit, application, and complaint history. We show that under a wide variety of assumptions concerning the cost of intervention and the retention rate resulting from intervention, churn prediction and remediation can yield significant savings to a carrier. We also show the importance of a data representation crafted by domain experts. Competition in the wireless telecommunications industry is rampant. As many as seven competing carriers operate in each market. The industry is extremely dynamic, with new services, technologies, and carriers constantly altering the landscape. Carriers announce new rates and incentives weekly, hoping to entice new subscribers and to lure subscribers away from the competition. The extent of rivalry is reflected in the deluge of advertisements for wireless service in the daily newspaper and other mass media. The United States had 69 million wireless subscribers in 1998, roughly 25% of the population. Some markets are further developed; for example, the subscription rate in Finland is 53%. Industry forecasts are for a U.S. penetration rate of 48% by 2003. Although there is significant room for growth in most markets, the industry growth rate is declining and competition is rising. Consequently, it has become crucial for wireless carriers to control churn—the loss of customers who switch from one carrier to another. At present, domestic monthly churn rates are 2-3% of the customer base. At an average cost of $400 to acquire a subscriber, churn cost the industry nearly $6.3 billion in 1998; the total annual loss rose to nearly $9.6 billion when lost monthly revenue from subscriber cancellations is considered (Luna, 1998). It costs roughly five times as much to sign on a new subscriber as to retain an existing one. Consequently, for a carrier with 1.5 million subscribers, reducing the monthly churn rate from 2% to 1% would yield an increase in annual earnings of at least $54 million, and an increase in shareholder value of approximately $150 million. (Estimates are even higher when lost monthly revenue is considered; see Fowlkes, Madan, Andrew, & Jensen, 1999; Luna, 1998.) The goal of our research is to evaluate the benefits of predicting churn using techniques from statistical machine learning. We designed models that predict the probability Mozer, M. C., Wolniewicz, R., Grimes, D. B., Johnson, E., & Kaushansky, H. (2000). Churn reduction in the wireless industry. In S. A. Solla, T. K. Leen, & K.-R. Mueller (Eds.), Advances in Neural Information Processing Systems 12 (pp. 935941). Cambridge, MA: MIT Press. of a subscriber churning within a short time window, and we evaluated how well these predictions could be used for decision making by estimating potential cost savings to the wireless carrier under a variety of assumptions concerning subscriber behavior.",
"title": ""
},
{
"docid": "850854aeae187ffdd74c56135d9a4d5b",
"text": "Dynamic interactive maps with transparent but powerful human interface capabilities are beginning to emerge for a variety of geographical information systems, including ones situated on portables for travelers, students, business and service people, and others working in field settings. In the present research, interfaces supporting spoken, pen-based, and multimodal input were analyze for their potential effectiveness in interacting with this new generation of map systems. Input modality (speech, writing, multimodal) and map display format (highly versus minimally structured) were varied in a within-subject factorial design as people completed realistic tasks with a simulated map system. The results identified a constellation of performance difficulties associated with speech-only map interactions, including elevated performance errors, spontaneous disfluencies, and lengthier task completion t ime-problems that declined substantially when people could interact multimodally with the map. These performance advantages also mirrored a strong user preference to interact multimodally. The error-proneness and unacceptability of speech-only input to maps was attributed in large part to people's difficulty generating spoken descriptions of spatial location. Analyses also indicated that map display format can be used to minimize performance errors and disfluencies, and map interfaces that guide users' speech toward brevity can nearly eliminate disfiuencies. Implications of this research are discussed for the design of high-performance multimodal interfaces for future map",
"title": ""
},
{
"docid": "87552ea79b92986de3ce5306ef0266bc",
"text": "This paper presents a novel secondary frequency and voltage control method for islanded microgrids based on distributed cooperative control. The proposed method utilizes a sparse communication network where each DG unit only requires local and its neighbors’ information to perform control actions. The frequency controller restores the system frequency to the nominal value while maintaining the equal generation cost increment value among DG units. The voltage controller simultaneously achieves the critical bus voltage restoration and accurate reactive power sharing. Subsequently, the case when the DG unit ac-side voltage reaches its limit value is discussed and a controller output limitation method is correspondingly provided to selectively realize the desired control objective. This paper also provides a small-signal dynamic model of the microgrid with the proposed controller to evaluate the system dynamic performance. Finally, simulation results on a microgrid test system are presented to validate the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "e75b7c2fcdfc19a650d7da4e6ae643a2",
"text": "With the proliferation of high-throughput technologies, genome-level data analysis has become common in molecular biology. Bioinformaticians are developing extensive resources to annotate and mine biological features from high-throughput data. The underlying database management systems for most bioinformatics software are based on a relational model. Modern non-relational databases offer an alternative that has flexibility, scalability, and a non-rigid design schema. Moreover, with an accelerated development pace, non-relational databases like CouchDB can be ideal tools to construct bioinformatics utilities. We describe CouchDB by presenting three new bioinformatics resources: (a) geneSmash, which collates data from bioinformatics resources and provides automated gene-centric annotations, (b) drugBase, a database of drug-target interactions with a web interface powered by geneSmash, and (c) HapMap-CN, which provides a web interface to query copy number variations from three SNP-chip HapMap datasets. In addition to the web sites, all three systems can be accessed programmatically via web services.",
"title": ""
},
{
"docid": "b41d56e726628673d12b9efcb715a69c",
"text": "Ten new phenylpropanoid glucosides, tadehaginosides A-J (1-10), and the known compound tadehaginoside (11) were obtained from Tadehagi triquetrum. These phenylpropanoid glucosides were structurally characterized through extensive physical and chemical analyses. Compounds 1 and 2 represent the first set of dimeric derivatives of tadehaginoside with an unusual bicyclo[2.2.2]octene skeleton, whereas compounds 3 and 4 contain a unique cyclobutane basic core in their carbon scaffolds. The effects of these compounds on glucose uptake in C2C12 myotubes were evaluated. Compounds 3-11, particularly 4, significantly increased the basal and insulin-elicited glucose uptake. The results from molecular docking, luciferase analyses, and ELISA indicated that the increased glucose uptake may be due to increases in peroxisome proliferator-activated receptor γ (PPARγ) activity and glucose transporter-4 (GLUT-4) expression. These results indicate that the isolated phenylpropanoid glucosides, particularly compound 4, have the potential to be developed into antidiabetic compounds.",
"title": ""
},
{
"docid": "b97208934c9475bc9d9bb3a095826a15",
"text": "Article history: Received 12 February 2014 Received in revised form 13 August 2014 Accepted 29 August 2014 Available online 8 September 2014",
"title": ""
},
{
"docid": "2c226c7be6acf725190c72a64bfcdf91",
"text": "The past decade has witnessed the rapid evolution in blockchain technologies, which has attracted tremendous interests from both the research communities and industries. The blockchain network was originated from the Internet financial sector as a decentralized, immutable ledger system for transactional data ordering. Nowadays, it is envisioned as a powerful backbone/framework for decentralized data processing and datadriven self-organization in flat, open-access networks. In particular, the plausible characteristics of decentralization, immutability and self-organization are primarily owing to the unique decentralized consensus mechanisms introduced by blockchain networks. This survey is motivated by the lack of a comprehensive literature review on the development of decentralized consensus mechanisms in blockchain networks. In this survey, we provide a systematic vision of the organization of blockchain networks. By emphasizing the unique characteristics of incentivized consensus in blockchain networks, our in-depth review of the state-ofthe-art consensus protocols is focused on both the perspective of distributed consensus system design and the perspective of incentive mechanism design. From a game-theoretic point of view, we also provide a thorough review on the strategy adoption for self-organization by the individual nodes in the blockchain backbone networks. Consequently, we provide a comprehensive survey on the emerging applications of the blockchain networks in a wide range of areas. We highlight our special interest in how the consensus mechanisms impact these applications. Finally, we discuss several open issues in the protocol design for blockchain consensus and the related potential research directions.",
"title": ""
},
{
"docid": "d87f336cc82cbd29df1f04095d98a7fb",
"text": "The academic publishing world is changing significantly, with ever-growing numbers of publications each year and shifting publishing patterns. However, the metrics used to measure academic success, such as the number of publications, citation number, and impact factor, have not changed for decades. Moreover, recent studies indicate that these metrics have become targets and follow Goodhart’s Law, according to which “when a measure becomes a target, it ceases to be a good measure.” In this study, we analyzed over 120 million papers to examine how the academic publishing world has evolved over the last century. Our study shows that the validity of citation-based measures is being compromised and their usefulness is lessening. In particular, the number of publications has ceased to be a good metric as a result of longer author lists, shorter papers, and surging publication numbers. Citation-based metrics, such citation number and h-index, are likewise affected by the flood of papers, self-citations, and lengthy reference lists. Measures such as a journal’s impact factor have also ceased to be good metrics due to the soaring numbers of papers that are published in top journals, particularly from the same pool of authors. Moreover, by analyzing properties of over 2600 research fields, we observed that citation-based metrics are not beneficial for comparing researchers in different fields, or even in the same department. Academic publishing has changed considerably; now we need to reconsider how we measure success. Multimedia Links I Interactive Data Visualization I Code Tutorials I Fields-of-Study Features Table",
"title": ""
},
{
"docid": "1fba9ed825604e8afde8459a3d3dc0c0",
"text": "Person re-identification (re-ID) models trained on one domain often fail to generalize well to another. In our attempt, we present a \"learning via translation\" framework. In the baseline, we translate the labeled images from source to target domain in an unsupervised manner. We then train re-ID models with the translated images by supervised methods. Yet, being an essential part of this framework, unsupervised image-image translation suffers from the information loss of source-domain labels during translation. Our motivation is two-fold. First, for each image, the discriminative cues contained in its ID label should be maintained after translation. Second, given the fact that two domains have entirely different persons, a translated image should be dissimilar to any of the target IDs. To this end, we propose to preserve two types of unsupervised similarities, 1) self-similarity of an image before and after translation, and 2) domain-dissimilarity of a translated source image and a target image. Both constraints are implemented in the similarity preserving generative adversarial network (SPGAN) which consists of an Siamese network and a CycleGAN. Through domain adaptation experiment, we show that images generated by SPGAN are more suitable for domain adaptation and yield consistent and competitive re-ID accuracy on two large-scale datasets.",
"title": ""
},
{
"docid": "b75336a7470fe2b002e742dbb6bfa8d5",
"text": "In Intelligent Tutoring System (ITS), tracing the student's knowledge state during learning has been studied for several decades in order to provide more supportive learning instructions. In this paper, we propose a novel model for knowledge tracing that i) captures students' learning ability and dynamically assigns students into distinct groups with similar ability at regular time intervals, and ii) combines this information with a Recurrent Neural Network architecture known as Deep Knowledge Tracing. Experimental results confirm that the proposed model is significantly better at predicting student performance than well known state-of-the-art techniques for student modelling.",
"title": ""
},
{
"docid": "238620ca0d9dbb9a4b11756630db5510",
"text": "this planet and many oceanic and maritime applications seem relatively slow in exploiting the state-of-the-art info-communication technologies. The natural and man-made disasters that have taken place over the last few years have aroused significant interest in monitoring oceanic environments for scientific, environmental, commercial, safety, homeland security and military needs. The shipbuilding and offshore engineering industries are also increasingly interested in technologies like sensor networks as an economically viable alternative to currently adopted and costly methods used in seismic monitoring, structural health monitoring, installation and mooring, etc. Underwater sensor networks (UWSNs) are the enabling technology for wide range of applications like monitoring the strong influences and impact of climate regulation, nutrient production, oil retrieval and transportation The underwater environment differs from the terrestrial radio environment both in terms of its energy costs and channel propagation phenomena. The underwater channel is characterized by long propagation times and frequency-dependent attenuation that is highly affected by the distance between nodes as well as by the link orientation. Some of other issues in which UWSNs differ from terrestrial are limited bandwidth, constrained battery power, more failure of sensors because of fouling and corrosion, etc. This paper presents several fundamental key aspects and architectures of UWSNs, emerging research issues of underwater sensor networks and exposes the researchers into networking of underwater communication devices for exciting ocean monitoring and exploration applications. I. INTRODUCTION The Earth is a water planet. Around 70% of the surface of earth is covered by water. This is largely unexplored area and recently it has fascinated humans to explore it. Natural or man-made disasters that have taken place over the last few years have aroused significant interest in monitoring oceanic environments for scientific, environmental, commercial, safety, homeland security and military needs. The shipbuilding and offshore engineering industries are also increasingly interested in technologies like wireless sensor",
"title": ""
},
{
"docid": "85657981b55e3a87e74238cd373b3db6",
"text": "INTRODUCTION\nLung cancer mortality rates remain at unacceptably high levels. Although mitochondrial dysfunction is a characteristic of most tumor types, mitochondrial dynamics are often overlooked. Altered rates of mitochondrial fission and fusion are observed in lung cancer and can influence metabolic function, proliferation and cell survival.\n\n\nAREAS COVERED\nIn this review, the authors outline the mechanisms of mitochondrial fission and fusion. They also identify key regulatory proteins and highlight the roles of fission and fusion in metabolism and other cellular functions (e.g., proliferation, apoptosis) with an emphasis on lung cancer and the interaction with known cancer biomarkers. They also examine the current therapeutic strategies reported as altering mitochondrial dynamics and review emerging mitochondria-targeted therapies.\n\n\nEXPERT OPINION\nMitochondrial dynamics are an attractive target for therapeutic intervention in lung cancer. Mitochondrial dysfunction, despite its molecular heterogeneity, is a common abnormality of lung cancer. Targeting mitochondrial dynamics can alter mitochondrial metabolism, and many current therapies already non-specifically affect mitochondrial dynamics. A better understanding of mitochondrial dynamics and their interaction with currently identified cancer 'drivers' such as Kirsten-Rat Sarcoma Viral Oncogene homolog will lead to the development of novel therapeutics.",
"title": ""
},
{
"docid": "bed9bdf4d4965610b85378f2fdbfab2a",
"text": "Application of data mining techniques to the World Wide Web, referred to as Web mining, has been the focus of several recent research projects and papers. However, there is n o established vocabulary, leading to confusion when comparing research efforts. The t e r m W e b mining has been used in two distinct ways. T h e first, called Web content mining in this paper, is the process of information discovery f rom sources across the World Wide Web. The second, called Web m a g e mining, is the process of mining f o r user browsing and access patterns. I n this paper we define W e b mining and present an overview of the various research issues, techniques, and development e f forts . W e briefly describe W E B M I N E R , a system for Web usage mining, and conclude this paper by listing research issues.",
"title": ""
},
{
"docid": "809384abcd6e402c1b30c3d2dfa75aa1",
"text": "Traditionally, psychiatry has offered clinical insights through keen behavioral observation and a deep study of emotion. With the subsequent biological revolution in psychiatry displacing psychoanalysis, some psychiatrists were concerned that the field shifted from “brainless” to “mindless.”1 Over the past 4 decades, behavioral expertise, once the strength of psychiatry, has diminished in importanceaspsychiatricresearchfocusedonpharmacology,genomics, and neuroscience, and much of psychiatric practicehasbecomeaseriesofbriefclinical interactionsfocused on medication management. In research settings, assigning a diagnosis from the Diagnostic and Statistical Manual of Mental Disorders has become a surrogate for behavioral observation. In practice, few clinicians measure emotion, cognition, or behavior with any standard, validated tools. Some recent changes in both research and practice are promising. The National Institute of Mental Health has led an effort to create a new diagnostic approach for researchers that is intended to combine biological, behavioral, and social factors to create “precision medicine for psychiatry.”2 Although this Research Domain Criteria project has been controversial, the ensuing debate has been",
"title": ""
},
{
"docid": "64d3ecaa2f9e850cb26aac0265260aff",
"text": "The case of the Frankfurt Airport attack in 2011 in which a 21-year-old man shot several U.S. soldiers, murdering 2 U.S. airmen and severely wounding 2 others, is assessed with the Terrorist Radicalization Assessment Protocol (TRAP-18). The study is based on an extensive qualitative analysis of investigation and court files focusing on the complex interconnection among offender personality, specific opportunity structures, and social contexts. The role of distal psychological factors and proximal warning behaviors in the run up to the deed are discussed. Although in this case the proximal behaviors of fixation on a cause and identification as a “soldier” for the cause developed over years, we observed only a very brief and accelerated pathway toward the violent act. This represents an important change in the demands placed upon threat assessors.",
"title": ""
}
] | scidocsrr |
931d129c91a8a84ef68653fc27a5f21d | Named entity recognition in query | [
{
"docid": "419c721c2d0a269c65fae59c1bdb273c",
"text": "Previous work on understanding user web search behavior has focused on how people search and what they are searching for, but not why they are searching. In this paper, we describe a framework for understanding the underlying goals of user searches, and our experience in using the framework to manually classify queries from a web search engine. Our analysis suggests that so-called navigational\" searches are less prevalent than generally believed while a previously unexplored \"resource-seeking\" goal may account for a large fraction of web searches. We also illustrate how this knowledge of user search goals might be used to improve future web search engines.",
"title": ""
}
] | [
{
"docid": "758a922ccba0fc70574af94de5a4c2d9",
"text": "We study unsupervised learning by developing a generative model built from progressively learned deep convolutional neural networks. The resulting generator is additionally a discriminator, capable of \"introspection\" in a sense — being able to self-evaluate the difference between its generated samples and the given training data. Through repeated discriminative learning, desirable properties of modern discriminative classifiers are directly inherited by the generator. Specifically, our model learns a sequence of CNN classifiers using a synthesis-by-classification algorithm. In the experiments, we observe encouraging results on a number of applications including texture modeling, artistic style transferring, face modeling, and unsupervised feature learning.",
"title": ""
},
{
"docid": "36e3489f2d144be867fa4f2ff05324d4",
"text": "Sentiment classification of Twitter data has been successfully applied in finding predictions in a variety of domains. However, using sentiment classification to predict stock market variables is still challenging and ongoing research. The main objective of this study is to compare the overall accuracy of two machine learning techniques (logistic regression and neural network) with respect to providing a positive, negative and neutral sentiment for stock-related tweets. Both classifiers are compared using Bigram term frequency (TF) and Unigram term frequency - inverse document term frequency (TF-IDF) weighting schemes. Classifiers are trained using a dataset that contains 42,000 automatically annotated tweets. The training dataset forms positive, negative and neutral tweets covering four technology-related stocks (Twitter, Google, Facebook, and Tesla) collected using Twitter Search API. Classifiers give the same results in terms of overall accuracy (58%). However, empirical experiments show that using Unigram TF-IDF outperforms TF.",
"title": ""
},
{
"docid": "d0c8e58e06037d065944fc59b0bd7a74",
"text": "We propose a new discrete choice model that generalizes the random utility model (RUM). We show that this model, called the Generalized Stochastic Preference (GSP) model can explain several choice phenomena that can’t be represented by a RUM. In particular, the model can easily (and also exactly) replicate some well known examples that are not RUM, as well as controlled choice experiments carried out since 1980’s that possess strong regularity violations. One of such regularity violation is the decoy effect in which the probability of choosing a product increases when a similar, but inferior product is added to the choice set. An appealing feature of the GSP is that it is non-parametric and therefore it has very high flexibility. The model has also a simple description and interpretation: it builds upon the well known representation of RUM as a stochastic preference, by allowing some additional consumer types to be non-rational.",
"title": ""
},
{
"docid": "33b405dbbe291f6ba004fa6192501861",
"text": "A quasi-static analysis of an open-ended coaxial line terminated by a semi-infinite medium on ground plane is presented in this paper. The analysis is based on a vtiriation formulation of the problem. A comparison of results obtained by this method with the experimental and the other theoretical approaches shows an excellent agreement. This analysis is expected to be helpful in the inverse problem of calculating the pertnittivity of materials in oico for a given iuput impedance of the coaxial line.",
"title": ""
},
{
"docid": "369cdea246738d5504669e2f9581ae70",
"text": "Content Security Policy (CSP) is an emerging W3C standard introduced to mitigate the impact of content injection vulnerabilities on websites. We perform a systematic, large-scale analysis of four key aspects that impact on the effectiveness of CSP: browser support, website adoption, correct configuration and constant maintenance. While browser support is largely satisfactory, with the exception of few notable issues, our analysis unveils several shortcomings relative to the other three aspects. CSP appears to have a rather limited deployment as yet and, more crucially, existing policies exhibit a number of weaknesses and misconfiguration errors. Moreover, content security policies are not regularly updated to ban insecure practices and remove unintended security violations. We argue that many of these problems can be fixed by better exploiting the monitoring facilities of CSP, while other issues deserve additional research, being more rooted into the CSP design.",
"title": ""
},
{
"docid": "c776fccb35d9aa43e965604573156c6a",
"text": "BACKGROUND\nMalnutrition in children is a major public health concern. This study aimed to determine the association between dietary diversity and stunting, underweight, wasting, and diarrhea and that between consumption of each specific food group and these nutritional and health outcomes among children.\n\n\nMETHODS\nA nationally representative household survey of 6209 children aged 12 to 59 months was conducted in Cambodia. We examined the consumption of food in the 24 hours before the survey and stunting, underweight, wasting, and diarrhea that had occurred in the preceding 2 weeks. A food variety score (ranging from 0 to 9) was calculated to represent dietary diversity.\n\n\nRESULTS\nStunting was negatively associated with dietary diversity (adjusted odd ratios [ORadj] 0.95, 95% confident interval [CI] 0.91-0.99, P = 0.01) after adjusting for socioeconomic and geographical factors. Consumption of animal source foods was associated with reduced risk of stunting (ORadj 0.69, 95% CI 0.54-0.89, P < 0.01) and underweight (ORadj 0.74, 95% CI 0.57-0.96, P = 0.03). On the other hand, the higher risk of diarrhea was significantly associated with consumption of milk products (ORadj 1.46, 95% CI 1.10-1.92, P = 0.02) and it was significantly pronounced among children from the poorer households (ORadj 1.85, 95% CI 1.17-2.93, P < 0.01).\n\n\nCONCLUSIONS\nConsumption of a diverse diet was associated with a reduction in stunting. In addition to dietary diversity, animal source food was a protective factor of stunting and underweight. Consumption of milk products was associated with an increase in the risk of diarrhea, particularly among the poorer households. Both dietary diversity and specific food types are important considerations of dietary recommendation.",
"title": ""
},
{
"docid": "aab5aaf24c421cc75fce9b657a886ab4",
"text": "This study aimed to identify the similarities and differences among half-marathon runners in relation to their performance level. Forty-eight male runners were classified into 4 groups according to their performance level in a half-marathon (min): Group 1 (n = 11, < 70 min), Group 2 (n = 13, < 80 min), Group 3 (n = 13, < 90 min), Group 4 (n = 11, < 105 min). In two separate sessions, training-related, anthropometric, physiological, foot strike pattern and spatio-temporal variables were recorded. Significant differences (p<0.05) between groups (ES = 0.55-3.16) and correlations with performance were obtained (r = 0.34-0.92) in training-related (experience and running distance per week), anthropometric (mass, body mass index and sum of 6 skinfolds), physiological (VO2max, RCT and running economy), foot strike pattern and spatio-temporal variables (contact time, step rate and length). At standardized submaximal speeds (11, 13 and 15 km·h-1), no significant differences between groups were observed in step rate and length, neither in contact time when foot strike pattern was taken into account. In conclusion, apart from training-related, anthropometric and physiological variables, foot strike pattern and step length were the only biomechanical variables sensitive to half-marathon performance, which are essential to achieve high running speeds. However, when foot strike pattern and running speeds were controlled (submaximal test), the spatio-temporal variables were similar. This indicates that foot strike pattern and running speed are responsible for spatio-temporal differences among runners of different performance level.",
"title": ""
},
{
"docid": "0946b5cb25e69f86b074ba6d736cd50f",
"text": "Increase of malware and advanced cyber-attacks are now becoming a serious problem. Unknown malware which has not determined by security vendors is often used in these attacks, and it is becoming difficult to protect terminals from their infection. Therefore, a countermeasure for after infection is required. There are some malware infection detection methods which focus on the traffic data comes from malware. However, it is difficult to perfectly detect infection only using traffic data because it imitates benign traffic. In this paper, we propose malware process detection method based on process behavior in possible infected terminals. In proposal, we investigated stepwise application of Deep Neural Networks to classify malware process. First, we train the Recurrent Neural Network (RNN) to extract features of process behavior. Second, we train the Convolutional Neural Network (CNN) to classify feature images which are generated by the extracted features from the trained RNN. The evaluation result in several image size by comparing the AUC of obtained ROC curves and we obtained AUC= 0:96 in best case.",
"title": ""
},
{
"docid": "4874f55e577bea77deed2750a9a73b30",
"text": "Best practice exemplars suggest that digital platforms play a critical role in managing supply chain activities and partnerships that generate perjormance gains for firms. However, there is Umited academic investigation on how and why information technology can create performance gains for firms in a supply chain management (SCM) context. Grant's (1996) theoretical notion of higher-order capabilities and a hierarchy of capabilities has been used in recent information systems research by Barua et al. (2004). Sambamurthy et al. (2003), and Mithas et al. (2004) to reframe the conversation from the direct performance impacts of IT resources and investments to how and why IT shapes higher-order proeess capabilities that ereate performance gains for firms. We draw on the emerging IT-enabled organizational capabilities perspective to suggest that firms that develop IT infrastrueture integration for SCM and leverage it to create a higher-order supply chain integration capability generate significant and sustainable performance gains. A research model is developed to investigate the hierarchy oflT-related capabilities and their impaet on firm performance. Data were collected from } 10 supply chain and logisties managers in manufacturing and retail organizations. Our results suggest that integrated IT infrastructures enable firms to develop the higher-order capability of supply chain process integration. This eapability enables firms to unbundle information flows from physical flows, and to share information with their supply chain partners to create information-based approaches for superior demand planning, for the staging and movement of physical products, and for streamlining voluminous and complex financial work processes. Furthermore. IT-enabled supply chain integration capability results in significant and sustained firm performance gains, especially in operational excellence and revenue growth. Managerial",
"title": ""
},
{
"docid": "ba3e9746291c2a355321125093b41c88",
"text": "Sentiment analysis of microblogs such as Twitter has recently gained a fair amount of attention. One of the simplest sentiment analysis approaches compares the words of a posting against a labeled word list, where each word has been scored for valence, — a “sentiment lexicon” or “affective word lists”. There exist several affective word lists, e.g., ANEW (Affective Norms for English Words) developed before the advent of microblogging and sentiment analysis. I wanted to examine how well ANEW and other word lists performs for the detection of sentiment strength in microblog posts in comparison with a new word list specifically constructed for microblogs. I used manually labeled postings from Twitter scored for sentiment. Using a simple word matching I show that the new word list may perform better than ANEW, though not as good as the more elaborate approach found in SentiStrength.",
"title": ""
},
{
"docid": "f119b0ee9a237ab1e9acdae19664df0f",
"text": "Recent editorials in this journal have defended the right of eminent biologist James Watson to raise the unpopular hypothesis that people of sub-Saharan African descent score lower, on average, than people of European or East Asian descent on tests of general intelligence. As those editorials imply, the scientific evidence is substantial in showing a genetic contribution to these differences. The unjustified ill treatment meted out to Watson therefore requires setting the record straight about the current state of the evidence on intelligence, race, and genetics. In this paper, we summarize our own previous reviews based on 10 categories of evidence: The worldwide distribution of test scores; the g factor of mental ability; heritability differences; brain size differences; trans-racial adoption studies; racial admixture studies; regression-to-the-mean effects; related life-history traits; human origins research; and the poverty of predictions from culture-only explanations. The preponderance of evidence demonstrates that in intelligence, brain size, and other life-history variables, East Asians average a higher IQ and larger brain than Europeans who average a higher IQ and larger brain than Africans. Further, these group differences are 50–80% heritable. These are facts, not opinions and science must be governed by data. There is no place for the ‘‘moralistic fallacy’’ that reality must conform to our social, political, or ethical desires. !c 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "afec537d95b185d8bda4e0e48799bfd3",
"text": "We propose a method for optimizing an acoustic feature extractor for anomalous sound detection (ASD). Most ASD systems adopt outlier-detection techniques because it is difficult to collect a massive amount of anomalous sound data. To improve the performance of such outlier-detection-based ASD, it is essential to extract a set of efficient acoustic features that is suitable for identifying anomalous sounds. However, the ideal property of a set of acoustic features that maximizes ASD performance has not been clarified. By considering outlier-detection-based ASD as a statistical hypothesis test, we defined optimality as an objective function that adopts Neyman-Pearson lemma; the acoustic feature extractor is optimized to extract a set of acoustic features which maximize the true positive rate under an arbitrary false positive rate. The variational auto-encoder is applied as an acoustic feature extractor and optimized to maximize the objective function. We confirmed that the proposed method improved the F-measure score from 0.02 to 0.06 points compared to those of conventional methods, and ASD results of a stereolithography 3D-printer in a real-environment show that the proposed method is effective in identifying anomalous sounds.",
"title": ""
},
{
"docid": "ab4cada23ae2142e52c98a271c128c58",
"text": "We introduce an interactive technique for manipulating simple 3D shapes based on extracting them from a single photograph. Such extraction requires understanding of the components of the shape, their projections, and relations. These simple cognitive tasks for humans are particularly difficult for automatic algorithms. Thus, our approach combines the cognitive abilities of humans with the computational accuracy of the machine to solve this problem. Our technique provides the user the means to quickly create editable 3D parts---human assistance implicitly segments a complex object into its components, and positions them in space. In our interface, three strokes are used to generate a 3D component that snaps to the shape's outline in the photograph, where each stroke defines one dimension of the component. The computer reshapes the component to fit the image of the object in the photograph as well as to satisfy various inferred geometric constraints imposed by its global 3D structure. We show that with this intelligent interactive modeling tool, the daunting task of object extraction is made simple. Once the 3D object has been extracted, it can be quickly edited and placed back into photos or 3D scenes, permitting object-driven photo editing tasks which are impossible to perform in image-space. We show several examples and present a user study illustrating the usefulness of our technique.",
"title": ""
},
{
"docid": "f27547cfee95505fe8a2f44f845ddaed",
"text": "High-performance, two-dimensional arrays of parallel-addressed InGaN blue micro-light-emitting diodes (LEDs) with individual element diameters of 8, 12, and 20 /spl mu/m, respectively, and overall dimensions 490 /spl times/490 /spl mu/m, have been fabricated. In order to overcome the difficulty of interconnecting multiple device elements with sufficient step-height coverage for contact metallization, a novel scheme involving the etching of sloped-sidewalls has been developed. The devices have current-voltage (I-V) characteristics approaching those of broad-area reference LEDs fabricated from the same wafer, and give comparable (3-mW) light output in the forward direction to the reference LEDs, despite much lower active area. The external efficiencies of the micro-LED arrays improve as the dimensions of the individual elements are scaled down. This is attributed to scattering at the etched sidewalls of in-plane propagating photons into the forward direction.",
"title": ""
},
{
"docid": "f0f88be4a2b7619f6fb5cdcca1741d1f",
"text": "BACKGROUND\nThere is no evidence from randomized trials to support a strategy of lowering systolic blood pressure below 135 to 140 mm Hg in persons with type 2 diabetes mellitus. We investigated whether therapy targeting normal systolic pressure (i.e., <120 mm Hg) reduces major cardiovascular events in participants with type 2 diabetes at high risk for cardiovascular events.\n\n\nMETHODS\nA total of 4733 participants with type 2 diabetes were randomly assigned to intensive therapy, targeting a systolic pressure of less than 120 mm Hg, or standard therapy, targeting a systolic pressure of less than 140 mm Hg. The primary composite outcome was nonfatal myocardial infarction, nonfatal stroke, or death from cardiovascular causes. The mean follow-up was 4.7 years.\n\n\nRESULTS\nAfter 1 year, the mean systolic blood pressure was 119.3 mm Hg in the intensive-therapy group and 133.5 mm Hg in the standard-therapy group. The annual rate of the primary outcome was 1.87% in the intensive-therapy group and 2.09% in the standard-therapy group (hazard ratio with intensive therapy, 0.88; 95% confidence interval [CI], 0.73 to 1.06; P=0.20). The annual rates of death from any cause were 1.28% and 1.19% in the two groups, respectively (hazard ratio, 1.07; 95% CI, 0.85 to 1.35; P=0.55). The annual rates of stroke, a prespecified secondary outcome, were 0.32% and 0.53% in the two groups, respectively (hazard ratio, 0.59; 95% CI, 0.39 to 0.89; P=0.01). Serious adverse events attributed to antihypertensive treatment occurred in 77 of the 2362 participants in the intensive-therapy group (3.3%) and 30 of the 2371 participants in the standard-therapy group (1.3%) (P<0.001).\n\n\nCONCLUSIONS\nIn patients with type 2 diabetes at high risk for cardiovascular events, targeting a systolic blood pressure of less than 120 mm Hg, as compared with less than 140 mm Hg, did not reduce the rate of a composite outcome of fatal and nonfatal major cardiovascular events. (ClinicalTrials.gov number, NCT00000620.)",
"title": ""
},
{
"docid": "66127055aff890d3f3f9d40bd1875980",
"text": "A simple, but comprehensive model of heat transfer and solidification of the continuous casting of steel slabs is described, including phenomena in the mold and spray regions. The model includes a one-dimensional (1-D) transient finite-difference calculation of heat conduction within the solidifying steel shell coupled with two-dimensional (2-D) steady-state heat conduction within the mold wall. The model features a detailed treatment of the interfacial gap between the shell and mold, including mass and momentum balances on the solid and liquid interfacial slag layers, and the effect of oscillation marks. The model predicts the shell thickness, temperature distributions in the mold and shell, thickness of the resolidified and liquid powder layers, heat-flux profiles down the wide and narrow faces, mold water temperature rise, ideal taper of the mold walls, and other related phenomena. The important effect of the nonuniform distribution of superheat is incorporated using the results from previous threedimensional (3-D) turbulent fluid-flow calculations within the liquid pool. The FORTRAN program CONID has a user-friendly interface and executes in less than 1 minute on a personal computer. Calibration of the model with several different experimental measurements on operating slab casters is presented along with several example applications. In particular, the model demonstrates that the increase in heat flux throughout the mold at higher casting speeds is caused by two combined effects: a thinner interfacial gap near the top of the mold and a thinner shell toward the bottom. This modeling tool can be applied to a wide range of practical problems in continuous casters.",
"title": ""
},
{
"docid": "5491c265a1eb7166bb174097b49d258e",
"text": "The importance of service quality for business performance has been recognized in the literature through the direct effect on customer satisfaction and the indirect effect on customer loyalty. The main objective of the study was to measure hotels' service quality performance from the customer perspective. To do so, a performance-only measurement scale (SERVPERF) was administered to customers stayed in three, four and five star hotels in Aqaba and Petra. Although the importance of service quality and service quality measurement has been recognized, there has been limited research that has addressed the structure and antecedents of the concept for the hotel industry. The clarification of the dimensions is important for managers in the hotel industry as it identifies the bundles of service attributes consumers find important. The results of the study demonstrate that SERVPERF is a reliable and valid tool to measure service quality in the hotel industry. The instrument consists of five dimensions, namely \"tangibles\", \"responsiveness\", \"empathy\", \"assurance\" and \"reliability\". Hotel customers are expecting more improved services from the hotels in all service quality dimensions. However, hotel customers have the lowest perception scores on empathy and tangibles. In the light of the results, possible managerial implications are discussed and future research subjects are recommended.",
"title": ""
},
{
"docid": "e2de8284e14cb3abbd6e3fbcfb5bc091",
"text": "In this paper, novel 2 one-dimensional (1D) Haar-like filtering techniques are proposed as a new and low calculation cost feature extraction method suitable for 3D acceleration signals based human activity recognition. Proposed filtering method is a simple difference filter with variable filter parameters. Our method holds a strong adaptability to various classification problems which no previously studied features (mean, standard deviation, etc.) possessed. In our experiment on human activity recognition, the proposed method achieved both the highest recognition accuracy of 93.91% while reducing calculation cost to 21.22% compared to previous method.",
"title": ""
},
{
"docid": "9415adaa3ec2f7873a23cc2017a2f1ee",
"text": "In this paper we introduce a new unsupervised reinforcement learning method for discovering the set of intrinsic options available to an agent. This set is learned by maximizing the number of different states an agent can reliably reach, as measured by the mutual information between the set of options and option termination states. To this end, we instantiate two policy gradient based algorithms, one that creates an explicit embedding space of options and one that represents options implicitly. The algorithms also provide an explicit measure of empowerment in a given state that can be used by an empowerment maximizing agent. The algorithm scales well with function approximation and we demonstrate the applicability of the algorithm on a range of tasks.",
"title": ""
},
{
"docid": "bf00f7d7cdcbdc3e9d082bf92eec075c",
"text": "Network software is a critical component of any distributed system. Because of its complexity, network software is commonly layered into a hierarchy of protocols, or more generally, into a protocol graph. Typical protocol graphs—including those standardized in the ISO and TCP/IP network architectures—share three important properties; the protocol graph is simple, the nodes of the graph (protocols) encapsulate complex functionality, and the topology of the graph is relatively static. This paper describes a new way to organize network software that differs from conventional architectures in all three of these properties. In our approach, the protocol graph is complex, individual protocols encapsulate a single function, and the topology of the graph is dynamic. The main contribution of this paper is to describe the ideas behind our new architecture, illustrate the advantages of using the architecture, and demonstrate that the architecture results in efficient network software.",
"title": ""
}
] | scidocsrr |
9a1282ed6142beb775735c0ab8d54c2b | Anomalies in Intertemporal Choice : Evidence and an Interpretation | [
{
"docid": "e50d156bde3479c27119231073705f70",
"text": "The economic theory of the consumer is a combination of positive and normative theories. Since it is based on a rational maximizing model it describes how consumers should choose, but it is alleged to also describe how they do choose. This paper argues that in certain well-defined situations many consumers act in a manner that is inconsistent with economic theory. In these situations economic theory will make systematic errors in predicting behavior. Kahneman and Tversky's prospect theory is proposed as the basis for an alternative descriptive theory. Topics discussed are: underweighting of opportunity costs, failure to ignore sunk costs, search behavior, choosing not to choose and regret, and precommitment and self-control.",
"title": ""
}
] | [
{
"docid": "d0a2c8cf31e1d361a7c2b306dffddc25",
"text": "During the first years of the so called fourth industrial revolution, main attempts that tried to define the main ideas and tools behind this new era of manufacturing, always end up referring to the concept of smart machines that would be able to communicate with each and with the environment. In fact, the defined cyber physical systems, connected by the internet of things, take all the attention when referring to the new industry 4.0. But, nevertheless, the new industrial environment will benefit from several tools and applications that complement the real formation of a smart, embedded system that is able to perform autonomous tasks. And most of these revolutionary concepts rest in the same background theory as artificial intelligence does, where the analysis and filtration of huge amounts of incoming information from different types of sensors, assist to the interpretation and suggestion of the most recommended course of action. For that reason, artificial intelligence science suit perfectly with the challenges that arise in the consolidation of the fourth industrial revolution.",
"title": ""
},
{
"docid": "fac86557cbb42457ccec058699f47ff8",
"text": "As mobile apps become more closely integrated into our everyday lives, mobile app interactions ought to be rapid and responsive. Unfortunately, even the basic primitive of launching a mobile app is sorrowfully sluggish: 20 seconds of delay is not uncommon even for very popular apps.\n We have designed and built FALCON to remedy slow app launch. FALCON uses contexts such as user location and temporal access patterns to predict app launches before they occur. FALCON then provides systems support for effective app-specific prelaunching, which can dramatically reduce perceived delay.\n FALCON uses novel features derived through extensive data analysis, and a novel cost-benefit learning algorithm that has strong predictive performance and low runtime overhead. Trace-based analysis shows that an average user saves around 6 seconds per app startup time with daily energy cost of no more than 2% battery life, and on average gets content that is only 3 minutes old at launch without needing to wait for content to update. FALCON is implemented as an OS modification to the Windows Phone OS.",
"title": ""
},
{
"docid": "f74dd570fd04512dc82aac9d62930992",
"text": "A compact microstrip-line ultra-wideband (UWB) bandpass filter (BPF) using the proposed stub-loaded multiple-mode resonator (MMR) is presented. This MMR is formed by loading three open-ended stubs in shunt to a simple stepped-impedance resonator in center and two symmetrical locations, respectively. By properly adjusting the lengths of these stubs, the first four resonant modes of this MMR can be evenly allocated within the 3.1-to-10.6 GHz UWB band while the fifth resonant frequency is raised above 15.0GHz. It results in the formulation of a novel UWB BPF with compact-size and widened upper-stopband by incorporating this MMR with two interdigital parallel-coupled feed lines. Simulated and measured results are found in good agreement with each other, showing improved UWB bandpass behaviors with the insertion loss lower than 0.8dB, return loss higher than 14.3dB, and maximum group delay variation less than 0.64ns in the realized UWB passband",
"title": ""
},
{
"docid": "2d9921e49e58725c9c85da02249c8d27",
"text": "Recently, the performance of Si power devices gradually approaches the physical limit, and the latest SiC device seemingly has the ability to substitute the Si insulated gate bipolar transistor (IGBT) in 1200 V class. In this paper, we demonstrate the feasibility of further improving the Si IGBT based on the new concept of CSTBTtrade. In point of view of low turn-off loss and high uniformity in device characteristics, we employ the techniques of fine-pattern and retro grade doping in the design of new device structures, resulting in significant reduction on the turn-off loss and the VGE(th) distribution, respectively.",
"title": ""
},
{
"docid": "dcee2282ea923cc0e32ae3ddd602964d",
"text": "We describe an architecture that provides a programmable display layer in order to allow the execution of custom programs on consecutive display frames. This replaces the default display behavior of repeating application frames until an update is available. The architecture is implemented using a multi-GPU system. We will show three applications of the architecture typical to VR. First, smooth motion is provided by generating intermediate display frames by per-pixel depth-image warping using 3D motion fields. Smooth motion can be beneficial for walk-throughs of large scenes. Second, we implement fine-grained latency reduction at the display frame level using a synchronized prediction of simulation objects and the viewpoint. This improves the average quality and consistency of latency reduction. Third, a crosstalk reduction algorithm for consecutive display frames is implemented, which improves the quality of stereoscopic images.",
"title": ""
},
{
"docid": "6a0ac77c7471484e3829b7a561c78723",
"text": "While the growth of business-to-consumer electronic commerce seems phenomenal in recent years, several studies suggest that a large number of individuals using the Internet have serious privacy concerns, and that winning public trust is the primary hurdle to continued growth in e-commerce. This research investigated the relative importance, when purchasing goods and services over the Web, of four common trust indices (i.e. (1) third party privacy seals, (2) privacy statements, (3) third party security seals, and (4) security features). The results indicate consumers valued security features significantly more than the three other trust indices. We also investigated the relationship between these trust indices and the consumer’s perceptions of a marketer’s trustworthiness. The findings indicate that consumers’ ratings of trustworthiness of Web merchants did not parallel experts’ evaluation of sites’ use of the trust indices. This study also examined the extent to which consumers are willing to provide private information to electronic and land merchants. The results revealed that when making the decision to provide private information, consumers rely on their perceptions of trustworthiness irrespective of whether the merchant is electronic only or land and electronic. Finally, we investigated the relative importance of three types of Web attributes: security, privacy and pleasure features (convenience, ease of use, cosmetics). Privacy and security features were of lesser importance than pleasure features when considering consumers’ intention to purchase. A discussion of the implications of these results and an agenda for future research are provided. q 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "1de568efbb57cc4e5d5ffbbfaf8d39ae",
"text": "The Insider Threat Study, conducted by the U.S. Secret Service and Carnegie Mellon University’s Software Engineering Institute CERT Program, analyzed insider cyber crimes across U.S. critical infrastructure sectors. The study indicates that management decisions related to organizational and employee performance sometimes yield unintended consequences magnifying risk of insider attack. Lack of tools for understanding insider threat, analyzing risk mitigation alternatives, and communicating results exacerbates the problem. The goal of Carnegie Mellon University’s MERIT (Management and Education of the Risk of Insider Threat) project is to develop such tools. MERIT uses system dynamics to model and analyze insider threats and produce interactive learning environments. These tools can be used by policy makers, security officers, information technology, human resources, and management to understand the problem and assess risk from insiders based on simulations of policies, cultural, technical, and procedural factors. This paper describes the MERIT insider threat model and simulation results.",
"title": ""
},
{
"docid": "f042dd6b78c65541e657c48452a1e0e4",
"text": "We present a general framework for semantic role labeling. The framework combines a machine-learning technique with an integer linear programming-based inference procedure, which incorporates linguistic and structural constraints into a global decision process. Within this framework, we study the role of syntactic parsing information in semantic role labeling. We show that full syntactic parsing information is, by far, most relevant in identifying the argument, especially, in the very first stagethe pruning stage. Surprisingly, the quality of the pruning stage cannot be solely determined based on its recall and precision. Instead, it depends on the characteristics of the output candidates that determine the difficulty of the downstream problems. Motivated by this observation, we propose an effective and simple approach of combining different semantic role labeling systems through joint inference, which significantly improves its performance. Our system has been evaluated in the CoNLL-2005 shared task on semantic role labeling, and achieves the highest F1 score among 19 participants.",
"title": ""
},
{
"docid": "bb72e4d6f967fb88473756cdcbb04252",
"text": "GF (Grammatical Framework) is a grammar formalism based on the distinction between abstract and concrete syntax. An abstract syntax is a free algebra of trees, and a concrete syntax is a mapping from trees to nested records of strings and features. These mappings are naturally defined as functions in a functional programming language; the GF language provides the customary functional programming constructs such as algebraic data types, pattern matching, and higher-order functions, which enable productive grammar writing and linguistic generalizations. Given the seemingly transformational power of the GF language, its computational properties are not obvious. However, all grammars written in GF can be compiled into a simple and austere core language, Canonical GF (CGF). CGF is well suited for implementing parsing and generation with grammars, as well as for proving properties of GF. This paper gives a concise description of both the core and the source language, the algorithm used in compiling GF to CGF, and some back-end optimizations on CGF.",
"title": ""
},
{
"docid": "e27b61e4683f2474839e75fe1caf7b49",
"text": "A novel multi-purpose integrated planar six-port front-end circuit combining both substrate integrated waveguide (SIW) technology and integrated loads is presented and demonstrated. The use of SIW technology allows a very compact circuit and very low radiation loss at millimeter frequencies. An integrated load is used to simplify the fabrication process and also reduce dimensions and cost. To validate the proposed concept, an integrated broadband six-port front-end circuit prototype was fabricated and measured. Simulation and measurement results show that the proposed six-port circuit can easily operate at 24 GHz for radar systems and also over 23–29 GHz for broadband millimetre-wave radio services.",
"title": ""
},
{
"docid": "b74ee9d63787d93411a4b37e4ed6882d",
"text": "We introduce Visual Sedimentation, a novel design metaphor for visualizing data streams directly inspired by the physical process of sedimentation. Visualizing data streams (e. g., Tweets, RSS, Emails) is challenging as incoming data arrive at unpredictable rates and have to remain readable. For data streams, clearly expressing chronological order while avoiding clutter, and keeping aging data visible, are important. The metaphor is drawn from the real-world sedimentation processes: objects fall due to gravity, and aggregate into strata over time. Inspired by this metaphor, data is visually depicted as falling objects using a force model to land on a surface, aggregating into strata over time. In this paper, we discuss how this metaphor addresses the specific challenge of smoothing the transition between incoming and aging data. We describe the metaphor's design space, a toolkit developed to facilitate its implementation, and example applications to a range of case studies. We then explore the generative capabilities of the design space through our toolkit. We finally illustrate creative extensions of the metaphor when applied to real streams of data.",
"title": ""
},
{
"docid": "68b8dd0fd648b9ad862554795935de45",
"text": "Feedforward neural networks (FFNN) have been utilised for various research in machine learning and they have gained a significantly wide acceptance. However, it was recently noted that the feedforward neural network has been functioning slower than needed. As a result, it has created critical bottlenecks among its applications. Extreme Learning Machines (ELM) were suggested as alternative learning algorithms instead of FFNN. The former is characterised by single-hidden layer feedforward neural networks (SLFN). It selects hidden nodes randomly and analytically determines their output weight. This review aims to, first, present a short mathematical explanation to explain the basic ELM. Second, because of its notable simplicity, efficiency, and remarkable generalisation performance, ELM has had wide uses in various domains, such as computer vision, biomedical engineering, control and robotics, system identification, etc. Thus, in this review, we will aim to present a complete view of these ELM advances for different applications. Finally, ELM’s strengths and weakness will be presented, along with its future perspectives.",
"title": ""
},
{
"docid": "80336a3bba9c0d7fd692b1321c0739f6",
"text": "Fine-grained image classification is to recognize hundreds of subcategories in each basic-level category. Existing methods employ discriminative localization to find the key distinctions among similar subcategories. However, existing methods generally have two limitations: (1) Discriminative localization relies on region proposal methods to hypothesize the locations of discriminative regions, which are time-consuming and the bottleneck of classification speed. (2) The training of discriminative localization depends on object or part annotations, which are heavily labor-consuming and the obstacle of marching towards practical application. It is highly challenging to address the two key limitations simultaneously, and existing methods only focus on one of them. Therefore, we propose a weakly supervised discriminative localization approach (WSDL) for fast fine-grained image classification to address the two limitations at the same time, and its main advantages are: (1) n-pathway end-to-end discriminative localization network is designed to improve classification speed, which simultaneously localizes multiple different discriminative regions for one image to boost classification accuracy, and shares full-image convolutional features generated by region proposal network to accelerate the process of generating region proposals as well as reduce the computation of convolutional operation. (2) Multi-level attention guided localization learning is proposed to localize discriminative regions with different focuses automatically, without using object and part annotations, avoiding the labor consumption. Different level attentions focus on different characteristics of the image, which are complementary and boost the classification accuracy. Both are jointly employed to simultaneously improve classification speed and eliminate dependence on object and part annotations. Compared with state-of-theart methods on 2 widely-used fine-grained image classification datasets, our WSDL approach achieves both the best accuracy and efficiency of classification.",
"title": ""
},
{
"docid": "3c014205609a8bbc2f5e216d7af30b32",
"text": "This paper proposes a novel design for variable-flux machines with Alnico magnets. The proposed design uses tangentially magnetized magnets to achieve high air-gap flux density and to avoid demagnetization by the armature field. Barriers are also inserted in the rotor to limit the armature flux and to allow the machine to utilize both reluctance and magnet torque components. An analytical procedure is first applied to obtain the initial machine design parameters. Then, several modifications are applied to the stator and rotor designs through finite-element analysis (FEA) simulations to improve machine efficiency and torque density. A prototype of the proposed design is built, and the experimental results are in good correlation with the FEA simulations, confirming the validity of the proposed machine design concept.",
"title": ""
},
{
"docid": "8d49e37ab80dae285dbf694ba1849f68",
"text": "In this paper we present a reference architecture for ETL stages of EDM and LA that works with different data formats and different extraction sites, ensuring privacy and making easier for new participants to enter into the process without demanding more computing power. Considering scenarios with a multitude of virtual environments hosting educational activities, accessible through a common infrastructure, we devised a reference model where data generated from interaction between users and among users and the environment itself, are selected, organized and stored in local “baskets”. Local baskets are then collected and grouped in a global basket. Organization resources like item modeling are used in both levels of basket construction. Using this reference upon a client-server architectural style, a reference architecture was developed and has been used to carry out a project for an official foundation linked to Brazilian Ministry of Education, involving educational data mining and sharing of 100+ higher education institutions and their respective virtual environments. In this architecture, a client-collector inside each virtual environment collects information from database and event logs. This information along with definitions obtained from item models are used to build local baskets. A synchronization protocol keeps all item models synced with client-collectors and server-collectors generating global baskets. This approach has shown improvements on ETL like: parallel processing of items, economy on storage space and bandwidth, privacy assurance, better tenacity, and good scalability.",
"title": ""
},
{
"docid": "f4e98796feefcceb86a94f978a21e5ab",
"text": "This tutorial provides a brief overview of space-time adaptive processing (STAP) for radar applications. We discuss space-time signal diversity and various forms of the adaptive processor, including reduced-dimension and reduced-rank STAP approaches. Additionally, we describe the space-time properties of ground clutter and noise-jamming, as well as essential STAP performance metrics. We conclude this tutorial with an overview of some current STAP topics: space-based radar, bistatic STAP, knowledge-aided STAP, multi-channel synthetic aperture radar and non-sidelooking array configurations.",
"title": ""
},
{
"docid": "cb4cc56b013ca35250c4d966da843d58",
"text": "Cyber-Physical System (CPS) is a system of system which integrates physical system with cyber capability in order to improve the physical performance. It is being widely used in areas closely related to national economy and people's livelihood, therefore CPS security problems have drawn a global attention and an appropriate risk assessment for CPS is in urgent need. Existing risk assessment for CPS always focuses on the reliability assessment, using Probability Risk Assessment (PRA). In this way, the assessment of physical part and cyber part is isolated as PRA is difficult to quantify the risks from the cyber world. Methodologies should be developed to assess the both parts as a whole system, considering this integrated system has a high coupling between the physical layer and cyber layer. In this paper, a risk assessment idea for CPS with the use of attack tree is proposed. Firstly, it presents a detailed description about the threat and vulnerability attributes of each leaf in an attack tree and tells how to assign value to its threat and vulnerability vector. Then this paper focuses on calculating the threat and vulnerability vector of an attack path with the use of the leaf vector values. Finally, damage is taken into account and an idea to calculate the risk value of the whole attack path is given.",
"title": ""
},
{
"docid": "3b1a7539000a8ddabdaa4888b8bb1adc",
"text": "This paper presents evaluations among the most usual maximum power point tracking (MPPT) techniques, doing meaningful comparisons with respect to the amount of energy extracted from the photovoltaic (PV) panel [tracking factor (TF)] in relation to the available power, PV voltage ripple, dynamic response, and use of sensors. Using MatLab/Simulink and dSPACE platforms, a digitally controlled boost dc-dc converter was implemented and connected to an Agilent Solar Array E4350B simulator in order to verify the analytical procedures. The main experimental results are presented for conventional MPPT algorithms and improved MPPT algorithms named IC based on proportional-integral (PI) and perturb and observe based on PI. Moreover, the dynamic response and the TF are also evaluated using a user-friendly interface, which is capable of online program power profiles and computes the TF. Finally, a typical daily insulation is used in order to verify the experimental results for the main PV MPPT methods.",
"title": ""
},
{
"docid": "d449a4d183c2a3e1905935f624d684d3",
"text": "This paper introduces the approach CBRDIA (Case-based Reasoning for Document Invoice Analysis) which uses the principles of case-based reasoning to analyze, recognize and interpret invoices. Two CBR cycles are performed sequentially in CBRDIA. The first one consists in checking whether a similar document has already been processed, which makes the interpretation of the current one easy. The second cycle works if the first one fails. It processes the document by analyzing and interpreting its structuring elements (adresses, amounts, tables, etc) one by one. The CBR cycles allow processing documents from both knonwn or unknown classes. Applied on 923 invoices, CBRDIA reaches a recognition rate of 85,22% for documents of known classes and 74,90% for documents of unknown classes.",
"title": ""
},
{
"docid": "dd2819d0413a1d41c602aef4830888a4",
"text": "Presented here is a fast method that combines curve matching techniques with a surface matching algorithm to estimate the positioning and respective matching error for the joining of three-dimensional fragmented objects. Furthermore, this paper describes how multiple joints are evaluated and how the broken artefacts are clustered and transformed to form potential solutions of the assemblage problem. q 2003 Elsevier Science B.V. All rights reserved.",
"title": ""
}
] | scidocsrr |
e270440b45d2810de5d62df97acdea83 | Subjective and Objective Quality-of-Experience of Adaptive Video Streaming | [
{
"docid": "a4f3bb1e91fb996858ff438487476217",
"text": "Digital video data, stored in video databases and distributed through communication networks, is subject to various kinds of distortions during acquisition, compression, processing, transmission, and reproduction. For example, lossy video compression techniques, which are almost always used to reduce the bandwidth needed to store or transmit video data, may degrade the quality during the quantization process. For another instance, the digital video bitstreams delivered over error-prone channels, such as wireless channels, may be received imperfectly due to the impairment occurred during transmission. Package-switched communication networks, such as the Internet, can cause loss or severe delay of received data packages, depending on the network conditions and the quality of services. All these transmission errors may result in distortions in the received video data. It is therefore imperative for a video service system to be able to realize and quantify the video quality degradations that occur in the system, so that it can maintain, control and possibly enhance the quality of the video data. An effective image and video quality metric is crucial for this purpose.",
"title": ""
}
] | [
{
"docid": "9ce3f1a67d23425e3920670ac5a1f9b4",
"text": "We examine the limits of consistency in highly available and fault-tolerant distributed storage systems. We introduce a new property—convergence—to explore the these limits in a useful manner. Like consistency and availability, convergence formalizes a fundamental requirement of a storage system: writes by one correct node must eventually become observable to other connected correct nodes. Using convergence as our driving force, we make two additional contributions. First, we close the gap between what is known to be impossible (i.e. the consistency, availability, and partition-tolerance theorem) and known systems that are highly-available but that provide weaker consistency such as causal. Specifically, in an asynchronous system, we show that natural causal consistency, a strengthening of causal consistency that respects the real-time ordering of operations, provides a tight bound on consistency semantics that can be enforced without compromising availability and convergence. In an asynchronous system with Byzantine-failures, we show that it is impossible to implement many of the recently introduced forking-based consistency semantics without sacrificing either availability or convergence. Finally, we show that it is not necessary to compromise availability or convergence by showing that there exist practically useful semantics that are enforceable by available, convergent, and Byzantine-fault tolerant systems.",
"title": ""
},
{
"docid": "ad868d09ec203c2080e0f8458daccf91",
"text": "We present empirical measurements of the packet delivery performance of the latest sensor platforms: Micaz and Telos motes. In this article, we present observations that have implications to a set of common assumptions protocol designers make while designing sensornet protocols—specifically—the MAC and network layer protocols. We first distill these common assumptions in to a conceptual model and show how our observations support or dispute these assumptions. We also present case studies of protocols that do not make these assumptions. Understanding the implications of these observations to the conceptual model can improve future protocol designs.",
"title": ""
},
{
"docid": "f8330ca9f2f4c05c26d679906f65de04",
"text": "In recent years, VDSL2 standard has been gaining popularity as a high speed network access technology to deliver triple play services of video, voice and data. These services require strict quality-of-experience (QoE) and quality-of-services (QoS) on DSL systems operating in an impulse noise environment. The DSL systems, in-turn, are affected severely in the presence of impulse noise in the telephone line. Therefore to improve upon the requirements of IPTV under the impulse noise conditions the standard body has been evaluating various proposals to mitigate and reduce the error rates. This paper lists and qualitatively compares various initiatives that have been suggested in the VDSL2 standard body to improve the protection of VDSL2 services against impulse noise.",
"title": ""
},
{
"docid": "c6c4edf88c38275e82aa73a11ef3a006",
"text": "In this paper, we propose a new concept for understanding the role of algorithms in daily life: algorithmic authority. Algorithmic authority is the legitimate power of algorithms to direct human action and to impact which information is considered true. We use this concept to examine the culture of users of Bit coin, a crypto-currency and payment platform. Through Bit coin, we explore what it means to trust in algorithms. Our study utilizes interview and survey data. We found that Bit coin users prefer algorithmic authority to the authority of conventional institutions, which they see as untrustworthy. However, we argue that Bit coin users do not have blind faith in algorithms, rather, they acknowledge the need for mediating algorithmic authority with human judgment. We examine the tension between members of the Bit coin community who would prefer to integrate Bit coin with existing institutions and those who would prefer to resist integration.",
"title": ""
},
{
"docid": "72eceddfa08e73739022df7c0dc89a3a",
"text": "The empirical mode decomposition (EMD) proposed by Huang et al. in 1998 shows remarkably effective in analyzing nonlinear signals. It adaptively represents nonstationary signals as sums of zero-mean amplitude modulation-frequency modulation (AM-FM) components by iteratively conducting the sifting process. How to determine the boundary conditions of the cubic spline when constructing the envelopes of data is the critical issue of the sifting process. A simple bound hit process technique is presented in this paper which constructs two periodic series from the original data by even and odd extension and then builds the envelopes using cubic spline with periodic boundary condition. The EMD is conducted fluently without any assumptions of the processed data by this approach. An example is presented to pick out the weak modulation of internal waves from an Envisat ASAR image by EMD with the boundary process technique",
"title": ""
},
{
"docid": "535934dc80c666e0d10651f024560d12",
"text": "The following individuals read and discussed the thesis submitted by student Mindy Elizabeth Bennett, and they also evaluated her presentation and response to questions during the final oral examination. They found that the student passed the final oral examination, and that the thesis was satisfactory for a master's degree and ready for any final modifications that they explicitly required. iii ACKNOWLEDGEMENTS During my time of study at Boise State University, I have received an enormous amount of academic support and guidance from a number of different individuals. I would like to take this opportunity to thank everyone who has been instrumental in the completion of this degree. Without the continued support and guidance of these individuals, this accomplishment would not have been possible. I would also like to thank the following individuals for generously giving their time to provide me with the help and support needed to complete this study. Without them, the completion of this study would not have been possible. Breast hypertrophy is a common medical condition whose morbidity has increased over recent decades. Symptoms of breast hypertrophy often include musculoskeletal pain in the neck, back and shoulders, and numerous psychosocial health burdens. To date, reduction mammaplasty (RM) is the only treatment shown to significantly reduce the severity of the symptoms associated with breast hypertrophy. However, due to a lack of scientific evidence in the medical literature justifying the medical necessity of RM, insurance companies often deny requests for coverage of this procedure. Therefore, the purpose of this study is to investigate biomechanical differences in the upper body of women with larger breast sizes in order to provide scientific evidence of the musculoskeletal burdens of breast hypertrophy to the medical community Twenty-two female subjects (average age 25.90, ± 5.47 years) who had never undergone or been approved for breast augmentation surgery, were recruited to participate in this study. Kinematic data of the head, thorax, pelvis and scapula was collected during static trials and during each of four different tasks of daily living. Surface electromyography (sEMG) data from the Midcervical (C-4) Paraspinal, Upper Trapezius, Lower Trapezius, Serratus Anterior, and Erector Spinae muscles were recorded in the same activities. Maximum voluntary contractions (MVC) were used to normalize the sEMG data, and %MVC during each task in the protocol was analyzed. Kinematic data from the tasks of daily living were normalized to average static posture data for each subject. Subjects were …",
"title": ""
},
{
"docid": "76d22feb7da3dbc14688b0d999631169",
"text": "Guilt proneness is a personality trait indicative of a predisposition to experience negative feelings about personal wrongdoing, even when the wrongdoing is private. It is characterized by the anticipation of feeling bad about committing transgressions rather than by guilty feelings in a particular moment or generalized guilty feelings that occur without an eliciting event. Our research has revealed that guilt proneness is an important character trait because knowing a person’s level of guilt proneness helps us to predict the likelihood that they will behave unethically. For example, online studies of adults across the U.S. have shown that people who score high in guilt proneness (compared to low scorers) make fewer unethical business decisions, commit fewer delinquent behaviors, and behave more honestly when they make economic decisions. In the workplace, guilt-prone employees are less likely to engage in counterproductive behaviors that harm their organization.",
"title": ""
},
{
"docid": "61615f5aefb0aa6de2dd1ab207a966d5",
"text": "Wikipedia provides an enormous amount of background knowledge to reason about the semantic relatedness between two entities. We propose Wikipedia-based Distributional Semantics for Entity Relatedness (DiSER), which represents the semantics of an entity by its distribution in the high dimensional concept space derived from Wikipedia. DiSER measures the semantic relatedness between two entities by quantifying the distance between the corresponding high-dimensional vectors. DiSER builds the model by taking the annotated entities only, therefore it improves over existing approaches, which do not distinguish between an entity and its surface form. We evaluate the approach on a benchmark that contains the relative entity relatedness scores for 420 entity pairs. Our approach improves the accuracy by 12% on state of the art methods for computing entity relatedness. We also show an evaluation of DiSER in the Entity Disambiguation task on a dataset of 50 sentences with highly ambiguous entity mentions. It shows an improvement of 10% in precision over the best performing methods. In order to provide the resource that can be used to find out all the related entities for a given entity, a graph is constructed, where the nodes represent Wikipedia entities and the relatedness scores are reflected by the edges. Wikipedia contains more than 4.1 millions entities, which required efficient computation of the relatedness scores between the corresponding 17 trillions of entity-pairs.",
"title": ""
},
{
"docid": "b6dbccc6b04c282ca366eddea77d0107",
"text": "Current methods for annotating and interpreting human genetic variation tend to exploit a single information type (for example, conservation) and/or are restricted in scope (for example, to missense changes). Here we describe Combined Annotation–Dependent Depletion (CADD), a method for objectively integrating many diverse annotations into a single measure (C score) for each variant. We implement CADD as a support vector machine trained to differentiate 14.7 million high-frequency human-derived alleles from 14.7 million simulated variants. We precompute C scores for all 8.6 billion possible human single-nucleotide variants and enable scoring of short insertions-deletions. C scores correlate with allelic diversity, annotations of functionality, pathogenicity, disease severity, experimentally measured regulatory effects and complex trait associations, and they highly rank known pathogenic variants within individual genomes. The ability of CADD to prioritize functional, deleterious and pathogenic variants across many functional categories, effect sizes and genetic architectures is unmatched by any current single-annotation method.",
"title": ""
},
{
"docid": "424239765383edd8079d90f63b3fde1d",
"text": "The availability of huge amounts of medical data leads to the need for powerful data analysis tools to extract useful knowledge. Researchers have long been concerned with applying statistical and data mining tools to improve data analysis on large data sets. Disease diagnosis is one of the applications where data mining tools are proving successful results. Heart disease is the leading cause of death all over the world in the past ten years. Several researchers are using statistical and data mining tools to help health care professionals in the diagnosis of heart disease. Using single data mining technique in the diagnosis of heart disease has been comprehensively investigated showing acceptable levels of accuracy. Recently, researchers have been investigating the effect of hybridizing more than one technique showing enhanced results in the diagnosis of heart disease. However, using data mining techniques to identify a suitable treatment for heart disease patients has received less attention. This paper identifies gaps in the research on heart disease diagnosis and treatment and proposes a model to systematically close those gaps to discover if applying data mining techniques to heart disease treatment data can provide as reliable performance as that achieved in diagnosing heart disease.",
"title": ""
},
{
"docid": "bb49674d0a1f36e318d27525b693e51d",
"text": "prevent attackers from gaining control of the system using well established techniques such as; perimeter-based fire walls, redundancy and replications, and encryption. However, given sufficient time and resources, all these methods can be defeated. Moving Target Defense (MTD), is a defensive strategy that aims to reduce the need to continuously fight against attacks by disrupting attackers gain-loss balance. We present Mayflies, a bio-inspired generic MTD framework for distributed systems on virtualized cloud platforms. The framework enables systems designed to defend against attacks for their entire runtime to systems that avoid attacks in time intervals. We discuss the design, algorithms and the implementation of the framework prototype. We illustrate the prototype with a quorum-based Byzantime Fault Tolerant system and report the preliminary results.",
"title": ""
},
{
"docid": "b847446c0babb9e8ebb8e8d4c50a7023",
"text": "This paper introduces a general technique, called LABurst, for identifying key moments, or moments of high impact, in social media streams without the need for domain-specific information or seed keywords. We leverage machine learning to model temporal patterns around bursts in Twitter's unfiltered public sample stream and build a classifier to identify tokens experiencing these bursts. We show LABurst performs competitively with existing burst detection techniques while simultaneously providing insight into and detection of unanticipated moments. To demonstrate our approach's potential, we compare two baseline event-detection algorithms with our language-agnostic algorithm to detect key moments across three major sporting competitions: 2013 World Series, 2014 Super Bowl, and 2014 World Cup. Our results show LABurst outperforms a time series analysis baseline and is competitive with a domain-specific baseline even though we operate without any domain knowledge. We then go further by transferring LABurst's models learned in the sports domain to the task of identifying earthquakes in Japan and show our method detects large spikes in earthquake-related tokens within two minutes of the actual event.",
"title": ""
},
{
"docid": "d3214d24911a5e42855fd1a53516d30b",
"text": "This paper extends the face detection framework proposed by Viola and Jones 2001 to handle profile views and rotated faces. As in the work of Rowley et al 1998. and Schneiderman et al. 2000, we build different detectors for different views of the face. A decision tree is then trained to determine the viewpoint class (such as right profile or rotated 60 degrees) for a given window of the image being examined. This is similar to the approach of Rowley et al. 1998. The appropriate detector for that viewpoint can then be run instead of running all detectors on all windows. This technique yields good results and maintains the speed advantage of the Viola-Jones detector. Shown as a demo at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 18, 2003 This work may not be copied or reproduced in whole or in part for any commercial purpose. Permission to copy in whole or in part without payment of fee is granted for nonprofit educational and research purposes provided that all such whole or partial copies include the following: a notice that such copying is by permission of Mitsubishi Electric Research Laboratories, Inc.; an acknowledgment of the authors and individual contributions to the work; and all applicable portions of the copyright notice. Copying, reproduction, or republishing for any other purpose shall require a license with payment of fee to Mitsubishi Electric Research Laboratories, Inc. All rights reserved. Copyright c ©Mitsubishi Electric Research Laboratories, Inc., 2003 201 Broadway, Cambridge, Massachusetts 02139 Publication History:– 1. First printing, TR2003-96, July 2003 Fast Multi-view Face Detection Michael J. Jones Paul Viola [email protected] [email protected] Mitsubishi Electric Research Laboratory Microsoft Research 201 Broadway One Microsoft Way Cambridge, MA 02139 Redmond, WA 98052",
"title": ""
},
{
"docid": "4592c8f5758ccf20430dbec02644c931",
"text": "Taylor & Francis makes every effort to ensure the accuracy of all the information (the “Content”) contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content.",
"title": ""
},
{
"docid": "c726dc2218fa4d286aa10d827b427871",
"text": "Acquisition of the intestinal microbiota begins at birth, and a stable microbial community develops from a succession of key organisms. Disruption of the microbiota during maturation by low-dose antibiotic exposure can alter host metabolism and adiposity. We now show that low-dose penicillin (LDP), delivered from birth, induces metabolic alterations and affects ileal expression of genes involved in immunity. LDP that is limited to early life transiently perturbs the microbiota, which is sufficient to induce sustained effects on body composition, indicating that microbiota interactions in infancy may be critical determinants of long-term host metabolic effects. In addition, LDP enhances the effect of high-fat diet induced obesity. The growth promotion phenotype is transferrable to germ-free hosts by LDP-selected microbiota, showing that the altered microbiota, not antibiotics per se, play a causal role. These studies characterize important variables in early-life microbe-host metabolic interaction and identify several taxa consistently linked with metabolic alterations. PAPERCLIP:",
"title": ""
},
{
"docid": "a4b123705dda7ae3ac7e9e88a50bd64a",
"text": "We present a novel approach to video segmentation using multiple object proposals. The problem is formulated as a minimization of a novel energy function defined over a fully connected graph of object proposals. Our model combines appearance with long-range point tracks, which is key to ensure robustness with respect to fast motion and occlusions over longer video sequences. As opposed to previous approaches based on object proposals, we do not seek the best per-frame object hypotheses to perform the segmentation. Instead, we combine multiple, potentially imperfect proposals to improve overall segmentation accuracy and ensure robustness to outliers. Overall, the basic algorithm consists of three steps. First, we generate a very large number of object proposals for each video frame using existing techniques. Next, we perform an SVM-based pruning step to retain only high quality proposals with sufficiently discriminative power. Finally, we determine the fore-and background classification by solving for the maximum a posteriori of a fully connected conditional random field, defined using our novel energy function. Experimental results on a well established dataset demonstrate that our method compares favorably to several recent state-of-the-art approaches.",
"title": ""
},
{
"docid": "9b7ff8a7dec29de5334f3de8d1a70cc3",
"text": "The paper introduces a complete offline programming toolbox for remote laser welding (RLW) which provides a semi-automated method for computing close-to-optimal robot programs. A workflow is proposed for the complete planning process, and new models and algorithms are presented for solving the optimization problems related to each step of the workflow: the sequencing of the welding tasks, path planning, workpiece placement, calculation of inverse kinematics and the robot trajectory, as well as for generating the robot program code. The paper summarizes the results of an industrial case study on the assembly of a car door using RLW technology, which illustrates the feasibility and the efficiency of the proposed approach.",
"title": ""
},
{
"docid": "178ba744f5e9df6c5a7a704949ad8ac1",
"text": "This software paper describes ‘Stylometry with R’ (stylo), a flexible R package for the highlevel analysis of writing style in stylometry. Stylometry (computational stylistics) is concerned with the quantitative study of writing style, e.g. authorship verification, an application which has considerable potential in forensic contexts, as well as historical research. In this paper we introduce the possibilities of stylo for computational text analysis, via a number of dummy case studies from English and French literature. We demonstrate how the package is particularly useful in the exploratory statistical analysis of texts, e.g. with respect to authorial writing style. Because stylo provides an attractive graphical user interface for high-level exploratory analyses, it is especially suited for an audience of novices, without programming skills (e.g. from the Digital Humanities). More experienced users can benefit from our implementation of a series of standard pipelines for text processing, as well as a number of similarity metrics.",
"title": ""
},
{
"docid": "81bbacc372c1f67e218895bcb046651d",
"text": "Sensor-based activity recognition seeks the profound high-level knowledge about human activities from multitudes of low-level sensor readings. Conventional pattern recognition approaches have made tremendous progress in the past years. However, those methods often heavily rely on heuristic hand-crafted feature extraction, which could hinder their generalization performance. Additionally, existing methods are undermined for unsupervised and incremental learning tasks. Recently, the recent advancement of deep learning makes it possible to perform automatic high-level feature extraction thus achieves promising performance in many areas. Since then, deep learning based methods have been widely adopted for the sensor-based activity recognition tasks. This paper surveys the recent advance of deep learning based sensor-based activity recognition. We summarize existing literature from three aspects: sensor modality, deep model, and application. We also present detailed insights on existing work and propose grand challenges for future research.",
"title": ""
},
{
"docid": "e658507a3ed6c52d27c5db618f9fa8cb",
"text": "Accident prediction is one of the most critical aspects of road safety, whereby an accident can be predicted before it actually occurs and precautionary measures taken to avoid it. For this purpose, accident prediction models are popular in road safety analysis. Artificial intelligence (AI) is used in many real world applications, especially where outcomes and data are not same all the time and are influenced by occurrence of random changes. This paper presents a study on the existing approaches for the detection of unsafe driving patterns of a vehicle used to predict accidents. The literature covered in this paper is from the past 10 years, from 2004 to 2014. AI techniques are surveyed for the detection of unsafe driving style and crash prediction. A number of statistical methods which are used to predict the accidents by using different vehicle and driving features are also covered in this paper. The approaches studied in this paper are compared in terms of datasets and prediction performance. We also provide a list of datasets and simulators available for the scientific community to conduct research in the subject domain. The paper also identifies some of the critical open questions that need to be addressed for road safety using AI techniques.",
"title": ""
}
] | scidocsrr |
34105146cfbde5353c1ec63e2112fcfb | Multi-Label Learning with Posterior Regularization | [
{
"docid": "b796a957545aa046bad14d44c4578700",
"text": "Image annotation datasets are becoming larger and larger, with tens of millions of images and tens of thousands of possible annotations. We propose a strongly performing method that scales to such datasets by simultaneously learning to optimize precision at k of the ranked list of annotations for a given image and learning a low-dimensional joint embedding space for both images and annotations. Our method both outperforms several baseline methods and, in comparison to them, is faster and consumes less memory. We also demonstrate how our method learns an interpretable model, where annotations with alternate spellings or even languages are close in the embedding space. Hence, even when our model does not predict the exact annotation given by a human labeler, it often predicts similar annotations, a fact that we try to quantify by measuring the newly introduced “sibling” precision metric, where our method also obtains excellent results.",
"title": ""
},
{
"docid": "f59a7b518f5941cd42086dc2fe58fcea",
"text": "This paper contributes a novel algorithm for effective and computationally efficient multilabel classification in domains with large label sets L. The HOMER algorithm constructs a Hierarchy Of Multilabel classifiERs, each one dealing with a much smaller set of labels compared to L and a more balanced example distribution. This leads to improved predictive performance along with linear training and logarithmic testing complexities with respect to |L|. Label distribution from parent to children nodes is achieved via a new balanced clustering algorithm, called balanced k means.",
"title": ""
}
] | [
{
"docid": "4d7c0222317fbd866113e1a244a342f3",
"text": "A simple method of \"tuning up\" a multiple-resonant-circuit filter quickly and exactly is demonstrated. The method may be summarized as follows: Very loosely couple a detector to the first resonator of the filter; then, proceeding in consecutive order, tune all odd-numbered resonators for maximum detector output, and all even-numbered resonators for minimum detector output (always making sure that the resonator immediately following the one to be resonated is completely detuned). Also considered is the correct adjustment of the two other types of constants in a filter. Filter constants can always be reduced to only three fundamental types: f0, dr(1/Qr), and Kr(r+1). This is true whether a lumped-element 100-kc filter or a distributed-element 5,000-mc unit is being considered. dr is adjusted by considering the rth resonator as a single-tuned circuit (all other resonators completely detuned) and setting the bandwidth between the 3-db-down-points to the required value. Kr(r+1) is adjusted by considering the rth and (r+1)th adjacent resonators as a double-tuned circuit (all other resonators completely detuned) and setting the bandwidth between the resulting response peaks to the required value. Finally, all the required values for K and Q are given for an n-resonant-circuit filter that will produce the response (Vp/V)2=1 +(Δf/Δf3db)2n.",
"title": ""
},
{
"docid": "36ed684e39877873407efb809f3cd1dc",
"text": "A methodology to obtain wideband scattering diffusion based on periodic artificial surfaces is presented. The proposed surfaces provide scattering towards multiple propagation directions across an extremely wide frequency band. They comprise unit cells with an optimized geometry and arranged in a periodic lattice characterized by a repetition period larger than one wavelength which induces the excitation of multiple Floquet harmonics. The geometry of the elementary unit cell is optimized in order to minimize the reflection coefficient of the fundamental Floquet harmonic over a wide frequency band. The optimization of FSS geometry is performed through a genetic algorithm in conjunction with periodic Method of Moments. The design method is verified through full-wave simulations and measurements. The proposed solution guarantees very good performance in terms of bandwidth-thickness ratio and removes the need of a high-resolution printing process.",
"title": ""
},
{
"docid": "4dcdb2520ec5f9fc9c32f2cbb343808c",
"text": "Shannon’s mathematical theory of communication defines fundamental limits on how much information can be transmitted between the different components of any man-made or biological system. This paper is an informal but rigorous introduction to the main ideas implicit in Shannon’s theory. An annotated reading list is provided for further reading.",
"title": ""
},
{
"docid": "d395193924613f6818511650d24cf9ae",
"text": "Assortment planning of substitutable products is a major operational issue that arises in many industries, such as retailing, airlines and consumer electronics. We consider a single-period joint assortment and inventory planning problem under dynamic substitution with stochastic demands, and provide complexity and algorithmic results as well as insightful structural characterizations of near-optimal solutions for important variants of the problem. First, we show that the assortment planning problem is NP-hard even for a very simple consumer choice model, where each customer is willing to buy only two products. In fact, we show that the problem is hard to approximate within a factor better than 1− 1/e. Secondly, we show that for several interesting and practical choice models, one can devise a polynomial-time approximation scheme (PTAS), i.e., the problem can be solved efficiently to within any level of accuracy. To the best of our knowledge, this is the first efficient algorithm with provably near-optimal performance guarantees for assortment planning problems under dynamic substitution. Quite surprisingly, the algorithm we propose stocks only a constant number of different product types; this constant depends only on the desired accuracy level. This provides an important managerial insight that assortments with a relatively small number of product types can obtain almost all of the potential revenue. Furthermore, we show that our algorithm can be easily adapted for more general choice models, and present numerical experiments to show that it performs significantly better than other known approaches.",
"title": ""
},
{
"docid": "2e5981a41d13ee2d588ee0e9fe04e1ec",
"text": "Malicious software (malware) has been extensively employed for illegal purposes and thousands of new samples are discovered every day. The ability to classify samples with similar characteristics into families makes possible to create mitigation strategies that work for a whole class of programs. In this paper, we present a malware family classification approach using VGG16 deep neural network’s bottleneck features. Malware samples are represented as byteplot grayscale images and the convolutional layers of a VGG16 deep neural network pre-trained on the ImageNet dataset is used for bottleneck features extraction. These features are used to train a SVM classifier for the malware family classification task. The experimental results on a dataset comprising 10,136 samples from 20 different families showed that our approach can effectively be used to classify malware families with an accuracy of 92.97%, outperforming similar approaches proposed in the literature which require feature engineering and considerable domain expertise.",
"title": ""
},
{
"docid": "a5ee673c895bac1a616bb51439461f5f",
"text": "OBJECTIVES\nTo summarise logistical aspects of recently completed systematic reviews that were registered in the International Prospective Register of Systematic Reviews (PROSPERO) registry to quantify the time and resources required to complete such projects.\n\n\nDESIGN\nMeta-analysis.\n\n\nDATA SOURCES AND STUDY SELECTION\nAll of the 195 registered and completed reviews (status from the PROSPERO registry) with associated publications at the time of our search (1 July 2014).\n\n\nDATA EXTRACTION\nAll authors extracted data using registry entries and publication information related to the data sources used, the number of initially retrieved citations, the final number of included studies, the time between registration date to publication date and number of authors involved for completion of each publication. Information related to funding and geographical location was also recorded when reported.\n\n\nRESULTS\nThe mean estimated time to complete the project and publish the review was 67.3 weeks (IQR=42). The number of studies found in the literature searches ranged from 27 to 92 020; the mean yield rate of included studies was 2.94% (IQR=2.5); and the mean number of authors per review was 5, SD=3. Funded reviews took significantly longer to complete and publish (mean=42 vs 26 weeks) and involved more authors and team members (mean=6.8 vs 4.8 people) than those that did not report funding (both p<0.001).\n\n\nCONCLUSIONS\nSystematic reviews presently take much time and require large amounts of human resources. In the light of the ever-increasing volume of published studies, application of existing computing and informatics technology should be applied to decrease this time and resource burden. We discuss recently published guidelines that provide a framework to make finding and accessing relevant literature less burdensome.",
"title": ""
},
{
"docid": "3613ae9cfcadee0053a270fe73c6e069",
"text": "Depth-map merging approaches have become more and more popular in multi-view stereo (MVS) because of their flexibility and superior performance. The quality of depth map used for merging is vital for accurate 3D reconstruction. While traditional depth map estimation has been performed in a discrete manner, we suggest the use of a continuous counterpart. In this paper, we first integrate silhouette information and epipolar constraint into the variational method for continuous depth map estimation. Then, several depth candidates are generated based on a multiple starting scales (MSS) framework. From these candidates, refined depth maps for each view are synthesized according to path-based NCC (normalized cross correlation) metric. Finally, the multiview depth maps are merged to produce 3D models. Our algorithm excels at detail capture and produces one of the most accurate results among the current algorithms for sparse MVS datasets according to the Middlebury benchmark. Additionally, our approach shows its outstanding robustness and accuracy in free-viewpoint video scenario.",
"title": ""
},
{
"docid": "eb9459d0eb18f0e49b3843a6036289f9",
"text": "Experimental research has had a long tradition in psychology and education. When psychology emerged as an infant science during the 1900s, it modeled its research methods on the established paradigms of the physical sciences, which for centuries relied on experimentation to derive principals and laws. Subsequent reliance on experimental approaches was strengthened by behavioral approaches to psychology and education that predominated during the first half of this century. Thus, usage of experimentation in educational technology over the past 40 years has been influenced by developments in theory and research practices within its parent disciplines. In this chapter, we examine practices, issues, and trends related to the application of experimental research methods in educational technology. The purpose is to provide readers with sufficient background to understand and evaluate experimental designs encountered in the literature and to identify designs that will effectively address questions of interest in their own research. In an introductory section, we define experimental research, differentiate it from alternative approaches, and identify important concepts in its use (e.g., internal vs. external validity). We also suggest procedures for conducting experimental studies and publishing them in educational technology research journals. Next, we analyze uses of experimental methods by instructional researchers, extending the analyses of three decades ago by Clark and Snow (1975). In the concluding section, we turn to issues in using experimental research in educational technology, to include balancing internal and external validity, using multiple outcome measures to assess learning processes and products, using item responses vs. aggregate scores as dependent variables, reporting effect size as a complement to statistical significance, and media replications vs. media comparisons.",
"title": ""
},
{
"docid": "d2c4693856ae88c3c49b5fc7c4a7baf7",
"text": "In Jesuit universities, laypersons, who come from the same or different faith backgrounds or traditions, are considered as collaborators in mission. The Jesuits themselves support the contributions of the lay partners in realizing the mission of the Society of Jesus and recognize the important role that they play in education. This study aims to investigate and generate particular notions and understandings of lived experiences of being a lay partner in Jesuit universities in the Philippines, particularly those involved in higher education. Using the qualitative approach as introduced by grounded theorist Barney Glaser, the lay partners’ concept of being a partner, as lived in higher education, is generated systematically from the data collected in the field primarily through in-depth interviews, field notes and observations. Glaser’s constant comparative method of analysis of data is used going through the phases of open coding, theoretical coding, and selective coding from memoing to theoretical sampling to sorting and then writing. In this study, Glaser’s grounded theory as a methodology will provide a substantial insight into and articulation of the layperson’s actual experience of being a partner of the Jesuits in education. Such articulation provides a phenomenological approach or framework to an understanding of the meaning and core characteristics of JesuitLay partnership in Jesuit educational institution of higher learning in the country. This study is expected to provide a framework or model for lay partnership in academic institutions that have the same practice of having lay partners in mission. Keywords—Grounded theory, Jesuit mission in higher education, lay partner, lived experience. I. BACKGROUND AND INTRODUCTION HE Second Vatican Council document of the Roman Catholic Church establishes and defines the vocation and mission of lay members of the Church. It says that regardless of status, “all laypersons are called and obliged to engage in the apostolate of being laborers in the vineyard of the Lord, the world, to serve the Kingdom of God” [1, par.16]. Christifideles Laici, a post-synodal apostolic exhortation of Pope John Paul II, renews and reaffirms this same apostolic role of lay people in the Catholic Church saying that “[t]he call is a concern not only of Pastors, clergy, and men and women religious. The call is addressed to everyone: lay people as well are personally called by the Lord, from whom they receive a mission on behalf of the Church and the world” [2, par.2]. Catholic universities, “being born from the heart of the Church” [2, p.1] follow the same orientation and mission in affirming the apostolic roles that lay men and women could exercise in sharing with the works of the church on deepening faith and spirituality [3, par.25]. Janet Badong-Badilla is with the De La Salle University, Philippines (email: [email protected]). In Jesuit Catholic universities, the laypersons’ sense of mission and passion is recognized. The Jesuits say that “the call they have received is a call shared by them all together, Jesuits and lay” [4, par. 3]. Lay-Jesuit collaboration is in fact among the 28 distinctive characteristics of Jesuit education (CJE) and a positive goal that a Jesuit school tries to achieve in response to the Second Vatican Council and to recent General Congregations of the Society of Jesus [5]. In the Philippines, there are five Jesuit and Catholic universities that operate under the charism and educational principles of St. 
Ignatius of Loyola, the founder of the Society of Jesus. In a Jesuit university, the work in education is linked with Ignatian spirituality that inspires it [6, par. 13]. In managing human resources in a Jesuit school, the CJE document says that as much as the administration is able, “people chosen to join the educational community will be men and women capable of understanding its distinctive nature and of contributing to the implementation of characteristics that result from the Ignatian vision” [6, par. 122]. Laypersons in Jesuit universities, then, are expected to be able to share and carry on the kind of education that is based on the Ignatian tradition and spirituality. Fr. Pedro Arrupe, S.J., the former superior general of the Society of Jesus, in his closing session to the committee working on the document on the Characteristics of Jesuit Education, said that a Jesuit school, “if it is an authentic Jesuit school,” should manifest “Ignacianidad”: “...if our operation of the school flows out of the strengths drawn from our own specific charisma, if we emphasize our essential characteristics and our basic options then the education which our students receive should give them a certain \"Ignacianidad” [5, par. 3]. For Arrupe, Ignacianidad or the spirituality inspired by St. Ignatius is “a logical consequence of the fact that Jesuit schools live and operate out of its own charism” [5, par. 3]. Not only do the Jesuits support the contributions of lay partners in realizing the Society’s mission, but more importantly, they also recognize the powerful role that the lay partners in higher education play in the growth and revitalization of the congregation itself in the present time [7]. In an article in Conversations on Jesuit Higher Education, Fr. Howell writes: In a span of 50 years the Society of Jesus has been refounded. It is thriving. But it is thriving in a totally new and creative way. Its commitment to scholarship, for instance, is one of the strongest it has ever been, but carried out primarily through lay colleagues within the Jesuit university setting. Being a Lay Partner in Jesuit Higher Education in the Philippines: A Grounded Theory Application Janet B. Badong-Badilla T World Academy of Science, Engineering and Technology International Journal of Educational and Pedagogical Sciences",
"title": ""
},
{
"docid": "e04dda55d05d15e6a2fb3680a603bd43",
"text": "Multilayer perceptrons (MLPs) or neural networks are popular models used for nonlinear regression and classification tasks. As regressors, MLPs model the conditional distribution of the predictor variables Y given the input variables X . However, this predictive distribution is assumed to be unimodal (e.g. Gaussian). For tasks involving structured prediction, the conditional distribution should be multi-modal, resulting in one-to-many mappings. By using stochastic hidden variables rather than deterministic ones, Sigmoid Belief Nets (SBNs) can induce a rich multimodal distribution in the output space. However, previously proposed learning algorithms for SBNs are not efficient and unsuitable for modeling real-valued data. In this paper, we propose a stochastic feedforward network with hidden layers composed of both deterministic and stochastic variables. A new Generalized EM training procedure using importance sampling allows us to efficiently learn complicated conditional distributions. Our model achieves superior performance on synthetic and facial expressions datasets compared to conditional Restricted Boltzmann Machines and Mixture Density Networks. In addition, the latent features of our model improves classification and can learn to generate colorful textures of objects.",
"title": ""
},
{
"docid": "8452091115566adaad8a67154128dff8",
"text": "© The Ecological Society of America www.frontiersinecology.org T Millennium Ecosystem Assessment (MA) advanced a powerful vision for the future (MA 2005), and now it is time to deliver. The vision of the MA – and of the prescient ecologists and economists whose work formed its foundation – is a world in which people and institutions appreciate natural systems as vital assets, recognize the central roles these assets play in supporting human well-being, and routinely incorporate their material and intangible values into decision making. This vision is now beginning to take hold, fueled by innovations from around the world – from pioneering local leaders to government bureaucracies, and from traditional cultures to major corporations (eg a new experimental wing of Goldman Sachs; Daily and Ellison 2002; Bhagwat and Rutte 2006; Kareiva and Marvier 2007; Ostrom et al. 2007; Goldman et al. 2008). China, for instance, is investing over 700 billion yuan (about US$102.6 billion) in ecosystem service payments, in the current decade (Liu et al. 2008). The goal of the Natural Capital Project – a partnership between Stanford University, The Nature Conservancy, and World Wildlife Fund (www.naturalcapitalproject.org) – is to help integrate ecosystem services into everyday decision making around the world. This requires turning the valuation of ecosystem services into effective policy and finance mechanisms – a problem that, as yet, no one has solved on a large scale. A key challenge remains: relative to other forms of capital, assets embodied in ecosystems are often poorly understood, rarely monitored, and are undergoing rapid degradation (Heal 2000a; MA 2005; Mäler et al. 2008). The importance of ecosystem services is often recognized only after they have been lost, as was the case following Hurricane Katrina (Chambers et al. 2007). Natural capital, and the ecosystem services that flow from it, are usually undervalued – by governments, businesses, and the public – if indeed they are considered at all (Daily et al. 2000; Balmford et al. 2002; NRC 2005). Two fundamental changes need to occur in order to replicate, scale up, and sustain the pioneering efforts that are currently underway, to give ecosystem services weight in decision making. First, the science of ecosystem services needs to advance rapidly. In promising a return (of services) on investments in nature, the scientific community needs to deliver the knowledge and tools necessary to forecast and quantify this return. To help address this challenge, the Natural Capital Project has developed InVEST (a system for Integrated Valuation of Ecosystem ECOSYSTEM SERVICES ECOSYSTEM SERVICES ECOSYSTEM SERVICES",
"title": ""
},
{
"docid": "bcfc8566cf73ec7c002dcca671e3a0bd",
"text": "of the thoracic spine revealed a 1.1 cm intradural extramedullary mass at the level of the T2 vertebral body (Figure 1a). Spinal neurosurgery was planned due to exacerbation of her chronic back pain and progressive weakness of the lower limbs at 28 weeks ’ gestation. Emergent spinal decompression surgery was performed with gross total excision of the tumour. Doppler fl ow of the umbilical artery was used preoperatively and postoperatively to monitor fetal wellbeing. Th e histological examination revealed HPC, World Health Organization (WHO) grade 2 (Figure 1b). Complete recovery was seen within 1 week of surgery. Follow-up MRI demonstrated complete removal of the tumour. We recommended adjuvant external radiotherapy to the patient in the 3rd trimester of pregnancy due to HPC ’ s high risk of recurrence. However, the patient declined radiotherapy. Routine weekly obstetric assessments were performed following surgery. At the 37th gestational week, a 2,850 g, Apgar score 7 – 8, healthy infant was delivered by caesarean section, without need of admission to the neonatal intensive care unit. Adjuvant radiotherapy was administered to the patient in the postpartum period.",
"title": ""
},
{
"docid": "cd67a650969aa547cad8e825511c45c2",
"text": "We present DAPIP, a Programming-By-Example system that learns to program with APIs to perform data transformation tasks. We design a domainspecific language (DSL) that allows for arbitrary concatenations of API outputs and constant strings. The DSL consists of three family of APIs: regular expression-based APIs, lookup APIs, and transformation APIs. We then present a novel neural synthesis algorithm to search for programs in the DSL that are consistent with a given set of examples. The search algorithm uses recently introduced neural architectures to encode input-output examples and to model the program search in the DSL. We show that synthesis algorithm outperforms baseline methods for synthesizing programs on both synthetic and real-world benchmarks.",
"title": ""
},
{
"docid": "c0e4aa45a961aa69bc5c52e7cf7c889d",
"text": "CRM gains increasing importance due to intensive competition and saturated markets. With the purpose of retaining customers, academics as well as practitioners find it crucial to build a churn prediction model that is as accurate as possible. This study applies support vector machines in a newspaper subscription context in order to construct a churn model with a higher predictive performance. Moreover, a comparison is made between two parameter-selection techniques, needed to implement support vector machines. Both techniques are based on grid search and cross-validation. Afterwards, the predictive performance of both kinds of support vector machine models is benchmarked to logistic regression and random forests. Our study shows that support vector machines show good generalization performance when applied to noisy marketing data. Nevertheless, the parameter optimization procedure plays an important role in the predictive performance. We show that only when the optimal parameter selection procedure is applied, support vector machines outperform traditional logistic regression, whereas random forests outperform both kinds of support vector machines. As a substantive contribution, an overview of the most important churn drivers is given. Unlike ample research, monetary value and frequency do not play an important role in explaining churn in this subscription-services application. Even though most important churn predictors belong to the category of variables describing the subscription, the influence of several client/company-interaction variables can not be neglected.",
"title": ""
},
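The passage above describes selecting SVM parameters through grid search with cross-validation and then benchmarking against logistic regression and random forests. The following is a minimal Python sketch of that procedure with scikit-learn; the synthetic data, parameter grid, and AUC scoring are illustrative assumptions, not the newspaper-subscription data or settings from the study.

```python
# Hedged sketch: SVM parameter selection via grid search + cross-validation,
# benchmarked against logistic regression and a random forest.
# X, y are synthetic placeholders for churn features and labels (assumption).
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=20, weights=[0.9],
                           random_state=0)

# Grid search over the RBF kernel's C and gamma, scored by AUC.
grid = GridSearchCV(
    SVC(kernel="rbf"),
    param_grid={"C": [0.1, 1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1]},
    scoring="roc_auc", cv=5)
grid.fit(X, y)
print("best SVM params:", grid.best_params_, "cv AUC: %.3f" % grid.best_score_)

# Benchmarks mentioned in the abstract.
for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                    ("random forest", RandomForestClassifier(n_estimators=200,
                                                             random_state=0))]:
    auc = cross_val_score(model, X, y, scoring="roc_auc", cv=5).mean()
    print("%s cv AUC: %.3f" % (name, auc))
```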
{
"docid": "864c2987092ca266b97ed11faec42aa3",
"text": "BACKGROUND\nAnxiety is the most common emotional response in women during delivery, which can be accompanied with adverse effects on fetus and mother.\n\n\nOBJECTIVES\nThis study was conducted to compare the effects of aromatherapy with rose oil and warm foot bath on anxiety in the active phase of labor in nulliparous women in Tehran, Iran.\n\n\nPATIENTS AND METHODS\nThis clinical trial study was performed after obtaining informed written consent on 120 primigravida women randomly assigned into three groups. The experimental group 1 received a 10-minute inhalation and footbath with oil rose. The experimental group 2 received a 10-minute warm water footbath. Both interventions were applied at the onset of active and transitional phases. Control group, received routine care in labor. Anxiety was assessed using visual analogous scale (VASA) at onset of active and transitional phases before and after the intervention. Statistical comparison was performed using SPSS software version 16 and P < 0.05 was considered significant.\n\n\nRESULTS\nAnxiety scores in the intervention groups in active phase after intervention were significantly lower than the control group (P < 0.001). Anxiety scores before and after intervention in intervention groups in transitional phase was significantly lower than the control group (P < 0.001).\n\n\nCONCLUSIONS\nUsing aromatherapy and footbath reduces anxiety in active phase in nulliparous women.",
"title": ""
},
{
"docid": "6a763e49cdfd41b28922eb536d9404ed",
"text": "With recent advances in computer vision and graphics, it is now possible to generate videos with extremely realistic synthetic faces, even in real time. Countless applications are possible, some of which raise a legitimate alarm, calling for reliable detectors of fake videos. In fact, distinguishing between original and manipulated video can be a challenge for humans and computers alike, especially when the videos are compressed or have low resolution, as it often happens on social networks. Research on the detection of face manipulations has been seriously hampered by the lack of adequate datasets. To this end, we introduce a novel face manipulation dataset of about half a million edited images (from over 1000 videos). The manipulations have been generated with a state-of-the-art face editing approach. It exceeds all existing video manipulation datasets by at least an order of magnitude. Using our new dataset, we introduce benchmarks for classical image forensic tasks, including classification and segmentation, considering videos compressed at various quality levels. In addition, we introduce a benchmark evaluation for creating indistinguishable forgeries with known ground truth; for instance with generative refinement models.",
"title": ""
},
{
"docid": "785ca963ea1f9715cdea9baede4c6081",
"text": "In this paper, factor analysis is applied on a set of data that was collected to study the effectiveness of 58 different agile practices. The analysis extracted 15 factors, each was associated with a list of practices. These factors with the associated practices can be used as a guide for agile process improvement. Correlations between the extracted factors were calculated, and the significant correlation findings suggested that people who applied iterative and incremental development and quality assurance practices had a high success rate, that communication with the customer was not very popular as it had negative correlations with governance and iterative and incremental development. Also, people who applied governance practices also applied quality assurance practices. Interestingly success rate related negatively with traditional analysis methods such as Gantt chart and detailed requirements specification.",
"title": ""
},
{
"docid": "555f06011d03cbe8dedb2fcd198540e9",
"text": "We focus on the challenging task of real-time semantic segmentation in this paper. It finds many practical applications and yet is with fundamental difficulty of reducing a large portion of computation for pixel-wise label inference. We propose an image cascade network (ICNet) that incorporates multi-resolution branches under proper label guidance to address this challenge. We provide in-depth analysis of our framework and introduce the cascade feature fusion unit to quickly achieve highquality segmentation. Our system yields real-time inference on a single GPU card with decent quality results evaluated on challenging datasets like Cityscapes, CamVid and COCO-Stuff.",
"title": ""
},
{
"docid": "ba89a62ac2d1b36738e521d4c5664de2",
"text": "Currently, the network traffic control systems are mainly composed of the Internet core and wired/wireless heterogeneous backbone networks. Recently, these packet-switched systems are experiencing an explosive network traffic growth due to the rapid development of communication technologies. The existing network policies are not sophisticated enough to cope with the continually varying network conditions arising from the tremendous traffic growth. Deep learning, with the recent breakthrough in the machine learning/intelligence area, appears to be a viable approach for the network operators to configure and manage their networks in a more intelligent and autonomous fashion. While deep learning has received a significant research attention in a number of other domains such as computer vision, speech recognition, robotics, and so forth, its applications in network traffic control systems are relatively recent and garnered rather little attention. In this paper, we address this point and indicate the necessity of surveying the scattered works on deep learning applications for various network traffic control aspects. In this vein, we provide an overview of the state-of-the-art deep learning architectures and algorithms relevant to the network traffic control systems. Also, we discuss the deep learning enablers for network systems. In addition, we discuss, in detail, a new use case, i.e., deep learning based intelligent routing. We demonstrate the effectiveness of the deep learning-based routing approach in contrast with the conventional routing strategy. Furthermore, we discuss a number of open research issues, which researchers may find useful in the future.",
"title": ""
},
{
"docid": "c460660e6ea1cc38f4864fe4696d3a07",
"text": "Background. The effective development of healthcare competencies poses great educational challenges. A possible approach to provide learning opportunities is the use of augmented reality (AR) where virtual learning experiences can be embedded in a real physical context. The aim of this study was to provide a comprehensive overview of the current state of the art in terms of user acceptance, the AR applications developed and the effect of AR on the development of competencies in healthcare. Methods. We conducted an integrative review. Integrative reviews are the broadest type of research review methods allowing for the inclusion of various research designs to more fully understand a phenomenon of concern. Our review included multi-disciplinary research publications in English reported until 2012. Results. 2529 research papers were found from ERIC, CINAHL, Medline, PubMed, Web of Science and Springer-link. Three qualitative, 20 quantitative and 2 mixed studies were included. Using a thematic analysis, we've described three aspects related to the research, technology and education. This study showed that AR was applied in a wide range of topics in healthcare education. Furthermore acceptance for AR as a learning technology was reported among the learners and its potential for improving different types of competencies. Discussion. AR is still considered as a novelty in the literature. Most of the studies reported early prototypes. Also the designed AR applications lacked an explicit pedagogical theoretical framework. Finally the learning strategies adopted were of the traditional style 'see one, do one and teach one' and do not integrate clinical competencies to ensure patients' safety.",
"title": ""
}
] | scidocsrr |
7fbe1e066bf607663234d89602f0666e | A multi-case study on Industry 4.0 for SME's in Brandenburg, Germany | [
{
"docid": "1857eb0d2d592961bd7c1c2f226df616",
"text": "The increasing integration of the Internet of Everything into the industrial value chain has built the foundation for the next industrial revolution called Industrie 4.0. Although Industrie 4.0 is currently a top priority for many companies, research centers, and universities, a generally accepted understanding of the term does not exist. As a result, discussing the topic on an academic level is difficult, and so is implementing Industrie 4.0 scenarios. Based on a quantitative text analysis and a qualitative literature review, the paper identifies design principles of Industrie 4.0. Taking into account these principles, academics may be enabled to further investigate on the topic, while practitioners may find assistance in identifying appropriate scenarios. A case study illustrates how the identified design principles support practitioners in identifying Industrie 4.0 scenarios.",
"title": ""
}
] | [
{
"docid": "7ddc7a3fffc582f7eee1d0c29914ba1a",
"text": "Cyclic neutropenia is an uncommon hematologic disorder characterized by a marked decrease in the number of neutrophils in the peripheral blood occurring at regular intervals. The neutropenic phase is characteristically associated with clinical symptoms such as recurrent fever, malaise, headaches, anorexia, pharyngitis, ulcers of the oral mucous membrane, and gingival inflammation. This case report describes a Japanese girl who has this disease and suffers from periodontitis and oral ulceration. Her case has been followed up for the past 5 years from age 7 to 12. The importance of regular oral hygiene, careful removal of subgingival plaque and calculus, and periodic and thorough professional mechanical tooth cleaning was emphasized to arrest the progress of periodontal breakdown. Local antibiotic application with minocycline ointment in periodontal pockets was beneficial as an ancillary treatment, especially during neutropenic periods.",
"title": ""
},
{
"docid": "d94f4df63ac621d9a8dec1c22b720abb",
"text": "Automatically selecting an appropriate set of materialized views and indexes for SQL databases is a non-trivial task. A judicious choice must be cost-driven and influenced by the workload experienced by the system. Although there has been work in materialized view selection in the context of multidimensional (OLAP) databases, no past work has looked at the problem of building an industry-strength tool for automated selection of materialized views and indexes for SQL workloads. In this paper, we present an end-to-end solution to the problem of selecting materialized views and indexes. We describe results of extensive experimental evaluation that demonstrate the effectiveness of our techniques. Our solution is implemented as part of a tuning wizard that ships with Microsoft SQL Server 2000.",
"title": ""
},
{
"docid": "95bb07e57d9bd2b7e9a9a59c29806b66",
"text": "Breast cancer is one of the most common cancers and the second most responsible for cancer mortality worldwide. In 2014, in Portugal approximately 27,200 people died of cancer, of which 1,791 were women with breast cancer. Flaxseed has been one of the most studied foods, regarding possible relations to breast cancer, though mainly in experimental studies in animals, yet in few clinical trials. It is rich in omega-3 fatty acids, α-linolenic acid, lignan, and fibers. One of the main components of flaxseed is the lignans, of which 95% are made of the predominant secoisolariciresinol diglucoside (SDG). SDG is converted into enterolactone and enterodiol, both with antiestrogen activity and structurally similar to estrogen; they can bind to cell receptors, decreasing cell growth. Some studies have shown that the intake of omega-3 fatty acids is related to the reduction of breast cancer risk. In animal studies, α-linolenic acids have been shown to be able to suppress growth, size, and proliferation of cancer cells and also to promote breast cancer cell death. Other animal studies found that the intake of flaxseed combined with tamoxifen can reduce tumor size to a greater extent than taking tamoxifen alone. Additionally, some clinical trials showed that flaxseed can have an important role in decreasing breast cancer risk, mainly in postmenopausal women. Further studies are needed, specifically clinical trials that may demonstrate the potential benefits of flaxseed in breast cancer.",
"title": ""
},
{
"docid": "c12d27988e70e9b3e6987ca2f0ca8bca",
"text": "In this tutorial, we introduce the basic theory behind Stega nography and Steganalysis, and present some recent algorithms and devel opm nts of these fields. We show how the existing techniques used nowadays are relate d to Image Processing and Computer Vision, point out several trendy applicati ons of Steganography and Steganalysis, and list a few great research opportunities j ust waiting to be addressed.",
"title": ""
},
{
"docid": "ea596b23af4b34fdb6a9986a03730d99",
"text": "In the past few years, recommender systems and semantic web technologies have become main subjects of interest in the research community. In this paper, we present a domain independent semantic similarity measure that can be used in the recommendation process. This semantic similarity is based on the relations between the individuals of an ontology. The assessment can be done offline which allows time to be saved and then, get real-time recommendations. The measure has been experimented on two different domains: movies and research papers. Moreover, the generated recommendations by the semantic similarity have been evaluated by a set of volunteers and the results have been promising.",
"title": ""
},
{
"docid": "0a981597279b2fb1792b5d1a00f0c9ec",
"text": "With billions of people using smartphones and the exponential growth of smartphone apps, it is prohibitive for app marketplaces, such as Google App Store, to thoroughly verify if an app is legitimate or malicious. As a result, mobile users are left to decide for themselves whether an app is safe to use. Even worse, recent studies have shown that over 70% of apps in markets request to collect data irrelevant to the main functions of the apps, which could cause leaking of private information or inefficient use of mobile resources. It is worth mentioning that since resource management mechanism of mobile devices is different from PC machines, existing security solutions in PC malware area are not quite compatible with mobile devices. Therefore, academic researchers and commercial anti-malware companies have proposed many security mechanisms to address the security issues of the Android devices. Considering the mechanisms and techniques which are different in nature and used in proposed works, they can be classified into different categories. In this survey, we discuss the existing Android security threats and existing security enforcements solutions between 2010−2015 and try to classify works and review their functionalities. We review a few works of each class. The survey also reviews the strength and weak points of the solutions.",
"title": ""
},
{
"docid": "5bfc5768cf41643a870e3f3dddbbd741",
"text": "Homomorphic encryption has progressed rapidly in both efficiency and versatility since its emergence in 2009. Meanwhile, a multitude of pressing privacy needs — ranging from cloud computing to healthcare management to the handling of shared databases such as those containing genomics data — call for immediate solutions that apply fully homomorpic encryption (FHE) and somewhat homomorphic encryption (SHE) technologies. Further progress towards these ends requires new ideas for the efficient implementation of algebraic operations on word-based (as opposed to bit-wise) encrypted data. Whereas handling data encrypted at the bit level leads to prohibitively slow algorithms for the arithmetic operations that are essential for cloud computing, the word-based approach hits its bottleneck when operations such as integer comparison are needed. In this work, we tackle this challenging problem, proposing solutions to problems — including comparison and division — in word-based encryption via a leveled FHE scheme. We present concrete performance figures for all proposed primitives.",
"title": ""
},
{
"docid": "ec5095df6250a8f6cdf088f730dfbd5e",
"text": "Canine atopic dermatitis (CAD) is a multifaceted disease associated with exposure to various offending agents such as environmental and food allergens. The diagnosis of this condition is difficult because none of the typical signs are pathognomonic. Sets of criteria have been proposed but are mainly used to include dogs in clinical studies. The goals of the present study were to characterize the clinical features and signs of a large population of dogs with CAD, to identify which of these characteristics could be different in food-induced atopic dermatitis (FIAD) and non-food-induced atopic dermatitis (NFIAD) and to develop criteria for the diagnosis of this condition. Using simulated annealing, selected criteria were tested on a large and geographically widespread population of pruritic dogs. The study first described the signalment, history and clinical features of a large population of CAD dogs, compared FIAD and NFIAD dogs and confirmed that both conditions are clinically indistinguishable. Correlations of numerous clinical features with the diagnosis of CAD are subsequently calculated, and two sets of criteria associated with sensitivity and specificity ranging from 80% to 85% and from 79% to 85%, respectively, are proposed. It is finally demonstrated that these new sets of criteria provide better sensitivity and specificity, when compared to Willemse and Prélaud criteria. These criteria can be applied to both FIAD and NFIAD dogs.",
"title": ""
},
{
"docid": "31add593ce5597c24666d9662b3db89d",
"text": "Estimating the body shape and posture of a dressed human subject in motion represented as a sequence of (possibly incomplete) 3D meshes is important for virtual change rooms and security. To solve this problem, statistical shape spaces encoding human body shape and posture variations are commonly used to constrain the search space for the shape estimate. In this work, we propose a novel method that uses a posture-invariant shape space to model body shape variation combined with a skeleton-based deformation to model posture variation. Our method can estimate the body shape and posture of both static scans and motion sequences of dressed human body scans. In case of motion sequences, our method takes advantage of motion cues to solve for a single body shape estimate along with a sequence of posture estimates. We apply our approach to both static scans and motion sequences and demonstrate that using our method, higher fitting accuracy is achieved than when using a variant of the popular SCAPE model [2, 18] as statistical model.",
"title": ""
},
{
"docid": "ef6160d304908ea87287f2071dea5f6d",
"text": "The diffusion of fake images and videos on social networks is a fast growing problem. Commercial media editing tools allow anyone to remove, add, or clone people and objects, to generate fake images. Many techniques have been proposed to detect such conventional fakes, but new attacks emerge by the day. Image-to-image translation, based on generative adversarial networks (GANs), appears as one of the most dangerous, as it allows one to modify context and semantics of images in a very realistic way. In this paper, we study the performance of several image forgery detectors against image-to-image translation, both in ideal conditions, and in the presence of compression, routinely performed upon uploading on social networks. The study, carried out on a dataset of 36302 images, shows that detection accuracies up to 95% can be achieved by both conventional and deep learning detectors, but only the latter keep providing a high accuracy, up to 89%, on compressed data.",
"title": ""
},
{
"docid": "eb6675c6a37aa6839fa16fe5d5220cfb",
"text": "In this paper, we propose an efficient method to detect the underlying structures in data. The same as RANSAC, we randomly sample MSSs (minimal size samples) and generate hypotheses. Instead of analyzing each hypothesis separately, the consensus information in all hypotheses is naturally fused into a hypergraph, called random consensus graph, with real structures corresponding to its dense subgraphs. The sampling process is essentially a progressive refinement procedure of the random consensus graph. Due to the huge number of hyperedges, it is generally inefficient to detect dense subgraphs on random consensus graphs. To overcome this issue, we construct a pairwise graph which approximately retains the dense subgraphs of the random consensus graph. The underlying structures are then revealed by detecting the dense subgraphs of the pair-wise graph. Since our method fuses information from all hypotheses, it can robustly detect structures even under a small number of MSSs. The graph framework enables our method to simultaneously discover multiple structures. Besides, our method is very efficient, and scales well for large scale problems. Extensive experiments illustrate the superiority of our proposed method over previous approaches, achieving several orders of magnitude speedup along with satisfactory accuracy and robustness.",
"title": ""
},
{
"docid": "bd1a13c94d0e12b4ba9f14fef47d2564",
"text": "Denoising is the problem of removing the inherent noise from an image. The standard noise model is additive white Gaussian noise, where the observed image f is related to the underlying true image u by the degradation model f = u+ η, and η is supposed to be at each pixel independently and identically distributed as a zero-mean Gaussian random variable. Since this is an ill-posed problem, Rudin, Osher and Fatemi introduced the total variation as a regularizing term. It has proved to be quite efficient for regularizing images without smoothing the boundaries of the objects. This paper focuses on the simple description of the theory and on the implementation of Chambolle’s projection algorithm for minimizing the total variation of a grayscale image. Furthermore, we adapt the algorithm to the vectorial total variation for color images. The implementation is described in detail and its parameters are analyzed and varied to come up with a reliable implementation. Source Code ANSI C source code to produce the same results as the demo is accessible at the IPOL web page of this article1.",
"title": ""
},
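The passage above centers on Chambolle's projection algorithm for total-variation minimization. Below is a minimal NumPy sketch of the grayscale case for illustration only; the step size, regularization weight, iteration count, and the toy test image are assumed values rather than the parameters analyzed in the paper (which also provides reference ANSI C code).

```python
# Hedged sketch of Chambolle's projection algorithm for ROF / TV denoising
# (grayscale). lam, tau, n_iter and the toy image are illustrative assumptions.
import numpy as np

def gradient(u):
    gx, gy = np.zeros_like(u), np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]   # forward differences, Neumann boundary
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return gx, gy

def divergence(px, py):                  # adjoint of the gradient above
    div = np.zeros_like(px)
    div[:, 1:] += px[:, 1:] - px[:, :-1]; div[:, 0] += px[:, 0]
    div[1:, :] += py[1:, :] - py[:-1, :]; div[0, :] += py[0, :]
    return div

def tv_denoise_chambolle(f, lam=0.5, tau=0.125, n_iter=100):
    px, py = np.zeros_like(f), np.zeros_like(f)
    for _ in range(n_iter):
        # fixed-point update of the dual variable p
        gx, gy = gradient(divergence(px, py) - f / lam)
        norm = np.sqrt(gx ** 2 + gy ** 2)
        px = (px + tau * gx) / (1.0 + tau * norm)
        py = (py + tau * gy) / (1.0 + tau * norm)
    return f - lam * divergence(px, py)   # denoised image u

rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
print("residual error std:", (tv_denoise_chambolle(noisy) - clean).std())
```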
{
"docid": "8c46f24d8e710c5fb4e25be76fc5b060",
"text": "This paper presents the novel design of a wideband circularly polarized (CP) Radio Frequency Identification (RFID) reader microstrip patch antenna for worldwide Ultra High Frequency (UHF) band which covers 840–960 MHz. The proposed antenna, which consists of a microstrip patch with truncated corners and a cross slot, is placed on a foam substrate (εr = 1.06) above a ground plane and is fed through vias through ground plane holes that extend from the quadrature 3 dB branch line hybrid coupler placed below the ground plane. This helps to separate feed network radiation, from the patch antenna and keeping the CP purity. The prototype antenna was fabricated with a total size of 225 × 250 × 12.8 mm3 which shows a measured impedance matching band of 840–1150MHz (31.2%) as well as measured rotating linear based circularly polarized radiation patterns. The simulated and measured 3 dB Axial Ratio (AR) bandwidth is better than 23% from 840–1050 MHz meeting and exceeding the target worldwide RFID UHF band.",
"title": ""
},
{
"docid": "701be9375bb7c019710f7887a0074d15",
"text": "A blockchain powered health information exchange (HIE) can unlock the true value of interoperability and cyber security. This system has the potential to eliminate the friction and costs of current third party intermediaries, when considering population health management. There are promises of improved data integrity, reduced transaction costs, decentralization and disintermediation of trust. Being able to coordinate patient care via a blockchain HIE essentially alleviates unnecessary services and duplicate tests with lowering costs and improvements in efficiencies of the continuum care cycle, while adhering to all HIPAA rules and standards. A patient-centered protocol supported by blockchain technology, Patientory is changing the way healthcare stakeholders manage electronic medical data and interact with clinical care teams.",
"title": ""
},
{
"docid": "3647b5e0185c0120500fff8061265abd",
"text": "Human and machine visual sensing is enhanced when surface properties of objects in scenes, including color, can be reliably estimated despite changes in the ambient lighting conditions. We describe a computational method for estimating surface spectral reflectance when the spectral power distribution of the ambient light is not known.",
"title": ""
},
{
"docid": "dc42ffc3d9a5833f285bac114e8a8b37",
"text": "In this paper, we present a recursive algorithm for extracting classification rules from feedforward neural networks (NNs) that have been trained on data sets having both discrete and continuous attributes. The novelty of this algorithm lies in the conditions of the extracted rules: the rule conditions involving discrete attributes are disjoint from those involving continuous attributes. The algorithm starts by first generating rules with discrete attributes only to explain the classification process of the NN. If the accuracy of a rule with only discrete attributes is not satisfactory, the algorithm refines this rule by recursively generating more rules with discrete attributes not already present in the rule condition, or by generating a hyperplane involving only the continuous attributes. We show that for three real-life credit scoring data sets, the algorithm generates rules that are not only more accurate but also more comprehensible than those generated by other NN rule extraction methods.",
"title": ""
},
{
"docid": "062839e72c6bdc6c6bf2ba1d1041d07b",
"text": "Students’ increasing use of text messaging language has prompted concern that textisms (e.g., 2 for to, dont for don’t, ☺) will intrude into their formal written work. Eighty-six Australian and 150 Canadian undergraduates were asked to rate the appropriateness of textism use in various situations. Students distinguished between the appropriateness of using textisms in different writing modalities and to different recipients, rating textism use as inappropriate in formal exams and assignments, but appropriate in text messages, online chat and emails with friends and siblings. In a second study, we checked the examination papers of a separate sample of 153 Australian undergraduates for the presence of textisms. Only a negligible number were found. We conclude that, overall, university students recognise the different requirements of different recipients and modalities when considering textism use and that students are able to avoid textism use in exams despite media reports to the contrary.",
"title": ""
},
{
"docid": "a458f16b84f40dc0906658a93d4b2efa",
"text": "We investigated the usefulness of Sonazoid contrast-enhanced ultrasonography (Sonazoid-CEUS) in the diagnosis of hepatocellular carcinoma (HCC). The examination was performed by comparing the images during the Kupffer phase of Sonazoid-CEUS with superparamagnetic iron oxide magnetic resonance (SPIO-MRI). The subjects were 48 HCC nodules which were histologically diagnosed (well-differentiated HCC, n = 13; moderately differentiated HCC, n = 30; poorly differentiated HCC, n = 5). We performed Sonazoid-CEUS and SPIO-MRI on all subjects. In the Kupffer phase of Sonazoid-CEUS, the differences in the contrast agent uptake between the tumorous and non-tumorous areas were quantified as the Kupffer phase ratio and compared. In the SPIO-MRI, it was quantified as the SPIO-intensity index. We then compared these results with the histological differentiation of HCCs. The Kupffer phase ratio decreased as the HCCs became less differentiated (P < 0.0001; Kruskal–Wallis test). The SPIO-intensity index also decreased as HCCs became less differentiated (P < 0.0001). A positive correlation was found between the Kupffer phase ratio and the SPIO-MRI index (r = 0.839). In the Kupffer phase of Sonazoid-CEUS, all of the moderately and poorly differentiated HCCs appeared hypoechoic and were detected as a perfusion defect, whereas the majority (9 of 13 cases, 69.2%) of the well-differentiated HCCs had an isoechoic pattern. The Kupffer phase images of Sonazoid-CEUS and SPIO-MRI matched perfectly (100%) in all of the moderately and poorly differentiated HCCs. Sonazoid-CEUS is useful for estimating histological grading of HCCs. It is a modality that could potentially replace SPIO-MRI.",
"title": ""
},
{
"docid": "9f1441bc10d7b0234a3736ce83d5c14b",
"text": "Conservation of genetic diversity, one of the three main forms of biodiversity, is a fundamental concern in conservation biology as it provides the raw material for evolutionary change and thus the potential to adapt to changing environments. By means of meta-analyses, we tested the generality of the hypotheses that habitat fragmentation affects genetic diversity of plant populations and that certain life history and ecological traits of plants can determine differential susceptibility to genetic erosion in fragmented habitats. Additionally, we assessed whether certain methodological approaches used by authors influence the ability to detect fragmentation effects on plant genetic diversity. We found overall large and negative effects of fragmentation on genetic diversity and outcrossing rates but no effects on inbreeding coefficients. Significant increases in inbreeding coefficient in fragmented habitats were only observed in studies analyzing progenies. The mating system and the rarity status of plants explained the highest proportion of variation in the effect sizes among species. The age of the fragment was also decisive in explaining variability among effect sizes: the larger the number of generations elapsed in fragmentation conditions, the larger the negative magnitude of effect sizes on heterozygosity. Our results also suggest that fragmentation is shifting mating patterns towards increased selfing. We conclude that current conservation efforts in fragmented habitats should be focused on common or recently rare species and mainly outcrossing species and outline important issues that need to be addressed in future research on this area.",
"title": ""
}
] | scidocsrr |
a2d04f9748040ba26485b311176ecc8a | Very High Frequency PWM Buck Converters Using Monolithic GaN Half-Bridge Power Stages With Integrated Gate Drivers | [
{
"docid": "e09d142b072122da62ebe79650f42cc5",
"text": "This paper describes a synchronous buck converter based on a GaN-on-SiC integrated circuit, which includes a halfbridge power stage, as well as a modified active pull-up gate driver stage. The integrated modified active pull-up driver takes advantage of depletion-mode device characteristics to achieve fast switching with low power consumption. Design principles and results are presented for a synchronous buck converter prototype operating at 100 MHz switching frequency, delivering up to 7 W from 20 V input voltage. Measured power-stage efficiency peaks above 91%, and remains above 85% over a wide range of operating conditions. Experimental results show that the converter has the ability to accurately track a 20 MHz bandwidth LTE envelope signal with 83.7% efficiency.",
"title": ""
},
{
"docid": "3f77b59dc39102eb18e31dbda0578ecb",
"text": "GaN high electron mobility transistors (HEMTs) are well suited for high-frequency operation due to their lower on resistance and device capacitance compared with traditional silicon devices. When grown on silicon carbide, GaN HEMTs can also achieve very high power density due to the enhanced power handling capabilities of the substrate. As a result, GaN-on-SiC HEMTs are increasingly popular in radio-frequency power amplifiers, and applications as switches in high-frequency power electronics are of high interest. This paper explores the use of GaN-on-SiC HEMTs in conventional pulse-width modulated switched-mode power converters targeting switching frequencies in the tens of megahertz range. Device sizing and efficiency limits of this technology are analyzed, and design principles and guidelines are given to exploit the capabilities of the devices. The results are presented for discrete-device and integrated implementations of a synchronous Buck converter, providing more than 10-W output power supplied from up to 40 V with efficiencies greater than 95% when operated at 10 MHz, and greater than 90% at switching frequencies up to 40 MHz. As a practical application of this technology, the converter is used to accurately track a 3-MHz bandwidth communication envelope signal with 92% efficiency.",
"title": ""
}
] | [
{
"docid": "172f206c8b3b0bc0d75793a13fa9ef88",
"text": "Knowledge bases are important resources for a variety of natural language processing tasks but suffer from incompleteness. We propose a novel embedding model, ITransF, to perform knowledge base completion. Equipped with a sparse attention mechanism, ITransF discovers hidden concepts of relations and transfer statistical strength through the sharing of concepts. Moreover, the learned associations between relations and concepts, which are represented by sparse attention vectors, can be interpreted easily. We evaluate ITransF on two benchmark datasets— WN18 and FB15k for knowledge base completion and obtains improvements on both the mean rank and Hits@10 metrics, over all baselines that do not use additional information.",
"title": ""
},
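As a rough illustration of the idea in the passage above (relations sharing a small set of concept projection matrices selected by sparse attention vectors, combined with a translation-style score), here is a tiny NumPy sketch. The dimensions, the hand-set attention vectors, and the L1 scoring are assumptions for exposition; they do not reproduce the paper's training procedure or its learned sparsity.

```python
# Hedged sketch of an ITransF-style triple score: attention-weighted shared
# concept matrices project head/tail embeddings before a translation test.
# All sizes and values below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
d, m = 8, 4                                   # embedding size, number of shared concept matrices
D = 0.1 * rng.standard_normal((m, d, d))      # concept projection matrices (shared across relations)
h, t, r = (rng.standard_normal(d) for _ in range(3))   # head, tail, relation embeddings
alpha_h = np.array([0.9, 0.1, 0.0, 0.0])      # sparse attention over concepts (head side)
alpha_t = np.array([0.0, 0.8, 0.2, 0.0])      # sparse attention over concepts (tail side)

def project(alpha, x):
    # attention-weighted combination of concept matrices, applied to an embedding
    return np.einsum("m,mij,j->i", alpha, D, x)

score = -np.linalg.norm(project(alpha_h, h) + r - project(alpha_t, t), ord=1)
print("triple plausibility score:", round(score, 3))
```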
{
"docid": "326cb7464df9c9361be4e27d82f61455",
"text": "We implemented an attack against WEP, the link-layer security protocol for 802.11 networks. The attack was described in a recent paper by Fluhrer, Mantin, and Shamir. With our implementation, and permission of the network administrator, we were able to recover the 128 bit secret key used in a production network, with a passive attack. The WEP standard uses RC4 IVs improperly, and the attack exploits this design failure. This paper describes the attack, how we implemented it, and some optimizations to make the attack more efficient. We conclude that 802.11 WEP is totally insecure, and we provide some recommendations.",
"title": ""
},
{
"docid": "e0633afb6f4dcb1561dbb23b6e3aa713",
"text": "Software security vulnerabilities are one of the critical issues in the realm of computer security. Due to their potential high severity impacts, many different approaches have been proposed in the past decades to mitigate the damages of software vulnerabilities. Machine-learning and data-mining techniques are also among the many approaches to address this issue. In this article, we provide an extensive review of the many different works in the field of software vulnerability analysis and discovery that utilize machine-learning and data-mining techniques. We review different categories of works in this domain, discuss both advantages and shortcomings, and point out challenges and some uncharted territories in the field.",
"title": ""
},
{
"docid": "8da0bdec21267924d16f9a04e6d9a7ef",
"text": "Traffic light timing optimization is still an active line of research despite the wealth of scientific literature on the topic, and the problem remains unsolved for any non-toy scenario. One of the key issues with traffic light optimization is the large scale of the input information that is available for the controlling agent, namely all the traffic data that is continually sampled by the traffic detectors that cover the urban network. This issue has in the past forced researchers to focus on agents that work on localized parts of the traffic network, typically on individual intersections, and to coordinate every individual agent in a multi-agent setup. In order to overcome the large scale of the available state information, we propose to rely on the ability of deep Learning approaches to handle large input spaces, in the form of Deep Deterministic Policy Gradient (DDPG) algorithm. We performed several experiments with a range of models, from the very simple one (one intersection) to the more complex one (a big city section).",
"title": ""
},
{
"docid": "44abac09424c717f3a691e4ba2640c1a",
"text": "In the emerging field of acoustic novelty detection, most research efforts are devoted to probabilistic approaches such as mixture models or state-space models. Only recent studies introduced (pseudo-)generative models for acoustic novelty detection with recurrent neural networks in the form of an autoencoder. In these approaches, auditory spectral features of the next short term frame are predicted from the previous frames by means of Long-Short Term Memory recurrent denoising autoencoders. The reconstruction error between the input and the output of the autoencoder is used as activation signal to detect novel events. There is no evidence of studies focused on comparing previous efforts to automatically recognize novel events from audio signals and giving a broad and in depth evaluation of recurrent neural network-based autoencoders. The present contribution aims to consistently evaluate our recent novel approaches to fill this white spot in the literature and provide insight by extensive evaluations carried out on three databases: A3Novelty, PASCAL CHiME, and PROMETHEUS. Besides providing an extensive analysis of novel and state-of-the-art methods, the article shows how RNN-based autoencoders outperform statistical approaches up to an absolute improvement of 16.4% average F-measure over the three databases.",
"title": ""
},
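To make the reconstruction-error idea in the passage above concrete, here is a small PyTorch sketch: an LSTM autoencoder is fit on spectral frames assumed to be "normal", and frames whose reconstruction error exceeds a threshold are flagged as novel. The feature dimension, layer sizes, training length, and the mean-plus-two-standard-deviations threshold are assumptions for illustration, not the configurations evaluated in the article.

```python
# Hedged sketch: LSTM autoencoder novelty detection via reconstruction error.
# Random tensors stand in for auditory spectral features (assumption).
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    def __init__(self, n_features=40, hidden=64):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, x):                       # x: (batch, time, n_features)
        z, _ = self.encoder(x)
        y, _ = self.decoder(z)
        return self.out(y)

torch.manual_seed(0)
model = LSTMAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
normal = torch.randn(32, 100, 40)               # placeholder "normal" spectral frames

for _ in range(10):                             # shortened training loop
    loss = nn.functional.mse_loss(model(normal), normal)
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():                           # per-frame novelty score on new audio
    test = torch.randn(1, 100, 40)
    err = ((model(test) - test) ** 2).mean(dim=2)
threshold = err.mean() + 2 * err.std()          # assumed thresholding rule
print("frames flagged as novel:", int((err > threshold).sum()))
```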
{
"docid": "48485e967c5aa345a53b91b47cc0e6d0",
"text": "The buccinator musculomucosal flaps are actually considered the main reconstructive option for small-moderate defects of the oral mucosa. In this paper we present our experience with the posteriorly based buccinator musculomucosal flap. A retrospective review was performed of all patients who had had a Bozola flap reconstruction at the Operative Unit of Maxillo-Facial Surgery of Parma, Italy, between 2003 and 2010. The Bozola flap was used in 19 patients. In most cases they had defects of the palate (n=12). All flaps were harvested successfully and no major complications occurred. Minor complications were observed in two cases. At the end of the follow up all patients returned to a normal diet without alterations of speech and swallowing. We consider the Bozola flap the first choice for the reconstruction of defects involving the palate, the cheek and the postero-lateral tongue and floor of the mouth.",
"title": ""
},
{
"docid": "28fcdd3282dd57c760e9e2628764c0f8",
"text": "Constructing a valid measure of presence and discovering the factors that contribute to presence have been much sought after goals of presence researchers and at times have generated controversy among them. This paper describes the results of principal-components analyses of Presence Questionnaire (PQ) data from 325 participants following exposure to immersive virtual environments. The analyses suggest that a 4-factor model provides the best fit to our data. The factors are Involvement, Adaptation/Immersion, Sensory Fidelity, and Interface Quality. Except for the Adaptation/Immersion factor, these factors corresponded to those identified in a cluster analysis of data from an earlier version of the questionnaire. The existence of an Adaptation/Immersion factor leads us to postulate that immersion is greater for those individuals who rapidly and easily adapt to the virtual environment. The magnitudes of the correlations among the factors indicate moderately strong relationships among the 4 factors. Within these relationships, Sensory Fidelity items seem to be more closely related to Involvement, whereas Interface Quality items appear to be more closely related to Adaptation/Immersion, even though there is a moderately strong relationship between the Involvement and Adaptation/Immersion factors.",
"title": ""
},
{
"docid": "b08027d8febf1d7f8393b9934739847d",
"text": "Sarcasm is generally characterized as a figure of speech that involves the substitution of a literal by a figurative meaning, which is usually the opposite of the original literal meaning. We re-frame the sarcasm detection task as a type of word sense disambiguation problem, where the sense of a word is either literal or sarcastic. We call this the Literal/Sarcastic Sense Disambiguation (LSSD) task. We address two issues: 1) how to collect a set of target words that can have either literal or sarcastic meanings depending on context; and 2) given an utterance and a target word, how to automatically detect whether the target word is used in the literal or the sarcastic sense. For the latter, we investigate several distributional semantics methods and show that a Support Vector Machines (SVM) classifier with a modified kernel using word embeddings achieves a 7-10% F1 improvement over a strong lexical baseline.",
"title": ""
},
{
"docid": "653b9148a229bd8b2c1909d98d67e7a4",
"text": "In this work, a beam switched antenna system based on a planar connected antenna array (CAA) is proposed at 28 GHz for 5G applications. The antenna system consists of a 4 × 4 connected slot antenna elements. It is covering frequency band from 27.4 GHz to 28.23 GHz with at least −10dB bandwidth of 830 MHz. It is modeled on a commercially available RO3003 substrate with ∊r equal to 3.3. The dimensions of the board are equal to 61×54×0.13 mm3. The proposed design is compact and low profile. A Butler matrix based feed network is used to steer the beam at different locations.",
"title": ""
},
{
"docid": "fb0b06eb6238c008bef7d3b2e9a80792",
"text": "An N-dimensional image is divided into “object” and “background” segments using a graph cut approach. A graph is formed by connecting all pairs of neighboring image pixels (voxels) by weighted edges. Certain pixels (voxels) have to be a priori identified as object or background seeds providing necessary clues about the image content. Our objective is to find the cheapest way to cut the edges in the graph so that the object seeds are completely separated from the background seeds. If the edge cost is a decreasing function of the local intensity gradient then the minimum cost cut should produce an object/background segmentation with compact boundaries along the high intensity gradient values in the image. An efficient, globally optimal solution is possible via standard min-cut/max-flow algorithms for graphs with two terminals. We applied this technique to interactively segment organs in various 2D and 3D medical images.",
"title": ""
},
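The passage above describes the seeded min-cut formulation with edge costs that decrease with the local intensity gradient. Below is a toy Python sketch using networkx max-flow/min-cut on a tiny 2-D image; the Gaussian weight function, its scale, and the seed locations are assumptions chosen only to illustrate the construction.

```python
# Hedged sketch: seeded object/background segmentation as a min-cut problem.
# The weight function and seeds below are illustrative assumptions.
import numpy as np
import networkx as nx

img = np.array([[0.1, 0.1, 0.9, 0.9],
                [0.1, 0.2, 0.8, 0.9],
                [0.1, 0.1, 0.9, 1.0]])
obj_seeds, bkg_seeds = [(0, 3)], [(0, 0)]

def weight(a, b, sigma=0.1):          # high capacity where the gradient is small
    return float(np.exp(-((img[a] - img[b]) ** 2) / (2 * sigma ** 2)))

G = nx.Graph()
h, w = img.shape
for i in range(h):
    for j in range(w):
        if i + 1 < h:
            G.add_edge((i, j), (i + 1, j), capacity=weight((i, j), (i + 1, j)))
        if j + 1 < w:
            G.add_edge((i, j), (i, j + 1), capacity=weight((i, j), (i, j + 1)))

INF = 1e9                             # hard links tying seed pixels to the terminals
for p in obj_seeds:
    G.add_edge("source", p, capacity=INF)
for p in bkg_seeds:
    G.add_edge("sink", p, capacity=INF)

cut_value, (obj_side, _) = nx.minimum_cut(G, "source", "sink")
print("object pixels:", sorted(p for p in obj_side if p != "source"))
```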
{
"docid": "a00fe5032a5e1835120135e6e504d04b",
"text": "Perfect information Monte Carlo (PIMC) search is the method of choice for constructing strong Al systems for trick-taking card games. PIMC search evaluates moves in imperfect information games by repeatedly sampling worlds based on state inference and estimating move values by solving the corresponding perfect information scenarios. PIMC search performs well in trick-taking card games despite the fact that it suffers from the strategy fusion problem, whereby the game's information set structure is ignored because moves are evaluated opportunistically in each world. In this paper we describe imperfect information Monte Carlo (IIMC) search, which aims at mitigating this problem by basing move evaluation on more realistic playout sequences rather than perfect information move values. We show that RecPIMC - a recursive IIMC search variant based on perfect information evaluation - performs considerably better than PIMC search in a large class of synthetic imperfect information games and the popular card game of Skat, for which PIMC search is the state-of-the-art cardplay algorithm.",
"title": ""
},
{
"docid": "1f7bd85c5b28f97565d8b38781e875ab",
"text": "Parental socioeconomic status is among the widely cited factors that has strong association with academic performance of students. Explanatory research design was employed to assess the effects of parents’ socioeconomic status on the academic achievement of students in regional examination. To that end, regional examination result of 538 randomly selected students from thirteen junior secondary schools has been analysed using percentage, independent samples t-tests, Spearman’s rho correlation and one way ANOVA. The results of the analysis revealed that socioeconomic status of parents (particularly educational level and occupational status of parents) has strong association with the academic performance of students. Students from educated and better off families have scored higher result in their regional examination than their counterparts. Being a single parent student and whether parents are living together or not have also a significant impact on the academic performance of students. Parents’ age did not have a significant association with the performance of students.",
"title": ""
},
{
"docid": "e05ea52ecf42826e73ed7095ed162557",
"text": "This paper aims at detecting and recognizing fish species from underwater images by means of Fast R-CNN (Regions with Convolutional Neural and Networks) features. Encouraged by powerful recognition results achieved by Convolutional Neural Networks (CNNs) on generic VOC and ImageNet dataset, we apply this popular deep ConvNets to domain-specific underwater environment which is more complicated than overland situation, using a new dataset of 24277 ImageCLEF fish images belonging to 12 classes. The experimental results demonstrate the promising performance of our networks. Fast R-CNN improves mean average precision (mAP) by 11.2% relative to Deformable Parts Model (DPM) baseline-achieving a mAP of 81.4%, and detects 80× faster than previous R-CNN on a single fish image.",
"title": ""
},
{
"docid": "19acedd03589d1fd1173dd1565d11baf",
"text": "This is the first report on the microbial diversity of xaj-pitha, a rice wine fermentation starter culture through a metagenomics approach involving Illumine-based whole genome shotgun (WGS) sequencing method. Metagenomic DNA was extracted from rice wine starter culture concocted by Ahom community of Assam and analyzed using a MiSeq® System. A total of 2,78,231 contigs, with an average read length of 640.13 bp, were obtained. Data obtained from the use of several taxonomic profiling tools were compared with previously reported microbial diversity studies through the culture-dependent and culture-independent method. The microbial community revealed the existence of amylase producers, such as Rhizopus delemar, Mucor circinelloides, and Aspergillus sp. Ethanol producers viz., Meyerozyma guilliermondii, Wickerhamomyces ciferrii, Saccharomyces cerevisiae, Candida glabrata, Debaryomyces hansenii, Ogataea parapolymorpha, and Dekkera bruxellensis, were found associated with the starter culture along with a diverse range of opportunistic contaminants. The bacterial microflora was dominated by lactic acid bacteria (LAB). The most frequent occurring LAB was Lactobacillus plantarum, Lactobacillus brevis, Leuconostoc lactis, Weissella cibaria, Lactococcus lactis, Weissella para mesenteroides, Leuconostoc pseudomesenteroides, etc. Our study provided a comprehensive picture of microbial diversity associated with rice wine fermentation starter and indicated the superiority of metagenomic sequencing over previously used techniques.",
"title": ""
},
{
"docid": "9f7aaba61ef395f85252820edae5db1b",
"text": "Theory and research on sex differences in adjustment focus largely on parental, societal, and biological influences. However, it also is important to consider how peers contribute to girls' and boys' development. This article provides a critical review of sex differences in several peer relationship processes, including behavioral and social-cognitive styles, stress and coping, and relationship provisions. The authors present a speculative peer-socialization model based on this review in which the implications of these sex differences for girls' and boys' emotional and behavioral development are considered. Central to this model is the idea that sex-linked relationship processes have costs and benefits for girls' and boys' adjustment. Finally, the authors present recent research testing certain model components and propose approaches for testing understudied aspects of the model.",
"title": ""
},
{
"docid": "89ed5dc0feb110eb3abc102c4e50acaf",
"text": "Automatic object detection in infrared images is a vital task for many military defense systems. The high detection rate and low false detection rate of this phase directly affect the performance of the following algorithms in the system as well as the general performance of the system. In this work, a fast and robust algorithm is proposed for detection of small and high intensity objects in infrared scenes. Top-hat transformation and mean filter was used to increase the visibility of the objects, and a two-layer thresholding algorithm was introduced to calculate the object sizes more accurately. Finally, small objects extracted by using post processing methods.",
"title": ""
},
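A rough SciPy sketch of the pipeline described in the passage above (top-hat transform, mean filtering, a strict-then-loose two-layer threshold, and simple post-processing) is given below. The structuring-element size, threshold factors, maximum blob area, and the toy frame are assumptions, not the paper's tuned parameters.

```python
# Hedged sketch: small bright-target detection in an infrared-like frame.
# Kernel sizes, threshold factors and max_area are illustrative assumptions.
import numpy as np
from scipy import ndimage

def detect_small_targets(img, se_size=5, k_seed=3.0, k_grow=1.5, max_area=40):
    tophat = ndimage.white_tophat(img.astype(float), size=(se_size, se_size))
    smooth = ndimage.uniform_filter(tophat, size=3)          # mean filter
    seeds = smooth > smooth.mean() + k_seed * smooth.std()   # strict threshold layer
    grown = smooth > smooth.mean() + k_grow * smooth.std()   # loose threshold layer
    labels, _ = ndimage.label(grown)
    keep = set(np.unique(labels[seeds])) - {0}               # regions containing a seed
    mask = np.isin(labels, list(keep))
    for lab_id in keep:                                      # post-processing step:
        if (labels == lab_id).sum() > max_area:              # drop blobs too large to be
            mask[labels == lab_id] = False                   # "small" targets
    return mask

rng = np.random.default_rng(1)
frame = rng.normal(0.2, 0.02, (64, 64))     # dim, slightly noisy background
frame[10, 12] += 0.6                        # point-like target
frame[40:42, 50:52] += 0.5                  # slightly extended target
print("detected target pixels:\n", np.argwhere(detect_small_targets(frame)))
```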
{
"docid": "4ecc49bb99ade138783899b6f9b47f16",
"text": "This paper compares direct reinforcement learning (no explicit model) and model-based reinforcement learning on a simple task: pendulum swing up. We nd that in this task model-based approaches support reinforcement learning from smaller amounts of training data and eecient handling of changing goals.",
"title": ""
},
{
"docid": "f0af0497727f2256aa52b30c3a7f64d1",
"text": "This paper presented a modified particle swarm optimizer algorithm (MPSO). The aggregation degree of the particle swarm was introduced. The particles' diversity was improved through periodically monitoring aggregation degree of the particle swarm. On the later development of the PSO algorithm, it has been taken strategy of the Gaussian mutation to the best particle's position, which enhanced the particles' capacity to jump out of local minima. Several typical benchmark functions with different dimensions have been used for testing. The simulation results show that the proposed method improves the convergence precision and speed of PSO algorithm effectively.",
"title": ""
},
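To illustrate the mechanism described in the passage above (monitoring how tightly the swarm has aggregated around the best particle and applying a Gaussian mutation to the global best late in the run), here is a small NumPy sketch on a sphere benchmark. The aggregation measure, mutation schedule, and all constants are assumptions for exposition, not the paper's exact formulation.

```python
# Hedged sketch: PSO with an aggregation-degree check and Gaussian mutation
# of the global best. Constants and the schedule are illustrative assumptions.
import numpy as np

def sphere(x):                         # benchmark objective
    return float(np.sum(x ** 2))

rng = np.random.default_rng(0)
dim, n_particles, iters = 10, 30, 200
x = rng.uniform(-5, 5, (n_particles, dim))
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([sphere(p) for p in x])
g, g_f = pbest[pbest_f.argmin()].copy(), pbest_f.min()
w, c1, c2 = 0.72, 1.49, 1.49

for it in range(iters):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
    x = x + v
    f = np.array([sphere(p) for p in x])
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    if pbest_f.min() < g_f:
        g, g_f = pbest[pbest_f.argmin()].copy(), pbest_f.min()

    # aggregation degree: mean distance of the swarm to the global best
    aggregation = np.mean(np.linalg.norm(x - g, axis=1))
    if it > iters // 2 and aggregation < 1e-2:
        candidate = g + rng.normal(0.0, 0.1, dim)   # Gaussian mutation of the best
        if sphere(candidate) < g_f:
            g, g_f = candidate, sphere(candidate)

print("best objective value:", g_f)
```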
{
"docid": "131c163caef9ab345eada4b2d423aa9d",
"text": "Text pre-processing of Arabic Language is a challenge and crucial stage in Text Categorization (TC) particularly and Text Mining (TM) generally. Stemming algorithms can be employed in Arabic text preprocessing to reduces words to their stems/or root. Arabic stemming algorithms can be ranked, according to three category, as root-based approach (ex. Khoja); stem-based approach (ex. Larkey); and statistical approach (ex. N-Garm). However, no stemming of this language is perfect: The existing stemmers have a small efficiency. In this paper, in order to improve the accuracy of stemming and therefore the accuracy of our proposed TC system, an efficient hybrid method is proposed for stemming Arabic text. The effectiveness of the aforementioned four methods was evaluated and compared in term of the F-measure of the Naïve Bayesian classifier and the Support Vector Machine classifier used in our TC system. The proposed stemming algorithm was found to supersede the other stemming ones: The obtained results illustrate that using the proposed stemmer enhances greatly the performance of Arabic Text Categorization.",
"title": ""
},
{
"docid": "7a62e5a78eabbcbc567d5538a2f35434",
"text": "This paper presents a system for a design and implementation of Optical Arabic Braille Recognition(OBR) with voice and text conversion. The implemented algorithm based on a comparison of Braille dot position extraction in each cell with the database generated for each Braille cell. Many digital image processing have been performed on the Braille scanned document like binary conversion, edge detection, holes filling and finally image filtering before dot extraction. The work in this paper also involved a unique decimal code generation for each Braille cell used as a base for word reconstruction with the corresponding voice and text conversion database. The implemented algorithm achieve expected result through letter and words recognition and transcription accuracy over 99% and average processing time around 32.6 sec per page. using matlab environmemt",
"title": ""
}
] | scidocsrr |
c6befff453b219541b9d377794eca89d | Intelligent Traffic Information System Based on Integration of Internet of Things and Agent Technology | [
{
"docid": "6c8983865bf3d6bdbf120e0480345aac",
"text": "In the future Internet of Things (IoT), smart objects will be the fundamental building blocks for the creation of cyber-physical smart pervasive systems in a great variety of application domains ranging from health-care to transportation, from logistics to smart grid and cities. The implementation of a smart objects-oriented IoT is a complex challenge as distributed, autonomous, and heterogeneous IoT components at different levels of abstractions and granularity need to cooperate among themselves, with conventional networked IT infrastructures, and also with human users. In this paper, we propose the integration of two complementary mainstream paradigms for large-scale distributed computing: Agents and Cloud. Agent-based computing can support the development of decentralized, dynamic, cooperating and open IoT systems in terms of multi-agent systems. Cloud computing can enhance the IoT objects with high performance computing capabilities and huge storage resources. In particular, we introduce a cloud-assisted and agent-oriented IoT architecture that will be realized through ACOSO, an agent-oriented middleware for cooperating smart objects, and BodyCloud, a sensor-cloud infrastructure for large-scale sensor-based systems.",
"title": ""
}
] | [
{
"docid": "2915d67d630b31bc23d44b9eea0d039e",
"text": "Life-size humanoids which have the same joint arrangement as humans are expected to help in the living environment. In this case, they require high load operations such as gripping and conveyance of heavy load, and holding people at the care spot. However, these operations are difficult for existing humanoids because of their low joint output. Therefore, the purpose of this study is to develop the highoutput life-size humanoid robot. We first designed a motor driver for humanoid with featuring small, water-cooled, and high output, and it performed higher joint output than existing humanoids utilizing. In this paper, we describe designed humanoid arm and leg with this motor driver. The arm is featuring the designed 2-axis unit and the leg is featuring the water-cooled double motor system. We demonstrated the arm's high torque and high velocity experiment and the leg's high performance experiment based on water-cooled double motor compared with air-cooled and single motor. Then we designed and developed a life-size humanoid with these arms and legs. We demonstrated some humanoid's experiment operating high load to find out the arm and leg's validity.",
"title": ""
},
{
"docid": "ef1758847263c0708ed653c74a3cff41",
"text": "The management of central diabetes insipidus has been greatly simplified by the introduction of desmopressin (DDAVP). Its ease of administration, safety and tolerability make DDAVP the first line agent for outpatient treatment of central diabetes insipidus. The major complication of DDAVP therapy is water intoxication and hyponatremia. The risk of hyponatremia can be reduced by careful dose titration when initiating therapy and by close monitoring of serum osmolality when DDAVP is used with other medications affecting water balance. Herein we review the adverse effects of DDAVP and its predecessor, vasopressin, as well as discuss important clinical considerations when using these agents to treat central diabetes insipidus.",
"title": ""
},
{
"docid": "e70c6ccc129f602bd18a49d816ee02a9",
"text": "This purpose of this paper is to show how prevalent features of successful human tutoring interactions can be integrated into a pedagogical agent, AutoTutor. AutoTutor is a fully automated computer tutor that responds to learner input by simulating the dialog moves of effective, normal human tutors. AutoTutor’s delivery of dialog moves is organized within a 5step framework that is unique to normal human tutoring interactions. We assessed AutoTutor’s performance as an effective tutor and conversational partner during tutoring sessions with virtual students of varying ability levels. Results from three evaluation cycles indicate the following: (1) AutoTutor is capable of delivering pedagogically effective dialog moves that mimic the dialog move choices of human tutors, and (2) AutoTutor is a reasonably effective conversational partner. INTRODUCTION AND BACKGROUND Over the last decade a number of researchers have attempted to uncover the mechanisms of human tutoring that are responsible for student learning gains. Many of the informative findings have been reported in studies that have systematically analyzed the collaborative discourse that occurs between tutors and students (Fox, 1993; Graesser & Person, 1994; Graesser, Person, & Magliano, 1995; Hume, Michael, Rovick, & Evens, 1996; McArthur, Stasz, & Zmuidzinas, 1990; Merrill, Reiser, Ranney, & Trafton, 1992; Moore, 1995; Person & Graesser, 1999; Person, Graesser, Magliano, & Kreuz, 1994; Person, Kreuz, Zwaan, & Graesser, 1995; Putnam, 1987). For example, we have learned that the tutorial session is predominately controlled by the tutor. That is, tutors, not students, typically determine when and what topics will be covered in the session. Further, we know that human tutors rarely employ sophisticated or “ideal” tutoring models that are often incorporated into intelligent tutoring systems. Instead, human tutors are more likely to rely on localized strategies that are embedded within conversational turns. Although many findings such as these have illuminated the tutoring process, they present formidable challenges for designers of intelligent tutoring systems. After all, building a knowledgeable conversational partner is no small feat. However, if designers of future tutoring systems wish to capitalize on the knowledge gained from human tutoring studies, the next generation of tutoring systems will incorporate pedagogical agents that engage in learning dialogs with students. The purpose of this paper is twofold. First, we will describe how prevalent features of successful human tutoring interactions can be incorporated into a pedagogical agent, AutoTutor. Second, we will provide data from several preliminary performance evaluations in which AutoTutor interacts with virtual students of varying ability levels. Person, Graesser, Kreuz, Pomeroy, and the Tutoring Research Group AutoTutor is a fully automated computer tutor that is currently being developed by the Tutoring Research Group (TRG). AutoTutor is a working system that attempts to comprehend students’ natural language contributions and then respond to the student input by simulating the dialogue moves of human tutors. AutoTutor differs from other natural language tutors in several ways. 
First, AutoTutor does not restrict the natural language input of the student like other systems (e.g., Adele (Shaw, Johnson, & Ganeshan, 1999); the Ymir agents (Cassell & Thórisson, 1999); Cirscim-Tutor (Hume, Michael, Rovick, & Evens, 1996; Zhou et al., 1999); Atlas (Freedman, 1999); and Basic Electricity and Electronics (Moore, 1995; Rose, Di Eugenio, & Moore, 1999)). These systems tend to limit student input to a small subset of judiciously worded speech acts. Second, AutoTutor does not allow the user to substitute natural language contributions with GUI menu options like those in the Atlas and Adele systems. The third difference involves the open-world nature of AutoTutor’s content domain (i.e., computer literacy). The previously mentioned tutoring systems are relatively more closed-world in nature, and therefore, constrain the scope of student contributions. The current version of AutoTutor simulates the tutorial dialog moves of normal, untrained tutors; however, plans for subsequent versions include the integration of more sophisticated ideal tutoring strategies. AutoTutor is currently designed to assist college students learn about topics covered in an introductory computer literacy course. In a typical tutoring session with AutoTutor, students will learn the fundamentals of computer hardware, the operating system, and the Internet. A Brief Sketch of AutoTutor AutoTutor is an animated pedagogical agent that serves as a conversational partner with the student. AutoTutor’s interface is comprised of four features: a two-dimensional, talking head, a text box for typed student input, a text box that displays the problem/question being discussed, and a graphics box that displays pictures and animations that are related to the topic at hand. AutoTutor begins the session by introducing himself and then presents the student with a question or problem that is selected from a curriculum script. The question/problem remains in a text box at the top of the screen until AutoTutor moves on to the next topic. For some questions and problems, there are graphical displays and animations that appear in a specially designated box on the screen. Once AutoTutor has presented the student with a problem or question, a multi-turn tutorial dialog occurs between AutoTutor and the learner. All student contributions are typed into the keyboard and appear in a text box at the bottom of the screen. AutoTutor responds to each student contribution with one or a combination of pedagogically appropriate dialog moves. These dialog moves are conveyed via synthesized speech, appropriate intonation, facial expressions, and gestures and do not appear in text form on the screen. In the future, we hope to have AutoTutor handle speech recognition, so students can speak their contributions. However, current speech recognition packages require time-consuming training that is not optimal for systems that interact with multiple users. The various modules that enable AutoTutor to interact with the learner will be described in subsequent sections of the paper. For now, however, it is important to note that our initial goals for building AutoTutor have been achieved. That is, we have designed a computer tutor that participates in a conversation with the learner while simulating the dialog moves of normal human tutors. WHY SIMULATE NORMAL HUMAN TUTORS? It has been well documented that normal, untrained human tutors are effective. 
Effect sizes ranging between .5 and 2.3 have been reported in studies where student learning gains were measured (Bloom, 1984; Cohen, Kulik, & Kulik, 1982). For quite a while, these rather large effect sizes were somewhat puzzling. That is, normal tutors typically do not have expert domain knowledge nor do they have knowledge about sophisticated tutoring strategies. In order to gain a better understanding of the primary mechanisms that are responsible for student learning Simulating Human Tutor Dialog Moves in AutoTutor gains, a handful of researchers have systematically analyzed the dialogue that occurs between normal, untrained tutors and students (Graesser & Person, 1994; Graesser et al., 1995; Person & Graesser, 1999; Person et al., 1994; Person et al., 1995). Graesser, Person, and colleagues analyzed over 100 hours of tutoring interactions and identified two prominent features of human tutoring dialogs: (1) a five-step dialog frame that is unique to tutoring interactions, and (2) a set of tutor-initiated dialog moves that serve specific pedagogical functions. We believe these two features are responsible for the positive learning outcomes that occur in typical tutoring settings, and further, these features can be implemented in a tutoring system more easily than the sophisticated methods and strategies that have been advocated by other educational researchers and ITS developers. Five-step Dialog Frame The structure of human tutorial dialogs differs from learning dialogs that often occur in classrooms. Mehan (1979) and others have reported a 3-step pattern that is prevalent in classroom interactions. This pattern is often referred to as IRE, which stands for Initiation (a question or claim articulated by the teacher), Response (an answer or comment provided by the student), and Evaluation (teacher evaluates the student contribution). In tutoring, however, the dialog is managed by a 5-step dialog frame (Graesser & Person, 1994; Graesser et al., 1995). The five steps in this frame are presented below. Step 1: Tutor asks question (or presents problem). Step 2: Learner answers question (or begins to solve problem). Step 3: Tutor gives short immediate feedback on the quality of the answer (or solution). Step 4: Tutor and learner collaboratively improve the quality of the answer. Step 5: Tutor assesses learner’s understanding of the answer. This 5-step dialog frame in tutoring is a significant augmentation over the 3-step dialog frame in classrooms. We believe that the advantage of tutoring over classroom settings lies primarily in Step 4. Typically, Step 4 is a lengthy multi-turn dialog in which the tutor and student collaboratively contribute to the explanation that answers the question or solves the problem. At a macro-level, the dialog that occurs between AutoTutor and the learner conforms to Steps 1 through 4 of the 5-step frame. For example, at the beginning of each new topic, AutoTutor presents the learner with a problem or asks the learner a question (Step 1). The learner then attempts to solve the problem or answer the question (Step 2). Next, AutoTutor provides some type of short, evaluative feedback (Step 3). During Step 4, AutoTutor employs a variety of dialog moves (see next section) that encourage learner participation. Thus, ins",
"title": ""
},
{
"docid": "d428715497a2de16437a0b8f11fb69a0",
"text": "Fog or Edge computing has recently attracted broad attention from both industry and academia. It is deemed as a paradigm shift from the current centralized cloud computing model and could potentially bring a “Fog-IoT” architecture that would significantly benefit the future ubiquitous Internet of Things (IoT) systems and applications. However, it takes a series of key enabling technologies including emerging technologies to realize such a vision. In this article, we will survey these key enabling technologies with specific focuses on security and scalability, which are two very important and much-needed characteristics for future large-scale deployment. We aim to draw an overall big picture of the future for the research and development in these areas.",
"title": ""
},
{
"docid": "472ff656dc35c5ed37aae6e3a82e3192",
"text": "Status of This Memo This memo provides information for the Internet community. It does not specify an Internet standard of any kind. Distribution of this memo is unlimited. Abstract JavaScript Object Notation (JSON) is a lightweight, text-based, language-independent data interchange format. It was derived from the ECMAScript Programming Language Standard. JSON defines a small set of formatting rules for the portable representation of structured data.",
"title": ""
},
{
"docid": "962831a1fa8771c68feb894dc2c63943",
"text": "San-Francisco in the US and Natal in Brazil are two coastal cities which are known rather for its tech scene and natural beauty than for its criminal activities. We analyze characteristics of the urban environment in these two cities, deploying a machine learning model to detect categories and hotspots of criminal activities. We propose an extensive set of spatio-temporal & urban features which can significantly improve the accuracy of machine learning models for these tasks, one of which achieved Top 1% performance on a Crime Classification Competition by kaggle.com. Extensive evaluation on several years of crime records from both cities show how some features — such as the street network — carry important information about criminal activities.",
"title": ""
},
{
"docid": "fa03a0640ada358378f1b4915aa68be2",
"text": "Recent evidence suggests that there are two possible systems for empathy: a basic emotional contagion system and a more advanced cognitive perspective-taking system. However, it is not clear whether these two systems are part of a single interacting empathy system or whether they are independent. Additionally, the neuroanatomical bases of these systems are largely unknown. In this study, we tested the hypothesis that emotional empathic abilities (involving the mirror neuron system) are distinct from those related to cognitive empathy and that the two depend on separate anatomical substrates. Subjects with lesions in the ventromedial prefrontal (VM) or inferior frontal gyrus (IFG) cortices and two control groups were assessed with measures of empathy that incorporate both cognitive and affective dimensions. The findings reveal a remarkable behavioural and anatomic double dissociation between deficits in cognitive empathy (VM) and emotional empathy (IFG). Furthermore, precise anatomical mapping of lesions revealed Brodmann area 44 to be critical for emotional empathy while areas 11 and 10 were found necessary for cognitive empathy. These findings are consistent with these cortices being different in terms of synaptic hierarchy and phylogenetic age. The pattern of empathy deficits among patients with VM and IFG lesions represents a first direct evidence of a double dissociation between emotional and cognitive empathy using the lesion method.",
"title": ""
},
{
"docid": "d4d802b296b210a1957b1a214d9fd9fb",
"text": "Many task domains require robots to interpret and act upon natural language commands which are given by people and which refer to the robot’s physical surroundings. Such interpretation is known variously as the symbol grounding problem (Harnad, 1990), grounded semantics (Feldman et al., 1996) and grounded language acquisition (Nenov and Dyer, 1993, 1994). This problem is challenging because people employ diverse vocabulary and grammar, and because robots have substantial uncertainty about the nature and contents of their surroundings, making it difficult to associate the constitutive language elements (principally noun phrases and spatial relations) of the command text to elements of those surroundings. Symbolic models capture linguistic structure but have not scaled successfully to handle the diverse language produced by untrained users. Existing statistical approaches can better handle diversity, but have not to date modeled complex linguistic structure, limiting achievable accuracy. Recent hybrid approaches have addressed limitations in scaling and complexity, but have not effectively associated linguistic and perceptual features. Our framework, called Generalized Grounding Graphs (G), addresses these issues by defining a probabilistic graphical model dynamically according to the linguistic parse structure of a natural language command. This approach scales effectively, handles linguistic diversity, and enables the system to associate parts of a command with the specific objects, places, and events in the external world to which they refer. We show that robots can learn word meanings and use those learned meanings to robustly follow natural language commands produced by untrained users. We demonstrate our approach for both mobility commands (e.g. route directions like “Go down the hallway through the door”) and mobile manipulation commands (e.g. physical directives like “Pick up the pallet on the truck”) involving a variety of semi-autonomous robotic platforms, including a wheelchair, a microair vehicle, a forklift, and the Willow Garage PR2. The first two authors contributed equally to this paper. 1 ar X iv :1 71 2. 01 09 7v 1 [ cs .C L ] 2 9 N ov 2 01 7",
"title": ""
},
{
"docid": "b941dc9133a12aad0a75d41112e91aa8",
"text": "Recurrent neural network grammars (RNNG) are a recently proposed probabilistic generative modeling family for natural language. They show state-ofthe-art language modeling and parsing performance. We investigate what information they learn, from a linguistic perspective, through various ablations to the model and the data, and by augmenting the model with an attention mechanism (GA-RNNG) to enable closer inspection. We find that explicit modeling of composition is crucial for achieving the best performance. Through the attention mechanism, we find that headedness plays a central role in phrasal representation (with the model’s latent attention largely agreeing with predictions made by hand-crafted head rules, albeit with some important differences). By training grammars without nonterminal labels, we find that phrasal representations depend minimally on nonterminals, providing support for the endocentricity hypothesis.",
"title": ""
},
{
"docid": "86ffd10b7f5f49f8e917be87cdbcb02d",
"text": "Audit logs are considered good practice for business systems, and are required by federal regulations for secure systems, drug approval data, medical information disclosure, financial records, and electronic voting. Given the central role of audit logs, it is critical that they are correct and inalterable. It is not sufficient to say, “our data is correct, because we store all interactions in a separate audit log.” The integrity of the audit log itself must also be guaranteed. This paper proposes mechanisms within a database management system (DBMS), based on cryptographically strong one-way hash functions, that prevent an intruder, including an auditor or an employee or even an unknown bug within the DBMS itself, from silently corrupting the audit log. We propose that the DBMS store additional information in the database to enable a separate audit log validator to examine the database along with this extra information and state conclusively whether the audit log has been compromised. We show with an implementation on a high-performance storage engine that the overhead for auditing is low and that the validator can efficiently and correctly determine if the audit log has been compromised.",
"title": ""
},
{
"docid": "1b9bcb2ab5bc0b2b2e475066a1f78fbe",
"text": "Fragility curves are becoming increasingly common components of flood risk assessments. This report introduces the concept of the fragility curve and shows how fragility curves are related to more familiar reliability concepts, such as the deterministic factor of safety and the relative reliability index. Examples of fragility curves are identified in the literature on structures and risk assessment to identify what methods have been used to develop fragility curves in practice. Four basic approaches are identified: judgmental, empirical, hybrid, and analytical. Analytical approaches are, by far, the most common method encountered in the literature. This group of methods is further decomposed based on whether the limit state equation is an explicit function or an implicit function and on whether the probability of failure is obtained using analytical solution methods or numerical solution methods. Advantages and disadvantages of the various approaches are considered. DISCLAIMER: The contents of this report are not to be used for advertising, publication, or promotional purposes. Citation of trade names does not constitute an official endorsement or approval of the use of such commercial products. All product names and trademarks cited are the property of their respective owners. The findings of this report are not to be construed as an official Department of the Army position unless so designated by other authorized documents. DESTROY THIS REPORT WHEN NO LONGER NEEDED. DO NOT RETURN IT TO THE ORIGINATOR.",
"title": ""
},
{
"docid": "a7cdfdefc87e899596579826dbb137a4",
"text": "Purpose\nThe purpose of this tutorial is to provide an overview of the benefits and challenges associated with the early identification of dyslexia.\n\n\nMethod\nThe literature on the early identification of dyslexia is reviewed. Theoretical arguments and research evidence are summarized. An overview of response to intervention as a method of early identification is provided, and the benefits and challenges associated with it are discussed. Finally, the role of speech-language pathologists in the early identification process is addressed.\n\n\nConclusions\nEarly identification of dyslexia is crucial to ensure that children are able to maximize their educational potential, and speech-language pathologists are well placed to play a role in this process. However, early identification alone is not sufficient-difficulties with reading may persist or become apparent later in schooling. Therefore, continuing progress monitoring and access to suitable intervention programs are essential.",
"title": ""
},
{
"docid": "ae8292c58a58928594d5f3730a6feacf",
"text": "Photoplethysmography (PPG) signals, captured using smart phones are generally noisy in nature. Although they have been successfully used to determine heart rate from frequency domain analysis, further indirect markers like blood pressure (BP) require time domain analysis for which the signal needs to be substantially cleaned. In this paper we propose a methodology to clean such noisy PPG signals. Apart from filtering, the proposed approach reduces the baseline drift of PPG signal to near zero. Furthermore it models each cycle of PPG signal as a sum of 2 Gaussian functions which is a novel contribution of the method. We show that, the noise cleaning effect produces better accuracy and consistency in estimating BP, compared to the state of the art method that uses the 2-element Windkessel model on features derived from raw PPG signal, captured from an Android phone.",
"title": ""
},
{
"docid": "8cdd54a8bd288692132b57cb889b2381",
"text": "This research deals with the soft computing methodology of fuzzy cognitive map (FCM). Here a mathematical description of FCM is presented and a new methodology based on fuzzy logic techniques for developing the FCM is examined. The capability and usefulness of FCM in modeling complex systems and the application of FCM to modeling and describing the behavior of a heat exchanger system is presented. The applicability of FCM to model the supervisor of complex systems is discussed and the FCM-supervisor for evaluating the performance of a system is constructed; simulation results are presented and discussed.",
"title": ""
},
{
"docid": "90d57d4b7fcd45c35e9e738a29badde7",
"text": "This paper deals with the problem of optimizing a factory floor layout in a Slovenian furniture factory. First, the current state of the manufacturing system is analyzed by constructing a discrete event simulation (DES) model that reflects the manufacturing processes. The company produces over 10,000 different products, and their manufacturing processes include approximately 30,000 subprocesses. Therefore, manually constructing a model to include every subprocess is not feasible. To overcome this problem, a method for automated model construction was developed to construct a DES model based on a selection of manufacturing orders and relevant subprocesses. The obtained simulation model provided insight into the manufacturing processes and enable easy modification of model parameters for optimizing the manufacturing processes. Finally, the optimization problem was solved: the total distance the products had to traverse between machines was minimized by devising an optimal machine layout. With the introduction of certain simplifications, the problem was best described as a quadratic assignment problem. A novel heuristic method based on force-directed graph drawing algorithms was developed. Optimizing the floor layout resulted in a significant reduction of total travel distance for the products.",
"title": ""
},
{
"docid": "d29485bc844995b639bb497fb05fcb6a",
"text": "Vol. LII (June 2015), 375–393 375 © 2015, American Marketing Association ISSN: 0022-2437 (print), 1547-7193 (electronic) *Paul R. Hoban is Assistant Professor of Marketing, Wisconsin School of Business, University of Wisconsin–Madison (e-mail: phoban@ bus. wisc. edu). Randolph E. Bucklin is Professor of Marketing, Peter W. Mullin Chair in Management, UCLA Anderson School of Management, University of California, Los Angeles (e-mail: randy.bucklin@anderson. ucla. edu). Avi Goldfarb served as associate editor for this article. PAUL R. HOBAN and RANDOLPH E. BUCKLIN*",
"title": ""
},
{
"docid": "79102cc14ce0d11b52b4288d2e52de10",
"text": "This paper presents a text detection method based on Extremal Regions (ERs) and Corner-HOG feature. Local Histogram of Oriented Gradient (HOG) extracted around corners (Corner-HOG) is used to effectively prune the non-text components in the component tree. Experimental results show that the Corner-HOG based pruning method can discard an average of 83.06% of all ERs in an image while preserving a recall of 90.51% of the text components. The remaining ERs are then grouped into text lines and candidate text lines are verified using black-white transition feature and the covariance descriptor of HOG. Experimental results on the 2011 Robust Reading Competition dataset show that the proposed text detection method provides promising performance.",
"title": ""
},
{
"docid": "d4d24bee47b97e1bf4aadad0f3993e78",
"text": "An aircraft landed safely is the result of a huge organizational effort required to cope with a complex system made up of humans, technology and the environment. The aviation safety record has improved dramatically over the years to reach an unprecedented low in terms of accidents per million take-offs, without ever achieving the “zero accident” target. The introduction of automation on board airplanes must be acknowledged as one of the driving forces behind the decline in the accident rate down to the current level.",
"title": ""
},
{
"docid": "e141a2e89edc2398c27a740a0bc885c0",
"text": "Modern information retrieval (IR) systems exhibit user dynamics through interactivity. These dynamic aspects of IR, including changes found in data, users, and systems, are increasingly being utilized in search engines. Session search is one such IR task—document retrieval within a session. During a session, a user constantly modifies queries to find documents that fulfill an information need. Existing IR techniques for assisting the user in this task are limited in their ability to optimize over changes, learn with a minimal computational footprint, and be responsive. This article proposes a novel query change retrieval model (QCM), which uses syntactic editing changes between consecutive queries, as well as the relationship between query changes and previously retrieved documents, to enhance session search. We propose modeling session search as a Markov decision process (MDP). We consider two agents in this MDP: the user agent and the search engine agent. The user agent’s actions are query changes that we observe, and the search engine agent’s actions are term weight adjustments as proposed in this work. We also investigate multiple query aggregation schemes and their effectiveness on session search. Experiments show that our approach is highly effective and outperforms top session search systems in TREC 2011 and TREC 2012.",
"title": ""
},
{
"docid": "412e10ae26c0abcb37379c6b37ea022a",
"text": "This paper presents the Gavagai Living Lexicon, which is an online distributional semantic model currently available in 20 different languages. We describe the underlying distributional semantic model, and how we have solved some of the challenges in applying such a model to large amounts of streaming data. We also describe the architecture of our implementation, and discuss how we deal with continuous quality assurance of the lexicon.",
"title": ""
}
] | scidocsrr |
64369bf5f3f3924cce8fb7f37cc9b129 | Understanding symmetries in deep networks | [
{
"docid": "10c357d046dbf27cab92b1c3f91affb1",
"text": "We propose a novel deep architecture, SegNet, for semantic pixel wise image labelling 1. SegNet has several attractive properties; (i) it only requires forward evaluation of a fully learnt function to obtain smooth label predictions, (ii) with increasing depth, a larger context is considered for pixel labelling which improves accuracy, and (iii) it is easy to visualise the effect of feature activation(s) in the pixel label space at any depth. SegNet is composed of a stack of encoders followed by a corresponding decoder stack which feeds into a soft-max classification layer. The decoders help map low resolution feature maps at the output of the encoder stack to full input image size feature maps. This addresses an important drawback of recent deep learning approaches which have adopted networks designed for object categorization for pixel wise labelling. These methods lack a mechanism to map deep layer feature maps to input dimensions. They resort to ad hoc methods to upsample features, e.g. by replication. This results in noisy predictions and also restricts the number of pooling layers in order to avoid too much upsampling and thus reduces spatial context. SegNet overcomes these problems by learning to map encoder outputs to image pixel labels. We test the performance of SegNet on outdoor RGB scenes from CamVid, KITTI and indoor scenes from the NYU dataset. Our results show that SegNet achieves state-of-the-art performance even without use of additional cues such as depth, video frames or post-processing with CRF models.",
"title": ""
},
{
"docid": "60ea2144687d867bb4f6b21e792a8441",
"text": "Stochastic gradient descent is a simple approach to find the local minima of a cost function whose evaluations are corrupted by noise. In this paper, we develop a procedure extending stochastic gradient descent algorithms to the case where the function is defined on a Riemannian manifold. We prove that, as in the Euclidian case, the gradient descent algorithm converges to a critical point of the cost function. The algorithm has numerous potential applications, and is illustrated here by four examples. In particular a novel gossip algorithm on the set of covariance matrices is derived and tested numerically.",
"title": ""
}
] | [
{
"docid": "48168ed93d710d3b85b7015f2c238094",
"text": "ion and hierarchical information processing are hallmarks of human and animal intelligence underlying the unrivaled flexibility of behavior in biological systems. Achieving such flexibility in artificial systems is challenging, even with more and more computational power. Here, we investigate the hypothesis that abstraction and hierarchical information processing might in fact be the consequence of limitations in information-processing power. In particular, we study an information-theoretic framework of bounded rational decision-making that trades off utility maximization against information-processing costs. We apply the basic principle of this framework to perception-action systems with multiple information-processing nodes and derive bounded-optimal solutions. We show how the formation of abstractions and decision-making hierarchies depends on information-processing costs. We illustrate the theoretical ideas with example simulations and conclude by formalizing a mathematically unifying optimization principle that could potentially be extended to more complex systems.",
"title": ""
},
{
"docid": "abb748541b980385e4b8bc477c5adc0e",
"text": "Spin–orbit torque, a torque brought about by in-plane current via the spin–orbit interactions in heavy-metal/ferromagnet nanostructures, provides a new pathway to switch the magnetization direction. Although there are many recent studies, they all build on one of two structures that have the easy axis of a nanomagnet lying orthogonal to the current, that is, along the z or y axes. Here, we present a new structure with the third geometry, that is, with the easy axis collinear with the current (along the x axis). We fabricate a three-terminal device with a Ta/CoFeB/MgO-based stack and demonstrate the switching operation driven by the spin–orbit torque due to Ta with a negative spin Hall angle. Comparisons with different geometries highlight the previously unknown mechanisms of spin–orbit torque switching. Our work offers a new avenue for exploring the physics of spin–orbit torque switching and its application to spintronics devices.",
"title": ""
},
{
"docid": "0b1baa3190abb39284f33b8e73bcad1d",
"text": "Despite significant advances in machine learning and perception over the past few decades, perception algorithms can still be unreliable when deployed in challenging time-varying environments. When these systems are used for autonomous decision-making, such as in self-driving vehicles, the impact of their mistakes can be catastrophic. As such, it is important to characterize the performance of the system and predict when and where it may fail in order to take appropriate action. While similar in spirit to the idea of introspection, this work introduces a new paradigm for predicting the likely performance of a robot’s perception system based on past experience in the same workspace. In particular, we propose two models that probabilistically predict perception performance from observations gathered over time. While both approaches are place-specific, the second approach additionally considers appearance similarity when incorporating past observations. We evaluate our method in a classical decision-making scenario in which the robot must choose when and where to drive autonomously in 60 km of driving data from an urban environment. Results demonstrate that both approaches lead to fewer false decisions (in terms of incorrectly offering or denying autonomy) for two different detector models, and show that leveraging visual appearance within a state-of-the-art navigation framework increases the accuracy of our performance predictions.",
"title": ""
},
{
"docid": "4b057d86825e346291d675e0c1285fad",
"text": "We describe theclipmap, a dynamic texture representation that efficiently caches textures of arbitrarily large size in a finite amount of physical memory for rendering at real-time rates. Further, we describe a software system for managing clipmaps that supports integration into demanding real-time applications. We show the scale and robustness of this integrated hardware/software architecture by reviewing an application virtualizing a 170 gigabyte texture at 60 Hertz. Finally, we suggest ways that other rendering systems may exploit the concepts underlying clipmaps to solve related problems. CR",
"title": ""
},
{
"docid": "9e50093d32e0a8c6ab40b1eb2c063a04",
"text": "Credit card fraud detection is a very challenging problem because of the specific nature of transaction data and the labeling process. The transaction data are peculiar because they are obtained in a streaming fashion, and they are strongly imbalanced and prone to non-stationarity. The labeling is the outcome of an active learning process, as every day human investigators contact only a small number of cardholders (associated with the riskiest transactions) and obtain the class (fraud or genuine) of the related transactions. An adequate selection of the set of cardholders is therefore crucial for an efficient fraud detection process. In this paper, we present a number of active learning strategies and we investigate their fraud detection accuracies. We compare different criteria (supervised, semi-supervised and unsupervised) to query unlabeled transactions. Finally, we highlight the existence of an exploitation/exploration trade-off for active learning in the context of fraud detection, which has so far been overlooked in the literature.",
"title": ""
},
{
"docid": "c86fbf52aecb41ce4f3d806f62965c50",
"text": "Multi-core end-systems use Receive Side Scaling (RSS) to parallelize protocol processing. RSS uses a hash function on the standard flow descriptors and an indirection table to assign incoming packets to receive queues which are pinned to specific cores. This ensures flow affinity in that the interrupt processing of all packets belonging to a specific flow is processed by the same core. A key limitation of standard RSS is that it does not consider the application process that consumes the incoming data in determining the flow affinity. In this paper, we carry out a detailed experimental analysis of the performance impact of the application affinity in a 40 Gbps testbed network with a dual hexa-core end-system. We show, contrary to conventional wisdom, that when the application process and the flow are affinitized to the same core, the performance (measured in terms of end-to-end TCP throughput) is significantly lower than the line rate. Near line rate performance is observed when the flow and the application process are affinitized to different cores belonging to the same socket. Furthermore, affinitizing the application and the flow to cores on different sockets results in significantly lower throughput than the line rate. These results arise due to the memory bottleneck, which is demonstrated using preliminary correlational data on the cache hit rate in the core that services the application process.",
"title": ""
},
{
"docid": "69eceabd9967260cbdec56d02bcafd83",
"text": "A modified Vivaldi antenna is proposed in this paper especially for the millimeter-wave application. The metal support frame is used to fix the structured substrate and increased the front-to-back ratio as well as the radiation gain. Detailed design process are presented, following which one sample is designed with its working frequency band from 75GHz to 150 GHz. The sample is also fabricated and measured. Good agreements between simulated results and measured results are obtained.",
"title": ""
},
{
"docid": "db5865f8f8701e949a9bb2f41eb97244",
"text": "This paper proposes a method for constructing local image descriptors which efficiently encode texture information and are suitable for histogram based representation of image regions. The method computes a binary code for each pixel by linearly projecting local image patches onto a subspace, whose basis vectors are learnt from natural images via independent component analysis, and by binarizing the coordinates in this basis via thresholding. The length of the binary code string is determined by the number of basis vectors. Image regions can be conveniently represented by histograms of pixels' binary codes. Our method is inspired by other descriptors which produce binary codes, such as local binary pattern and local phase quantization. However, instead of heuristic code constructions, the proposed approach is based on statistics of natural images and this improves its modeling capacity. The experimental results show that our method improves accuracy in texture recognition tasks compared to the state-of-the-art.",
"title": ""
},
{
"docid": "682686007186f8af85f2eb27b49a2df5",
"text": "In the last few years, deep learning has lead to very good performance on a variety of problems, such as object recognition, speech recognition and natural language processing. Among different types of deep neural networks, convolutional neural networks have been most extensively studied. Due to the lack of training data and computing power in early days, it is hard to train a large high-capacity convolutional neural network without overfitting. Recently, with the rapid growth of data size and the increasing power of graphics processor unit, many researchers have improved the convolutional neural networks and achieved state-of-the-art results on various tasks. In this paper, we provide a broad survey of the recent advances in convolutional neural networks. Besides, we also introduce some applications of convolutional neural networks in computer vision.",
"title": ""
},
{
"docid": "82b8f70d4705caae5d334b721a8e2c7e",
"text": "This paper presents the design concept, models, and open-loop control of a particular form of a variablereluctance spherical motor (VRSM), referred here as a spherical wheel motor (SWM). Unlike existing spherical motors where design focuses have been on controlling the three degrees of freedom (DOF) angular displacements, the SWM offers a means to control the orientation of a continuously rotating shaft in an open-loop (OL) fashion. We provide a formula for deriving different switching sequences (full step and fractional step) for a specified current magnitude and pole configurations. The concept feasibility of an OL controlled SWM has been experimentally demonstrated on a prototype that has 8 rotor permanent-magnet (PM) pole-pairs and 10 stator electromagnet (EM) pole-pairs.",
"title": ""
},
{
"docid": "712d292b38a262a8c37679c9549a631d",
"text": "Addresses for correspondence: Dr Sara de Freitas, London Knowledge Lab, Birkbeck College, University of London, 23–29 Emerald Street, London WC1N 3QS. UK. Tel: +44(0)20 7763 2117; fax: +44(0)20 7242 2754; email: [email protected]. Steve Jarvis, Vega Group PLC, 2 Falcon Way, Shire Park, Welwyn Garden City, Herts AL7 1TW, UK. Tel: +44 (0)1707 362602; Fax: +44 (0)1707 393909; email: [email protected]",
"title": ""
},
{
"docid": "4681e8f07225e305adfc66cd1b48deb8",
"text": "Collaborative work among students, while an important topic of inquiry, needs further treatment as we still lack the knowledge regarding obstacles that students face, the strategies they apply, and the relations among personal and group aspects. This article presents a diary study of 54 master’s students conducting group projects across four semesters. A total of 332 diary entries were analysed using the C5 model of collaboration that incorporates elements of communication, contribution, coordination, cooperation and collaboration. Quantitative and qualitative analyses show how these elements relate to one another for students working on collaborative projects. It was found that face-to-face communication related positively with satisfaction and group dynamics, whereas online chat correlated positively with feedback and closing the gap. Managing scope was perceived to be the most common challenge. The findings suggest the varying affordances and drawbacks of different methods of communication, collaborative work styles and the strategies of group members.",
"title": ""
},
{
"docid": "59323291555a82ef99013bd4510b3020",
"text": "This paper aims to classify and analyze recent as well as classic image registration techniques. Image registration is the process of super imposing images of the same scene taken at different times, location and by different sensors. It is a key enabling technology in medical image analysis for integrating and analyzing information from various modalities. Basically image registration finds temporal correspondences between the set of images and uses transformation model to infer features from these correspondences.The approaches for image registration can beclassified according to their nature vizarea-based and feature-based and dimensionalityvizspatial domain and frequency domain. The procedure of image registration by intensity based model, spatial domain transform, Rigid transform and Non rigid transform based on the above mentioned classification has been performed and the eminence of image is measured by the three quality parameters such as SNR, PSNR and MSE. The techniques have been implemented and inferred thatthe non-rigid transform exhibit higher perceptual quality and offer visually sharper image than other techniques.Problematic issues of image registration techniques and outlook for the future research are discussed. This work may be one of the comprehensive reference sources for the researchers involved in image registration.",
"title": ""
},
{
"docid": "26fad325410424982d29577e49797159",
"text": "How do the statements made by people in online political discussions affect other people's willingness to express their own opinions, or argue for them? And how does group interaction ultimately shape individual opinions? We examine carefully whether and how patterns of group discussion shape (a) individuals' expressive behavior within those discussions and (b) changes in personal opinions. This research proposes that the argumentative \"climate\" of group opinion indeed affects postdiscussion opinions, and that a primary mechanism responsible for this effect is an intermediate influence on individual participants' own expressions during the online discussions. We find support for these propositions in data from a series of 60 online group discussions, involving ordinary citizens, about the tax plans offered by rival U.S. presidential candidates George W. Bush and Al Gore in 2000. This journal article is available at ScholarlyCommons: http://repository.upenn.edu/asc_papers/99 Normative and Informational Influences in Online Political Discussions Vincent Price, Lilach Nir, & Joseph N. Cappella 1 Annenberg School for Communication, University of Pennsylvania, Philadelphia, PA 19104 6220 2 Department of Communication and the Department of Political Science, Hebrew University of Jerusalem, Jerusalem, Israel, 91905 How do the statements made by people in online political discussions affect other peo ple’s willingness to express their own opinions, or argue for them? And how does group interaction ultimately shape individual opinions? We examine carefully whether and how patterns of group discussion shape (a) individuals’ expressive behavior within those discussions and (b) changes in personal opinions. This research proposes that the argu mentative ‘‘climate’’ of group opinion indeed affects postdiscussion opinions, and that a primary mechanism responsible for this effect is an intermediate influence on individ ual participants’ own expressions during the online discussions. We find support for these propositions in data from a series of 60 online group discussions, involving ordi nary citizens, about the tax plans offered by rival U.S. presidential candidates George W. Bush and Al Gore in 2000. Investigations of social influence and public opinion go hand in hand. Opinions may exist as psychological phenomena in individual minds, but the processes that shape these opinions—at least, public opinions—are inherently social–psychological. The notion that group interaction can influence individual opinions is widely accepted. Indeed, according to many participatory theories of democracy, lively exchanges among citizens are deemed central to the formation of sound or ‘‘true’’ public opinion, which is forged in the fire of group discussion. This truly public opinion is commonly contrasted with mass or ‘‘pseudo’’-opinion developed in isolation by disconnected media consumers responding individually to the news (e.g., Blumer, 1946; Fishkin, 1991, 1995; Graber, 1982). Although discussion is celebrated in democratic theory as a critical element of proper opinion formation, it also brings with it a variety of potential downsides. 
These include a possible tyranny of the majority (e.g., de Tocqueville, 1835/1945), distorted expression of opinions resulting from fear of social isolation (Noelle-Neumann, 1984), or shifts of opinion to more extreme positions than most individuals might actually prefer (see, e.g., Janis, 1972, on dangerous forms of ‘‘group think,’’ or more recently Sunstein, 2001, on the polarizing effects of ‘‘enclave’’ communication on the Web). The problem of how to foster productive social interaction while avoiding potential dysfunctions of group influence has occupied a large place in normative writings on public opinion and democracy. Modern democracies guarantee freedom of association and public expression; they also employ systems and procedures aimed at protecting collective decision making from untoward social pressure, including not only the use of secret ballots in elections but also more generally republican legislatures and executive and judicial offices that by design are insulated from too much democracy, that is, from direct popular control (e.g., Madison, 1788/1966). However, steady advances in popular education and growth of communication media have enlarged expectations of the ordinary citizen and brought calls for more direct, popular participation in government. In particular, dramatic technological changes over the past several decades—and especially the rise of interactive forms of electronic communication enabled by the Internet and World Wide Web—have fueled hopes for new, expansive, and energized forms of ‘‘teledemocracy’’ (e.g., Arterton, 1987). Online political discussion is thus of considerable interest to students of public opinion and political communication. It has been credited with creating vital spaces for public conversation, opening in a new ‘‘public sphere’’ of the sort envisioned by Habermas (1962/1989), (see, e.g., Papacharissi, 2004; Poor, 2005; Poster, 1997). Though still not a routine experience for citizens, it has been steadily growing in prevalence and likely import for popular opinion formation. Recent surveys indicate that close to a third of Internet users regularly engage with groups online, with nearly 10% reporting that they joined online discussions about the 2004 presidential election (Pew Research Center, 2005). Online political discussion offers new and potentially quite powerful modes of scientific observation as well. Despite continuous methodological improvements, the mainstay of public opinion research, the general-population survey, has always consisted of randomly sampled, one-on-one, respondent-to-interviewer ‘‘conversations’’ aimed at extracting precoded responses or short verbal answers to structured questionnaires. Web-based technologies, however, may now permit randomly constituted respondent-withrespondent group conversations. The conceptual fit between such conversations and the phenomenon of public opinion, itself grounded in popular discussion, renders it quite appealing. Developments in electronic data storage and retrieval, and telecommunication networks of increasing channel capacity, now make possible an integration of general-population survey techniques and more qualitative research approaches, such as focus group methods, that have become popular in large part owing to the sense that they offer a more refined understanding of popular thought than might be gained from structured surveys (e.g., Morgan, 1997). Perhaps most important, the study of online discussion opens new theoretical avenues for public opinion research. 
Understanding online citizen interactions calls for bringing together several strands of theory in social psychology, smallgroup decision making, and political communication that have heretofore been disconnected (Price, 1992). Social influence in opinion formation Certainly, the most prominent theory of social influence in public opinion research has been Noelle-Neumann’s (1984) spiral of silence. Citing early research on group conformity processes, such as that of Asch (1956), Noelle-Neumann argued that media depictions of the normative ‘‘climate of opinion’’ have a silencing effect on those who hold minority viewpoints. The reticence of minorities to express their views contributes to the appearance of a solid majority opinion, which, in turn, produces a spiral of silence that successively emboldens the majority and enervates the minority. Meta-analytic evaluations of research on the hypothetical silencing effect of the mediated climate of opinion suggest that such effects, if they indeed exist, appear to be fairly small (Glynn, Hayes, & Shanahan, 1997); nevertheless, the theory has garnered considerable empirical attention and remains influential. In experimental social psychology, group influence has been the object of systematic study for over half a century. Although no single theoretical framework is available for explaining how social influence operates, some important organizing principles and concepts have emerged over time (Price & Oshagan, 1995). One of the most useful heuristics, proposed by Deutsch and Gerard (1955), distinguishes two broad forms of social influence (see also Kaplan & Miller, 1987). Normative social influence occurs when someone is motivated by a desire to conform to the positive expectations of other people. Motivations for meeting these normative expectations lie in the various rewards that might accrue (self-esteem or feelings of social approval) or possible negative sanctions that might result from deviant behavior (alienation, excommunication, or social isolation). Normative social influence is clearly the basis of Noelle-Neumann’s (1984) theorizing about minorities silencing themselves in the face of majority pressure. Informational social influence, in contrast, occurs when people accept the words, opinions, and deeds of others as valid evidence about reality. People learn about the world, in part, from discovering that they disagree (e.g., Burnstein & Vinokur, 1977; Vinokur & Burnstein, 1974). They are influenced by groups not only because of group norms, but also because of arguments that arise in groups, through a comparison of their views to those expressed by others (see also the distinction between normative and comparative functions of reference groups in sociology, e.g., Hyman & Singer, 1968; Kelley, 1952). Although the distinction between informational and normative influence has proven useful and historically important in small-group research, it can become cloudy in many instances. This is so because normative pressure and persuasive information operate in similar ways within groups, and often with similar effects. For example, the tendency of groups to polarize—that is, to move following discussion to extreme positions in the direction that group members were initially inc",
"title": ""
},
{
"docid": "9b942a1342eb3c4fd2b528601fa42522",
"text": "Peer and self-assessment offer an opportunity to scale both assessment and learning to global classrooms. This article reports our experiences with two iterations of the first large online class to use peer and self-assessment. In this class, peer grades correlated highly with staff-assigned grades. The second iteration had 42.9% of students’ grades within 5% of the staff grade, and 65.5% within 10%. On average, students assessed their work 7% higher than staff did. Students also rated peers’ work from their own country 3.6% higher than those from elsewhere. We performed three experiments to improve grading accuracy. We found that giving students feedback about their grading bias increased subsequent accuracy. We introduce short, customizable feedback snippets that cover common issues with assignments, providing students more qualitative peer feedback. Finally, we introduce a data-driven approach that highlights high-variance items for improvement. We find that rubrics that use a parallel sentence structure, unambiguous wording, and well-specified dimensions have lower variance. After revising rubrics, median grading error decreased from 12.4% to 9.9%.",
"title": ""
},
{
"docid": "e52f5174a9d5161e18eced6e2eb36684",
"text": "The clinical use of ivabradine has and continues to evolve along channels that are predicated on its mechanism of action. It selectively inhibits the funny current (If) in sinoatrial nodal tissue, resulting in a decrease in the rate of diastolic depolarization and, consequently, the heart rate, a mechanism that is distinct from those of other negative chronotropic agents. Thus, it has been evaluated and is used in select patients with systolic heart failure and chronic stable angina without clinically significant adverse effects. Although not approved for other indications, ivabradine has also shown promise in the management of inappropriate sinus tachycardia. Here, the authors review the mechanism of action of ivabradine and salient studies that have led to its current clinical indications and use.",
"title": ""
},
{
"docid": "2e66317dfe4005c069ceac2d4f9e3877",
"text": "The Semantic Web presents the vision of a distributed, dynamically growing knowledge base founded on formal logic. Common users, however, seem to have problems even with the simplest Boolean expression. As queries from web search engines show, the great majority of users simply do not use Boolean expressions. So how can we help users to query a web of logic that they do not seem to understand? We address this problem by presenting Ginseng, a quasi natural language guided query interface to the Semantic Web. Ginseng relies on a simple question grammar which gets dynamically extended by the structure of an ontology to guide users in formulating queries in a language seemingly akin to English. Based on the grammar Ginseng then translates the queries into a Semantic Web query language (RDQL), which allows their execution. Our evaluation with 20 users shows that Ginseng is extremely simple to use without any training (as opposed to any logic-based querying approach) resulting in very good query performance (precision = 92.8%, recall = 98.4%). We, furthermore, found that even with its simple grammar/approach Ginseng could process over 40% of questions from a query corpus without modification.",
"title": ""
},
{
"docid": "41b712d0d485c65a8dff32725c215f97",
"text": "In this article, we present a novel, multi-user, virtual reality environment for the interactive, collaborative 3D analysis of large 3D scans and the technical advancements that were necessary to build it: a multi-view rendering system for large 3D point clouds, a suitable display infrastructure, and a suite of collaborative 3D interaction techniques. The cultural heritage site of Valcamonica in Italy with its large collection of prehistoric rock-art served as an exemplary use case for evaluation. The results show that our output-sensitive level-of-detail rendering system is capable of visualizing a 3D dataset with an aggregate size of more than 14 billion points at interactive frame rates. The system design in this exemplar application results from close exchange with a small group of potential users: archaeologists with expertise in rockart. The system allows them to explore the prehistoric art and its spatial context with highly realistic appearance. A set of dedicated interaction techniques was developed to facilitate collaborative visual analysis. A multi-display workspace supports the immediate comparison of geographically distributed artifacts. An expert review of the final demonstrator confirmed the potential for added value in rock-art research and the usability of our collaborative interaction techniques.",
"title": ""
},
{
"docid": "34f7497eaae4a6b56089889781935263",
"text": "The research on two-wheeled inverted pendulum (T-WIP) mobile robots or commonly known as balancing robots have gained momentum over the last decade in a number of robotic laboratories around the world (Solerno & Angeles, 2003;Grasser et al., 2002; Solerno & Angeles, 2007;Koyanagi, Lida & Yuta, 1992;Ha & Yuta, 1996; Kim, Kim & Kwak, 2003). This chapter describes the hardware design of such a robot. The objective of the design is to develop a T-WIP mobile robot as well as MATLABTM interfacing configuration to be used as flexible platform which comprises of embedded unstable linear plant intended for research and teaching purposes. Issues such as selection of actuators and sensors, signal processing units, MATLABTM Real Time Workshop coding, modeling and control scheme is addressed and discussed. The system is then tested using a well-known state feedback controller to verify its functionality.",
"title": ""
}
] | scidocsrr |
4eec4732e5cfa7dc0ec716a7e2475a23 | Time delay deep neural network-based universal background models for speaker recognition | [
{
"docid": "b7597e1f8c8ae4b40f5d7d1fe1f76a38",
"text": "In this paper we present a Time-Delay Neural Network (TDNN) approach to phoneme recognition which is characterized by two important properties. 1) Using a 3 layer arrangement of simple computing units, a hierarchy can be constructed that allows for the formation of arbitrary nonlinear decision surfaces. The TDNN learns these decision surfaces automatically using error backpropagation 111. 2) The time-delay arrangement enables the network to discover acoustic-phonetic features and the temporal relationships between them independent of position in time and hence not blurred by temporal shifts",
"title": ""
},
{
"docid": "86f478fbf4e38ce1f1d0119a3175adfe",
"text": "We introduce recurrent neural networks (RNNs) for acoustic modeling which are unfolded in time for a fixed number of time steps. The proposed models are feedforward networks with the property that the unfolded layers which correspond to the recurrent layer have time-shifted inputs and tied weight matrices. Besides the temporal depth due to unfolding, hierarchical processing depth is added by means of several non-recurrent hidden layers inserted between the unfolded layers and the output layer. The training of these models: (a) has a complexity that is comparable to deep neural networks (DNNs) with the same number of layers; (b) can be done on frame-randomized minibatches; (c) can be implemented efficiently through matrix-matrix operations on GPU architectures which makes it scalable for large tasks. Experimental results on the Switchboard 300 hours English conversational telephony task show a 5% relative improvement in word error rate over state-of-the-art DNNs trained on FMLLR features with i-vector speaker adaptation and hessianfree sequence discriminative training.",
"title": ""
}
] | [
{
"docid": "80d920f1f886b81e167d33d5059b8afe",
"text": "Agriculture is one of the most important aspects of human civilization. The usages of information and communication technologies (ICT) have significantly contributed in the area in last two decades. Internet of things (IOT) is a technology, where real life physical objects (e.g. sensor nodes) can work collaboratively to create an information based and technology driven system to maximize the benefits (e.g. improved agricultural production) with minimized risks (e.g. environmental impact). Implementation of IOT based solutions, at each phase of the area, could be a game changer for whole agricultural landscape, i.e. from seeding to selling and beyond. This article presents a technical review of IOT based application scenarios for agriculture sector. The article presents a brief introduction to IOT, IOT framework for agricultural applications and discusses various agriculture specific application scenarios, e.g. farming resource optimization, decision support system, environment monitoring and control systems. The article concludes with the future research directions in this area.",
"title": ""
},
{
"docid": "7e07856be3374b4eed585e430d236ebc",
"text": "Exploiting the complex structure of relational data enables to build better models by taking into account the additional information provided by the links between objects. We extend this idea to the Semantic Web by introducing our novel SPARQL-ML approach to perform data mining for Semantic Web data. Our approach is based on traditional SPARQL and statistical relational learning methods, such as Relational Probability Trees and Relational Bayesian Classifiers. We analyze our approach thoroughly conducting three sets of experiments on synthetic as well as real-world data sets. Our analytical results show that our approach can be used for any Semantic Web data set to perform instance-based learning and classification. A comparison to kernel methods used in Support Vector Machines shows that our approach is superior in terms of classification accuracy. Adding Data Mining Support to SPARQL via Statistical Relational Learning Methods Christoph Kiefer, Abraham Bernstein, and André Locher Department of Informatics, University of Zurich, Switzerland {kiefer,bernstein}@ifi.uzh.ch, [email protected] Abstract. Exploiting the complex structure of relational data enables to build better models by taking into account the additional information provided by the links between objects. We extend this idea to the Semantic Web by introducing our novel SPARQL-ML approach to perform data mining for Semantic Web data. Our approach is based on traditional SPARQL and statistical relational learning methods, such as Relational Probability Trees and Relational Bayesian Classifiers. We analyze our approach thoroughly conducting three sets of experiments on synthetic as well as real-world data sets. Our analytical results show that our approach can be used for any Semantic Web data set to perform instance-based learning and classification. A comparison to kernel methods used in Support Vector Machines shows that our approach is superior in terms of classification accuracy. Exploiting the complex structure of relational data enables to build better models by taking into account the additional information provided by the links between objects. We extend this idea to the Semantic Web by introducing our novel SPARQL-ML approach to perform data mining for Semantic Web data. Our approach is based on traditional SPARQL and statistical relational learning methods, such as Relational Probability Trees and Relational Bayesian Classifiers. We analyze our approach thoroughly conducting three sets of experiments on synthetic as well as real-world data sets. Our analytical results show that our approach can be used for any Semantic Web data set to perform instance-based learning and classification. A comparison to kernel methods used in Support Vector Machines shows that our approach is superior in terms of classification accuracy.",
"title": ""
},
{
"docid": "e52bac5b665aae5cf020538ab37356bc",
"text": "The greater decrease of conduction velocity in sensory than in motor fibres of the peroneal, median and ulnar nerves (particularly in the digital segments) found in patients with chronic carbon disulphide poisoning, permitted the diagnosis of polyneuropathy to be made in the subclinical stage, even while the conduction in motor fibres was still within normal limits. A process of axonal degeneration is presumed to underlie occurrence of neuropathy consequent to carbon disulphide poisoning.",
"title": ""
},
{
"docid": "c47c2f7c7958843d67d19837ba081b16",
"text": "Research produced through international collaboration is often more highly cited than other work, but is it also more novel? Using measures of conventionality and novelty developed by Uzzi et al. (2013) and replicated by Boyack and Klavans (2013), we test for novelty and conventionality in international research collaboration. Many studies have shown that international collaboration is more highly cited than national or sole-authored papers. Others have found that coauthored papers are more novel. Scholars have suggested that diverse groups have a greater chance of producing creative work. As such, we expected to find that international collaboration is also more novel. Using data from Web of Science and Scopus in 2005, we failed to show that international collaboration tends to produce more novel articles. In fact, international collaboration appeared to produce less novel and more conventional research. Transaction costs and the limits of global communication may be suppressing novelty, while an “audience effect” may be responsible for higher citation rates. Closer examination across the sciences, social sciences, and arts and humanities, as well as examination of six scientific specialties further illuminates the interplay of conventionality and novelty in work produced by international research teams.",
"title": ""
},
{
"docid": "208b4cb4dc4cee74b9357a5ebb2f739c",
"text": "We report improved AMR parsing results by adding a new action to a transitionbased AMR parser to infer abstract concepts and by incorporating richer features produced by auxiliary analyzers such as a semantic role labeler and a coreference resolver. We report final AMR parsing results that show an improvement of 7% absolute in F1 score over the best previously reported result. Our parser is available at: https://github.com/ Juicechuan/AMRParsing",
"title": ""
},
{
"docid": "24a117cf0e59591514dd8630bcd45065",
"text": "This work presents a coarse-grained distributed genetic algorithm (GA) for RNA secondary structure prediction. This research builds on previous work and contains two new thermodynamic models, INN and INN-HB, which add stacking-energies using base pair adjacencies. Comparison tests were performed against the original serial GA on known structures that are 122, 543, and 784 nucleotides in length on a wide variety of parameter settings. The effects of the new models are investigated, the predicted structures are compared to known structures and the GA is compared against a serial GA with identical models. Both algorithms perform well and are able to predict structures with high accuracy for short sequences.",
"title": ""
},
{
"docid": "e20f6ef6524a422c80544eaf590e326d",
"text": "Computing the semantic similarity/relatedness between terms is an important research area for several disciplines, including artificial intelligence, cognitive science, linguistics, psychology, biomedicine and information retrieval. These measures exploit knowledge bases to express the semantics of concepts. Some approaches, such as the information theoretical approaches, rely on knowledge structure, while others, such as the gloss-based approaches, use knowledge content. Firstly, based on structure, we propose a new intrinsic Information Content (IC) computing method which is based on the quantification of the subgraph formed by the ancestors of the target concept. Taxonomic measures including the IC-based ones consume the topological parameters that must be extracted from taxonomies considered as Directed Acyclic Graphs (DAGs). Accordingly, we propose a routine of graph algorithms that are able to provide some basic parameters, such as depth, ancestors, descendents, Lowest Common Subsumer (LCS). The IC-computing method is assessed using several knowledge structures which are: the noun and verb WordNet “is a” taxonomies, Wikipedia Category Graph (WCG), and MeSH taxonomy. We also propose an aggregation schema that exploits the WordNet “is a” taxonomy and WCG in a complementary way through the IC-based measures to improve coverage capacity. Secondly, taking content into consideration, we propose a gloss-based semantic similarity measure that operates based on the noun weighting mechanism using our IC-computing method, as well as on the WordNet, Wiktionary and Wikipedia resources. Further evaluation is performed on various items, including nouns, verbs, multiword expressions and biomedical datasets, using well-recognized benchmarks. The results indicate an improvement in terms of similarity and relatedness assessment accuracy.",
"title": ""
},
{
"docid": "350dc562863b8702208bfb41c6ceda6a",
"text": "THE use of formal devices for assessing function is becoming standard in agencies serving the elderly. In the Gerontological Society's recent contract study on functional assessment (Howell, 1968), a large assortment of rating scales, checklists, and other techniques in use in applied settings was easily assembled. The present state of the trade seems to be one in which each investigator or practitioner feels an inner compusion to make his own scale and to cry that other existent scales cannot possibly fit his own setting. The authors join this company in presenting two scales first standardized on their own population (Lawton, 1969). They take some comfort, however, in the fact that one scale, the Physical Self-Maintenance Scale (PSMS), is largely a scale developed and used by other investigators (Lowenthal, 1964), which was adapted for use in our own institution. The second of the scales, the Instrumental Activities of Daily Living Scale (IADL), taps a level of functioning heretofore inadequately represented in attempts to assess everyday functional competence. Both of the scales have been tested further for their usefulness in a variety of types of institutions and other facilities serving community-resident older people. Before describing in detail the behavior measured by these two scales, we shall briefly describe the schema of competence into which these behaviors fit (Lawton, 1969). Human behavior is viewed as varying in the degree of complexity required for functioning in a variety of tasks. The lowest level is called life maintenance, followed by the successively more complex levels of func-",
"title": ""
},
{
"docid": "cb0b7879f61630b467aa595d961bfcef",
"text": "UNLABELLED\nGlucagon-like peptide 1 (GLP-1[7-36 amide]) is an incretin hormone primarily synthesized in the lower gut (ileum, colon/rectum). Nevertheless, there is an early increment in plasma GLP-1 immediately after ingesting glucose or mixed meals, before nutrients have entered GLP-1 rich intestinal regions. The responsible signalling pathway between the upper and lower gut is not clear. It was the aim of this study to see, whether small intestinal resection or colonectomy changes GLP-1[7-36 amide] release after oral glucose. In eight healthy controls, in seven patients with inactive Crohn's disease (no surgery), in nine patients each after primarily jejunal or ileal small intestinal resections, and in six colonectomized patients not different in age (p = 0.10), body-mass-index (p = 0.24), waist-hip-ratio (p = 0.43), and HbA1c (p = 0.22), oral glucose tolerance tests (75 g) were performed in the fasting state. GLP-1[7-36 amide], insulin C-peptide, GIP and glucagon (specific (RIAs) were measured over 240 min.\n\n\nSTATISTICS\nRepeated measures ANOVA, t-test (significance: p < 0.05). A clear and early (peak: 15-30 min) GLP-1[7-36 amide] response was observed in all subjects, without any significant difference between gut-resected and control groups (p = 0.95). There were no significant differences in oral glucose tolerance (p = 0.21) or in the suppression of pancreatic glucagon (p = 0.36). Colonectomized patients had a higher insulin (p = 0.011) and C-peptide (p = 0.0023) response in comparison to all other groups. GIP responses also were higher in the colonectomized patients (p = 0.0005). Inactive Crohn's disease and resections of the small intestine as well as proctocolectomy did not change overall GLP-1[7-36 amide] responses and especially not the early increment after oral glucose. This may indicate release of GLP-1[7-36 amide] after oral glucose from the small number of GLP-1[7-36 amide] producing L-cells in the upper gut rather than from the main source in the ileum, colon and rectum. Colonectomized patients are characterized by insulin hypersecretion, which in combination with their normal oral glucose tolerance possibly indicates a reduced insulin sensitivity in this patient group. GIP may play a role in mediating insulin hypersecretion in these patients.",
"title": ""
},
{
"docid": "13a4dccde0ae401fc39b50469a0646b6",
"text": "The stability theorem for persistent homology is a central result in topological data analysis. While the original formulation of the result concerns the persistence barcodes of R-valued functions, the result was later cast in a more general algebraic form, in the language of persistence modules and interleavings. In this paper, we establish an analogue of this algebraic stability theorem for zigzag persistence modules. To do so, we functorially extend each zigzag persistence module to a two-dimensional persistence module, and establish an algebraic stability theorem for these extensions. One part of our argument yields a stability result for free two-dimensional persistence modules. As an application of our main theorem, we strengthen a result of Bauer et al. on the stability of the persistent homology of Reeb graphs. Our main result also yields an alternative proof of the stability theorem for level set persistent homology of Carlsson et al.",
"title": ""
},
{
"docid": "349a9374e3ff6c068f26c0a1b0dfe3a2",
"text": "Heart failure (HF) is a growing healthcare burden and one of the leading causes of hospitalizations and readmission. Preventing readmissions for HF patients is an increasing priority for clinicians, researchers, and various stakeholders. The following review will discuss the interventions found to reduce readmissions for patients and improve hospital performance on the 30-day readmission process measure. While evidence-based therapies for HF management have proliferated, the consistent implementation of these therapies and development of new strategies to more effectively prevent readmissions remain areas for continued improvement.",
"title": ""
},
{
"docid": "097912a74fbc55ba7909b6e0622c0b42",
"text": "Many ubiquitous computing applications involve human activity recognition based on wearable sensors. Although this problem has been studied for a decade, there are a limited number of publicly available datasets to use as standard benchmarks to compare the performance of activity models and recognition algorithms. In this paper, we describe the freely available USC human activity dataset (USC-HAD), consisting of well-defined low-level daily activities intended as a benchmark for algorithm comparison particularly for healthcare scenarios. We briefly review some existing publicly available datasets and compare them with USC-HAD. We describe the wearable sensors used and details of dataset construction. We use high-precision well-calibrated sensing hardware such that the collected data is accurate, reliable, and easy to interpret. The goal is to make the dataset and research based on it repeatable and extendible by others.",
"title": ""
},
{
"docid": "9c5d3f89d5207b42d7e2c8803b29994c",
"text": "With the advent of data mining, machine learning has come of age and is now a critical technology in many businesses. However, machine learning evolved in a different research context to that in which it now finds itself employed. A particularly important problem in the data mining world is working effectively with large data sets. However, most machine learning research has been conducted in the context of learning from very small data sets. To date most approaches to scaling up machine learning to large data sets have attempted to modify existing algorithms to deal with large data sets in a more computationally efficient and effective manner. But is this necessarily the best method? This paper explores the possibility of designing algorithms specifically for large data sets. Specifically, the paper looks at how increasing data set size affects bias and variance error decompositions for classification algorithms. Preliminary results of experiments to determine these effects are presented, showing that, as hypothesised variance can be expected to decrease as training set size increases. No clear effect of training set size on bias was observed. These results have profound implications for data mining from large data sets, indicating that developing effective learning algorithms for large data sets is not simply a matter of finding computationally efficient variants of existing learning algorithms.",
"title": ""
},
{
"docid": "223a7496c24dcf121408ac3bba3ad4e5",
"text": "Process control and SCADA systems, with their reliance on proprietary networks and hardware, have long been considered immune to the network attacks that have wreaked so much havoc on corporate information systems. Unfortunately, new research indicates this complacency is misplaced – the move to open standards such as Ethernet, TCP/IP and web technologies is letting hackers take advantage of the control industry’s ignorance. This paper summarizes the incident information collected in the BCIT Industrial Security Incident Database (ISID), describes a number of events that directly impacted process control systems and identifies the lessons that can be learned from these security events.",
"title": ""
},
{
"docid": "e8edb58537ada97ee5da365fa096ae2d",
"text": "In this paper, we present a novel semi-supervised learning framework based on `1 graph. The `1 graph is motivated by that each datum can be reconstructed by the sparse linear superposition of the training data. The sparse reconstruction coefficients, used to deduce the weights of the directed `1 graph, are derived by solving an `1 optimization problem on sparse representation. Different from conventional graph construction processes which are generally divided into two independent steps, i.e., adjacency searching and weight selection, the graph adjacency structure as well as the graph weights of the `1 graph is derived simultaneously and in a parameter-free manner. Illuminated by the validated discriminating power of sparse representation in [16], we propose a semi-supervised learning framework based on `1 graph to utilize both labeled and unlabeled data for inference on a graph. Extensive experiments on semi-supervised face recognition and image classification demonstrate the superiority of our proposed semi-supervised learning framework based on `1 graph over the counterparts based on traditional graphs.",
"title": ""
},
{
"docid": "74ccb28a31d5a861bea1adfaab2e9bf1",
"text": "For many decades CMOS devices have been successfully scaled down to achieve higher speed and increased performance of integrated circuits at lower cost. Today’s charge-based CMOS electronics encounters two major challenges: power dissipation and variability. Spintronics is a rapidly evolving research and development field, which offers a potential solution to these issues by introducing novel ‘more than Moore’ devices. Spin-based magnetoresistive random-access memory (MRAM) is already recognized as one of the most promising candidates for future universal memory. Magnetic tunnel junctions, the main elements of MRAM cells, can also be used to build logic-in-memory circuits with non-volatile storage elements on top of CMOS logic circuits, as well as versatile compact on-chip oscillators with low power consumption. We give an overview of CMOS-compatible spintronics applications. First, we present a brief introduction to the physical background considering such effects as magnetoresistance, spin-transfer torque (STT), spin Hall effect, and magnetoelectric effects. We continue with a comprehensive review of the state-of-the-art spintronic devices for memory applications (STT-MRAM, domain wallmotion MRAM, and spin–orbit torque MRAM), oscillators (spin torque oscillators and spin Hall nano-oscillators), logic (logic-in-memory, all-spin logic, and buffered magnetic logic gate grid), sensors, and random number generators. Devices with different types of resistivity switching are analyzed and compared, with their advantages highlighted and challenges revealed. CMOScompatible spintronic devices are demonstrated beginning with predictive simulations, proceeding to their experimental confirmation and realization, and finalized by the current status of application in modern integrated systems and circuits. We conclude the review with an outlook, where we share our vision on the future applications of the prospective devices in the area.",
"title": ""
},
{
"docid": "1278d0b3ea3f06f52b2ec6b20205f8d0",
"text": "The future global Internet is going to have to cater to users that will be largely mobile. Mobility is one of the main factors affecting the design and performance of wireless networks. Mobility modeling has been an active field for the past decade, mostly focusing on matching a specific mobility or encounter metric with little focus on matching protocol performance. This study investigates the adequacy of existing mobility models in capturing various aspects of human mobility behavior (including communal behavior), as well as network protocol performance. This is achieved systematically through the introduction of a framework that includes a multi-dimensional mobility metric space. We then introduce COBRA, a new mobility model capable of spanning the mobility metric space to match realistic traces. A methodical analysis using a range of protocol (epidemic, spraywait, Prophet, and Bubble Rap) dependent and independent metrics (modularity) of various mobility models (SMOOTH and TVC) and traces (university campuses, and theme parks) is done. Our results indicate significant gaps in several metric dimensions between real traces and existing mobility models. Our findings show that COBRA matches communal aspect and realistic protocol performance, reducing the overhead gap (w.r.t existing models) from 80% to less than 12%, showing the efficacy of our framework.",
"title": ""
},
{
"docid": "98cd53e6bf758a382653cb7252169d22",
"text": "We introduce a novel malware detection algorithm based on the analysis of graphs constructed from dynamically collected instruction traces of the target executable. These graphs represent Markov chains, where the vertices are the instructions and the transition probabilities are estimated by the data contained in the trace. We use a combination of graph kernels to create a similarity matrix between the instruction trace graphs. The resulting graph kernel measures similarity between graphs on both local and global levels. Finally, the similarity matrix is sent to a support vector machine to perform classification. Our method is particularly appealing because we do not base our classifications on the raw n-gram data, but rather use our data representation to perform classification in graph space. We demonstrate the performance of our algorithm on two classification problems: benign software versus malware, and the Netbull virus with different packers versus other classes of viruses. Our results show a statistically significant improvement over signature-based and other machine learning-based detection methods.",
"title": ""
},
{
"docid": "41e9dac7301e00793c6e4891e07b53fa",
"text": "We present an intriguing property of visual data that we observe in our attempt to isolate the influence of data for learning a visual representation. We observe that we can get better performance than existing model by just conditioning the existing representation on a million unlabeled images without any extra knowledge. As a by-product of this study, we achieve results better than prior state-of-theart for surface normal estimation on NYU-v2 depth dataset, and improved results for semantic segmentation using a selfsupervised representation on PASCAL-VOC 2012 dataset.",
"title": ""
},
{
"docid": "ce9238236040aed852b1c8f255088b61",
"text": "This paper proposes a high efficiency LLC resonant inverter for induction heating applications by using asymmetrical voltage cancellation control. The proposed control method is implemented in a full-bridge topology for induction heating application. The operating frequency is automatically adjusted to maintain a small constant lagging phase angle under load parameter variation. The output power is controlled using the asymmetrical voltage cancellation technique. The LLC resonant tank is designed without the use of output transformer. This results in an increase of the net efficiency of the induction heating system. The validity of the proposed method is verified through computer simulation and hardware experiment at the operating frequency of 93 to 96 kHz.",
"title": ""
}
] | scidocsrr |
762b69459f5f9cbbb3e67b5bb6528518 | Modellingof a special class of spherical parallel manipulators with Euler parameters | [
{
"docid": "8fa0c59e04193ff1375b3ed544847229",
"text": "In this paper, the problem of workspace analysis of spherical parallel manipulators (SPMs) is addressed with respect to a spherical robotic wrist. The wrist is designed following a modular approach and capable of a unlimited rotation of rolling. An equation dealing with singularity surfaces is derived and branches of the singularity surfaces are identified. By using the Euler parameters, the singularity surfaces are generated in a solid unit sphere, the workspace analysis and dexterity evaluation hence being able to be performed in the confined region of the sphere. Examples of workspace evaluation of the spherical wrist and general SPMs are included to demonstrate the application of the proposed method.",
"title": ""
},
{
"docid": "b427ebf5f9ce8af9383f74dc86819583",
"text": "This paper deals with the in-depth kinematic analysis of a special parallel wrist, called the agile eye. The agile eye is a three-legged spherical parallel robot with revolute joints, in which all pairs of adjacent joint axes are orthogonal. Its most peculiar feature, demonstrated in this paper for the first time, is that its workspace is unlimited and flawed only by six singularity curves (instead of surfaces). These curves correspond to self-motions of the mobile platform and of the legs, or to a lockup configuration. This paper also demonstrates that the four solutions to the direct kinematics of the agile eye (assembly modes) have a simple direct relationship with the eight solutions to the inverse kinematics (working modes)",
"title": ""
}
] | [
{
"docid": "175fa180bc18a59dd6855d469aed91ec",
"text": "A new solution of the inverse kinematics task for a 3-DOF parallel manipulator with a R-P -S joint structure is obtained for a given position of end-effector in the form of simple position equations. Based on this the number of the inverse kinematics task solutions was investigated, in general, equal to four. We identify the size of the manipulator feasible area and simple relationships are found between the position and orientation of the platform. We prove a new theorem stating that, while the end-effector traces a circular horizontal path with its centre at the vertical z-axis, the norm of the joint coordinates vector remains constant.",
"title": ""
},
{
"docid": "d77a8c630e50ed2879cafba7367ed456",
"text": "A survey found the language in use in introductory programming classes in the top U.S. computer science schools.",
"title": ""
},
{
"docid": "99ddcb898895b04f4e86337fe35c1713",
"text": "Emerging self-driving vehicles are vulnerable to different attacks due to the principle and the type of communication systems that are used in these vehicles. These vehicles are increasingly relying on external communication via vehicular ad hoc networks (VANETs). VANETs add new threats to self-driving vehicles that contribute to substantial challenges in autonomous systems. These communication systems render self-driving vehicles vulnerable to many types of malicious attacks, such as Sybil attacks, Denial of Service (DoS), black hole, grey hole and wormhole attacks. In this paper, we propose an intelligent security system designed to secure external communications for self-driving and semi self-driving cars. The proposed scheme is based on Proportional Overlapping Score (POS) to decrease the number of features found in the Kyoto benchmark dataset. The hybrid detection system relies on the Back Propagation neural networks (BP), to detect a common type of attack in VANETs: Denial-of-Service (DoS). The experimental results show that the proposed BP-IDS is capable of identifying malicious vehicles in self-driving and semi self-driving vehicles.",
"title": ""
},
{
"docid": "c7993af6bf01f8b35f5494e5a564d757",
"text": "Microservice Architectures (MA) have the potential to increase the agility of software development. In an era where businesses require software applications to evolve to support emerging software requirements, particularly for Internet of Things (IoT) applications, we examine the issue of microservice granularity and explore its effect upon application latency. Two approaches to microservice deployment are simulated; the first with microservices in a single container, and the second with microservices partitioned across separate containers. We observed a negligible increase in service latency for the multiple container deployment over a single container.",
"title": ""
},
{
"docid": "b0b84a9f7f694dd8d7e0deb1533c4de5",
"text": "Medical institutes use Electronic Medical Record (EMR) to record a series of medical events, including diagnostic information (diagnosis codes), procedures performed (procedure codes) and admission details. Plenty of data mining technologies are applied in the EMR data set for knowledge discovery, which is precious to medical practice. The knowledge found is conducive to develop treatment plans, improve health care and reduce medical expenses, moreover, it could also provide further assistance to predict and control outbreaks of epidemic disease. The growing social value it creates has made it a hot spot for experts and scholars. In this paper, we will summarize the research status of data mining technologies on EMR, and analyze the challenges that EMR research is confronting currently.",
"title": ""
},
{
"docid": "a78caf89bb51dca3a8a95f7736ae1b2b",
"text": "The understanding of sentences involves not only the retrieval of the meaning of single words, but the identification of the relation between a verb and its arguments. The way the brain manages to process word meaning and syntactic relations during language comprehension on-line still is a matter of debate. Here we review the different views discussed in the literature and report data from crucial experiments investigating the temporal and neurotopological parameters of different information types encoded in verbs, i.e. word category information, the verb's argument structure information, the verb's selectional restriction and the morphosyntactic information encoded in the verb's inflection. The neurophysiological indices of the processes dealing with these different information types suggest an initial independence of the processing of word category information from other information types as the basis of local phrase structure building, and a later processing stage during which different information types interact. The relative ordering of the subprocesses appears to be universal, whereas the absolute timing of when during later phrases interaction takes places varies as a function of when the relevant information becomes available. Moreover, the neurophysiological indices for non-local dependency relations vary as a function of the morphological richness of the language.",
"title": ""
},
{
"docid": "e680f8b83e7a2137321cc644724827de",
"text": "A dual-band antenna is developed on a flexible Liquid Crystal Polymer (LCP) substrate for simultaneous operation at 2.45 and 5.8 GHz in high frequency Radio Frequency IDentification (RFID) systems. The response of the low profile double T-shaped slot antenna is preserved when the antenna is placed on platforms such as wood and cardboard, and when bent to conform to a cylindrical plastic box. Furthermore, experiments show that the antenna is still operational when placed at a distance of around 5cm from a metallic surface.",
"title": ""
},
{
"docid": "a8553e9f90e8766694f49dcfdeab83b7",
"text": "The need for solid-state ac-dc converters to improve power quality in terms of power factor correction, reduced total harmonic distortion at input ac mains, and precisely regulated dc output has motivated the investigation of several topologies based on classical converters such as buck, boost, and buck-boost converters. Boost converters operating in continuous-conduction mode have become particularly popular because reduced electromagnetic interference levels result from their utilization. Within this context, this paper introduces a bridgeless boost converter based on a three-state switching cell (3SSC), whose distinct advantages are reduced conduction losses with the use of magnetic elements with minimized size, weight, and volume. The approach also employs the principle of interleaved converters, as it can be extended to a generic number of legs per winding of the autotransformers and high power levels. A literature review of boost converters based on the 3SSC is initially presented so that key aspects are identified. The theoretical analysis of the proposed converter is then developed, while a comparison with a conventional boost converter is also performed. An experimental prototype rated at 1 kW is implemented to validate the proposal, as relevant issues regarding the novel converter are discussed.",
"title": ""
},
{
"docid": "66a49a50b63892a857a40531630be800",
"text": "We present a neural network architecture applied to the problem of refining a dense disparity map generated by a stereo algorithm to which we have no access. Our approach is able to learn which disparity values should be modified and how, from a training set of images, estimated disparity maps and the corresponding ground truth. Its only input at test time is a disparity map and the reference image. Two design characteristics are critical for the success of our network: (i) it is formulated as a recurrent neural network, and (ii) it estimates the output refined disparity map as a combination of residuals computed at multiple scales, that is at different up-sampling and down-sampling rates. The first property allows the network, which we named RecResNet, to progressively improve the disparity map, while the second property allows the corrections to come from different scales of analysis, addressing different types of errors in the current disparity map. We present competitive quantitative and qualitative results on the KITTI 2012 and 2015 benchmarks that surpass the accuracy of previous disparity refinement methods.",
"title": ""
},
{
"docid": "76d1509549ba64157911e6b723f6ebc5",
"text": "A single-stage soft-switching converter is proposed for universal line voltage applications. A boost type of active-clamp circuit is used to achieve zero-voltage switching operation of the power switches. A simple DC-link voltage feedback scheme is applied to the proposed converter. A resonant voltage-doubler rectifier helps the output diodes to achieve zero-current switching operation. The reverse-recovery losses of the output diodes can be eliminated without any additional components. The DC-link capacitor voltage can be reduced, providing reduced voltage stresses of switching devices. Furthermore, power conversion efficiency can be improved by the soft-switching operation of switching devices. The performance of the proposed converter is evaluated on a 160-W (50 V/3.2 A) experimental prototype. The proposed converter complies with International Electrotechnical Commission (IEC) 1000-3-2 Class-D requirements for the light-emitting diode power supply of large-sized liquid crystal displays, maintaining the DC-link capacitor voltage within 400 V under the universal line voltage (90-265 Vrms).",
"title": ""
},
{
"docid": "63b283d40abcccd17b4771535ac000e4",
"text": "Developing agents to engage in complex goaloriented dialogues is challenging partly because the main learning signals are very sparse in long conversations. In this paper, we propose a divide-and-conquer approach that discovers and exploits the hidden structure of the task to enable efficient policy learning. First, given successful example dialogues, we propose the Subgoal Discovery Network (SDN) to divide a complex goal-oriented task into a set of simpler subgoals in an unsupervised fashion. We then use these subgoals to learn a multi-level policy by hierarchical reinforcement learning. We demonstrate our method by building a dialogue agent for the composite task of travel planning. Experiments with simulated and real users show that our approach performs competitively against a state-of-theart method that requires human-defined subgoals. Moreover, we show that the learned subgoals are often human comprehensible.",
"title": ""
},
{
"docid": "1f4ff9d732b3512ee9b105f084edd3d2",
"text": "Today, as Network environments become more complex and cyber and Network threats increase, Organizations use wide variety of security solutions against today's threats. For proper and centralized control and management, range of security features need to be integrated into unified security package. Unified threat management (UTM) as a comprehensive network security solution, integrates all of security services such as firewall, URL filtering, virtual private networking, etc. in a single appliance. PfSense is a variant of UTM, and a customized FreeBSD (Unix-like operating system). Specially is used as a router and statefull firewall. It has many packages extend it's capabilities such as Squid3 package as a as a proxy server that cache data and SquidGuard, redirector and access controller plugin for squid3 proxy server. In this paper, with implementing UTM based on PfSense platform we use Squid3 proxy server and SquidGuard proxy filter to avoid extreme amount of unwanted uploading/ downloading over the internet by users in order to optimize our organization's bandwidth consumption. We begin by defining UTM and types of it, PfSense platform with it's key services and introduce a simple and operational solution for security stability and reducing the cost. Finally, results and statistics derived from this approach compared with the prior condition without PfSense platform.",
"title": ""
},
{
"docid": "a931f939e2e0c0f2f8940796ee23e957",
"text": "PURPOSE OF REVIEW\nMany patients requiring cardiac arrhythmia device surgery are on chronic oral anticoagulation therapy. The periprocedural management of their anticoagulation presents a dilemma to physicians, particularly in the subset of patients with moderate-to-high risk of arterial thromboembolic events. Physicians have responded by treating patients with bridging anticoagulation while oral anticoagulation is temporarily discontinued. However, there are a number of downsides to bridging anticoagulation around device surgery; there is a substantial risk of significant device pocket hematoma with important clinical sequelae; bridging anticoagulation may lead to more arterial thromboembolic events and bridging anticoagulation is expensive.\n\n\nRECENT FINDINGS\nIn response to these issues, a number of centers have explored the option of performing device surgery without cessation of oral anticoagulation. The observational data suggest a greatly reduced hematoma rate with this strategy. Despite these encouraging results, most physicians are reluctant to move to operating on continued Coumadin in the absence of confirmatory data from a randomized trial.\n\n\nSUMMARY\nWe have designed a prospective, single-blind, randomized, controlled trial to address this clinical question. In the conventional arm, patients will be bridged. In the experimental arm, patients will continue on oral anticoagulation and the primary outcome is clinically significant hematoma. Our study has clinical relevance to at least 70 000 patients per year in North America.",
"title": ""
},
{
"docid": "4e50e68e099ab77aedcb0abe8b7a9ca2",
"text": "In the downlink transmission scenario, power allocation and beamforming design at the transmitter are essential when using multiple antenna arrays. This paper considers a multiple input–multiple output broadcast channel to maximize the weighted sum-rate under the total power constraint. The classical weighted minimum mean-square error (WMMSE) algorithm can obtain suboptimal solutions but involves high computational complexity. To reduce this complexity, we propose a fast beamforming design method using unsupervised learning, which trains the deep neural network (DNN) offline and provides real-time service online only with simple neural network operations. The training process is based on an end-to-end method without labeled samples avoiding the complicated process of obtaining labels. Moreover, we use the “APoZ”-based pruning algorithm to compress the network volume, which further reduces the computational complexity and volume of the DNN, making it more suitable for low computation-capacity devices. Finally, the experimental results demonstrate that the proposed method improves computational speed significantly with performance close to the WMMSE algorithm.",
"title": ""
},
{
"docid": "54a35bf200d9af060ce38a9aec972f50",
"text": "The linear preferential attachment hypothesis has been shown to be quite successful in explaining the existence of networks with power-law degree distributions. It is then quite important to determine if this mechanism is the consequence of a general principle based on local rules. In this work it is claimed that an effective linear preferential attachment is the natural outcome of growing network models based on local rules. It is also shown that the local models offer an explanation for other properties like the clustering hierarchy and degree correlations recently observed in complex networks. These conclusions are based on both analytical and numerical results for different local rules, including some models already proposed in the literature.",
"title": ""
},
{
"docid": "e4dc1f30a914dc6f710f23b5bc047978",
"text": "Intelligence, expertise, ability and talent, as these terms have traditionally been used in education and psychology, are socially agreed upon labels that minimize the dynamic, evolving, and contextual nature of individual–environment relations. These hypothesized constructs can instead be described as functional relations distributed across whole persons and particular contexts through which individuals appear knowledgeably skillful. The purpose of this article is to support a concept of ability and talent development that is theoretically grounded in 5 distinct, yet interrelated, notions: ecological psychology, situated cognition, distributed cognition, activity theory, and legitimate peripheral participation. Although talent may be reserved by some to describe individuals possessing exceptional ability and ability may be described as an internal trait, in our description neither ability nor talent are possessed. Instead, they are treated as equivalent terms that can be used to describe functional transactions that are situated across person-in-situation. Further, and more important, by arguing that ability is part of the individual–environment transaction, we take the potential to appear talented out of the hands (or heads) of the few and instead treat it as an opportunity that is available to all although it may be actualized more frequently by some.",
"title": ""
},
{
"docid": "21197ea03a0c9ce6061ea524aca10b52",
"text": "Developers of gamified business applications face the challenge of creating motivating gameplay strategies and creative design techniques to deliver subject matter not typically associated with games in a playful way. We currently have limited models that frame what makes gamification effective (i.e., engaging people with a business application). Thus, we propose a design-centric model and analysis tool for gamification: The kaleidoscope of effective gamification. We take a look at current models of game design, self-determination theory and the principles of systems design to deconstruct the gamification layer in the design of these applications. Based on the layers of our model, we provide design guidelines for effective gamification of business applications.",
"title": ""
},
{
"docid": "2a58426989cbfab0be9e18b7ee272b0a",
"text": "Potholes are a nuisance, especially in the developing world, and can often result in vehicle damage or physical harm to the vehicle occupants. Drivers can be warned to take evasive action if potholes are detected in real-time. Moreover, their location can be logged and shared to aid other drivers and road maintenance agencies. This paper proposes a vehicle-based computer vision approach to identify potholes using a window-mounted camera. Existing literature on pothole detection uses either theoretically constructed pothole models or footage taken from advantageous vantage points at low speed, rather than footage taken from within a vehicle at speed. A distinguishing feature of the work presented in this paper is that a thorough exercise was performed to create an image library of actual and representative potholes under different conditions, and results are obtained using a part of this library. A model of potholes is constructed using the image library, which is used in an algorithmic approach that combines a road colour model with simple image processing techniques such as a Canny filter and contour detection. Using this approach, it was possible to detect potholes with a precision of 81.8% and recall of 74.4.%.",
"title": ""
},
{
"docid": "e68aac3565df039aa431bf2a69e27964",
"text": "region, a five-year-old girl with mild asthma presented to the emergency department of a children’s hospital in acute respiratory distress. She had an 11-day history of cough, rhinorrhea and progressive chest discomfort. She was otherwise healthy, with no history of severe respiratory illness, prior hospital admissions or immu nocompromise. Outside of infrequent use of salbutamol, she was not taking any medications, and her routine childhood immunizations, in cluding conjugate pneumococcal vaccine, were up to date. She had not received the pandemic influenza vaccine because it was not yet available for her age group. The patient had been seen previously at a community health centre a week into her symptoms, and a chest radiograph had shown perihi lar and peribronchial thickening but no focal con solidation, atelectasis or pleural effusion. She had then been reassessed 24 hours later at an influenza assessment centre and empirically started on oseltamivir. Two days later, with the onset of vomiting, diarrhea, fever and progressive shortness of breath, she was brought to the emergency department of the children’s hospital. On examination, she was in considerable distress; her heart rate was 170 beats/min, her respiratory rate was 60 breaths/min and her blood pressure was 117/57 mm Hg. Her oxygen saturations on room air were consistently 70%. On auscultation, she had decreased air entry to the right side with bronchial breath sounds. Repeat chest radiography showed almost complete opacification of the right hemithorax, air bronchograms in the middle and lower lobes, and minimal aeration to the apex. This was felt to be in keeping with whole lung consolidation and parapneumonic effusion. The left lung appeared normal. Blood tests done on admission showed a hemoglobin level of 122 (normal 110–140) g/L, a leukocyte count of 1.5 (normal 5.5–15.5) × 10/L (neutrophils 11% [normal 47%] and bands 19% [normal 5%]) and a platelet count of 92 (normal 217–533) × 10/L. Results of blood tests were otherwise unremarkable. Venous blood gas had a pH level of 7.32 (normal 7.35–7.42), partial pressure of carbon dioxide of 43 (normal 32– 43) mm Hg, a base deficit of 3.6 (normal –2 to 3) mmol/L, and a bicarbonate level of 21.8 (normal 21–26) mmol/L. The initial serum creatinine level was 43.0 (normal < 36) μmol/L and the urea level was 6.5 (normal 2.0–7.0) mmol/L, with no clinical evidence of renal dysfunction. Given the patient’s profound increased work of breathing, she was admitted to the intensive care unit (ICU), where intubation was required because of her continued decline over the next 24 hours. Blood cultures taken on admission were negative. Nasopharyngeal aspirates were negative on rapid respiratory viral testing, but antiviral treatment for presumed pandemic (H1N1) influenza was continued given her clinical presentation, the prevalence of pandemic influenza in the community and the low sensitivity of the test in the range of only 62%. Viral cultures were not done. Empiric treatment with intravenous cefotaxime (200 mg/kg/d) and vancomycin (40 mg/kg/d) was started in the ICU for broad antimicrobial coverage, including possible Cases",
"title": ""
},
{
"docid": "eef7ce5b4268054ed6c7de7fdbbf003e",
"text": "This paper proposes a new closed-loop synchronization algorithm, PLL (Phase-Locked Loop), for applications in power conditioner systems for single-phase networks. The structure presented is based on the correlation of the input signal with a complex signal generated from the use of an adaptive filter in a PLL algorithm in order to minimize the computational effort. Moreover, the adapted PLL presents a higher level of rejection for two particular disturbances: interharmonic and subharmonic, when compared to the original algorithm. Simulation and experimental results will be presented in order to prove the efficacy of the proposed adaptive algorithm.",
"title": ""
}
] | scidocsrr |
32808d28ff8781af0fba70b60890a6f5 | Accurate Continuous Sweeping Framework in Indoor Spaces With Backpack Sensor System for Applications to 3-D Mapping | [
{
"docid": "c5cc4da2906670c30fc0bac3040217bd",
"text": "Many popular problems in robotics and computer vision including various types of simultaneous localization and mapping (SLAM) or bundle adjustment (BA) can be phrased as least squares optimization of an error function that can be represented by a graph. This paper describes the general structure of such problems and presents g2o, an open-source C++ framework for optimizing graph-based nonlinear error functions. Our system has been designed to be easily extensible to a wide range of problems and a new problem typically can be specified in a few lines of code. The current implementation provides solutions to several variants of SLAM and BA. We provide evaluations on a wide range of real-world and simulated datasets. The results demonstrate that while being general g2o offers a performance comparable to implementations of state-of-the-art approaches for the specific problems.",
"title": ""
},
{
"docid": "d7f50c4b31e14f80fd84b3488f318539",
"text": "We propose a novel 6-degree-of-freedom (DoF) visual simultaneous localization and mapping (SLAM) method based on the structural regularity of man-made building environments. The idea is that we use the building structure lines as features for localization and mapping. Unlike other line features, the building structure lines encode the global orientation information that constrains the heading of the camera over time, eliminating the accumulated orientation errors and reducing the position drift in consequence. We extend the standard extended Kalman filter visual SLAM method to adopt the building structure lines with a novel parameterization method that represents the structure lines in dominant directions. Experiments have been conducted in both synthetic and real-world scenes. The results show that our method performs remarkably better than the existing methods in terms of position error and orientation error. In the test of indoor scenes of the public RAWSEEDS data sets, with the aid of a wheel odometer, our method produces bounded position errors about 0.79 m along a 967-m path although no loop-closing algorithm is applied.",
"title": ""
},
{
"docid": "93fe562da15b8babc98fb2c10d0f1082",
"text": "In this paper we address the problem of estimating the intrinsic parameters of a 3D LIDAR while at the same time computing its extrinsic calibration with respect to a rigidly connected camera. Existing approaches to solve this nonlinear estimation problem are based on iterative minimization of nonlinear cost functions. In such cases, the accuracy of the resulting solution hinges on the availability of a precise initial estimate, which is often not available. In order to address this issue, we divide the problem into two least-squares sub-problems, and analytically solve each one to determine a precise initial estimate for the unknown parameters. We further increase the accuracy of these initial estimates by iteratively minimizing a batch nonlinear least-squares cost function. In addition, we provide the minimal identifiability conditions, under which it is possible to accurately estimate the unknown parameters. Experimental results consisting of photorealistic 3D reconstruction of indoor and outdoor scenes, as well as standard metrics of the calibration errors, are used to assess the validity of our approach.",
"title": ""
}
] | [
{
"docid": "62bf93deeb73fab74004cb3ced106bac",
"text": "Since the publication of the Design Patterns book, a large number of object-oriented design patterns have been identified and codified. As part of the pattern form, objectoriented design patterns must indicate their relationships with other patterns, but these relationships are typically described very briefly, and different collections of patterns describe different relationships in different ways. In this paper we describe and classify the common relationships between object oriented design patterns. Practitioners can use these relationships to help them identity those patterns which may be applicable to a particular problem, and pattern writers can use these relationships to help them integrate new patterns into the body of the patterns literature.",
"title": ""
},
{
"docid": "fdff78b32803eb13904c128d8e011ea8",
"text": "The task of identifying when to take a conversational turn is an important function of spoken dialogue systems. The turn-taking system should also ideally be able to handle many types of dialogue, from structured conversation to spontaneous and unstructured discourse. Our goal is to determine how much a generalized model trained on many types of dialogue scenarios would improve on a model trained only for a specific scenario. To achieve this goal we created a large corpus of Wizard-of-Oz conversation data which consisted of several different types of dialogue sessions, and then compared a generalized model with scenario-specific models. For our evaluation we go further than simply reporting conventional metrics, which we show are not informative enough to evaluate turn-taking in a real-time system. Instead, we process results using a performance curve of latency and false cut-in rate, and further improve our model's real-time performance using a finite-state turn-taking machine. Our results show that the generalized model greatly outperformed the individual model for attentive listening scenarios but was worse in job interview scenarios. This implies that a model based on a large corpus is better suited to conversation which is more user-initiated and unstructured. We also propose that our method of evaluation leads to more informative performance metrics in a real-time system.",
"title": ""
},
{
"docid": "f94764347d07af17cd034e40be54bc4a",
"text": "Device level Self-Heating (SH) is becoming a limiting factor during traditional DC Hot Carrier stresses in bulk and SOI technologies. Consideration is given to device layout and design for Self-Heating minimization during HCI stress in SOI technologies, the effect of SH on activation energy (Ea) and the SH induced enhancement to degradation. Applying a methodology for SH temperature correction of extracted device lifetime, correlation is established between DC device level stress and AC device stress using a specially designed ring oscillator.",
"title": ""
},
{
"docid": "11578b2cd8be05e0162528b403b7caf3",
"text": "The aims of this paper are threefold. First we highlight the usefulness of generalized linear mixed models (GLMMs) in the modelling of portfolio credit default risk. The GLMM-setting allows for a flexible specification of the systematic portfolio risk in terms of observed fixed effects and unobserved random effects, in order to explain the phenomena of default dependence and time-inhomogeneity in empirical default data. Second we show that computational Bayesian techniques such as the Gibbs sampler can be successfully applied to fit models with serially correlated random effects, which are special instances of state space models. Third we provide an empirical study using Standard & Poor’s data on US firms. A model incorporating rating category and sector effects and a macroeconomic proxy variable for state-ofthe-economy suggests the presence of a residual, cyclical, latent component in the systematic risk.",
"title": ""
},
{
"docid": "9c41df95c11ec4bed3e0b19b20f912bb",
"text": "Text mining has been defined as “the discovery by computer of new, previously unknown information, by automatically extracting information from different written resources” [6]. Many other industries and areas can also benefit from the text mining tools that are being developed by a number of companies. This paper provides an overview of the text mining tools and technologies that are being developed and is intended to be a guide for organizations who are looking for the most appropriate text mining techniques for their situation. This paper also concentrates to design text and data mining tool to extract the valuable information from curriculum vitae according to concerned requirements. The tool clusters the curriculum vitae into several segments which will help the public and private concerns for their recruitment. Rule based approach is used to develop the algorithm for mining and also it is implemented to extract the valuable information from the curriculum vitae on the web. Analysis of Curriculum vitae is until now, a costly and manual activity. It is subject to all typical variations and limitations in its quality, depending of who is doing it. Automating this analysis using algorithms might deliver much more consistency and preciseness to support the human experts. The experiments involve cooperation with many people having their CV online, as well as several recruiters etc. The algorithms must be developed and improved for processing of existing sets of semi-structured documents information retrieval under uncertainity about quality of the sources.",
"title": ""
},
{
"docid": "873c2e7774791417d6cb4f5904cde74c",
"text": "This article discusses empirical findings and conceptual elaborations of the last 10 years in strategic niche management research (SNM). The SNM approach suggests that sustainable innovation journeys can be facilitated by creating technological niches, i.e. protected spaces that allow the experimentation with the co-evolution of technology, user practices, and regulatory structures. The assumption was that if such niches were constructed appropriately, they would act as building blocks for broader societal changes towards sustainable development. The article shows how concepts and ideas have evolved over time and new complexities were introduced. Research focused on the role of various niche-internal processes such as learning, networking, visioning and the relationship between local projects and global rule sets that guide actor behaviour. The empirical findings showed that the analysis of these niche-internal dimensions needed to be complemented with attention to niche external processes. In this respect, the multi-level perspective proved useful for contextualising SNM. This contextualisation led to modifications in claims about the dynamics of sustainable innovation journeys. Niches are to be perceived as crucial for bringing about regime shifts, but they cannot do this on their own. Linkages with ongoing external processes are also important. Although substantial insights have been gained, the SNM approach is still an unfinished research programme. We identify various promising research directions, as well as policy implications.",
"title": ""
},
{
"docid": "282424d3a055bcc2d0d5c99c6f8e58e9",
"text": "Over the last few years, neuroimaging techniques have contributed greatly to the identification of the structural and functional neuroanatomy of anxiety disorders. The amygdala seems to be a crucial structure for fear and anxiety, and has consistently been found to be activated in anxiety-provoking situations. Apart from the amygdala, the insula and anterior cinguiate cortex seem to be critical, and ail three have been referred to as the \"fear network.\" In the present article, we review the main findings from three major lines of research. First, we examine human models of anxiety disorders, including fear conditioning studies and investigations of experimentally induced panic attacks. Then we turn to research in patients with anxiety disorders and take a dose look at post-traumatic stress disorder and obsessive-compulsive disorder. Finally, we review neuroimaging studies investigating neural correlates of successful treatment of anxiety, focusing on exposure-based therapy and several pharmacological treatment options, as well as combinations of both.",
"title": ""
},
{
"docid": "17d06584c35a9879b0bd4b653ff64b40",
"text": "We present a solution to the rolling shutter (RS) absolute camera pose problem with known vertical direction. Our new solver, R5Pup, is an extension of the general minimal solution R6P, which uses a double linearized RS camera model initialized by the standard perspective P3P. Here, thanks to using known vertical directions, we avoid double linearization and can get the camera absolute pose directly from the RS model without the initialization by a standard P3P. Moreover, we need only five 2D-to-3D matches while R6P needed six such matches. We demonstrate in simulated and real experiments that our new R5Pup is robust, fast and a very practical method for absolute camera pose computation for modern cameras on mobile devices. We compare our R5Pup to the state of the art RS and perspective methods and demonstrate that it outperforms them when vertical direction is known in the range of accuracy available on modern mobile devices. We also demonstrate that when using R5Pup solver in structure from motion (SfM) pipelines, it is better to transform already reconstructed scenes into the standard position, rather than using hard constraints on the verticality of up vectors.",
"title": ""
},
{
"docid": "ffba4650ec3349c096c35779775d350d",
"text": "Massively parallel short-read sequencing technologies, coupled with powerful software platforms, are enabling investigators to analyse tens of thousands of genetic markers. This wealth of data is rapidly expanding and allowing biological questions to be addressed with unprecedented scope and precision. The sizes of the data sets are now posing significant data processing and analysis challenges. Here we describe an extension of the Stacks software package to efficiently use genotype-by-sequencing data for studies of populations of organisms. Stacks now produces core population genomic summary statistics and SNP-by-SNP statistical tests. These statistics can be analysed across a reference genome using a smoothed sliding window. Stacks also now provides several output formats for several commonly used downstream analysis packages. The expanded population genomics functions in Stacks will make it a useful tool to harness the newest generation of massively parallel genotyping data for ecological and evolutionary genetics.",
"title": ""
},
{
"docid": "119ea9c1d6b2cf2063efaf4d5ed7e756",
"text": "In this paper, we use shape grammars (SGs) for facade parsing, which amounts to segmenting 2D building facades into balconies, walls, windows, and doors in an architecturally meaningful manner. The main thrust of our work is the introduction of reinforcement learning (RL) techniques to deal with the computational complexity of the problem. RL provides us with techniques such as Q-learning and state aggregation which we exploit to efficiently solve facade parsing. We initially phrase the 1D parsing problem in terms of a Markov Decision Process, paving the way for the application of RL-based tools. We then develop novel techniques for the 2D shape parsing problem that take into account the specificities of the facade parsing problem. Specifically, we use state aggregation to enforce the symmetry of facade floors and demonstrate how to use RL to exploit bottom-up, image-based guidance during optimization. We provide systematic results on the Paris building dataset and obtain state-of-the-art results in a fraction of the time required by previous methods. We validate our method under diverse imaging conditions and make our software and results available online.",
"title": ""
},
{
"docid": "0b17e1cbfa3452ba2ff7c00f4e137aef",
"text": "Brain-computer interfaces (BCIs) promise to provide a novel access channel for assistive technologies, including augmentative and alternative communication (AAC) systems, to people with severe speech and physical impairments (SSPI). Research on the subject has been accelerating significantly in the last decade and the research community took great strides toward making BCI-AAC a practical reality to individuals with SSPI. Nevertheless, the end goal has still not been reached and there is much work to be done to produce real-world-worthy systems that can be comfortably, conveniently, and reliably used by individuals with SSPI with help from their families and care givers who will need to maintain, setup, and debug the systems at home. This paper reviews reports in the BCI field that aim at AAC as the application domain with a consideration on both technical and clinical aspects.",
"title": ""
},
{
"docid": "a645f2b68ced60099d8ae93f79e1714a",
"text": "The purpose of this study was to examine the extent to which fundamental movement skills and physical fitness scores assessed in early adolescence predict self-reported physical activity assessed 6 years later. The sample comprised 333 (200 girls, 133 boys; M age = 12.41) students. The effects of previous physical activity, sex, and body mass index (BMI) were controlled in the main analyses. Adolescents' fundamental movement skills, physical fitness, self-report physical activity, and BMI were collected at baseline, and their self-report energy expenditure (metabolic equivalents: METs) and intensity of physical activity were collected using the International Physical Activity Questionnaire 6 years later. Results showed that fundamental movement skills predicted METs, light, moderate, and vigorous intensity physical activity levels, whereas fitness predicted METs, moderate, and vigorous physical activity levels. Hierarchical regression analyses also showed that after controlling for previous levels of physical activity, sex, and BMI, the size of the effect of fundamental movement skills and physical fitness on energy expenditure and physical activity intensity was moderate (R(2) change between 0.06 and 0.15), with the effect being stronger for high intensity physical activity.",
"title": ""
},
{
"docid": "3ed5a33db314d464973577c9a4442d33",
"text": "Augmented Reality (AR) was first demonstrated in the 1960s, but only recently have technologies emerged that can be used to easily deploy AR applications to many users. Cameraequipped cell phones with significant processing power and graphics abilities provide an inexpensive and versatile platform for AR applications, while the social networking technology of Web 2.0 provides a large-scale infrastructure for collaboratively producing and distributing geo-referenced AR content. This combination of widely used mobile hardware and Web 2.0 software allows the development of a new type of AR platform that can be used on a global scale. In this paper we describe the Augmented Reality 2.0 concept and present existing work on mobile AR and web technologies that could be used to create AR 2.0 applications.",
"title": ""
},
{
"docid": "2c7bafac9d4c4fedc43982bd53c99228",
"text": "One of the uniqueness of business is for firm to be customer focus. Study have shown that this could be achieved through blockchain technology in enhancing customer loyalty programs (Michael J. Casey 2015; John Ream et al 2016; Sean Dennis 2016; James O'Brien and Dave Montali, 2016; Peiguss 2012; Singh, Khan, 2012; and among others). Recent advances in block chain technology have provided the tools for marketing managers to create a new generation of being able to assess the level of control companies want to have over customer data and activities as well as security/privacy issues that always arise with every additional participant of the network While block chain technology is still in the early stages of adoption, it could prove valuable for loyalty rewards program providers. Hundreds of blockchain initiatives are already underway in various industries, particularly airline services, even though standardization is far from a reality. One attractive feature of loyalty rewards is that they are not core to business revenue and operations and companies willing to implement blockchain for customer loyalty programs benefit lower administrative costs, improved customer experiences, and increased user engagement (Michael J. Casey, 2015; James O'Brien and Dave Montali 2016; Peiguss 2012; Singh, Abstract: In today business world, companies have accelerated the use of Blockchain technology to enhance the brand recognition of their products and services. Company believes that the integration of Blockchain into the current business marketing strategy will enhance the growth of their products, and thus acting as a customer loyalty solution. The goal of this study is to obtain a deep understanding of the impact of blockchain technology in enhancing customer loyalty programs of airline business. To achieve the goal of the study, a contextualized and literature based research instrument was used to measure the application of the investigated “constructs”, and a survey was conducted to collect data from the sample population. A convenience sample of total (450) Questionnaires were distributed to customers, and managers of the surveyed airlines who could be reached by the researcher. 274 to airline customers/passengers, and the remaining 176 to managers in the various airlines researched. Questionnaires with instructions were hand-delivered to respondents. Out of the 397 completed questionnaires returned, 359 copies were found usable for the present study, resulting in an effective response rate of 79.7%. The respondents had different social, educational, and occupational backgrounds. The research instrument showed encouraging evidence of reliability and validity. Data were analyzed using descriptive statistics, percentages and ttest analysis. The findings clearly show that there is significant evidence that blockchain technology enhance customer loyalty programs of airline business. It was discovered that Usage of blockchain technology is emphasized by the surveyed airlines operators in Nigeria., the extent of effective usage of customer loyalty programs is related to blockchain technology, and that he level or extent of effective usage of blockchain technology does affect the achievement of customer loyalty program goals and objectives. Feedback from the research will assist to expand knowledge as to the usefulness of blockchain technology being a customer loyalty solution.",
"title": ""
},
{
"docid": "4c82a4e51633b87f2f6b2619ca238686",
"text": "Allocentric space is mapped by a widespread brain circuit of functionally specialized cell types located in interconnected subregions of the hippocampal-parahippocampal cortices. Little is known about the neural architectures required to express this variety of firing patterns. In rats, we found that one of the cell types, the grid cell, was abundant not only in medial entorhinal cortex (MEC), where it was first reported, but also in pre- and parasubiculum. The proportion of grid cells in pre- and parasubiculum was comparable to deep layers of MEC. The symmetry of the grid pattern and its relationship to the theta rhythm were weaker, especially in presubiculum. Pre- and parasubicular grid cells intermingled with head-direction cells and border cells, as in deep MEC layers. The characterization of a common pool of space-responsive cells in architecturally diverse subdivisions of parahippocampal cortex constrains the range of mechanisms that might give rise to their unique functional discharge phenotypes.",
"title": ""
},
{
"docid": "5bd713c468f48313e42b399f441bb709",
"text": "Nowadays, malware is affecting not only PCs but also mobile devices, which became pervasive in everyday life. Mobile devices can access and store personal information (e.g., location, photos, and messages) and thus are appealing to malware authors. One of the most promising approach to analyze malware is by monitoring its execution in a sandbox (i.e., via dynamic analysis). In particular, most malware sandboxing solutions for Android rely on an emulator, rather than a real device. This motivates malware authors to include runtime checks in order to detect whether the malware is running in a virtualized environment. In that case, the malicious app does not trigger the malicious payload. The presence of differences between real devices and Android emulators started an arms race between security researchers and malware authors, where the former want to hide these differences and the latter try to seek them out. In this paper we present Mirage, a malware sandbox architecture for Android focused on dynamic analysis evasion attacks. We designed the components of Mirage to be extensible via software modules, in order to build specific countermeasures against such attacks. To the best of our knowledge, Mirage is the first modular sandbox architecture that is robust against sandbox detection techniques. As a representative case study, we present a proof of concept implementation of Mirage with a module that tackles evasion attacks based on sensors API return values.",
"title": ""
},
{
"docid": "c1492f5eb2fafc52da81902a9d19d480",
"text": "A compact dual-band multiple-input-multiple-output (MIMO)/diversity antenna is proposed. This antenna is designed for 2.4/5.2/5.8GHz WLAN and 2.5/3.5/5.5 GHz WiMAX applications in portable mobile devices. It consists of two back-to-back monopole antennas connected with a T-shaped stub, where two rectangular slots are cut from the ground, which significantly reduces the mutual coupling between the two ports at the lower frequency band. The volume of this antenna is 40mm ∗ 30mm ∗ 1mm including the ground plane. Measured results show the isolation is better than −20 dB at the lower frequency band from 2.39 to 3.75GHz and −25 dB at the higher frequency band from 5.03 to 7 GHz, respectively. Moreover, acceptable radiation patterns, antenna gain, and envelope correlation coefficient are obtained. These characteristics indicate that the proposed antenna is suitable for some portable MIMO/diversity equipments.",
"title": ""
},
{
"docid": "c4d5464727db6deafc2ce2307284dd0c",
"text": "— Recently, many researchers have focused on building dual handed static gesture recognition systems. Single handed static gestures, however, pose more recognition complexity due to the high degree of shape ambiguities. This paper presents a gesture recognition setup capable of recognizing and emphasizing the most ambiguous static single handed gestures. Performance of the proposed scheme is tested on the alphabets of American Sign Language (ASL). Segmentation of hand contours from image background is carried out using two different strategies; skin color as detection cue with RGB and YCbCr color spaces, and thresholding of gray level intensities. A novel, rotation and size invariant, contour tracing descriptor is used to describe gesture contours generated by each segmentation technique. Performances of k-Nearest Neighbor (k-NN) and multiclass Support Vector Machine (SVM) classification techniques are evaluated to classify a particular gesture. Gray level segmented contour traces classified by multiclass SVM achieve accuracy up to 80.8% on the most ambiguous gestures of ASL alphabets with overall accuracy of 90.1%.",
"title": ""
},
{
"docid": "ec2257854faa3076b5c25d2c947d1780",
"text": "This paper presents a novel approach for road marking detection and classification based on machine learning algorithms. Road marking recognition is an important feature of an intelligent transportation system (ITS). Previous works are mostly developed using image processing and decisions are often made using empirical functions, which makes it difficult to be generalized. Hereby, we propose a general framework for object detection and classification, aimed at video-based intelligent transportation applications. It is a two-step approach. The detection is carried out using binarized normed gradient (BING) method. PCA network (PCANet) is employed for object classification. Both BING and PCANet are among the latest algorithms in the field of machine learning. Practically the proposed method is applied to a road marking dataset with 1,443 road images. We randomly choose 60% images for training and use the remaining 40% images for testing. Upon training, the system can detect 9 classes of road markings with an accuracy better than 96.8%. The proposed approach is readily applicable to other ITS applications.",
"title": ""
},
{
"docid": "4d26d3823e3889c22fe517857a49d508",
"text": "As an object moves through the field of view of a camera, the images of the object may change dramatically. This is not simply due to the translation of the object across the image plane. Rather, complications arise due to the fact that the object undergoes changes in pose relative to the viewing camera, changes in illumination relative to light sources, and may even become partially or fully occluded. In this paper, we develop an efficient, general framework for object tracking—one which addresses each of these complications. We first develop a computationally efficient method for handling the geometric distortions produced by changes in pose. We then combine geometry and illumination into an algorithm that tracks large image regions using no more computation than would be required to track with no accommodation for illumination changes. Finally, we augment these methods with techniques from robust statistics and treat occluded regions on the object as statistical outliers. Throughout, we present experimental results performed on live video sequences demonstrating the effectiveness and efficiency of our methods.",
"title": ""
}
] | scidocsrr |
4c522ee75323641bcadf9828b7bb7acc | A Snapback Suppressed Reverse-Conducting IGBT With a Floating p-Region in Trench Collector | [
{
"docid": "1d6c4f6efccb211ced52dbed51b0be22",
"text": "In this paper, an advanced Reverse Conducting (RC) IGBT concept is presented. The new technology is referred to as the Bi-mode Insulated Gate Transistor (BIGT) implying that the device can operate at the same current densities in transistor (IGBT) mode and freewheeling diode mode by utilizing the same available silicon volume in both operational modes. The BIGT design concept differs from that of the standard RC-IGBT while targeting to fully replace the state-of-the-art two-chip IGBT/Diode approach with a single chip. The BIGT is also capable of improving the over-all performance especially under hard switching conditions.",
"title": ""
},
{
"docid": "79ff4bd891538a0d1b5a002d531257f2",
"text": "Reverse conducting IGBTs are fabricated in a large productive volume for soft switching applications, such as inductive heaters, microwave ovens or lamp ballast, since several years. To satisfy the requirements of hard switching applications, such as inverters in refrigerators, air conditioners or general purpose drives, the reverse recovery behavior of the integrated diode has to be optimized. Two promising concepts for such an optimization are based on a reduction of the charge- carrier lifetime or the anti-latch p+ implantation dose. It is shown that a combination of both concepts will lead to a device with a good reverse recovery behavior, low forward and reverse voltage drop and excellent over current turn- off capability of a trench field-stop IGBT.",
"title": ""
}
] | [
{
"docid": "f437f971d7d553b69d438a469fd26d41",
"text": "This paper introduces a single-chip, 200 200element sensor array implemented in a standard two-metal digital CMOS technology. The sensor is able to grab the fingerprint pattern without any use of optical and mechanical adaptors. Using this integrated sensor, the fingerprint is captured at a rate of 10 F/s by pressing the finger skin onto the chip surface. The fingerprint pattern is sampled by capacitive sensors that detect the electric field variation induced by the skin surface. Several design issues regarding the capacitive sensing problem are reported and the feedback capacitive sensing scheme (FCS) is introduced. More specifically, the problem of the charge injection in MOS switches has been revisited for charge amplifier design.",
"title": ""
},
{
"docid": "07ce1301392e18c1426fd90507dc763f",
"text": "The fluorescent lamp lifetime is very dependent of the start-up lamp conditions. The lamp filament current and temperature during warm-up and at steady-state operation are important to extend the life of a hot-cathode fluorescent lamp, and the preheating circuit is responsible for attending to the start-up lamp requirements. The usual solution for the preheating circuit used in self-oscillating electronic ballasts is simple and presents a low cost. However, the performance to extend the lamp lifetime is not the most effective. This paper presents an effective preheating circuit for self-oscillating electronic ballasts as an alternative to the usual solution.",
"title": ""
},
{
"docid": "10b4d77741d40a410b30b0ba01fae67f",
"text": "While glucosamine supplementation is very common and a multitude of commercial products are available, there is currently limited information available to assist the equine practitioner in deciding when and how to use these products. Low bioavailability of orally administered glucosamine, poor product quality, low recommended doses, and a lack of scientific evidence showing efficacy of popular oral joint supplements are major concerns. Authors’ addresses: Rolling Thunder Veterinary Services, 225 Roxbury Road, Garden City, NY 11530 (Oke); Ontario Veterinary College, Department of Clinical Studies, University of Guelph, Guelph, Ontario, Canada N1G 2W1 (Weese); e-mail: [email protected] (Oke). © 2006 AAEP.",
"title": ""
},
{
"docid": "58039fbc0550c720c4074c96e866c025",
"text": "We argue that to best comprehend many data sets, plotting judiciously selected sample statistics with associated confidence intervals can usefully supplement, or even replace, standard hypothesis-testing procedures. We note that most social science statistics textbooks limit discussion of confidence intervals to their use in between-subject designs. Our central purpose in this article is to describe how to compute an analogous confidence interval that can be used in within-subject designs. This confidence interval rests on the reasoning that because between-subject variance typically plays no role in statistical analyses of within-subject designs, it can legitimately be ignored; hence, an appropriate confidence interval can be based on the standard within-subject error term-that is, on the variability due to the subject × condition interaction. Computation of such a confidence interval is simple and is embodied in Equation 2 on p. 482 of this article. This confidence interval has two useful properties. First, it is based on the same error term as is the corresponding analysis of variance, and hence leads to comparable conclusions. Second, it is related by a known factor (√2) to a confidence interval of the difference between sample means; accordingly, it can be used to infer the faith one can put in some pattern of sample means as a reflection of the underlying pattern of population means. These two properties correspond to analogous properties of the more widely used between-subject confidence interval.",
"title": ""
},
{
"docid": "91c57b7a9dd2555e92b5ffa1f5a21790",
"text": "This article presents suggestions for nurses to gain skill, competence, and comfort in caring for critically ill patients receiving mechanical ventilatory support, with a specific focus on education strategies and building communication skills with these challenging nonverbal patients. Engaging in evidence-based practice projects at the unit level and participating in or leading research studies are key ways nurses can contribute to improving outcomes for patients receiving mechanical ventilation. Suggestions are offered for evidence-based practice projects and possible research studies to improve outcomes and advance the science in an effort to achieve quality patient-ventilator management in intensive care units.",
"title": ""
},
{
"docid": "0a7673d423c9134fb96bb3bb5b286433",
"text": "In this contribution the development, design, fabrication and test of a highly integrated broadband multifunctional chip is presented. The MMIC covers the C-, X-and Ku- Band and it is suitable for applications in high performance Transmit/Receive Modules. In less than 26 mm2, the MMIC embeds several T/R switches, low noise/medium power amplifiers, a stepped phase shifter and analog/digital attenuators in order to perform the RF signal routing and phase/amplitude conditioning. Besides, an embedded serial-to-parallel converter drives the phase shifter and the digital attenuator leading to a reduction in complexity of the digital control interface.",
"title": ""
},
{
"docid": "655a95191700e24c6dcd49b827de4165",
"text": "With the increasing demand for express delivery, a courier needs to deliver many tasks in one day and it's necessary to deliver punctually as the customers expect. At the same time, they want to schedule the delivery tasks to minimize the total time of a courier's one-day delivery, considering the total travel time. However, most of scheduling researches on express delivery focus on inter-city transportation, and they are not suitable for the express delivery to customers in the “last mile”. To solve the issue above, this paper proposes a personalized service for scheduling express delivery, which not only satisfies all the customers' appointment time but also makes the total time minimized. In this service, personalized and accurate travel time estimation is important to guarantee delivery punctuality when delivering shipments. Therefore, the personalized scheduling service is designed to consist of two basic services: (1) personalized travel time estimation service for any path in express delivery using courier trajectories, (2) an express delivery scheduling service considering multiple factors, including customers' appointments, one-day delivery costs, etc., which is based on the accurate travel time estimation provided by the first service. We evaluate our proposed service based on extensive experiments, using GPS trajectories generated by more than 1000 couriers over a period of two months in Beijing. The results demonstrate the effectiveness and efficiency of our method.",
"title": ""
},
{
"docid": "95f57e37d04b6b3b8c9ce29ebf23d345",
"text": "Finite state machines (FSMs) are the backbone of sequential circuit design. In this paper, a new FSM watermarking scheme is proposed by making the authorship information a non-redundant property of the FSM. To overcome the vulnerability to state removal attack and minimize the design overhead, the watermark bits are seamlessly interwoven into the outputs of the existing and free transitions of state transition graph (STG). Unlike other transition-based STG watermarking, pseudo input variables have been reduced and made functionally indiscernible by the notion of reserved free literal. The assignment of reserved literals is exploited to minimize the overhead of watermarking and make the watermarked FSM fallible upon removal of any pseudo input variable. A direct and convenient detection scheme is also proposed to allow the watermark on the FSM to be publicly detectable. Experimental results on the watermarked circuits from the ISCAS'89 and IWLS'93 benchmark sets show lower or acceptably low overheads with higher tamper resilience and stronger authorship proof in comparison with related watermarking schemes for sequential functions.",
"title": ""
},
{
"docid": "6c11bb11540719ad64e98bb67cd9a798",
"text": "Opium poppy (Papaver somniferum) produces a large number of benzylisoquinoline alkaloids, including the narcotic analgesics morphine and codeine, and has emerged as one of the most versatile model systems to study alkaloid metabolism in plants. As summarized in this review, we have taken a holistic strategy—involving biochemical, cellular, molecular genetic, genomic, and metabolomic approaches—to draft a blueprint of the fundamental biological platforms required for an opium poppy cell to function as an alkaloid factory. The capacity to synthesize and store alkaloids requires the cooperation of three phloem cell types—companion cells, sieve elements, and laticifers—in the plant, but also occurs in dedifferentiated cell cultures. We have assembled an opium poppy expressed sequence tag (EST) database based on the attempted sequencing of more than 30,000 cDNAs from elicitor-treated cell culture, stem, and root libraries. Approximately 23,000 of the elicitor-induced cell culture and stem ESTs are represented on a DNA microarray, which has been used to examine changes in transcript profile in cultured cells in response to elicitor treatment, and in plants with different alkaloid profiles. Fourier transform-ion cyclotron resonance mass spectrometry and proton nuclear magnetic resonance mass spectroscopy are being used to detect corresponding differences in metabolite profiles. Several new genes involved in the biosynthesis and regulation of alkaloid pathways in opium poppy have been identified using genomic tools. A biological blueprint for alkaloid production coupled with the emergence of reliable transformation protocols has created an unprecedented opportunity to alter the chemical profile of the world’s most valuable medicinal plant.",
"title": ""
},
{
"docid": "a0ebefc5137a1973e1d1da2c478de57c",
"text": "This paper presents BOTTA, the first Arabic dialect chatbot. We explore the challenges of creating a conversational agent that aims to simulate friendly conversations using the Egyptian Arabic dialect. We present a number of solutions and describe the different components of the BOTTA chatbot. The BOTTA database files are publicly available for researchers working on Arabic chatbot technologies. The BOTTA chatbot is also publicly available for any users who want to chat with it online.",
"title": ""
},
{
"docid": "f651d8505f354fe0ad8e0866ca64e6e1",
"text": "Building on existing categorical accounts of natural language semantics, we propose a compositional distributional model of ambiguous meaning. Originally inspired by the high-level category theoretic language of quantum information protocols, the compositional, distributional categorical model provides a conceptually motivated procedure to compute the meaning of a sentence, given its grammatical structure and an empirical derivation of the meaning of its parts. Grammar is given a type-logical description in a compact closed category while the meaning of words is represented in a finite inner product space model. Since the category of finite-dimensional Hilbert spaces is also compact closed, the type-checking deduction process lifts to a concrete meaning-vector computation via a strong monoidal functor between the two categories. The advantage of reasoning with these structures is that grammatical composition admits an interpretation in terms of flow of meaning between words. Pushing the analogy with quantum mechanics further, we describe ambiguous words as statistical ensembles of unambiguous concepts and extend the semantics of the previous model to a category that supports probabilistic mixing. We introduce two different Frobenius algebras representing different ways of composing the meaning of words, and discuss their properties. We conclude with a range of applications to the case of definitions, including a meaning update rule that reconciles the meaning of an ambiguous word with that of its definition.",
"title": ""
},
{
"docid": "d5c57af0f7ab41921ddb92a5de31c33a",
"text": "This paper investigates how to blindly evaluate the visual quality of an image by learning rules from linguistic descriptions. Extensive psychological evidence shows that humans prefer to conduct evaluations qualitatively rather than numerically. The qualitative evaluations are then converted into the numerical scores to fairly benchmark objective image quality assessment (IQA) metrics. Recently, lots of learning-based IQA models are proposed by analyzing the mapping from the images to numerical ratings. However, the learnt mapping can hardly be accurate enough because some information has been lost in such an irreversible conversion from the linguistic descriptions to numerical scores. In this paper, we propose a blind IQA model, which learns qualitative evaluations directly and outputs numerical scores for general utilization and fair comparison. Images are represented by natural scene statistics features. A discriminative deep model is trained to classify the features into five grades, corresponding to five explicit mental concepts, i.e., excellent, good, fair, poor, and bad. A newly designed quality pooling is then applied to convert the qualitative labels into scores. The classification framework is not only much more natural than the regression-based models, but also robust to the small sample size problem. Thorough experiments are conducted on popular databases to verify the model's effectiveness, efficiency, and robustness.",
"title": ""
},
{
"docid": "be83224a853fd65808def16ff20e9c02",
"text": "Cascades of information-sharing are a primary mechanism by which content reaches its audience on social media, and an active line of research has studied how such cascades, which form as content is reshared from person to person, develop and subside. In this paper, we perform a large-scale analysis of cascades on Facebook over significantly longer time scales, and find that a more complex picture emerges, in which many large cascades recur, exhibiting multiple bursts of popularity with periods of quiescence in between. We characterize recurrence by measuring the time elapsed between bursts, their overlap and proximity in the social network, and the diversity in the demographics of individuals participating in each peak. We discover that content virality, as revealed by its initial popularity, is a main driver of recurrence, with the availability of multiple copies of that content helping to spark new bursts. Still, beyond a certain popularity of content, the rate of recurrence drops as cascades start exhausting the population of interested individuals. We reproduce these observed patterns in a simple model of content recurrence simulated on a real social network. Using only characteristics of a cascade’s initial burst, we demonstrate strong performance in predicting whether it will recur in the future.",
"title": ""
},
{
"docid": "5b50e84437dc27f5b38b53d8613ae2c7",
"text": "We present a practical vision-based robotic bin-picking sy stem that performs detection and 3D pose estimation of objects in an unstr ctu ed bin using a novel camera design, picks up parts from the bin, and p erforms error detection and pose correction while the part is in the gri pper. Two main innovations enable our system to achieve real-time robust a nd accurate operation. First, we use a multi-flash camera that extracts rob ust depth edges. Second, we introduce an efficient shape-matching algorithm called fast directional chamfer matching (FDCM), which is used to reliabl y detect objects and estimate their poses. FDCM improves the accuracy of cham fer atching by including edge orientation. It also achieves massive improvements in matching speed using line-segment approximations of edges , a 3D distance transform, and directional integral images. We empiricall y show that these speedups, combined with the use of bounds in the spatial and h ypothesis domains, give the algorithm sublinear computational compl exity. We also apply our FDCM method to other applications in the context of deformable and articulated shape matching. In addition to significantl y improving upon the accuracy of previous chamfer matching methods in all of t he evaluated applications, FDCM is up to two orders of magnitude faster th an the previous methods.",
"title": ""
},
{
"docid": "e099186ceed71e03276ab168ecf79de7",
"text": "Twelve patients with deafferentation pain secondary to central nervous system lesions were subjected to chronic motor cortex stimulation. The motor cortex was mapped as carefully as possible and the electrode was placed in the region where muscle twitch of painful area can be observed with the lowest threshold. 5 of the 12 patients reported complete absence of previous pain with intermittent stimulation at 1 year following the initiation of this therapy. Improvements in hemiparesis was also observed in most of these patients. The pain of these patients was typically barbiturate-sensitive and morphine-resistant. Another 3 patients had some degree of residual pain but considerable reduction of pain was still obtained by stimulation. Thus, 8 of the 12 patients (67%) had continued effect of this therapy after 1 year. In 3 patients, revisions of the electrode placement were needed because stimulation became incapable of inducing muscle twitch even with higher stimulation intensity. The effect of stimulation on pain and capability of producing muscle twitch disappeared simultaneously in these cases and the effect reappeared after the revisions, indicating that appropriate stimulation of the motor cortex is definitely necessary for obtaining satisfactory pain control in these patients. None of the patients subjected to this therapy developed neither observable nor electroencephalographic seizure activity.",
"title": ""
},
{
"docid": "39b7ab83a6a0d75b1ec28c5ff485b98d",
"text": "Video object segmentation is a fundamental step in many advanced vision applications. Most existing algorithms are based on handcrafted features such as HOG, super-pixel segmentation or texturebased techniques, while recently deep features have been found to be more efficient. Existing algorithms observe performance degradation in the presence of challenges such as illumination variations, shadows, and color camouflage. To handle these challenges we propose a fusion based moving object segmentation algorithm which exploits color as well as depth information using GAN to achieve more accuracy. Our goal is to segment moving objects in the presence of challenging background scenes, in real environments. To address this problem, GAN is trained in an unsupervised manner on color and depth information independently with challenging video sequences. During testing, the trained GAN generates backgrounds similar to that in the test sample. The generated background samples are then compared with the test sample to segment moving objects. The final result is computed by fusion of object boundaries in both modalities, RGB and the depth. The comparison of our proposed algorithm with five state-of-the-art methods on publicly available dataset has shown the strength of our algorithm for moving object segmentation in videos in the presence of challenging real scenarios.",
"title": ""
},
{
"docid": "bfd57465a5d6f85fb55ffe13ef79f3a5",
"text": "We investigate the utility of different auxiliary objectives and training strategies within a neural sequence labeling approach to error detection in learner writing. Auxiliary costs provide the model with additional linguistic information, allowing it to learn general-purpose compositional features that can then be exploited for other objectives. Our experiments show that a joint learning approach trained with parallel labels on in-domain data improves performance over the previous best error detection system. While the resulting model has the same number of parameters, the additional objectives allow it to be optimised more efficiently and achieve better performance.",
"title": ""
},
{
"docid": "31756ac6aaa46df16337dbc270831809",
"text": "Broadly speaking, the goal of neuromorphic engineering is to build computer systems that mimic the brain. Spiking Neural Network (SNN) is a type of biologically-inspired neural networks that perform information processing based on discrete-time spikes, different from traditional Artificial Neural Network (ANN). Hardware implementation of SNNs is necessary for achieving high-performance and low-power. We present the Darwin Neural Processing Unit (NPU), a neuromorphic hardware co-processor based on SNN implemented with digitallogic, supporting a maximum of 2048 neurons, 20482 = 4194304 synapses, and 15 possible synaptic delays. The Darwin NPU was fabricated by standard 180 nm CMOS technology with an area size of 5 ×5 mm2 and 70 MHz clock frequency at the worst case. It consumes 0.84 mW/MHz with 1.8 V power supply for typical applications. Two prototype applications are used to demonstrate the performance and efficiency of the hardware implementation. 脉冲神经网络(SNN)是一种基于离散神经脉冲进行信息处理的人工神经网络。本文提出的“达尔文”芯片是一款基于SNN的类脑硬件协处理器。它支持神经网络拓扑结构,神经元与突触各种参数的灵活配置,最多可支持2048个神经元,四百万个神经突触及15个不同的突触延迟。该芯片采用180纳米CMOS工艺制造,面积为5x5平方毫米,最坏工作频率达到70MHz,1.8V供电下典型应用功耗为0.84mW/MHz。基于该芯片实现了两个应用案例,包括手写数字识别和运动想象脑电信号分类。",
"title": ""
},
{
"docid": "073eb81bbd654b90e6a7ffce608f8ea2",
"text": "OBJECTIVE\nTo examine factors associated with variation in the risk for type 2 diabetes in women with prior gestational diabetes mellitus (GDM).\n\n\nRESEARCH DESIGN AND METHODS\nWe conducted a systematic literature review of articles published between January 1965 and August 2001, in which subjects underwent testing for GDM and then testing for type 2 diabetes after delivery. We abstracted diagnostic criteria for GDM and type 2 diabetes, cumulative incidence of type 2 diabetes, and factors that predicted incidence of type 2 diabetes.\n\n\nRESULTS\nA total of 28 studies were examined. After the index pregnancy, the cumulative incidence of diabetes ranged from 2.6% to over 70% in studies that examined women 6 weeks postpartum to 28 years postpartum. Differences in rates of progression between ethnic groups was reduced by adjustment for various lengths of follow-up and testing rates, so that women appeared to progress to type 2 diabetes at similar rates after a diagnosis of GDM. Cumulative incidence of type 2 diabetes increased markedly in the first 5 years after delivery and appeared to plateau after 10 years. An elevated fasting glucose level during pregnancy was the risk factor most commonly associated with future risk of type 2 diabetes.\n\n\nCONCLUSIONS\nConversion of GDM to type 2 diabetes varies with the length of follow-up and cohort retention. Adjustment for these differences reveals rapid increases in the cumulative incidence occurring in the first 5 years after delivery for different racial groups. Targeting women with elevated fasting glucose levels during pregnancy may prove to have the greatest effect for the effort required.",
"title": ""
},
{
"docid": "1ebb46b4c9e32423417287ab26cae14b",
"text": "Two field studies explored the relationship between self-awareness and transgressive behavior. In the first study, 363 Halloween trick-or-treaters were instructed to only take one candy. Self-awareness induced by the presence of a mirror placed behind the candy bowl decreased transgression rates for children who had been individuated by asking them their name and address, but did not affect the behavior of children left anonymous. Self-awareness influenced older but not younger children. Naturally occurring standards instituted by the behavior of the first child to approach the candy bowl in each group were shown to interact with the experimenter's verbally stated standard. The behavior of 349 subjects in the second study replicated the findings in the first study. Additionally, when no standard was stated by the experimenter, children took more candy when not self-aware than when self-aware.",
"title": ""
}
] | scidocsrr |
16db2a19ce63b6b189aa6980cdbb1208 | Generating Informative and Diverse Conversational Responses via Adversarial Information Maximization | [
{
"docid": "b14a77c6e663af1445e466a3e90d4e5f",
"text": "This paper proposes an approach for applying GANs to NMT. We build a conditional sequence generative adversarial net which comprises of two adversarial sub models, a generator and a discriminator. The generator aims to generate sentences which are hard to be discriminated from human-translated sentences ( i.e., the golden target sentences); And the discriminator makes efforts to discriminate the machine-generated sentences from humantranslated ones. The two sub models play a mini-max game and achieve the win-win situation when they reach a Nash Equilibrium. Additionally, the static sentence-level BLEU is utilized as the reinforced objective for the generator, which biases the generation towards high BLEU points. During training, both the dynamic discriminator and the static BLEU objective are employed to evaluate the generated sentences and feedback the evaluations to guide the learning of the generator. Experimental results show that the proposed model consistently outperforms the traditional RNNSearch and the newly emerged state-ofthe-art Transformer on English-German and Chinese-English translation tasks.",
"title": ""
}
] | [
{
"docid": "09beeeaf2d92087da10c5725bda10d2f",
"text": "We report a quantitative investigation of the visual identification and auditory comprehension deficits of 4 patients who had made a partial recovery from herpes simplex encephalitis. Clinical observations had suggested the selective impairment and selective preservation of certain categories of visual stimuli. In all 4 patients a significant discrepancy between their ability to identify inanimate objects and inability to identify living things and foods was demonstrated. In 2 patients it was possible to compare visual and verbal modalities and the same pattern of dissociation was observed in both. For 1 patient, comprehension of abstract words was significantly superior to comprehension of concrete words. Consistency of responses was recorded within a modality in contrast to a much lesser degree of consistency between modalities. We interpret our findings in terms of category specificity in the organization of meaning systems that are also modality specific semantic systems.",
"title": ""
},
{
"docid": "66fc8ff7073579314c50832a6f06c10d",
"text": "Endodontic management of the permanent immature tooth continues to be a challenge for both clinicians and researchers. Clinical concerns are primarily related to achieving adequate levels of disinfection as 'aggressive' instrumentation is contraindicated and hence there exists a much greater reliance on endodontic irrigants and medicaments. The open apex has also presented obturation difficulties, notably in controlling length. Long-term apexification procedures with calcium hydroxide have proven to be successful in retaining many of these immature infected teeth but due to their thin dentinal walls and perceived problems associated with long-term placement of calcium hydroxide, they have been found to be prone to cervical fracture and subsequent tooth loss. In recent years there has developed an increasing interest in the possibility of 'regenerating' pulp tissue in an infected immature tooth. It is apparent that although the philosophy and hope of 'regeneration' is commendable, recent histologic studies appear to suggest that the calcified material deposited on the canal wall is bone/cementum rather than dentine, hence the absence of pulp tissue with or without an odontoblast layer.",
"title": ""
},
{
"docid": "eb83f7367ba11bb5582864a08bb746ff",
"text": "Probabilistic inference algorithms for find ing the most probable explanation, the max imum aposteriori hypothesis, and the maxi mum expected utility and for updating belief are reformulated as an elimination-type al gorithm called bucket elimination. This em phasizes the principle common to many of the algorithms appearing in that literature and clarifies their relationship to nonserial dynamic programming algorithms. We also present a general way of combining condition ing and elimination within this framework. Bounds on complexity are given for all the al gorithms as a function of the problem's struc ture.",
"title": ""
},
{
"docid": "48fc7aabdd36ada053ebc2d2a1c795ae",
"text": "The Value-Based Software Engineering (VBSE) agenda described in the preceding article has the objectives of integrating value considerations into current and emerging software engineering principles and practices, and of developing an overall framework in which they compatibly reinforce each other. In this paper, we provide a case study illustrating some of the key VBSE practices, and focusing on a particular anomaly in the monitoring and control area: the \"Earned Value Management System.\" This is a most useful technique for monitoring and controlling the cost, schedule, and progress of a complex project. But it has absolutely nothing to say about the stakeholder value of the system being developed. The paper introduces an example order-processing software project, and shows how the use of Benefits Realization Analysis, stake-holder value proposition elicitation and reconciliation, and business case analysis provides a framework for stakeholder-earned-value monitoring and control.",
"title": ""
},
{
"docid": "8cb3aed5fab2f5d54195b0e4c2a9a4c6",
"text": "This paper describes a tri-modal asymmetric bidirectional differential memory interface that supports data rates of up to 20 Gbps over 3\" FR4 PCB channels while achieving power efficiency of 6.1 mW/Gbps at full speed. The interface also accommodates single-ended standard DDR3 and GDDR5 signaling at 1.6-Gbps and 6.4-Gbps operations, respectively, without package change. The compact, low-power and high-speed tri-modal interface is enabled by substantial reuse of the circuit elements among various signaling modes, particularly in the wide-band clock generation and distribution system and the multi-modal driver output stage, as well as the use of fast equalization for post-cursor intersymbol interference (ISI) mitigation. In the high-speed differential mode, the system utilizes a 1-tap transmit equalizer during a WRITE operation to the memory. In contrast, during a memory READ operation, it employs a linear equalizer (LEQ) with 3 dB of peaking as well as a calibrated high-speed 1-tap predictive decision feedback equalizer (prDFE), while no transmitter equalization is assumed for the memory. The prototype tri-modal interface implemented in a 40-nm CMOS process, consists of 16 data links and achieves more than 2.5 × energy-efficient memory transactions at 16 Gbps compared to a previous single-mode generation.",
"title": ""
},
{
"docid": "9464f2e308b5c8ab1f2fac1c008042c0",
"text": "Data governance has become a significant approach that drives decision making in public organisations. Thus, the loss of data governance is a concern to decision makers, acting as a barrier to achieving their business plans in many countries and also influencing both operational and strategic decisions. The adoption of cloud computing is a recent trend in public sector organisations, that are looking to move their data into the cloud environment. The literature shows that data governance is one of the main concerns of decision makers who are considering adopting cloud computing; it also shows that data governance in general and for cloud computing in particular is still being researched and requires more attention from researchers. However, in the absence of a cloud data governance framework, this paper seeks to develop a conceptual framework for cloud data governance-driven decision making in the public sector.",
"title": ""
},
{
"docid": "dd0562e604e6db2c31132f1ffcd94d4f",
"text": "a r t i c l e i n f o Keywords: Data quality Utility Cost–benefit analysis Data warehouse CRM Managing data resources at high quality is usually viewed as axiomatic. However, we suggest that, since the process of improving data quality should attempt to maximize economic benefits as well, high data quality is not necessarily economically-optimal. We demonstrate this argument by evaluating a microeconomic model that links the handling of data quality defects, such as outdated data and missing values, to economic outcomes: utility, cost, and net-benefit. The evaluation is set in the context of Customer Relationship Management (CRM) and uses large samples from a real-world data resource used for managing alumni relations. Within this context, our evaluation shows that all model parameters can be measured, and that all model-related assumptions are, largely, well supported. The evaluation confirms the assumption that the optimal quality level, in terms of maximizing net-benefits, is not necessarily the highest possible. Further, the evaluation process contributes some important insights for revising current data acquisition and maintenance policies. Maintaining data resources at a high quality level is a critical task in managing organizational information systems (IS). Data quality (DQ) significantly affects IS adoption and the success of data utilization [10,26]. Data quality management (DQM) has been examined from a variety of technical, functional, and organizational perspectives [22]. Achieving high quality is the primary objective of DQM efforts, and much research in DQM focuses on methodologies, tools and techniques for improving quality. Recent studies (e.g., [14,19]) have suggested that high DQ, although having clear merits, should not necessarily be the only objective to consider when assessing DQM alternatives, particularly in an IS that manages large datasets. As shown in these studies, maximizing economic benefits, based on the value gained from improving quality, and the costs involved in improving quality, may conflict with the target of achieving a high data quality level. Such findings inspire the need to link DQM decisions to economic outcomes and tradeoffs, with the goal of identifying more cost-effective DQM solutions. The quality of organizational data is rarely perfect as data, when captured and stored, may suffer from such defects as inaccuracies and missing values [22]. Its quality may further deteriorate as the real-world items that the data describes may change over time (e.g., a customer changing address, profession, and/or marital status). A plethora of studies have underscored the negative effect of low …",
"title": ""
},
{
"docid": "bdae3fb85df9de789a9faa2c08a5c0fb",
"text": "The rapid, exponential growth of modern electronics has brought about profound changes to our daily lives. However, maintaining the growth trend now faces significant challenges at both the fundamental and practical levels [1]. Possible solutions include More Moore?developing new, alternative device structures and materials while maintaining the same basic computer architecture, and More Than Moore?enabling alternative computing architectures and hybrid integration to achieve increased system functionality without trying to push the devices beyond limits. In particular, an increasing number of computing tasks today are related to handling large amounts of data, e.g. image processing as an example. Conventional von Neumann digital computers, with separate memory and processer units, become less and less efficient when large amount of data have to be moved around and processed quickly. Alternative approaches such as bio-inspired neuromorphic circuits, with distributed computing and localized storage in networks, become attractive options [2]?[6].",
"title": ""
},
{
"docid": "7f54157faf8041436174fa865d0f54a8",
"text": "The goal of robot learning from demonstra tion is to have a robot learn from watching a demonstration of the task to be performed In our approach to learning from demon stration the robot learns a reward function from the demonstration and a task model from repeated attempts to perform the task A policy is computed based on the learned reward function and task model Lessons learned from an implementation on an an thropomorphic robot arm using a pendulum swing up task include simply mimicking demonstrated motions is not adequate to per form this task a task planner can use a learned model and reward function to com pute an appropriate policy this model based planning process supports rapid learn ing both parametric and nonparametric models can be learned and used and in corporating a task level direct learning com ponent which is non model based in addi tion to the model based planner is useful in compensating for structural modeling errors and slow model learning",
"title": ""
},
{
"docid": "7882d2d18bc8a30a63e9fdb726c48ff1",
"text": "Flying ad-hoc networks (FANETs) are a very vibrant research area nowadays. They have many military and civil applications. Limited battery energy and the high mobility of micro unmanned aerial vehicles (UAVs) represent their two main problems, i.e., short flight time and inefficient routing. In this paper, we try to address both of these problems by means of efficient clustering. First, we adjust the transmission power of the UAVs by anticipating their operational requirements. Optimal transmission range will have minimum packet loss ratio (PLR) and better link quality, which ultimately save the energy consumed during communication. Second, we use a variant of the K-Means Density clustering algorithm for selection of cluster heads. Optimal cluster heads enhance the cluster lifetime and reduce the routing overhead. The proposed model outperforms the state of the art artificial intelligence techniques such as Ant Colony Optimization-based clustering algorithm and Grey Wolf Optimization-based clustering algorithm. The performance of the proposed algorithm is evaluated in term of number of clusters, cluster building time, cluster lifetime and energy consumption.",
"title": ""
},
{
"docid": "f7a2f86526209860d7ea89d3e7f2b576",
"text": "Natural Language Processing continues to grow in popularity in a range of research and commercial applications, yet managing the wide array of potential NLP components remains a difficult problem. This paper describes CURATOR, an NLP management framework designed to address some common problems and inefficiencies associated with building NLP process pipelines; and EDISON, an NLP data structure library in Java that provides streamlined interactions with CURATOR and offers a range of useful supporting functionality.",
"title": ""
},
{
"docid": "c1fa2b5da311edb241dca83edcf327a4",
"text": "The growing amount of web-based attacks poses a severe threat to the security of web applications. Signature-based detection techniques increasingly fail to cope with the variety and complexity of novel attack instances. As a remedy, we introduce a protocol-aware reverse HTTP proxy TokDoc (the token doctor), which intercepts requests and decides on a per-token basis whether a token requires automatic \"healing\". In particular, we propose an intelligent mangling technique, which, based on the decision of previously trained anomaly detectors, replaces suspicious parts in requests by benign data the system has seen in the past. Evaluation of our system in terms of accuracy is performed on two real-world data sets and a large variety of recent attacks. In comparison to state-of-the-art anomaly detectors, TokDoc is not only capable of detecting most attacks, but also significantly outperforms the other methods in terms of false positives. Runtime measurements show that our implementation can be deployed as an inline intrusion prevention system.",
"title": ""
},
{
"docid": "0cdf08bd9c2e63f0c9bb1dd7472a23a8",
"text": "Under natural viewing conditions, human observers shift their gaze to allocate processing resources to subsets of the visual input. Many computational models try to predict such voluntary eye and attentional shifts. Although the important role of high level stimulus properties (e.g., semantic information) in search stands undisputed, most models are based on low-level image properties. We here demonstrate that a combined model of face detection and low-level saliency significantly outperforms a low-level model in predicting locations humans fixate on, based on eye-movement recordings of humans observing photographs of natural scenes, most of which contained at least one person. Observers, even when not instructed to look for anything particular, fixate on a face with a probability of over 80% within their first two fixations; furthermore, they exhibit more similar scanpaths when faces are present. Remarkably, our model’s predictive performance in images that do not contain faces is not impaired, and is even improved in some cases by spurious face detector responses.",
"title": ""
},
{
"docid": "11d130f2b757bab08c4d41169c29b3d5",
"text": "We present an approach to training a joint syntactic and semantic parser that combines syntactic training information from CCGbank with semantic training information from a knowledge base via distant supervision. The trained parser produces a full syntactic parse of any sentence, while simultaneously producing logical forms for portions of the sentence that have a semantic representation within the parser’s predicate vocabulary. We demonstrate our approach by training a parser whose semantic representation contains 130 predicates from the NELL ontology. A semantic evaluation demonstrates that this parser produces logical forms better than both comparable prior work and a pipelined syntax-then-semantics approach. A syntactic evaluation on CCGbank demonstrates that the parser’s dependency Fscore is within 2.5% of state-of-the-art.",
"title": ""
},
{
"docid": "729b29b5ab44102541f3ebf8d24efec3",
"text": "In the cognitive neuroscience literature on the distinction between categorical and coordinate spatial relations, it has often been observed that categorical spatial relations are referred to linguistically by words like English prepositions, many of which specify binary oppositions-e.g., above/below, left/right, on/off, in/out. However, the actual semantic content of English prepositions, and of comparable word classes in other languages, has not been carefully considered. This paper has three aims. The first and most important aim is to inform cognitive neuroscientists interested in spatial representation about relevant research on the kinds of categorical spatial relations that are encoded in the 6000+ languages of the world. Emphasis is placed on cross-linguistic similarities and differences involving deictic relations, topological relations, and projective relations, the last of which are organized around three distinct frames of reference--intrinsic, relative, and absolute. The second aim is to review what is currently known about the neuroanatomical correlates of linguistically encoded categorical spatial relations, with special focus on the left supramarginal and angular gyri, and to suggest ways in which cross-linguistic data can help guide future research in this area of inquiry. The third aim is to explore the interface between language and other mental systems, specifically by summarizing studies which suggest that although linguistic and perceptual/cognitive representations of space are at least partially distinct, language nevertheless has the power to bring about not only modifications of perceptual sensitivities but also adjustments of cognitive styles.",
"title": ""
},
{
"docid": "0acf9ef6e025805a76279d1c6c6c55e7",
"text": "Android mobile devices are enjoying a lion's market share in smartphones and mobile devices. This also attracts malware writers to target the Android platform. Recently, we have discovered a new Android malware distribution channel: releasing malicious firmwares with pre-installed malware to the wild. This poses significant risk since users of mobile devices cannot change the content of the malicious firmwares. Furthermore, pre-installed applications have \" more permissions\" (i.e., silent installation) than other legitimate mobile apps, so they can download more malware or access users' confidential information. To understand and address this new form of malware distribution channel, we design and implement \"DroidRay\": a security evaluation system for customized Android firmwares. DroidRay uses both static and dynamic analyses to evaluate the firmware security on both the application and system levels. To understand the impact of this new malware distribution channel, we analyze 250 Android firmwares and 24,009 pre-installed applications. We reveal how the malicious firmware and pre-installed malware are injected, and discovered 1,947 (8.1%) pre-installed applications have signature vulnerability and 19 (7.6%) firmwares contain pre-installed malware. In addition, 142 (56.8%) firmwares have the default signature vulnerability, five (2.0%) firmwares contain malicious hosts file, at most 40 (16.0%) firmwares have the native level privilege escalation vulnerability and at least 249 (99.6%) firmwares have the Java level privilege escalation vulnerability. Lastly, we investigate a real-world case of a pre-installed zero-day malware known as CEPlugnew, which involves 348,018 infected Android smartphones, and we show its degree and geographical penetration. This shows the significance of this new malware distribution channel, and DroidRay is an effective tool to combat this new form of malware spreading.",
"title": ""
},
{
"docid": "00eaa437ad2821482644ee75cfe6d7b3",
"text": "A 65nm digitally-modulated polar transmitter incorporates a fully-integrated 2.4GHz efficient switching Inverse Class D power amplifier. Low power digital filtering on the amplitude path helps remove spectral images for coexistence. The transmitter integrates the complete LO distribution network and digital drivers. Operating from a 1-V supply, the PA has 21.8dBm peak output power with 44% efficiency. Simple static predistortion helps the transmitter meet EVM and mask requirements of 802.11g 54Mbps WLAN standard with 18% average efficiency.",
"title": ""
},
{
"docid": "8756441420669a6845254242030e0a79",
"text": "We propose a recurrent neural network (RNN) based model for image multi-label classification. Our model uniquely integrates and learning of visual attention and Long Short Term Memory (LSTM) layers, which jointly learns the labels of interest and their co-occurrences, while the associated image regions are visually attended. Different from existing approaches utilize either model in their network architectures, training of our model does not require pre-defined label orders. Moreover, a robust inference process is introduced so that prediction errors would not propagate and thus affect the performance. Our experiments on NUS-WISE and MS-COCO datasets confirm the design of our network and its effectiveness in solving multi-label classification problems.",
"title": ""
},
{
"docid": "6987cb6d888d439220938d805cae29b0",
"text": "Entity Linking aims to link entity mentions in texts to knowledge bases, and neural models have achieved recent success in this task. However, most existing methods rely on local contexts to resolve entities independently, which may usually fail due to the data sparsity of local information. To address this issue, we propose a novel neural model for collective entity linking, named as NCEL. NCEL applies Graph Convolutional Network to integrate both local contextual features and global coherence information for entity linking. To improve the computation efficiency, we approximately perform graph convolution on a subgraph of adjacent entity mentions instead of those in the entire text. We further introduce an attention scheme to improve the robustness of NCEL to data noise and train the model on Wikipedia hyperlinks to avoid overfitting and domain bias. In experiments, we evaluate NCEL on five publicly available datasets to verify the linking performance as well as generalization ability. We also conduct an extensive analysis of time complexity, the impact of key modules, and qualitative results, which demonstrate the effectiveness and efficiency of our proposed method.",
"title": ""
},
{
"docid": "3840b8c709a8b2780b3d4a1b56bd986b",
"text": "A new scheme to resolve the intra-cell pilot collision for machine-to-machine (M2M) communication in crowded massive multiple-input multiple-output (MIMO) systems is proposed. The proposed scheme permits those failed user equipments (UEs), judged by a strongest-user collision resolution (SUCR) protocol, to contend for the idle pilots, i.e., the pilots that are not selected by any UE in the initial step. This scheme is called as SUCR combined idle pilots access (SUCR-IPA). To analyze the performance of the SUCR-IPA scheme, we develop a simple method to compute the access success probability of the UEs in each random access slot. The simulation results coincide well with the analysis. It is also shown that, compared with the SUCR protocol, the proposed SUCR-IPA scheme increases the throughput of the system significantly, and thus decreases the number of access attempts dramatically.",
"title": ""
}
] | scidocsrr |
e982cf99edeaf681206fcf5daaff79f7 | Lip reading using a dynamic feature of lip images and convolutional neural networks | [
{
"docid": "d5c4e44514186fa1d82545a107e87c94",
"text": "Recent research in computer vision has increasingly focused on building systems for observing humans and understanding their look, activities, and behavior providing advanced interfaces for interacting with humans, and creating sensible models of humans for various purposes. This paper presents a new algorithm for detecting moving objects from a static background scene based on frame difference. Firstly, the first frame is captured through the static camera and after that sequence of frames is captured at regular intervals. Secondly, the absolute difference is calculated between the consecutive frames and the difference image is stored in the system. Thirdly, the difference image is converted into gray image and then translated into binary image. Finally, morphological filtering is done to remove noise.",
"title": ""
}
] | [
{
"docid": "adb02577e7fba530c2406fbf53571d14",
"text": "Event-related potentials (ERPs) recorded from the human scalp can provide important information about how the human brain normally processes information and about how this processing may go awry in neurological or psychiatric disorders. Scientists using or studying ERPs must strive to overcome the many technical problems that can occur in the recording and analysis of these potentials. The methods and the results of these ERP studies must be published in a way that allows other scientists to understand exactly what was done so that they can, if necessary, replicate the experiments. The data must then be analyzed and presented in a way that allows different studies to be compared readily. This paper presents guidelines for recording ERPs and criteria for publishing the results.",
"title": ""
},
{
"docid": "720a3d65af4905cbffe74ab21d21dd3f",
"text": "Fluorescent carbon nanoparticles or carbon quantum dots (CQDs) are a new class of carbon nanomaterials that have emerged recently and have garnered much interest as potential competitors to conventional semiconductor quantum dots. In addition to their comparable optical properties, CQDs have the desired advantages of low toxicity, environmental friendliness low cost and simple synthetic routes. Moreover, surface passivation and functionalization of CQDs allow for the control of their physicochemical properties. Since their discovery, CQDs have found many applications in the fields of chemical sensing, biosensing, bioimaging, nanomedicine, photocatalysis and electrocatalysis. This article reviews the progress in the research and development of CQDs with an emphasis on their synthesis, functionalization and technical applications along with some discussion on challenges and perspectives in this exciting and promising field.",
"title": ""
},
{
"docid": "e86ad4e9b61df587d9e9e96ab4eb3978",
"text": "This work presents a novel objective function for the unsupervised training of neural network sentence encoders. It exploits signals from paragraph-level discourse coherence to train these models to understand text. Our objective is purely discriminative, allowing us to train models many times faster than was possible under prior methods, and it yields models which perform well in extrinsic evaluations.",
"title": ""
},
{
"docid": "e85b5115a489835bc58a48eaa727447a",
"text": "State-of-the art machine learning methods such as deep learning rely on large sets of hand-labeled training data. Collecting training data is prohibitively slow and expensive, especially when technical domain expertise is required; even the largest technology companies struggle with this challenge. We address this critical bottleneck with Snorkel, a new system for quickly creating, managing, and modeling training sets. Snorkel enables users to generate large volumes of training data by writing labeling functions, which are simple functions that express heuristics and other weak supervision strategies. These user-authored labeling functions may have low accuracies and may overlap and conflict, but Snorkel automatically learns their accuracies and synthesizes their output labels. Experiments and theory show that surprisingly, by modeling the labeling process in this way, we can train high-accuracy machine learning models even using potentially lower-accuracy inputs. Snorkel is currently used in production at top technology and consulting companies, and used by researchers to extract information from electronic health records, after-action combat reports, and the scientific literature. In this demonstration, we focus on the challenging task of information extraction, a common application of Snorkel in practice. Using the task of extracting corporate employment relationships from news articles, we will demonstrate and build intuition for a radically different way of developing machine learning systems which allows us to effectively bypass the bottleneck of hand-labeling training data.",
"title": ""
},
{
"docid": "4eec5be6b29425e025f9e1b23b742639",
"text": "There is increasing interest in sharing the experience of products and services on the web platform, and social media has opened a way for product and service providers to understand their consumers needs and expectations. This paper explores reviews by cloud consumers that reflect consumers experiences with cloud services. The reviews of around 6,000 cloud service users were analysed using sentiment analysis to identify the attitude of each review, and to determine whether the opinion expressed was positive, negative, or neutral. The analysis used two data mining tools, KNIME and RapidMiner, and the results were compared. We developed four prediction models in this study to predict the sentiment of users reviews. The proposed model is based on four supervised machine learning algorithms: K-Nearest Neighbour (k-NN), Nave Bayes, Random Tree, and Random Forest. The results show that the Random Forest predictions achieve 97.06% accuracy, which makes this model a better prediction model than the other three.",
"title": ""
},
{
"docid": "b988525d515588da8becc18c2aa21e82",
"text": "Numerical optimization has been used as an extension of vehicle dynamics simulation in order to reproduce trajectories and driving techniques used by expert race drivers and investigate the effects of several vehicle parameters in the stability limit operation of the vehicle. In this work we investigate how different race-driving techniques may be reproduced by considering different optimization cost functions. We introduce a bicycle model with suspension dynamics and study the role of the longitudinal load transfer in limit vehicle operation, i.e., when the tires operate at the adhesion limit. Finally we demonstrate that for certain vehicle configurations the optimal trajectory may include large slip angles (drifting), which matches the techniques used by rally-race drivers.",
"title": ""
},
{
"docid": "73d3f51bdb913749665674ae8aea3a41",
"text": "Extracting and validating emotional cues through analysis of users' facial expressions is of high importance for improving the level of interaction in man machine communication systems. Extraction of appropriate facial features and consequent recognition of the user's emotional state that can be robust to facial expression variations among different users is the topic of this paper. Facial animation parameters (FAPs) defined according to the ISO MPEG-4 standard are extracted by a robust facial analysis system, accompanied by appropriate confidence measures of the estimation accuracy. A novel neurofuzzy system is then created, based on rules that have been defined through analysis of FAP variations both at the discrete emotional space, as well as in the 2D continuous activation-evaluation one. The neurofuzzy system allows for further learning and adaptation to specific users' facial expression characteristics, measured though FAP estimation in real life application of the system, using analysis by clustering of the obtained FAP values. Experimental studies with emotionally expressive datasets, generated in the EC IST ERMIS project indicate the good performance and potential of the developed technologies.",
"title": ""
},
{
"docid": "d59c6a2dd4b6bf7229d71f3ae036328a",
"text": "Community search over large graphs is a fundamental problem in graph analysis. Recent studies propose to compute top-k influential communities, where each reported community not only is a cohesive subgraph but also has a high influence value. The existing approaches to the problem of top-k influential community search can be categorized as index-based algorithms and online search algorithms without indexes. The index-based algorithms, although being very efficient in conducting community searches, need to pre-compute a specialpurpose index and only work for one built-in vertex weight vector. In this paper, we investigate online search approaches and propose an instance-optimal algorithm LocalSearch whose time complexity is linearly proportional to the size of the smallest subgraph that a correct algorithm needs to access without indexes. In addition, we also propose techniques to make LocalSearch progressively compute and report the communities in decreasing influence value order such that k does not need to be specified. Moreover, we extend our framework to the general case of top-k influential community search regarding other cohesiveness measures. Extensive empirical studies on real graphs demonstrate that our algorithms outperform the existing online search algorithms by several orders of magnitude.",
"title": ""
},
{
"docid": "fc09e1c012016c75418ec33dfe5868d5",
"text": "Big data is the word used to describe structured and unstructured data. The term big data is originated from the web search companies who had to query loosely structured very large",
"title": ""
},
{
"docid": "36787667e41db8d9c164e39a89f0c533",
"text": "This paper presents an improvement of the well-known conventional three-phase diode bridge rectifier with dc output capacitor. The proposed circuit increases the power factor (PF) at the ac input and reduces the ripple current stress on the smoothing capacitor. The basic concept is the arrangement of an active voltage source between the output of the diode bridge and the smoothing capacitor which is controlled in a way that it emulates an ideal smoothing inductor. With this the input currents of the diode bridge which usually show high peak amplitudes are converted into a 120/spl deg/ rectangular shape which ideally results in a total PF of 0.955. The active voltage source mentioned before is realized by a low-voltage switch-mode converter stage of small power rating as compared to the output power of the rectifier. Starting with a brief discussion of basic three-phase rectifier techniques and of the drawbacks of three-phase diode bridge rectifiers with capacitive smoothing, the concept of the proposed active smoothing is described and the stationary operation is analyzed. Furthermore, control concepts as well as design considerations and analyses of the dynamic systems behavior are given. Finally, measurements taken from a laboratory model are presented.",
"title": ""
},
{
"docid": "1d1cec012f9f78b40a0931ae5dea53d0",
"text": "Recursive subdivision using interval arithmetic allows us to render CSG combinations of implicit function surfaces with or without anti -aliasing, Related algorithms will solve the collision detection problem for dynamic simulation, and allow us to compute mass. center of gravity, angular moments and other integral properties required for Newtonian dynamics. Our hidden surface algorithms run in ‘constant time.’ Their running times are nearly independent of the number of primitives in a scene, for scenes in which the visible details are not much smaller than the pixels. The collision detection and integration algorithms are utterly robust — collisions are never missed due 10 numerical error and we can provide guaranteed bounds on the values of integrals. CR",
"title": ""
},
{
"docid": "c24bd4156e65d57eda0add458304988c",
"text": "Graphene is enabling a plethora of applications in a wide range of fields due to its unique electrical, mechanical, and optical properties. Among them, graphene-based plasmonic miniaturized antennas (or shortly named, graphennas) are garnering growing interest in the field of communications. In light of their reduced size, in the micrometric range, and an expected radiation frequency of a few terahertz, graphennas offer means for the implementation of ultra-short-range wireless communications. Motivated by their high radiation frequency and potentially wideband nature, this paper presents a methodology for the time-domain characterization and evaluation of graphennas. The proposed framework is highly vertical, as it aims to build a bridge between technological aspects, antenna design, and communications. Using this approach, qualitative and quantitative analyses of a particular case of graphenna are carried out as a function of two critical design parameters, namely, chemical potential and carrier mobility. The results are then compared to the performance of equivalent metallic antennas. Finally, the suitability of graphennas for ultra-short-range communications is briefly discussed.",
"title": ""
},
{
"docid": "ed509de8786ee7b4ba0febf32d0c87f7",
"text": "Threat detection and analysis are indispensable processes in today's cyberspace, but current state of the art threat detection is still limited to specific aspects of modern malicious activities due to the lack of information to analyze. By measuring and collecting various types of data, from traffic information to human behavior, at different vantage points for a long duration, the viewpoint seems to be helpful to deeply inspect threats, but faces scalability issues as the amount of collected data grows, since more computational resources are required for the analysis. In this paper, we report our experience from operating the Hadoop platform, called MATATABI, for threat detections, and present the micro-benchmarks with four different backends of data processing in typical use cases such as log data and packet trace analysis. The benchmarks demonstrate the advantages of distributed computation in terms of performance. Our extensive use cases of analysis modules showcase the potential benefit of deploying our threat analysis platform.",
"title": ""
},
{
"docid": "90f188c1f021c16ad7c8515f1244c08a",
"text": "Minimally invasive principles should be the driving force behind rehabilitating young individuals affected by severe dental erosion. The maxillary anterior teeth of a patient, class ACE IV, has been treated following the most conservatory approach, the Sandwich Approach. These teeth, if restored by conventional dentistry (eg, crowns) would have required elective endodontic therapy and crown lengthening. To preserve the pulp vitality, six palatal resin composite veneers and four facial ceramic veneers were delivered instead with minimal, if any, removal of tooth structure. In this article, the details about the treatment are described.",
"title": ""
},
{
"docid": "895d5b01e984ef072b834976e0dfe378",
"text": "Cross-lingual or cross-domain correspondences play key roles in tasks ranging from machine translation to transfer learning. Recently, purely unsupervised methods operating on monolingual embeddings have become effective alignment tools. Current state-of-theart methods, however, involve multiple steps, including heuristic post-hoc refinement strategies. In this paper, we cast the correspondence problem directly as an optimal transport (OT) problem, building on the idea that word embeddings arise from metric recovery algorithms. Indeed, we exploit the GromovWasserstein distance that measures how similarities between pairs of words relate across languages. We show that our OT objective can be estimated efficiently, requires little or no tuning, and results in performance comparable with the state-of-the-art in various unsupervised word translation tasks.",
"title": ""
},
{
"docid": "caf866341ad9f74b1ac1dc8572f6e95c",
"text": "One important but often overlooked aspect of human contexts of ubiquitous computing environment is human’s emotional status. And, there are no realistic and robust humancentric contents services so far, because there are few considers about combining context awareness computing with wearable computing for improving suitability of contents to each user’s needs. In this paper, we discuss combining context awareness computing with wearable computing to develop more effective personalized services. And we propose new algorithms to develop efficiently personalized emotion based content service system.",
"title": ""
},
{
"docid": "ec26505d813ed98ac3f840ea54358873",
"text": "In this paper we address cardinality estimation problem which is an important subproblem in query optimization. Query optimization is a part of every relational DBMS responsible for finding the best way of the execution for the given query. These ways are called plans. The execution time of different plans may differ by several orders, so query optimizer has a great influence on the whole DBMS performance. We consider cost-based query optimization approach as the most popular one. It was observed that costbased optimization quality depends much on cardinality estimation quality. Cardinality of the plan node is the number of tuples returned by it. In the paper we propose a novel cardinality estimation approach with the use of machine learning methods. The main point of the approach is using query execution statistics of the previously executed queries to improve cardinality estimations. We called this approach adaptive cardinality estimation to reflect this point. The approach is general, flexible, and easy to implement. The experimental evaluation shows that this approach significantly increases the quality of cardinality estimation, and therefore increases the DBMS performance for some queries by several times or even by several dozens of times.",
"title": ""
},
{
"docid": "06ba0cd00209a7f4f200395b1662003e",
"text": "Changes in human DNA methylation patterns are an important feature of cancer development and progression and a potential role in other conditions such as atherosclerosis and autoimmune diseases (e.g., multiple sclerosis and lupus) is being recognised. The cancer genome is frequently characterised by hypermethylation of specific genes concurrently with an overall decrease in the level of 5 methyl cytosine. This hypomethylation of the genome largely affects the intergenic and intronic regions of the DNA, particularly repeat sequences and transposable elements, and is believed to result in chromosomal instability and increased mutation events. This review examines our understanding of the patterns of cancer-associated hypomethylation, and how recent advances in understanding of chromatin biology may help elucidate the mechanisms underlying repeat sequence demethylation. It also considers how global demethylation of repeat sequences including transposable elements and the site-specific hypomethylation of certain genes might contribute to the deleterious effects that ultimately result in the initiation and progression of cancer and other diseases. The use of hypomethylation of interspersed repeat sequences and genes as potential biomarkers in the early detection of tumors and their prognostic use in monitoring disease progression are also examined.",
"title": ""
},
{
"docid": "ff08d2e0d53f2d9a7d49f0fdd820ec7a",
"text": "Milk contains numerous nutrients. The content of n-3 fatty acids, the n-6/n-3 ratio, and short- and medium-chain fatty acids may promote positive health effects. In Western societies, cow’s milk fat is perceived as a risk factor for health because it is a source of a high fraction of saturated fatty acids. Recently, there has been increasing interest in donkey’s milk. In this work, the fat and energetic value and acidic composition of donkey’s milk, with reference to human nutrition, and their variations during lactation, were investigated. We also discuss the implications of the acidic profile of donkey’s milk on human nutrition. Individual milk samples from lactating jennies were collected 15, 30, 45, 60, 90, 120, 150, 180 and 210days after foaling, for the analysis of fat, proteins and lactose, which was achieved using an infrared milk analyser, and fatty acids composition by gas chromatography. The donkey’s milk was characterised by low fat and energetic (1719.2kJ·kg-1) values, a high polyunsaturated fatty acids (PUFA) content of mainly α-linolenic acid (ALA) and linoleic acid (LA), a low n-6 to n-3 FA ratio or LA/ALA ratio, and advantageous values of atherogenic and thrombogenic indices. Among the minor PUFA, docosahesaenoic (DHA), eicosapentanoic (EPA), and arachidonic (AA) acids were present in very small amounts (<1%). In addition, the AA/EPA ratio was low (0.18). The fat and energetic values decreased (P < 0.01) during lactation. The fatty acid patterns were affected by the lactation stage and showed a decrease (P < 0.01) in saturated fatty acids content and an increase (P < 0.01) in the unsaturated fatty acids content. The n-6 to n-3 ratio and the LA/ALA ratio were approximately 2:1, with values <1 during the last period of lactation, suggesting the more optimal use of milk during this period. The high level of unsaturated/saturated fatty acids and PUFA-n3 content and the low n-6/n-3 ratio suggest the use of donkey’s milk as a functional food for human nutrition and its potential utilisation for infant nutrition as well as adult diets, particular for the elderly.",
"title": ""
},
{
"docid": "5daeccb1a01df4f68f23c775828be41d",
"text": "This article surveys the research and development of Engineered Cementitious Composites (ECC) over the last decade since its invention in the early 1990’s. The importance of micromechanics in the materials design strategy is emphasized. Observations of unique characteristics of ECC based on a broad range of theoretical and experimental research are examined. The advantageous use of ECC in certain categories of structural, and repair and retrofit applications is reviewed. While reflecting on past advances, future challenges for continued development and deployment of ECC are noted. This article is based on a keynote address given at the International Workshop on Ductile Fiber Reinforced Cementitious Composites (DFRCC) – Applications and Evaluations, sponsored by the Japan Concrete Institute, and held in October 2002 at Takayama, Japan.",
"title": ""
}
] | scidocsrr |
aefff8b42a9a99977c326fb52e70fbaf | A Novel Association Rule Mining Method of Big Data for Power Transformers State Parameters Based on Probabilistic Graph Model | [
{
"docid": "55b405991dc250cd56be709d53166dca",
"text": "In Data Mining, the usefulness of association rules is strongly limited by the huge amount of delivered rules. To overcome this drawback, several methods were proposed in the literature such as item set concise representations, redundancy reduction, and post processing. However, being generally based on statistical information, most of these methods do not guarantee that the extracted rules are interesting for the user. Thus, it is crucial to help the decision-maker with an efficient post processing step in order to reduce the number of rules. This paper proposes a new interactive approach to prune and filter discovered rules. First, we propose to use ontologies in order to improve the integration of user knowledge in the post processing task. Second, we propose the Rule Schema formalism extending the specification language proposed by Liu et al. for user expectations. Furthermore, an interactive framework is designed to assist the user throughout the analyzing task. Applying our new approach over voluminous sets of rules, we were able, by integrating domain expert knowledge in the post processing step, to reduce the number of rules to several dozens or less. Moreover, the quality of the filtered rules was validated by the domain expert at various points in the interactive process. KeywordsClustering, classification, and association rules, interactive data exploration and discovery, knowledge management applications.",
"title": ""
}
] | [
{
"docid": "49f4fd5bcb184e64a9874b864979eb79",
"text": "A major research goal for compilers and environments is the automatic derivation of tools from formal specifications. However, the formal model of the language is often inadequate; in particular, LR(k) grammars are unable to describe the natural syntax of many languages, such as C++ and Fortran, which are inherently non-deterministic. Designers of batch compilers work around such limitations by combining generated components with ad hoc techniques (for instance, performing partial type and scope analysis in tandem with parsing). Unfortunately, the complexity of incremental systems precludes the use of batch solutions. The inability to generate incremental tools for important languages inhibits the widespread use of language-rich interactive environments.We address this problem by extending the language model itself, introducing a program representation based on parse dags that is suitable for both batch and incremental analysis. Ambiguities unresolved by one stage are retained in this representation until further stages can complete the analysis, even if the reaolution depends on further actions by the user. Representing ambiguity explicitly increases the number and variety of languages that can be analyzed incrementally using existing methods.To create this representation, we have developed an efficient incremental parser for general context-free grammars. Our algorithm combines Tomita's generalized LR parser with reuse of entire subtrees via state-matching. Disambiguation can occur statically, during or after parsing, or during semantic analysis (using existing incremental techniques); program errors that preclude disambiguation retsin multiple interpretations indefinitely. Our representation and analyses gain efficiency by exploiting the local nature of ambiguities: for the SPEC95 C programs, the explicit representation of ambiguity requires only 0.5% additional space and less than 1% additional time during reconstruction.",
"title": ""
},
{
"docid": "ad59ca3f7c945142baf9353eeb68e504",
"text": "This essay considers dynamic security design and corporate financing, with particular emphasis on informational micro-foundations. The central idea is that firm insiders must retain an appropriate share of firm risk, either to align their incentives with those of outside investors (moral hazard) or to signal favorable information about the quality of the firm’s assets. Informational problems lead to inevitable inefficiencies imperfect risk sharing, the possibility of bankruptcy, investment distortions, etc. The design of contracts that minimize these inefficiencies is a central question. This essay explores the implications of dynamic security design on firm operations and asset prices.",
"title": ""
},
{
"docid": "63e58ac7e6f3b4a463e8f8182fee9be5",
"text": "In this work, we propose “global style tokens” (GSTs), a bank of embeddings that are jointly trained within Tacotron, a state-of-the-art end-toend speech synthesis system. The embeddings are trained with no explicit labels, yet learn to model a large range of acoustic expressiveness. GSTs lead to a rich set of significant results. The soft interpretable “labels” they generate can be used to control synthesis in novel ways, such as varying speed and speaking style – independently of the text content. They can also be used for style transfer, replicating the speaking style of a single audio clip across an entire long-form text corpus. When trained on noisy, unlabeled found data, GSTs learn to factorize noise and speaker identity, providing a path towards highly scalable but robust speech synthesis.",
"title": ""
},
{
"docid": "3ea5607d04419aae36592b6dcce25304",
"text": "Optimization problems with rank constraints arise in many applications, including matrix regression, structured PCA, matrix completion and matrix decomposition problems. An attractive heuristic for solving such problems is to factorize the low-rank matrix, and to run projected gradient descent on the nonconvex factorized optimization problem. The goal of this problem is to provide a general theoretical framework for understanding when such methods work well, and to characterize the nature of the resulting fixed point. We provide a simple set of conditions under which projected gradient descent, when given a suitable initialization, converges geometrically to a statistically useful solution. Our results are applicable even when the initial solution is outside any region of local convexity, and even when the problem is globally concave. Working in a non-asymptotic framework, we show that our conditions are satisfied for a wide range of concrete models, including matrix regression, structured PCA, matrix completion with real and quantized observations, matrix decomposition, and graph clustering problems. Simulation results show excellent agreement with the theoretical predictions.",
"title": ""
},
{
"docid": "298df39e9b415bc1eed95ed56d3f32df",
"text": "In this work, we present a true 3D 128 Gb 2 bit/cell vertical-NAND (V-NAND) Flash product for the first time. The use of barrier-engineered materials and gate all-around structure in the 3D V-NAND cell exhibits advantages over 1 × nm planar NAND, such as small Vth shift due to small cell coupling and narrow natural Vth distribution. Also, a negative counter-pulse scheme realizes a tightly programmed cell distribution. In order to reduce the effect of a large WL coupling, a glitch-canceling discharge scheme and a pre-offset control scheme is implemented. Furthermore, an external high-voltage supply scheme along with the proper protection scheme for a high-voltage failure is used to achieve low power consumption. The chip accomplishes 50 MB/s write throughput with 3 K endurance for typical embedded applications. Also, extended endurance of 35 K is achieved with 36 MB/s of write throughput for data center and enterprise SSD applications.",
"title": ""
},
{
"docid": "2e1cb87045b5356a965aa52e9e745392",
"text": "Community detection is a common problem in graph data analytics that consists of finding groups of densely connected nodes with few connections to nodes outside of the group. In particular, identifying communities in large-scale networks is an important task in many scientific domains. In this review, we evaluated eight state-of-the-art and five traditional algorithms for overlapping and disjoint community detection on large-scale real-world networks with known ground-truth communities. These 13 algorithms were empirically compared using goodness metrics that measure the structural properties of the identified communities, as well as performance metrics that evaluate these communities against the ground-truth. Our results show that these two types of metrics are not equivalent. That is, an algorithm may perform well in terms of goodness metrics, but poorly in terms of performance metrics, or vice versa. © 2014 The Authors. WIREs Computational Statistics published by Wiley Periodicals, Inc.",
"title": ""
},
{
"docid": "4ef36c602963036f928b9dcb75592f78",
"text": "Health care-associated infections constitute one of the greatest challenges of modern medicine. Despite compelling evidence that proper hand washing can reduce the transmission of pathogens to patients and the spread of antimicrobial resistance, the adherence of health care workers to recommended hand-hygiene practices has remained unacceptably low. One of the key elements in improving hand-hygiene practice is the use of an alcohol-based hand rub instead of washing with soap and water. An alcohol-based hand rub requires less time, is microbiologically more effective, and is less irritating to skin than traditional hand washing with soap and water. Therefore, alcohol-based hand rubs should replace hand washing as the standard for hand hygiene in health care settings in all situations in which the hands are not visibly soiled. It is also important to change gloves between each patient contact and to use hand-hygiene procedures after glove removal. Reducing health care-associated infections requires that health care workers take responsibility for ensuring that hand hygiene becomes an everyday part of patient care.",
"title": ""
},
{
"docid": "ef3b9dd6b463940bc57cdf7605c24b1e",
"text": "With the rapid development of cloud storage, data security in storage receives great attention and becomes the top concern to block the spread development of cloud service. In this paper, we systematically study the security researches in the storage systems. We first present the design criteria that are used to evaluate a secure storage system and summarize the widely adopted key technologies. Then, we further investigate the security research in cloud storage and conclude the new challenges in the cloud environment. Finally, we give a detailed comparison among the selected secure storage systems and draw the relationship between the key technologies and the design criteria.",
"title": ""
},
{
"docid": "3a757d129c52b5c07c514d613795afce",
"text": "Camera motion estimation is useful for a range of applications. Usually, feature tracking is performed through the sequence of images to determine correspondences. Furthermore, robust statistical techniques are normally used to handle large number of outliers in correspondences. This paper proposes a new method that avoids both. Motion is calculated between two consecutive stereo images without any pre-knowledge or prediction about feature location or the possibly large camera movement. This permits a lower frame rate and almost arbitrary movements. Euclidean constraints are used to incrementally select inliers from a set of initial correspondences, instead of using robust statistics that has to handle all inliers and outliers together. These constraints are so strong that the set of initial correspondences can contain several times more outliers than inliers. Experiments on a worst-case stereo sequence show that the method is robust, accurate and can be used in real-time.",
"title": ""
},
{
"docid": "d026ebfc24e3e48d0ddb373f71d63162",
"text": "The claustrum has been proposed as a possible neural candidate for the coordination of conscious experience due to its extensive ‘connectome’. Herein we propose that the claustrum contributes to consciousness by supporting the temporal integration of cortical oscillations in response to multisensory input. A close link between conscious awareness and interval timing is suggested by models of consciousness and conjunctive changes in meta-awareness and timing in multiple contexts and conditions. Using the striatal beatfrequency model of interval timing as a framework, we propose that the claustrum integrates varying frequencies of neural oscillations in different sensory cortices into a coherent pattern that binds different and overlapping temporal percepts into a unitary conscious representation. The proposed coordination of the striatum and claustrum allows for time-based dimensions of multisensory integration and decision-making to be incorporated into consciousness.",
"title": ""
},
{
"docid": "e0a8035f9e61c78a482f2e237f7422c6",
"text": "Aims: This paper introduces how substantial decision-making and leadership styles relates with each other. Decision-making styles are connected with leadership practices and institutional arrangements. Study Design: Qualitative research approach was adopted in this study. A semi structure interview was use to elicit data from the participants on both leadership styles and decision-making. Place and Duration of Study: Institute of Education international Islamic University",
"title": ""
},
{
"docid": "4872da79e7d01e8bb2a70ab17c523118",
"text": "In recent years, social media has become a customer touch-point for the business functions of marketing, sales and customer service. We aim to show that intention analysis might be useful to these business functions and that it can be performed effectively on short texts (at the granularity level of a single sentence). We demonstrate a scheme of categorization of intentions that is amenable to automation using simple machine learning techniques that are language-independent. We discuss the grounding that this scheme of categorization has in speech act theory. In the demonstration we go over a number of usage scenarios in an attempt to show that the use of automatic intention detection tools would benefit the business functions of sales, marketing and service. We also show that social media can be used not just to convey pleasure or displeasure (that is, to express sentiment) but also to discuss personal needs and to report problems (to express intentions). We evaluate methods for automatically discovering intentions in text, and establish that it is possible to perform intention analysis on social media with an accuracy of 66.97%± 0.10%.",
"title": ""
},
{
"docid": "ce12e1d38a2757c621a50209db5ce008",
"text": "Schloss Reisensburg. Physica-Verlag, 1994. Summary Traditional tests of the accuracy of statistical software have been based on a few limited paradigms for ordinary least squares regression. Test suites based on these criteria served the statistical computing community well when software was limited to a few simple procedures. Recent developments in statistical computing require both more and less sophisticated measures, however. We need tests for a broader variety of procedures and ones which are more likely to reveal incompetent programming. This paper summarizes these issues.",
"title": ""
},
{
"docid": "04b7d1197e9e5d78e948e0c30cbdfcfe",
"text": "Context: Software development depends significantly on team performance, as does any process that involves human interaction. Objective: Most current development methods argue that teams should self-manage. Our objective is thus to provide a better understanding of the nature of self-managing agile teams, and the teamwork challenges that arise when introducing such teams. Method: We conducted extensive fieldwork for 9 months in a software development company that introduced Scrum. We focused on the human sensemaking, on how mechanisms of teamwork were understood by the people involved. Results: We describe a project through Dickinson and McIntyre’s teamwork model, focusing on the interrelations between essential teamwork components. Problems with team orientation, team leadership and coordination in addition to highly specialized skills and corresponding division of work were important barriers for achieving team effectiveness. Conclusion: Transitioning from individual work to self-managing teams requires a reorientation not only by developers but also by management. This transition takes time and resources, but should not be neglected. In addition to Dickinson and McIntyre’s teamwork components, we found trust and shared mental models to be of fundamental importance. 2009 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "29cbdeb95a221820a6425e1249763078",
"text": "The concept of “Industry 4.0” that covers the topics of Internet of Things, cyber-physical system, and smart manufacturing, is a result of increasing demand of mass customized manufacturing. In this paper, a smart manufacturing framework of Industry 4.0 is presented. In the proposed framework, the shop-floor entities (machines, conveyers, etc.), the smart products and the cloud can communicate and negotiate interactively through networks. The shop-floor entities can be considered as agents based on the theory of multi-agent system. These agents implement dynamic reconfiguration in a collaborative manner to achieve agility and flexibility. However, without global coordination, problems such as load-unbalance and inefficiency may occur due to different abilities and performances of agents. Therefore, the intelligent evaluation and control algorithms are proposed to reduce the load-unbalance with the assistance of big data feedback. The experimental results indicate that the presented algorithms can easily be deployed in smart manufacturing system and can improve both load-balance and efficiency.",
"title": ""
},
{
"docid": "ff5d2e3b2c2e5200f70f2644bbc521d6",
"text": "The idea that the conceptual system draws on sensory and motor systems has received considerable experimental support in recent years. Whether the tight coupling between sensory-motor and conceptual systems is modulated by factors such as context or task demands is a matter of controversy. Here, we tested the context sensitivity of this coupling by using action verbs in three different types of sentences in an fMRI study: literal action, apt but non-idiomatic action metaphors, and action idioms. Abstract sentences served as a baseline. The result showed involvement of sensory-motor areas for literal and metaphoric action sentences, but not for idiomatic ones. A trend of increasing sensory-motor activation from abstract to idiomatic to metaphoric to literal sentences was seen. These results support a gradual abstraction process whereby the reliance on sensory-motor systems is reduced as the abstractness of meaning as well as conventionalization is increased, highlighting the context sensitive nature of semantic processing.",
"title": ""
},
{
"docid": "1dcc48994fada1b46f7b294e08f2ed5d",
"text": "This paper presents an application-specific integrated processor for an angular estimation system that works with 9-D inertial measurement units. The application-specific instruction-set processor (ASIP) was implemented on field-programmable gate array and interfaced with a gyro-plus-accelerometer 6-D sensor and with a magnetic compass. Output data were recorded on a personal computer and also used to perform a live demo. During system modeling and design, it was chosen to represent angular position data with a quaternion and to use an extended Kalman filter as sensor fusion algorithm. For this purpose, a novel two-stage filter was designed: The first stage uses accelerometer data, and the second one uses magnetic compass data for angular position correction. This allows flexibility, less computational requirements, and robustness to magnetic field anomalies. The final goal of this work is to realize an upgraded application-specified integrated circuit that controls the microelectromechanical systems (MEMS) sensor and integrates the ASIP. This will allow the MEMS sensor gyro plus accelerometer and the angular estimation system to be contained in a single package; this system might optionally work with an external magnetic compass.",
"title": ""
},
{
"docid": "cf5205e3b27867324ef86f18083653de",
"text": "Sometimes, in order to properly restore teeth, surgical intervention in the form of a crown-lengthening procedure is required. Crown lengthening is a periodontal resective procedure, aimed at removing supporting periodontal structures to gain sound tooth structure above the alveolar crest level. Periodontal health is of paramount importance for all teeth, both sound and restored. For the restorative dentist to utilize crown lengthening, it is important to understand the concept of biologic width, indications, techniques and other principles. This article reviews these basic concepts of clinical crown lengthening and presents four clinical cases utilizing crown lengthening as an integral part of treatments, to restore teeth and their surrounding tissues to health.",
"title": ""
},
{
"docid": "fb87648c3bb77b1d9b162a8e9dbc5e86",
"text": "With the success of new computational architectures for visual processing, such as convolutional neural networks (CNN) and access to image databases with millions of labeled examples (e.g., ImageNet, Places), the state of the art in computer vision is advancing rapidly. One important factor for continued progress is to understand the representations that are learned by the inner layers of these deep architectures. Here we show that object detectors emerge from training CNNs to perform scene classification. As scenes are composed of objects, the CNN for scene classification automatically discovers meaningful objects detectors, representative of the learned scene categories. With object detectors emerging as a result of learning to recognize scenes, our work demonstrates that the same network can perform both scene recognition and object localization in a single forward-pass, without ever having been explicitly taught the notion of objects.",
"title": ""
},
{
"docid": "be0f836ec6431b74342b670921ac41f7",
"text": "This paper addresses the issue of expert finding in a social network. The task of expert finding, as one of the most important research issues in social networks, is aimed at identifying persons with relevant expertise or experience for a given topic. In this paper, we propose a propagation-based approach that takes into consideration of both person local information and network information (e.g. relationships between persons). Experimental results show that our approach can outperform the baseline approach.",
"title": ""
}
] | scidocsrr |
7405964a85c0b239ba7e1c7f80564e15 | A Kernel Fuzzy c-Means Clustering-Based Fuzzy Support Vector Machine Algorithm for Classification Problems With Outliers or Noises | [
{
"docid": "700d3e2cb64624df33ef411215d073ab",
"text": "A novel type of learning machine called support vector machine (SVM) has been receiving increasing interest in areas ranging from its original application in pattern recognition to other applications such as regression estimation due to its remarkable generalization performance. This paper deals with the application of SVM in financial time series forecasting. The feasibility of applying SVM in financial forecasting is first examined by comparing it with the multilayer back-propagation (BP) neural network and the regularized radial basis function (RBF) neural network. The variability in performance of SVM with respect to the free parameters is investigated experimentally. Adaptive parameters are then proposed by incorporating the nonstationarity of financial time series into SVM. Five real futures contracts collated from the Chicago Mercantile Market are used as the data sets. The simulation shows that among the three methods, SVM outperforms the BP neural network in financial forecasting, and there are comparable generalization performance between SVM and the regularized RBF neural network. Furthermore, the free parameters of SVM have a great effect on the generalization performance. SVM with adaptive parameters can both achieve higher generalization performance and use fewer support vectors than the standard SVM in financial forecasting.",
"title": ""
}
] | [
{
"docid": "7fd33ebd4fec434dba53b15d741fdee4",
"text": "We present a data-efficient representation learning approach to learn video representation with small amount of labeled data. We propose a multitask learning model ActionFlowNet to train a single stream network directly from raw pixels to jointly estimate optical flow while recognizing actions with convolutional neural networks, capturing both appearance and motion in a single model. Our model effectively learns video representation from motion information on unlabeled videos. Our model significantly improves action recognition accuracy by a large margin (23.6%) compared to state-of-the-art CNN-based unsupervised representation learning methods trained without external large scale data and additional optical flow input. Without pretraining on large external labeled datasets, our model, by well exploiting the motion information, achieves competitive recognition accuracy to the models trained with large labeled datasets such as ImageNet and Sport-1M.",
"title": ""
},
{
"docid": "1cc586730cf0c1fd57cf6ff7548abe24",
"text": "Researchers have proposed various methods to extract 3D keypoints from the surface of 3D mesh models over the last decades, but most of them are based on geometric methods, which lack enough flexibility to meet the requirements for various applications. In this paper, we propose a new method on the basis of deep learning by formulating the 3D keypoint detection as a regression problem using deep neural network (DNN) with sparse autoencoder (SAE) as our regression model. Both local information and global information of a 3D mesh model in multi-scale space are fully utilized to detect whether a vertex is a keypoint or not. SAE can effectively extract the internal structure of these two kinds of information and formulate highlevel features for them, which is beneficial to the regression model. Three SAEs are used to formulate the hidden layers of the DNN and then a logistic regression layer is trained to process the high-level features produced in the third SAE. Numerical experiments show that the proposed DNN based 3D keypoint detection algorithm outperforms current five state-of-the-art methods for various 3D mesh models.",
"title": ""
},
{
"docid": "c8be0e643c72c7abea1ad758ac2b49a8",
"text": "Visual attention plays an important role to understand images and demonstrates its effectiveness in generating natural language descriptions of images. On the other hand, recent studies show that language associated with an image can steer visual attention in the scene during our cognitive process. Inspired by this, we introduce a text-guided attention model for image captioning, which learns to drive visual attention using associated captions. For this model, we propose an exemplarbased learning approach that retrieves from training data associated captions with each image, and use them to learn attention on visual features. Our attention model enables to describe a detailed state of scenes by distinguishing small or confusable objects effectively. We validate our model on MSCOCO Captioning benchmark and achieve the state-of-theart performance in standard metrics.",
"title": ""
},
{
"docid": "96bddddd86976f4dff0b984ef062704b",
"text": "How do the structures of the medial temporal lobe contribute to memory? To address this question, we examine the neurophysiological correlates of both recognition and associative memory in the medial temporal lobe of humans, monkeys, and rats. These cross-species comparisons show that the patterns of mnemonic activity observed throughout the medial temporal lobe are largely conserved across species. Moreover, these findings show that neurons in each of the medial temporal lobe areas can perform both similar as well as distinctive mnemonic functions. In some cases, similar patterns of mnemonic activity are observed across all structures of the medial temporal lobe. In the majority of cases, however, the hippocampal formation and surrounding cortex signal mnemonic information in distinct, but complementary ways.",
"title": ""
},
{
"docid": "efd6856e774b258858c43d7746639317",
"text": "In this paper, we propose a vision-based robust vehicle distance estimation algorithm that supports motorists to rapidly perceive relative distance of oncoming and passing vehicles thereby minimizing the risk of hazardous circumstances. And, as it is expected, the silhouettes of background stationary objects may appear in the motion scene, which pop-up due to motion of the camera, which is mounted on dashboard of the host vehicle. To avoid the effect of false positive detection of stationary objects and to determine the ego motion a new Morphological Strip Matching Algorithm and Recursive Stencil Mapping Algorithm(MSM-RSMA)is proposed. A new series of stencils are created where non-stationary objects are taken off after detecting stationary objects by applying a shape matching technique to each image strip pair. Then the vertical shift is estimated recursively with new stencils with identified stationary background objects. Finally, relative comparison of known templates are used to estimate the distance, which is further certified by value obtained for vertical shift. We apply analysis of relative dimensions of bounding box of the detected vehicle with relevant templates to calculate the relative distance. We prove that our method is capable of providing a comparatively fast distance estimation while keeping its robustness in different environments changes.",
"title": ""
},
{
"docid": "01472364545392cad69b9c7e1f65f4bb",
"text": "The designing of power transmission network is a difficult task due to the complexity of power system. Due to complexity in the power system there is always a loss of the stability due to the fault. Whenever a fault is intercepted in system, the whole system goes to severe transients. These transients cause oscillation in phase angle which leads poor power quality. The nature of oscillation is increasing instead being sustained, which leads system failure in form of generator damage. To reduce and eliminate the unstable oscillations one needs to use a stabilizer which can generate a perfect compensatory signal in order to minimize the harmonics generated due to instability. This paper presents a Power System stabilizer to reduce oscillations due to small signal disturbance. Additionally, a hybrid approach is proposed using FOPID stabilizer with the PSS connected SMIB. Genetic algorithm (GA), Particle swarm optimization (PSO) and Grey Wolf Optimization (GWO) are used for the parameter tuning of the stabilizer. Reason behind the use of GA, PSO and GWO instead of conventional methods is that it search the parameter heuristically, which leads better results. The efficiency of proposed approach is observed by rotor angle and power angle deviations in the SMIB system.",
"title": ""
},
{
"docid": "e7519a25915e5bb5359d0365513cad40",
"text": "Statistical and machine learning algorithms are increasingly used to inform decisions that have large impacts on individuals’ lives. Examples include hiring [8], predictive policing [13], pre-trial risk assessment of recidivism[6, 2], and risk of violence while incarcerated [5]. In many of these cases, the outcome variable to which the predictive models are trained is observed with bias with respect to some legally protected classes. For example, police records do not constitute a representative sample of all crimes [12]. In particular, black drug users are arrested at a rate that is several times that of white drug users despite the fact that black and white populations are estimated by public health officials to use drugs at roughly the same rate [11]. Algorithms trained on such data will produce predictions that are biased against groups that are disproportionately represented in the training data. Several approaches have been proposed to correct unfair predictive models. The simplest approach is to exclude the protected variable(s) from the analysis, under the belief that doing so will result in “race-neutral” predictions [14]. Of course, simply excluding a protected variable is insufficient to avoid discriminatory predictions, as any included variables that are correlated with the protected variables still contain information about the protected characteristic. In the case of linear models, this phenomenon is well-known, and is referred to as omitted variable bias [4]. Another approach that has been proposed in the computer science literature is to remove information about the protected variables from the set of covariates to be used in predictive models [7, 3]. A third alternative is to modify the outcome variable. For example, [9] use a naive Bayes classifier to rank each observation and perturb the outcome such that predictions produced by the algorithm are independent of the protected variable. A discussion of several more algorithms for binary protected and outcome variables can be found in [10]. The approach we propose is most similar to [7], though we approach the problem from a statistical modeling perspective. We define a procedure consisting of a chain of conditional models. Within this framework, both protecting and adjusting variables of arbitrary type becomes natural. Whereas previous work has been limited to protecting only binary or categorical variables and adjusting a limited number of covariates, our proposed framework allows for an arbitrary number of variables",
"title": ""
},
{
"docid": "3ca7b7b8e07eb5943d6ce2acf9a6fa82",
"text": "Excessive heat generation and occurrence of partial discharge have been observed in end-turn stress grading (SG) system in form-wound machines under PWM voltage. In this paper, multi-winding stress grading (SG) system is proposed as a method to change resistance of SG per length. Although the maximum field at the edge of stator and CAT are in a trade-off relationship, analytical results suggest that we can suppress field and excessive heat generation at both stator and CAT edges by multi-winding of SG and setting the length of CAT appropriately. This is also experimentally confirmed by measuring potential distribution of model bar-coil and observing partial discharge and temperature rise.",
"title": ""
},
{
"docid": "2a7bd6fbce4fef6e319664090755858d",
"text": "AIM\nThis paper is a report of a study conducted to determine which occupational stressors are present in nurses' working environment; to describe and compare occupational stress between two educational groups of nurses; to estimate which stressors and to what extent predict nurses' work ability; and to determine if educational level predicts nurses' work ability.\n\n\nBACKGROUND\nNurses' occupational stress adversely affects their health and nursing quality. Higher educational level has been shown to have positive effects on the preservation of good work ability.\n\n\nMETHOD\nA cross-sectional study was conducted in 2006-2007. Questionnaires were distributed to a convenience sample of 1392 (59%) nurses employed at four university hospitals in Croatia (n = 2364). The response rate was 78% (n = 1086). Data were collected using the Occupational Stress Assessment Questionnaire and Work Ability Index Questionnaire.\n\n\nFINDINGS\nWe identified six major groups of occupational stressors: 'Organization of work and financial issues', 'public criticism', 'hazards at workplace', 'interpersonal conflicts at workplace', 'shift work' and 'professional and intellectual demands'. Nurses with secondary school qualifications perceived Hazards at workplace and Shift work as statistically significantly more stressful than nurses a with college degree. Predictors statistically significantly related with low work ability were: Organization of work and financial issues (odds ratio = 1.69, 95% confidence interval 122-236), lower educational level (odds ratio = 1.69, 95% confidence interval 122-236) and older age (odds ratio = 1.07, 95% confidence interval 1.05-1.09).\n\n\nCONCLUSION\nHospital managers should develop strategies to address and improve the quality of working conditions for nurses in Croatian hospitals. Providing educational and career prospects can contribute to decreasing nurses' occupational stress levels, thus maintaining their work ability.",
"title": ""
},
{
"docid": "159222cde67c2d08e0bde7996b422cd6",
"text": "Superficial thrombophlebitis of the dorsal vein of the penis, known as penile Mondor’s disease, is an uncommon genital disease. We report on a healthy 44-year-old man who presented with painful penile swelling, ecchymosis, and penile deviation after masturbation, which initially imitated a penile fracture. Thrombosis of the superficial dorsal vein of the penis without rupture of corpus cavernosum was found during surgical exploration. The patient recovered without erectile dysfunction.",
"title": ""
},
{
"docid": "1f05175a0dce51dcd7a1527dce2f1286",
"text": "The rapid growth in the volume of many real-world graphs (e.g., social networks, web graphs, and spatial networks) has led to the development of various vertex-centric distributed graph computing systems in recent years. However, real-world graphs from different domains have very different characteristics, which often create bottlenecks in vertex-centric parallel graph computation. We identify three such important characteristics from a wide spectrum of real-world graphs, namely (1)skewed degree distribution, (2)large diameter, and (3)(relatively) high density. Among them, only (1) has been studied by existing systems, but many real-world powerlaw graphs also exhibit the characteristics of (2) and (3). In this paper, we propose a block-centric framework, called Blogel, which naturally handles all the three adverse graph characteristics. Blogel programmers may think like a block and develop efficient algorithms for various graph problems. We propose parallel algorithms to partition an arbitrary graph into blocks efficiently, and blockcentric programs are then run over these blocks. Our experiments on large real-world graphs verified that Blogel is able to achieve orders of magnitude performance improvements over the state-ofthe-art distributed graph computing systems.",
"title": ""
},
{
"docid": "d761b2718cfcabe37b72768962492844",
"text": "In the most recent years, wireless communication networks have been facing a rapidly increasing demand for mobile traffic along with the evolvement of applications that require data rates of several 10s of Gbit/s. In order to enable the transmission of such high data rates, two approaches are possible in principle. The first one is aiming at systems operating with moderate bandwidths at 60 GHz, for example, where 7 GHz spectrum is dedicated to mobile services worldwide. However, in order to reach the targeted date rates, systems with high spectral efficiencies beyond 10 bit/s/Hz have to be developed, which will be very challenging. A second approach adopts moderate spectral efficiencies and requires ultra high bandwidths beyond 20 GHz. Such an amount of unregulated spectrum can be identified only in the THz frequency range, i.e. beyond 300 GHz. Systems operated at those frequencies are referred to as THz communication systems. The technology enabling small integrated transceivers with highly directive, steerable antennas becomes the key challenges at THz frequencies in face of the very high path losses. This paper gives an overview over THz communications, summarizing current research projects, spectrum regulations and ongoing standardization activities.",
"title": ""
},
{
"docid": "24fab96f67040ed6ac13ab0696b9421c",
"text": "In the past decade, resting-state functional MRI (R-fMRI) measures of brain activity have attracted considerable attention. Based on changes in the blood oxygen level-dependent signal, R-fMRI offers a novel way to assess the brain's spontaneous or intrinsic (i.e., task-free) activity with both high spatial and temporal resolutions. The properties of both the intra- and inter-regional connectivity of resting-state brain activity have been well documented, promoting our understanding of the brain as a complex network. Specifically, the topological organization of brain networks has been recently studied with graph theory. In this review, we will summarize the recent advances in graph-based brain network analyses of R-fMRI signals, both in typical and atypical populations. Application of these approaches to R-fMRI data has demonstrated non-trivial topological properties of functional networks in the human brain. Among these is the knowledge that the brain's intrinsic activity is organized as a small-world, highly efficient network, with significant modularity and highly connected hub regions. These network properties have also been found to change throughout normal development, aging, and in various pathological conditions. The literature reviewed here suggests that graph-based network analyses are capable of uncovering system-level changes associated with different processes in the resting brain, which could provide novel insights into the understanding of the underlying physiological mechanisms of brain function. We also highlight several potential research topics in the future.",
"title": ""
},
{
"docid": "dfb78a96f9af81aa3f4be1a28e4ce0a2",
"text": "This paper presents two ultra-high-speed SerDes dedicated for PAM4 and NRZ data. The PAM4 TX incorporates an output driver with 3-tap FFE and adjustable weighting to deliver clean outputs at 4 levels, and the PAM4 RX employs a purely linear full-rate CDR and CTLE/1-tap DFE combination to recover and demultiplex the data. NRZ TX includes a tree-structure MUX with built-in PLL and phase aligner. NRZ RX adopts linear PD with special vernier technique to handle the 56 Gb/s input data. All chips have been verified in silicon with reasonable performance, providing prospective design examples for next-generation 400 GbE.",
"title": ""
},
{
"docid": "2fa3e2a710cc124da80941545fbdffa4",
"text": "INTRODUCTION\nThe use of computer-generated 3-dimensional (3-D) anatomical models to teach anatomy has proliferated. However, there is little evidence that these models are educationally effective. The purpose of this study was to test the educational effectiveness of a computer-generated 3-D model of the middle and inner ear.\n\n\nMETHODS\nWe reconstructed a fully interactive model of the middle and inner ear from a magnetic resonance imaging scan of a human cadaver ear. To test the model's educational usefulness, we conducted a randomised controlled study in which 28 medical students completed a Web-based tutorial on ear anatomy that included the interactive model, while a control group of 29 students took the tutorial without exposure to the model. At the end of the tutorials, both groups were asked a series of 15 quiz questions to evaluate their knowledge of 3-D relationships within the ear.\n\n\nRESULTS\nThe intervention group's mean score on the quiz was 83%, while that of the control group was 65%. This difference in means was highly significant (P < 0.001).\n\n\nDISCUSSION\nOur findings stand in contrast to the handful of previous randomised controlled trials that evaluated the effects of computer-generated 3-D anatomical models on learning. The equivocal and negative results of these previous studies may be due to the limitations of these studies (such as small sample size) as well as the limitations of the models that were studied (such as a lack of full interactivity). Given our positive results, we believe that further research is warranted concerning the educational effectiveness of computer-generated anatomical models.",
"title": ""
},
{
"docid": "6f77e74cd8667b270fae0ccc673b49a5",
"text": "GeneMANIA (http://www.genemania.org) is a flexible, user-friendly web interface for generating hypotheses about gene function, analyzing gene lists and prioritizing genes for functional assays. Given a query list, GeneMANIA extends the list with functionally similar genes that it identifies using available genomics and proteomics data. GeneMANIA also reports weights that indicate the predictive value of each selected data set for the query. Six organisms are currently supported (Arabidopsis thaliana, Caenorhabditis elegans, Drosophila melanogaster, Mus musculus, Homo sapiens and Saccharomyces cerevisiae) and hundreds of data sets have been collected from GEO, BioGRID, Pathway Commons and I2D, as well as organism-specific functional genomics data sets. Users can select arbitrary subsets of the data sets associated with an organism to perform their analyses and can upload their own data sets to analyze. The GeneMANIA algorithm performs as well or better than other gene function prediction methods on yeast and mouse benchmarks. The high accuracy of the GeneMANIA prediction algorithm, an intuitive user interface and large database make GeneMANIA a useful tool for any biologist.",
"title": ""
},
{
"docid": "569f8890a294b69d688977fc235aef17",
"text": "Traditionally, voice communication over the local loop has been provided by wired systems. In particular, twisted pair has been the standard means of connection for homes and offices for several years. However in the recent past there has been an increased interest in the use of radio access technologies in local loops. Such systems which are now popular for their ease and low cost of installation and maintenance are called Wireless in Local Loop (WLL) systems. Subscribers' demands for greater capacity has grown over the years especially with the advent of the Internet. Wired local loops have responded to these increasing demands through the use of digital technologies such as ISDN and xDSL. Demands for enhanced data rates are being faced by WLL system operators too, thus entailing efforts towards more efficient bandwidth use. Multi-hop communication has already been studied extensively in Ad hoc network environments and has begun making forays into cellular systems as well. Multi-hop communication has been proven as one of the best ways to enhance throughput in a wireless network. Through this effort we study the issues involved in multi-hop communication in a wireless local loop system and propose a novel WLL architecture called Throughput enhanced Wireless in Local Loop (TWiLL). Through a realistic simulation model we show the tremendous performance improvement achieved by TWiLL over WLL. Traditional pricing schemes employed in single hop wireless networks cannot be applied in TWiLL -- a multi-hop environment. We also propose three novel cost reimbursement based pricing schemes which could be applied in such a multi-hop environment.",
"title": ""
},
{
"docid": "81f9a52b6834095cd7be70b39af0e7f0",
"text": "In this paper we present BatchDB, an in-memory database engine designed for hybrid OLTP and OLAP workloads. BatchDB achieves good performance, provides a high level of data freshness, and minimizes load interaction between the transactional and analytical engines, thus enabling real time analysis over fresh data under tight SLAs for both OLTP and OLAP workloads.\n BatchDB relies on primary-secondary replication with dedicated replicas, each optimized for a particular workload type (OLTP, OLAP), and a light-weight propagation of transactional updates. The evaluation shows that for standard TPC-C and TPC-H benchmarks, BatchDB can achieve competitive performance to specialized engines for the corresponding transactional and analytical workloads, while providing a level of performance isolation and predictable runtime for hybrid workload mixes (OLTP+OLAP) otherwise unmet by existing solutions.",
"title": ""
},
{
"docid": "1bfab561c8391dad6f0493fa7614feba",
"text": "Submission instructions: You should submit your answers via GradeScope and your code via Snap submission site. Submitting answers: Prepare answers to your homework into a single PDF file and submit it via http://gradescope.com. Make sure that answer to each question is on a separate page. This means you should submit a 14-page PDF (1 page for the cover sheet, 4 pages for the answers to question 1, 3 pages for answers to question 2, and 6 pages for question 3). On top of each page write the number of the question you are answering. Please find the cover sheet and the recommended templates located here: Not including the cover sheet in your submission will result in a 2-point penalty. Put all the code for a single question into a single file and upload it. Questions We strongly encourage you to use Snap.py for Python. However, you can use any other graph analysis tool or package you want (SNAP for C++, NetworkX for Python, JUNG for Java, etc.). A question that occupied sociologists and economists as early as the 1900's is how do innovations (e.g. ideas, products, technologies, behaviors) diffuse (spread) within a society. One of the prominent researchers in the field is Professor Mark Granovetter who among other contributions introduced along with Thomas Schelling threshold models in sociology. In Granovetter's model, there is a population of individuals (mob) and for simplicity two behaviours (riot or not riot). • Threshold model: each individual i has a threshold t i that determines her behavior in the following way. If there are at least t i individuals that are rioting, then she will join the riot, otherwise she stays inactive. Here, it is implicitly assumed that each individual has full knowledge of the behavior of all other individuals in the group. Nodes with small threshold are called innovators (early adopters) and nodes with large threshold are called laggards (late adopters). Granovetter's threshold model has been successful in explain classical empirical adoption curves by relating them to thresholds in",
"title": ""
},
{
"docid": "6fc6167d1ef6b96d239fea03b9653865",
"text": "Deep learning algorithms achieve high classification accuracy at the expense of significant computation cost. In order to reduce this cost, several quantization schemes have gained attention recently with some focusing on weight quantization, and others focusing on quantizing activations. This paper proposes novel techniques that target weight and activation quantizations separately resulting in an overall quantized neural network (QNN). The activation quantization technique, PArameterized Clipping acTivation (PACT), uses an activation clipping parameter α that is optimized during training to find the right quantization scale. The weight quantization scheme, statistics-aware weight binning (SAWB), finds the optimal scaling factor that minimizes the quantization error based on the statistical characteristics of the distribution of weights without the need for an exhaustive search. The combination of PACT and SAWB results in a 2-bit QNN that achieves state-of-the-art classification accuracy (comparable to full precision networks) across a range of popular models and datasets.",
"title": ""
}
] | scidocsrr |
dcb9d91a28cd9d6a48e0e66d2a8bfe72 | LEARNING DEEP MODELS: CRITICAL POINTS AND LOCAL OPENNESS | [
{
"docid": "174cc0eae96aeb79841b1acfb4813dd4",
"text": "In this paper, we study the problem of learning a shallow artificial neural network that best fits a training data set. We study this problem in the over-parameterized regime where the numbers of observations are fewer than the number of parameters in the model. We show that with the quadratic activations, the optimization landscape of training, such shallow neural networks, has certain favorable characteristics that allow globally optimal models to be found efficiently using a variety of local search heuristics. This result holds for an arbitrary training data of input/output pairs. For differentiable activation functions, we also show that gradient descent, when suitably initialized, converges at a linear rate to a globally optimal model. This result focuses on a realizable model where the inputs are chosen i.i.d. from a Gaussian distribution and the labels are generated according to planted weight coefficients.",
"title": ""
},
{
"docid": "ad7862047259112ac01bfa68950cf95b",
"text": "In deep learning, depth, as well as nonlinearity, create non-convex loss surfaces. Then, does depth alone create bad local minima? In this paper, we prove that without nonlinearity, depth alone does not create bad local minima, although it induces non-convex loss surface. Using this insight, we greatly simplify a recently proposed proof to show that all of the local minima of feedforward deep linear neural networks are global minima. Our theoretical results generalize previous results with fewer assumptions, and this analysis provides a method to show similar results beyond square loss in deep linear models.",
"title": ""
}
] | [
{
"docid": "e839c6a8c5efcd50f96521238c96a5d3",
"text": "To improve the accuracy of lane detection in complex scenarios, an adaptive lane feature learning algorithm which can automatically learn the features of a lane in various scenarios is proposed. First, a two-stage learning network based on the YOLO v3 (You Only Look Once, v3) is constructed. The structural parameters of the YOLO v3 algorithm are modified to make it more suitable for lane detection. To improve the training efficiency, a method for automatic generation of the lane label images in a simple scenario, which provides label data for the training of the first-stage network, is proposed. Then, an adaptive edge detection algorithm based on the Canny operator is used to relocate the lane detected by the first-stage model. Furthermore, the unrecognized lanes are shielded to avoid interference in subsequent model training. Then, the images processed by the above method are used as label data for the training of the second-stage model. The experiment was carried out on the KITTI and Caltech datasets, and the results showed that the accuracy and speed of the second-stage model reached a high level.",
"title": ""
},
{
"docid": "8108c37cc3f3160c78252fcfbeb8d2f2",
"text": "It is well understood that the pancreas has two distinct roles: the endocrine and exocrine functions, that are functionally and anatomically closely related. As specialists in diabetes care, we are adept at managing pancreatic endocrine failure and its associated complications. However, there is frequent overlap and many patients with diabetes also suffer from exocrine insufficiency. Here we outline the different causes of exocrine failure, and in particular that associated with type 1 and type 2 diabetes and how this differs from diabetes that is caused by pancreatic exocrine disease: type 3c diabetes. Copyright © 2017 John Wiley & Sons. Practical Diabetes 2017; 34(6): 200–204",
"title": ""
},
{
"docid": "64e57a5382411ade7c0ad4ef7f094aa9",
"text": "In this paper we present the techniques used for the University of Montréal's team submissions to the 2013 Emotion Recognition in the Wild Challenge. The challenge is to classify the emotions expressed by the primary human subject in short video clips extracted from feature length movies. This involves the analysis of video clips of acted scenes lasting approximately one-two seconds, including the audio track which may contain human voices as well as background music. Our approach combines multiple deep neural networks for different data modalities, including: (1) a deep convolutional neural network for the analysis of facial expressions within video frames; (2) a deep belief net to capture audio information; (3) a deep autoencoder to model the spatio-temporal information produced by the human actions depicted within the entire scene; and (4) a shallow network architecture focused on extracted features of the mouth of the primary human subject in the scene. We discuss each of these techniques, their performance characteristics and different strategies to aggregate their predictions. Our best single model was a convolutional neural network trained to predict emotions from static frames using two large data sets, the Toronto Face Database and our own set of faces images harvested from Google image search, followed by a per frame aggregation strategy that used the challenge training data. This yielded a test set accuracy of 35.58%. Using our best strategy for aggregating our top performing models into a single predictor we were able to produce an accuracy of 41.03% on the challenge test set. These compare favorably to the challenge baseline test set accuracy of 27.56%.",
"title": ""
},
{
"docid": "c0c7752c6b9416e281c3649e70f9daae",
"text": "Although the study of clustering is centered around an intuitively compelling goal, it has been very difficult to develop a unified framework for reasoning about it at a technical level, and profoundly diverse approaches to clustering abound in the research community. Here we suggest a formal perspective on the difficulty in finding such a unification, in the form of an impossibility theorem: for a set of three simple properties, we show that there is no clustering function satisfying all three. Relaxations of these properties expose some of the interesting (and unavoidable) trade-offs at work in well-studied clustering techniques such as single-linkage, sum-of-pairs, k-means, and k-median.",
"title": ""
},
{
"docid": "b732824ec9677b639e34de68818aae50",
"text": "Although there is wide agreement that backfilling produces significant benefits in scheduling of parallel jobs, there is no clear consensus on which backfilling strategy is preferable e.g. should conservative backfilling be used or the more aggressive EASY backfilling scheme; should a First-Come First-Served(FCFS) queue-priority policy be used, or some other such as Shortest job First(SF) or eXpansion Factor(XF); In this paper, we use trace-based simulation to address these questions and glean new insights into the characteristics of backfilling strategies for job scheduling. We show that by viewing performance in terms of slowdowns and turnaround times of jobs within various categories based on their width (processor request size), length (job duration) and accuracy of the user’s estimate of run time, some consistent trends may be observed.",
"title": ""
},
{
"docid": "d08c24228e43089824357342e0fa0843",
"text": "This paper presents a new register assignment heuristic for procedures in SSA Form, whose interference graphs are chordal; the heuristic is called optimistic chordal coloring (OCC). Previous register assignment heuristics eliminate copy instructions via coalescing, in other words, merging nodes in the interference graph. Node merging, however, can not preserve the chordal graph property, making it unappealing for SSA-based register allocation. OCC is based on graph coloring, but does not employ coalescing, and, consequently, preserves graph chordality, and does not increase its chromatic number; in this sense, OCC is conservative as well as optimistic. OCC is observed to eliminate at least as many dynamically executed copy instructions as iterated register coalescing (IRC) for a set of chordal interference graphs generated from several Mediabench and MiBench applications. In many cases, OCC and IRC were able to find optimal or near-optimal solutions for these graphs. OCC ran 1.89x faster than IRC, on average.",
"title": ""
},
{
"docid": "f3ca98a8e0600f0c80ef539cfc58e77e",
"text": "In this paper, we address a real life waste collection vehicle routing problem with time windows (VRPTW) with consideration of multiple disposal trips and drivers’ lunch breaks. Solomon’s well-known insertion algorithm is extended for the problem. While minimizing the number of vehicles and total traveling time is the major objective of vehicle routing problems in the literature, here we also consider the route compactness and workload balancing of a solution since they are very important aspects in practical applications. In order to improve the route compactness and workload balancing, a capacitated clustering-based waste collection VRPTW algorithm is developed. The proposed algorithms have been successfully implemented and deployed for the real life waste collection problems at Waste Management, Inc. A set of waste collection VRPTW benchmark problems is also presented in this paper. Waste collection problems are frequently considered as arc routing problems without time windows. However, that point of view can be applied only to residential waste collection problems. In the waste collection industry, there are three major areas: commercial waste collection, residential waste collection and roll-on-roll-off. In this paper, we mainly focus on the commercial waste collection problem. The problem can be characterized as a variant of VRPTW since commercial waste collection stops may have time windows. The major variation from a standard VRPTW is due to disposal operations and driver’s lunch break. When a vehicle is full, it needs to go to one of the disposal facilities (landfill or transfer station). Each vehicle can, and typically does, make multiple disposal trips per day. The purpose of this paper is to introduce the waste collection VRPTW, benchmark problem sets, and a solution approach for the problem. The proposed algorithms have been successfully implemented and deployed for the real life waste collection problems of Waste Management, the leading provider of comprehensive waste management services in North America with nearly 26,000 collection and transfer vehicles. 2005 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "169db6ecec2243e3566079cd473c7afe",
"text": "Aspect-level sentiment classification is a finegrained task in sentiment analysis. Since it provides more complete and in-depth results, aspect-level sentiment analysis has received much attention these years. In this paper, we reveal that the sentiment polarity of a sentence is not only determined by the content but is also highly related to the concerned aspect. For instance, “The appetizers are ok, but the service is slow.”, for aspect taste, the polarity is positive while for service, the polarity is negative. Therefore, it is worthwhile to explore the connection between an aspect and the content of a sentence. To this end, we propose an Attention-based Long Short-Term Memory Network for aspect-level sentiment classification. The attention mechanism can concentrate on different parts of a sentence when different aspects are taken as input. We experiment on the SemEval 2014 dataset and results show that our model achieves state-ofthe-art performance on aspect-level sentiment classification.",
"title": ""
},
{
"docid": "d0778852e57dddf8a454dd609908ff87",
"text": "Abstract: Trivariate barycentric coordinates can be used both to express a point inside a tetrahedron as a convex combination of the four vertices and to linearly interpolate data given at the vertices. In this paper we generalize these coordinates to convex polyhedra and the kernels of star-shaped polyhedra. These coordinates generalize in a natural way a recently constructed set of coordinates for planar polygons, called mean value coordinates.",
"title": ""
},
{
"docid": "d3d57d67d4384f916f9e9e48f3fcdcdb",
"text": "Web-based social networks have become popular as a medium for disseminating information and connecting like-minded people. The public accessibility of such networks with the ability to share opinions, thoughts, information, and experience offers great promise to enterprises and governments. In addition to individuals using such networks to connect to their friends and families, governments and enterprises have started exploiting these platforms for delivering their services to citizens and customers. However, the success of such attempts relies on the level of trust that members have with each other as well as with the service provider. Therefore, trust becomes an essential and important element of a successful social network. In this article, we present the first comprehensive review of social and computer science literature on trust in social networks. We first review the existing definitions of trust and define social trust in the context of social networks. We then discuss recent works addressing three aspects of social trust: trust information collection, trust evaluation, and trust dissemination. Finally, we compare and contrast the literature and identify areas for further research in social trust.",
"title": ""
},
{
"docid": "46d46b2043019ad33e392d2d0a4b4d0d",
"text": "Ambient assisted living (AAL) is focused on providing assistance to people primarily in their natural environment. Over the past decade, the AAL domain has evolved at a fast pace in various directions. The stakeholders of AAL are not only limited to patients, but also include their relatives, social services, health workers, and care agencies. In fact, AAL aims at increasing the life quality of patients, their relatives and the health care providers with a holistic approach. This paper aims at providing a comprehensive overview of the AAL domain, presenting a systematic analysis of over 10 years of relevant literature focusing on the stakeholders’ needs, bridging the gap of existing reviews which focused on technologies. The findings of this review clearly show that until now the AAL domain neglects the view of the entire AAL ecosystem. Furthermore, the proposed solutions seem to be tailored more on the basis of the available existing technologies, rather than supporting the various stakeholders’ needs. Another major lack that this review is pointing out is a missing adequate evaluation of the various solutions. Finally, it seems that, as the domain of AAL is pretty new, it is still in its incubation phase. Thus, this review calls for moving the AAL domain to a more mature phase with respect to the research approaches.",
"title": ""
},
{
"docid": "18aa08888e4b2b412f154e47891b034d",
"text": "Roughly 1.3 billion people in developing countries still live without access to reliable electricity. As expanding access using current technologies will accelerate global climate change, there is a strong need for novel solutions that displace fossil fuels and are financially viable for developing regions. A novel DC microgrid solution that is geared at maximizing efficiency and reducing system installation cost is described in this paper. Relevant simulation and experimental results, as well as a proposal for undertaking field-testing of the technical and economic viability of the microgrid system are presented.",
"title": ""
},
{
"docid": "5bd68ea9ec37f954b2544e65cfff5626",
"text": "To improve ATMs’ cash demand forecasts, this paper advocates the prediction of cash demand for groups of ATMs with similar day-of-the week cash demand patterns. We first clustered ATM centers into ATM clusters having similar day-of-the week withdrawal patterns. To retrieve “day-of-the-week” withdrawal seasonality parameters (effect of a Monday, etc) we built a time series model for each ATMs. For clustering, the succession of 7 continuous daily withdrawal seasonality parameters of ATMs is discretized. Next, the similarity between the different ATMs’ discretized daily withdrawal seasonality sequence is measured by the Sequence Alignment Method (SAM). For each cluster of ATMs, four neural networks viz., general regression neural network (GRNN), multi layer feed forward neural network (MLFF), group method of data handling (GMDH) and wavelet neural network (WNN) are built to predict an ATM center’s cash demand. The proposed methodology is applied on the NN5 competition dataset. We observed that GRNN yielded the best result of 18.44% symmetric mean absolute percentage error (SMAPE), which is better than the result of Andrawis et al. (2011). This is due to clustering followed by a forecasting phase. Further, the proposed approach yielded much smaller SMAPE values than the approach of direct prediction on the entire sample without clustering. From a managerial perspective, the clusterwise cash demand forecast helps the bank’s top management to design similar cash replenishment plans for all the ATMs in the same cluster. This cluster-level replenishment plans could result in saving huge operational costs for ATMs operating in a similar geographical region.",
"title": ""
},
{
"docid": "335fbbf27b34e3937c2f6772b3227d51",
"text": "WordNet has facilitated important research in natural language processing but its usefulness is somewhat limited by its relatively small lexical coverage. The Paraphrase Database (PPDB) covers 650 times more words, but lacks the semantic structure of WordNet that would make it more directly useful for downstream tasks. We present a method for mapping words from PPDB to WordNet synsets with 89% accuracy. The mapping also lays important groundwork for incorporating WordNet’s relations into PPDB so as to increase its utility for semantic reasoning in applications.",
"title": ""
},
{
"docid": "c74b93fff768f024b921fac7f192102d",
"text": "Motivated by information-theoretic considerations, we pr opose a signalling scheme, unitary spacetime modulation, for multiple-antenna communication links. This modulati on s ideally suited for Rayleigh fast-fading environments, since it does not require the rec iv r to know or learn the propagation coefficients. Unitary space-time modulation uses constellations of T M space-time signals f `; ` = 1; : : : ; Lg, whereT represents the coherence interval during which the fading i s approximately constant, and M < T is the number of transmitter antennas. The columns of each ` are orthonormal. When the receiver does not know the propagation coefficients, which between pa irs of transmitter and receiver antennas are modeled as statistically independent, this modulation per forms very well either when the SNR is high or whenT M . We design some multiple-antenna signal constellations and simulate their effectiveness as measured by bit error probability with maximum likelihood decoding. We demonstrate that two antennas have a 6 dB diversity gain over one antenna at 15 dB SNR. Index Terms —Multi-element antenna arrays, wireless communications, channel coding, fading channels, transmitter and receiver diversity, space-time modu lation",
"title": ""
},
{
"docid": "ef8be5104f9bc4a0f4353ed236b6afb8",
"text": "State-of-the-art human pose estimation methods are based on heat map representation. In spite of the good performance, the representation has a few issues in nature, such as non-differentiable postprocessing and quantization error. This work shows that a simple integral operation relates and unifies the heat map representation and joint regression, thus avoiding the above issues. It is differentiable, efficient, and compatible with any heat map based methods. Its effectiveness is convincingly validated via comprehensive ablation experiments under various settings, specifically on 3D pose estimation, for the first time.",
"title": ""
},
{
"docid": "5f54125c0114f4fadc055e721093a49e",
"text": "In this study, a fuzzy logic based autonomous vehicle control system is designed and tested in The Open Racing Car Simulator (TORCS) environment. The aim of this study is that vehicle complete the race without to get any damage and to get out of the way. In this context, an intelligent control system composed of fuzzy logic and conventional control structures has been developed such that the racing car is able to compete the race autonomously. In this proposed structure, once the vehicle's gearshifts have been automated, a fuzzy logic based throttle/brake control system has been designed such that the racing car is capable to accelerate/decelerate in a realistic manner as well as to drive at desired velocity. The steering control problem is also handled to end up with a racing car that is capable to travel on the road even in the presence of sharp curves. In this context, we have designed a fuzzy logic based positioning system that uses the knowledge of the curvature ahead to determine an appropriate position. The game performance of the developed fuzzy logic systems can be observed from https://youtu.be/qOvEz3-PzRo.",
"title": ""
},
{
"docid": "f8a5fb5f323f036d38959f97815337a5",
"text": "OBJECTIVE\nEarly screening of autism increases the chance of receiving timely intervention. Using the Parent Report Questionnaires is effective in screening autism. The Q-CHAT is a new instrument that has shown several advantages than other screening tools. Because there is no adequate tool for the early screening of autistic traits in Iranian children, we aimed to investigate the adequacy of the Persian translation of Q-CHAT.\n\n\nMETHOD\nAt first, we prepared the Persian translation of the Quantitative Checklist for Autism in Toddlers (Q-CHAT). After that, an appropriate sample was selected and the check list was administered. Our sample included 100 children in two groups (typically developing and autistic children) who had been selected conveniently. Pearson's r was used to determine test-retest reliability, and Cronbach's alpha coefficient was used to explore the internal consistency of Q-CHAT. We used the receiver operating characteristics curve (ROC) to investigate whether Q-CHAT can adequately discriminate between typically developing and ASD children or not. Data analysis was carried out by SPSS 19.\n\n\nRESULT\nThe typically developing group consisted of 50 children with the mean age of 27.14 months, and the ASD group included50 children with the mean age of 29.62 months. The mean of the total score for the typically developing group was 22.4 (SD=6.26) on Q-CHAT and it was 50.94 (SD=12.35) for the ASD group, which was significantly different (p=0.00).The Cronbach's alpha coefficient of the checklist was 0.886, and test-retest reliability was calculated as 0.997 (p<0.01). The estimated area under the curve (AUC) was 0.971. It seems that the total score equal to 30 can be a good cut point to identify toddlers who are at risk of autism (sensitivity= 0.96 and specificity= 0.90).\n\n\nCONCLUSION\nThe Persian translation of Q-CHAT has good reliability and predictive validity and can be used as a screening tool to detect 18 to 24 months old children who are at risk of autism.",
"title": ""
},
{
"docid": "db806183810547435075eb6edd28d630",
"text": "Bilinear models provide an appealing framework for mixing and merging information in Visual Question Answering (VQA) tasks. They help to learn high level associations between question meaning and visual concepts in the image, but they suffer from huge dimensionality issues.,,We introduce MUTAN, a multimodal tensor-based Tucker decomposition to efficiently parametrize bilinear interactions between visual and textual representations. Additionally to the Tucker framework, we design a low-rank matrix-based decomposition to explicitly constrain the interaction rank. With MUTAN, we control the complexity of the merging scheme while keeping nice interpretable fusion relations. We show how the Tucker decomposition framework generalizes some of the latest VQA architectures, providing state-of-the-art results.",
"title": ""
},
{
"docid": "4fd8eb1c592960a0334959fcd74f00d8",
"text": "Automatic grammatical error detection for Chinese has been a big challenge for NLP researchers. Due to the formal and strict grammar rules in Chinese, it is hard for foreign students to master Chinese. A computer-assisted learning tool which can automatically detect and correct Chinese grammatical errors is necessary for those foreign students. Some of the previous works have sought to identify Chinese grammatical errors using templateand learning-based methods. In contrast, this study introduced convolutional neural network (CNN) and long-short term memory (LSTM) for the shared task of Chinese Grammatical Error Diagnosis (CGED). Different from traditional word-based embedding, single word embedding was used as input of CNN and LSTM. The proposed single word embedding can capture both semantic and syntactic information to detect those four type grammatical error. In experimental evaluation, the recall and f1-score of our submitted results Run1 of the TOCFL testing data ranked the fourth place in all submissions in detection-level.",
"title": ""
}
] | scidocsrr |
ce302b49c125828cb906ffec23da62d1 | The critical hitch angle for jackknife avoidance during slow backing up of vehicle – trailer systems | [
{
"docid": "0a793374864ce2a8a723423a4f74759b",
"text": "Trailer reversing is a problem frequently considered in the literature, usually with fairly complex non-linear control theory based approaches. In this paper, we present a simple method for stabilizing a tractor-trailer system to a trajectory based on the notion of controlling the hitch-angle of the trailer rather than the steering angle of the tractor. The method is intuitive, provably stable, and shown to be viable through various experimental results conducted on our test platform, the CSIRO autonomous tractor.",
"title": ""
}
] | [
{
"docid": "80ac2373b3a01ab0f1f2665f0e070aa4",
"text": "This paper presents an overview of the state of the art control strategies specifically designed to coordinate distributed energy storage (ES) systems in microgrids. Power networks are undergoing a transition from the traditional model of centralised generation towards a smart decentralised network of renewable sources and ES systems, organised into autonomous microgrids. ES systems can provide a range of services, particularly when distributed throughout the power network. The introduction of distributed ES represents a fundamental change for power networks, increasing the network control problem dimensionality and adding long time-scale dynamics associated with the storage systems’ state of charge levels. Managing microgrids with many small distributed ES systems requires new scalable control strategies that are robust to power network and communication network disturbances. This paper reviews the range of services distributed ES systems can provide, and the control challenges they introduce. The focus of this paper is a presentation of the latest decentralised, centralised and distributed multi-agent control strategies designed to coordinate distributed microgrid ES systems. Finally, multi-agent control with agents satisfying Wooldridge’s definition of intelligence is proposed as a promising direction for future research.",
"title": ""
},
{
"docid": "37e65ab2fc4d0a9ed5b8802f41a1a2a2",
"text": "This paper is based on a panel discussion held at the Artificial Intelligence in Medicine Europe (AIME) conference in Amsterdam, The Netherlands, in July 2007. It had been more than 15 years since Edward Shortliffe gave a talk at AIME in which he characterized artificial intelligence (AI) in medicine as being in its \"adolescence\" (Shortliffe EH. The adolescence of AI in medicine: will the field come of age in the '90s? Artificial Intelligence in Medicine 1993;5:93-106). In this article, the discussants reflect on medical AI research during the subsequent years and characterize the maturity and influence that has been achieved to date. Participants focus on their personal areas of expertise, ranging from clinical decision-making, reasoning under uncertainty, and knowledge representation to systems integration, translational bioinformatics, and cognitive issues in both the modeling of expertise and the creation of acceptable systems.",
"title": ""
},
{
"docid": "34461f38c51a270e2f3b0d8703474dfc",
"text": "Software vulnerabilities are the root cause of computer security problem. How people can quickly discover vulnerabilities existing in a certain software has always been the focus of information security field. This paper has done research on software vulnerability techniques, including static analysis, Fuzzing, penetration testing. Besides, the authors also take vulnerability discovery models as an example of software vulnerability analysis methods which go hand in hand with vulnerability discovery techniques. The ending part of the paper analyses the advantages and disadvantages of each technique introduced here and talks about the future direction of this field.",
"title": ""
},
{
"docid": "26a599c22c173f061b5d9579f90fd888",
"text": "markov logic an interface layer for artificial markov logic an interface layer for artificial shinichi tsukada in size 22 syyjdjbook.buncivy yumina ooba in size 24 ajfy7sbook.ztoroy okimi in size 15 edemembookkey.16mb markov logic an interface layer for artificial intelligent systems (ai-2) ubc computer science interface layer for artificial intelligence daniel lowd essential principles for autonomous robotics markovlogic: aninterfacelayerfor arti?cialintelligence official encyclopaedia of sheffield united football club hot car hot car firext answers || 2007 acura tsx hitch manual course syllabus university of texas at dallas jump frog jump cafebr 1994 chevy silverado 1500 engine ekpbs readings in earth science alongs johnson owners manual pdf firext thomas rescues the diesels cafebr dead sea scrolls and the jewish origins of christianity install gimp help manual by iitsuka asao vox diccionario abreviado english spanis mdmtv nobutaka in size 26 bc13xqbookog.xxuz mechanisms in b cell neoplasia 1992 workshop at the spocks world diane duane nabbit treasury of saints fiores reasoning with probabilistic university of texas at austin gp1300r yamaha waverunner service manua by takisawa tomohide repair manual haier hpr10xc6 air conditioner birdz mexico icons mexico icons oobags asus z53 manual by hatsutori yoshino industrial level measurement by haruyuki morimoto",
"title": ""
},
{
"docid": "56245b600dd082439d2b1b2a2452a6b7",
"text": "The electric drive systems used in many industrial applications require higher performance, reliability, variable speed due to its ease of controllability. The speed control of DC motor is very crucial in applications where precision and protection are of essence. Purpose of a motor speed controller is to take a signal representing the required speed and to drive a motor at that speed. Microcontrollers can provide easy control of DC motor. Microcontroller based speed control system consist of electronic component, microcontroller and the LCD. In this paper, implementation of the ATmega8L microcontroller for speed control of DC motor fed by a DC chopper has been investigated. The chopper is driven by a high frequency PWM signal. Controlling the PWM duty cycle is equivalent to controlling the motor terminal voltage, which in turn adjusts directly the motor speed. This work is a practical one and high feasibility according to economic point of view and accuracy. In this work, development of hardware and software of the close loop dc motor speed control system have been explained and illustrated. The desired objective is to achieve a system with the constant speed at any load condition. That means motor will run at a fixed speed instead of varying with amount of load. KeywordsDC motor, Speed control, Microcontroller, ATmega8, PWM.",
"title": ""
},
{
"docid": "e08bc715d679ba0442883b4b0e481998",
"text": "Rheology, as a branch of physics, studies the deformation and flow of matter in response to an applied stress or strain. According to the materials’ behaviour, they can be classified as Newtonian or non-Newtonian (Steffe, 1996; Schramm, 2004). The most of the foodstuffs exhibit properties of non-Newtonian viscoelastic systems (Abang Zaidel et al., 2010). Among them, the dough can be considered as the most unique system from the point of material science. It is viscoelastic system which exhibits shear-thinning and thixotropic behaviour (Weipert, 1990). This behaviour is the consequence of dough complex structure in which starch granules (75-80%) are surrounded by three-dimensional protein (20-25%) network (Bloksma, 1990, as cited in Weipert, 2006). Wheat proteins are consisted of gluten proteins (80-85% of total wheat protein) which comprise of prolamins (in wheat gliadins) and glutelins (in wheat glutenins) and non gluten proteins (15-20% of the total wheat proteins) such as albumins and globulins (Veraverbeke & Delcour, 2002). Gluten complex is a viscoelastic protein responsible for dough structure formation. Among the cereal technologists, rheology is widely recognized as a valuable tool in quality assessment of flour. Hence, in the cereal scientific community, rheological measurements are generally employed throughout the whole processing chain in order to monitor the mechanical properties, molecular structure and composition of the material, to imitate materials’ behaviour during processing and to anticipate the quality of the final product (Dobraszczyk & Morgenstern, 2003). Rheology is particularly important technique in revealing the influence of flour constituents and additives on dough behaviour during breadmaking. There are many test methods available to measure rheological properties, which are commonly divided into empirical (descriptive, imitative) and fundamental (basic) (Scott Blair, 1958 as cited in Weipert, 1990). Although being criticized due to their shortcomings concerning inflexibility in defining the level of deforming force, usage of strong deformation forces, interpretation of results in relative non-SI units, large sample requirements and its impossibility to define rheological parameters such as stress, strain, modulus or viscosity (Weipert, 1990; Dobraszczyk & Morgenstern, 2003), empirical rheological measurements are still indispensable in the cereal quality laboratories. According to the empirical rheological parameters it is possible to determine the optimal flour quality for a particular purpose. The empirical techniques used for dough quality",
"title": ""
},
{
"docid": "a937f479b462758a089ed23cfa5a0099",
"text": "The paper outlines the development of a large vocabulary continuous speech recognition (LVCSR) system for the Indonesian language within the Asian speech translation (A-STAR) project. An overview of the A-STAR project and Indonesian language characteristics will be briefly described. We then focus on a discussion of the development of Indonesian LVCSR, including data resources issues, acoustic modeling, language modeling, the lexicon, and accuracy of recognition. There are three types of Indonesian data resources: daily news, telephone application, and BTEC tasks, which are used in this project. They are available in both text and speech forms. The Indonesian speech recognition engine was trained using the clean speech of both daily news and telephone application tasks. The optimum performance achieved on the BTEC task was 92.47% word accuracy. 1 A-STAR Project Overview The A-STAR project is an Asian consortium that is expected to advance the state-of-the-art in multilingual man-machine interfaces in the Asian region. This basic infrastructure will accelerate the development of large-scale spoken language corpora in Asia and also facilitate the development of related fundamental information communication technologies (ICT), such as multi-lingual speech translation, Figure 1: Outline of future speech-technology services connecting each area in the Asian region through network. multi-lingual speech transcription, and multi-lingual information retrieval. These fundamental technologies can be applied to the human-machine interfaces of various telecommunication devices and services connecting Asian countries through the network using standardized communication protocols as outlined in Fig. 1. They are expected to create digital opportunities, improve our digital capabilities, and eliminate the digital divide resulting from the differences in ICT levels in each area. The improvements to borderless communication in the Asian region are expected to result in many benefits in everyday life including tourism, business, education, and social security. The project was coordinated together by the Advanced Telecommunication Research (ATR) and the National Institute of Information and Communications Technology (NICT) Japan in cooperation with several research institutes in Asia, such as the National Laboratory of Pattern Recognition (NLPR) in China, the Electronics and Telecommunication Research Institute (ETRI) in Korea, the Agency for the Assessment and Application Technology (BPPT) in Indonesia, the National Electronics and Computer Technology Center (NECTEC) in Thailand, the Center for Development of Advanced Computing (CDAC) in India, the National Taiwan University (NTU) in Taiwan. Partners are still being sought for other languages in Asia. More details about the A-STAR project can be found in (Nakamura et al., 2007). 2 Indonesian Language Characteristic The Indonesian language, or so-called Bahasa Indonesia, is a unified language formed from hundreds of languages spoken throughout the Indonesian archipelago. Compared to other languages, which have a high density of native speakers, Indonesian is spoken as a mother tongue by only 7% of the population, and more than 195 million people speak it as a second language with varying degrees of proficiency. There are approximately 300 ethnic groups living throughout 17,508 islands, speaking 365 native languages or no less than 669 dialects (Tan, 2004). 
At home, people speak their own language, such as Javanese, Sundanese or Balinese, even though almost everybody has a good understanding of Indonesian as they learn it in school. Although the Indonesian language is infused with highly distinctive accents from different ethnic languages, there are many similarities in patterns across the archipelago. Modern Indonesian is derived from the literary of the Malay dialect. Thus, it is closely related to the Malay spoken in Malaysia, Singapore, Brunei, and some other areas. Unlike the Chinese language, it is not a tonal language. Compared with European languages, Indonesian has a strikingly small use of gendered words. Plurals are often expressed by means of word repetition. It is also a member of the agglutinative language family, meaning that it has a complex range of prefixes and suffixes, which are attached to base words. Consequently, a word can become very long. More details on Indonesian characteristics can be found in (Sakti et al., 2004). 3 Indonesian Phoneme Set The Indonesian phoneme set is defined based on Indonesian grammar described in (Alwi et al., 2003). A full phoneme set contains 33 phoneme symbols in total, which consists of 10 vowels (including diphthongs), 22 consonants, and one silent symbol. The vowel articulation pattern of the Indonesian language, which indicates the first two resonances of the vocal tract, F1 (height) and F2 (backness), is shown in Fig. 2.",
"title": ""
},
{
"docid": "4381ee2e578a640dda05e609ed7f6d53",
"text": "We introduce neural networks for end-to-end differentiable proving of queries to knowledge bases by operating on dense vector representations of symbols. These neural networks are constructed recursively by taking inspiration from the backward chaining algorithm as used in Prolog. Specifically, we replace symbolic unification with a differentiable computation on vector representations of symbols using a radial basis function kernel, thereby combining symbolic reasoning with learning subsymbolic vector representations. By using gradient descent, the resulting neural network can be trained to infer facts from a given incomplete knowledge base. It learns to (i) place representations of similar symbols in close proximity in a vector space, (ii) make use of such similarities to prove queries, (iii) induce logical rules, and (iv) use provided and induced logical rules for multi-hop reasoning. We demonstrate that this architecture outperforms ComplEx, a state-of-the-art neural link prediction model, on three out of four benchmark knowledge bases while at the same time inducing interpretable function-free first-order logic rules.",
"title": ""
},
{
"docid": "e2b8dd31dad42e82509a8df6cf21df11",
"text": "Recent experiments indicate the need for revision of a model of spatial memory consisting of viewpoint-specific representations, egocentric spatial updating and a geometric module for reorientation. Instead, it appears that both egocentric and allocentric representations exist in parallel, and combine to support behavior according to the task. Current research indicates complementary roles for these representations, with increasing dependence on allocentric representations with the amount of movement between presentation and retrieval, the number of objects remembered, and the size, familiarity and intrinsic structure of the environment. Identifying the neuronal mechanisms and functional roles of each type of representation, and of their interactions, promises to provide a framework for investigation of the organization of human memory more generally.",
"title": ""
},
{
"docid": "3ed6df057a32b9dcf243b5ac367b4912",
"text": "This paper presents advancements in induction motor endring design to overcome mechanical limitations and extend the operating speed range and joint reliability of induction machines. A novel endring design met the challenging mechanical requirements of this high speed, high temperature, power dense application, without compromising electrical performance. Analysis is presented of the advanced endring design features including a non uniform cross section, hoop stress relief cuts, and an integrated joint boss, which reduced critical stress concentrations, allowing operation under a broad speed and temperature design range. A generalized treatment of this design approach is presented comparing the concept results to conventional design techniques. Additionally, a low temperature joining process of the bar/end ring connection is discussed that provides the required joint strength without compromising the mechanical strength of the age hardened parent metals. A description of a prototype 2 MW, 15,000 rpm flywheel motor generator embodying this technology is presented",
"title": ""
},
{
"docid": "b3fd58901706f7cb3ed653572e634c78",
"text": "This paper presents visual analysis of eye state and head pose (HP) for continuous monitoring of alertness of a vehicle driver. Most existing approaches to visual detection of nonalert driving patterns rely either on eye closure or head nodding angles to determine the driver drowsiness or distraction level. The proposed scheme uses visual features such as eye index (EI), pupil activity (PA), and HP to extract critical information on nonalertness of a vehicle driver. EI determines if the eye is open, half closed, or closed from the ratio of pupil height and eye height. PA measures the rate of deviation of the pupil center from the eye center over a time period. HP finds the amount of the driver's head movements by counting the number of video segments that involve a large deviation of three Euler angles of HP, i.e., nodding, shaking, and tilting, from its normal driving position. HP provides useful information on the lack of attention, particularly when the driver's eyes are not visible due to occlusion caused by large head movements. A support vector machine (SVM) classifies a sequence of video segments into alert or nonalert driving events. Experimental results show that the proposed scheme offers high classification accuracy with acceptably low errors and false alarms for people of various ethnicity and gender in real road driving conditions.",
"title": ""
},
{
"docid": "d16114259da9edf0022e2a3774c5acf0",
"text": "The multivesicular body (MVB) pathway is responsible for both the biosynthetic delivery of lysosomal hydrolases and the downregulation of numerous activated cell surface receptors which are degraded in the lysosome. We demonstrate that ubiquitination serves as a signal for sorting into the MVB pathway. In addition, we characterize a 350 kDa complex, ESCRT-I (composed of Vps23, Vps28, and Vps37), that recognizes ubiquitinated MVB cargo and whose function is required for sorting into MVB vesicles. This recognition event depends on a conserved UBC-like domain in Vps23. We propose that ESCRT-I represents a conserved component of the endosomal sorting machinery that functions in both yeast and mammalian cells to couple ubiquitin modification to protein sorting and receptor downregulation in the MVB pathway.",
"title": ""
},
{
"docid": "e6cba9e178f568c402be7b25c4f0777f",
"text": "This paper is a tutorial introduction to the Viterbi Algorithm, this is reinforced by an example use of the Viterbi Algorithm in the area of error correction in communications channels. Some extensions to the basic algorithm are also discussed briefly. Some of the many application areas where the Viterbi Algorithm has been used are considered, including it's use in communications, target tracking and pattern recognition problems. A proposal for further research into the use of the Viterbi Algorithm in Signature Verification is then presented, and is the area of present research at the moment.",
"title": ""
},
{
"docid": "0397514e0d4a87bd8b59d9b317f8c660",
"text": "Formula 1 motorsport is a platform for maximum race car driving performance resulting from high-tech developments in the area of lightweight materials and aerodynamic design. In order to ensure the driver’s safety in case of high-speed crashes, special impact structures are designed to absorb the race car’s kinetic energy and limit the decelerations acting on the human body. These energy absorbing structures are made of laminated composite sandwich materials like the whole monocoque chassis and have to meet defined crash test requirements specified by the FIA. This study covers the crash behaviour of the nose cone as the F1 racing car front impact structure. Finite element models for dynamic simulations with the explicit solver LS-DYNA are developed with the emphasis on the composite material modelling. Numerical results are compared to crash test data in terms of deceleration levels, absorbed energy and crushing mechanisms. The validation led to satisfying results and the overall conclusion that dynamic simulations with LS-DYNA can be a helpful tool in the design phase of an F1 racing car front impact structure.",
"title": ""
},
{
"docid": "03e1ede18dcc78409337faf265940a4d",
"text": "Epidermal thickness and its relationship to age, gender, skin type, pigmentation, blood content, smoking habits and body site is important in dermatologic research and was investigated in this study. Biopsies from three different body sites of 71 human volunteers were obtained, and thickness of the stratum corneum and cellular epidermis was measured microscopically using a preparation technique preventing tissue damage. Multiple regressions analysis was used to evaluate the effect of the various factors independently of each other. Mean (SD) thickness of the stratum corneum was 18.3 (4.9) microm at the dorsal aspect of the forearm, 11.0 (2.2) microm at the shoulder and 14.9 (3.4) microm at the buttock. Corresponding values for the cellular epidermis were 56.6 (11.5) microm, 70.3 (13.6) microm and 81.5 (15.7) microm, respectively. Body site largely explains the variation in epidermal thickness, but also a significant individual variation was observed. Thickness of the stratum corneum correlated positively to pigmentation (p = 0.0008) and negatively to the number of years of smoking (p < 0.0001). Thickness of the cellular epidermis correlated positively to blood content (P = 0.028) and was greater in males than in females (P < 0.0001). Epidermal thickness was not correlated to age or skin type.",
"title": ""
},
{
"docid": "910c8ca022db7b806565e1c16c4cfb6a",
"text": "Three di¡erent understandings of causation, each importantly shaped by the work of statisticians, are examined from the point of view of their value to sociologists: causation as robust dependence, causation as consequential manipulation, and causation as generative process. The last is favoured as the basis for causal analysis in sociology. It allows the respective roles of statistics and theory to be clari¢ed and is appropriate to sociology as a largely non-experimental social science in which the concept of action is central.",
"title": ""
},
{
"docid": "97ec7149cbaedc6af3a26030067e2dba",
"text": "Skype is a peer-to-peer VoIP client developed by KaZaa in 2003. Skype claims that it can work almost seamlessly across NATs and firewalls and has better voice quality than the MSN and Yahoo IM applications. It encrypts calls end-to-end, and stores user information in a decentralized fashion. Skype also supports instant messaging and conferencing. This report analyzes key Skype functions such as login, NAT and firewall traversal, call establishment, media transfer, codecs, and conferencing under three different network setups. Analysis is performed by careful study of Skype network traffic.",
"title": ""
},
{
"docid": "2316e37df8796758c86881aaeed51636",
"text": "Physical activity recognition using embedded sensors has enabled many context-aware applications in different areas, such as healthcare. Initially, one or more dedicated wearable sensors were used for such applications. However, recently, many researchers started using mobile phones for this purpose, since these ubiquitous devices are equipped with various sensors, ranging from accelerometers to magnetic field sensors. In most of the current studies, sensor data collected for activity recognition are analyzed offline using machine learning tools. However, there is now a trend towards implementing activity recognition systems on these devices in an online manner, since modern mobile phones have become more powerful in terms of available resources, such as CPU, memory and battery. The research on offline activity recognition has been reviewed in several earlier studies in detail. However, work done on online activity recognition is still in its infancy and is yet to be reviewed. In this paper, we review the studies done so far that implement activity recognition systems on mobile phones and use only their on-board sensors. We discuss various aspects of these studies. Moreover, we discuss their limitations and present various recommendations for future research.",
"title": ""
},
{
"docid": "791314f5cee09fc8e27c236018a0927f",
"text": "© The Author(s) 2018. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creat iveco mmons .org/licen ses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creat iveco mmons .org/ publi cdoma in/zero/1.0/) applies to the data made available in this article, unless otherwise stated. Oral presentations",
"title": ""
},
{
"docid": "7d7ea6239106f614f892701e527122e2",
"text": "The purpose of this study was to investigate the effects of aromatherapy on the anxiety, sleep, and blood pressure (BP) of percutaneous coronary intervention (PCI) patients in an intensive care unit (ICU). Fifty-six patients with PCI in ICU were evenly allocated to either the aromatherapy or conventional nursing care. Aromatherapy essential oils were blended with lavender, roman chamomile, and neroli with a 6 : 2 : 0.5 ratio. Participants received 10 times treatment before PCI, and the same essential oils were inhaled another 10 times after PCI. Outcome measures patients' state anxiety, sleeping quality, and BP. An aromatherapy group showed significantly low anxiety (t = 5.99, P < .001) and improving sleep quality (t = -3.65, P = .001) compared with conventional nursing intervention. The systolic BP of both groups did not show a significant difference by time or in a group-by-time interaction; however, a significant difference was observed between groups (F = 4.63, P = .036). The diastolic BP did not show any significant difference by time or by a group-by-time interaction; however, a significant difference was observed between groups (F = 6.93, P = .011). In conclusion, the aromatherapy effectively reduced the anxiety levels and increased the sleep quality of PCI patients admitted to the ICU. Aromatherapy may be used as an independent nursing intervention for reducing the anxiety levels and improving the sleep quality of PCI patients.",
"title": ""
}
] | scidocsrr |
ebba225894ba7ed1352745abc47dd099 | A SLIM WIDEBAND AND CONFORMAL UHF RFID TAG ANTENNA BASED ON U-SHAPED SLOTS FOR METALLIC OBJECTS | [
{
"docid": "48ea1d793f0ae2b79f406c87fe5980b5",
"text": "In this paper, we describe a UHF radio-frequency-identification tag test and measurement system based on National Instruments LabVIEW-controlled PXI RF hardware. The system operates in 800-1000-MHz frequency band with a variable output power up to 30 dBm and is capable of testing tags using Gen2 and other protocols. We explain testing methods and metrics, describe in detail the construction of our system, show its operation with real tag measurement examples, and draw general conclusions.",
"title": ""
}
] | [
{
"docid": "44bb8c5202edadc2f14fa27c0fbb9705",
"text": "In this paper, a new Near Field Communication (NFC) antenna solution that can be used for portable devices with metal back cover is proposed. In particular, there are two holes on metal back cover, a slit between the two holes, and antenna coil located behind the metal cover. With such an arrangement, the shielding effect of the metal cover can be totally eliminated. Simulated and measured results of the proposed antenna are presented.",
"title": ""
},
{
"docid": "abc1be23f803390c2aadd58059eb177e",
"text": "In the atomic force microscope (AFM) scanning system, the piezoscanner is significant in realizing high-performance tasks. To cater to this demand, a novel compliant two-degrees-of-freedom (2-DOF) micro-/nanopositioning stage with modified lever displacement amplifiers is proposed in this paper, which can be selected to work in dual modes. Moreover, the modified double four-bar P (P denotes prismatic) joints are adopted in designing the flexible limbs. The established models for the mechanical performance evaluation in terms of kinetostatics, dynamics, and workspace are validated by finite-element analysis. After a series of dimension optimizations carried out via particle swarm optimization algorithm, a novel active disturbance rejection controller, including the components of nonlinearity tracking differentiator, extended state observer, and nonlinear state error feedback, is designed for automatically estimating and suppressing the plant uncertainties arising from the hysteresis nonlinearity, creep effect, sensor noises, and other unknown disturbances. The closed-loop control results based on simulation and prototype indicate that the two working natural frequencies of the proposed stage are approximated to be 805.19 and 811.31 Hz, the amplification ratio in two axes is about 4.2, and the workspace is around 120 ×120 μm2, while the cross-coupling between the two axes is kept within 2%. All of the results indicate that the developed micro-/nanopositioning system has a good property for high-performance AFM scanning.",
"title": ""
},
{
"docid": "d1041afcb50a490034740add2cce3f0d",
"text": "Inverse synthetic aperture radar imaging of moving targets with a stepped frequency waveform presents unique challenges. Intra-step target motion introduces phase discontinuities between frequency bands, which in turn produce degraded range side lobes. Frequency stitching of the stepped-frequency waveform to emulate a contiguous bandwidth can dramatically reduce the effective pulse repetition frequency, which then may impact the maximize target size that can be unambiguously measured and imaged via ISAR. This paper analyzes these effects and validates results via simulated data.",
"title": ""
},
{
"docid": "be7d32aeffecc53c5d844a8f90cd5ce0",
"text": "Wordnets play a central role in many natural language processing tasks. This paper introduces a multilingual editing system for the Open Multilingual Wordnet (OMW: Bond and Foster, 2013). Wordnet development, like most lexicographic tasks, is slow and expensive. Moving away from the original Princeton Wordnet (Fellbaum, 1998) development workflow, wordnet creation and expansion has increasingly been shifting towards an automated and/or interactive system facilitated task. In the particular case of human edition/expansion of wordnets, a few systems have been developed to aid the lexicographers’ work. Unfortunately, most of these tools have either restricted licenses, or have been designed with a particular language in mind. We present a webbased system that is capable of multilingual browsing and editing for any of the hundreds of languages made available by the OMW. All tools and guidelines are freely available under an open license.",
"title": ""
},
{
"docid": "0e002aae88332f8143e6f3a19c4c578b",
"text": "While attachment research has demonstrated that parents' internal working models of attachment relationships tend to be transmitted to their children, affecting children's developmental trajectories, this study specifically examines associations between adult attachment status and observable parent, child, and dyadic behaviors among children with autism and associated neurodevelopmental disorders of relating and communicating. The Adult Attachment Interview (AAI) was employed to derive parental working models of attachment relationships. The Functional Emotional Assessment Scale (FEAS) was used to determine the quality of relational and functional behaviors in parents and their children. The sample included parents and their 4- to 16-year-old children with autism and associated neurodevelopmental disorders. Hypothesized relationships between AAI classifications and FEAS scores were supported. Significant correlations were found between AAI classification and FEAS scores, indicating that children with autism spectrum disorders whose parents demonstrated secure attachment representations were better able to initiate and respond in two-way pre-symbolic gestural communication; organize two-way social problem-solving communication; and engage in imaginative thinking, symbolic play, and verbal communication. These findings lend support to the relevance of the parent's state of mind pertaining to attachment status to child and parent relational behavior in cases wherein the child has been diagnosed with autism or an associated neurodevelopmental disorder of relating and communicating. A model emerges from these findings of conceptualizing relationships between parental internal models of attachment relationships and parent-child relational and functional levels that may aid in differentiating interventions.",
"title": ""
},
{
"docid": "37adbe33e4d83794fa85e7155a3e51d4",
"text": "Information technology matters to business success because it directly affects the mechanisms through which they create and capture value to earn a profit: IT is thus integral to a firm’s business-level strategy. Much of the extant research on the IT/strategy relationship, however, inaccurately frames IT as only a functionallevel strategy. This widespread under-appreciation of the business-level role of IT indicates a need for substantial retheorizing of its role in strategy and its complex and interdependent relationship with the mechanisms through which firms generate profit. Using a comprehensive framework of potential profit mechanisms, we argue that while IT activities remain integral to the functional-level strategies of the firm, they also play several significant roles in business strategy, with substantial performance implications. IT affects industry structure and the set of business-level strategic alternatives and value-creation opportunities that a firm may pursue. Along with complementary organizational changes, IT both enhances the firm’s current (ordinary) capabilities and enables new (dynamic) capabilities, including the flexibility to focus on rapidly changing opportunities or to abandon losing initiatives while salvaging substantial asset value. Such digitally attributable capabilities also determine how much of this value, once created, can be captured by the firm—and how much will be dissipated through competition or through the power of value chain partners, the governance of which itself depends on IT. We explore these business-level strategic roles of IT and discuss several provocative implications and future research directions in the converging information systems and strategy domains.",
"title": ""
},
{
"docid": "14fac04f802367a56a03fcdce88044f8",
"text": "Humidity measurement is one of the most significant issues in various areas of applications such as instrumentation, automated systems, agriculture, climatology and GIS. Numerous sorts of humidity sensors fabricated and developed for industrial and laboratory applications are reviewed and presented in this article. The survey frequently concentrates on the RH sensors based upon their organic and inorganic functional materials, e.g., porous ceramics (semiconductors), polymers, ceramic/polymer and electrolytes, as well as conduction mechanism and fabrication technologies. A significant aim of this review is to provide a distinct categorization pursuant to state of the art humidity sensor types, principles of work, sensing substances, transduction mechanisms, and production technologies. Furthermore, performance characteristics of the different humidity sensors such as electrical and statistical data will be detailed and gives an added value to the report. By comparison of overall prospects of the sensors it was revealed that there are still drawbacks as to efficiency of sensing elements and conduction values. The flexibility offered by thick film and thin film processes either in the preparation of materials or in the choice of shape and size of the sensor structure provides advantages over other technologies. These ceramic sensors show faster response than other types.",
"title": ""
},
{
"docid": "f4271386b02994f33a5eae3c6c67a879",
"text": "Joint FAO/WHO expert's consultation report defines probiotics as: Live microorganisms which when administered in adequate amounts confer a health benefit on the host. Most commonly used probiotics are Lactic acid bacteria (LAB) and bifidobacteria. There are other examples of species used as probiotics (certain yeasts and bacilli). Probiotic supplements are popular now a days. From the beginning of 2000, research on probiotics has increased remarkably. Probiotics are now day's widely studied for their beneficial effects in treatment of many prevailing diseases. Here we reviewed the beneficiary effects of probiotics in some diseases.",
"title": ""
},
{
"docid": "d03a86459dd461dcfac842ae55ae4ebb",
"text": "Convolutional networks are the de-facto standard for analyzing spatio-temporal data such as images, videos, and 3D shapes. Whilst some of this data is naturally dense (e.g., photos), many other data sources are inherently sparse. Examples include 3D point clouds that were obtained using a LiDAR scanner or RGB-D camera. Standard \"dense\" implementations of convolutional networks are very inefficient when applied on such sparse data. We introduce new sparse convolutional operations that are designed to process spatially-sparse data more efficiently, and use them to develop spatially-sparse convolutional networks. We demonstrate the strong performance of the resulting models, called submanifold sparse convolutional networks (SS-CNs), on two tasks involving semantic segmentation of 3D point clouds. In particular, our models outperform all prior state-of-the-art on the test set of a recent semantic segmentation competition.",
"title": ""
},
{
"docid": "e65c5458a27fc5367be4fd6024e8eb43",
"text": "The aims of this article are to review low-voltage vs high-voltage electrical burn complications in adults and to identify novel areas that are not recognized to improve outcomes. An extensive literature search on electrical burn injuries was performed using OVID MEDLINE, PubMed, and EMBASE databases from 1946 to 2015. Studies relating to outcomes of electrical injury in the adult population (≥18 years of age) were included in the study. Forty-one single-institution publications with a total of 5485 electrical injury patients were identified and included in the present study. Fourty-four percent of these patients were low-voltage injuries (LVIs), 38.3% high-voltage injuries (HVIs), and 43.7% with voltage not otherwise specified. Forty-four percentage of studies did not characterize outcomes according to LHIs vs HVIs. Reported outcomes include surgical, medical, posttraumatic, and others (long-term/psychological/rehabilitative), all of which report greater incidence rates in HVI than in LVI. Only two studies report on psychological outcomes such as posttraumatic stress disorder. Mortality rates from electrical injuries are 2.6% in LVI, 5.2% in HVI, and 3.7% in not otherwise specified. Coroner's reports revealed a ratio of 2.4:1 for deaths caused by LVI compared with HVI. HVIs lead to greater morbidity and mortality than LVIs. However, the results of the coroner's reports suggest that immediate mortality from LVI may be underestimated. Furthermore, on the basis of this analysis, we conclude that the majority of studies report electrical injury outcomes; however, the majority of them do not analyze complications by low vs high voltage and often lack long-term psychological and rehabilitation outcomes after electrical injury indicating that a variety of central aspects are not being evaluated or assessed.",
"title": ""
},
{
"docid": "5ee490a307a0b6108701225170690386",
"text": "An ink dating method based on solvent analysis was recently developed using thermal desorption followed by gas chromatography/mass spectrometry (GC/MS) and is currently implemented in several forensic laboratories. The main aims of this work were to implement this method in a new laboratory to evaluate whether results were comparable at three levels: (i) validation criteria, (ii) aging curves, and (iii) results interpretation. While the results were indeed comparable in terms of validation, the method proved to be very sensitive to maintenances. Moreover, the aging curves were influenced by ink composition, as well as storage conditions (particularly when the samples were not stored in \"normal\" room conditions). Finally, as current interpretation models showed limitations, an alternative model based on slope calculation was proposed. However, in the future, a probabilistic approach may represent a better solution to deal with ink sample inhomogeneity.",
"title": ""
},
{
"docid": "e325351fd8eda7ebebd46df0d0a80c19",
"text": "This paper proposes a CLL resonant dc-dc converter as an option for offline applications. This topology can achieve zero-voltage switching from zero load to a full load and zero-current switching for output rectifiers and makes the implementation of a secondary rectifier easy. This paper also presents a novel methodology for designing CLL resonant converters based on efficiency and holdup time requirements. An optimal transformer structure is proposed, which uses a current-type synchronous rectifier (SR) drive scheme. An 800-kHz 250-W CLL resonant converter prototype is built to verify the proposed circuit, design method, transformer structure, and SR drive scheme.",
"title": ""
},
{
"docid": "1d0dbfe15768703f7d5a1a56bbee3cac",
"text": "This paper investigates the effect of non-audit services on audit quality. Following the announcement of the requirement to disclose non-audit fees, approximately one-third of UK quoted companies disclosed before the requirement became effective. Whilst distressed companies were more likely to disclose early, auditor size, directors’ shareholdings and non-audit fees were not signi cantly correlated with early disclosure. These results cast doubt on the view that voluntary disclosure of non-audit fees was used to signal audit quality. The evidence also indicates a positive weakly signi cant relationship between disclosed non-audit fees and audit quali cations. This suggests that when non-audit fees are disclosed, the provision of non-audit services does not reduce audit quality.",
"title": ""
},
{
"docid": "4ecf150613d45ae0f92485b8faa0deef",
"text": "Query optimizers in current database systems are designed to pick a single efficient plan for a given query based on current statistical properties of the data. However, different subsets of the data can sometimes have very different statistical properties. In such scenarios it can be more efficient to process different subsets of the data for a query using different plans. We propose a new query processing technique called content-based routing (CBR) that eliminates the single-plan restriction in current systems. We present low-overhead adaptive algorithms that partition input data based on statistical properties relevant to query execution strategies, and efficiently route individual tuples through customized plans based on their partition. We have implemented CBR as an extension to the Eddies query processor in the TelegraphCQ system, and we present an extensive experimental evaluation showing the significant performance benefits of CBR.",
"title": ""
},
{
"docid": "63339fb80c01c38911994cd326e483a3",
"text": "Older adults are becoming a significant percentage of the world's population. A multitude of factors, from the normal aging process to the progression of chronic disease, influence the nutrition needs of this very diverse group of people. Appropriate micronutrient intake is of particular importance but is often suboptimal. Here we review the available data regarding micronutrient needs and the consequences of deficiencies in the ever growing aged population.",
"title": ""
},
{
"docid": "9794653cc79a0835851fdc890e908823",
"text": "In 1988, Hickerson proved the celebrated “mock theta conjectures”, a collection of ten identities from Ramanujan’s “lost notebook” which express certain modular forms as linear combinations of mock theta functions. In the context of Maass forms, these identities arise from the peculiar phenomenon that two different harmonic Maass forms may have the same non-holomorphic parts. Using this perspective, we construct several infinite families of modular forms which are differences of mock theta functions.",
"title": ""
},
{
"docid": "c4a74726ac56b0127e5920098e6f0258",
"text": "BACKGROUND\nAttention Deficit Hyperactivity disorder (ADHD) is one of the most common and challenging childhood neurobehavioral disorders. ADHD is known to negatively impact children, their families, and their community. About one-third to one-half of patients with ADHD will have persistent symptoms into adulthood. The prevalence in the United States is estimated at 5-11%, representing 6.4 million children nationwide. The variability in the prevalence of ADHD worldwide and within the US may be due to the wide range of factors that affect accurate assessment of children and youth. Because of these obstacles to assessment, ADHD is under-diagnosed, misdiagnosed, and undertreated.\n\n\nOBJECTIVES\nWe examined factors associated with making and receiving the diagnosis of ADHD. We sought to review the consequences of a lack of diagnosis and treatment for ADHD on children's and adolescent's lives and how their families and the community may be involved in these consequences.\n\n\nMETHODS\nWe reviewed scientific articles looking for factors that impact the identification and diagnosis of ADHD and articles that demonstrate naturalistic outcomes of diagnosis and treatment. The data bases PubMed and Google scholar were searched from the year 1995 to 2015 using the search terms \"ADHD, diagnosis, outcomes.\" We then reviewed abstracts and reference lists within those articles to rule out or rule in these or other articles.\n\n\nRESULTS\nMultiple factors have significant impact in the identification and diagnosis of ADHD including parents, healthcare providers, teachers, and aspects of the environment. Only a few studies detailed the impact of not diagnosing ADHD, with unclear consequences independent of treatment. A more significant number of studies have examined the impact of untreated ADHD. The experience around receiving a diagnosis described by individuals with ADHD provides some additional insights.\n\n\nCONCLUSION\nADHD diagnosis is influenced by perceptions of many different members of a child's community. A lack of clear understanding of ADHD and the importance of its diagnosis and treatment still exists among many members of the community including parents, teachers, and healthcare providers. More basic and clinical research will improve methods of diagnosis and information dissemination. Even before further advancements in science, strong partnerships between clinicians and patients with ADHD may be the best way to reduce the negative impacts of this disorder.",
"title": ""
},
{
"docid": "87ac799402c785e68db14636b0725523",
"text": "One of the challenges of creating applications from confederations of Internet-enabled things is the complexity of having to deal with spontaneously interacting and partially available heterogeneous devices. In this paper we describe the features of the MAGIC Broker 2 (MB2) a platform designed to offer a simple and consistent programming interface for collections of things. We report on the key abstractions offered by the platform and report on its use for developing two IoT applications involving spontaneous device interaction: 1) mobile phones and public displays, and 2) a web-based sensor actuator network portal called Sense Tecnic (STS). We discuss how the MB2 abstractions and implementation have evolved over time to the current design. Finally we present a preliminary performance evaluation and report qualitatively on the developers' experience of using our platform.",
"title": ""
},
{
"docid": "33cab0ec47af5e40d64e34f8ffc7dd6f",
"text": "This inaugural article has a twofold purpose: (i) to present a simpler and more general justification of the fundamental scaling laws of quasibrittle fracture, bridging the asymptotic behaviors of plasticity, linear elastic fracture mechanics, and Weibull statistical theory of brittle failure, and (ii) to give a broad but succinct overview of various applications and ramifications covering many fields, many kinds of quasibrittle materials, and many scales (from 10(-8) to 10(6) m). The justification rests on developing a method to combine dimensional analysis of cohesive fracture with second-order accurate asymptotic matching. This method exploits the recently established general asymptotic properties of the cohesive crack model and nonlocal Weibull statistical model. The key idea is to select the dimensionless variables in such a way that, in each asymptotic case, all of them vanish except one. The minimal nature of the hypotheses made explains the surprisingly broad applicability of the scaling laws.",
"title": ""
},
{
"docid": "377e9bfebd979c25728fdede2af74335",
"text": "Youth Gangs: An Overview, the initial Bulletin in this series, brings together available knowledge on youth gangs by reviewing data and research. The author begins with a look at the history of youth gangs and their demographic characteristics. He then assesses the scope of the youth gang problem, including gang problems in juvenile detention and correctional facilities. A review of gang studies provides a clearer understanding of several issues. An extensive list of references is also included for further review.",
"title": ""
}
] | scidocsrr |
80b19612fbeafc0b6aa6df7c466c8d11 | Relative Camera Pose Estimation Using Convolutional Neural Networks | [
{
"docid": "4d7cbe7f5e854028277f0120085b8977",
"text": "In this paper we formulate structure from motion as a learning problem. We train a convolutional network end-to-end to compute depth and camera motion from successive, unconstrained image pairs. The architecture is composed of multiple stacked encoder-decoder networks, the core part being an iterative network that is able to improve its own predictions. The network estimates not only depth and motion, but additionally surface normals, optical flow between the images and confidence of the matching. A crucial component of the approach is a training loss based on spatial relative differences. Compared to traditional two-frame structure from motion methods, results are more accurate and more robust. In contrast to the popular depth-from-single-image networks, DeMoN learns the concept of matching and, thus, better generalizes to structures not seen during training.",
"title": ""
}
] | [
{
"docid": "a4d7596cfcd4a9133c5677a481c88cf0",
"text": "The understanding of where humans look in a scene is a problem of great interest in visual perception and computer vision. When eye-tracking devices are not a viable option, models of human attention can be used to predict fixations. In this paper we give two contribution. First, we show a model of visual attention that is simply based on deep convolutional neural networks trained for object classification tasks. A method for visualizing saliency maps is defined which is evaluated in a saliency prediction task. Second, we integrate the information of these maps with a bottom-up differential model of eye-movements to simulate visual attention scanpaths. Results on saliency prediction and scores of similarity with human scanpaths demonstrate the effectiveness of this model.",
"title": ""
},
{
"docid": "ce63aad5288d118eb6ca9d99b96e9cac",
"text": "Unknown malware has increased dramatically, but the existing security software cannot identify them effectively. In this paper, we propose a new malware detection and classification method based on n-grams attribute similarity. We extract all n-grams of byte codes from training samples and select the most relevant as attributes. After calculating the average value of attributes in malware and benign separately, we determine a test sample is malware or benign by attribute similarity between attributes of the test sample and the two average attributes of malware and benign. We compare our method with a variety of machine learning methods, including Naïve Bayes, Bayesian Networks, Support Vector Machine and C4.5 Decision Tree. Experimental results on public (Open Malware Benchmark) and private (self-collected) datasets both reveal that our method outperforms the other four methods.",
"title": ""
},
{
"docid": "33b8012ae66f07c9de158f4c514c4e99",
"text": "Many mathematicians have a dismissive attitude towards paradoxes. This is unfortunate, because many paradoxes are rich in content, having connections with serious mathematical ideas as well as having pedagogical value in teaching elementary logical reasoning. An excellent example is the so-called “surprise examination paradox” (described below), which is an argument that seems at first to be too silly to deserve much attention. However, it has inspired an amazing variety of philosophical and mathematical investigations that have in turn uncovered links to Gödel’s incompleteness theorems, game theory, and several other logical paradoxes (e.g., the liar paradox and the sorites paradox). Unfortunately, most mathematicians are unaware of this because most of the literature has been published in philosophy journals.",
"title": ""
},
{
"docid": "91f20c48f5a4329260aadb87a0d8024c",
"text": "In this paper, we survey key design for manufacturing issues for extreme scaling with emerging nanolithography technologies, including double/multiple patterning lithography, extreme ultraviolet lithography, and electron-beam lithography. These nanolithography and nanopatterning technologies have different manufacturing processes and their unique challenges to very large scale integration (VLSI) physical design, mask synthesis, and so on. It is essential to have close VLSI design and underlying process technology co-optimization to achieve high product quality (power/performance, etc.) and yield while making future scaling cost-effective and worthwhile. Recent results and examples will be discussed to show the enablement and effectiveness of such design and process integration, including lithography model/analysis, mask synthesis, and lithography friendly physical design.",
"title": ""
},
{
"docid": "c3dd3dd59afe491fcc6b4cd1e32c88a3",
"text": "The Semantic Web drives towards the use of the Web for interacting with logically interconnected data. Through knowledge models such as Resource Description Framework (RDF), the Semantic Web provides a unifying representation of richly structured data. Adding logic to the Web implies the use of rules to make inferences, choose courses of action, and answer questions. This logic must be powerful enough to describe complex properties of objects but not so powerful that agents can be tricked by being asked to consider a paradox. The Web has several characteristics that can lead to problems when existing logics are used, in particular, the inconsistencies that inevitably arise due to the openness of the Web, where anyone can assert anything. N3Logic is a logic that allows rules to be expressed in a Web environment. It extends RDF with syntax for nested graphs and quantified variables and with predicates for implication and accessing resources on the Web, and functions including cryptographic, string, math. The main goal of N3Logic is to be a minimal extension to the RDF data model such that the same language can be used for logic and data. In this paper, we describe N3Logic and illustrate through examples why it is an appropriate logic for the Web.",
"title": ""
},
{
"docid": "46ab85859bd3966b243db79696a236f0",
"text": "The general purpose optimization method known as Particle Swarm Optimization (PSO) has received much attention in past years, with many attempts to find the variant that performs best on a wide variety of optimization problems. The focus of past research has been with making the PSO method more complex, as this is frequently believed to increase its adaptability to other optimization problems. This study takes the opposite approach and simplifies the PSO method. To compare the efficacy of the original PSO and the simplified variant here, an easy technique is presented for efficiently tuning their behavioural parameters. The technique works by employing an overlaid meta-optimizer, which is capable of simultaneously tuning parameters with regard to multiple optimization problems, whereas previous approaches to meta-optimization have tuned behavioural parameters to work well on just a single optimization problem. It is then found that the PSO method and its simplified variant not only have comparable performance for optimizing a number of Artificial Neural Network problems, but the simplified variant appears to offer a small improvement in some cases.",
"title": ""
},
{
"docid": "466bb7b70fc1c5973fbea3ade7ebd845",
"text": "High-speed and heavy-load stacking robot technology is a common key technique in nonferrous metallurgy areas. Specific layer stacking robot of aluminum ingot continuous casting production line, which has four-DOF, is designed in this paper. The kinematics model is built and studied in detail by D-H method. The transformation matrix method is utilized to solve the kinematics equation of robot. Mutual motion relations between each joint variables and the executive device of robot is got. The kinematics simulation of the robot is carried out via the ADAMS-software. The results of simulation verify the theoretical analysis and lay the foundation for following static and dynamic characteristics analysis of the robot.",
"title": ""
},
{
"docid": "ac0b562db18fac38663b210f599c2deb",
"text": "This paper proposes a fast and stable image-based modeling method which generates 3D models with high-quality face textures in a semi-automatic way. The modeler guides untrained users to quickly obtain 3D model data via several steps of simple user interface operations using predefined 3D primitives. The proposed method contains an iterative non-linear error minimization technique in the model estimation step with an error function based on finite line segments instead of infinite lines. The error corresponds to the difference between the observed structure and the predicted structure from current model parameters. Experimental results on real images validate the robustness and the accuracy of the algorithm. 2005 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "77cea98467305b9b3b11de8d3cec6ec2",
"text": "NoSQL and especially graph databases are constantly gaining popularity among developers of Web 2.0 applications as they promise to deliver superior performance when handling highly interconnected data compared to traditional relational databases. Apache Shindig is the reference implementation for OpenSocial with its highly interconnected data model. However, the default back-end is based on a relational database. In this paper we describe our experiences with a different back-end based on the graph database Neo4j and compare the alternatives for querying data with each other and the JPA-based sample back-end running on MySQL. Moreover, we analyze why the different approaches often may yield such diverging results concerning throughput. The results show that the graph-based back-end can match and even outperform the traditional JPA implementation and that Cypher is a promising candidate for a standard graph query language, but still leaves room for improvements.",
"title": ""
},
{
"docid": "e78d53a2790ac3b6011910f82cefaff9",
"text": "A two-dimensional crystal of molybdenum disulfide (MoS2) monolayer is a photoluminescent direct gap semiconductor in striking contrast to its bulk counterpart. Exfoliation of bulk MoS2 via Li intercalation is an attractive route to large-scale synthesis of monolayer crystals. However, this method results in loss of pristine semiconducting properties of MoS2 due to structural changes that occur during Li intercalation. Here, we report structural and electronic properties of chemically exfoliated MoS2. The metastable metallic phase that emerges from Li intercalation was found to dominate the properties of as-exfoliated material, but mild annealing leads to gradual restoration of the semiconducting phase. Above an annealing temperature of 300 °C, chemically exfoliated MoS2 exhibit prominent band gap photoluminescence, similar to mechanically exfoliated monolayers, indicating that their semiconducting properties are largely restored.",
"title": ""
},
{
"docid": "7e6b6f8bab3172457473d158960688a7",
"text": "BACKGROUND\nCancer is a leading cause of death worldwide. Given the complexity of caring work, recent studies have focused on the professional quality of life of oncology nurses. China, the world's largest developing country, faces heavy burdens of care for cancer patients. Chinese oncology nurses may be encountering the negative side of their professional life. However, studies in this field are scarce, and little is known about the prevalence and predictors of oncology nurses' professional quality of life.\n\n\nOBJECTIVES\nTo describe and explore the prevalence of predictors of professional quality of life (compassion fatigue, burnout and compassion satisfaction) among Chinese oncology nurses under the guidance of two theoretical models.\n\n\nDESIGN\nA cross-sectional design with a survey.\n\n\nSETTINGS\nTen tertiary hospitals and five secondary hospitals in Shanghai, China.\n\n\nPARTICIPANTS\nA convenience and cluster sample of 669 oncology nurses was used. All of the nurses worked in oncology departments and had over 1 year of oncology nursing experience. Of the selected nurses, 650 returned valid questionnaires that were used for statistical analyses.\n\n\nMETHODS\nThe participants completed the demographic and work-related questionnaire, the Chinese version of the Professional Quality of Life Scale for Nurses, the Chinese version of the Jefferson Scales of Empathy, the Simplified Coping Style Questionnaire, the Perceived Social Support Scale, and the Chinese Big Five Personality Inventory brief version. Descriptive statistics, t-tests, one-way analysis of variance, simple and multiple linear regressions were used to determine the predictors of the main research variables.\n\n\nRESULTS\nHigher compassion fatigue and burnout were found among oncology nurses who had more years of nursing experience, worked in secondary hospitals and adopted passive coping styles. Cognitive empathy, training and support from organizations were identified as significant protectors, and 'perspective taking' was the strongest predictor of compassion satisfaction, explaining 23.0% of the variance. Personality traits of openness and conscientiousness were positively associated with compassion satisfaction, while neuroticism was a negative predictor, accounting for 24.2% and 19.8% of the variance in compassion fatigue and burnout, respectively.\n\n\nCONCLUSIONS\nOncology care has unique features, and oncology nurses may suffer from more work-related stressors compared with other types of nurses. Various predictors can influence the professional quality of life, and some of these should be considered in the Chinese nursing context. The results may provide clues to help nurse administrators identify oncology nurses' vulnerability to compassion fatigue and burnout and develop comprehensive strategies to improve their professional quality of life.",
"title": ""
},
{
"docid": "a2fa1d74fcaa6891e1a43dca706015b0",
"text": "Smart meters have been deployed worldwide in recent years that enable real-time communications and networking capabilities in power distribution systems. Problematically, recent reports have revealed incidents of energy theft in which dishonest customers would lower their electricity bills (aka stealing electricity) by tampering with their meters. The physical attack can be extended to a network attack by means of false data injection (FDI). This paper is thus motivated to investigate the currently-studied FDI attack by introducing the combination sum of energy profiles (CONSUMER) attack in a coordinated manner on a number of customers' smart meters, which results in a lower energy consumption reading for the attacker and a higher reading for the others in a neighborhood. We propose a CONSUMER attack model that is formulated into one type of coin change problems, which minimizes the number of compromised meters subject to the equality of an aggregated load to evade detection. A hybrid detection framework is developed to detect anomalous and malicious activities by incorporating our proposed grid sensor placement algorithm with observability analysis to increase the detection rate. Our simulations have shown that the network observability and detection accuracy can be improved by means of grid-placed sensor deployment.",
"title": ""
},
{
"docid": "3e805d6724dc400d681b3b42393d5ebe",
"text": "This paper introduces a framework for conducting and writing an effective literature review. The target audience for the framework includes information systems (IS) doctoral students, novice IS researchers, and other IS researchers who are constantly struggling with the development of an effective literature-based foundation for a proposed research. The proposed framework follows the systematic data processing approach comprised of three major stages: 1) inputs (literature gathering and screening), 2) processing (following Bloom’s Taxonomy), and 3) outputs (writing the literature review). This paper provides the rationale for developing a solid literature review including detailed instructions on how to conduct each stage of the process proposed. The paper concludes by providing arguments for the value of an effective literature review to IS research.",
"title": ""
},
{
"docid": "1d9361cffd8240f3b691c887def8e2f5",
"text": "Twenty seven essential oils, isolated from plants representing 11 families of Portuguese flora, were screened for their nematicidal activity against the pinewood nematode (PWN), Bursaphelenchus xylophilus. The essential oils were isolated by hydrodistillation and the volatiles by distillation-extraction, and both were analysed by GC and GC-MS. High nematicidal activity was achieved with essential oils from Chamaespartium tridentatum, Origanum vulgare, Satureja montana, Thymbra capitata, and Thymus caespititius. All of these essential oils had an estimated minimum inhibitory concentration ranging between 0.097 and 0.374 mg/ml and a lethal concentration necessary to kill 100% of the population (LC(100)) between 0.858 and 1.984 mg/ml. Good nematicidal activity was also obtained with the essential oil from Cymbopogon citratus. The dominant components of the effective oils were 1-octen-3-ol (9%), n-nonanal, and linalool (both 7%) in C. tridentatum, geranial (43%), neral (29%), and β-myrcene (25%) in C. citratus, carvacrol (36% and 39%), γ-terpinene (24% and 40%), and p-cymene (14% and 7%) in O. vulgare and S. montana, respectively, and carvacrol (75% and 65%, respectively) in T. capitata and T. caespititius. The other essential oils obtained from Portuguese flora yielded weak or no activity. Five essential oils with nematicidal activity against PWN are reported for the first time.",
"title": ""
},
{
"docid": "5f0157139bff33057625686b7081a0c8",
"text": "A novel MIC/MMIC compatible microstrip to waveguide transition for X band is presented. The transition has realized on novel low cost substrate and its main features are: wideband operation, low insertion loss and feeding without a balun directly by the microstrip line.",
"title": ""
},
{
"docid": "c85a26f1bccf3b28ca6a46c5312040e7",
"text": "This paper describes a novel compact design of a planar circularly polarized (CP) tag antenna for use in a ultrahigh frequency (UHF) radio frequency identification (RFID) system. Introducing the meander strip into the right-arm of the square-ring structure enables the measured half-power bandwidth of the proposed CP tag antenna to exceed 100 MHz (860–960 MHz), which includes the entire operating bandwidth of the global UHF RFID system. A 3-dB axial-ratio bandwidth of approximately 36 MHz (902–938 MHz) can be obtained, which is suitable for American (902–928 MHz), European (918–926 MHz), and Taiwanese UHF RFID (922–928 MHz) applications. Since the overall antenna dimensions are only <inline-formula> <tex-math notation=\"LaTeX\">$54\\times54$ </tex-math></inline-formula> mm<sup>2</sup>, the proposed tag antenna can be operated with a size that is 64% smaller than that of the tag antennas attached on the safety glass. With a bidirectional reading pattern, the measured reading distance is about 8.3 m. Favorable tag sensitivity is obtained across the desired frequency band.",
"title": ""
},
{
"docid": "efc341c0a3deb6604708b6db361bfba5",
"text": "In recent years, data analysis has become important with increasing data volume. Clustering, which groups objects according to their similarity, has an important role in data analysis. DBSCAN is one of the most effective and popular density-based clustering algorithm and has been successfully implemented in many areas. However, it is a challenging task to determine the input parameter values of DBSCAN algorithm which are neighborhood radius Eps and minimum number of points MinPts. The values of these parameters significantly affect clustering performance of the algorithm. In this study, we propose AE-DBSCAN algorithm which includes a new method to determine the value of neighborhood radius Eps automatically. The experimental evaluations showed that the proposed method outperformed the classical method.",
"title": ""
},
{
"docid": "ceb66016a57a936d33675756ee2e7eed",
"text": "Detecting small vehicles in aerial images is a difficult job that can be challenging even for humans. Rotating objects, low resolution, small inter-class variability and very large images comprising complicated backgrounds render the work of photo-interpreters tedious and wearisome. Unfortunately even the best classical detection pipelines like Ren et al. [2015] cannot be used off-the-shelf with good results because they were built to process object centric images from day-to-day life with multi-scale vertical objects. In this work we build on the Faster R-CNN approach to turn it into a detection framework that deals appropriately with the rotation equivariance inherent to any aerial image task. This new pipeline (Faster Rotation Equivariant Regions CNN) gives, without any bells and whistles, state-of-the-art results on one of the most challenging aerial imagery datasets: VeDAI Razakarivony and Jurie [2015] and give good results w.r.t. the baseline Faster R-CNN on two others: Munich Leitloff et al. [2014] and GoogleEarth Heitz and Koller [2008].",
"title": ""
},
{
"docid": "b1b6e670f21479956d2bbe281c6ff556",
"text": "Near real-time data from the MODIS satellite sensor was used to detect and trace a harmful algal bloom (HAB), or red tide, in SW Florida coastal waters from October to December 2004. MODIS fluorescence line height (FLH in W m 2 Am 1 sr ) data showed the highest correlation with near-concurrent in situ chlorophyll-a concentration (Chl in mg m ). For Chl ranging between 0.4 to 4 mg m 3 the ratio between MODIS FLH and in situ Chl is about 0.1 W m 2 Am 1 sr 1 per mg m 3 chlorophyll (Chl=1.255 (FLH 10), r =0.92, n =77). In contrast, the band-ratio chlorophyll product of either MODIS or SeaWiFS in this complex coastal environment provided false information. Errors in the satellite Chl data can be both negative and positive (3–15 times higher than in situ Chl) and these data are often inconsistent either spatially or temporally, due to interferences of other water constituents. The red tide that formed from November to December 2004 off SW Florida was revealed by MODIS FLH imagery, and was confirmed by field sampling to contain medium (10 to 10 cells L ) to high (>10 cells L ) concentrations of the toxic dinoflagellate Karenia brevis. The FLH imagery also showed that the bloom started in midOctober south of Charlotte Harbor, and that it developed and moved to the south and southwest in the subsequent weeks. Despite some artifacts in the data and uncertainty caused by factors such as unknown fluorescence efficiency, our results show that the MODIS FLH data provide an unprecedented tool for research and managers to study and monitor algal blooms in coastal environments. D 2005 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "182c83e136dcc7f41c2d7a7a30321440",
"text": "Behavioral logs are traces of human behavior seen through the lenses of sensors that capture and record user activity. They include behavior ranging from low-level keystrokes to rich audio and video recordings. Traces of behavior have been gathered in psychology studies since the 1930s (Skinner, 1938 ), and with the advent of computerbased applications it became common practice to capture a variety of interaction behaviors and save them to log fi les for later analysis. In recent years, the rise of centralized, web-based computing has made it possible to capture human interactions with web services on a scale previously unimaginable. Largescale log data has enabled HCI researchers to observe how information diffuses through social networks in near real-time during crisis situations (Starbird & Palen, 2010 ), characterize how people revisit web pages over time (Adar, Teevan, & Dumais, 2008 ), and compare how different interfaces for supporting email organization infl uence initial uptake and sustained use (Dumais, Cutrell, Cadiz, Jancke, Sarin, & Robbins, 2003 ; Rodden & Leggett, 2010 ). In this chapter we provide an overview of behavioral log use in HCI. We highlight what can be learned from logs that capture people’s interactions with existing computer systems and from experiments that compare new, alternative systems. We describe how to design and analyze web experiments, and how to collect, clean and use log data responsibly. The goal of this chapter is to enable the reader to design log studies and to understand results from log studies that they read about. Understanding User Behavior Through Log Data and Analysis",
"title": ""
}
] | scidocsrr |
c39d3d007237b00c9aff9aaa4a0e6059 | EFFECTS OF INTERNET USE AND SOCIAL RESOURCES ON CHANGES IN DEPRESSION | [
{
"docid": "1a4a25e533adcd5ae0a1ce55ddcd80df",
"text": "The model introduced and tested in the current study suggests that lonely and depressed individuals may develop a preference for online social interaction, which, in turn, leads to negative outcomes associated with their Internet use. Participants completed measures of preference for online social interaction, depression, loneliness, problematic Internet use, and negative outcomes resulting from their Internet use. Results indicated that psychosocial health predicted levels of preference for online social interaction, which, in turn, predicted negative outcomes associated with problematic Internet use. In addition, the results indicated that the influence of psychosocial distress on negative outcomes due to Internet use is mediated by preference for online socialization and other symptoms of problematic Internet use. The results support the current hypothesis that that individuals’ preference for online, rather than face-to-face, social interaction plays an important role in the development of negative consequences associated with problematic Internet use.",
"title": ""
},
{
"docid": "a98c32ca34b5096a38d29a54ece2ba0b",
"text": "Those who feel better able to express their “true selves” in Internet rather than face-to-face interaction settings are more likely to form close relationships with people met on the Internet (McKenna, Green, & Gleason, this issue). Building on these correlational findings from survey data, we conducted three laboratory experiments to directly test the hypothesized causal role of differential self-expression in Internet relationship formation. Experiments 1 and 2, using a reaction time task, found that for university undergraduates, the true-self concept is more accessible in memory during Internet interactions, and the actual self more accessible during face-to-face interactions. Experiment 3 confirmed that people randomly assigned to interact over the Internet (vs. face to face) were better able to express their true-self qualities to their partners.",
"title": ""
}
] | [
{
"docid": "d81b67d0a4129ac2e118c9babb59299e",
"text": "Motivation\nA large number of newly sequenced proteins are generated by the next-generation sequencing technologies and the biochemical function assignment of the proteins is an important task. However, biological experiments are too expensive to characterize such a large number of protein sequences, thus protein function prediction is primarily done by computational modeling methods, such as profile Hidden Markov Model (pHMM) and k-mer based methods. Nevertheless, existing methods have some limitations; k-mer based methods are not accurate enough to assign protein functions and pHMM is not fast enough to handle large number of protein sequences from numerous genome projects. Therefore, a more accurate and faster protein function prediction method is needed.\n\n\nResults\nIn this paper, we introduce DeepFam, an alignment-free method that can extract functional information directly from sequences without the need of multiple sequence alignments. In extensive experiments using the Clusters of Orthologous Groups (COGs) and G protein-coupled receptor (GPCR) dataset, DeepFam achieved better performance in terms of accuracy and runtime for predicting functions of proteins compared to the state-of-the-art methods, both alignment-free and alignment-based methods. Additionally, we showed that DeepFam has a power of capturing conserved regions to model protein families. In fact, DeepFam was able to detect conserved regions documented in the Prosite database while predicting functions of proteins. Our deep learning method will be useful in characterizing functions of the ever increasing protein sequences.\n\n\nAvailability and implementation\nCodes are available at https://bhi-kimlab.github.io/DeepFam.",
"title": ""
},
{
"docid": "e1336d3d403f416c3899abf7386122d9",
"text": "Artificial synaptic devices have attracted a broad interest for hardware implementation of brain-inspired neuromorphic systems. In this letter, a short-term plasticity simulation in an indium-gallium-zinc oxide (IGZO) electric-double-layer (EDL) transistor is investigated. For synaptic facilitation and depression function emulation, three-terminal EDL transistor is reduced to a two-terminal synaptic device with two modified connection schemes. Furthermore, high-pass and low-pass filtering characteristics are also successfully emulated not only for fixed-rate spike train but also for Poisson-like spike train. Our results suggest that IGZO-based EDL transistors operated in two terminal mode can be used as the building blocks for brain-like chips and neuromorphic systems.",
"title": ""
},
{
"docid": "1f15775000a1837cfc168a91c4c1a2ae",
"text": "In the recent aging society, studies on health care services have been actively conducted to provide quality services to medical consumers in wire and wireless environments. However, there are some problems in these health care services due to the lack of personalized service and the uniformed way in services. For solving these issues, studies on customized services in medical markets have been processed. However, because a diet recommendation service is only focused on the personal disease information, it is difficult to provide specific customized services to users. This study provides a customized diet recommendation service for preventing and managing coronary heart disease in health care services. This service provides a customized diet to customers by considering the basic information, vital sign, family history of diseases, food preferences according to seasons and intakes for the customers who are concerning about the coronary heart disease. The users who receive this service can use a customized diet service differed from the conventional service and that supports continuous services and helps changes in customers living habits.",
"title": ""
},
{
"docid": "6bafdd357ad44debeda78d911a69da90",
"text": "We present a framework to tackle combinatorial optimization problems using neural networks and reinforcement learning. We focus on the traveling salesman problem (TSP) and train a recurrent neural network that, given a set of city coordinates, predicts a distribution over different city permutations. Using negative tour length as the reward signal, we optimize the parameters of the recurrent neural network using a policy gradient method. Without much engineering and heuristic designing, Neural Combinatorial Optimization achieves close to optimal results on 2D Euclidean graphs with up to 100 nodes. These results, albeit still quite far from state-of-the-art, give insights into how neural networks can be used as a general tool for tackling combinatorial optimization problems.",
"title": ""
},
{
"docid": "6b5599f9041ca5dab06620ce9ee9e2fb",
"text": "Poor nutrition can lead to reduced immunity, increased susceptibility to disease, impaired physical and mental development, and reduced productivity. A conversational agent can support people as a virtual coach, however building such systems still have its associated challenges and limitations. This paper describes the background and motivation for chatbot systems in the context of healthy nutrition recommendation. We discuss current challenges associated with chatbot application, we tackled technical, theoretical, behavioural, and social aspects of the challenges. We then propose a pipeline to be used as guidelines by developers to implement theoretically and technically robust chatbot systems. Keywords-Health, Conversational agent, Recommender systems, HCI, Behaviour Change, Artificial intelligence",
"title": ""
},
{
"docid": "1d9b1ce73d8d2421092bb5a70016a142",
"text": "Social networks have the surprising property of being \"searchable\": Ordinary people are capable of directing messages through their network of acquaintances to reach a specific but distant target person in only a few steps. We present a model that offers an explanation of social network searchability in terms of recognizable personal identities: sets of characteristics measured along a number of social dimensions. Our model defines a class of searchable networks and a method for searching them that may be applicable to many network search problems, including the location of data files in peer-to-peer networks, pages on the World Wide Web, and information in distributed databases.",
"title": ""
},
{
"docid": "068a8ad7161ed2c4af5e5c3208c35c00",
"text": "Two field studies and a laboratory study examined the influence of reward for high performance on experienced performance pressure, intrinsic interest and creativity. Study 1 found that employees’ expected reward for high performance was positively related to performance pressure which, in turn, was positively associated with the employees’ interest in their jobs. Study 2 replicated this finding and showed that intrinsic interest, produced by performance pressure, was positively related to supervisors’ ratings of creative performance. Study 3 found that college students’ receipt of reward for high performance increased their experienced performance pressure which, in turn, was positively related to intrinsic interest and creativity. Copyright # 2008 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "6f19b45fbbe4385f86e345d4f5de2219",
"text": "Objective To evaluate ten-year survival and clinical performance of resin-based composite restorations placed at increased vertical dimension as a 'Dahl' type appliance to manage localised anterior tooth wear.Design A prospective survival analysis of restorations provided at a single centre.Setting UK NHS hospital and postgraduate institute.Methods The clinical performance of 283 composite resin restorations on 26 patients with localised anterior tooth wear was reviewed after a ten year follow-up period. The study used modified United States Public Health Service (USPHS) criteria for assessing the restorations. Survival of the restorations was analysed using Kaplan-Meier survival curves, the log-rank test, and the Cox proportional hazards regression analysis.Results The results indicated that the median survival time for composite resin restorations was 5.8 years and 4.75 years for replacement restorations when all types of failure were considered. The restorations commonly failed as a result of wear, fracture and marginal discoloration. The factors that significantly influenced the survival of these restorations were the incisal relationship, aetiology, material used, and the nature of opposing dentition. The biological complications associated with this treatment regime were rare. Patient satisfaction remained high despite the long term deterioration of the restorations.Conclusion With some degree of maintenance, repeated use of composite resin restorations to treat localised anterior tooth wear at an increased occlusal vertical dimension is a viable treatment option over a ten-year period.",
"title": ""
},
{
"docid": "db76ba085f43bc826f103c6dd4e2ebb5",
"text": "It has been shown that Chinese poems can be successfully generated by sequence-to-sequence neural models, particularly with the attention mechanism. A potential problem of this approach, however, is that neural models can only learn abstract rules, while poem generation is a highly creative process that involves not only rules but also innovations for which pure statistical models are not appropriate in principle. This work proposes a memory-augmented neural model for Chinese poem generation, where the neural model and the augmented memory work together to balance the requirements of linguistic accordance and aesthetic innovation, leading to innovative generations that are still rule-compliant. In addition, it is found that the memory mechanism provides interesting flexibility that can be used to generate poems with different styles.",
"title": ""
},
{
"docid": "f4ebbcebefbcc1ba8b6f8e5bf6096645",
"text": "With advances in wireless communication technology, more and more people depend heavily on portable mobile devices for businesses, entertainments and social interactions. Although such portable mobile devices can offer various promising applications, their computing resources remain limited due to their portable size. This however can be overcome by remotely executing computation-intensive tasks on clusters of near by computers known as cloudlets. As increasing numbers of people access the Internet via mobile devices, it is reasonable to envision in the near future that cloudlet services will be available for the public through easily accessible public wireless metropolitan area networks (WMANs). However, the outdated notion of treating cloudlets as isolated data-centers-in-a-box must be discarded as there are clear benefits to connecting multiple cloudlets together to form a network. In this paper we investigate how to balance the workload between multiple cloudlets in a network to optimize mobile application performance. We first introduce a system model to capture the response times of offloaded tasks, and formulate a novel optimization problem, that is to find an optimal redirection of tasks between cloudlets such that the maximum of the average response times of tasks at cloudlets is minimized. We then propose a fast, scalable algorithm for the problem. We finally evaluate the performance of the proposed algorithm through experimental simulations. The experimental results demonstrate the significant potential of the proposed algorithm in reducing the response times of tasks.",
"title": ""
},
{
"docid": "4f560deecd54c9b809ce1a1e04512926",
"text": "BACKGROUND\nNurses in Sweden have a high absence due to illness and many retire before the age of sixty. Factors at work as well as in private life may contribute to health problems. To maintain a healthy work-force there is a need for actions on work-life balance in a salutogenic perspective. The aim of this study was to explore perceptions of resources in everyday life to balance work and private life among nurses in home help service.\n\n\nMETHODS\nThirteen semi-structured individual interviews and two focus group interviews were conducted with home help service nurses in Sweden. A qualitative content analysis was used for the analyses.\n\n\nRESULT\nIn the analyses, six themes of perceptions of recourses in everyday life emerged; (i) Reflecting on life. (ii) Being healthy and taking care of yourself. (iii) Having a meaningful job and a supportive work climate. (iv) Working shifts and part time. (v) Having a family and a supporting network. (vi) Making your home your castle.\n\n\nCONCLUSIONS\nThe result points out the complexity of work-life balance and support that the need for nurses to balance everyday life differs during different phases and transitions in life. In this salutogenic study, the result differs from studies with a pathogenic approach. Shift work and part time work were seen as two resources that contributed to flexibility and a prerequisite to work-life balance. To have time and energy for both private life and work was seen as essential. To reflect on and discuss life gave inner strength to set boundaries and to prioritize both in private life and in work life. Managers in nursing contexts have a great challenge to maintain and strengthen resources which enhance the work-life balance and health of nurses. Salutogenic research is needed to gain an understanding of resources that enhance work-life balance and health in nursing contexts.",
"title": ""
},
{
"docid": "dfb979060d5a1b8b7f5ff59957aa6b8e",
"text": "The present investigation provided a theoretically-driven analysis testing whether body shame helped account for the predicted positive associations between explicit weight bias in the form of possessing anti-fat attitudes (i.e., dislike, fear of fat, and willpower beliefs) and engaging in fat talk among 309 weight-diverse college women. We also evaluated whether self-compassion served as a protective factor in these relationships. Robust non-parametric bootstrap resampling procedures adjusted for body mass index (BMI) revealed stronger indirect and conditional indirect effects for dislike and fear of fat attitudes and weaker, marginal effects for the models inclusive of willpower beliefs. In general, the indirect effect of anti-fat attitudes on fat talk via body shame declined with increasing levels of self-compassion. Our preliminary findings may point to useful process variables to target in mitigating the impact of endorsing anti-fat prejudice on fat talk in college women and may help clarify who is at higher risk.",
"title": ""
},
{
"docid": "00cdaa724f262211919d4c7fc5bb0442",
"text": "With Tor being a popular anonymity network, many attacks have been proposed to break its anonymity or leak information of a private communication on Tor. However, guaranteeing complete privacy in the face of an adversary on Tor is especially difficult because Tor relays are under complete control of world-wide volunteers. Currently, one can gain private information, such as circuit identifiers and hidden service identifiers, by running Tor relays and can even modify their behaviors with malicious intent. This paper presents a practical approach to effectively enhancing the security and privacy of Tor by utilizing Intel SGX, a commodity trusted execution environment. We present a design and implementation of Tor, called SGX-Tor, that prevents code modification and limits the information exposed to untrusted parties. We demonstrate that our approach is practical and effectively reduces the power of an adversary to a traditional network-level adversary. Finally, SGX-Tor incurs moderate performance overhead; the end-to-end latency and throughput overheads for HTTP connections are 3.9% and 11.9%, respectively.",
"title": ""
},
{
"docid": "d35bc5ef2ea3ce24bbba87f65ae93a25",
"text": "Fog computing, complementary to cloud computing, has recently emerged as a new paradigm that extends the computing infrastructure from the center to the edge of the network. This article explores the design of a fog computing orchestration framework to support IoT applications. In particular, we focus on how the widely adopted cloud computing orchestration framework can be customized to fog computing systems. We first identify the major challenges in this procedure that arise due to the distinct features of fog computing. Then we discuss the necessary adaptations of the orchestration framework to accommodate these challenges.",
"title": ""
},
{
"docid": "0ac679740e0e3911af04be9464f76a7d",
"text": "Max-Min Fairness is a flexible resource allocation mechanism used in most datacenter schedulers. However, an increasing number of jobs have hard placement constraints, restricting the machines they can run on due to special hardware or software requirements. It is unclear how to define, and achieve, max-min fairness in the presence of such constraints. We propose Constrained Max-Min Fairness (CMMF), an extension to max-min fairness that supports placement constraints, and show that it is the only policy satisfying an important property that incentivizes users to pool resources. Optimally computing CMMF is challenging, but we show that a remarkably simple online scheduler, called Choosy, approximates the optimal scheduler well. Through experiments, analysis, and simulations, we show that Choosy on average differs 2% from the optimal CMMF allocation, and lets jobs achieve their fair share quickly.",
"title": ""
},
{
"docid": "edcf1cb4d09e0da19c917eab9eab3b23",
"text": "The paper describes a computerized process of myocardial perfusion diagnosis from cardiac single proton emission computed tomography (SPECT) images using data mining and knowledge discovery approach. We use a six-step knowledge discovery process. A database consisting of 267 cleaned patient SPECT images (about 3000 2D images), accompanied by clinical information and physician interpretation was created first. Then, a new user-friendly algorithm for computerizing the diagnostic process was designed and implemented. SPECT images were processed to extract a set of features, and then explicit rules were generated, using inductive machine learning and heuristic approaches to mimic cardiologist's diagnosis. The system is able to provide a set of computer diagnoses for cardiac SPECT studies, and can be used as a diagnostic tool by a cardiologist. The achieved results are encouraging because of the high correctness of diagnoses.",
"title": ""
},
{
"docid": "622d11d4eefeacbed785ee6fcc14b69b",
"text": "n our nursing program, we require a transcript for every course taken at any university or college, and it is always frustrating when we have to wait for copies to arrive before making our decisions. To be honest, if a candidate took Religion 101 at a community college and later transferred to the BSN program, I would be willing to pass on the community college transcript, but the admissions office is less flexible. And, although we used to be able to ask the student to have another copy sent if we did not have a transcript in the file, we now must wait for the student to have the college upload the transcript into an admissions system andwait for verification. I can assure you, most nurses, like other students today, take a lot of courses across many colleges without getting a degree. I sometimes have as many as 10 transcripts to review. When I saw an article titled “Blockchain: Letting Students Own Their Credentials” (Schaffnauser, 2017), I was therefore intrigued. I had already heard of blockchain as a tool to take the middleman out of the loop when doing financial transactions with Bitcoin. Now the thought of students owning their own credentials got me thinking about the movement toward new forms of credentialing from professional organizations (e.g., badges, certification documents). Hence, my decision to explore blockchain and its potential. Let’s start with some definitions. Simply put, blockchain is a distributed digital ledger. Technically speaking, it is “a peer-to-peer (P2P) distributed ledger technology for a new generation of transactional applications that establishes transparency and trust” (Linn & Koo, n.d.). Watter (2016) noted that “the blockchain is a distributed database that provides an unalterable, (semi-) public record of digital transactions. Each block aggregates a timestamped batch of transactions to be included in the ledger — or rather, in the blockchain. Each block is identified by a cryptographic signature. The blockchain contains an un-editable record of all the transactions made.” If we take this apart, here is what we have: a database that is distributed to computers associated with members of the network. Thus, rather than trying to access one central database, all members have copies of the database. Each time a transaction occurs, it is placed in a block that is given a time stamp and is “digitally signed using public key cryptography — which uses both a public and private key” (Watter, 2016). Locks are then connected so there is a historical record and they cannot be altered. According to Lin and Koo",
"title": ""
},
{
"docid": "3415fb5e9b994d6015a17327fc0fe4f4",
"text": "A human stress monitoring patch integrates three sensors of skin temperature, skin conductance, and pulsewave in the size of stamp (25 mm × 15 mm × 72 μm) in order to enhance wearing comfort with small skin contact area and high flexibility. The skin contact area is minimized through the invention of an integrated multi-layer structure and the associated microfabrication process; thus being reduced to 1/125 of that of the conventional single-layer multiple sensors. The patch flexibility is increased mainly by the development of flexible pulsewave sensor, made of a flexible piezoelectric membrane supported by a perforated polyimide membrane. In the human physiological range, the fabricated stress patch measures skin temperature with the sensitivity of 0.31 Ω/°C, skin conductance with the sensitivity of 0.28 μV/0.02 μS, and pulse wave with the response time of 70 msec. The skin-attachable stress patch, capable to detect multimodal bio-signals, shows potential for application to wearable emotion monitoring.",
"title": ""
},
{
"docid": "b7c9e2900423a0cd7cc21c3aa95ca028",
"text": "In this article, the state of the art of research on emotion work (emotional labor) is summarized with an emphasis on its effects on well-being. It starts with a definition of what emotional labor or emotion work is. Aspects of emotion work, such as automatic emotion regulation, surface acting, and deep acting, are discussed from an action theory point of view. Empirical studies so far show that emotion work has both positive and negative effects on health. Negative effects were found for emotional dissonance. Concepts related to the frequency of emotion expression and the requirement to be sensitive to the emotions of others had both positive and negative effects. Control and social support moderate relations between emotion work variables and burnout and job satisfaction. Moreover, there is empirical evidence that the cooccurrence of emotion work and organizational problems leads to high levels of burnout. D 2002 Published by Elsevier Science Inc.",
"title": ""
},
{
"docid": "bc72b7e2a2b151d9396cd9e51c049e9a",
"text": "Low resourced languages suffer from limited training data and resources. Data augmentation is a common approach to increasing the amount of training data. Additional data is synthesized by manipulating the original data with a variety of methods. Unlike most previous work that focuses on a single technique, we combine multiple, complementary augmentation approaches. The first stage adds noise and perturbs the speed of additional copies of the original audio. The data is further augmented in a second stage, where a novel fMLLR-based augmentation is applied to bottleneck features to further improve performance. A reduction in word error rate is demonstrated on four languages from the IARPA Babel program. We present an analysis exploring why these techniques are beneficial.",
"title": ""
}
] | scidocsrr |
fb1dac0bee58d622f78bb84c1f832af7 | Association between online social networking and depression in high school students: behavioral physiology viewpoint. | [
{
"docid": "89c9ad792245fc7f7e7e3b00c1e8147a",
"text": "Contrasting hypotheses were posed to test the effect of Facebook exposure on self-esteem. Objective Self-Awareness (OSA) from social psychology and the Hyperpersonal Model from computer-mediated communication were used to argue that Facebook would either diminish or enhance self-esteem respectively. The results revealed that, in contrast to previous work on OSA, becoming self-aware by viewing one's own Facebook profile enhances self-esteem rather than diminishes it. Participants that updated their profiles and viewed their own profiles during the experiment also reported greater self-esteem, which lends additional support to the Hyperpersonal Model. These findings suggest that selective self-presentation in digital media, which leads to intensified relationship formation, also influences impressions of the self.",
"title": ""
}
] | [
{
"docid": "20a90ed3aa2b428b19e85aceddadce90",
"text": "Deep learning has been a groundbreaking technology in various fields as well as in communications systems. In spite of the notable advancements of deep neural network (DNN) based technologies in recent years, the high computational complexity has been a major obstacle to apply DNN in practical communications systems which require real-time operation. In this sense, challenges regarding practical implementation must be addressed before the proliferation of DNN-based intelligent communications becomes a reality. To the best of the authors’ knowledge, for the first time, this article presents an efficient learning architecture and design strategies including link level verification through digital circuit implementations using hardware description language (HDL) to mitigate this challenge and to deduce feasibility and potential of DNN for communications systems. In particular, DNN is applied for an encoder and a decoder to enable flexible adaptation with respect to the system environments without needing any domain specific information. Extensive investigations and interdisciplinary design considerations including the DNN-based autoencoder structure, learning framework, and low-complexity digital circuit implementations for real-time operation are taken into account by the authors which ascertains the use of DNN-based communications in practice.",
"title": ""
},
{
"docid": "f82ce890d66c746a169a38fdad702749",
"text": "The following review paper presents an overview of the current crop yield forecasting methods and early warning systems for the global strategy to improve agricultural and rural statistics across the globe. Different sections describing simulation models, remote sensing, yield gap analysis, and methods to yield forecasting compose the manuscript. 1. Rationale Sustainable land management for crop production is a hierarchy of systems operating in— and interacting with—economic, ecological, social, and political components of the Earth. This hierarchy ranges from a field managed by a single farmer to regional, national, and global scales where policies and decisions influence crop production, resource use, economics, and ecosystems at other levels. Because sustainability concepts must integrate these diverse issues, agricultural researchers who wish to develop sustainable productive systems and policy makers who attempt to influence agricultural production are confronted with many challenges. A multiplicity of problems can prevent production systems from being sustainable; on the other hand, with sufficient attention to indicators of sustainability, a number of practices and policies could be implemented to accelerate progress. Indicators to quantify changes in crop production systems over time at different hierarchical levels are needed for evaluating the sustainability of different land management strategies. To develop and test sustainability concepts and yield forecast methods globally, it requires the implementation of long-term crop and soil management experiments that include measurements of crop yields, soil properties, biogeochemical fluxes, and relevant socioeconomic indicators. Long-term field experiments cannot be conducted with sufficient detail in space and time to find the best land management practices suitable for sustainable crop production. Crop and soil simulation models, when suitably tested in reasonably diverse space and time, provide a critical tool for finding combinations of management strategies to reach multiple goals required for sustainable crop production. The models can help provide land managers and policy makers with a tool to extrapolate experimental results from one location to others where there is a lack of response information. Agricultural production is significantly affected by environmental factors. Weather influences crop growth and development, causing large intra-seasonal yield variability. In addition, spatial variability of soil properties, interacting with the weather, cause spatial yield variability. Crop agronomic management (e.g. planting, fertilizer application, irrigation, tillage, and so on) can be used to offset the loss in yield due to effects of weather. As a result, yield forecasting represents an important tool for optimizing crop yield and to evaluate the crop-area insurance …",
"title": ""
},
{
"docid": "f6deeee48e0c8f1ed1d922093080d702",
"text": "Foreword: The ACM SIGCHI (Association for Computing Machinery Special Interest Group in Computer Human Interaction) community conducted a deliberative process involving a high-visibility committee, a day-long workshop at CHI99 (Pittsburgh, PA, May 15, 1999) and a collaborative authoring process. This interim report is offered to produce further discussion and input leading to endorsement by the SIGCHI Executive Committee and then other professional societies. The scope of this research agenda included advanced information and communications technology research that could yield benefits over the next two to five years.",
"title": ""
},
{
"docid": "001104ca832b10553b28bbd713e6cbd5",
"text": "In this paper we present a tracker, which is radically different from state-of-the-art trackers: we apply no model updating, no occlusion detection, no combination of trackers, no geometric matching, and still deliver state-of-the-art tracking performance, as demonstrated on the popular online tracking benchmark (OTB) and six very challenging YouTube videos. The presented tracker simply matches the initial patch of the target in the first frame with candidates in a new frame and returns the most similar patch by a learned matching function. The strength of the matching function comes from being extensively trained generically, i.e., without any data of the target, using a Siamese deep neural network, which we design for tracking. Once learned, the matching function is used as is, without any adapting, to track previously unseen targets. It turns out that the learned matching function is so powerful that a simple tracker built upon it, coined Siamese INstance search Tracker, SINT, which only uses the original observation of the target from the first frame, suffices to reach state-of-the-art performance. Further, we show the proposed tracker even allows for target re-identification after the target was absent for a complete video shot.",
"title": ""
},
{
"docid": "e82cd7c22668b0c9ed62b4afdf49d1f4",
"text": "This paper presents a tutorial on delta-sigma fractional-N PLLs for frequency synthesis. The presentation assumes the reader has a working knowledge of integer-N PLLs. It builds on this knowledge by introducing the additional concepts required to understand ΔΣ fractional-N PLLs. After explaining the limitations of integerN PLLs with respect to tuning resolution, the paper introduces the delta-sigma fractional-N PLL as a means of avoiding these limitations. It then presents a selfcontained explanation of the relevant aspects of deltasigma modulation, an extension of the well known integerN PLL linearized model to delta-sigma fractional-N PLLs, a design example, and techniques for wideband digital modulation of the VCO within a delta-sigma fractional-N PLL.",
"title": ""
},
{
"docid": "e095b0d96a6c0dcc87efbbc3e730b581",
"text": "In this paper, we present ObSteiner, an exact algorithm for the construction of obstacle-avoiding rectilinear Steiner minimum trees (OARSMTs) among complex rectilinear obstacles. This is the first paper to propose a geometric approach to optimally solve the OARSMT problem among complex obstacles. The optimal solution is constructed by the concatenation of full Steiner trees among complex obstacles, which are proven to be of simple structures in this paper. ObSteiner is able to handle complex obstacles, including both convex and concave ones. Benchmarks with hundreds of terminals among a large number of obstacles are solved optimally in a reasonable amount of time.",
"title": ""
},
{
"docid": "c05b6720cdfdf6170ccce6486d485dc0",
"text": "The naturalness of warps is gaining extensive attention in image stitching. Recent warps, such as SPHP and AANAP, use global similarity warps to mitigate projective distortion (which enlarges regions); however, they necessarily bring in perspective distortion (which generates inconsistencies). In this paper, we propose a novel quasi-homography warp, which effectively balances the perspective distortion against the projective distortion in the non-overlapping region to create a more natural-looking panorama. Our approach formulates the warp as the solution of a bivariate system, where perspective distortion and projective distortion are characterized as slope preservation and scale linearization, respectively. Because our proposed warp only relies on a global homography, it is thus totally parameter free. A comprehensive experiment shows that a quasi-homography warp outperforms some state-of-the-art warps in urban scenes, including homography, AutoStitch and SPHP. A user study demonstrates that it wins most users’ favor, compared to homography and SPHP.",
"title": ""
},
{
"docid": "046245929e709ef2935c9413619ab3d7",
"text": "In recent years, there has been a growing intensity of competition in virtually all areas of business in both markets upstream for raw materials such as components, supplies, capital and technology and markets downstream for consumer goods and services. This paper examines the relationships among generic strategy, competitive advantage, and organizational performance. Firstly, the nature of generic strategies, competitive advantage, and organizational performance is examined. Secondly, the relationship between generic strategies and competitive advantage is analyzed. Finally, the implications of generic strategies, organizational performance, performance measures and competitive advantage are studied. This study focuses on: (i) the relationship of generic strategy and organisational performance in Australian manufacturing companies participating in the “Best Practice Program in Australia”, (ii) the relationship between generic strategies and competitive advantage, and (iii) the relationship among generic strategies, competitive advantage and organisational performance. 1999 Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "4d1eae0f247f1c2db9e3c544a65c041f",
"text": "This papers presents a new system using circular markers to estimate the pose of a camera. Contrary to most markersbased systems using square markers, we advocate the use of circular markers, as we believe that they are easier to detect and provide a pose estimate that is more robust to noise. Unlike existing systems using circular markers, our method computes the exact pose from one single circular marker, and do not need specific points being explicitly shown on the marker (like center, or axes orientation). Indeed, the center and orientation is encoded directly in the marker’s code. We can thus use the entire marker surface for the code design. After solving the back projection problem for one conic correspondence, we end up with two possible poses. We show how to find the marker’s code, rotation and final pose in one single step, by using a pyramidal cross-correlation optimizer. The marker tracker runs at 100 frames/second on a desktop PC and 30 frames/second on a hand-held UMPC.",
"title": ""
},
{
"docid": "38e6384522c9e3e961819ed5b00a7697",
"text": "Cloud gaming has been recognized as a promising shift in the online game industry, with the aim of implementing the “on demand” service concept that has achieved market success in other areas of digital entertainment such as movies and TV shows. The concepts of cloud computing are leveraged to render the game scene as a video stream that is then delivered to players in real-time. The main advantage of this approach is the capability of delivering high-quality graphics games to any type of end user device; however, at the cost of high bandwidth consumption and strict latency requirements. A key challenge faced by cloud game providers lies in configuring the video encoding parameters so as to maximize player Quality of Experience (QoE) while meeting bandwidth availability constraints. In this article, we tackle one aspect of this problem by addressing the following research question: Is it possible to improve service adaptation based on information about the characteristics of the game being streamed? To answer this question, two main challenges need to be addressed: the need for different QoE-driven video encoding (re-)configuration strategies for different categories of games, and how to determine a relevant game categorization to be used for assigning appropriate configuration strategies. We investigate these problems by conducting two subjective laboratory studies with a total of 80 players and three different games. Results indicate that different strategies should likely be applied for different types of games, and show that existing game classifications are not necessarily suitable for differentiating game types in this context. We thus further analyze objective video metrics of collected game play video traces as well as player actions per minute and use this as input data for clustering of games into two clusters. Subjective results verify that different video encoding configuration strategies may be applied to games belonging to different clusters.",
"title": ""
},
{
"docid": "93ea7c59bad8181b0379f39e00f4d2e8",
"text": "Breadth-First Search (BFS) is a key graph algorithm with many important applications. In this work, we focus on a special class of graph traversal algorithm - concurrent BFS - where multiple breadth-first traversals are performed simultaneously on the same graph. We have designed and developed a new approach called iBFS that is able to run i concurrent BFSes from i distinct source vertices, very efficiently on Graphics Processing Units (GPUs). iBFS consists of three novel designs. First, iBFS develops a single GPU kernel for joint traversal of concurrent BFS to take advantage of shared frontiers across different instances. Second, outdegree-based GroupBy rules enables iBFS to selectively run a group of BFS instances which further maximizes the frontier sharing within such a group. Third, iBFS brings additional performance benefit by utilizing highly optimized bitwise operations on GPUs, which allows a single GPU thread to inspect a vertex for concurrent BFS instances. The evaluation on a wide spectrum of graph benchmarks shows that iBFS on one GPU runs up to 30x faster than executing BFS instances sequentially, and on 112 GPUs achieves near linear speedup with the maximum performance of 57,267 billion traversed edges per second (TEPS).",
"title": ""
},
{
"docid": "a0566ac90d164db763c7efa977d4bc0d",
"text": "Dead-time controls for synchronous buck converter are challenging due to the difficulties in accurate sensing and processing the on/off dead-time errors. For the control of dead-times, an integral feedback control using switched capacitors and a fast timing sensing circuit composed of MOSFET differential amplifiers and switched current sources are proposed. Experiments for a 3.3 V input, 1.5 V-0.3 A output converter demonstrated 1.3 ~ 4.6% efficiency improvement over a wide load current range.",
"title": ""
},
{
"docid": "ce5ede79daee56d50f5b086ad8f18a28",
"text": "In order to improve the efficiency and classification ability of Support vector machines (SVM) based on stochastic gradient descent algorithm, three algorithms of improved stochastic gradient descent (SGD) are used to solve support vector machine, which are Momentum, Nesterov accelerated gradient (NAG), RMSprop. The experimental results show that the algorithm based on RMSprop for solving the linear support vector machine has faster convergence speed and higher testing precision on five datasets (Alpha, Gamma, Delta, Mnist, Usps).",
"title": ""
},
{
"docid": "dd732081865bb209276acd3bb76ee08f",
"text": "A 57-64-GHz low phase-error 5-bit switch-type phase shifter integrated with a low phase-variation variable gain amplifier (VGA) is implemented through TSMC 90-nm CMOS low-power technology. Using the phase compensation technique, the proposed VGA can provide appropriate gain tuning with almost constant phase characteristics, thus greatly reducing the phase-tuning complexity in a phased-array system. The measured root mean square (rms) phase error of the 5-bit phase shifter is 2° at 62 GHz. The phase shifter has a low group-delay deviation (phase distortion) of +/- 8.5 ps and an excellent insertion loss flatness of ±0.8 dB for a specific phase-shifting state, across 57-64 GHz. For all 32 states, the insertion loss is 14.6 ± 3 dB, including pad loss at 60 GHz. For the integrated phase shifter and VGA, the VGA can provide 6.2-dB gain tuning range, which is wide enough to cover the loss variation of the phase shifter, with only 1.86° phase variation. The measured rms phase error of the 5-bit phase shifter and VGA is 3.8° at 63 GHz. The insertion loss of all 32 states is 5.4 dB, including pad loss at 60 GHz, and the loss flatness is ±0.8 dB over 57-64 GHz. To the best of our knowledge, the 5-bit phase shifter presents the best rms phase error at center frequency among the V-band switch-type phase shifter.",
"title": ""
},
{
"docid": "646a1e7c1a71dc89fa92d76a19c7389e",
"text": "As modern GPUs rely partly on their on-chip memories to counter the imminent off-chip memory wall, the efficient use of their caches has become important for performance and energy. However, optimising cache locality system-atically requires insight into and prediction of cache behaviour. On sequential processors, stack distance or reuse distance theory is a well-known means to model cache behaviour. However, it is not straightforward to apply this theory to GPUs, mainly because of the parallel execution model and fine-grained multi-threading. This work extends reuse distance to GPUs by modelling: (1) the GPU's hierarchy of threads, warps, threadblocks, and sets of active threads, (2) conditional and non-uniform latencies, (3) cache associativity, (4) miss-status holding-registers, and (5) warp divergence. We implement the model in C++ and extend the Ocelot GPU emulator to extract lists of memory addresses. We compare our model with measured cache miss rates for the Parboil and PolyBench/GPU benchmark suites, showing a mean absolute error of 6% and 8% for two cache configurations. We show that our model is faster and even more accurate compared to the GPGPU-Sim simulator.",
"title": ""
},
{
"docid": "ec772eccaa45eb860582820e751f3415",
"text": "Navigational assistance aims to help visually-impaired people to ambulate the environment safely and independently. This topic becomes challenging as it requires detecting a wide variety of scenes to provide higher level assistive awareness. Vision-based technologies with monocular detectors or depth sensors have sprung up within several years of research. These separate approaches have achieved remarkable results with relatively low processing time and have improved the mobility of impaired people to a large extent. However, running all detectors jointly increases the latency and burdens the computational resources. In this paper, we put forward seizing pixel-wise semantic segmentation to cover navigation-related perception needs in a unified way. This is critical not only for the terrain awareness regarding traversable areas, sidewalks, stairs and water hazards, but also for the avoidance of short-range obstacles, fast-approaching pedestrians and vehicles. The core of our unification proposal is a deep architecture, aimed at attaining efficient semantic understanding. We have integrated the approach in a wearable navigation system by incorporating robust depth segmentation. A comprehensive set of experiments prove the qualified accuracy over state-of-the-art methods while maintaining real-time speed. We also present a closed-loop field test involving real visually-impaired users, demonstrating the effectivity and versatility of the assistive framework.",
"title": ""
},
{
"docid": "db61ab44bfb0e7eddf2959121a79a2ee",
"text": "This paper analyzes the supply and demand for Bitcoinbased Ponzi schemes. There are a variety of these types of scams: from long cons such as Bitcoin Savings & Loans to overnight doubling schemes that do not take off. We investigate what makes some Ponzi schemes successful and others less so. By scouring 11 424 threads on bitcointalk. org, we identify 1 780 distinct scams. Of these, half lasted a week or less. Using survival analysis, we identify factors that affect scam persistence. One approach that appears to elongate the life of the scam is when the scammer interacts a lot with their victims, such as by posting more than a quarter of the comments in the related thread. By contrast, we also find that scams are shorter-lived when the scammers register their account on the same day that they post about their scam. Surprisingly, more daily posts by victims is associated with the scam ending sooner.",
"title": ""
},
{
"docid": "35a063ab339f32326547cc54bee334be",
"text": "We present a model for attacking various cryptographic schemes by taking advantage of random hardware faults. The model consists of a black-box containing some cryptographic secret. The box interacts with the outside world by following a cryptographic protocol. The model supposes that from time to time the box is affected by a random hardware fault causing it to output incorrect values. For example, the hardware fault flips an internal register bit at some point during the computation. We show that for many digital signature and identification schemes these incorrect outputs completely expose the secrets stored in the box. We present the following results: (1) The secret signing key used in an implementation of RSA based on the Chinese Remainder Theorem (CRT) is completely exposed from a single erroneous RSA signature, (2) for non-CRT implementations of RSA the secret key is exposed given a large number (e.g. 1000) of erroneous signatures, (3) the secret key used in Fiat—Shamir identification is exposed after a small number (e.g. 10) of faulty executions of the protocol, and (4) the secret key used in Schnorr's identification protocol is exposed after a much larger number (e.g. 10,000) of faulty executions. Our estimates for the number of necessary faults are based on standard security parameters such as a 1024-bit modulus, and a 2 -40 identification error probability. Our results demonstrate the importance of preventing errors in cryptographic computations. We conclude the paper with various methods for preventing these attacks.",
"title": ""
},
{
"docid": "6de71e8106d991d2c3d2b845a9e0a67e",
"text": "XML repositories are now a widespread means for storing and exchanging information on the Web. As these repositories become increasingly used in dynamic applications such as e-commerce, there is a rapidly growing need for a mechanism to incorporate reactive functionality in an XML setting. Event-condition-action (ECA) rules are a technology from active databases and are a natural method for supporting suchfunctionality. ECA rules can be used for activities such as automatically enforcing document constraints, maintaining repository statistics, and facilitating publish/subscribe applications. An important question associated with the use of a ECA rules is how to statically predict their run-time behaviour. In this paper, we define a language for ECA rules on XML repositories. We then investigate methods for analysing the behaviour of a set of ECA rules, a task which has added complexity in this XML setting compared with conventional active databases.",
"title": ""
},
{
"docid": "0f3d520a6d09c136816a9e0493c45db1",
"text": "Specular reflection exists widely in photography and causes the recorded color deviating from its true value, thus, fast and high quality highlight removal from a single nature image is of great importance. In spite of the progress in the past decades in highlight removal, achieving wide applicability to the large diversity of nature scenes is quite challenging. To handle this problem, we propose an analytic solution to highlight removal based on an L2 chromaticity definition and corresponding dichromatic model. Specifically, this paper derives a normalized dichromatic model for the pixels with identical diffuse color: a unit circle equation of projection coefficients in two subspaces that are orthogonal to and parallel with the illumination, respectively. In the former illumination orthogonal subspace, which is specular-free, we can conduct robust clustering with an explicit criterion to determine the cluster number adaptively. In the latter, illumination parallel subspace, a property called pure diffuse pixels distribution rule helps map each specular-influenced pixel to its diffuse component. In terms of efficiency, the proposed approach involves few complex calculation, and thus can remove highlight from high resolution images fast. Experiments show that this method is of superior performance in various challenging cases.",
"title": ""
}
] | scidocsrr |
43eb39b8a39919d4867a75fa54b29c66 | Predicting Suicidal Behavior From Longitudinal Electronic Health Records. | [
{
"docid": "1c9644fa4e259da618d5371512f1e73d",
"text": "Suicidal behavior is a leading cause of injury and death worldwide. Information about the epidemiology of such behavior is important for policy-making and prevention. The authors reviewed government data on suicide and suicidal behavior and conducted a systematic review of studies on the epidemiology of suicide published from 1997 to 2007. The authors' aims were to examine the prevalence of, trends in, and risk and protective factors for suicidal behavior in the United States and cross-nationally. The data revealed significant cross-national variability in the prevalence of suicidal behavior but consistency in age of onset, transition probabilities, and key risk factors. Suicide is more prevalent among men, whereas nonfatal suicidal behaviors are more prevalent among women and persons who are young, are unmarried, or have a psychiatric disorder. Despite an increase in the treatment of suicidal persons over the past decade, incidence rates of suicidal behavior have remained largely unchanged. Most epidemiologic research on suicidal behavior has focused on patterns and correlates of prevalence. The next generation of studies must examine synergistic effects among modifiable risk and protective factors. New studies must incorporate recent advances in survey methods and clinical assessment. Results should be used in ongoing efforts to decrease the significant loss of life caused by suicidal behavior.",
"title": ""
}
] | [
{
"docid": "eb847700cef64d89b88ff57fef9fae4b",
"text": "Software Defined Networking (SDN) is a new programmable network construction technology that enables centrally management and control, which is considered to be the future evolution trend of networks. A modularized carrier-grade SDN controller according to the characteristics of carrier-grade networks is designed and proposed, resolving the problem of controlling large-scale networks of carrier. The modularized architecture offers the system flexibility, scalability and stability. Functional logic of modules and core modules, such as link discovery module and topology module, are designed to meet the carrier's need. Static memory allocation, multi-threads technique and stick-package processing are used to improve the performance of controller, which is C programming language based. Processing logic of the communication mechanism of the controller is introduced, proving that the controller conforms to the OpenFlow specification and has a good interaction with OpenFlow-based switches. A controller cluster management system is used to interact with controllers through the east-west interface in order to manage large-scale networks. Furthermore, the effectiveness and high performance of the work in this paper has been verified by the testing using Cbench testing program. Moreover, the SDN controller we proposed has been running in China Telecom's Cloud Computing Key Laboratory, which showed the good results is achieved.",
"title": ""
},
{
"docid": "7be1f8be2c74c438b1ed1761e157d3a3",
"text": "The feeding behavior and digestive physiology of the sea cucumber, Apostichopus japonicus are not well understood. A better understanding may provide useful information for the development of the aquaculture of this species. In this article the tentacle locomotion, feeding rhythms, ingestion rate (IR), feces production rate (FPR) and digestive enzyme activities were studied in three size groups (small, medium and large) of sea cucumber under a 12h light/12h dark cycle. Frame-by-frame video analysis revealed that all size groups had similar feeding strategies using a grasping motion to pick up sediment particles. The tentacle insertion rates of the large size group were significantly faster than those of the small and medium-sized groups (P<0.05). Feeding activities investigated by charge coupled device cameras with infrared systems indicated that all size groups of sea cucumber were nocturnal and their feeding peaks occurred at 02:00-04:00. The medium and large-sized groups also had a second feeding peak during the day. Both IR and FPR in all groups were significantly higher at night than those during the daytime (P<0.05). Additionally, the peak activities of digestive enzymes were 2-4h earlier than the peak of feeding. Taken together, these results demonstrated that the light/dark cycle was a powerful environment factor that influenced biological rhythms of A. japonicus, which had the ability to optimize the digestive processes for a forthcoming ingestion.",
"title": ""
},
{
"docid": "447c008d30a6f86830d49bd74bd7a551",
"text": "OBJECTIVES\nTo investigate the effects of 24 weeks of whole-body-vibration (WBV) training on knee-extension strength and speed of movement and on counter-movement jump performance in older women.\n\n\nDESIGN\nA randomized, controlled trial.\n\n\nSETTING\nExercise Physiology and Biomechanics Laboratory, Leuven, Belgium.\n\n\nPARTICIPANTS\nEighty-nine postmenopausal women, off hormone replacement therapy, aged 58 to 74, were randomly assigned to a WBV group (n=30), a resistance-training group (RES, n=30), or a control group (n=29).\n\n\nINTERVENTION\nThe WBV group and the RES group trained three times a week for 24 weeks. The WBV group performed unloaded static and dynamic knee-extensor exercises on a vibration platform, which provokes reflexive muscle activity. The RES group trained knee-extensors by performing dynamic leg-press and leg-extension exercises increasing from low (20 repetitions maximum (RM)) to high (8RM) resistance. The control group did not participate in any training.\n\n\nMEASUREMENTS\nPre-, mid- (12 weeks), and post- (24 weeks) isometric strength and dynamic strength of knee extensors were measured using a motor-driven dynamometer. Speed of movement of knee extension was assessed using an external resistance equivalent to 1%, 20%, 40%, and 60% of isometric maximum. Counter-movement jump performance was determined using a contact mat.\n\n\nRESULTS\nIsometric and dynamic knee extensor strength increased significantly (P<.001) in the WBV group (mean+/-standard error 15.0+/-2.1% and 16.1+/-3.1%, respectively) and the RES group (18.4+/-2.8% and 13.9+/-2.7%, respectively) after 24 weeks of training, with the training effects not significantly different between the groups (P=.558). Speed of movement of knee extension significantly increased at low resistance (1% or 20% of isometric maximum) in the WBV group only (7.4+/-1.8% and 6.3+/-2.0%, respectively) after 24 weeks of training, with no significant differences in training effect between the WBV and the RES groups (P=.391; P=.142). Counter-movement jump height enhanced significantly (P<.001) in the WBV group (19.4+/-2.8%) and the RES group (12.9+/-2.9%) after 24 weeks of training. Most of the gain in knee-extension strength and speed of movement and in counter-movement jump performance had been realized after 12 weeks of training.\n\n\nCONCLUSION\nWBV is a suitable training method and is as efficient as conventional RES training to improve knee-extension strength and speed of movement and counter-movement jump performance in older women. As previously shown in young women, it is suggested that the strength gain in older women is mainly due to the vibration stimulus and not only to the unloaded exercises performed on the WBV platform.",
"title": ""
},
{
"docid": "0574f193736e10b13a22da2d9c0ee39a",
"text": "Preliminary communication In food production industry, forecasting the timing of demands is crucial in planning production scheduling to satisfy customer needs on time. In the literature, several statistical models have been used in demand forecasting in Food and Beverage (F&B) industry and the choice of the most suitable forecasting model remains a central concern. In this context, this article aims to compare the performances between Trend Analysis, Decomposition and Holt-Winters (HW) models for the prediction of a time series formed by a group of jam and sherbet product demands. Data comprised the series of monthly sales from January 2013 to December 2014 obtained from a private company. As performance measures, metric analysis of the Mean Absolute Percentage Error (MAPE) is used. In this study, the HW and Decomposition models obtained better results regarding the performance metrics.",
"title": ""
},
{
"docid": "33db7ac45c020d2a9e56227721b0be70",
"text": "This thesis proposes an extended version of the Combinatory Categorial Grammar (CCG) formalism, with the following features: 1. grammars incorporate inheritance hierarchies of lexical types, defined over a simple, feature-based constraint language 2. CCG lexicons are, or at least can be, functions from forms to these lexical types This formalism, which I refer to as ‘inheritance-driven’ CCG (I-CCG), is conceptualised as a partially model-theoretic system, involving a distinction between category descriptions and their underlying category models, with these two notions being related by logical satisfaction. I argue that the I-CCG formalism retains all the advantages of both the core CCG framework and proposed generalisations involving such things as multiset categories, unary modalities or typed feature structures. In addition, I-CCG: 1. provides non-redundant lexicons for human languages 2. captures a range of well-known implicational word order universals in terms of an acquisition-based preference for shorter grammars This thesis proceeds as follows: Chapter 2 introduces the ‘baseline’ CCG formalism, which incorporates just the essential elements of category notation, without any of the proposed extensions. Chapter 3 reviews parts of the CCG literature dealing with linguistic competence in its most general sense, showing how the formalism predicts a number of language universals in terms of either its restricted generative capacity or the prioritisation of simpler lexicons. Chapter 4 analyses the first motivation for generalising the baseline category notation, demonstrating how certain fairly simple implicational word order universals are not formally predicted by baseline CCG, although they intuitively do involve considerations of grammatical economy. Chapter 5 examines the second motivation underlying many of the customised CCG category notations — to reduce lexical redundancy, thus allowing for the construction of lexicons which assign (each sense of) open class words and morphemes to no more than one lexical category, itself denoted by a non-composite lexical type.",
"title": ""
},
{
"docid": "2313822a08269b3dd125190c4874b808",
"text": "General-purpose knowledge bases are increasingly growing in terms of depth (content) and width (coverage). Moreover, algorithms for entity linking and entity retrieval have improved tremendously in the past years. These developments give rise to a new line of research that exploits and combines these developments for the purposes of text-centric information retrieval applications. This tutorial focuses on a) how to retrieve a set of entities for an ad-hoc query, or more broadly, assessing relevance of KB elements for the information need, b) how to annotate text with such elements, and c) how to use this information to assess the relevance of text. We discuss different kinds of information available in a knowledge graph and how to leverage each most effectively.\n We start the tutorial with a brief overview of different types of knowledge bases, their structure and information contained in popular general-purpose and domain-specific knowledge bases. In particular, we focus on the representation of entity-centric information in the knowledge base through names, terms, relations, and type taxonomies. Next, we will provide a recap on ad-hoc object retrieval from knowledge graphs as well as entity linking and retrieval. This is essential technology, which the remainder of the tutorial builds on. Next we will cover essential components within successful entity linking systems, including the collection of entity name information and techniques for disambiguation with contextual entity mentions. We will present the details of four previously proposed systems that successfully leverage knowledge bases to improve ad-hoc document retrieval. These systems combine the notion of entity retrieval and semantic search on one hand, with text retrieval models and entity linking on the other. Finally, we also touch on entity aspects and links in the knowledge graph as it can help to understand the entities' context.\n This tutorial is the first to compile, summarize, and disseminate progress in this emerging area and we provide both an overview of state-of-the-art methods and outline open research problems to encourage new contributions.",
"title": ""
},
{
"docid": "537d6fdfb26e552fb3254addfbb6ac49",
"text": "We propose a unified framework for building unsupervised representations of entities and their compositions, by viewing each entity as a histogram (or distribution) over its contexts. This enables us to take advantage of optimal transport and construct representations that effectively harness the geometry of the underlying space containing the contexts. Our method captures uncertainty via modelling the entities as distributions and simultaneously provides interpretability with the optimal transport map, hence giving a novel perspective for building rich and powerful feature representations. As a guiding example, we formulate unsupervised representations for text, and demonstrate it on tasks such as sentence similarity and word entailment detection. Empirical results show strong advantages gained through the proposed framework. This approach can potentially be used for any unsupervised or supervised problem (on text or other modalities) with a co-occurrence structure, such as any sequence data. The key tools at the core of this framework are Wasserstein distances and Wasserstein barycenters, hence raising the question from our title.",
"title": ""
},
{
"docid": "4f0d34e830387947f807213599d47652",
"text": "An essential feature of large scale free graphs, such as the Web, protein-to-protein interaction, brain connectivity, and social media graphs, is that they tend to form recursive communities. The latter are densely connected vertex clusters exhibiting quick local information dissemination and processing. Under the fuzzy graph model vertices are fixed while each edge exists with a given probability according to a membership function. This paper presents Fuzzy Walktrap and Fuzzy Newman-Girvan, fuzzy versions of two established community discovery algorithms. The proposed algorithms have been applied to a synthetic graph generated by the Kronecker model with different termination criteria and the results are discussed. Keywords-Fuzzy graphs; Membership function; Community detection; Termination criteria; Walktrap algorithm; NewmanGirvan algorithm; Edge density; Kronecker model; Large graph analytics; Higher order data",
"title": ""
},
{
"docid": "ca2e577e819ac49861c65bfe8d26f5a1",
"text": "A design of a delay based self-oscillating class-D power amplifier for piezoelectric actuators is presented and modelled. First order and second order configurations are discussed in detail and analytical results reveal the stability criteria of a second order system, which should be respected in the design. It also shows if the second order system converges, it will tend to give a correct pulse modulation regarding to the input modulation index. Experimental results show the effectiveness of this design procedure. For a piezoelectric load of 400 nF, powered by a 150 V 10 kHz sinusoidal signal, a total harmonic distortion (THD) of 4.3% is obtained.",
"title": ""
},
{
"docid": "dd51cc2138760f1dcdce6e150cabda19",
"text": "Breast cancer is the most common cancer in women worldwide. The most common screening technology is mammography. To reduce the cost and workload of radiologists, we propose a computer aided detection approach for classifying and localizing calcifications and masses in mammogram images. To improve on conventional approaches, we apply deep convolutional neural networks (CNN) for automatic feature learning and classifier building. In computer-aided mammography, deep CNN classifiers cannot be trained directly on full mammogram images because of the loss of image details from resizing at input layers. Instead, our classifiers are trained on labelled image patches and then adapted to work on full mammogram images for localizing the abnormalities. State-of-the-art deep convolutional neural networks are compared on their performance of classifying the abnormalities. Experimental results indicate that VGGNet receives the best overall accuracy at 92.53% in classifications. For localizing abnormalities, ResNet is selected for computing class activation maps because it is ready to be deployed without structural change or further training. Our approach demonstrates that deep convolutional neural network classifiers have remarkable localization capabilities despite no supervision on the location of abnormalities is provided.",
"title": ""
},
{
"docid": "acce5017b1138c67e24e661c1eabc185",
"text": "The main goal of the paper is to continuously enlarge the set of software building blocks that can be reused in the search and rescue domain.",
"title": ""
},
{
"docid": "a8a24c602c5f295495b7dc68c606590d",
"text": "This paper deals with the design of an AC-220-volt-mains-fed power supply for ozone generation. A power stage consisting of a buck converter to regulate the output power plus a current-fed parallel-resonant push-pull inverter to supply an ozone generator (OG) is proposed and analysed. A closed-loop operation is presented as a method to compensate variations in the AC source voltage. Inverter's step-up transformer issues and their effect on the performance of the overall circuit are also studied. The use of a UC3872 integrated circuit is proposed to control both the push-pull inverter and the buck converter, as well as to provide the possibility to protect the power supply in case a short circuit, an open-lamp operation or any other circumstance might occur. Implementation of a 100 W prototype and experimental results are shown and discussed.",
"title": ""
},
{
"docid": "93ed81d5244715aaaf402817aa674310",
"text": "Automatically recognized terminology is widely used for various domain-specific texts processing tasks, such as machine translation, information retrieval or ontology construction. However, there is still no agreement on which methods are best suited for particular settings and, moreover, there is no reliable comparison of already developed methods. We believe that one of the main reasons is the lack of state-of-the-art methods implementations, which are usually non-trivial to recreate. In order to address these issues, we present ATR4S, an open-source software written in Scala that comprises more than 15 methods for automatic terminology recognition (ATR) and implements the whole pipeline from text document preprocessing, to term candidates collection, term candidates scoring, and finally, term candidates ranking. It is highly scalable, modular and configurable tool with support of automatic caching. We also compare 13 state-of-the-art methods on 7 open datasets by average precision and processing time. Experimental comparison reveals that no single method demonstrates best average precision for all datasets and that other available tools for ATR do not contain the best methods.",
"title": ""
},
{
"docid": "40cf1e5ecb0e79f466c65f8eaff77cb2",
"text": "Spiral patterns on the surface of a sphere have been seen in laboratory experiments and in numerical simulations of reaction–diffusion equations and convection. We classify the possible symmetries of spirals on spheres, which are quite different from the planar case since spirals typically have tips at opposite points on the sphere. We concentrate on the case where the system has an additional sign-change symmetry, in which case the resulting spiral patterns do not rotate. Spiral patterns arise through a mode interaction between spherical harmonics degree l and l+1. Using the methods of equivariant bifurcation theory, possible symmetry types are determined for each l. For small values of l, the centre manifold equations are constructed and spiral solutions are found explicitly. Bifurcation diagrams are obtained showing how spiral states can appear at secondary bifurcations from primary solutions, or tertiary bifurcations. The results are consistent with numerical simulations of a model pattern-forming system.",
"title": ""
},
{
"docid": "a354b6c03cadf539ccd01a247447ebc1",
"text": "In the present study, we tested in vitro different parts of 35 plants used by tribals of the Similipal Biosphere Reserve (SBR, Mayurbhanj district, India) for the management of infections. From each plant, three extracts were prepared with different solvents (water, ethanol, and acetone) and tested for antimicrobial (E. coli, S. aureus, C. albicans); anthelmintic (C. elegans); and antiviral (enterovirus 71) bioactivity. In total, 35 plant species belonging to 21 families were recorded from tribes of the SBR and periphery. Of the 35 plants, eight plants (23%) showed broad-spectrum in vitro antimicrobial activity (inhibiting all three test strains), while 12 (34%) exhibited narrow spectrum activity against individual pathogens (seven as anti-staphylococcal and five as anti-candidal). Plants such as Alangium salviifolium, Antidesma bunius, Bauhinia racemosa, Careya arborea, Caseria graveolens, Cleistanthus patulus, Colebrookea oppositifolia, Crotalaria pallida, Croton roxburghii, Holarrhena pubescens, Hypericum gaitii, Macaranga peltata, Protium serratum, Rubus ellipticus, and Suregada multiflora showed strong antibacterial effects, whilst Alstonia scholaris, Butea monosperma, C. arborea, C. pallida, Diospyros malbarica, Gmelina arborea, H. pubescens, M. peltata, P. serratum, Pterospermum acerifolium, R. ellipticus, and S. multiflora demonstrated strong antifungal activity. Plants such as A. salviifolium, A. bunius, Aporosa octandra, Barringtonia acutangula, C. graveolens, C. pallida, C. patulus, G. arborea, H. pubescens, H. gaitii, Lannea coromandelica, M. peltata, Melastoma malabathricum, Millettia extensa, Nyctanthes arbor-tristis, P. serratum, P. acerifolium, R. ellipticus, S. multiflora, Symplocos cochinchinensis, Ventilago maderaspatana, and Wrightia arborea inhibit survival of C. elegans and could be a potential source for anthelmintic activity. Additionally, plants such as A. bunius, C. graveolens, C. patulus, C. oppositifolia, H. gaitii, M. extensa, P. serratum, R. ellipticus, and V. maderaspatana showed anti-enteroviral activity. Most of the plants, whose traditional use as anti-infective agents by the tribals was well supported, show in vitro inhibitory activity against an enterovirus, bacteria (E. coil, S. aureus), a fungus (C. albicans), or a nematode (C. elegans).",
"title": ""
},
{
"docid": "30c6829427aaa8d23989afcd666372f7",
"text": "We developed an optimizing compiler for intrusion detection rules popularized by an open-source Snort Network Intrusion Detection System (www.snort.org). While Snort and Snort-like rules are usually thought of as a list of independent patterns to be tested in a sequential order, we demonstrate that common compilation techniques are directly applicable to Snort rule sets and are able to produce high-performance matching engines. SNORTRAN combines several compilation techniques, including cost-optimized decision trees, pattern matching precompilation, and string set clustering. Although all these techniques have been used before in other domain-specific languages, we believe their synthesis in SNORTRAN is original and unique. Introduction Snort [RO99] is a popular open-source Network Intrusion Detection System (NIDS). Snort is controlled by a set of pattern/action rules residing in a configuration file of a specific format. Due to Snort’s popularity, Snort-like rules are accepted by several other NIDS [FSTM, HANK]. In this paper we describe an optimizing compiler for Snort rule sets called SNORTRAN that incorporates ideas of pattern matching compilation based on cost-optimized decision trees [DKP92, KS88] with setwise string search algorithms popularized by recent research in highperformance NIDS detection engines [FV01, CC01, GJMP]. The two main design goals were performance and compatibility with the original Snort rule interpreter. The primary application area for NIDS is monitoring IP traffic inside and outside of firewalls, looking for unusual activities that can be attributed to external attacks or internal misuse. Most NIDS are designed to handle T1/partial T3 traffic, but as the number of the known vulnerabilities grows and more and more weight is given to internal misuse monitoring on high-throughput networks (100Mbps/1Gbps), it gets harder to keep up with the traffic without dropping too many packets to make detection ineffective. Throwing hardware at the problem is not always possible because of growing maintenance and support costs, let alone the fact that the problem of making multi-unit system work in realistic environment is as hard as the original performance problem. Bottlenecks of the detection process were identified by many researchers and practitioners [FV01, ND02, GJMP], and several approaches were proposed [FV01, CC01]. Our benchmarking supported the performance analysis made by M. Fisk and G. Varghese [FV01], adding some interesting findings on worst-case behavior of setwise string search algorithms in practice. Traditionally, NIDS are designed around a packet grabber (system-specific or libcap) getting traffic packets off the wire, combined with preprocessors, packet decoders, and a detection engine looking for a static set of signatures loaded from a rule file at system startup. Snort [SNORT] and",
"title": ""
},
{
"docid": "5ce00014f84277aca0a4b7dfefc01cbb",
"text": "The design of a planar dual-band wide-scan phased array is presented. The array uses novel dual-band comb-slot-loaded patch elements supporting two separate bands with a frequency ratio of 1.4:1. The antenna maintains consistent radiation patterns and incorporates a feeding configuration providing good bandwidths in both bands. The design has been experimentally validated with an X-band planar 9 × 9 array. The array supports wide-angle scanning up to a maximum of 60 ° and 50 ° at the low and high frequency bands respectively.",
"title": ""
},
{
"docid": "bd62496839434c34bcf876a581d38c37",
"text": "We present results from an experiment similar to one performed by Packard [24], in which a genetic algorithm is used to evolve cellular automata (CA) to perform a particular computational task. Packard examined the frequency of evolved CA rules as a function of Langton’s λ parameter [17], and interpreted the results of his experiment as giving evidence for the following two hypotheses: (1) CA rules able to perform complex computations are most likely to be found near “critical” λ values, which have been claimed to correlate with a phase transition between ordered and chaotic behavioral regimes for CA; (2) When CA rules are evolved to perform a complex computation, evolution will tend to select rules with λ values close to the critical values. Our experiment produced very different results, and we suggest that the interpretation of the original results is not correct. We also review and discuss issues related to λ, dynamical-behavior classes, and computation in CA. The main constructive results of our study are identifying the emergence and competition of computational strategies and analyzing the central role of symmetries in an evolutionary system. In particular, we demonstrate how symmetry breaking can impede the evolution toward higher computational capability. Santa Fe Institute, 1660 Old Pecos Trail, Suite A, Santa Fe, New Mexico, U.S.A. 87501. Email: [email protected], [email protected] Physics Department, University of California, Berkeley, CA, U.S.A. 94720. Email: [email protected]",
"title": ""
},
{
"docid": "c302699cb7dec9f813117bfe62d3b5fb",
"text": "Pipe networks constitute the means of transporting fluids widely used nowadays. Increasing the operational reliability of these systems is crucial to minimize the risk of leaks, which can cause serious pollution problems to the environment and have disastrous consequences if the leak occurs near residential areas. Considering the importance in developing efficient systems for detecting leaks in pipelines, this work aims to detect the characteristic frequencies (predominant) in case of leakage and no leakage. The methodology consisted of capturing the experimental data through a microphone installed inside the pipeline and coupled to a data acquisition card and a computer. The Fast Fourier Transform (FFT) was used as the mathematical approach to the signal analysis from the microphone, generating a frequency response (spectrum) which reveals the characteristic frequencies for each operating situation. The tests were carried out using distinct sizes of leaks, situations without leaks and cases with blows in the pipe caused by metal instruments. From the leakage tests, characteristic peaks were found in the FFT frequency spectrum using the signal generated by the microphone. Such peaks were not observed in situations with no leaks. Therewith, it was realized that it was possible to distinguish, through spectral analysis, an event of leakage from an event without leakage.",
"title": ""
},
{
"docid": "d9fe0834ccf80bddadc5927a8199cd2c",
"text": "Deep Residual Networks (ResNets) have recently achieved state-of-the-art results on many challenging computer vision tasks. In this work we analyze the role of Batch Normalization (BatchNorm) layers on ResNets in the hope of improving the current architecture and better incorporating other normalization techniques, such as Normalization Propagation (NormProp), into ResNets. Firstly, we verify that BatchNorm helps distribute representation learning to residual blocks at all layers, as opposed to a plain ResNet without BatchNorm where learning happens mostly in the latter part of the network. We also observe that BatchNorm well regularizes Concatenated ReLU (CReLU) activation scheme on ResNets, whose magnitude of activation grows by preserving both positive and negative responses when going deeper into the network. Secondly, we investigate the use of NormProp as a replacement for BatchNorm in ResNets. Though NormProp theoretically attains the same effect as BatchNorm on generic convolutional neural networks, the identity mapping of ResNets invalidates its theoretical promise and NormProp exhibits a significant performance drop when naively applied. To bridge the gap between BatchNorm and NormProp in ResNets, we propose a simple modification to NormProp and employ the CReLU activation scheme. We experiment on visual object recognition benchmark datasets such as CIFAR10/100 and ImageNet and demonstrate that 1) the modified NormProp performs better than the original NormProp but is still not comparable to BatchNorm and 2) CReLU improves the performance of ResNets with or without normalizations.",
"title": ""
}
] | scidocsrr |
d956c35ab4e217a8c4517f565197d4a9 | Pressure ulcer prevention and healing using alternating pressure mattress at home: the PARESTRY project. | [
{
"docid": "511c90eadbbd4129fdf3ee9e9b2187d3",
"text": "BACKGROUND\nPressure ulcers are associated with substantial health burdens but may be preventable.\n\n\nPURPOSE\nTo review the clinical utility of pressure ulcer risk assessment instruments and the comparative effectiveness of preventive interventions in persons at higher risk.\n\n\nDATA SOURCES\nMEDLINE (1946 through November 2012), CINAHL, the Cochrane Library, grant databases, clinical trial registries, and reference lists.\n\n\nSTUDY SELECTION\nRandomized trials and observational studies on effects of using risk assessment on clinical outcomes and randomized trials of preventive interventions on clinical outcomes.\n\n\nDATA EXTRACTION\nMultiple investigators abstracted and checked study details and quality using predefined criteria.\n\n\nDATA SYNTHESIS\nOne good-quality trial found no evidence that use of a pressure ulcer risk assessment instrument, with or without a protocolized intervention strategy based on assessed risk, reduces risk for incident pressure ulcers compared with less standardized risk assessment based on nurses' clinical judgment. In higher-risk populations, 1 good-quality and 4 fair-quality randomized trials found that more advanced static support surfaces were associated with lower risk for pressure ulcers compared with standard mattresses (relative risk range, 0.20 to 0.60). Evidence on the effectiveness of low-air-loss and alternating-air mattresses was limited, with some trials showing no clear differences from advanced static support surfaces. Evidence on the effectiveness of nutritional supplementation, repositioning, and skin care interventions versus usual care was limited and had methodological shortcomings, precluding strong conclusions.\n\n\nLIMITATION\nOnly English-language articles were included, publication bias could not be formally assessed, and most studies had methodological shortcomings.\n\n\nCONCLUSION\nMore advanced static support surfaces are more effective than standard mattresses for preventing ulcers in higher-risk populations. The effectiveness of formal risk assessment instruments and associated intervention protocols compared with less standardized assessment methods and the effectiveness of other preventive interventions compared with usual care have not been clearly established.",
"title": ""
},
{
"docid": "df5c384e9fb6ba57a5bbd7fef44ce5f0",
"text": "CONTEXT\nPressure ulcers are common in a variety of patient settings and are associated with adverse health outcomes and high treatment costs.\n\n\nOBJECTIVE\nTo systematically review the evidence examining interventions to prevent pressure ulcers.\n\n\nDATA SOURCES AND STUDY SELECTION\nMEDLINE, EMBASE, and CINAHL (from inception through June 2006) and Cochrane databases (through issue 1, 2006) were searched to identify relevant randomized controlled trials (RCTs). UMI Proquest Digital Dissertations, ISI Web of Science, and Cambridge Scientific Abstracts were also searched. All searches used the terms pressure ulcer, pressure sore, decubitus, bedsore, prevention, prophylactic, reduction, randomized, and clinical trials. Bibliographies of identified articles were further reviewed.\n\n\nDATA SYNTHESIS\nFifty-nine RCTs were selected. Interventions assessed in these studies were grouped into 3 categories, ie, those addressing impairments in mobility, nutrition, or skin health. Methodological quality for the RCTs was variable and generally suboptimal. Effective strategies that addressed impaired mobility included the use of support surfaces, mattress overlays on operating tables, and specialized foam and specialized sheepskin overlays. While repositioning is a mainstay in most pressure ulcer prevention protocols, there is insufficient evidence to recommend specific turning regimens for patients with impaired mobility. In patients with nutritional impairments, dietary supplements may be beneficial. The incremental benefit of specific topical agents over simple moisturizers for patients with impaired skin health is unclear.\n\n\nCONCLUSIONS\nGiven current evidence, using support surfaces, repositioning the patient, optimizing nutritional status, and moisturizing sacral skin are appropriate strategies to prevent pressure ulcers. Although a number of RCTs have evaluated preventive strategies for pressure ulcers, many of them had important methodological limitations. There is a need for well-designed RCTs that follow standard criteria for reporting nonpharmacological interventions and that provide data on cost-effectiveness for these interventions.",
"title": ""
}
] | [
{
"docid": "0e60cb8f9147f5334c3cfca2880c2241",
"text": "The quest for automatic Programming is the holy grail of artificial intelligence. The dream of having computer programs write other useful computer programs has haunted researchers since the nineteen fifties. In Genetic Progvamming III Darwinian Invention and Problem Solving (GP?) by John R. Koza, Forest H. Bennet 111, David Andre, and Martin A. Keane, the authors claim that the first inscription on this trophy should be the name Genetic Programming (GP). GP is about applying evolutionary algorithms to search the space of computer programs. The authors paraphrase Arthur Samuel of 1959 and argue that with this method it is possible to tell the computer what to do without telling it explicitly how t o do it.",
"title": ""
},
{
"docid": "9001f640ae3340586f809ab801f78ec0",
"text": "A correct perception of road signalizations is required for autonomous cars to follow the traffic codes. Road marking is a signalization present on road surfaces and commonly used to inform the correct lane cars must keep. Cameras have been widely used for road marking detection, however they are sensible to environment illumination. Some LIDAR sensors return infrared reflective intensity information which is insensible to illumination condition. Existing road marking detectors that analyzes reflective intensity data focus only on lane markings and ignores other types of signalization. We propose a road marking detector based on Otsu thresholding method that make possible segment LIDAR point clouds into asphalt and road marking. The results show the possibility of detecting any road marking (crosswalks, continuous lines, dashed lines). The road marking detector has also been integrated with Monte Carlo localization method so that its performance could be validated. According to the results, adding road markings onto curb maps lead to a lateral localization error of 0.3119 m.",
"title": ""
},
{
"docid": "6a15a0a0b9b8abc0e66fa9702cc3a573",
"text": "Knowledge Graphs have proven to be extremely valuable to recommender systems, as they enable hybrid graph-based recommendation models encompassing both collaborative and content information. Leveraging this wealth of heterogeneous information for top-N item recommendation is a challenging task, as it requires the ability of effectively encoding a diversity of semantic relations and connectivity patterns. In this work, we propose entity2rec, a novel approach to learning user-item relatedness from knowledge graphs for top-N item recommendation. We start from a knowledge graph modeling user-item and item-item relations and we learn property-specific vector representations of users and items applying neural language models on the network. These representations are used to create property-specific user-item relatedness features, which are in turn fed into learning to rank algorithms to learn a global relatedness model that optimizes top-N item recommendations. We evaluate the proposed approach in terms of ranking quality on the MovieLens 1M dataset, outperforming a number of state-of-the-art recommender systems, and we assess the importance of property-specific relatedness scores on the overall ranking quality.",
"title": ""
},
{
"docid": "dae877409dca88fc6fed5cf6536e65ad",
"text": "My 1971 Turing Award Lecture was entitled \"Generality in Artificial Intelligence.\" The topic turned out to have been overambitious in that I discovered I was unable to put my thoughts on the subject in a satisfactory written form at that time. It would have been better to have reviewed my previous work rather than attempt something new, but such was not my custom at that time.\nI am grateful to ACM for the opportunity to try again. Unfortunately for our science, although perhaps fortunately for this project, the problem of generality in artificial intelligence (AI) is almost as unsolved as ever, although we now have many ideas not available in 1971. This paper relies heavily on such ideas, but it is far from a full 1987 survey of approaches for achieving generality. Ideas are therefore discussed at a length proportional to my familiarity with them rather than according to some objective criterion.\nIt was obvious in 1971 and even in 1958 that AI programs suffered from a lack of generality. It is still obvious; there are many more details. The first gross symptom is that a small addition to the idea of a program often involves a complete rewrite beginning with the data structures. Some progress has been made in modularizing data structures, but small modifications of the search strategies are even less likely to be accomplished without rewriting.\nAnother symptom is no one knows how to make a general database of commonsense knowledge that could be used by any program that needed the knowledge. Along with other information, such a database would contain what a robot would need to know about the effects of moving objects around, what a person can be expected to know about his family, and the facts about buying and selling. This does not depend on whether the knowledge is to be expressed in a logical language or in some other formalism. When we take the logic approach to AI, lack of generality shows up in that the axioms we devise to express commonsense knowledge are too restricted in their applicability for a general commonsense database. In my opinion, getting a language for expressing general commonsense knowledge for inclusion in a general database is the key problem of generality in AI.\nHere are some ideas for achieving generality proposed both before and after 1971. I repeat my disclaimer of comprehensiveness.",
"title": ""
},
{
"docid": "a5f17126a90b45921f70439ff96a0091",
"text": "Successful methods for visual object recognition typically rely on training datasets containing lots of richly annotated images. Detailed image annotation, e.g. by object bounding boxes, however, is both expensive and often subjective. We describe a weakly supervised convolutional neural network (CNN) for object classification that relies only on image-level labels, yet can learn from cluttered scenes containing multiple objects. We quantify its object classification and object location prediction performance on the Pascal VOC 2012 (20 object classes) and the much larger Microsoft COCO (80 object classes) datasets. We find that the network (i) outputs accurate image-level labels, (ii) predicts approximate locations (but not extents) of objects, and (iii) performs comparably to its fully-supervised counterparts using object bounding box annotation for training.",
"title": ""
},
{
"docid": "4cdef79370abcd380357c8be92253fa5",
"text": "In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures. We show how a datadriven deterministic dependency parser, in itself restricted to projective structures, can be combined with graph transformation techniques to produce non-projective structures. Experiments using data from the Prague Dependency Treebank show that the combined system can handle nonprojective constructions with a precision sufficient to yield a significant improvement in overall parsing accuracy. This leads to the best reported performance for robust non-projective parsing of Czech.",
"title": ""
},
{
"docid": "cc90d1ac6aa63532282568f66ecd25fd",
"text": "Melphalan has been used in the treatment of various hematologic malignancies for almost 60 years. Today it is part of standard therapy for multiple myeloma and also as part of myeloablative regimens in association with autologous allogenic stem cell transplantation. Melflufen (melphalan flufenamide ethyl ester, previously called J1) is an optimized derivative of melphalan providing targeted delivery of active metabolites to cells expressing aminopeptidases. The activity of melflufen has compared favorably with that of melphalan in a series of in vitro and in vivo experiments performed preferentially on different solid tumor models and multiple myeloma. Melflufen is currently being evaluated in a clinical phase I/II trial in relapsed or relapsed and refractory multiple myeloma. Cytotoxicity of melflufen was assayed in lymphoma cell lines and in primary tumor cells with the Fluorometric Microculture Cytotoxicity Assay and cell cycle analyses was performed in two of the cell lines. Melflufen was also investigated in a xenograft model with subcutaneous lymphoma cells inoculated in mice. Melflufen showed activity with cytotoxic IC50-values in the submicromolar range (0.011-0.92 μM) in the cell lines, corresponding to a mean of 49-fold superiority (p < 0.001) in potency vs. melphalan. In the primary cultures melflufen yielded slightly lower IC50-values (2.7 nM to 0.55 μM) and an increased ratio vs. melphalan (range 13–455, average 108, p < 0.001). Treated cell lines exhibited a clear accumulation in the G2/M-phase of the cell cycle. Melflufen also showed significant activity and no, or minimal side effects in the xenografted animals. This study confirms previous reports of a targeting related potency superiority of melflufen compared to that of melphalan. Melflufen was active in cell lines and primary cultures of lymphoma cells, as well as in a xenograft model in mice and appears to be a candidate for further evaluation in the treatment of this group of malignant diseases.",
"title": ""
},
{
"docid": "b3f5176f49b467413d172134b1734ed8",
"text": "Commonsense reasoning is a long-standing challenge for deep learning. For example, it is difficult to use neural networks to tackle the Winograd Schema dataset [1]. In this paper, we present a simple method for commonsense reasoning with neural networks, using unsupervised learning. Key to our method is the use of language models, trained on a massive amount of unlabled data, to score multiple choice questions posed by commonsense reasoning tests. On both Pronoun Disambiguation and Winograd Schema challenges, our models outperform previous state-of-the-art methods by a large margin, without using expensive annotated knowledge bases or hand-engineered features. We train an array of large RNN language models that operate at word or character level on LM-1-Billion, CommonCrawl, SQuAD, Gutenberg Books, and a customized corpus for this task and show that diversity of training data plays an important role in test performance. Further analysis also shows that our system successfully discovers important features of the context that decide the correct answer, indicating a good grasp of commonsense knowledge.",
"title": ""
},
{
"docid": "1768ecf6a2d8a42ea701d7f242edb472",
"text": "Satisfaction prediction is one of the prime concerns in search performance evaluation. It is a non-trivial task for two major reasons: (1) The definition of satisfaction is rather subjective and different users may have different opinions in satisfaction judgement. (2) Most existing studies on satisfaction prediction mainly rely on users' click-through or query reformulation behaviors but there are many sessions without such kind of interactions. To shed light on these research questions, we construct an experimental search engine that could collect users' satisfaction feedback as well as mouse click-through/movement data. Different from existing studies, we compare for the first time search users' and external assessors' opinions on satisfaction. We find that search users pay more attention to the utility of results while external assessors emphasize on the efforts spent in search sessions. Inspired by recent studies in predicting result relevance based on mouse movement patterns (namely motifs), we propose to estimate the utilities of search results and the efforts in search sessions with motifs extracted from mouse movement data on search result pages (SERPs). Besides the existing frequency-based motif selection method, two novel selection strategies (distance-based and distribution-based) are also adopted to extract high quality motifs for satisfaction prediction. Experimental results on over 1,000 user sessions show that the proposed strategies outperform existing methods and also have promising generalization capability for different users and queries.",
"title": ""
},
{
"docid": "be9971903bf3d754ed18cc89cf254bd1",
"text": "This paper presents a semi-supervised learning method for improving the performance of AUC-optimized classifiers by using both labeled and unlabeled samples. In actual binary classification tasks, there is often an imbalance between the numbers of positive and negative samples. For such imbalanced tasks, the area under the ROC curve (AUC) is an effective measure with which to evaluate binary classifiers. The proposed method utilizes generative models to assist the incorporation of unlabeled samples in AUC-optimized classifiers. The generative models provide prior knowledge that helps learn the distribution of unlabeled samples. To evaluate the proposed method in text classification, we employed naive Bayes models as the generative models. Our experimental results using three test collections confirmed that the proposed method provided better classifiers for imbalanced tasks than supervised AUC-optimized classifiers and semi-supervised classifiers trained to maximize the classification accuracy of labeled samples. Moreover, the proposed method improved the effect of using unlabeled samples for AUC optimization especially when we used appropriate generative models.",
"title": ""
},
{
"docid": "43233e45f07b80b8367ac1561356888d",
"text": "Current Zero-Shot Learning (ZSL) approaches are restricted to recognition of a single dominant unseen object category in a test image. We hypothesize that this setting is ill-suited for real-world applications where unseen objects appear only as a part of a complex scene, warranting both the ‘recognition’ and ‘localization’ of an unseen category. To address this limitation, we introduce a new ‘Zero-Shot Detection’ (ZSD) problem setting, which aims at simultaneously recognizing and locating object instances belonging to novel categories without any training examples. We also propose a new experimental protocol for ZSD based on the highly challenging ILSVRC dataset, adhering to practical issues, e.g., the rarity of unseen objects. To the best of our knowledge, this is the first end-to-end deep network for ZSD that jointly models the interplay between visual and semantic domain information. To overcome the noise in the automatically derived semantic descriptions, we utilize the concept of meta-classes to design an original loss function that achieves synergy between max-margin class separation and semantic space clustering. Furthermore, we present a baseline approach extended from recognition to detection setting. Our extensive experiments show significant performance boost over the baseline on the imperative yet difficult ZSD problem.",
"title": ""
},
{
"docid": "65b2d6ea5e1089c52378b4fd6386224c",
"text": "In traffic environment, conventional FMCW radar with triangular transmit waveform may bring out many false targets in multi-target situations and result in a high false alarm rate. An improved FMCW waveform and multi-target detection algorithm for vehicular applications is presented. The designed waveform in each small cycle is composed of two-segment: LFM section and constant frequency section. They have the same duration, yet in two adjacent small cycles the two LFM slopes are opposite sign and different size. Then the two adjacent LFM bandwidths are unequal. Within a determinate frequency range, the constant frequencies are modulated by a unique PN code sequence for different automotive radar in a big period. Corresponding to the improved waveform, which combines the advantages of both FSK and FMCW formats, a judgment algorithm is used in the continuous small cycle to further eliminate the false targets. The combination of unambiguous ranges and relative velocities can confirm and cancel most false targets in two adjacent small cycles.",
"title": ""
},
{
"docid": "ffa5ae359807884c2218b92d2db2a584",
"text": "We present a method for automatically classifying consumer health questions. Our thirteen question types are designed to aid in the automatic retrieval of medical answers from consumer health resources. To our knowledge, this is the first machine learning-based method specifically for classifying consumer health questions. We demonstrate how previous approaches to medical question classification are insufficient to achieve high accuracy on this task. Additionally, we describe, manually annotate, and automatically classify three important question elements that improve question classification over previous techniques. Our results and analysis illustrate the difficulty of the task and the future directions that are necessary to achieve high-performing consumer health question classification.",
"title": ""
},
{
"docid": "9bce495ed14617fe05086f06be8279e0",
"text": "In previous chapters we reviewed Bayesian neural networks (BNNs) and historical techniques for approximate inference in these, as well as more recent approaches. We discussed the advantages and disadvantages of different techniques, examining their practicality. This, perhaps, is the most important aspect of modern techniques for approximate inference in BNNs. The field of deep learning is pushed forward by practitioners, working on real-world problems. Techniques which cannot scale to complex models with potentially millions of parameters, scale well with large amounts of data, need well studied models to be radically changed, or are not accessible to engineers, will simply perish. In this chapter we will develop on the strand of work of [Graves, 2011; Hinton and Van Camp, 1993], but will do so from the Bayesian perspective rather than the information theory one. Developing Bayesian approaches to deep learning, we will tie approximate BNN inference together with deep learning stochastic regularisation techniques (SRTs) such as dropout. These regularisation techniques are used in many modern deep learning tools, allowing us to offer a practical inference technique. We will start by reviewing in detail the tools used by [Graves, 2011]. We extend on these with recent research, commenting and analysing the variance of several stochastic estimators in variational inference (VI). Following that we will tie these derivations to SRTs, and propose practical techniques to obtain model uncertainty, even from existing models. We finish the chapter by developing specific examples for image based models (CNNs) and sequence based models (RNNs). These will be demonstrated in chapter 5, where we will survey recent research making use of the suggested tools in real-world problems.",
"title": ""
},
{
"docid": "87b67f9ed23c27a71b6597c94ccd6147",
"text": "Recently, deep learning approach, especially deep Convolutional Neural Networks (ConvNets), have achieved overwhelming accuracy with fast processing speed for image classification. Incorporating temporal structure with deep ConvNets for video representation becomes a fundamental problem for video content analysis. In this paper, we propose a new approach, namely Hierarchical Recurrent Neural Encoder (HRNE), to exploit temporal information of videos. Compared to recent video representation inference approaches, this paper makes the following three contributions. First, our HRNE is able to efficiently exploit video temporal structure in a longer range by reducing the length of input information flow, and compositing multiple consecutive inputs at a higher level. Second, computation operations are significantly lessened while attaining more non-linearity. Third, HRNE is able to uncover temporal tran-sitions between frame chunks with different granularities, i.e. it can model the temporal transitions between frames as well as the transitions between segments. We apply the new method to video captioning where temporal information plays a crucial role. Experiments demonstrate that our method outperforms the state-of-the-art on video captioning benchmarks.",
"title": ""
},
{
"docid": "56ff9c1be08569b6a881b070b0173797",
"text": "This paper examines a set of commercially representative embedded programs and compares them to an existing benchmark suite, SPEC2000. A new version of SimpleScalar that has been adapted to the ARM instruction set is used to characterize the performance of the benchmarks using configurations similar to current and next generation embedded processors. Several characteristics distinguish the representative embedded programs from the existing SPEC benchmarks including instruction distribution, memory behavior, and available parallelism. The embedded benchmarks, called MiBench, are freely available to all researchers.",
"title": ""
},
{
"docid": "ef598ba4f9a4df1f42debc0eabd1ead8",
"text": "Software developers interact with the development environments they use by issuing commands that execute various programming tools, from source code formatters to build tools. However, developers often only use a small subset of the commands offered by modern development environments, reducing their overall development fluency. In this paper, we use several existing command recommender algorithms to suggest new commands to developers based on their existing command usage history, and also introduce several new algorithms. By running these algorithms on data submitted by several thousand Eclipse users, we describe two studies that explore the feasibility of automatically recommending commands to software developers. The results suggest that, while recommendation is more difficult in development environments than in other domains, it is still feasible to automatically recommend commands to developers based on their usage history, and that using patterns of past discovery is a useful way to do so.",
"title": ""
},
{
"docid": "1ff5526e4a18c1e59b63a3de17101b11",
"text": "Plug-in electric vehicles (PEVs) are equipped with onboard level-1 or level-2 chargers for home overnight or office daytime charging. In addition, off-board chargers can provide fast charging for traveling long distances. However, off-board high-power chargers are bulky, expensive, and require comprehensive evolution of charging infrastructures. An integrated onboard charger capable of fast charging of PEVs will combine the benefits of both the conventional onboard and off-board chargers, without additional weight, volume, and cost. In this paper, an innovative single-phase integrated charger, using the PEV propulsion machine and its traction converter, is introduced. The charger topology is capable of power factor correction and battery voltage/current regulation without any bulky add-on components. Ac machine windings are utilized as mutually coupled inductors, to construct a two-channel interleaved boost converter. The circuit analyses of the proposed technology, based on a permanent magnet synchronous machine (PMSM), are discussed in details. Experimental results of a 3-kW proof-of-concept prototype are carried out using a ${\\textrm{220-V}}_{{\\rm{rms}}}$, 3-phase, 8-pole PMSM. A nearly unity power factor and 3.96% total harmonic distortion of input ac current are acquired with a maximum efficiency of 93.1%.",
"title": ""
},
{
"docid": "fb89fd2d9bf526b8bc7f1433274859a6",
"text": "In multidimensional image analysis, there are, and will continue to be, situations wherein automatic image segmentation methods fail, calling for considerable user assistance in the process. The main goals of segmentation research for such situations ought to be (i) to provide ffective controlto the user on the segmentation process while it is being executed, and (ii) to minimize the total user’s time required in the process. With these goals in mind, we present in this paper two paradigms, referred to aslive wireandlive lane, for practical image segmentation in large applications. For both approaches, we think of the pixel vertices and oriented edges as forming a graph, assign a set of features to each oriented edge to characterize its “boundariness,” and transform feature values to costs. We provide training facilities and automatic optimal feature and transform selection methods so that these assignments can be made with consistent effectiveness in any application. In live wire, the user first selects an initial point on the boundary. For any subsequent point indicated by the cursor, an optimal path from the initial point to the current point is found and displayed in real time. The user thus has a live wire on hand which is moved by moving the cursor. If the cursor goes close to the boundary, the live wire snaps onto the boundary. At this point, if the live wire describes the boundary appropriately, the user deposits the cursor which now becomes the new starting point and the process continues. A few points (livewire segments) are usually adequate to segment the whole 2D boundary. In live lane, the user selects only the initial point. Subsequent points are selected automatically as the cursor is moved within a lane surrounding the boundary whose width changes",
"title": ""
},
{
"docid": "8cb5659bdbe9d376e2a3b0147264d664",
"text": "Group brainstorming is widely adopted as a design method in the domain of software development. However, existing brainstorming literature has consistently proven group brainstorming to be ineffective under the controlled laboratory settings. Yet, electronic brainstorming systems informed by the results of these prior laboratory studies have failed to gain adoption in the field because of the lack of support for group well-being and member support. Therefore, there is a need to better understand brainstorming in the field. In this work, we seek to understand why and how brainstorming is actually practiced, rather than how brainstorming practices deviate from formal brainstorming rules, by observing brainstorming meetings at Microsoft. The results of this work show that, contrary to the conventional brainstorming practices, software teams at Microsoft engage heavily in the constraint discovery process in their brainstorming meetings. We identified two types of constraints that occur in brainstorming meetings. Functional constraints are requirements and criteria that define the idea space, whereas practical constraints are limitations that prioritize the proposed solutions.",
"title": ""
}
] | scidocsrr |
4da8e5ddac2a648e63d7d5661a25ee65 | Ethical Artificial Intelligence - An Open Question | [
{
"docid": "f76808350f95de294c2164feb634465a",
"text": "By far the greatest danger of Artificial Intelligence is that people conclude too early that they understand it. Of course this problem is not limited to the field of AI. Jacques Monod wrote: \"A curious aspect of the theory of evolution is that everybody thinks he understands it.\" (Monod 1974.) My father, a physicist, complained about people making up their own theories of physics; he wanted to know why people did not make up their own theories of chemistry. (Answer: They do.) Nonetheless the problem seems to be unusually acute in Artificial Intelligence. The field of AI has a reputation for making huge promises and then failing to deliver on them. Most observers conclude that AI is hard; as indeed it is. But the embarrassment does not stem from the difficulty. It is difficult to build a star from hydrogen, but the field of stellar astronomy does not have a terrible reputation for promising to build stars and then failing. The critical inference is not that AI is hard, but that, for some reason, it is very easy for people to think they know far more about Artificial Intelligence than they actually do.",
"title": ""
}
] | [
{
"docid": "9747be055df9acedfdfe817eb7e1e06e",
"text": "Text summarization solves the problem of extracting important information from huge amount of text data. There are various methods in the literature that aim to find out well-formed summaries. One of the most commonly used methods is the Latent Semantic Analysis (LSA). In this paper, different LSA based summarization algorithms are explained and two new LSA based summarization algorithms are proposed. The algorithms are evaluated on Turkish documents, and their performances are compared using their ROUGE-L scores. One of our algorithms produces the best scores.",
"title": ""
},
{
"docid": "c0e1be5859be1fc5871993193a709f2d",
"text": "This paper reviews the possible causes and effects for no-fault-found observations and intermittent failures in electronic products and summarizes them into cause and effect diagrams. Several types of intermittent hardware failures of electronic assemblies are investigated, and their characteristics and mechanisms are explored. One solder joint intermittent failure case study is presented. The paper then discusses when no-fault-found observations should be considered as failures. Guidelines for assessment of intermittent failures are then provided in the discussion and conclusions. Ó 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "f4380a5acaba5b534d13e1a4f09afe4f",
"text": "Several approaches to automatic speech summarization are discussed below, using the ICSI Meetings corpus. We contrast feature-based approaches using prosodic and lexical features with maximal marginal relevance and latent semantic analysis approaches to summarization. While the latter two techniques are borrowed directly from the field of text summarization, feature-based approaches using prosodic information are able to utilize characteristics unique to speech data. We also investigate how the summarization results might deteriorate when carried out on ASR output as opposed to manual transcripts. All of the summaries are of an extractive variety, and are compared using the software ROUGE.",
"title": ""
},
{
"docid": "01638567bf915e26bf9398132ca27264",
"text": "Uncontrolled bleeding from the cystic artery and its branches is a serious problem that may increase the risk of intraoperative lesions to vital vascular and biliary structures. On laparoscopic visualization anatomic relations are seen differently than during conventional surgery, so proper knowledge of the hepatobiliary triangle anatomic structures under the conditions of laparoscopic visualization is required. We present an original classification of the anatomic variations of the cystic artery into two main groups based on our experience with 200 laparoscopic cholecystectomies, with due consideration of the known anatomicotopographic relations. Group I designates a cystic artery situated within the hepatobiliary triangle on laparoscopic visualization. This group included three types: (1) normally lying cystic artery, found in 147 (73.5%) patients; (2) most common cystic artery variation, manifesting as its doubling, present in 31 (15.5%) patients; and (3) the cystic artery originating from the aberrant right hepatic artery, observed in 11 (5.5%) patients. Group II designates a cystic artery that could not be found within the hepatobiliary triangle on laparoscopic dissection. This group included two types of variation: (1) cystic artery originating from the gastroduodenal artery, found in nine (4.5%) patients; and (2) cystic artery originating from the left hepatic artery, recorded in two (1%) patients.",
"title": ""
},
{
"docid": "2663800ed92ce1cd44ab1b7760c43e0f",
"text": "Synchronous reluctance motor (SynRM) have rather poor power factor. This paper investigates possible methods to improve the power factor (pf) without impacting its torque density. The study found two possible aspects to improve the power factor with either refining rotor dimensions and followed by current control techniques. Although it is a non-linear mathematical field, it is analysed by analytical equations and FEM simulation is utilized to validate the design progression. Finally, an analytical method is proposed to enhance pf without compromising machine torque density. There are many models examined in this study to verify the design process. The best design with high performance is used for final current control optimization simulation.",
"title": ""
},
{
"docid": "c9750e95b3bd422f0f5e73cf6c465b35",
"text": "Lingual nerve damage complicating oral surgery would sometimes require electrographic exploration. Nevertheless, direct recording of conduction in lingual nerve requires its puncture at the foramen ovale. This method is too dangerous to be practiced routinely in these diagnostic indications. The aim of our study was to assess spatial relationships between lingual nerve and mandibular ramus in the infratemporal fossa using an original technique. Therefore, ten lingual nerves were dissected on five fresh cadavers. All the nerves were catheterized with a 3/0 wire. After meticulous repositioning of the nerve and medial pterygoid muscle reinsertion, CT-scan examinations were performed with planar acquisitions and three-dimensional reconstructions. Localization of lingual nerve in the infratemporal fossa was assessed successively at the level of the sigmoid notch of the mandible, lingula and third molar. At the level of the lingula, lingual nerve was far from the maxillary vessels; mean distance between the nerve and the anterior border of the ramus was 19.6 mm. The posteriorly opened angle between the medial side of the ramus and the line joining the lingual nerve and the anterior border of the ramus measured 17°. According to these findings, we suggest that the lingual nerve might be reached through the intra-oral puncture at the intermaxillary commissure; therefore, we modify the inferior alveolar nerve block technique to propose a safe and reproducible protocol likely to be performed routinely as electrographic exploration of the lingual nerve. What is more, this original study protocol provided interesting educational materials and could be developed for the conception of realistic 3D virtual anatomy supports.",
"title": ""
},
{
"docid": "3d56f88bf8053258a12e609129237b19",
"text": "Thepresentstudyfocusesontherelationships between entrepreneurial characteristics (achievement orientation, risk taking propensity, locus of control, and networking), e-service business factors (reliability, responsiveness, ease of use, and self-service), governmental support, and the success of e-commerce entrepreneurs. Results confirm that the achievement orientation and locus of control of founders and business emphasis on reliability and ease of use functions of e-service quality are positively related to the success of e-commerce entrepreneurial ventures in Thailand. Founder risk taking and networking, e-service responsiveness and self-service, and governmental support are found to be non-significant.",
"title": ""
},
{
"docid": "dbde47a4142bffc2bcbda988781e5229",
"text": "Grasping individual objects from an unordered pile in a box has been investigated in static scenarios so far. In this paper, we demonstrate bin picking with an anthropomorphic mobile robot. To this end, we extend global navigation techniques by precise local alignment with a transport box. Objects are detected in range images using a shape primitive-based approach. Our approach learns object models from single scans and employs active perception to cope with severe occlusions. Grasps and arm motions are planned in an efficient local multiresolution height map. All components are integrated and evaluated in a bin picking and part delivery task.",
"title": ""
},
{
"docid": "730d25d97f4ad67838a541f206cfcec2",
"text": "Semantic segmentation of 3D point clouds is a challenging problem with numerous real-world applications. While deep learning has revolutionized the field of image semantic segmentation, its impact on point cloud data has been limited so far. Recent attempts, based on 3D deep learning approaches (3DCNNs), have achieved below-expected results. Such methods require voxelizations of the underlying point cloud data, leading to decreased spatial resolution and increased memory consumption. Additionally, 3D-CNNs greatly suffer from the limited availability of annotated datasets. In this paper, we propose an alternative framework that avoids the limitations of 3D-CNNs. Instead of directly solving the problem in 3D, we first project the point cloud onto a set of synthetic 2D-images. These images are then used as input to a 2D-CNN, designed for semantic segmentation. Finally, the obtained prediction scores are re-projected to the point cloud to obtain the segmentation results. We further investigate the impact of multiple modalities, such as color, depth and surface normals, in a multi-stream network architecture. Experiments are performed on the recent Semantic3D dataset. Our approach sets a new stateof-the-art by achieving a relative gain of 7.9%, compared to the previous best approach.",
"title": ""
},
{
"docid": "a3b18ade3e983d91b7a8fc8d4cb6a75d",
"text": "The IC stripline method is one of those suggested in IEC-62132 to evaluate the susceptibility of ICs to radiated electromagnetic interference. In practice, it allows the multiple injection of the interference through the capacitive and inductive coupling of the IC package with the guiding structure (the stripline) in which the device under test is inserted. The pros and cons of this method are discussed and a variant of it is proposed with the aim to address the main problems that arise when evaluating the susceptibility of ICs encapsulated in small packages.",
"title": ""
},
{
"docid": "385fc1f02645d4d636869317cde6d35e",
"text": "Events and their coreference offer useful semantic and discourse resources. We show that the semantic and discourse aspects of events interact with each other. However, traditional approaches addressed event extraction and event coreference resolution either separately or sequentially, which limits their interactions. This paper proposes a document-level structured learning model that simultaneously identifies event triggers and resolves event coreference. We demonstrate that the joint model outperforms a pipelined model by 6.9 BLANC F1 and 1.8 CoNLL F1 points in event coreference resolution using a corpus in the biology domain.",
"title": ""
},
{
"docid": "c0dbb410ebd6c84bd97b5f5e767186b3",
"text": "A new hypothesis about the role of focused attention is proposed. The feature-integration theory of attention suggests that attention must be directed serially to each stimulus in a display whenever conjunctions of more than one separable feature are needed to characterize or distinguish the possible objects presented. A number of predictions were tested in a variety of paradigms including visual search, texture segregation, identification and localization, and using both separable dimensions (shape and color) and local elements or parts of figures (lines, curves, etc. in letters) as the features to be integrated into complex wholes. The results were in general consistent with the hypothesis. They offer a new set of criteria for distinguishing separable from integral features and a new rationale for predicting which tasks will show attention limits and which will not.",
"title": ""
},
{
"docid": "ac65c09468cd88765009abe49d9114cf",
"text": "It is known that head gesture and brain activity can reflect some human behaviors related to a risk of accident when using machine-tools. The research presented in this paper aims at reducing the risk of injury and thus increase worker safety. Instead of using camera, this paper presents a Smart Safety Helmet (SSH) in order to track the head gestures and the brain activity of the worker to recognize anomalous behavior. Information extracted from SSH is used for computing risk of an accident (a safety level) for preventing and reducing injuries or accidents. The SSH system is an inexpensive, non-intrusive, non-invasive, and non-vision-based system, which consists of an Inertial Measurement Unit (IMU) and dry EEG electrodes. A haptic device, such as vibrotactile motor, is integrated to the helmet in order to alert the operator when computed risk level (fatigue, high stress or error) reaches a threshold. Once the risk level of accident breaks the threshold, a signal will be sent wirelessly to stop the relevant machine tool or process.",
"title": ""
},
{
"docid": "500eca6c6fb88958662fd0210927d782",
"text": "Purpose – Force output is extremely important for electromagnetic linear machines. The purpose of this study is to explore new permanent magnet (PM) array and winding patterns to increase the magnetic flux density and thus to improve the force output of electromagnetic tubular linear machines. Design/methodology/approach – Based on investigations on various PM patterns, a novel dual Halbach PM array is proposed in this paper to increase the radial component of flux density in three-dimensional machine space, which in turn can increase the force output of tubular linear machine significantly. The force outputs and force ripples for different winding patterns are formulated and analyzed, to select optimized structure parameters. Findings – The proposed dual Halbach array can increase the radial component of flux density and force output of tubular linear machines effectively. It also helps to decrease the axial component of flux density and thus to reduce the deformation and vibration of machines. By using analytical force models, the influence of winding patterns and structure parameters on the machine force output and force ripples can be analyzed. As a result, one set of optimized structure parameters are selected for the design of electromagnetic tubular linear machines. Originality/value – The proposed dual Halbach array and winding patterns are effective ways to improve the linear machine performance. It can also be implemented into rotary machines. The analyzing and design methods could be extended into the development of other electromagnetic machines.",
"title": ""
},
{
"docid": "91e9f3b1ebd57ff472ab8848370c366f",
"text": "Time series prediction problems are becoming increasingly high-dimensional in modern applications, such as climatology and demand forecasting. For example, in the latter problem, the number of items for which demand needs to be forecast might be as large as 50,000. In addition, the data is generally noisy and full of missing values. Thus, modern applications require methods that are highly scalable, and can deal with noisy data in terms of corruptions or missing values. However, classical time series methods usually fall short of handling these issues. In this paper, we present a temporal regularized matrix factorization (TRMF) framework which supports data-driven temporal learning and forecasting. We develop novel regularization schemes and use scalable matrix factorization methods that are eminently suited for high-dimensional time series data that has many missing values. Our proposed TRMF is highly general, and subsumes many existing approaches for time series analysis. We make interesting connections to graph regularization methods in the context of learning the dependencies in an autoregressive framework. Experimental results show the superiority of TRMF in terms of scalability and prediction quality. In particular, TRMF is two orders of magnitude faster than other methods on a problem of dimension 50,000, and generates better forecasts on real-world datasets such as Wal-mart E-commerce datasets.",
"title": ""
},
{
"docid": "19f9e643decc8047d73a20d664eb458d",
"text": "There is considerable federal interest in disaster resilience as a mechanism for mitigating the impacts to local communities, yet the identification of metrics and standards for measuring resilience remain a challenge. This paper provides a methodology and a set of indicators for measuring baseline characteristics of communities that foster resilience. By establishing baseline conditions, it becomes possible to monitor changes in resilience over time in particular places and to compare one place to another. We apply our methodology to counties within the Southeastern United States as a proof of concept. The results show that spatial variations in disaster resilience exist and are especially evident in the rural/urban divide, where metropolitan areas have higher levels of resilience than rural counties. However, the individual drivers of the disaster resilience (or lack thereof)—social, economic, institutional, infrastructure, and community capacities—vary",
"title": ""
},
{
"docid": "6751bfa8495065db8f6f5b396bbbc2cd",
"text": "This paper proposes a new balanced realization and model reduction method for possibly unstable systems by introducing some new controllability and observability Gramians. These Gramians can be related to minimum control energy and minimum estimation error. In contrast to Gramians defined in the literature for unstable systems, these Gramians can always be computed for systems without imaginary axis poles and they reduce to the standard controllability and observability Gramians when the systems are stable. The proposed balanced model reduction method enjoys the similar error bounds as does for the standard balanced model reduction. Furthermore, the new error bounds and the actual approximation errors seem to be much smaller than the ones using the methods given in the literature for unstable systems. Copyright ( 1999 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "5b9a08e4edd7e44ed261d304bc8f78c3",
"text": "Cone beam computed tomography (CBCT) has been specifically designed to produce undistorted three-dimensional information of the maxillofacial skeleton, including the teeth and their surrounding tissues with a significantly lower effective radiation dose compared with conventional computed tomography (CT). Periapical disease may be detected sooner using CBCT compared with periapical views and the true size, extent, nature and position of periapical and resorptive lesions can be assessed. Root fractures, root canal anatomy and the nature of the alveolar bone topography around teeth may be assessed. The aim of this paper is to review current literature on the applications and limitations of CBCT in the management of endodontic problems.",
"title": ""
},
{
"docid": "0332be71a529382e82094239db31ea25",
"text": "Nguyen and Shparlinski recently presented a polynomial-time algorithm that provably recovers the signer’s secret DSA key when a few bits of the random nonces k (used at each signature generation) are known for a number of DSA signatures at most linear in log q (q denoting as usual the small prime of DSA), under a reasonable assumption on the hash function used in DSA. The number of required bits is about log q, and can be further decreased to 2 if one assumes access to ideal lattice basis reduction, namely an oracle for the lattice closest vector problem for the infinity norm. All previously known results were only heuristic, including those of Howgrave-Graham and Smart who introduced the topic. Here, we obtain similar results for the elliptic curve variant of DSA (ECDSA).",
"title": ""
},
{
"docid": "d6da3d9b1357c16bb2d9ea46e56fa60f",
"text": "The Supervisory Control and Data Acquisition System (SCADA) monitor and control real-time systems. SCADA systems are the backbone of the critical infrastructure, and any compromise in their security can have grave consequences. Therefore, there is a need to have a SCADA testbed for checking vulnerabilities and validating security solutions. In this paper we develop such a SCADA testbed.",
"title": ""
}
] | scidocsrr |
e9186d6222a2baf349f8ae3316689fdb | TWO What Does It Mean to be Biased : Motivated Reasoning and Rationality | [
{
"docid": "6103a365705a6083e40bb0ca27f6ca78",
"text": "Confirmation bias, as the term is typically used in the psychological literature, connotes the seeking or interpreting of evidence in ways that are partial to existing beliefs, expectations, or a hypothesis in hand. The author reviews evidence of such a bias in a variety of guises and gives examples of its operation in several practical contexts. Possible explanations are considered, and the question of its utility or disutility is discussed.",
"title": ""
}
] | [
{
"docid": "5b01c2e7bba6ab1abdda9b1a23568d2a",
"text": "First, we theoretically analyze the MMD-based estimates. Our analysis establishes that, under some mild conditions, the estimate is statistically consistent. More importantly, it provides an upper bound on the error in the estimate in terms of intuitive geometric quantities like class separation and data spread. Next, we use the insights obtained from the theoretical analysis, to propose a novel convex formulation that automatically learns the kernel to be employed in the MMD-based estimation. We design an efficient cutting plane algorithm for solving this formulation. Finally, we empirically compare our estimator with several existing methods, and show significantly improved performance under varying datasets, class ratios, and training sizes.",
"title": ""
},
{
"docid": "e0c52b0fdf2d67bca4687b8060565288",
"text": "Large graph databases are commonly collected and analyzed in numerous domains. For reasons related to either space efficiency or for privacy protection (e.g., in the case of social network graphs), it sometimes makes sense to replace the original graph with a summary, which removes certain details about the original graph topology. However, this summarization process leaves the database owner with the challenge of processing queries that are expressed in terms of the original graph, but are answered using the summary. In this paper, we propose a formal semantics for answering queries on summaries of graph structures. At its core, our formulation is based on a random worlds model. We show that important graph-structure queries (e.g., adjacency, degree, and eigenvector centrality) can be answered efficiently and in closed form using these semantics. Further, based on this approach to query answering, we formulate three novel graph partitioning/compression problems. We develop algorithms for finding a graph summary that least affects the accuracy of query results, and we evaluate our proposed algorithms using both real and synthetic data.",
"title": ""
},
{
"docid": "dff09daea034a765b858bc6a457cb6a7",
"text": "We study the problem of automatically and efficiently generating itineraries for users who are on vacation. We focus on the common case, wherein the trip duration is more than a single day. Previous efficient algorithms based on greedy heuristics suffer from two problems. First, the itineraries are often unbalanced, with excellent days visiting top attractions followed by days of exclusively lower-quality alternatives. Second, the trips often re-visit neighborhoods repeatedly in order to cover increasingly low-tier points of interest. Our primary technical contribution is an algorithm that addresses both these problems by maximizing the quality of the worst day. We give theoretical results showing that this algorithm»s competitive factor is within a factor two of the guarantee of the best available algorithm for a single day, across many variations of the problem. We also give detailed empirical evaluations using two distinct datasets:(a) anonymized Google historical visit data and(b) Foursquare public check-in data. We show first that the overall utility of our itineraries is almost identical to that of algorithms specifically designed to maximize total utility, while the utility of the worst day of our itineraries is roughly twice that obtained from other approaches. We then turn to evaluation based on human raters who score our itineraries only slightly below the itineraries created by human travel experts with deep knowledge of the area.",
"title": ""
},
{
"docid": "911ca70346689d6ba5fd01b1bc964dbe",
"text": "We present a novel texture compression scheme, called iPACKMAN, targeted for hardware implementation. In terms of image quality, it outperforms the previous de facto standard texture compression algorithms in the majority of all cases that we have tested. Our new algorithm is an extension of the PACKMAN texture compression system, and while it is a bit more complex than PACKMAN, it is still very low in terms of hardware complexity.",
"title": ""
},
{
"docid": "f2daa3fd822be73e3663520cc6afe741",
"text": "Low health literacy (LHL) remains a formidable barrier to improving health care quality and outcomes. Given the lack of precision of single demographic characteristics to predict health literacy, and the administrative burden and inability of existing health literacy measures to estimate health literacy at a population level, LHL is largely unaddressed in public health and clinical practice. To help overcome these limitations, we developed two models to estimate health literacy. We analyzed data from the 2003 National Assessment of Adult Literacy (NAAL), using linear regression to predict mean health literacy scores and probit regression to predict the probability of an individual having ‘above basic’ proficiency. Predictors included gender, age, race/ethnicity, educational attainment, poverty status, marital status, language spoken in the home, metropolitan statistical area (MSA) and length of time in U.S. All variables except MSA were statistically significant, with lower educational attainment being the strongest predictor. Our linear regression model and the probit model accounted for about 30% and 21% of the variance in health literacy scores, respectively, nearly twice as much as the variance accounted for by either education or poverty alone. Multivariable models permit a more accurate estimation of health literacy than single predictors. Further, such models can be applied to readily available administrative or census data to produce estimates of average health literacy and identify communities that would benefit most from appropriate, targeted interventions in the clinical setting to address poor quality care and outcomes related to LHL.",
"title": ""
},
{
"docid": "cc9de768281e58749cd073d25a97d39c",
"text": "The Dynamic Adaptive Streaming over HTTP (referred as MPEG DASH) standard is designed to provide high quality of media content over the Internet delivered from conventional HTTP web servers. The visual content, divided into a sequence of segments, is made available at a number of different bitrates so that an MPEG DASH client can automatically select the next segment to download and play back based on current network conditions. The task of transcoding media content to different qualities and bitrates is computationally expensive, especially in the context of large-scale video hosting systems. Therefore, it is preferably executed in a powerful cloud environment, rather than on the source computer (which may be a mobile device with limited memory, CPU speed and battery life). In order to support the live distribution of media events and to provide a satisfactory user experience, the overall processing delay of videos should be kept to a minimum. In this paper, we propose a novel dynamic scheduling methodology on video transcoding for MPEG DASH in a cloud environment, which can be adapted to different applications. The designed scheduler monitors the workload on each processor in the cloud environment and selects the fastest processors to run high-priority jobs. It also adjusts the video transcoding mode (VTM) according to the system load. Experimental results show that the proposed scheduler performs well in terms of the video completion time, system load balance, and video playback smoothness.",
"title": ""
},
{
"docid": "7eba71bb191a31bd87cd9d2678a7b860",
"text": "In winter, rainbow smelt (Osmerus mordax) accumulate glycerol and produce an antifreeze protein (AFP), which both contribute to freeze resistance. The role of differential gene expression in the seasonal pattern of these adaptations was investigated. First, cDNAs encoding smelt and Atlantic salmon (Salmo salar) phosphoenolpyruvate carboxykinase (PEPCK) and smelt glyceraldehyde-3-phosphate dehydrogenase (GAPDH) were cloned so that all sequences required for expression analysis would be available. Using quantitative PCR, expression of beta actin in rainbow smelt liver was compared with that of GAPDH in order to determine its validity as a reference gene. Then, levels of glycerol-3-phosphate dehydrogenase (GPDH), PEPCK, and AFP relative to beta actin were measured in smelt liver over a fall-winter-spring interval. Levels of GPDH mRNA increased in the fall just before plasma glycerol accumulation, implying a driving role in glycerol synthesis. GPDH mRNA levels then declined during winter, well in advance of serum glycerol, suggesting the possibility of GPDH enzyme or glycerol conservation in smelt during the winter months. PEPCK mRNA levels rose in parallel with serum glycerol in the fall, consistent with an increasing requirement for amino acids as metabolic precursors, remained elevated for much of the winter, and then declined in advance of the decline in plasma glycerol. AFP mRNA was elevated at the onset of fall sampling in October and remained elevated until April, implying separate regulation from GPDH and PEPCK. Thus, winter freezing point depression in smelt appears to result from a seasonal cycle of GPDH gene expression, with an ensuing increase in the expression of PEPCK, and a similar but independent cycle of AFP gene expression.",
"title": ""
},
{
"docid": "4cd1eeb516d602390703b66d3201a9dc",
"text": "A thorough understanding of the orbit, structures within it, and complex spatial relationships among these structures bears relevance in a variety of neurosurgical cases. We describe the 3-dimensional surgical anatomy of the orbit and fragile and complex network of neurovascular architectures, flanked by a series of muscular and glandular structures, found within the orbital dura.",
"title": ""
},
{
"docid": "4a837ccd9e392f8c7682446d9a3a3743",
"text": "This paper investigates the applicability of Genetic Programming type systems to dynamic game environments. Grammatical Evolution was used to evolve Behaviour Trees, in order to create controllers for the Mario AI Benchmark. The results obtained reinforce the applicability of evolutionary programming systems to the development of artificial intelligence in games, and in dynamic systems in general, illustrating their viability as an alternative to more standard AI techniques.",
"title": ""
},
{
"docid": "666137f1b598a25269357d6926c0b421",
"text": "representation techniques. T he World Wide Web is possible because a set of widely established standards guarantees interoperability at various levels. Until now, the Web has been designed for direct human processing, but the next-generation Web, which Tim Berners-Lee and others call the “Semantic Web,” aims at machine-processible information.1 The Semantic Web will enable intelligent services—such as information brokers, search agents, and information filters—which offer greater functionality and interoperability than current stand-alone services. The Semantic Web will only be possible once further levels of interoperability have been established. Standards must be defined not only for the syntactic form of documents, but also for their semantic content. Notable among recent W3C standardization efforts are XML/XML schema and RDF/RDF schema, which facilitate semantic interoperability. In this article, we explain the role of ontologies in the architecture of the Semantic Web. We then briefly summarize key elements of XML and RDF, showing why using XML as a tool for semantic interoperability will be ineffective in the long run. We argue that a further representation and inference layer is needed on top of the Web’s current layers, and to establish such a layer, we propose a general method for encoding ontology representation languages into RDF/RDF schema. We illustrate the extension method by applying it to Ontology Interchange Language (OIL), an ontology representation and inference language.2",
"title": ""
},
{
"docid": "bc28f28d21605990854ac9649d244413",
"text": "Mobile devices can provide people with contextual information. This information may benefit a primary activity, assuming it is easily accessible. In this paper, we present DisplaySkin, a pose-aware device with a flexible display circling the wrist. DisplaySkin creates a kinematic model of a user's arm and uses it to place information in view, independent of body pose. In doing so, DisplaySkin aims to minimize the cost of accessing information without being intrusive. We evaluated our pose-aware display with a rotational pointing task, which was interrupted by a notification on DisplaySkin. Results show that a pose-aware display reduces the time required to respond to notifications on the wrist.",
"title": ""
},
{
"docid": "6fcfbe651d6c4f3a47bf07ee7d38eee2",
"text": "\"People-nearby applications\" (PNAs) are a form of ubiquitous computing that connect users based on their physical location data. One example is Grindr, a popular PNA that facilitates connections among gay and bisexual men. Adopting a uses and gratifications approach, we conducted two studies. In study one, 63 users reported motivations for Grindr use through open-ended descriptions. In study two, those descriptions were coded into 26 items that were completed by 525 Grindr users. Factor analysis revealed six uses and gratifications: social inclusion, sex, friendship, entertainment, romantic relationships, and location-based search. Two additional analyses examine (1) the effects of geographic location (e.g., urban vs. suburban/rural) on men's use of Grindr and (2) how Grindr use is related to self-disclosure of information. Results highlight how the mixed-mode nature of PNA technology may change the boundaries of online and offline space, and how gay and bisexual men navigate physical environments.",
"title": ""
},
{
"docid": "ae70b9ef5eeb6316b5b022662191cc4f",
"text": "The total harmonic distortion (THD) is an important performance criterion for almost any communication device. In most cases, the THD of a periodic signal, which has been processed in some way, is either measured directly or roughly estimated numerically, while analytic methods are employed only in a limited number of simple cases. However, the knowledge of the theoretical THD may be quite important for the conception and design of the communication equipment (e.g. transmitters, power amplifiers). The aim of this paper is to present a general theoretic approach, which permits to obtain an analytic closed-form expression for the THD. It is also shown that in some cases, an approximate analytic method, having good precision and being less sophisticated, may be developed. Finally, the mathematical technique, on which the proposed method is based, is described in the appendix.",
"title": ""
},
{
"docid": "96c14e4c9082920edb835e85ce99dc21",
"text": "When filling out privacy-related forms in public places such as hospitals or clinics, people usually are not aware that the sound of their handwriting leaks personal information. In this paper, we explore the possibility of eavesdropping on handwriting via nearby mobile devices based on audio signal processing and machine learning. By presenting a proof-of-concept system, WritingHacker, we show the usage of mobile devices to collect the sound of victims' handwriting, and to extract handwriting-specific features for machine learning based analysis. WritingHacker focuses on the situation where the victim's handwriting follows certain print style. An attacker can keep a mobile device, such as a common smart-phone, touching the desk used by the victim to record the audio signals of handwriting. Then the system can provide a word-level estimate for the content of the handwriting. To reduce the impacts of various writing habits and writing locations, the system utilizes the methods of letter clustering and dictionary filtering. Our prototype system's experimental results show that the accuracy of word recognition reaches around 50% - 60% under certain conditions, which reveals the danger of privacy leakage through the sound of handwriting.",
"title": ""
},
{
"docid": "f93e72b45a185e06d03d15791d312021",
"text": "BACKGROUND\nAbnormal scar development following burn injury can cause substantial physical and psychological distress to children and their families. Common burn scar prevention and management techniques include silicone therapy, pressure garment therapy, or a combination of both. Currently, no definitive, high-quality evidence is available for the effectiveness of topical silicone gel or pressure garment therapy for the prevention and management of burn scars in the paediatric population. Thus, this study aims to determine the effectiveness of these treatments in children.\n\n\nMETHODS\nA randomised controlled trial will be conducted at a large tertiary metropolitan children's hospital in Australia. Participants will be randomised to one of three groups: Strataderm® topical silicone gel only, pressure garment therapy only, or combined Strataderm® topical silicone gel and pressure garment therapy. Participants will include 135 children (45 per group) up to 16 years of age who are referred for scar management for a new burn. Children up to 18 years of age will also be recruited following surgery for burn scar reconstruction. Primary outcomes are scar itch intensity and scar thickness. Secondary outcomes include scar characteristics (e.g. colour, pigmentation, pliability, pain), the patient's, caregiver's and therapist's overall opinion of the scar, health service costs, adherence, health-related quality of life, treatment satisfaction and adverse effects. Measures will be completed on up to two sites per person at baseline and 1 week post scar management commencement, 3 months and 6 months post burn, or post burn scar reconstruction. Data will be analysed using descriptive statistics and univariate and multivariate regression analyses.\n\n\nDISCUSSION\nResults of this study will determine the effectiveness of three noninvasive scar interventions in children at risk of, and with, scarring post burn or post reconstruction.\n\n\nTRIAL REGISTRATION\nAustralian New Zealand Clinical Trials Registry, ACTRN12616001100482 . Registered on 5 August 2016.",
"title": ""
},
{
"docid": "4a2de9235a698a3b5e517446088d2ac6",
"text": "In recent years, there has been a growing interest in designing multi-robot systems (hereafter MRSs) to provide cost effective, fault-tolerant and reliable solutions to a variety of automated applications. Here, we review recent advancements in MRSs specifically designed for cooperative object transport, which requires the members of MRSs to coordinate their actions to transport objects from a starting position to a final destination. To achieve cooperative object transport, a wide range of transport, coordination and control strategies have been proposed. Our goal is to provide a comprehensive summary for this relatively heterogeneous and fast-growing body of scientific literature. While distilling the information, we purposefully avoid using hierarchical dichotomies, which have been traditionally used in the field of MRSs. Instead, we employ a coarse-grain approach by classifying each study based on the transport strategy used; pushing-only, grasping and caging. We identify key design constraints that may be shared among these studies despite considerable differences in their design methods. In the end, we discuss several open challenges and possible directions for future work to improve the performance of the current MRSs. Overall, we hope to increasethe visibility and accessibility of the excellent studies in the field and provide a framework that helps the reader to navigate through them more effectively.",
"title": ""
},
{
"docid": "e7c2134b446c4e0e7343ea8812673597",
"text": "Lexical embeddings can serve as useful representations for words for a variety of NLP tasks, but learning embeddings for phrases can be challenging. While separate embeddings are learned for each word, this is infeasible for every phrase. We construct phrase embeddings by learning how to compose word embeddings using features that capture phrase structure and context. We propose efficient unsupervised and task-specific learning objectives that scale our model to large datasets. We demonstrate improvements on both language modeling and several phrase semantic similarity tasks with various phrase lengths. We make the implementation of our model and the datasets available for general use.",
"title": ""
},
{
"docid": "0a2be958c7323d3421304d1613421251",
"text": "Stock price forecasting has aroused great concern in research of economy, machine learning and other fields. Time series analysis methods are usually utilized to deal with this task. In this paper, we propose to combine news mining and time series analysis to forecast inter-day stock prices. News reports are automatically analyzed with text mining techniques, and then the mining results are used to improve the accuracy of time series analysis algorithms. The experimental result on a half year Chinese stock market data indicates that the proposed algorithm can help to improve the performance of normal time series analysis in stock price forecasting significantly. Moreover, the proposed algorithm also performs well in stock price trend forecasting.",
"title": ""
},
{
"docid": "0bce954374d27d4679eb7562350674fc",
"text": "Humanoid robotics is attracting the interest of many research groups world-wide. In particular, developing humanoids requires the implementation of manipulation capabilities, which is still a most complex problem in robotics. This paper presents an overview of current activities in the development of humanoid robots, with special focus on manipulation. Then we discuss our current approach to the design and development of anthropomorphic sensorized hand and of anthropomorphic control and sensory-motor coordination schemes. Current achievements in the development of a robotic human hand prosthesis are described, together with preliminary experimental results, as well as in the implementation of biologically-inspired schemes for control and sensory-motor co-ordination in manipulation, derived from models of well-identified human brain areas.",
"title": ""
},
{
"docid": "269c1cb7fe42fd6403733fdbd9f109e3",
"text": "Myofibroblasts are the key players in extracellular matrix remodeling, a core phenomenon in numerous devastating fibrotic diseases. Not only in organ fibrosis, but also the pivotal role of myofibroblasts in tumor progression, invasion and metastasis has recently been highlighted. Myofibroblast targeting has gained tremendous attention in order to inhibit the progression of incurable fibrotic diseases, or to limit the myofibroblast-induced tumor progression and metastasis. In this review, we outline the origin of myofibroblasts, their general characteristics and functions during fibrosis progression in three major organs: liver, kidneys and lungs as well as in cancer. We will then discuss the state-of-the art drug targeting technologies to myofibroblasts in context of the above-mentioned organs and tumor microenvironment. The overall objective of this review is therefore to advance our understanding in drug targeting to myofibroblasts, and concurrently identify opportunities and challenges for designing new strategies to develop novel diagnostics and therapeutics against fibrosis and cancer.",
"title": ""
}
] | scidocsrr |
ec3c9b3126a6eef574a0668a06629594 | Comparison of Unigram, Bigram, HMM and Brill's POS tagging approaches for some South Asian languages | [
{
"docid": "89aa60cefe11758e539f45c5cba6f48a",
"text": "For undergraduate or advanced undergraduate courses in Classical Natural Language Processing, Statistical Natural Language Processing, Speech Recognition, Computational Linguistics, and Human Language Processing. An explosion of Web-based language techniques, merging of distinct fields, availability of phone-based dialogue systems, and much more make this an exciting time in speech and language processing. The first of its kind to thoroughly cover language technology at all levels and with all modern technologies this text takes an empirical approach to the subject, based on applying statistical and other machine-learning algorithms to large corporations. The authors cover areas that traditionally are taught in different courses, to describe a unified vision of speech and language processing. Emphasis is on practical applications and scientific evaluation. An accompanying Website contains teaching materials for instructors, with pointers to language processing resources on the Web. The Second Edition offers a significant amount of new and extended material. Supplements: Click on the \"Resources\" tab to View Downloadable Files:Solutions Power Point Lecture Slides Chapters 1-5, 8-10, 12-13 and 24 Now Available! For additional resourcse visit the author website: http://www.cs.colorado.edu/~martin/slp.html",
"title": ""
}
] | [
{
"docid": "b428ee2a14b91fee7bb80058e782774d",
"text": "Recurrent connectionist networks are important because they can perform temporally extended tasks, giving them considerable power beyond the static mappings performed by the now-familiar multilayer feedforward networks. This ability to perform highly nonlinear dynamic mappings makes these networks particularly interesting to study and potentially quite useful in tasks which have an important temporal component not easily handled through the use of simple tapped delay lines. Some examples are tasks involving recognition or generation of sequential patterns and sensorimotor control. This report examines a number of learning procedures for adjusting the weights in recurrent networks in order to train such networks to produce desired temporal behaviors from input-output stream examples. The procedures are all based on the computation of the gradient of performance error with respect to network weights, and a number of strategies for computing the necessary gradient information are described. Included here are approaches which are familiar and have been rst described elsewhere, along with several novel approaches. One particular purpose of this report is to provide uniform and detailed descriptions and derivations of the various techniques in order to emphasize how they relate to one another. Another important contribution of this report is a detailed analysis of the computational requirements of the various approaches discussed.",
"title": ""
},
{
"docid": "4cb25adf48328e1e9d871940a97fdff2",
"text": "This article is concerned with parameters identification problems and computer modeling of thrust generation subsystem for small unmanned aerial vehicles (UAV) quadrotor type. In this paper approach for computer model generation of dynamic process of thrust generation subsystem that consists of fixed pitch propeller, EC motor and power amplifier, is considered. Due to the fact that obtainment of aerodynamic characteristics of propeller via analytical approach is quite time-consuming, and taking into account that subsystem consists of as well as propeller, motor and power converter with microcontroller control system, which operating algorithm is not always available from manufacturer, receiving trusted computer model of thrust generation subsystem via analytical approach is impossible. Identification of the system under investigation is performed from the perspective of “black box” with the known qualitative description of proceeded there dynamic processes. For parameters identification of subsystem special laboratory rig that described in this paper was designed.",
"title": ""
},
{
"docid": "3e570e415690daf143ea30a8554b0ac8",
"text": "Innovative technology on intelligent processes for smart home applications that utilize Internet of Things (IoT) is mainly limited and dispersed. The available trends and gaps were investigated in this study to provide valued visions for technical environments and researchers. Thus, a survey was conducted to create a coherent taxonomy on the research landscape. An extensive search was conducted for articles on (a) smart homes, (b) IoT and (c) applications. Three databases, namely, IEEE Explore, ScienceDirect and Web of Science, were used in the article search. These databases comprised comprehensive literature that concentrate on IoT-based smart home applications. Subsequently, filtering process was achieved on the basis of intelligent processes. The final classification scheme outcome of the dataset contained 40 articles that were classified into four classes. The first class includes the knowledge engineering process that examines data representation to identify the means of accomplishing a task for IoT applications and their utilisation in smart homes. The second class includes papers on the detection process that uses artificial intelligence (AI) techniques to capture the possible changes in IoT-based smart home applications. The third class comprises the analytical process that refers to the use of AI techniques to understand the underlying problems in smart homes by inferring new knowledge and suggesting appropriate solutions for the problem. The fourth class comprises the control process that describes the process of measuring and instructing the performance of IoT-based smart home applications against the specifications with the involvement of intelligent techniques. The basic features of this evolving approach were then identified in the aspects of motivation of intelligent process utilisation for IoT-based smart home applications and open-issue restriction utilisation. The recommendations for the approval and utilisation of intelligent process for IoT-based smart home applications were also determined from the literature.",
"title": ""
},
{
"docid": "5288f4bbc2c9b8531042ce25b8df05b0",
"text": "Existing neural machine translation systems do not explicitly model what has been translated and what has not during the decoding phase. To address this problem, we propose a novel mechanism that separates the source information into two parts: translated Past contents and untranslated Future contents, which are modeled by two additional recurrent layers. The Past and Future contents are fed to both the attention model and the decoder states, which provides Neural Machine Translation (NMT) systems with the knowledge of translated and untranslated contents. Experimental results show that the proposed approach significantly improves the performance in Chinese-English, German-English, and English-German translation tasks. Specifically, the proposed model outperforms the conventional coverage model in terms of both the translation quality and the alignment error rate.",
"title": ""
},
{
"docid": "997a0392359ae999dfca6a0d339ea27f",
"text": "Five types of anomalous behaviour which may occur in paged virtual memory operating systems are defined. One type of anomaly, for example, concerns the fact that, with certain reference strings and paging algorithms, an increase in mean memory allocation may result in an increase in fault rate. Two paging algorithms, the page fault frequency and working set algorithms, are examined in terms of their anomaly potential, and reference string examples of various anomalies are presented. Two paging algorithm properties, the inclusion property and the generalized inclusion property, are discussed and the anomaly implications of these properties presented.",
"title": ""
},
{
"docid": "112f10eb825a484850561afa7c23e71f",
"text": "We describe an image based rendering approach that generalizes many current image based rendering algorithms, including light field rendering and view-dependent texture mapping. In particular, it allows for lumigraph-style rendering from a set of input cameras in arbitrary configurations (i.e., not restricted to a plane or to any specific manifold). In the case of regular and planar input camera positions, our algorithm reduces to a typical lumigraph approach. When presented with fewer cameras and good approximate geometry, our algorithm behaves like view-dependent texture mapping. The algorithm achieves this flexibility because it is designed to meet a set of specific goals that we describe. We demonstrate this flexibility with a variety of examples.",
"title": ""
},
{
"docid": "13150a58d86b796213501d26e4b41e5b",
"text": "In this work, CoMoO4@NiMoO4·xH2O core-shell heterostructure electrode is directly grown on carbon fabric (CF) via a feasible hydrothermal procedure with CoMoO4 nanowires (NWs) as the core and NiMoO4 nanosheets (NSs) as the shell. This core-shell heterostructure could provide fast ion and electron transfer, a large number of active sites, and good strain accommodation. As a result, the CoMoO4@NiMoO4·xH2O electrode yields high-capacitance performance with a high specific capacitance of 1582 F g-1, good cycling stability with the capacitance retention of 97.1% after 3000 cycles and good rate capability. The electrode also shows excellent mechanical flexibility. Also, a flexible Fe2O3 nanorods/CF electrode with enhanced electrochemical performance was prepared. A solid-state asymmetric supercapacitor device is successfully fabricated by using flexible CoMoO4@NiMoO4·xH2O as the positive electrode and Fe2O3 as the negative electrode. The asymmetric supercapacitor with a maximum voltage of 1.6 V demonstrates high specific energy (41.8 Wh kg-1 at 700 W kg-1), high power density (12000 W kg-1 at 26.7 Wh kg-1), and excellent cycle ability with the capacitance retention of 89.3% after 5000 cycles (at the current density of 3A g-1).",
"title": ""
},
{
"docid": "27d8022f6545503c1145d46dfd30c1db",
"text": "Research has demonstrated support for objectification theory and has established that music affects listeners’ thoughts and behaviors, however, no research to date joins these two fields. The present study considers potential effects of objectifying hip hop songs on female listeners. Among African American participants, exposure to an objectifying song resulted in increased self-objectification. However, among White participants, exposure to an objectifying song produced no measurable difference in self-objectification. This finding along with interview data suggests that white women distance themselves from objectifying hip hop songs, preventing negative effects of such music. EFFECTS OF OBJECTIFYING HIP HOP 3 The Effects of Objectifying Hip-Hop Lyrics on Female Listeners Music is an important part of adolescents’ and young adults’ lives. It is a way to learn about our social world, express emotions, and relax (Agbo-Quaye, 2010). Music today is highly social, shared and listened to in social situations as a way to bolster the mood or experience. However, the effects of music are not always positive. Considering this, how does music affect young adults? Specifically, how does hip-hop music with objectifying lyrics affect female listeners? To begin to answer this question, I will first present previous research on music’s effects, specifically the effects of aggressive, sexualized, and misogynistic lyrics. Next, I will discuss theories regarding the processing of lyrics. Another important aspect of this question is objectification theory, thus I will explain this theory and the evidence to support it. I will then discuss further applications of this theory to various visual media forms. Finally, I will describe gaps in research, as well as the importance of this study. Multiple studies have looked at the effects of music’s lyrics on listeners. Various aspects and trends in popular music have been considered. Anderson, Carnagey, and Eubanks (2003) examined the effects of songs with violent lyrics on listeners. Participants who had been exposed to songs with violent lyrics reported feeling more hostile than those who listened to songs with non-violent lyrics. Those exposed to violent lyrics also had an increase in aggressive thoughts. Researchers also considered trait hostility and found that, although correlated with state hostility, it did not account for the differences in condition. Other studies have explored music’s effects on behavior. One such study considered the effects of exposure to sexualized lyrics (Carpentier, Knobloch-Westerwick, & Blumhoff, 2007). After exposure to overtly sexualized pop lyrics, participants rated potential romantic partners EFFECTS OF OBJECTIFYING HIP HOP 4 with a stronger emphasis on sexual appeal in comparison to the ratings of those participants who heard nonsexual pop songs. Another study exposed male participants to either sexually aggressive misogynistic lyrics or neutral lyrics (Fischer & Greitemeyer, 2006). Those participants who had been exposed to the sexually aggressive lyrics demonstrated more aggressive behaviors towards females. The study was replicated with female participants and aggressive man-hating lyrics and similar results were found. Similarly, another study found that exposure to misogynous rap music influenced sexually aggressive behaviors (Barongan & Hall, 1995). 
Participants were exposed to either misogynous or neutral rap songs and then presented with three vignettes and were informed they would have to select one to share with a female confederate. Those who listened to the misogynous song selected the assaultive vignette at a significantly higher rate. The selection of the assaultive vignette demonstrated sexually aggressive behavior. These studies demonstrate the real and disturbing effects that music can have on listeners' behaviors. There are multiple theories as to why these lyrical effects are found. Some researchers suggest that social learning and cultivation theories are responsible (Sprankle & End, 2009). Both theories argue that our thoughts and our actions are influenced by what we see. Social learning theory suggests that observing others' behaviors and the responses they receive will influence the observer's behavior. As most rap music depicts the positive outcomes of increased sexual activity and objectification of women and downplays or omits the negative outcomes, listeners will start to engage in these activities and consider them acceptable. Cultivation theory argues that the more a person observes the world of sex portrayed in objectifying music, the more likely they are to believe that that world is reality. That is, the more they see \"evidence\" of the attitudes and behaviors portrayed in hip hop, the more likely they are to believe that such behaviors are normal. Cobb and Boettcher (2007) suggest that theories of priming and social stereotyping support the findings that exposure to misogynistic music increases sexist views. They also suggest that some observed gender differences in these responses are the result of different kinds of information processing. Women, as the targets of these lyrics, will process misogynistic lyrics centrally and will attempt to understand the information they are receiving more thoroughly. Thus, they are more likely to reject the lyrics. This finding highlights the importance of attention and how the lyrics are received and the impact these factors can have on listeners' reactions. These theories were supported in their study as participants exposed to misogynistic music demonstrated few differences from the control group, in which participants were not exposed to any music, in levels of hostile and benevolent sexism (Cobb & Boettcher, 2007). However, exposure to nonmisogynistic rap resulted in significantly increased levels of hostile and benevolent sexism. Researchers suggested that this may be because the processing of misogynistic lyrics meant that listeners were aware of the sexism present in the lyrics and thus the music was unable to prime their latent sexism. However, we live in a society in which rap music is associated with misogyny and violence (Fried, 1999). When participants listened to nonmisogynistic lyrics this association was primed. Because the lyrics weren't explicit, the processing involved was not critical and these assumptions went unchallenged and latent sexism was primed. Objectification theory provides another hypothesis for the processing and potential effects of media. Objectification theory posits that in a society in which women are frequently objectified, that is, seen as bodies that perform tasks rather than as people, women begin to self-objectify, or see themselves as objects for others' viewing (Fredrickson & Roberts, 1997). They internalize an outsider's perspective of their body. 
This self-objectification comes with anxiety and shame as well as frequent appearance monitoring (Fredrickson & Roberts, 1997). The authors suggest that the frequent objectification and self-objectification that occurs in our society could contribute to depression and eating disorders. They also suggest that frequent self-monitoring, shame, and anxiety could make it difficult to reach and maintain peak motivational states (that is, an extended period of time in which we are voluntarily absorbed in a challenging physical or mental task with the goal of accomplishing something that's considered worthwhile). These states are psychologically beneficial. Multiple studies support this theory. One such study looked at the effects of being in a self-objectifying state on the ability to reach and maintain a peak motivational state (Fredrickson, Roberts, Noll, Quinn, & Twenge, 1998). Participants were asked to try on either a swimsuit or a sweater and spend some time in that article of clothing. After this time they were asked questions about their self-objectifying behaviors and attitudes, such as depressed mood, self-surveillance, and body shame. They were then asked to complete a difficult math task, an activity meant to produce a peak motivational state. A similar study was completed with members of different ethnic groups (Hebl, King, & Lin, 2004). In this study a nearly identical procedure was followed. In addition, researchers aimed to create a more objectifying state for men, having them wear Speedos rather than swim trunks. In both of these studies female participants wearing swimsuits performed significantly worse on the math test than female participants wearing sweaters. There were no significant differences between the swim trunks and sweater conditions for male participants. However, when male participants wore Speedos they performed significantly worse on the math test. Further, the results of measures of self-objectifying behaviors, like body shame and surveillance, were significantly higher for those in the swimsuit condition. These findings demonstrate support for objectification theory and suggest that it crosses ethnic boundaries. The decreased math scores for men in Speedos suggest that it is possible to put anyone in a self-objectifying state. However, it is women who most often find themselves in this situation in our society. With empirical support for the central premises of objectification theory, research has turned to effects of popular media on self-objectification of women. One such study looked at the links between music video consumption, self-surveillance, body esteem, dieting status, depressive symptoms, and math confidence (Grabe & Hyde, 2009). Researchers found a positive relationship between music video consumption, self-objectification, and the host of psychological factors proposed by Fredrickson and Roberts, such that as music video consumption increased, so did self-objectifying behaviors. Another study looked at the effects of portrayals of the thin ideal in m",
"title": ""
},
{
"docid": "41a54cd203b0964a6c3d9c2b3addff46",
"text": "Increasing occupancy rates and revenue by improving customer experience is the aim of modern hospitality organizations. To achieve these results, hotel managers need to have a deep knowledge of customers’ needs, behavior, and preferences and be aware of the ways in which the services delivered create value for the customers and then stimulate their retention and loyalty. In this article a methodological framework to analyze the guest–hotel relationship and to profile hotel guests is discussed, focusing on the process of designing a customer information system and particularly the guest information matrix on which the system database will be built.",
"title": ""
},
{
"docid": "b333be40febd422eae4ae0b84b8b9491",
"text": "BACKGROUND\nRarely, basal cell carcinomas (BCCs) have the potential to become extensively invasive and destructive, a phenomenon that has led to the term \"locally advanced BCC\" (laBCC). We identified and described the diverse settings that could be considered \"locally advanced\".\n\n\nMETHODS\nThe panel of experts included oncodermatologists, dermatological and maxillofacial surgeons, pathologists, radiotherapists and geriatricians. During a 1-day workshop session, an interactive flow/sequence of questions and inputs was debated.\n\n\nRESULTS\nDiscussion of nine cases permitted us to approach consensus concerning what constitutes laBCC. The expert panel retained three major components for the complete assessment of laBCC cases: factors of complexity related to the tumour itself, factors related to the operability and the technical procedure, and factors related to the patient. Competing risks of death should be precisely identified. To ensure homogeneous multidisciplinary team (MDT) decisions in different clinical settings, the panel aimed to develop a practical tool based on the three components.\n\n\nCONCLUSION\nThe grid presented is not a definitive tool, but rather, it is a method for analysing the complexity of laBCC.",
"title": ""
},
{
"docid": "b0d11ab83aa6ae18d1a2be7c8e8803b5",
"text": "Judgments of trustworthiness from faces determine basic approach/avoidance responses and approximate the valence evaluation of faces that runs across multiple person judgments. Here, based on trustworthiness judgments and using a computer model for face representation, we built a model for representing face trustworthiness (study 1). Using this model, we generated novel faces with an increased range of trustworthiness and used these faces as stimuli in a functional Magnetic Resonance Imaging study (study 2). Although participants did not engage in explicit evaluation of the faces, the amygdala response changed as a function of face trustworthiness. An area in the right amygdala showed a negative linear response-as the untrustworthiness of faces increased so did the amygdala response. Areas in the left and right putamen, the latter area extended into the anterior insula, showed a similar negative linear response. The response in the left amygdala was quadratic--strongest for faces on both extremes of the trustworthiness dimension. The medial prefrontal cortex and precuneus also showed a quadratic response, but their response was strongest to faces in the middle range of the trustworthiness dimension.",
"title": ""
},
{
"docid": "508ce0c5126540ad7f46b8f375c50df8",
"text": "Sex differences in children’s toy preferences are thought by many to arise from gender socialization. However, evidence from patients with endocrine disorders suggests that biological factors during early development (e.g., levels of androgens) are influential. In this study, we found that vervet monkeys (Cercopithecus aethiops sabaeus) show sex differences in toy preferences similar to those documented previously in children. The percent of contact time with toys typically preferred by boys (a car and a ball) was greater in male vervets (n = 33) than in female vervets (n = 30) (P < .05), whereas the percent of contact time with toys typically preferred by girls (a doll and a pot) was greater in female vervets than in male vervets (P < .01). In contrast, contact time with toys preferred equally by boys and girls (a picture book and a stuffed dog) was comparable in male and female vervets. The results suggest that sexually differentiated object preferences arose early in human evolution, prior to the emergence of a distinct hominid lineage. This implies that sexually dimorphic preferences for features (e.g., color, shape, movement) may have evolved from differential selection pressures based on the different behavioral roles of males and females, and that evolved object feature preferences may contribute to present day sexually dimorphic toy preferences in children. D 2002 Elsevier Science Inc. All rights reserved.",
"title": ""
},
{
"docid": "8405f30ca5f4bd671b056e9ca1f4d8df",
"text": "The remarkable manipulative skill of the human hand is not the result of rapid sensorimotor processes, nor of fast or powerful effector mechanisms. Rather, the secret lies in the way manual tasks are organized and controlled by the nervous system. At the heart of this organization is prediction. Successful manipulation requires the ability both to predict the motor commands required to grasp, lift, and move objects and to predict the sensory events that arise as a consequence of these commands.",
"title": ""
},
{
"docid": "913777c94a55329ddf42955900a51096",
"text": "In this journal, Zimmerman (2004, 2011) has discussed preliminary tests that researchers often use to choose an appropriate method for comparing locations when the assumption of normality is doubtful. The conceptual problem with this approach is that such a two-stage process makes both the power and the significance of the entire procedure uncertain, as type I and type II errors are possible at both stages. A type I error at the first stage, for example, will obviously increase the probability of a type II error at the second stage. Based on the idea of Schmider et al. (2010), which proposes that simulated sets of sample data be ranked with respect to their degree of normality, this paper investigates the relationship between population non-normality and sample non-normality with respect to the performance of the ANOVA, Brown-Forsythe test, Welch test, and Kruskal-Wallis test when used with different distributions, sample sizes, and effect sizes. The overall conclusion is that the Kruskal-Wallis test is considerably less sensitive to the degree of sample normality when populations are distinctly non-normal and should therefore be the primary tool used to compare locations when it is known that populations are not at least approximately normal.",
"title": ""
},
{
"docid": "659deeead04953483a3ed6c5cc78cd76",
"text": "We describe ParsCit, a freely available, open-source imple entation of a reference string parsing package. At the core of ParsCit is a trained conditional random field (CRF) model used to label th token sequences in the reference string. A heuristic model wraps this core with added functionality to identify reference string s from a plain text file, and to retrieve the citation contexts . The package comes with utilities to run it as a web service or as a standalone uti lity. We compare ParsCit on three distinct reference string datasets and show that it compares well with other previously published work.",
"title": ""
},
{
"docid": "6f410e93fa7ab9e9c4a7a5710fea88e2",
"text": "We propose a fast, scalable locality-sensitive hashing method for the problem of retrieving similar physiological waveform time series. When compared to the naive k-nearest neighbor search, the method vastly speeds up the retrieval time of similar physiological waveforms without sacrificing significant accuracy. Our result shows that we can achieve 95% retrieval accuracy or better with up to an order of magnitude of speed-up. The extra time required in advance to create the optimal data structure is recovered when query quantity equals 15% of the repository, while the method incurs a trivial additional memory cost. We demonstrate the effectiveness of this method on an arterial blood pressure time series dataset extracted from the ICU physiological waveform repository of the MIMIC-II database.",
"title": ""
},
{
"docid": "fe77a632bae11d9333cd867960e47375",
"text": "Here we present a projection augmented reality (AR) based assistive robot, which we call the Pervasive Assistive Robot System (PARS). The PARS aims to improve the quality of life by of the elderly and less able-bodied. In particular, the proposed system will support dynamic display and monitoring systems, which will be helpful for older adults who have difficulty moving their limbs and who have a weak memory.We attempted to verify the usefulness of the PARS using various scenarios. We expected that PARSs will be used as assistive robots for people who experience physical discomfort in their daily lives.",
"title": ""
},
{
"docid": "97af4f8e35a7d773bb85969dd027800b",
"text": "For an intelligent transportation system (ITS), traffic incident detection is one of the most important issues, especially for urban area which is full of signaled intersections. In this paper, we propose a novel traffic incident detection method based on the image signal processing and hidden Markov model (HMM) classifier. First, a traffic surveillance system was set up at a typical intersection of china, traffic videos were recorded and image sequences were extracted for image database forming. Second, compressed features were generated through several image processing steps, image difference with FFT was used to improve the recognition rate. Finally, HMM was used for classification of traffic signal logics (East-West, West-East, South-North, North-South) and accident of crash, the total correct rate is 74% and incident recognition rate is 84%. We believe, with more types of incident adding to the database, our detection algorithm could serve well for the traffic surveillance system.",
"title": ""
}
] | scidocsrr |
b3560ff550f50e2f79dae2a24428fcbd | Energy-Efficient Indoor Localization of Smart Hand-Held Devices Using Bluetooth | [
{
"docid": "4c7d66d767c9747fdd167f1be793d344",
"text": "In this paper, we introduce a new approach to location estimation where, instead of locating a single client, we simultaneously locate a set of wireless clients. We present a Bayesian hierarchical model for indoor location estimation in wireless networks. We demonstrate that our model achieves accuracy that is similar to other published models and algorithms. By harnessing prior knowledge, our model eliminates the requirement for training data as compared with existing approaches, thereby introducing the notion of a fully adaptive zero profiling approach to location estimation.",
"title": ""
}
] | [
{
"docid": "58fe53f045228772b3a04dc0de095970",
"text": "Heterogeneous systems, that marry CPUs and GPUs together in a range of configurations, are quickly becoming the design paradigm for today's platforms because of their impressive parallel processing capabilities. However, in many existing heterogeneous systems, the GPU is only treated as an accelerator by the CPU, working as a slave to the CPU master. But recently we are starting to see the introduction of a new class of devices and changes to the system runtime model, which enable accelerators to be treated as first-class computing devices. To support programmability and efficiency of heterogeneous programming, the HSA foundation introduced the Heterogeneous System Architecture (HSA), which defines a platform and runtime architecture that provides rich support for OpenCL 2.0 features including shared virtual memory, dynamic parallelism, and improved atomic operations. In this paper, we provide the first comprehensive study of OpenCL 2.0 and HSA 1.0 execution, considering OpenCL 1.2 as the baseline. For workloads, we develop a suite of OpenCL micro-benchmarks designed to highlight the features of these emerging standards and also utilize real-world applications to better understand their impact at an application level. To fully exercise the new features provided by the HSA model, we experiment with a producer-consumer algorithm and persistent kernels. We find that by using HSA signals, we can remove 92% of the overhead due to synchronous kernel launches. In our real-world applications, the OpenCL 2.0 runtime achieves up to a 1.2X speedup, while the HSA 1.0 runtime achieves a 2.7X speedup over OpenCL 1.2.",
"title": ""
},
{
"docid": "16be435a946f8ff5d8d084f77373a6f3",
"text": "Answer selection is a core component in any question-answering systems. It aims to select correct answer sentences for a given question from a pool of candidate sentences. In recent years, many deep learning methods have been proposed and shown excellent results for this task. However, these methods typically require extensive parameter (and hyper-parameter) tuning, which gives rise to efficiency issues for large-scale datasets, and potentially makes them less portable across new datasets and domains (as re-tuning is usually required). In this paper, we propose an extremely efficient hybrid model (FastHybrid) that tackles the problem from both an accuracy and scalability point of view. FastHybrid is a light-weight model that requires little tuning and adaptation across different domains. It combines a fast deep model (which will be introduced in the method section) with an initial information retrieval model to effectively and efficiently handle answer selection. We introduce a new efficient attention mechanism in the hybrid model and demonstrate its effectiveness on several QA datasets. Experimental results show that although the hybrid uses no training data, its accuracy is often on-par with supervised deep learning techniques, while significantly reducing training and tuning costs across different domains.",
"title": ""
},
{
"docid": "b6ab7ac8029950f85d412b90963e679d",
"text": "Adaptive traffic signal control system is needed to avoid traffic congestion that has many disadvantages. This paper presents an adaptive traffic signal control system using camera as an input sensor that providing real-time traffic data. Principal Component Analysis (PCA) is used to analyze and to classify object on video frame for detecting vehicles. Distributed Constraint Satisfaction Problem (DCSP) method determine the duration of each traffic signal, based on counted number of vehicles at each lane. The system is implemented in embedded systems using BeagleBoard™.",
"title": ""
},
{
"docid": "6c3be94fe73ef79d711ef5f8b9c789df",
"text": "• Belief update based on m last rewards • Gaussian belief model instead of Beta • Limited lookahead to h steps and a myopic function in the horizon. • Noisy rewards Motivation: Correct sequential decision-making is critical for life success, and optimal approaches require signi!cant computational look ahead. However, simple models seem to explain people’s behavior. Questions: (1) Why we seem so simple compared to a rational agent? (2) What is the built-in model that we use to sequentially choose between courses of actions?",
"title": ""
},
{
"docid": "5454fbb1a924f3360a338c11a88bea89",
"text": "PURPOSE OF REVIEW\nThis review describes the most common motor neuron disease, ALS. It discusses the diagnosis and evaluation of ALS and the current understanding of its pathophysiology, including new genetic underpinnings of the disease. This article also covers other motor neuron diseases, reviews how to distinguish them from ALS, and discusses their pathophysiology.\n\n\nRECENT FINDINGS\nIn this article, the spectrum of cognitive involvement in ALS, new concepts about protein synthesis pathology in the etiology of ALS, and new genetic associations will be covered. This concept has changed over the past 3 to 4 years with the discovery of new genes and genetic processes that may trigger the disease. As of 2014, two-thirds of familial ALS and 10% of sporadic ALS can be explained by genetics. TAR DNA binding protein 43 kDa (TDP-43), for instance, has been shown to cause frontotemporal dementia as well as some cases of familial ALS, and is associated with frontotemporal dysfunction in ALS.\n\n\nSUMMARY\nThe anterior horn cells control all voluntary movement: motor activity, respiratory, speech, and swallowing functions are dependent upon signals from the anterior horn cells. Diseases that damage the anterior horn cells, therefore, have a profound impact. Symptoms of anterior horn cell loss (weakness, falling, choking) lead patients to seek medical attention. Neurologists are the most likely practitioners to recognize and diagnose damage or loss of anterior horn cells. ALS, the prototypical motor neuron disease, demonstrates the impact of this class of disorders. ALS and other motor neuron diseases can represent diagnostic challenges. Neurologists are often called upon to serve as a \"medical home\" for these patients: coordinating care, arranging for durable medical equipment, and leading discussions about end-of-life care with patients and caregivers. It is important for neurologists to be able to identify motor neuron diseases and to evaluate and treat patients affected by them.",
"title": ""
},
{
"docid": "d2b27ab3eb0aa572fdf8f8e3de6ae952",
"text": "Both industry and academia have extensively investigated hardware accelerations. To address the demands in increasing computational capability and memory requirement, in this work, we propose the structured weight matrices (SWM)-based compression technique for both Field Programmable Gate Array (FPGA) and application-specific integrated circuit (ASIC) implementations. In the algorithm part, the SWM-based framework adopts block-circulant matrices to achieve a fine-grained tradeoff between accuracy and compression ratio. The SWM-based technique can reduce computational complexity from O(n2) to O(nlog n) and storage complexity from O(n2) to O(n) for each layer and both training and inference phases. For FPGA implementations on deep convolutional neural networks (DCNNs), we achieve at least 152X and 72X improvement in performance and energy efficiency, respectively using the SWM-based framework, compared with the baseline of IBM TrueNorth processor under same accuracy constraints using the data set of MNIST, SVHN, and CIFAR-10. For FPGA implementations on long short term memory (LSTM) networks, the proposed SWM-based LSTM can achieve up to 21X enhancement in performance and 33.5X gains in energy efficiency compared with the ESE accelerator. For ASIC implementations, the proposed SWM-based ASIC design exhibits impressive advantages in terms of power, throughput, and energy efficiency. Experimental results indicate that this method is greatly suitable for applying DNNs onto both FPGAs and mobile/IoT devices.",
"title": ""
},
{
"docid": "2efd26fc1e584aa5f70bdf9d24e5c2cd",
"text": "Bridging cultures that have often been distant, Julia combines expertise from the diverse fields of computer science and computational science to create a new approach to numerical computing. Julia is designed to be easy and fast and questions notions generally held to be “laws of nature” by practitioners of numerical computing: 1. High-level dynamic programs have to be slow. 2. One must prototype in one language and then rewrite in another language for speed or deployment. 3. There are parts of a system appropriate for the programmer, and other parts that are best left untouched as they have been built by the experts. We introduce the Julia programming language and its design—a dance between specialization and abstraction. Specialization allows for custom treatment. Multiple dispatch, a technique from computer science, picks the right algorithm for the right circumstance. Abstraction, which is what good computation is really about, recognizes what remains the same after differences are stripped away. Abstractions in mathematics are captured as code through another technique from computer science, generic programming. Julia shows that one can achieve machine performance without sacrificing human convenience.",
"title": ""
},
{
"docid": "fef4383a5a06687636ba4001ab0e510c",
"text": "In this paper, a depth camera-based novel approach for human activity recognition is presented using robust depth silhouettes context features and advanced Hidden Markov Models (HMMs). During HAR framework, at first, depth maps are processed to identify human silhouettes from noisy background by considering frame differentiation constraints of human body motion and compute depth silhouette area for each activity to track human movements in a scene. From the depth silhouettes context features, temporal frames information are computed for intensity differentiation measurements, depth history features are used to store gradient orientation change in overall activity sequence and motion difference features are extracted for regional motion identification. Then, these features are processed by Principal component analysis for dimension reduction and kmean clustering for code generation to make better activity representation. Finally, we proposed a new way to model, train and recognize different activities using advanced HMM. Each activity has been chosen with the highest likelihood value. Experimental results show superior recognition rate, resulting up to the mean recognition of 57.69% over the state of the art methods for fifteen daily routine activities using IM-Daily Depth Activity dataset. In addition, MSRAction3D dataset also showed some promising results.",
"title": ""
},
{
"docid": "7a37df81ad70697549e6da33384b4f19",
"text": "Water scarcity is now one of the major global crises, which has affected many aspects of human health, industrial development and ecosystem stability. To overcome this issue, water desalination has been employed. It is a process to remove salt and other minerals from saline water, and it covers a variety of approaches from traditional distillation to the well-established reverse osmosis. Although current water desalination methods can effectively provide fresh water, they are becoming increasingly controversial due to their adverse environmental impacts including high energy intensity and highly concentrated brine waste. For millions of years, microorganisms, the masters of adaptation, have survived on Earth without the excessive use of energy and resources or compromising their ambient environment. This has encouraged scientists to study the possibility of using biological processes for seawater desalination and the field has been exponentially growing ever since. Here, the term biodesalination is offered to cover all of the techniques which have their roots in biology for producing fresh water from saline solution. In addition to reviewing and categorizing biodesalination processes for the first time, this review also reveals unexplored research areas in biodesalination having potential to be used in water treatment.",
"title": ""
},
{
"docid": "7f47a4b5152acf7e38d5c39add680f9d",
"text": "unit of computation and a processor a piece of physical hardware In addition to reading to and writing from local memory a process can send and receive messages by making calls to a library of message passing routines The coordinated exchange of messages has the e ect of synchronizing processes This can be achieved by the synchronous exchange of messages in which the sending operation does not terminate until the receive operation has begun A di erent form of synchronization occurs when a message is sent asynchronously but the receiving process must wait or block until the data arrives Processes can be mapped to physical processors in various ways the mapping employed does not a ect the semantics of a program In particular multiple processes may be mapped to a single processor The message passing model provides a mechanism for talking about locality data contained in the local memory of a process are close and other data are remote We now examine some other properties of the message passing programming model performance mapping independence and modularity",
"title": ""
},
{
"docid": "a33e8a616955971014ceea9da1e8fcbe",
"text": "Highlights Auditory middle and late latency responses can be recorded reliably from ear-EEG.For sources close to the ear, ear-EEG has the same signal-to-noise-ratio as scalp.Ear-EEG is an excellent match for power spectrum-based analysis. A method for measuring electroencephalograms (EEG) from the outer ear, so-called ear-EEG, has recently been proposed. The method could potentially enable robust recording of EEG in natural environments. The objective of this study was to substantiate the ear-EEG method by using a larger population of subjects and several paradigms. For rigor, we considered simultaneous scalp and ear-EEG recordings with common reference. More precisely, 32 conventional scalp electrodes and 12 ear electrodes allowed a thorough comparison between conventional and ear electrodes, testing several different placements of references. The paradigms probed auditory onset response, mismatch negativity, auditory steady-state response and alpha power attenuation. By comparing event related potential (ERP) waveforms from the mismatch response paradigm, the signal measured from the ear electrodes was found to reflect the same cortical activity as that from nearby scalp electrodes. It was also found that referencing the ear-EEG electrodes to another within-ear electrode affects the time-domain recorded waveform (relative to scalp recordings), but not the timing of individual components. It was furthermore found that auditory steady-state responses and alpha-band modulation were measured reliably with the ear-EEG modality. Finally, our findings showed that the auditory mismatch response was difficult to monitor with the ear-EEG. We conclude that ear-EEG yields similar performance as conventional EEG for spectrogram-based analysis, similar timing of ERP components, and equal signal strength for sources close to the ear. Ear-EEG can reliably measure activity from regions of the cortex which are located close to the ears, especially in paradigms employing frequency-domain analyses.",
"title": ""
},
{
"docid": "4f1070b988605290c1588918a716cef2",
"text": "The aim of this paper was to predict the static bending modulus of elasticity (MOES) and modulus of rupture (MOR) of Scots pine (Pinus sylvestris L.) wood using three nondestructive techniques. The mean values of the dynamic modulus of elasticity based on flexural vibration (MOEF), longitudinal vibration (MOELV), and indirect ultrasonic (MOEUS) were 13.8, 22.3, and 30.9 % higher than the static modulus of elasticity (MOES), respectively. The reduction of this difference, taking into account the shear deflection effect in the output values for static bending modulus of elasticity, was also discussed in this study. The three dynamic moduli of elasticity correlated well with the static MOES and MOR; correlation coefficients ranged between 0.68 and 0.96. The correlation coefficients between the dynamic moduli and MOES were higher than those between the dynamic moduli and MOR. The highest correlation between the dynamic moduli and static bending properties was obtained by the flexural vibration technique in comparison with longitudinal vibration and indirect ultrasonic techniques. Results showed that there was no obvious relationship between the density and the acoustic wave velocity that was obtained from the longitudinal vibration and ultrasonic techniques.",
"title": ""
},
{
"docid": "6921cd9c2174ca96ec0061ae2dd881eb",
"text": "Modern Massively Multiplayer Online Role-Playing Games (MMORPGs) provide lifelike virtual environments in which players can conduct a variety of activities including combat, trade, and chat with other players. While the game world and the available actions therein are inspired by their offline counterparts, the games' popularity and dedicated fan base are testaments to the allure of novel social interactions granted to people by allowing them an alternative life as a new character and persona. In this paper we investigate the phenomenon of \"gender swapping,\" which refers to players choosing avatars of genders opposite to their natural ones. We report the behavioral patterns observed in players of Fairyland Online, a globally serviced MMORPG, during social interactions when playing as in-game avatars of their own real gender or gender-swapped. We also discuss the effect of gender role and self-image in virtual social situations and the potential of our study for improving MMORPG quality and detecting online identity frauds.",
"title": ""
},
{
"docid": "44e5c86afbe3814ad718aa27880941c4",
"text": "This paper introduces genetic algorithms (GA) as a complete entity, in which knowledge of this emerging technology can be integrated together to form the framework of a design tool for industrial engineers. An attempt has also been made to explain “why’’ and “when” GA should be used as an optimization tool.",
"title": ""
},
{
"docid": "93a39df6ee080e359f50af46d02cdb71",
"text": "Mobile edge computing (MEC) providing information technology and cloud-computing capabilities within the radio access network is an emerging technique in fifth-generation networks. MEC can extend the computational capacity of smart mobile devices (SMDs) and economize SMDs’ energy consumption by migrating the computation-intensive task to the MEC server. In this paper, we consider a multi-mobile-users MEC system, where multiple SMDs ask for computation offloading to a MEC server. In order to minimize the energy consumption on SMDs, we jointly optimize the offloading selection, radio resource allocation, and computational resource allocation coordinately. We formulate the energy consumption minimization problem as a mixed interger nonlinear programming (MINLP) problem, which is subject to specific application latency constraints. In order to solve the problem, we propose a reformulation-linearization-technique-based Branch-and-Bound (RLTBB) method, which can obtain the optimal result or a suboptimal result by setting the solving accuracy. Considering the complexity of RTLBB cannot be guaranteed, we further design a Gini coefficient-based greedy heuristic (GCGH) to solve the MINLP problem in polynomial complexity by degrading the MINLP problem into the convex problem. Many simulation results demonstrate the energy saving enhancements of RLTBB and GCGH.",
"title": ""
},
{
"docid": "28352c478552728dddf09a2486f6c63c",
"text": "Motion blur due to camera motion can significantly degrade the quality of an image. Since the path of the camera motion can be arbitrary, deblurring of motion blurred images is a hard problem. Previous methods to deal with this problem have included blind restoration of motion blurred images, optical correction using stabilized lenses, and special CMOS sensors that limit the exposure time in the presence of motion. In this paper, we exploit the fundamental trade off between spatial resolution and temporal resolution to construct a hybrid camera that can measure its own motion during image integration. The acquired motion information is used to compute a point spread function (PSF) that represents the path of the camera during integration. This PSF is then used to deblur the image. To verify the feasibility of hybrid imaging for motion deblurring, we have implemented a prototype hybrid camera. This prototype system was evaluated in different indoor and outdoor scenes using long exposures and complex camera motion paths. The results show that, with minimal resources, hybrid imaging outperforms previous approaches to the motion blur problem. We conclude with a brief discussion on how our ideas can be extended beyond the case of global camera motion to the case where individual objects in the scene move with different velocities.",
"title": ""
},
{
"docid": "c784bfbd522bb4c9908c3f90a31199fe",
"text": "Vedolizumab (VDZ) inhibits α4β7 integrins and is used to target intestinal immune responses in patients with inflammatory bowel disease, which is considered to be relatively safe. Here we report on a fatal complication following VDZ administration. A 64-year-old female patient with ulcerative colitis (UC) refractory to tumor necrosis factor inhibitors was treated with VDZ. One week after the second VDZ infusion, she was admitted to hospital with severe diarrhea and systemic inflammatory response syndrome (SIRS). Blood stream infections were ruled out, and endoscopy revealed extensive ulcerations of the small intestine covered with pseudomembranes, reminiscent of invasive candidiasis or mesenteric ischemia. Histology confirmed subtotal destruction of small intestinal epithelia and colonization with Candida. Moreover, small mesenteric vessels were occluded by hyaline thrombi, likely as a result of SIRS, while perfusion of large mesenteric vessels was not compromised. Beta-D-glucan concentrations were highly elevated, and antimycotic therapy was initiated for suspected invasive candidiasis but did not result in any clinical benefit. Given the non-responsiveness to anti-infective therapies, an autoimmune phenomenon was suspected and immunosuppressive therapy was escalated. However, the patient eventually died from multi-organ failure. This case should raise the awareness for rare but severe complications related to immunosuppressive therapy, particularly in high risk patients.",
"title": ""
},
{
"docid": "88e582927c4e4018cb4071eeeb6feff4",
"text": "While previous studies have correlated the Dark Triad traits (i.e., narcissism, psychopathy, and Machiavellianism) with a preference for short-term relationships, little research has addressed possible correlations with short-term relationship sub-types. In this online study using Amazon’s Mechanical Turk system (N = 210) we investigated the manner in which scores on the Dark Triad relate to the selection of different mating environments using a budget-allocation task. Overall, the Dark Triad were positively correlated with preferences for short-term relationships and negatively correlated with preferences for a long-term relationship. Specifically, narcissism was uniquely correlated with preferences for one-night stands and friends-with-benefits and psychopathy was uniquely correlated with preferences for bootycall relationships. Both narcissism and psychopathy were negatively correlated with preferences for serious romantic relationships. In mediation analyses, psychopathy partially mediated the sex difference in preferences for booty-call relationships and narcissism partially mediated the sex difference in preferences for one-night stands. In addition, the sex difference in preference for serious romantic relationships was partially mediated by both narcissism and psychopathy. It appears the Dark Triad traits facilitate the adoption of specific mating environments providing fit with people’s personality traits. 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "7ce79a08969af50c1712f0e291dd026c",
"text": "Collaborative filtering (CF) is valuable in e-commerce, and for direct recommendations for music, movies, news etc. But today's systems have several disadvantages, including privacy risks. As we move toward ubiquitous computing, there is a great potential for individuals to share all kinds of information about places and things to do, see and buy, but the privacy risks are severe. In this paper we describe a new method for collaborative filtering which protects the privacy of individual data. The method is based on a probabilistic factor analysis model. Privacy protection is provided by a peer-to-peer protocol which is described elsewhere, but outlined in this paper. The factor analysis approach handles missing data without requiring default values for them. We give several experiments that suggest that this is most accurate method for CF to date. The new algorithm has other advantages in speed and storage over previous algorithms. Finally, we suggest applications of the approach to other kinds of statistical analyses of survey or questionaire data.",
"title": ""
},
{
"docid": "9c1e518c80dfbf201291923c9c55f1fd",
"text": "Computation underlies the organization of cells into higher-order structures, for example during development or the spatial association of bacteria in a biofilm. Each cell performs a simple computational operation, but when combined with cell–cell communication, intricate patterns emerge. Here we study this process by combining a simple genetic circuit with quorum sensing to produce more complex computations in space. We construct a simple NOR logic gate in Escherichia coli by arranging two tandem promoters that function as inputs to drive the transcription of a repressor. The repressor inactivates a promoter that serves as the output. Individual colonies of E. coli carry the same NOR gate, but the inputs and outputs are wired to different orthogonal quorum-sensing ‘sender’ and ‘receiver’ devices. The quorum molecules form the wires between gates. By arranging the colonies in different spatial configurations, all possible two-input gates are produced, including the difficult XOR and EQUALS functions. The response is strong and robust, with 5- to >300-fold changes between the ‘on’ and ‘off’ states. This work helps elucidate the design rules by which simple logic can be harnessed to produce diverse and complex calculations by rewiring communication between cells.",
"title": ""
}
] | scidocsrr |
763338ac575cee16828202cf29effc84 | Dominant Color Embedded Markov Chain Model for Object Image Retrieval | [
{
"docid": "0084d9c69d79a971e7139ab9720dd846",
"text": "ÐRetrieving images from large and varied collections using image content as a key is a challenging and important problem. We present a new image representation that provides a transformation from the raw pixel data to a small set of image regions that are coherent in color and texture. This aBlobworldo representation is created by clustering pixels in a joint color-texture-position feature space. The segmentation algorithm is fully automatic and has been run on a collection of 10,000 natural images. We describe a system that uses the Blobworld representation to retrieve images from this collection. An important aspect of the system is that the user is allowed to view the internal representation of the submitted image and the query results. Similar systems do not offer the user this view into the workings of the system; consequently, query results from these systems can be inexplicable, despite the availability of knobs for adjusting the similarity metrics. By finding image regions that roughly correspond to objects, we allow querying at the level of objects rather than global image properties. We present results indicating that querying for images using Blobworld produces higher precision than does querying using color and texture histograms of the entire image in cases where the image contains distinctive objects. Index TermsÐSegmentation and grouping, image retrieval, image querying, clustering, Expectation-Maximization.",
"title": ""
}
] | [
{
"docid": "733b998017da30fe24521158a6aaa749",
"text": "Memristor crossbars were fabricated at 40 nm half-pitch, using nanoimprint lithography on the same substrate with Si metal-oxide-semiconductor field effect transistor (MOS FET) arrays to form fully integrated hybrid memory resistor (memristor)/transistor circuits. The digitally configured memristor crossbars were used to perform logic functions, to serve as a routing fabric for interconnecting the FETs and as the target for storing information. As an illustrative demonstration, the compound Boolean logic operation (A AND B) OR (C AND D) was performed with kilohertz frequency inputs, using resistor-based logic in a memristor crossbar with FET inverter/amplifier outputs. By routing the output signal of a logic operation back onto a target memristor inside the array, the crossbar was conditionally configured by setting the state of a nonvolatile switch. Such conditional programming illuminates the way for a variety of self-programmed logic arrays, and for electronic synaptic computing.",
"title": ""
},
{
"docid": "e51f7fde238b0896df22d196b8c59c1a",
"text": "The aim of color constancy is to remove the effect of the color of the light source. As color constancy is inherently an ill-posed problem, most of the existing color constancy algorithms are based on specific imaging assumptions such as the grey-world and white patch assumptions. In this paper, 3D geometry models are used to determine which color constancy method to use for the different geometrical regions found in images. To this end, images are first classified into stages (rough 3D geometry models). According to the stage models, images are divided into different regions using hard and soft segmentation. After that, the best color constancy algorithm is selected for each geometry segment. As a result, light source estimation is tuned to the global scene geometry. Our algorithm opens the possibility to estimate the remote scene illumination color, by distinguishing nearby light source from distant illuminants. Experiments on large scale image datasets show that the proposed algorithm outperforms state-of-the-art single color constancy algorithms with an improvement of almost 14% of median angular error. When using an ideal classifier (i.e, all of the test images are correctly classified into stages), the performance of the proposed method achieves an improvement of 31% of median angular error compared to the best-performing single color constancy algorithm.",
"title": ""
},
{
"docid": "1cbac59380ee798a621d58a6de35361f",
"text": "With the fast development of modern power semiconductors in the last years, the development of current measurement technologies has to adapt to this evolution. The challenge for the power electronic engineer is to provide a current sensor with a high bandwidth and a high immunity against external interferences. Rogowski current transducers are popular for monitoring transient currents in power electronic applications without interferences caused by external magnetic fields. But the trend of even higher current and voltage gradients generates a dilemma regarding the Rogowski current transducer technology. On the one hand, a high current gradient requires a current sensor with a high bandwidth. On the other hand, high voltage gradients forces to use a shielding around the Rogowski coil in order to protect the measurement signal from a capacitive displacement current caused by an unavoidable capacitive coupling to the setup, which reduces the bandwidth substantially. This paper presents a new Rogowski coil design which allows to measure high current gradients close to high voltage gradients without interferences and without reducing the bandwidth by a shielding. With this new measurement technique, it is possible to solve the mentioned dilemma and to get ready to measure the current of modern power semiconductors such as SiC and GaN with a Rogowski current transducer.",
"title": ""
},
{
"docid": "1d8765a407f2b9f8728982f54ddb6ae1",
"text": "Objective: To transform heterogeneous clinical data from electronic health records into clinically meaningful constructed features using data driven method that rely, in part, on temporal relations among data. Materials and Methods: The clinically meaningful representations of medical concepts and patients are the key for health analytic applications. Most of existing approaches directly construct features mapped to raw data (e.g., ICD or CPT codes), or utilize some ontology mapping such as SNOMED codes. However, none of the existing approaches leverage EHR data directly for learning such concept representation. We propose a new way to represent heterogeneous medical concepts (e.g., diagnoses, medications and procedures) based on co-occurrence patterns in longitudinal electronic health records. The intuition behind the method is to map medical concepts that are co-occuring closely in time to similar concept vectors so that their distance will be small. We also derive a simple method to construct patient vectors from the related medical concept vectors. Results: We evaluate similar medical concepts across diagnosis, medication and procedure. The results show xx% relevancy between similar pairs of medical concepts. Our proposed representation significantly improves the predictive modeling performance for onset of heart failure (HF), where classification methods (e.g. logistic regression, neural network, support vector machine and K-nearest neighbors) achieve up to 23% improvement in area under the ROC curve (AUC) using this proposed representation. Conclusion: We proposed an effective method for patient and medical concept representation learning. The resulting representation can map relevant concepts together and also improves predictive modeling performance.",
"title": ""
},
{
"docid": "d0e8265bf57729b74375c9b476c4b028",
"text": "As experts in the health care of children and adolescents, pediatricians may be called on to advise legislators concerning the potential impact of changes in the legal status of marijuana on adolescents. Parents, too, may look to pediatricians for advice as they consider whether to support state-level initiatives that propose to legalize the use of marijuana for medical purposes or to decriminalize possession of small amounts of marijuana. This policy statement provides the position of the American Academy of Pediatrics on the issue of marijuana legalization, and the accompanying technical report (available online) reviews what is currently known about the relationship between adolescents' use of marijuana and its legal status to better understand how change might influence the degree of marijuana use by adolescents in the future.",
"title": ""
},
{
"docid": "776b1f07dfd93ff78e97a6a90731a15b",
"text": "Precise destination prediction of taxi trajectories can benefit many intelligent location based services such as accurate ad for passengers. Traditional prediction approaches, which treat trajectories as one-dimensional sequences and process them in single scale, fail to capture the diverse two-dimensional patterns of trajectories in different spatial scales. In this paper, we propose T-CONV which models trajectories as two-dimensional images, and adopts multi-layer convolutional neural networks to combine multi-scale trajectory patterns to achieve precise prediction. Furthermore, we conduct gradient analysis to visualize the multi-scale spatial patterns captured by T-CONV and extract the areas with distinct influence on the ultimate prediction. Finally, we integrate multiple local enhancement convolutional fields to explore these important areas deeply for better prediction. Comprehensive experiments based on real trajectory data show that T-CONV can achieve higher accuracy than the state-of-the-art methods.",
"title": ""
},
{
"docid": "1057ed913b857d0b22f5c535f919d035",
"text": "The purpose of this series is to convey the principles governing our aesthetic senses. Usually meaning visual perception, aesthetics is not merely limited to the ocular apparatus. The concept of aesthetics encompasses both the time-arts such as music, theatre, literature and film, as well as space-arts such as paintings, sculpture and architecture.",
"title": ""
},
{
"docid": "c4ad78f8d997fbbca0f376557276218c",
"text": "To coupe with the difficulties in the process of inspection and classification of defects in Printed Circuit Board (PCB), other researchers have proposed many methods. However, few of them published their dataset before, which hindered the introduction and comparison of new methods. In this paper, we published a synthesized PCB dataset containing 1386 images with 6 kinds of defects for the use of detection, classification and registration tasks. Besides, we proposed a reference based method to inspect and trained an end-to-end convolutional neural network to classify the defects. Unlike conventional approaches that require pixel-by-pixel processing, our method firstly locate the defects and then classify them by neural networks, which shows superior performance on our dataset.",
"title": ""
},
{
"docid": "e9d42505aebdcd2307852cf13957d407",
"text": "We report a broadband polarization-independent perfect absorber with wide-angle near unity absorbance in the visible regime. Our structure is composed of an array of thin Au squares separated from a continuous Au film by a phase change material (Ge2Sb2Te5) layer. It shows that the near perfect absorbance is flat and broad over a wide-angle incidence up to 80° for either transverse electric or magnetic polarization due to a high imaginary part of the dielectric permittivity of Ge2Sb2Te5. The electric field, magnetic field and current distributions in the absorber are investigated to explain the physical origin of the absorbance. Moreover, we carried out numerical simulations to investigate the temporal variation of temperature in the Ge2Sb2Te5 layer and to show that the temperature of amorphous Ge2Sb2Te5 can be raised from room temperature to > 433 K (amorphous-to-crystalline phase transition temperature) in just 0.37 ns with a low light intensity of 95 nW/μm(2), owing to the enhanced broadband light absorbance through strong plasmonic resonances in the absorber. The proposed phase-change metamaterial provides a simple way to realize a broadband perfect absorber in the visible and near-infrared (NIR) regions and is important for a number of applications including thermally controlled photonic devices, solar energy conversion and optical data storage.",
"title": ""
},
{
"docid": "772b3f74b6eecf82099b2e5b3709e507",
"text": "A common prerequisite for many vision-based driver assistance systems is the knowledge of the vehicle's own movement. In this paper we propose a novel approach for estimating the egomotion of the vehicle from a sequence of stereo images. Our method is directly based on the trifocal geometry between image triples, thus no time expensive recovery of the 3-dimensional scene structure is needed. The only assumption we make is a known camera geometry, where the calibration may also vary over time. We employ an Iterated Sigma Point Kalman Filter in combination with a RANSAC-based outlier rejection scheme which yields robust frame-to-frame motion estimation even in dynamic environments. A high-accuracy inertial navigation system is used to evaluate our results on challenging real-world video sequences. Experiments show that our approach is clearly superior compared to other filtering techniques in terms of both, accuracy and run-time.",
"title": ""
},
{
"docid": "dc91774abd58e19066a110bbff9fa306",
"text": "Autonomous Vehicle (AV) or self-driving vehicle technology promises to provide many economical and societal benefits and impacts. Safety is on the top of these benefits. Trajectory or path planning is one of the essential and critical tasks in operating the autonomous vehicle. In this paper we are tackling the problem of trajectory planning for fully-autonomous vehicles. Our use cases are designed for autonomous vehicles in a cloud based connected vehicle environment. This paper presents a method for selecting safe-optimal trajectory in autonomous vehicles. Selecting the safe trajectory in our work mainly based on using Big Data mining and analysis of real-life accidents data and real-time connected vehicles' data. The decision of selecting this trajectory is done automatically without any human intervention. The human touches in this scenario could be only at defining and prioritizing the driving preferences and concerns at the beginning of the planned trip. Safety always overrides the ranked user preferences listed in this work. The output of this work is a safe trajectory that represented by the position, ETA, distance, and the estimated fuel consumption for the entire trip.",
"title": ""
},
{
"docid": "f0f7bd0223d69184f3391aaf790a984d",
"text": "Smart buildings equipped with state-of-the-art sensors and meters are becoming more common. Large quantities of data are being collected by these devices. For a single building to benefit from its own collected data, it will need to wait for a long time to collect sufficient data to build accurate models to help improve the smart buildings systems. Therefore, multiple buildings need to cooperate to amplify the benefits from the collected data and speed up the model building processes. Apparently, this is not so trivial and there are associated challenges. In this paper, we study the importance of collaborative data analytics for smart buildings, its benefits, as well as presently possible models of carrying it out. Furthermore, we present a framework for collaborative fault detection and diagnosis as a case of collaborative data analytics for smart buildings. We also provide a preliminary analysis of the energy efficiency benefit of such collaborative framework for smart buildings. The result shows that significant energy savings can be achieved for smart buildings using collaborative data analytics.",
"title": ""
},
{
"docid": "e462c0cfc1af657cb012850de1b7b717",
"text": "ASSOCIATIONS BETWEEN PHYSICAL ACTIVITY, PHYSICAL FITNESS, AND FALLS RISK IN HEALTHY OLDER INDIVIDUALS Christopher Deane Vaughan Old Dominion University, 2016 Chair: Dr. John David Branch Objective: The purpose of this study was to assess relationships between objectively measured physical activity, physical fitness, and the risk of falling. Methods: A total of n=29 subjects completed the study, n=15 male and n=14 female age (mean±SD)= 70± 4 and 71±3 years, respectively. In a single testing session, subjects performed pre-post evaluations of falls risk (Short-from PPA) with a 6-minute walking intervention between the assessments. The falls risk assessment included tests of balance, knee extensor strength, proprioception, reaction time, and visual contrast. The sub-maximal effort 6-minute walking task served as an indirect assessment of cardiorespiratory fitness. Subjects traversed a walking mat to assess for variation in gait parameters during the walking task. Additional center of pressure (COP) balance measures were collected via forceplate during the falls risk assessments. Subjects completed a Modified Falls Efficacy Scale (MFES) falls confidence survey. Subjects’ falls histories were also collected. Subjects wore hip mounted accelerometers for a 7-day period to assess time spent in moderate to vigorous physical activity (MVPA). Results: Males had greater body mass and height than females (p=0.001, p=0.001). Males had a lower falls risk than females at baseline (p=0.043) and post-walk (p=0.031). MFES scores were similar among all subjects (Median = 10). Falls history reporting revealed; fallers (n=8) and non-fallers (n=21). No significant relationships were found between main outcome measures of MVPA, cardiorespiratory fitness, or falls risk. Fallers had higher knee extensor strength than non-fallers at baseline (p=0.028) and post-walk (p=0.011). Though not significant (p=0.306), fallers spent 90 minutes more time in MVPA than non-fallers (427.8±244.6 min versus 335.7±199.5). Variations in gait and COP variables were not significant. Conclusions: This study found no apparent relationship between objectively measured physical activity, indirectly measured cardiorespiratory fitness, and falls risk.",
"title": ""
},
{
"docid": "b0989fb1775c486317b5128bc1c31c76",
"text": "Corporates are entering the brave new world of the internet and digitization without much regard for the fine print of a growing regulation regime. More traditional outsourcing arrangements are already falling foul of the regulators as rules and supervision intensifies. Furthermore, ‘shadow IT’ is proliferating as the attractions of SaaS, mobile, cloud services, social media, and endless new ‘apps’ drive usage outside corporate IT. Initial cost-benefit analyses of the Cloud make such arrangements look immediately attractive but losing control of architecture, security, applications and deployment can have far reaching and damaging regulatory consequences. From research in financial services, this paper details the increasing body of regulations, their inherent risks for businesses and how the dangers can be pre-empted and managed. We then delineate a model for managing these risks specifically focused on investigating, strategizing and governing outsourcing arrangements and related regulatory obligations.",
"title": ""
},
{
"docid": "ade3f3c778cf29e7c03bf96196916d6d",
"text": "Selection and use of pattern recognition algorithms is application dependent. In this work, we explored the use of several ensembles of weak classifiers to classify signals captured from a wearable sensor system to detect food intake based on chewing. Three sensor signals (Piezoelectric sensor, accelerometer, and hand to mouth gesture) were collected from 12 subjects in free-living conditions for 24 hrs. Sensor signals were divided into 10 seconds epochs and for each epoch combination of time and frequency domain features were computed. In this work, we present a comparison of three different ensemble techniques: boosting (AdaBoost), bootstrap aggregation (bagging) and stacking, each trained with 3 different weak classifiers (Decision Trees, Linear Discriminant Analysis (LDA) and Logistic Regression). Type of feature normalization used can also impact the classification results. For each ensemble method, three feature normalization techniques: (no-normalization, z-score normalization, and minmax normalization) were tested. A 12 fold cross-validation scheme was used to evaluate the performance of each model where the performance was evaluated in terms of precision, recall, and accuracy. Best results achieved here show an improvement of about 4% over our previous algorithms.",
"title": ""
},
{
"docid": "86bbaffa7e9a58c06d695443224cbf01",
"text": "Movie studios often have to choose among thousands of scripts to decide which ones to turn into movies. Despite the huge amount of money at stake, this process, known as “green-lighting” in the movie industry, is largely a guesswork based on experts’ experience and intuitions. In this paper, we propose a new approach to help studios evaluate scripts which will then lead to more profitable green-lighting decisions. Our approach combines screenwriting domain knowledge, natural language processing techniques, and statistical learning methods to forecast a movie’s return-on-investment based only on textual information available in movie scripts. We test our model in a holdout decision task to show that our model is able to improve a studio’s gross return-on-investment significantly.",
"title": ""
},
{
"docid": "d5bc3147e23f95a070bce0f37a96c2a8",
"text": "This paper presents a fully integrated wideband current-mode digital polar power amplifier (DPA) in CMOS with built-in AM–PM distortion self-compensation. Feedforward capacitors are implemented in each differential cascode digital power cell. These feedforward capacitors operate together with a proposed DPA biasing scheme to minimize the DPA output device capacitance <inline-formula> <tex-math notation=\"LaTeX\">$C_{d}$ </tex-math></inline-formula> variations over a wide output power range and a wide carrier frequency bandwidth, resulting in DPA AM–PM distortion reduction. A three-coil transformer-based DPA output passive network is implemented within a single transformer footprint (330 <inline-formula> <tex-math notation=\"LaTeX\">$\\mu \\text{m} \\,\\, \\times $ </tex-math></inline-formula> 330 <inline-formula> <tex-math notation=\"LaTeX\">$\\mu \\text{m}$ </tex-math></inline-formula>) and provides parallel power combining and load impedance transformation with a low loss, an octave bandwidth, and a large impedance transformation ratio. Moreover, this proposed power amplifier (PA) output passive network shows a desensitized phase response to <inline-formula> <tex-math notation=\"LaTeX\">$C_{d}$ </tex-math></inline-formula> variations and further suppresses the DPA AM–PM distortion. Both proposed AM–PM distortion self-compensation techniques are effective for a large carrier frequency range and a wide modulation bandwidth, and are independent of the DPA AM control codes. This results in a superior inherent DPA phase linearity and reduces or even eliminates the need for phase pre-distortion, which dramatically simplifies the DPA pre-distortion computations. As a proof-of-concept, a 2–4.3 GHz wideband DPA is implemented in a standard 28-nm bulk CMOS process. Operating with a low supply voltage of 1.4 V for enhanced reliability, the DPA demonstrates ±0.5 dB PA output power bandwidth from 2 to 4.3 GHz with +24.9 dBm peak output power at 3.1 GHz. The measured peak PA drain efficiency is 42.7% at 2.5 GHz and is more than 27% from 2 to 4.3 GHz. The measured PA AM–PM distortion is within 6.8° at 2.8 GHz over the PA output power dynamic range of 25 dB, achieving the lowest AM–PM distortion among recently reported current-mode DPAs in the same frequency range. Without any phase pre-distortion, modulation measurements with a 20-MHz 802.11n standard compliant signal demonstrate 2.95% rms error vector magnitude, −33.5 dBc adjacent channel leakage ratio, 15.6% PA drain efficiency, and +14.6 dBm PA average output power at 2.8 GHz.",
"title": ""
},
{
"docid": "e36e318dd134fd5840d5a5340eb6e265",
"text": "Business Intelligence (BI) promises a range of technologies for using information to ensure compliance to strategic and tactical objectives, as well as government laws and regulations. These technologies can be used in conjunction with conceptual models of business objectives, processes and situations (aka business schemas) to drive strategic decision-making about opportunities and threats etc. This paper focuses on three key concepts for strategic business models -situation, influence and indicator -and how they are used for strategic analysis. The semantics of these concepts are defined using a state-ofthe-art upper ontology (DOLCE+). We also propose a method for building a business schema, and demonstrate alternative ways of formal analysis of the schema based on existing tools for goal and probabilistic reasoning.",
"title": ""
},
{
"docid": "8d99f6fd95fb329e16294b7884090029",
"text": "The accurate diagnosis of Alzheimer's disease (AD) and its early stage, i.e., mild cognitive impairment, is essential for timely treatment and possible delay of AD. Fusion of multimodal neuroimaging data, such as magnetic resonance imaging (MRI) and positron emission tomography (PET), has shown its effectiveness for AD diagnosis. The deep polynomial networks (DPN) is a recently proposed deep learning algorithm, which performs well on both large-scale and small-size datasets. In this study, a multimodal stacked DPN (MM-SDPN) algorithm, which MM-SDPN consists of two-stage SDPNs, is proposed to fuse and learn feature representation from multimodal neuroimaging data for AD diagnosis. Specifically speaking, two SDPNs are first used to learn high-level features of MRI and PET, respectively, which are then fed to another SDPN to fuse multimodal neuroimaging information. The proposed MM-SDPN algorithm is applied to the ADNI dataset to conduct both binary classification and multiclass classification tasks. Experimental results indicate that MM-SDPN is superior over the state-of-the-art multimodal feature-learning-based algorithms for AD diagnosis.",
"title": ""
}
] | scidocsrr |
9ed69e982cc40429518a3be5270ec540 | Population validity for educational data mining models: A case study in affect detection | [
{
"docid": "892c75c6b719deb961acfe8b67b982bb",
"text": "Growing interest in data and analytics in education, teaching, and learning raises the priority for increased, high-quality research into the models, methods, technologies, and impact of analytics. Two research communities -- Educational Data Mining (EDM) and Learning Analytics and Knowledge (LAK) have developed separately to address this need. This paper argues for increased and formal communication and collaboration between these communities in order to share research, methods, and tools for data mining and analysis in the service of developing both LAK and EDM fields.",
"title": ""
}
] | [
{
"docid": "ffd45fa5cd9c2ce6b4dc7c5433864fd4",
"text": "AIM\nTo evaluate validity of the Greek version of a global measure of perceived stress PSS-14 (Perceived Stress Scale - 14 item).\n\n\nMATERIALS AND METHODS\nThe original PSS-14 (theoretical range 0-56) was translated into Greek and then back-translated. One hundred men and women (39 +/- 10 years old, 40 men) participated in the validation process. Firstly, participants completed the Greek PSS-14 and, then they were interviewed by a psychologist specializing in stress management. Cronbach's alpha (a) evaluated internal consistency of the measurement, whereas Kendall's tau-b and Bland & Altman methods assessed consistency with the clinical evaluation. Exploratory and Confirmatory Factor analyses were conducted to reveal hidden factors within the data and to confirm the two-dimensional character of the scale.\n\n\nRESULTS\nMean (SD) PSS-14 score was 25(7.9). Strong internal consistency (Cronbach's alpha = 0.847) as well as moderate-to-good concordance between clinical assessment and PSS-14 (Kendall's tau-b = 0.43, p < 0.01) were observed. Two factors were extracted. Factor one explained 34.7% of variability and was heavily laden by positive items, and factor two that explained 10.6% of the variability by negative items. Confirmatory factor analysis revealed that the model with 2 factors had chi-square equal to 241.23 (p < 0.001), absolute fix indexes were good (i.e. GFI = 0.733, AGFI = 0.529), and incremental fix indexes were also adequate (i.e. NFI = 0.89 and CFI = 0.92).\n\n\nCONCLUSION\nThe developed Greek version of PSS-14 seems to be a valid instrument for the assessment of perceived stress in the Greek adult population living in urban areas; a finding that supports its local use in research settings as an evaluation tool measuring perceived stress, mainly as a risk factor but without diagnostic properties.",
"title": ""
},
{
"docid": "340a2fd43f494bb1eba58629802a738c",
"text": "A new image decomposition scheme, called the adaptive directional total variation (ADTV) model, is proposed to achieve effective segmentation and enhancement for latent fingerprint images in this work. The proposed model is inspired by the classical total variation models, but it differentiates itself by integrating two unique features of fingerprints; namely, scale and orientation. The proposed ADTV model decomposes a latent fingerprint image into two layers: cartoon and texture. The cartoon layer contains unwanted components (e.g., structured noise) while the texture layer mainly consists of the latent fingerprint. This cartoon-texture decomposition facilitates the process of segmentation, as the region of interest can be easily detected from the texture layer using traditional segmentation methods. The effectiveness of the proposed scheme is validated through experimental results on the entire NIST SD27 latent fingerprint database. The proposed scheme achieves accurate segmentation and enhancement results, leading to improved feature detection and latent matching performance.",
"title": ""
},
{
"docid": "f70bd0a47eac274a1bb3b964f34e0a63",
"text": "Although deep neural network (DNN) has achieved many state-of-the-art results, estimating the uncertainty presented in the DNN model and the data is a challenging task. Problems related to uncertainty such as classifying unknown classes (class which does not appear in the training data) data as known class with high confidence, is critically concerned in the safety domain area (e.g, autonomous driving, medical diagnosis). In this paper, we show that applying current Bayesian Neural Network (BNN) techniques alone does not effectively capture the uncertainty. To tackle this problem, we introduce a simple way to improve the BNN by using one class classification (in this paper, we use the term ”set classification” instead). We empirically show the result of our method on an experiment which involves three datasets: MNIST, notMNIST and FMNIST.",
"title": ""
},
{
"docid": "2e93d2ba94e0c468634bf99be76706bb",
"text": "Entheses are sites where tendons, ligaments, joint capsules or fascia attach to bone. Inflammation of the entheses (enthesitis) is a well-known hallmark of spondyloarthritis (SpA). As entheses are associated with adjacent, functionally related structures, the concepts of an enthesis organ and functional entheses have been proposed. This is important in interpreting imaging findings in entheseal-related diseases. Conventional radiographs and CT are able to depict the chronic changes associated with enthesitis but are of very limited use in early disease. In contrast, MRI is sensitive for detecting early signs of enthesitis and can evaluate both soft-tissue changes and intraosseous abnormalities of active enthesitis. It is therefore useful for the early diagnosis of enthesitis-related arthropathies and monitoring therapy. Current knowledge and typical MRI features of the most commonly involved entheses of the appendicular skeleton in patients with SpA are reviewed. The MRI appearances of inflammatory and degenerative enthesopathy are described. New options for imaging enthesitis, including whole-body MRI and high-resolution microscopy MRI, are briefly discussed.",
"title": ""
},
{
"docid": "6f13d2d8e511f13f6979859a32e68fdd",
"text": "As an innovative measurement technique, the so-called Fiber Bragg Grating (FBG) sensors are used to measure local and global strains in a growing number of application scenarios. FBGs facilitate a reliable method to sense strain over large distances and in explosive atmospheres. Currently, there is only little knowledge available concerning mechanical properties of FGBs, e.g. under quasi-static, cyclic and thermal loads. To address this issue, this work quantifies typical loads on FGB sensors in operating state and moreover aims to determine their mechanical response resulting from certain load cases. Copyright © 2013 IFSA.",
"title": ""
},
{
"docid": "2dde173faac8d5cbb63aed8d379308fa",
"text": "Delineating infarcted tissue in ischemic stroke lesions is crucial to determine the extend of damage and optimal treatment for this life-threatening condition. However, this problem remains challenging due to high variability of ischemic strokes’ location and shape. Recently, fully-convolutional neural networks (CNN), in particular those based on U-Net [27], have led to improved performances for this task [7]. In this work, we propose a novel architecture that improves standard U-Net based methods in three important ways. First, instead of combining the available image modalities at the input, each of them is processed in a different path to better exploit their unique information. Moreover, the network is densely-connected (i.e., each layer is connected to all following layers), both within each path and across different paths, similar to HyperDenseNet [11]. This gives our model the freedom to learn the scale at which modalities should be processed and combined. Finally, inspired by the Inception architecture [32], we improve standard U-Net modules by extending inception modules with two convolutional blocks with dilated convolutions of different scale. This helps handling the variability in lesion sizes. We split the 93 stroke datasets into training and validation sets containing 83 and 9 examples respectively. Our network was trained on a NVidia TITAN XP GPU with 16 GBs RAM, using ADAM as optimizer and a learning rate of 1×10−5 during 200 epochs. Training took around 5 hours and segmentation of a whole volume took between 0.2 and 2 seconds, as average. The performance on the test set obtained by our method is compared to several baselines, to demonstrate the effectiveness of our architecture, and to a state-of-art architecture that employs factorized dilated convolutions, i.e., ERFNet [26].",
"title": ""
},
{
"docid": "ed0f70e6e53666a6f5562cfb082a9a9a",
"text": "Biometrics aims at reliable and robust identification of humans from their personal traits, mainly for security and authentication purposes, but also for identifying and tracking the users of smarter applications. Frequently considered modalities are fingerprint, face, iris, palmprint and voice, but there are many other possible biometrics, including gait, ear image, retina, DNA, and even behaviours. This chapter presents a survey of machine learning methods used for biometrics applications, and identifies relevant research issues. We focus on three areas of interest: offline methods for biometric template construction and recognition, information fusion methods for integrating multiple biometrics to obtain robust results, and methods for dealing with temporal information. By introducing exemplary and influential machine learning approaches in the context of specific biometrics applications, we hope to provide the reader with the means to create novel machine learning solutions to challenging biometrics problems.",
"title": ""
},
{
"docid": "4b051e3908eabb5f550094ebabf6583d",
"text": "This paper presents a review of modern cooling system employed for the thermal management of power traction machines. Various solutions for heat extractions are described: high thermal conductivity insulation materials, spray cooling, high thermal conductivity fluids, combined liquid and air forced convection, and loss mitigation techniques.",
"title": ""
},
{
"docid": "9cad66a6f3cfb1112a4072de71c6de3e",
"text": "This paper presents a novel method for position sensorless control of high-speed brushless DC motors with low inductance and nonideal back electromotive force (EMF) in order to improve the reliability of the motor system of a magnetically suspended control moment gyro for space application. The commutation angle error of the traditional line-to-line voltage zero-crossing points detection method is analyzed. Based on the characteristics measurement of the nonideal back EMF, a two-stage commutation error compensation method is proposed to achieve the high-reliable and high-accurate commutation in the operating speed region of the proposed sensorless control process. The commutation angle error is compensated by the transformative line voltages, the hysteresis comparators, and the appropriate design of the low-pass filters in the low-speed and high-speed region, respectively. High-precision commutations are achieved especially in the high-speed region to decrease the motor loss in steady state. The simulated and experimental results show that the proposed method can achieve an effective compensation effect in the whole operating speed region.",
"title": ""
},
{
"docid": "beba751220fc4f8df7be8d8e546150d0",
"text": "Theoretical analysis and implementation of autonomous staircase detection and stair climbing algorithms on a novel rescue mobile robot are presented in this paper. The main goals are to find the staircase during navigation and to implement a fast, safe and smooth autonomous stair climbing algorithm. Silver is used here as the experimental platform. This tracked mobile robot is a tele-operative rescue mobile robot with great capabilities in climbing obstacles in destructed areas. Its performance has been demonstrated in rescue robot league of international RoboCup competitions. A fuzzy controller is applied to direct the robot during stair climbing. Controller inputs are generated by processing the range data from two LASER range finders which scan the environment one horizontally and the other vertically. The experimental results of stair detection algorithm and stair climbing controller are demonstrated at the end.",
"title": ""
},
{
"docid": "817f9509afcdbafc60ecac2d0b8ef02d",
"text": "Abstract—In most regards, the twenty-first century may not bring revolutionary changes in electronic messaging technology in terms of applications or protocols. Security issues that have long been a concern in messaging application are finally being solved using a variety of products. Web-based messaging systems are rapidly evolving the text-based conversation. The users have the right to protect their privacy from the eavesdropper, or other parties which interferes the privacy of the users for such purpose. The chatters most probably use the instant messages to chat with others for personal issue; in which no one has the right eavesdrop the conversation channel and interfere this privacy. This is considered as a non-ethical manner and the privacy of the users should be protected. The author seeks to identify the security features for most public instant messaging services used over the internet and suggest some solutions in order to encrypt the instant messaging over the conversation channel. The aim of this research is to investigate through forensics and sniffing techniques, the possibilities of hiding communication using encryption to protect the integrity of messages exchanged. Authors used different tools and methods to run the investigations. Such tools include Wireshark packet sniffer, Forensics Tool Kit (FTK) and viaForensic mobile forensic toolkit. Finally, authors will report their findings on the level of security that encryption could provide to instant messaging services.",
"title": ""
},
{
"docid": "90dd589be3f8f78877367486e0f66e11",
"text": "Patch-level descriptors underlie several important computer vision tasks, such as stereo-matching or content-based image retrieval. We introduce a deep convolutional architecture that yields patch-level descriptors, as an alternative to the popular SIFT descriptor for image retrieval. The proposed family of descriptors, called Patch-CKN, adapt the recently introduced Convolutional Kernel Network (CKN), an unsupervised framework to learn convolutional architectures. We present a comparison framework to benchmark current deep convolutional approaches along with Patch-CKN for both patch and image retrieval, including our novel \"RomePatches\" dataset. Patch-CKN descriptors yield competitive results compared to supervised CNN alternatives on patch and image retrieval.",
"title": ""
},
{
"docid": "29a2c5082cf4db4f4dde40f18c88ca85",
"text": "Human astrocytes are larger and more complex than those of infraprimate mammals, suggesting that their role in neural processing has expanded with evolution. To assess the cell-autonomous and species-selective properties of human glia, we engrafted human glial progenitor cells (GPCs) into neonatal immunodeficient mice. Upon maturation, the recipient brains exhibited large numbers and high proportions of both human glial progenitors and astrocytes. The engrafted human glia were gap-junction-coupled to host astroglia, yet retained the size and pleomorphism of hominid astroglia, and propagated Ca2+ signals 3-fold faster than their hosts. Long-term potentiation (LTP) was sharply enhanced in the human glial chimeric mice, as was their learning, as assessed by Barnes maze navigation, object-location memory, and both contextual and tone fear conditioning. Mice allografted with murine GPCs showed no enhancement of either LTP or learning. These findings indicate that human glia differentially enhance both activity-dependent plasticity and learning in mice.",
"title": ""
},
{
"docid": "f4cbdcdb55e2bf49bcc62a79293f19b7",
"text": "Network slicing for 5G provides Network-as-a-Service (NaaS) for different use cases, allowing network operators to build multiple virtual networks on a shared infrastructure. With network slicing, service providers can deploy their applications and services flexibly and quickly to accommodate diverse services’ specific requirements. As an emerging technology with a number of advantages, network slicing has raised many issues for the industry and academia alike. Here, the authors discuss this technology’s background and propose a framework. They also discuss remaining challenges and future research directions.",
"title": ""
},
{
"docid": "029c5753adfbdcbfc38b92fbcc7f7e5c",
"text": "The Internet of Things (IoT) is the latest evolution of the Internet, encompassing an enormous number of connected physical \"things.\" The access-control oriented (ACO) architecture was recently proposed for cloud-enabled IoT, with virtual objects (VOs) and cloud services in the middle layers. A central aspect of ACO is to control communication among VOs. This paper develops operational and administrative access control models for this purpose, assuming topic-based publishsubscribe interaction among VOs. Operational models are developed using (i) access control lists for topics and capabilities for virtual objects and (ii) attribute-based access control, and it is argued that role-based access control is not suitable for this purpose. Administrative models for these two operational models are developed using (i) access control lists, (ii) role-based access control, and (iii) attribute-based access control. A use case illustrates the details of these access control models for VO communication, and their differences. An assessment of these models with respect to security and privacy preserving objectives of IoT is also provided.",
"title": ""
},
{
"docid": "9fd56a2261ade748404fcd0c6302771a",
"text": "Despite limited scientific knowledge, stretching of human skeletal muscle to improve flexibility is a widespread practice among athletes. This article reviews recent findings regarding passive properties of the hamstring muscle group during stretch based on a model that was developed which could synchronously and continuously measure passive hamstring resistance and electromyographic activity, while the velocity and angle of stretch was controlled. Resistance to stretch was defined as passive torque (Nm) offered by the hamstring muscle group during passive knee extension using an isokinetic dynamometer with a modified thigh pad. To simulate a clinical static stretch, the knee was passively extended to a pre-determined final position (0.0875 rad/s, dynamic phase) where it remained stationary for 90 s (static phase). Alternatively, the knee was extended to the point of discomfort (stretch tolerance). From the torque-angle curve of the dynamic phase of the static stretch, and in the stretch tolerance protocol, passive energy and stiffness were calculated. Torque decline in the static phase was considered to represent viscoelastic stress relaxation. Using the model, studies were conducted which demonstrated that a single static stretch resulted in a 30% viscoelastic stress relaxation. With repeated stretches muscle stiffness declined, but returned to baseline values within 1 h. Long-term stretching (3 weeks) increased joint range of motion as a result of a change in stretch tolerance rather than in the passive properties. Strength training resulted in increased muscle stiffness, which was unaffected by daily stretching. The effectiveness of different stretching techniques was attributed to a change in stretch tolerance rather than passive properties. Inflexible and older subjects have increased muscle stiffness, but a lower stretch tolerance compared to subjects with normal flexibility and younger subjects, respectively. Although far from all questions regarding the passive properties of humans skeletal muscle have been answered in these studies, the measurement technique permitted some initial important examinations of vicoelastic behavior of human skeletal muscle.",
"title": ""
},
{
"docid": "2d94f76a2c79b36c3fa8aeaf3f574bbd",
"text": "In this paper I discuss the role of Machine Learning (ML) in sound design. I focus on the modelling of a particular aspect of human intelligence which is believed to play an important role in musical creativity: the Generalisation of Perceptual Attributes (GPA). By GPA I mean the process by which a listener tries to find common sound attributes when confronted with a series of sounds. The paper introduces the basics of GPA and ML in the context of ARTIST, a prototype case study system. ARTIST (Artificial Intelligence Sound Tools) is a sound design system that works in co-operation with the user, providing useful levels of automated reasoning to render the synthesis tasks less laborious (tasks such as calculating an appropriate stream of synthesis parameters for each single sound) and to enable the user to explore alternatives when designing a certain sound. The system synthesises sounds from input requests in a relatively high-level language; for instance, using attribute-value expressions such as \"normal vibrato\", \"high openness\" and \"sharp attack\". ARTIST stores information about sounds as clusters of attribute-value expressions and has the ability to interpret these expressions in the lower-level terms of sound synthesis algorithms. The user may, however, be interested in producing a sound which is \"unknown\" to the system. In this case, the system will attempt to compute the attribute values for this yet unknown sound by making analogies with other known sounds which have similar constituents. ARTIST uses ML to infer which sound attributes should be considered to make the analogies.",
"title": ""
},
{
"docid": "20f6a794edae8857a04036afc84f532e",
"text": "Genetic algorithms play a significant role, as search techniques forhandling complex spaces, in many fields such as artificial intelligence, engineering, robotic, etc. Genetic algorithms are based on the underlying genetic process in biological organisms and on the naturalevolution principles of populations. These algorithms process apopulation of chromosomes, which represent search space solutions,with three operations: selection, crossover and mutation. Under its initial formulation, the search space solutions are coded using the binary alphabet. However, the good properties related with these algorithms do not stem from the use of this alphabet; other coding types have been considered for the representation issue, such as real coding, which would seem particularly natural when tackling optimization problems of parameters with variables in continuous domains. In this paper we review the features of real-coded genetic algorithms. Different models of genetic operators and some mechanisms available for studying the behaviour of this type of genetic algorithms are revised and compared.",
"title": ""
},
{
"docid": "91713d85bdccb2c06d7c50365bd7022c",
"text": "A 1 Mbit MRAM, a nonvolatile memory that uses magnetic tunnel junction (MJT) storage elements, has been characterized for total ionizing dose (TID) and single event latchup (SEL). Our results indicate that these devices show no single event latchup up to an effective LET of 84 MeV-cm2/mg (where our testing ended) and no bit failures to a TID of 75 krad (Si).",
"title": ""
},
{
"docid": "503756888df43d745e4fb5051f8855fb",
"text": "The widespread use of email has raised serious privacy concerns. A critical issue is how to prevent email information leaks, i.e., when a message is accidentally addressed to non-desired recipients. This is an increasingly common problem that can severely harm individuals and corporations — for instance, a single email leak can potentially cause expensive law suits, brand reputation damage, negotiation setbacks and severe financial losses. In this paper we present the first attempt to solve this problem. We begin by redefining it as an outlier detection task, where the unintended recipients are the outliers. Then we combine real email examples (from the Enron Corpus) with carefully simulated leak-recipients to learn textual and network patterns associated with email leaks. This method was able to detect email leaks in almost 82% of the test cases, significantly outperforming all other baselines. More importantly, in a separate set of experiments we applied the proposed method to the task of finding real cases of email leaks. The result was encouraging: a variation of the proposed technique was consistently successful in finding two real cases of email leaks. Not only does this paper introduce the important problem of email leak detection, but also presents an effective solution that can be easily implemented in any email client — with no changes in the email server side.",
"title": ""
}
] | scidocsrr |
931d7404b9114918be2c0087b6cb38c0 | Reliable, Consistent, and Efficient Data Sync for Mobile Apps | [
{
"docid": "64a48cd3af7b029c331921618d05c9ad",
"text": "Cloud-based file synchronization services have become enormously popular in recent years, both for their ability to synchronize files across multiple clients and for the automatic cloud backups they provide. However, despite the excellent reliability that the cloud back-end provides, the loose coupling of these services and the local file system makes synchronized data more vulnerable than users might believe. Local corruption may be propagated to the cloud, polluting all copies on other devices, and a crash or untimely shutdown may lead to inconsistency between a local file and its cloud copy. Even without these failures, these services cannot provide causal consistency. To address these problems, we present ViewBox, an integrated synchronization service and local file system that provides freedom from data corruption and inconsistency. ViewBox detects these problems using ext4-cksum, a modified version of ext4, and recovers from them using a user-level daemon, cloud helper, to fetch correct data from the cloud. To provide a stable basis for recovery,ViewBox employs the view manager on top of ext4-cksum. The view manager creates and exposes views, consistent inmemory snapshots of the file system, which the synchronization client then uploads. Our experiments show that ViewBox detects and recovers from both corruption and inconsistency, while incurring minimal overhead.",
"title": ""
}
] | [
{
"docid": "98356590ae18e09c04be6386559f9946",
"text": "BACKGROUND AND PURPOSE\nInformation has been sparse on the comparison of pulse pressure (PP) and mean arterial pressure (MAP) in relation to ischemic stroke among patients with uncontrolled hypertension. The present study examined the relation among PP, MAP, and ischemic stroke in uncontrolled hypertensive subjects in China.\n\n\nMETHODS\nA total of 6104 uncontrolled hypertensive subjects aged > or = 35 years were screened with a stratified cluster multistage sampling scheme in Fuxin county of Liaoning province of China, of which 317 had ischemic stroke.\n\n\nRESULTS\nAfter multivariable adjustment for age, gender, and other confounders, individuals with the highest quartile of PP and MAP had ORs for ischemic stroke of 1.479 (95% CI: 1.027 to 2.130) and 2.000 (95% CI: 1.373 to 2.914) with the lowest quartile as the reference. Adjusted ORs for ischemic stroke were 1.306 for MAP and 1.118 for PP with an increment of 1 SD, respectively. Ischemic stroke prediction of PP was annihilated when PP and MAP were entered in a single model. In patients aged < 65 years, on a continuous scale using receive operating characteristics curve, ischemic stroke was predicted by PP (P=0.001) and MAP (P<0.001). The area under the curve of PP (0.570, 95% CI: 0.531 to 0.609) differed from the area under the curve of MAP (0.633, 95% CI: 0.597 to 0.669; P<0.05). Among patients aged > or = 65 years, presence of ischemic stroke was only predicted by MAP.\n\n\nCONCLUSIONS\nPP and MAP were both associated with ischemic stroke. Ischemic stroke prediction of PP depended on MAP. On a continuous scale, MAP better predicted ischemic stroke than PP did in diagnostic accuracy.",
"title": ""
},
{
"docid": "543348825e8157926761b2f6a7981de2",
"text": "With the aim of developing a fast yet accurate algorithm for compressive sensing (CS) reconstruction of natural images, we combine in this paper the merits of two existing categories of CS methods: the structure insights of traditional optimization-based methods and the speed of recent network-based ones. Specifically, we propose a novel structured deep network, dubbed ISTA-Net, which is inspired by the Iterative Shrinkage-Thresholding Algorithm (ISTA) for optimizing a general $$ norm CS reconstruction model. To cast ISTA into deep network form, we develop an effective strategy to solve the proximal mapping associated with the sparsity-inducing regularizer using nonlinear transforms. All the parameters in ISTA-Net (e.g. nonlinear transforms, shrinkage thresholds, step sizes, etc.) are learned end-to-end, rather than being hand-crafted. Moreover, considering that the residuals of natural images are more compressible, an enhanced version of ISTA-Net in the residual domain, dubbed ISTA-Net+, is derived to further improve CS reconstruction. Extensive CS experiments demonstrate that the proposed ISTA-Nets outperform existing state-of-the-art optimization-based and network-based CS methods by large margins, while maintaining fast computational speed. Our source codes are available: http://jianzhang.tech/projects/ISTA-Net.",
"title": ""
},
{
"docid": "c976fcbe0c095a4b7cfd6e3968964c55",
"text": "The introduction of Network Functions Virtualization (NFV) enables service providers to offer software-defined network functions with elasticity and flexibility. Its core technique, dynamic allocation procedure of NFV components onto cloud resources requires rapid response to changes on-demand to remain cost and QoS effective. In this paper, Markov Decision Process (MDP) is applied to the NP-hard problem to dynamically allocate cloud resources for NFV components. In addition, Bayesian learning method is applied to monitor the historical resource usage in order to predict future resource reliability. Experimental results show that our proposed strategy outperforms related approaches.",
"title": ""
},
{
"docid": "8f21eee8a4320baebe0fe40364f6580e",
"text": "The dup system related subjects others recvfrom and user access methods. The minimal facilities they make up. A product before tackling 'the design, decisions they probably should definitely. Multiplexer'' interprocess communication in earlier addison wesley has the important features a tutorial. Since some operating system unstructured devices a process init see. At berkeley software in earlier authoritative technical information on write operations. The lowest unused multiprocessor support for, use this determination. No name dot spelled with the system. Later it a file several, reasons often single user interfacesis excluded except.",
"title": ""
},
{
"docid": "c86b44aef6e23d4a61e6a062a7a50883",
"text": "In this paper we investigate the applications of Elo ratings (originally designed for 2-player chess) to a heterogeneous nonlinear multiagent system to determine an agent’s overall impact on its team’s performance. Measuring this impact has been attempted in many different ways, including reward shaping; the generation of heirarchies, holarchies, and teams; mechanism design; and the creation of subgoals. We show that in a multiagent system, an Elo rating will accurately reflect the an agent’s ability to contribute positively to a team’s success with no need for any other feedback than a repeated binary win/loss signal. The Elo rating not only measures “personal” success, but simultaneously success in assisting other agents to perform favorably.",
"title": ""
},
{
"docid": "7e68ac0eee3ab3610b7c68b69c27f3b6",
"text": "When digitizing a document into an image, it is common to include a surrounding border region to visually indicate that the entire document is present in the image. However, this border should be removed prior to automated processing. In this work, we present a deep learning system, PageNet, which identifies the main page region in an image in order to segment content from both textual and non-textual border noise. In PageNet, a Fully Convolutional Network obtains a pixel-wise segmentation which is post-processed into a quadrilateral region. We evaluate PageNet on 4 collections of historical handwritten documents and obtain over 94% mean intersection over union on all datasets and approach human performance on 2 collections. Additionally, we show that PageNet can segment documents that are overlayed on top of other documents.",
"title": ""
},
{
"docid": "1b69388c83a0883b3eeddc47ce44b82a",
"text": "1 Lawrence E. Whitman, Wichita State University, Industrial & Manufacturing Engineering Department, 120G Engineering Building, Wichita, KS 672600035 [email protected] 2 Tonya A. Witherspoon, Wichita State University, College of Education, 156C Corbin Education Center, Wichita, KS 67260-0131 [email protected] Abstract Wichita State University is actively using LEGOs to encourage science math engineering and technology (SMET). There are two major thrusts in our efforts. The college of engineering uses LEGO blocks to simulate a factory environment in the building of LEGO airplanes. This participative demonstration has been used at middle school, high school, and college classes. LEGOs are used to present four manufacturing scenarios of traditional, cellular, pull, and single piece flow manufacturing. The demonstration presents to students how the design of a factory has significant impact on the success of the company. It also encourages students to pursue engineering careers. The college of education uses robotics as a vehicle to integrate technology and engineering into math and science preservice and inservice teacher education.. The purpose is to develop technologically astute and competent teachers who are capable of integrating technology into their curriculum to improve the teaching and learning of their students. This paper will discuss each effort, the collaboration between the two, and provide examples of success.",
"title": ""
},
{
"docid": "22241857a42ffcad817356900f52df66",
"text": "Most of the intensive care units (ICU) are equipped with commercial pulse oximeters for monitoring arterial blood oxygen saturation (SpO2) and pulse rate (PR). Photoplethysmographic (PPG) data recorded from pulse oximeters usually corrupted by motion artifacts (MA), resulting in unreliable and inaccurate estimated measures of SpO2. In this paper, a simple and efficient MA reduction method based on Ensemble Empirical Mode Decomposition (E2MD) is proposed for the estimation of SpO2 from processed PPGs. Performance analysis of the proposed E2MD is evaluated by computing the statistical and quality measures indicating the signal reconstruction like SNR and NRMSE. Intentionally created MAs (Horizontal MA, Vertical MA and Bending MA) in the recorded PPGs are effectively reduced by the proposed one and proved to be the best suitable method for reliable and accurate SpO2 estimation from the processed PPGs.",
"title": ""
},
{
"docid": "a286f9f594ef563ba082fb454eddc8bc",
"text": "The visual inspection of Mura defects is still a challenging task in the quality control of panel displays because of the intrinsically nonuniform brightness and blurry contours of these defects. The current methods cannot detect all Mura defect types simultaneously, especially small defects. In this paper, we introduce an accurate Mura defect visual inspection (AMVI) method for the fast simultaneous inspection of various Mura defect types. The method consists of two parts: an outlier-prejudging-based image background construction (OPBC) algorithm is proposed to quickly reduce the influence of image backgrounds with uneven brightness and to coarsely estimate the candidate regions of Mura defects. Then, a novel region-gradient-based level set (RGLS) algorithm is applied only to these candidate regions to quickly and accurately segment the contours of the Mura defects. To demonstrate the performance of AMVI, several experiments are conducted to compare AMVI with other popular visual inspection methods are conducted. The experimental results show that AMVI tends to achieve better inspection performance and can quickly and accurately inspect a greater number of Mura defect types, especially for small and large Mura defects with uneven backlight. Note to Practitioners—The traditional Mura visual inspection method can address only medium-sized Mura defects, such as region Mura, cluster Mura, and vertical-band Mura, and is not suitable for small Mura defects, for example, spot Mura. The proposed accurate Mura defect visual inspection (AMVI) method can accurately and simultaneously inspect not only medium-sized Mura defects but also small and large Mura defects. The proposed outlier-prejudging-based image background construction (OPBC) algorithm of the AMVI method is employed to improve the Mura true detection rate, while the proposed region-gradient-based level set (RGLS) algorithm is used to reduce the Mura false detection rate. Moreover, this method can be applied to online vision inspection: OPBC can be implemented in parallel processing units, while RGLS is applied only to the candidate regions of the inspected image. In addition, AMVI can be extended to other low-contrast defect vision inspection tasks, such as the inspection of glass, steel strips, and ceramic tiles.",
"title": ""
},
{
"docid": "4703b02dc285a55002f15d06d98251e7",
"text": "Nowadays, most Photovoltaic installations are grid connected system. From distribution system point of view, the main point and concern related to PV grid-connected are overvoltage or overcurrent in the distribution network. This paper describes the simulation study which focuses on ferroresonance phenomenon of PV system on lower side of distribution transformer. PSCAD program is selected to simulate the ferroresonance phenomenon in this study. The example of process that creates ferroresonance by the part of PV system and ferroresonance effect will be fully described in detail.",
"title": ""
},
{
"docid": "37ef43a6ed0dcf0817510b84224d9941",
"text": "Contrast enhancement is one of the most important issues of image processing, pattern recognition and computer vision. The commonly used techniques for contrast enhancement fall into two categories: (1) indirect methods of contrast enhancement and (2) direct methods of contrast enhancement. Indirect approaches mainly modify histogram by assigning new values to the original intensity levels. Histogram speci\"cation and histogram equalization are two popular indirect contrast enhancement methods. However, histogram modi\"cation technique only stretches the global distribution of the intensity. The basic idea of direct contrast enhancement methods is to establish a criterion of contrast measurement and to enhance the image by improving the contrast measure. The contrast can be measured globally and locally. It is more reasonable to de\"ne a local contrast when an image contains textual information. Fuzzy logic has been found many applications in image processing, pattern recognition, etc. Fuzzy set theory is a useful tool for handling the uncertainty in the images associated with vagueness and/or imprecision. In this paper, we propose a novel adaptive direct fuzzy contrast enhancement method based on the fuzzy entropy principle and fuzzy set theory. We have conducted experiments on many images. The experimental results demonstrate that the proposed algorithm is very e!ective in contrast enhancement as well as in preventing over-enhancement. ( 2000 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "8385f72bd060eee8c59178bc0b74d1e3",
"text": "Gesture recognition plays an important role in human-computer interaction. However, most existing methods are complex and time-consuming, which limit the use of gesture recognition in real-time environments. In this paper, we propose a static gesture recognition system that combines depth information and skeleton data to classify gestures. Through feature fusion, hand digit gestures of 0-9 can be recognized accurately and efficiently. According to the experimental results, the proposed gesture recognition system is effective and robust, which is invariant to complex background, illumination changes, reversal, structural distortion, rotation etc. We have tested the system both online and offline which proved that our system is satisfactory to real-time requirements, and therefore it can be applied to gesture recognition in real-world human-computer interaction systems.",
"title": ""
},
{
"docid": "cc56706151e027c89eea5639486d4cd3",
"text": "To refine user interest profiling, this paper focuses on extending scientific subject ontology via keyword clustering and on improving the accuracy and effectiveness of recommendation of the electronic academic publications in online services. A clustering approach is proposed for domain keywords for the purpose of the subject ontology extension. Based on the keyword clusters, the construction of user interest profiles is presented on a rather fine granularity level. In the construction of user interest profiles, we apply two types of interest profiles: explicit profiles and implicit profiles. The explicit eighted keyword graph",
"title": ""
},
{
"docid": "7c6708511e8a19c7a984ccc4b5c5926e",
"text": "INTRODUCTION\nOtoplasty or correction of prominent ears, is one of most commonly performed surgeries in plastic surgery both in children and adults. Until nowadays, there have been more than 150 techniques described, but all with certain percentage of recurrence which varies from just a few up to 24.4%.\n\n\nOBJECTIVE\nThe authors present an otoplasty technique, a combination of Mustardé's original procedure with other techniques, which they have been using successfully in their everyday surgical practice for the last 9 years. The technique is based on posterior antihelical and conchal approach.\n\n\nMETHODS\nThe study included 102 patients (60 males and 42 females) operated on between 1999 and 2008. The age varied between 6 and 49 years. Each procedure was tailored to the aberrant anatomy which was analysed after examination. Indications and the operative procedure are described in step-by-step detail accompanied by drawings and photos taken during the surgery.\n\n\nRESULTS\nAll patients had bilateral ear deformity. In all cases was performed a posterior antihelical approach. The conchal reduction was done only when necessary and also through the same incision. The follow-up was from 1 to 5 years. There were no recurrent cases. A few minor complications were presented. Postoperative care, complications and advantages compared to other techniques are discussed extensively.\n\n\nCONCLUSION\nAll patients showed a high satisfaction rate with the final result and there was no necessity for further surgeries. The technique described in this paper is easy to reproduce even for young surgeons.",
"title": ""
},
{
"docid": "af691c2ca5d9fd1ca5109c8b2e7e7b6d",
"text": "As social robots become more widely used as educational tutoring agents, it is important to study how children interact with these systems, and how effective they are as assessed by learning gains, sustained engagement, and perceptions of the robot tutoring system as a whole. In this paper, we summarize our prior work involving a long-term child-robot interaction study and outline important lessons learned regarding individual differences in children. We then discuss how these lessons inform future research in child-robot interaction.",
"title": ""
},
{
"docid": "e6e7ee19b958b40abeed760be50f2583",
"text": "All distributed-generation units need to be equipped with an anti-islanding protection (AIP) scheme in order to avoid unintentional islanding. Unfortunately, most AIP methods fail to detect islanding if the demand in the islanded circuit matches the production in the island. Another concern is that many active AIP schemes cause power-quality problems. This paper proposes an AIP method which is based on the combination of a reactive power versus frequency droop and rate of change of frequency (ROCOF). The method is designed so that the injection of reactive power is of minor scale during normal operating conditions. Yet, the method can rapidly detect islanding which is verified by PSCAD/EMTDC simulations.",
"title": ""
},
{
"docid": "b6de6f391c11178843bc16b51bf26803",
"text": "Crowd analysis becomes very popular research topic in the area of computer vision. A growing requirement for smarter video surveillance of private and public space using intelligent vision systems which can differentiate what is semantically important in the direction of the human observer as normal behaviors and abnormal behaviors. People counting, people tracking and crowd behavior analysis are different stages for computer based crowd analysis algorithm. This paper focus on crowd behavior analysis which can detect normal behavior or abnormal behavior.",
"title": ""
},
{
"docid": "a56a3592d704c917d5e8452eabb74cb0",
"text": "Current text-to-speech synthesis (TTS) systems are often perceived as lacking expressiveness, limiting the ability to fully convey information. This paper describes initial investigations into improving expressiveness for statistical speech synthesis systems. Rather than using hand-crafted definitions of expressive classes, an unsupervised clustering approach is described which is scalable to large quantities of training data. To incorporate this “expression cluster” information into an HMM-TTS system two approaches are described: cluster questions in the decision tree construction; and average expression speech synthesis (AESS) using cluster-based linear transform adaptation. The performance of the approaches was evaluated on audiobook data in which the reader exhibits a wide range of expressiveness. A subjective listening test showed that synthesising with AESS results in speech that better reflects the expressiveness of human speech than a baseline expression-independent system.",
"title": ""
},
{
"docid": "1d7bbd7aaa65f13dd72ffeecc8499cb6",
"text": "Due to the 60Hz or higher LCD refresh operations, display controller (DC) reads the pixels out from frame buffer at fixed rate. Accessing frame buffer consumes not only memory bandwidth, but power as well. Thus frame buffer compression (FBC) can contribute to alleviating both bandwidth and power consumption. A conceptual frame buffer compression model is proposed, and to the best of our knowledge, an arithmetic expression concerning the compression ratio and the read/update ratio of frame buffer is firstly presented, which reveals the correlation between frame buffer compression and target applications. Moreover, considering the linear access feature of frame buffer, we investigate a frame buffer compression without color information loss, named LFBC (Loss less Frame-Buffer Compression). LFBC defines new frame buffer compression data format, and employs run-length encoding (RLE) to implement the compression. For the applications suitable for frame buffer compression, LFBC reduces 50%90% bandwidth consumption and memory accesses caused by LCD refresh operations.",
"title": ""
},
{
"docid": "5f4d10a1a180f6af3d35ca117cd4ee19",
"text": "This work addresses the task of instance-aware semantic segmentation. Our key motivation is to design a simple method with a new modelling-paradigm, which therefore has a different trade-off between advantages and disadvantages compared to known approaches. Our approach, we term InstanceCut, represents the problem by two output modalities: (i) an instance-agnostic semantic segmentation and (ii) all instance-boundaries. The former is computed from a standard convolutional neural network for semantic segmentation, and the latter is derived from a new instance-aware edge detection model. To reason globally about the optimal partitioning of an image into instances, we combine these two modalities into a novel MultiCut formulation. We evaluate our approach on the challenging CityScapes dataset. Despite the conceptual simplicity of our approach, we achieve the best result among all published methods, and perform particularly well for rare object classes.",
"title": ""
}
] | scidocsrr |
9e44f01957f05b39a959becfb42b17e9 | Rainmakers: why bad weather means good productivity. | [
{
"docid": "13c6e4fc3a20528383ef7625c9dd2b79",
"text": "Seasonal affective disorder (SAD) is a syndrome characterized by recurrent depressions that occur annually at the same time each year. We describe 29 patients with SAD; most of them had a bipolar affective disorder, especially bipolar II, and their depressions were generally characterized by hypersomnia, overeating, and carbohydrate craving and seemed to respond to changes in climate and latitude. Sleep recordings in nine depressed patients confirmed the presence of hypersomnia and showed increased sleep latency and reduced slow-wave (delta) sleep. Preliminary studies in 11 patients suggest that extending the photoperiod with bright artificial light has an antidepressant effect.",
"title": ""
}
] | [
{
"docid": "1bdbfe7d11ca567adcce97a853761939",
"text": "Dynamic contrast enhanced MRI (DCE-MRI) is an emerging imaging protocol in locating, identifying and characterizing breast cancer. However, due to image artifacts in MR, pixel intensity alone cannot accurately characterize the tissue properties. We propose a robust method based on the temporal sequence of textural change and wavelet transform for pixel-by-pixel classification. We first segment the breast region using an active contour model. We then compute textural change on pixel blocks. We apply a three-scale discrete wavelet transform on the texture temporal sequence to further extract frequency features. We employ a progressive feature selection scheme and a committee of support vector machines for the classification. We trained the system on ten cases and tested it on eight independent test cases. Receiver-operating characteristics (ROC) analysis shows that the texture temporal sequence (Az: 0.966 and 0.949 in training and test) is much more effective than the intensity sequence (Az: 0.871 and 0.868 in training and test). The wavelet transform further improves the classification performance (Az: 0.989 and 0.984 in training and test).",
"title": ""
},
{
"docid": "345a59aac1e89df5402197cca90ca464",
"text": "Tony Velkov,* Philip E. Thompson, Roger L. Nation, and Jian Li* School of Medicine, Deakin University, Pigdons Road, Geelong 3217, Victoria, Australia, Medicinal Chemistry and Drug Action and Facility for Anti-infective Drug Development and Innovation, Drug Delivery, Disposition and Dynamics, Monash Institute of Pharmaceutical Sciences, Monash University, 381 Royal Parade, Parkville 3052, Victoria, Australia",
"title": ""
},
{
"docid": "ffca07962ddcdfa0d016df8020488b5d",
"text": "Differential-drive mobile robots are usually equipped with video-cameras for navigation purposes. In order to ensure proper operational capabilities of such systems, several calibration steps are required to estimate the following quantities: the video-camera intrinsic and extrinsic parameters, the relative pose between the camera and the vehicle frame and, finally, the odometric parameters of the vehicle. In this paper the simultaneous estimation of the above mentioned quantities is achieved by a systematic and effective calibration procedure that does not require any iterative step. The calibration procedure needs only on-board measurements given by the wheels encoders, the camera and a number of properly taken camera snapshots of a set of known landmarks. Numerical simulations and experimental results with a mobile robot Khepera III equipped with a low-cost camera confirm the effectiveness of the proposed technique.",
"title": ""
},
{
"docid": "019c27341b9811a7347467490cea6a72",
"text": "For intelligent robots to interact in meaningful ways with their environment, they must understand both the geometric and semantic properties of the scene surrounding them. The majority of research to date has addressed these mapping challenges separately, focusing on either geometric or semantic mapping. In this paper we address the problem of building environmental maps that include both semantically meaningful, object-level entities and point- or mesh-based geometrical representations. We simultaneously build geometric point cloud models of previously unseen instances of known object classes and create a map that contains these object models as central entities. Our system leverages sparse, feature-based RGB-D SLAM, image-based deep-learning object detection and 3D unsupervised segmentation.",
"title": ""
},
{
"docid": "68b15f0708c256d674f018b667f97bb5",
"text": "Current software attacks often build on exploits that subvert machine-code execution. The enforcement of a basic safety property, control-flow integrity (CFI), can prevent such attacks from arbitrarily controlling program behavior. CFI enforcement is simple and its guarantees can be established formally, even with respect to powerful adversaries. Moreover, CFI enforcement is practical: It is compatible with existing software and can be done efficiently using software rewriting in commodity systems. Finally, CFI provides a useful foundation for enforcing further security policies, as we demonstrate with efficient software implementations of a protected shadow call stack and of access control for memory regions.",
"title": ""
},
{
"docid": "94160496e0a470dc278f71c67508ae21",
"text": "In this paper, we tackle the problem of co-localization in real-world images. Co-localization is the problem of simultaneously localizing (with bounding boxes) objects of the same class across a set of distinct images. Although similar problems such as co-segmentation and weakly supervised localization have been previously studied, we focus on being able to perform co-localization in real-world settings, which are typically characterized by large amounts of intra-class variation, inter-class diversity, and annotation noise. To address these issues, we present a joint image-box formulation for solving the co-localization problem, and show how it can be relaxed to a convex quadratic program which can be efficiently solved. We perform an extensive evaluation of our method compared to previous state-of-the-art approaches on the challenging PASCAL VOC 2007 and Object Discovery datasets. In addition, we also present a large-scale study of co-localization on ImageNet, involving ground-truth annotations for 3, 624 classes and approximately 1 million images.",
"title": ""
},
{
"docid": "f8724f8166eeb48461f9f4ac8fdd87d3",
"text": "The simultaneous use of images from different spectra can be helpful to improve the performance of many computer vision tasks. The core idea behind the usage of crossspectral approaches is to take advantage of the strengths of each spectral band providing a richer representation of a scene, which cannot be obtained with just images from one spectral band. In this work we tackle the cross-spectral image similarity problem by using Convolutional Neural Networks (CNNs). We explore three different CNN architectures to compare the similarity of cross-spectral image patches. Specifically, we train each network with images from the visible and the near-infrared spectrum, and then test the result with two public cross-spectral datasets. Experimental results show that CNN approaches outperform the current state-of-art on both cross-spectral datasets. Additionally, our experiments show that some CNN architectures are capable of generalizing between different crossspectral domains.",
"title": ""
},
{
"docid": "9b52a659fb6383e92c5968a082b01b71",
"text": "The internet of things (IoT) has a variety of application domains, including smart homes. This paper analyzes distinct IoT security and privacy features, including security requirements, threat models, and attacks from the smart home perspective. Further, this paper proposes an intelligent collaborative security management model to minimize security risk. The security challenges of the IoT for a smart home scenario are encountered, and a comprehensive IoT security management for smart homes has been proposed.",
"title": ""
},
{
"docid": "36142a4c0639662fe52dcc3fdf7b1ca4",
"text": "We present hierarchical change-detection tests (HCDTs), as effective online algorithms for detecting changes in datastreams. HCDTs are characterized by a hierarchical architecture composed of a detection layer and a validation layer. The detection layer steadily analyzes the input datastream by means of an online, sequential CDT, which operates as a low-complexity trigger that promptly detects possible changes in the process generating the data. The validation layer is activated when the detection one reveals a change, and performs an offline, more sophisticated analysis on recently acquired data to reduce false alarms. Our experiments show that, when the process generating the datastream is unknown, as it is mostly the case in the real world, HCDTs achieve a far more advantageous tradeoff between false-positive rate and detection delay than their single-layered, more traditional counterpart. Moreover, the successful interplay between the two layers permits HCDTs to automatically reconfigure after having detected and validated a change. Thus, HCDTs are able to reveal further departures from the postchange state of the data-generating process.",
"title": ""
},
{
"docid": "bf50151700f0e286ee5aa3a2bd74c249",
"text": "Computer systems that augment the process of finding the right expert for a given problem in an organization or world-wide are becoming feasible more than ever before, thanks to the prevalence of corporate Intranets and the Internet. This paper investigates such systems in two parts. We first explore the expert finding problem in depth, review and analyze existing systems in this domain, and suggest a domain model that can serve as a framework for design and development decisions. Based on our analyses of the problem and solution spaces, we then bring to light the gaps that remain to be addressed. Finally, we present our approach called DEMOIR, which is a modular architecture for expert finding systems that is based on a centralized expertise modeling server while also incorporating decentralized components for expertise information gathering and exploitation.",
"title": ""
},
{
"docid": "ae1f75aa978fd702be9b203487269517",
"text": "This paper presents a system that performs skill extraction from text documents. It outputs a list of professional skills that are relevant to a given input text. We argue that the system can be practical for hiring and management of personnel in an organization. We make use of the texts and the hyperlink graph of Wikipedia, as well as a list of professional skills obtained from the LinkedIn social network. The system is based on first computing similarities between an input document and the texts of Wikipedia pages and then using a biased, hub-avoiding version of the Spreading Activation algorithm on the Wikipedia graph in order to associate the input document with skills.",
"title": ""
},
{
"docid": "aa3be1c132e741d2c945213cfb0d96ad",
"text": "Collaborative filtering (CF) is one of the most successful recommendation approaches. It typically associates a user with a group of like-minded users based on their preferences over all the items, and recommends to the user those items enjoyed by others in the group. However we find that two users with similar tastes on one item subset may have totally different tastes on another set. In other words, there exist many user-item subgroups each consisting of a subset of items and a group of like-minded users on these items. It is more natural to make preference predictions for a user via the correlated subgroups than the entire user-item matrix. In this paper, to find meaningful subgroups, we formulate the Multiclass Co-Clustering (MCoC) problem and propose an effective solution to it. Then we propose an unified framework to extend the traditional CF algorithms by utilizing the subgroups information for improving their top-N recommendation performance. Our approach can be seen as an extension of traditional clustering CF models. Systematic experiments on three real world data sets have demonstrated the effectiveness of our proposed approach.",
"title": ""
},
{
"docid": "2d105fcec4109a6bc290c616938012f3",
"text": "One of the biggest challenges in automated driving is the ability to determine the vehicleâĂŹs location in realtime - a process known as self-localization or ego-localization. An automated driving system must be reliable under harsh conditions and environmental uncertainties (e.g. GPS denial or imprecision), sensor malfunction, road occlusions, poor lighting, and inclement weather. To cope with this myriad of potential problems, systems typically consist of a GPS receiver, in-vehicle sensors (e.g. cameras and LiDAR devices), and 3D High-Definition (3D HD) Maps. In this paper, we review state-of-the-art self-localization techniques, and present a benchmark for the task of image-based vehicle self-localization. Our dataset was collected on 10km of the Warren Freeway in the San Francisco Area under reasonable traffic and weather conditions. As input to the localization process, we provide timestamp-synchronized, consumer-grade monocular video frames (with camera intrinsic parameters), consumer-grade GPS trajectory, and production-grade 3D HD Maps. For evaluation, we provide survey-grade GPS trajectory. The goal of this dataset is to standardize and formalize the challenge of accurate vehicle self-localization and provide a benchmark to develop and evaluate algorithms.",
"title": ""
},
{
"docid": "592431c03450be59f10e56dcabed0ebf",
"text": "Recent advances in machine learning have led to innovative applications and services that use computational structures to reason about complex phenomenon. Over the past several years, the security and machine-learning communities have developed novel techniques for constructing adversarial samples--malicious inputs crafted to mislead (and therefore corrupt the integrity of) systems built on computationally learned models. The authors consider the underlying causes of adversarial samples and the future countermeasures that might mitigate them.",
"title": ""
},
{
"docid": "98f8994f1ad9315f168878ff40c29afc",
"text": "OBJECTIVE\nSuicide remains a major global public health issue for young people. The reach and accessibility of online and social media-based interventions herald a unique opportunity for suicide prevention. To date, the large body of research into suicide prevention has been undertaken atheoretically. This paper provides a rationale and theoretical framework (based on the interpersonal theory of suicide), and draws on our experiences of developing and testing online and social media-based interventions.\n\n\nMETHOD\nThe implementation of three distinct online and social media-based intervention studies, undertaken with young people at risk of suicide, are discussed. We highlight the ways that these interventions can serve to bolster social connectedness in young people, and outline key aspects of intervention implementation and moderation.\n\n\nRESULTS\nInsights regarding the implementation of these studies include careful protocol development mindful of risk and ethical issues, establishment of suitably qualified teams to oversee development and delivery of the intervention, and utilisation of key aspects of human support (i.e., moderation) to encourage longer-term intervention engagement.\n\n\nCONCLUSIONS\nOnline and social media-based interventions provide an opportunity to enhance feelings of connectedness in young people, a key component of the interpersonal theory of suicide. Our experience has shown that such interventions can be feasibly and safely conducted with young people at risk of suicide. Further studies, with controlled designs, are required to demonstrate intervention efficacy.",
"title": ""
},
{
"docid": "36af986f61252f221a8135e80fe6432d",
"text": "This chapter considers a set of questions at the interface of the study of intuitive theories, causal knowledge, and problems of inductive inference. By an intuitive theory, we mean a cognitive structure that in some important ways is analogous to a scientific theory. It is becoming broadly recognized that intuitive theories play essential roles in organizing our most basic knowledge of the world, particularly for causal structures in physical, biological, psychological or social domains (Atran, 1995; Carey, 1985a; Kelley, 1973; McCloskey, 1983; Murphy & Medin, 1985; Nichols & Stich, 2003). A principal function of intuitive theories in these domains is to support the learning of new causal knowledge: generating and constraining people’s hypotheses about possible causal relations, highlighting variables, actions and observations likely to be informative about those hypotheses, and guiding people’s interpretation of the data they observe (Ahn & Kalish, 2000; Pazzani, 1987; Pazzani, Dyer & Flowers, 1986; Waldmann, 1996). Leading accounts of cognitive development argue for the importance of intuitive theories in children’s mental lives and frame the major transitions of cognitive development as instances of theory change (Carey, 1985a; Gopnik & Meltzoff, 1997; Inagaki & Hatano 2002; Wellman & Gelman, 1992). Here we attempt to lay out some prospects for understanding the structure, function, and acquisition of intuitive theories from a rational computational perspective. From this viewpoint, theory-like representations are not just a convenient way of summarizing certain aspects of human knowledge. They provide crucial foundations for successful learning and reasoning, and we want to understand how they do so. With this goal in mind, we focus on",
"title": ""
},
{
"docid": "45f120b05b3c48cd95d5dd55031987cb",
"text": "n engl j med 359;6 www.nejm.org august 7, 2008 628 From the Department of Medicine (O.O.F., E.S.A.) and the Division of Infectious Diseases (P.A.M.), Johns Hopkins Bayview Medical Center, Johns Hopkins School of Medicine, Baltimore; the Division of Infectious Diseases (D.R.K.) and the Division of General Medicine (S.S.), University of Michigan Medical School, Ann Arbor; and the Department of Veterans Affairs Health Services Research and Development Center of Excellence, Ann Arbor, MI (S.S.). Address reprint requests to Dr. Antonarakis at the Johns Hopkins Bayview Medical Center, Department of Medicine, B-1 North, 4940 Eastern Ave., Baltimore, MD 21224, or at eantona1@ jhmi.edu.",
"title": ""
},
{
"docid": "d11a113fdb0a30e2b62466c641e49d6d",
"text": "Apache Spark has emerged as the de facto framework for big data analytics with its advanced in-memory programming model and upper-level libraries for scalable machine learning, graph analysis, streaming and structured data processing. It is a general-purpose cluster computing framework with language-integrated APIs in Scala, Java, Python and R. As a rapidly evolving open source project, with an increasing number of contributors from both academia and industry, it is difficult for researchers to comprehend the full body of development and research behind Apache Spark, especially those who are beginners in this area. In this paper, we present a technical review on big data analytics using Apache Spark. This review focuses on the key components, abstractions and features of Apache Spark. More specifically, it shows what Apache Spark has for designing and implementing big data algorithms and pipelines for machine learning, graph analysis and stream processing. In addition, we highlight some research and development directions on Apache Spark for big data analytics.",
"title": ""
},
{
"docid": "5e124199283b333e9b12877fd69dd051",
"text": "One of the major concerns of Integrated Traffic Management System (ITMS) in India is the identification of vehicles violating the stop-line at a road crossing. A large number of Indian vehicles do not stop at the designated stop-line and pose serious threat to the pedestrians crossing the roads. The current work reports the technicalities of the i $$ i $$ LPR (Indian License Plate Recognition) system implemented at five busy road-junctions in one populous metro city in India. The designed system is capable of localizing single line and two-line license plates of various sizes and shapes, recognizing characters of standard/ non-standard fonts and performing seamlessly in varying weather conditions. The performance of the system is evaluated with a large database of images for different environmental conditions. We have published a limited database of Indian vehicle images in http://code.google.com/p/cmaterdb/ for non-commercial use by fellow researchers. Despite unparallel complexity in the Indian city-traffic scenario, we have achieved around 92 % plate localization accuracy and 92.75 % plate level recognition accuracy over the localized vehicle images.",
"title": ""
},
{
"docid": "9cb28706a45251e3d2fb5af64dd9351f",
"text": "This article proposes an informational perspective on comparison consequences in social judgment. It is argued that to understand the variable consequences of comparison, one has to examine what target knowledge is activated during the comparison process. These informational underpinnings are conceptualized in a selective accessibility model that distinguishes 2 fundamental comparison processes. Similarity testing selectively makes accessible knowledge indicating target-standard similarity, whereas dissimilarity testing selectively makes accessible knowledge indicating target-standard dissimilarity. These respective subsets of target knowledge build the basis for subsequent target evaluations, so that similarity testing typically leads to assimilation whereas dissimilarity testing typically leads to contrast. The model is proposed as a unifying conceptual framework that integrates diverse findings on comparison consequences in social judgment.",
"title": ""
}
] | scidocsrr |
3387d0ddea6ff80834f71a31b8234ee0 | The Scyther Tool: Verification, Falsification, and Analysis of Security Protocols | [
{
"docid": "7d634a9abe92990de8cb41a78c25d2cc",
"text": "We present a new automatic cryptographic protocol verifier based on a simple representation of the protocol by Prolog rules, and on a new efficient algorithm that determines whether a fact can be proved from these rules or not. This verifier proves secrecy properties of the protocols. Thanks to its use of unification, it avoids the problem of the state space explosion. Another advantage is that we do not need to limit the number of runs of the protocol to analyze it. We have proved the correctness of our algorithm, and have implemented it. The experimental results show that many examples of protocols of the literature, including Skeme [24], can be analyzed by our tool with very small resources: the analysis takes from less than 0.1 s for simple protocols to 23 s for the main mode of Skeme. It uses less than 2 Mb of memory in our tests.",
"title": ""
}
] | [
{
"docid": "8d0221daae5933760698b8f4f7943870",
"text": "We introduce a novel, online method to predict pedestrian trajectories using agent-based velocity-space reasoning for improved human-robot interaction and collision-free navigation. Our formulation uses velocity obstacles to model the trajectory of each moving pedestrian in a robot’s environment and improves the motion model by adaptively learning relevant parameters based on sensor data. The resulting motion model for each agent is computed using statistical inferencing techniques, including a combination of Ensemble Kalman filters and a maximum-likelihood estimation algorithm. This allows a robot to learn individual motion parameters for every agent in the scene at interactive rates. We highlight the performance of our motion prediction method in real-world crowded scenarios, compare its performance with prior techniques, and demonstrate the improved accuracy of the predicted trajectories. We also adapt our approach for collision-free robot navigation among pedestrians based on noisy data and highlight the results in our simulator.",
"title": ""
},
{
"docid": "57c705e710f99accab3d9242fddc5ac8",
"text": "Although much research has been conducted in the area of organizational commitment, few studies have explicitly examined how organizations facilitate commitment among members. Using a sample of 291 respondents from 45 firms, the results of this study show that rigorous recruitment and selection procedures and a strong, clear organizational value system are associated with higher levels of employee commitment based on internalization and identification. Strong organizational career and reward systems are related to higher levels of instrumental or compliance-based commitment.",
"title": ""
},
{
"docid": "605201e9b3401149da7e0e22fdbc908b",
"text": "Roadway traffic safety is a major concern for transportation governing agencies as well as ordinary citizens. In order to give safe driving suggestions, careful analysis of roadway traffic data is critical to find out variables that are closely related to fatal accidents. In this paper we apply statistics analysis and data mining algorithms on the FARS Fatal Accident dataset as an attempt to address this problem. The relationship between fatal rate and other attributes including collision manner, weather, surface condition, light condition, and drunk driver were investigated. Association rules were discovered by Apriori algorithm, classification model was built by Naive Bayes classifier, and clusters were formed by simple K-means clustering algorithm. Certain safety driving suggestions were made based on statistics, association rules, classification model, and clusters obtained.",
"title": ""
},
{
"docid": "9d82ce8e6630a9432054ed97752c7ec6",
"text": "Development is the powerful process involving a genome in the transformation from one egg cell to a multicellular organism with many cell types. The dividing cells manage to organize and assign themselves special, differentiated roles in a reliable manner, creating a spatio-temporal pattern and division of labor. This despite the fact that little positional information may be available to them initially to guide this patterning. Inspired by a model of developmental biologist L. Wolpert, we simulate this situation in an evolutionary setting where individuals have to grow into “French flag” patterns. The cells in our model exist in a 2-layer Potts model physical environment. Controlled by continuous genetic regulatory networks, identical for all cells of one individual, the cells can individually differ in parameters including target volume, shape, orientation, and diffusion. Intercellular communication is possible via secretion and sensing of diffusing morphogens. Evolved individuals growing from a single cell can develop the French flag pattern by setting up and maintaining asymmetric morphogen gradients – a behavior predicted by several theoretical models.",
"title": ""
},
{
"docid": "d639f6b922e24aca7229ce561e852b31",
"text": "As digital video becomes more pervasive, e cient ways of searching and annotating video according to content will be increasingly important. Such tasks arise, for example, in the management of digital video libraries for content-based retrieval and browsing. In this paper, we develop tools based on camera motion for analyzing and annotating a class of structured video using the low-level information available directly from MPEG compressed video. In particular, we show that in certain structured settings it is possible to obtain reliable estimates of camera motion by directly processing data easily obtained from the MPEG format. Working directly with the compressed video greatly reduces the processing time and enhances storage e ciency. As an illustration of this idea, we have developed a simple basketball annotation system which combines the low-level information extracted from an MPEG stream with the prior knowledge of basketball structure to provide high level content analysis, annotation and browsing for events such as wide-angle and close-up views, fast breaks, probable shots at the basket, etc. The methods used in this example should also be useful in the analysis of high-level content of structured video in other domains.",
"title": ""
},
{
"docid": "60697a4e8dd7d13147482a0992ee1862",
"text": "Static analysis of JavaScript has proven useful for a variety of purposes, including optimization, error checking, security auditing, program refactoring, and more. We propose a technique called type refinement that can improve the precision of such static analyses for JavaScript without any discernible performance impact. Refinement is a known technique that uses the conditions in branch guards to refine the analysis information propagated along each branch path. The key insight of this paper is to recognize that JavaScript semantics include many implicit conditional checks on types, and that performing type refinement on these implicit checks provides significant benefit for analysis precision.\n To demonstrate the effectiveness of type refinement, we implement a static analysis tool for reporting potential type-errors in JavaScript programs. We provide an extensive empirical evaluation of type refinement using a benchmark suite containing a variety of JavaScript application domains, ranging from the standard performance benchmark suites (Sunspider and Octane), to open-source JavaScript applications, to machine-generated JavaScript via Emscripten. We show that type refinement can significantly improve analysis precision by up to 86% without affecting the performance of the analysis.",
"title": ""
},
{
"docid": "9489210bfc8884d8290f772996629095",
"text": "Semantic interaction techniques in visual data analytics allow users to indirectly adjust model parameters by directly manipulating the visual output of the models. Many existing tools that support semantic interaction do so with a number of similar features, including using an underlying bidirectional pipeline, using a series of statistical models, and performing inverse computations to transform user interactions into model updates. We propose a visual analytics pipeline that captures these necessary features of semantic interactions. Our flexible, multi-model, bidirectional pipeline has modular functionality to enable rapid prototyping. This enables quick alterations to the type of data being visualized, models for transforming the data, semantic interaction methods, and visual encodings. To demonstrate how this pipeline can be used, we developed a series of applications that employ semantic interactions. We also discuss how the pipeline can be used or extended for future research on semantic interactions in visual analytics.",
"title": ""
},
{
"docid": "ac86e950866646a0b86d76bb3c087d0a",
"text": "In this paper, an SVM-based approach is proposed for stock market trend prediction. The proposed approach consists of two parts: feature selection and prediction model. In the feature selection part, a correlation-based SVM filter is applied to rank and select a good subset of financial indexes. And the stock indicators are evaluated based on the ranking. In the prediction model part, a so called quasi-linear SVM is applied to predict stock market movement direction in term of historical data series by using the selected subset of financial indexes as the weighted inputs. The quasi-linear SVM is an SVM with a composite quasi-linear kernel function, which approximates a nonlinear separating boundary by multi-local linear classifiers with interpolation. Experimental results on Taiwan stock market datasets demonstrate that the proposed SVM-based stock market trend prediction method produces better generalization performance over the conventional methods in terms of the hit ratio. Moreover, the experimental results also show that the proposed SVM-based stock market trend prediction system can find out a good subset and evaluate stock indicators which provide useful information for investors.",
"title": ""
},
{
"docid": "22e559b9536b375ded6516ceb93652ef",
"text": "In this paper we explore the linguistic components of toxic behavior by using crowdsourced data from over 590 thousand cases of accused toxic players in a popular match-based competition game, League of Legends. We perform a series of linguistic analyses to gain a deeper understanding of the role communication plays in the expression of toxic behavior. We characterize linguistic behavior of toxic players and compare it with that of typical players in an online competition game. We also find empirical support describing how a player transitions from typical to toxic behavior. Our findings can be helpful to automatically detect and warn players who may become toxic and thus insulate potential victims from toxic playing in advance.",
"title": ""
},
{
"docid": "5679a329a132125d697369ca4d39b93e",
"text": "This paper proposes a method to explore the design space of FinFETs with double fin heights. Our study shows that if one fin height is sufficiently larger than the other and the greatest common divisor of their equivalent transistor widths is small, the fin height pair will incur less width quantization effect and lead to better area efficiency. We design a standard cell library based on this technology using a tailored FreePDK15. With respect to a standard cell library designed with FreePDK15, about 86% of the cells designed with FinFETs of double fin heights have a smaller delay and 54% of the cells take a smaller area. We also demonstrate the advantages of FinFETs with double fin heights through chip designs using our cell library.",
"title": ""
},
{
"docid": "dca6d14c168f0836411df562444e71c5",
"text": "Obesity is a growing global health concern, with a rapid increase being observed in morbid obesity. Obesity is associated with an increased cardiovascular risk and earlier onset of cardiovascular morbidity. The growing obesity epidemic is a major source of unsustainable health costs and morbidity and mortality because of hypertension, type 2 diabetes mellitus, dyslipidemia, certain cancers and major cardiovascular diseases. Similar to obesity, hypertension is a key unfavorable health metric that has disastrous health implications: currently, hypertension is the leading contributor to global disease burden, and the direct and indirect costs of treating hypertension are exponentially higher. Poor lifestyle characteristics and health metrics often cluster together to create complex and difficult-to-treat phenotypes: excess body mass is such an example, facilitating a cascade of pathophysiological sequelae that create such as a direct obesity–hypertension link, which consequently increases cardiovascular risk. Although some significant issues regarding assessment/management of obesity remain to be addressed and the underlying mechanisms governing these disparate effects of obesity on cardiovascular disease are complex and not completely understood, a variety of factors could have a critical role. Consequently, a comprehensive and exhaustive investigation of this relationship should analyze the pathogenetic factors and pathophysiological mechanisms linking obesity to hypertension as they provide the basis for a rational therapeutic strategy in the aim to fully describe and understand the obesity–hypertension link and discuss strategies to address the potential negative consequences from the perspective of both primordial prevention and treatment for those already impacted by this condition.",
"title": ""
},
{
"docid": "be76c7f877ad43668fe411741478c43b",
"text": "With the surging of smartphone sensing, wireless networking, and mobile social networking techniques, Mobile Crowd Sensing and Computing (MCSC) has become a promising paradigm for cross-space and large-scale sensing. MCSC extends the vision of participatory sensing by leveraging both participatory sensory data from mobile devices (offline) and user-contributed data from mobile social networking services (online). Further, it explores the complementary roles and presents the fusion/collaboration of machine and human intelligence in the crowd sensing and computing processes. This article characterizes the unique features and novel application areas of MCSC and proposes a reference framework for building human-in-the-loop MCSC systems. We further clarify the complementary nature of human and machine intelligence and envision the potential of deep-fused human--machine systems. We conclude by discussing the limitations, open issues, and research opportunities of MCSC.",
"title": ""
},
{
"docid": "bd5e127cc3454bbf8a89c3f7d66fd624",
"text": "Mobile ad hoc networking (MANET) has become an exciting and important technology in recent years because of the rapid proliferation of wireless devices. MANETs are highly vulnerable to attacks due to the open medium, dynamically changing network topology, cooperative algorithms, lack of centralized monitoring and management point, and lack of a clear line of defense. In this paper, we report our progress in developing intrusion detection (ID) capabilities for MANET. Building on our prior work on anomaly detection, we investigate how to improve the anomaly detection approach to provide more details on attack types and sources. For several well-known attacks, we can apply a simple rule to identify the attack type when an anomaly is reported. In some cases, these rules can also help identify the attackers. We address the run-time resource constraint problem using a cluster-based detection scheme where periodically a node is elected as the ID agent for a cluster. Compared with the scheme where each node is its own ID agent, this scheme is much more efficient while maintaining the same level of effectiveness. We have conducted extensive experiments using the ns-2 and MobiEmu environments to validate our research.",
"title": ""
},
{
"docid": "1e8acf321f7ff3a1a496e4820364e2a8",
"text": "The liver is a central regulator of metabolism, and liver failure thus constitutes a major health burden. Understanding how this complex organ develops during embryogenesis will yield insights into how liver regeneration can be promoted and how functional liver replacement tissue can be engineered. Recent studies of animal models have identified key signaling pathways and complex tissue interactions that progressively generate liver progenitor cells, differentiated lineages and functional tissues. In addition, progress in understanding how these cells interact, and how transcriptional and signaling programs precisely coordinate liver development, has begun to elucidate the molecular mechanisms underlying this complexity. Here, we review the lineage relationships, signaling pathways and transcriptional programs that orchestrate hepatogenesis.",
"title": ""
},
{
"docid": "147b207125fcda1dece25a6c5cd17318",
"text": "In this paper we present a neural network based system for automated e-mail filing into folders and antispam filtering. The experiments show that it is more accurate than several other techniques. We also investigate the effects of various feature selection, weighting and normalization methods, and also the portability of the anti-spam filter across different users.",
"title": ""
},
{
"docid": "d2c0e71db2957621eca42bdc221ffb8f",
"text": "Financial time sequence analysis has been a popular research topic in the field of finance, data science and machine learning. It is a highly challenging due to the extreme complexity within the sequences. Mostly existing models are failed to capture its intrinsic information, factor and tendency. To improve the previous approaches, in this paper, we propose a Hidden Markov Model (HMMs) based approach to analyze the financial time sequence. The fluctuation of financial time sequence was predicted through introducing a dual-state HMMs. Dual-state HMMs models the sequence and produces the features which will be delivered to SVMs for prediction. Note that we cast a financial time sequence prediction problem to a classification problem. To evaluate the proposed approach, we use Shanghai Composite Index as the dataset for empirically experiments. The dataset was collected from 550 consecutive trading days, and is randomly split to the training set and test set. The extensively experimental results show that: when analyzing financial time sequence, the mean-square error calculated with HMMs was obviously smaller error than the compared GARCH approach. Therefore, when using HMM to predict the fluctuation of financial time sequence, it achieves higher accuracy and exhibits several attractive advantageous over GARCH approach.",
"title": ""
},
{
"docid": "1c0eaeea7e1bfc777bb6e391eb190b59",
"text": "We review machine learning (ML)-based optical performance monitoring (OPM) techniques in optical communications. Recent applications of ML-assisted OPM in different aspects of fiber-optic networking including cognitive fault detection and management, network equipment failure prediction, and dynamic planning and optimization of software-defined networks are also discussed.",
"title": ""
},
{
"docid": "6c62e51d723d523fa286e94d3037a76f",
"text": "Stochastic programming can effectively describe many deci sion making problems in uncertain environments. Unfortunately, such programs are often computationally demanding to solve. In addition, their solution can be misleading when there is ambiguity in the choice of a distribution for the ran dom parameters. In this paper, we propose a model that describes uncertainty in both the distribution form (discr ete, Gaussian, exponential, etc.) and moments (mean and cov ariance matrix). We demonstrate that for a wide range of cost fun ctio s the associated distributionally robust (or min-max ) stochastic program can be solved efficiently. Furthermore, by deriving a new confidence region for the mean and the covariance matrix of a random vector, we provide probabilis tic arguments for using our model in problems that rely heavily on historical data. These arguments are confirmed in a pra ctical example of portfolio selection, where our framework leads to better performing policies on the “true” distribut on underlying the daily returns of financial assets.",
"title": ""
},
{
"docid": "2da214ec8cd7e2380c0ee17adc3ad9fb",
"text": "Machine intelligence is an important problem to be solved for artificial intelligence to be truly impactful in our lives. While many question answering models have been explored for existing machine comprehension datasets, there has been little work with the newly released MS Marco dataset, which poses many unique challenges. We explore an end-to-end neural architecture with attention mechanisms capable of comprehending relevant information and generating text answers for MS Marco.",
"title": ""
},
{
"docid": "10fd41c0ff246545ceab663b9d9b3853",
"text": "Because structural equation modeling (SEM) has become a very popular data-analytic technique, it is important for clinical scientists to have a balanced perception of its strengths and limitations. We review several strengths of SEM, with a particular focus on recent innovations (e.g., latent growth modeling, multilevel SEM models, and approaches for dealing with missing data and with violations of normality assumptions) that underscore how SEM has become a broad data-analytic framework with flexible and unique capabilities. We also consider several limitations of SEM and some misconceptions that it tends to elicit. Major themes emphasized are the problem of omitted variables, the importance of lower-order model components, potential limitations of models judged to be well fitting, the inaccuracy of some commonly used rules of thumb, and the importance of study design. Throughout, we offer recommendations for the conduct of SEM analyses and the reporting of results.",
"title": ""
}
] | scidocsrr |
d0c87d798ac1ff9a5a967e9dcefe81f7 | Chinese Preposition Selection for Grammatical Error Diagnosis | [
{
"docid": "aa80366addac8af9cc5285f98663b9b6",
"text": "Automatic detection of sentence errors is an important NLP task and is valuable to assist foreign language learners. In this paper, we investigate the problem of word ordering errors in Chinese sentences and propose classifiers to detect this type of errors. Word n-gram features in Google Chinese Web 5-gram corpus and ClueWeb09 corpus, and POS features in the Chinese POStagged ClueWeb09 corpus are adopted in the classifiers. The experimental results show that integrating syntactic features, web corpus features and perturbation features are useful for word ordering error detection, and the proposed classifier achieves 71.64% accuracy in the experimental datasets. 協助非中文母語學習者偵測中文句子語序錯誤 自動偵測句子錯誤是自然語言處理研究一項重要議題,對於協助外語學習者很有價值。在 這篇論文中,我們研究中文句子語序錯誤的問題,並提出分類器來偵測這種類型的錯誤。 在分類器中我們使用的特徵包括:Google 中文網路 5-gram 語料庫、與 ClueWeb09 語料庫 的中文詞彙 n-grams及中文詞性標注特徵。實驗結果顯示,整合語法特徵、網路語料庫特 徵、及擾動特徵對偵測中文語序錯誤有幫助。在實驗所用的資料集中,合併使用這些特徵 所得的分類器效能可達 71.64%。",
"title": ""
}
] | [
{
"docid": "ab157111a39a4f081bdf0126e869f65d",
"text": "As event-related brain potential (ERP) researchers have increased the number of recording sites, they have gained further insights into the electrical activity in the neural networks underlying explicit memory. A review of the results of such ERP mapping studies suggests that there is good correspondence between ERP results and those from brain imaging studies that map hemodynamic changes. This concordance is important because the combination of the high temporal resolution of ERPs with the high spatial resolution of hemodynamic imaging methods will provide a greatly increased understanding of the spatio-temporal dynamics of the brain networks that encode and retrieve explicit memories.",
"title": ""
},
{
"docid": "0ecb00d99dc497a0e902cda198219dff",
"text": "Security vulnerabilities typically arise from bugs in input validation and in the application logic. Fuzz-testing is a popular security evaluation technique in which hostile inputs are crafted and passed to the target software in order to reveal bugs. However, in the case of SCADA systems, the use of proprietary protocols makes it difficult to apply existing fuzz-testing techniques as they work best when the protocol semantics are known, targets can be instrumented and large network traces are available. This paper describes a fuzz-testing solution involving LZFuzz, an inline tool that provides a domain expert with the ability to effectively fuzz SCADA devices.",
"title": ""
},
{
"docid": "8a3d5500299676e160f661d87c13d617",
"text": "A novel method for visual place recognition is introduced and evaluated, demonstrating robustness to perceptual aliasing and observation noise. This is achieved by increasing discrimination through a more structured representation of visual observations. Estimation of observation likelihoods are based on graph kernel formulations, utilizing both the structural and visual information encoded in covisibility graphs. The proposed probabilistic model is able to circumvent the typically difficult and expensive posterior normalization procedure by exploiting the information available in visual observations. Furthermore, the place recognition complexity is independent of the size of the map. Results show improvements over the state-of-theart on a diverse set of both public datasets and novel experiments, highlighting the benefit of the approach.",
"title": ""
},
{
"docid": "3c444d8918a31831c2dc73985d511985",
"text": "This paper presents methods for collecting and analyzing physiological data during real-world driving tasks to determine a driver's relative stress level. Electrocardiogram, electromyogram, skin conductance, and respiration were recorded continuously while drivers followed a set route through open roads in the greater Boston area. Data from 24 drives of at least 50-min duration were collected for analysis. The data were analyzed in two ways. Analysis I used features from 5-min intervals of data during the rest, highway, and city driving conditions to distinguish three levels of driver stress with an accuracy of over 97% across multiple drivers and driving days. Analysis II compared continuous features, calculated at 1-s intervals throughout the entire drive, with a metric of observable stressors created by independent coders from videotapes. The results show that for most drivers studied, skin conductivity and heart rate metrics are most closely correlated with driver stress level. These findings indicate that physiological signals can provide a metric of driver stress in future cars capable of physiological monitoring. Such a metric could be used to help manage noncritical in-vehicle information systems and could also provide a continuous measure of how different road and traffic conditions affect drivers.",
"title": ""
},
{
"docid": "76151ea99f24bb16f98bf7793f253002",
"text": "The banning in 2006 of the use of antibiotics as animal growth promoters in the European Union has increased demand from producers for alternative feed additives that can be used to improve animal production. This review gives an overview of the most common non-antibiotic feed additives already being used or that could potentially be used in ruminant nutrition. Probiotics, dicarboxylic acids, enzymes and plant-derived products including saponins, tannins and essential oils are presented. The known modes of action and effects of these additives on feed digestion and more especially on rumen fermentations are described. Their utility and limitations in field conditions for modern ruminant production systems and their compliance with the current legislation are also discussed.",
"title": ""
},
{
"docid": "19b96cd469f1b81e45cf11a0530651a8",
"text": "only Painful initially, patient preference No cost Digitation Pilot RCTs 28 Potential risk of premature closure No cost Open wound (fig 4⇓) RCT = randomised controlled trial. For personal use only: See rights and reprints http://www.bmj.com/permissions Subscribe: http://www.bmj.com/subscribe BMJ 2017;356:j475 doi: 10.1136/bmj.j475 (Published 2017 February 21) Page 4 of 6",
"title": ""
},
{
"docid": "2fbe9db6c676dd64c95e72e8990c63f0",
"text": "Community detection is one of the most important problems in the field of complex networks in recent years. Themajority of present algorithms only find disjoint communities, however, community often overlap to some extent in many real-world networks. In this paper, an improvedmulti-objective quantum-behaved particle swarm optimization (IMOQPSO) based on spectral-clustering is proposed to detect the overlapping community structure in complex networks. Firstly, the line graph of the graph modeling the network is formed, and a spectral method is employed to extract the spectral information of the line graph. Secondly, IMOQPSO is employed to solve the multi-objective optimization problem so as to resolve the separated community structure in the line graph which corresponding to the overlapping community structure in the graph presenting the network. Finally, a fine-tuning strategy is adopted to improve the accuracy of community detection. The experiments on both synthetic and real-world networks demonstrate our method achieves cover results which fit the real situation in an even better fashion.",
"title": ""
},
{
"docid": "19977bf55573bb1d51a85b0a2febba2b",
"text": "In the general 3D scene, the correlation of depth image and corresponding color image exists, so many filtering methods have been proposed to improve the quality of depth images according to this correlation. Unlike the conventional methods, in this paper both depth and color information can be jointly employed to improve the quality of compressed depth image by the way of iterative guidance. Firstly, due to noises and blurring in the compressed image, a depth pre-filtering method is essential to remove artifact noises. Considering that the received geometry structure in the distorted depth image is more reliable than its color image, the color information is merged with depth image to get depth-merged color image. Then the depth image and its corresponding depth-merged color image can be used to refine the quality of the distorted depth image using joint iterative guidance filtering method. Therefore, the efficient depth structural information included in the distorted depth images are preserved relying on depth itself, while the corresponding color structural information are employed to improve the quality of depth image. We demonstrate the efficiency of the proposed filtering method by comparing objective and visual quality of the synthesized image with many existing depth filtering methods.",
"title": ""
},
{
"docid": "c174facf9854db5aae149e82f9f2a239",
"text": "A new feeding technique for printed Log-periodic dipole arrays (LPDAs) is presented, and used to design a printed LPDA operating between 4 and 18 GHz. The antenna has been designed using CST MICROWAVE STUDIO 2010, and the simulation results show that the antenna can be used as an Ultra Wideband Antenna in the range 6-9 GHz.",
"title": ""
},
{
"docid": "ee027c9ee2f66bc6cf6fb32a5697ee49",
"text": "Patellofemoral pain (PFP) is a very common problem in athletes who participate in jumping, cutting and pivoting sports. Several risk factors may play a part in the pathogenesis of PFP. Overuse, trauma and intrinsic risk factors are particularly important among athletes. Physical examination has a key role in PFP diagnosis. Furthermore, common risk factors should be investigated, such as hip muscle dysfunction, poor core muscle endurance, muscular tightness, excessive foot pronation and patellar malalignment. Imaging is seldom needed in special cases. Many possible interventions are recommended for PFP management. Due to the multifactorial nature of PFP, the clinical approach should be individualized, and the contribution of different factors should be considered and managed accordingly. In most cases, activity modification and rehabilitation should be tried before any surgical interventions.",
"title": ""
},
{
"docid": "296ce1f0dd7bf02c8236fa858bb1957c",
"text": "As many as one in 20 people in Europe and North America have some form of autoimmune disease. These diseases arise in genetically predisposed individuals but require an environmental trigger. Of the many potential environmental factors, infections are the most likely cause. Microbial antigens can induce cross-reactive immune responses against self-antigens, whereas infections can non-specifically enhance their presentation to the immune system. The immune system uses fail-safe mechanisms to suppress infection-associated tissue damage and thus limits autoimmune responses. The association between infection and autoimmune disease has, however, stimulated a debate as to whether such diseases might also be triggered by vaccines. Indeed there are numerous claims and counter claims relating to such a risk. Here we review the mechanisms involved in the induction of autoimmunity and assess the implications for vaccination in human beings.",
"title": ""
},
{
"docid": "045ce09ddca696e2882413a8d251c5f6",
"text": "Predicting student performance in tertiary institutions has potential to improve curriculum advice given to students, the planning of interventions for academic support and monitoring and curriculum design. The student performance prediction problem, as defined in this study, is the prediction of a student's mark for a module, given the student's performance in previously attempted modules. The prediction problem is amenable to machine learning techniques, provided that sufficient data is available for analysis. This work reports on a study undertaken at the College of Agriculture, Engineering and Science at University of KwaZulu- Natal that investigates the efficacy of Matrix Factorization as a technique for solving the prediction problem. The study uses Singular Value Decomposition (SVD), a Matrix Factorization technique that has been successfully used in recommender systems. The performance of the technique was benchmarked against the use of student and course average marks as predictors of performance. The results obtained suggests that Matrix Factorization performs better than both benchmarks.",
"title": ""
},
{
"docid": "d90acdfc572cf39d295cb78dd313e5f5",
"text": "The TORCS racing simulator has become a standard testbed used in many recent reinforcement learning competitions, where an agent must learn to drive a car around a track using a small set of task-specific features. In this paper, large, recurrent neural networks (with over 1 million weights) are evolved to solve a much more challenging version of the task that instead uses only a stream of images from the driver’s perspective as input. Evolving such large nets is made possible by representing them in the frequency domain as a set of coefficients that are transformed into weight matrices via an inverse Fourier-type transform. To our knowledge this is the first attempt to tackle TORCS using vision, and successfully evolve a neural network controllers of this size.",
"title": ""
},
{
"docid": "2bd5dd2d220d3715be8228050593c4ca",
"text": "We present a sensitivity analysis-based method for explaining prediction models that can be applied to any type of classification or regression model. Its advantage over existing general methods is that all subsets of input features are perturbed, so interactions and redundancies between features are taken into account. Furthermore, when explaining an additive model, the method is equivalent to commonly used additive model-specific methods. We illustrate the method’s usefulness with examples from artificial and real-world data sets and an empirical analysis of running times. Results from a controlled experiment with 122 participants suggest that the method’s explanations improved the participants’ understanding of the model.",
"title": ""
},
{
"docid": "f79eca0cafc35ed92fd8ffd2e7a4ab60",
"text": "We investigate the novel task of online dispute detection and propose a sentiment analysis solution to the problem: we aim to identify the sequence of sentence-level sentiments expressed during a discussion and to use them as features in a classifier that predicts the DISPUTE/NON-DISPUTE label for the discussion as a whole. We evaluate dispute detection approaches on a newly created corpus of Wikipedia Talk page disputes and find that classifiers that rely on our sentiment tagging features outperform those that do not. The best model achieves a very promising F1 score of 0.78 and an accuracy of 0.80.",
"title": ""
},
{
"docid": "aa1c565018371cf12e703e06f430776b",
"text": "We propose a graph-based semantic model for representing document content. Our method relies on the use of a semantic network, namely the DBpedia knowledge base, for acquiring fine-grained information about entities and their semantic relations, thus resulting in a knowledge-rich document model. We demonstrate the benefits of these semantic representations in two tasks: entity ranking and computing document semantic similarity. To this end, we couple DBpedia's structure with an information-theoretic measure of concept association, based on its explicit semantic relations, and compute semantic similarity using a Graph Edit Distance based measure, which finds the optimal matching between the documents' entities using the Hungarian method. Experimental results show that our general model outperforms baselines built on top of traditional methods, and achieves a performance close to that of highly specialized methods that have been tuned to these specific tasks.",
"title": ""
},
{
"docid": "9766e0507346e46e24790a4873979aa4",
"text": "Extreme learning machine (ELM) is proposed for solving a single-layer feed-forward network (SLFN) with fast learning speed and has been confirmed to be effective and efficient for pattern classification and regression in different fields. ELM originally focuses on the supervised, semi-supervised, and unsupervised learning problems, but just in the single domain. To our best knowledge, ELM with cross-domain learning capability in subspace learning has not been exploited very well. Inspired by a cognitive-based extreme learning machine technique (Cognit Comput. 6:376–390, 1; Cognit Comput. 7:263–278, 2.), this paper proposes a unified subspace transfer framework called cross-domain extreme learning machine (CdELM), which aims at learning a common (shared) subspace across domains. Three merits of the proposed CdELM are included: (1) A cross-domain subspace shared by source and target domains is achieved based on domain adaptation; (2) ELM is well exploited in the cross-domain shared subspace learning framework, and a new perspective is brought for ELM theory in heterogeneous data analysis; (3) the proposed method is a subspace learning framework and can be combined with different classifiers in recognition phase, such as ELM, SVM, nearest neighbor, etc. Experiments on our electronic nose olfaction datasets demonstrate that the proposed CdELM method significantly outperforms other compared methods.",
"title": ""
},
{
"docid": "9920660432c2a2cf1f83ed6b8412b433",
"text": "We propose a new approach for metric learning by framing it as learning a sparse combination of locally discriminative metrics that are inexpensive to generate from the training data. This flexible framework allows us to naturally derive formulations for global, multi-task and local metric learning. The resulting algorithms have several advantages over existing methods in the literature: a much smaller number of parameters to be estimated and a principled way to generalize learned metrics to new testing data points. To analyze the approach theoretically, we derive a generalization bound that justifies the sparse combination. Empirically, we evaluate our algorithms on several datasets against state-of-theart metric learning methods. The results are consistent with our theoretical findings and demonstrate the superiority of our approach in terms of classification performance and scalability.",
"title": ""
},
{
"docid": "9cc997e886bea0ac5006c9ee734b7906",
"text": "Additive manufacturing technology using inkjet offers several improvements to electronics manufacturing compared to current nonadditive masking technologies. Manufacturing processes can be made more efficient, straightforward and flexible compared to subtractive masking processes, several time-consuming and expensive steps can be omitted. Due to the additive process, material loss is minimal, because material is never removed as with etching processes. The amounts of used material and waste are smaller, which is advantageous in both productivity and environmental means. Furthermore, the additive inkjet manufacturing process is flexible allowing fast prototyping, easy design changes and personalization of products. Additive inkjet processing offers new possibilities to electronics integration, by enabling direct writing on various surfaces, and component interconnection without a specific substrate. The design and manufacturing of inkjet printed modules differs notably from the traditional way to manufacture electronics. In this study a multilayer inkjet interconnection process to integrate functional systems was demonstrated, and the issues regarding the design and manufacturing were considered. r 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "f4be66419d03715ca686bea9665bf734",
"text": "Data augmentation is a key element in training high-dimensional models. In this approach, one synthesizes new observations by applying pre-specified transformations to the original training data; e.g. new images are formed by rotating old ones. Current augmentation schemes, however, rely on manual specification of the applied transformations, making data augmentation an implicit form of feature engineering. With an eye towards true end-to-end learning, we suggest learning the applied transformations on a per-class basis. Particularly, we align image pairs within each class under the assumption that the spatial transformation between images belongs to a large class of diffeomorphisms. We then learn a class-specific probabilistic generative models of the transformations in a Riemannian submanifold of the Lie group of diffeomorphisms. We demonstrate significant performance improvements in training deep neural nets over manually-specified augmentation schemes. Our code and augmented datasets are available online. Appearing in Proceedings of the 19 International Conference on Artificial Intelligence and Statistics (AISTATS) 2016, Cadiz, Spain. JMLR: W&CP volume 41. Copyright 2016 by the authors.",
"title": ""
}
] | scidocsrr |
e1340c9d28265bce016b4422fc1d0ecc | Multiagent Reinforcement Learning for Integrated Network of Adaptive Traffic Signal Controllers (MARLIN-ATSC): Methodology and Large-Scale Application on Downtown Toronto | [
{
"docid": "931e6f034abd1a3004d021492382a47a",
"text": "SARSA (Sutton, 1996) is applied to a simulated, traac-light control problem (Thorpe, 1997) and its performance is compared with several, xed control strategies. The performance of SARSA with four diierent representations of the current state of traac is analyzed using two reinforcement schemes. Training on one intersection is compared to, and is as eeective as training on all intersections in the environment. SARSA is shown to be better than xed-duration light timing and four-way stops for minimizing total traac travel time, individual vehicle travel times, and vehicle wait times. Comparisons of performance using a constant reinforcement function versus a variable reinforcement function dependent on the number of vehicles at an intersection showed that the variable reinforcement resulted in slightly improved performance for some cases.",
"title": ""
}
] | [
{
"docid": "7933e531385d90a6b485abe155f06e3a",
"text": "We propose a localized approach to multiple kernel learning that can be formulated as a convex optimization problem over a given cluster structure. For which we obtain generalization error guarantees and derive an optimization algorithm based on the Fenchel dual representation. Experiments on real-world datasets from the application domains of computational biology and computer vision show that convex localized multiple kernel learning can achieve higher prediction accuracies than its global and non-convex local counterparts.",
"title": ""
},
{
"docid": "7ebff2391401cef25b27d510675e9acd",
"text": "We present a new approach for modeling multi-modal data sets, focusing on the specific case of segmented images with associated text. Learning the joint distribution of image regions and words has many applications. We consider in detail predicting words associated with whole images (auto-annotation) and corresponding to particular image regions (region naming). Auto-annotation might help organize and access large collections of images. Region naming is a model of object recognition as a process of translating image regions to words, much as one might translate from one language to another. Learning the relationships between image regions and semantic correlates (words) is an interesting example of multi-modal data mining, particularly because it is typically hard to apply data mining techniques to collections of images. We develop a number of models for the joint distribution of image regions and words, including several which explicitly learn the correspondence between regions and words. We study multi-modal and correspondence extensions to Hofmann’s hierarchical clustering/aspect model, a translation model adapted from statistical machine translation (Brown et al.), and a multi-modal extension to mixture of latent Dirichlet allocation (MoM-LDA). All models are assessed using a large collection of annotated images of real c ©2003 Kobus Barnard, Pinar Duygulu, David Forsyth, Nando de Freitas, David Blei and Michael Jordan. BARNARD, DUYGULU, FORSYTH, DE FREITAS, BLEI AND JORDAN scenes. We study in depth the difficult problem of measuring performance. For the annotation task, we look at prediction performance on held out data. We present three alternative measures, oriented toward different types of task. Measuring the performance of correspondence methods is harder, because one must determine whether a word has been placed on the right region of an image. We can use annotation performance as a proxy measure, but accurate measurement requires hand labeled data, and thus must occur on a smaller scale. We show results using both an annotation proxy, and manually labeled data.",
"title": ""
},
{
"docid": "7210c2e82441b142f722bcc01bfe9aca",
"text": "In the beginning of the last decade, agile methodologies emerged as a response to software development processes that were based on rigid approaches. In fact, the flexible characteristics of agile methods are expected to be suitable to the less-defined and uncertain nature of software development. However, many studies in this area lack empirical evaluation in order to provide more confident evidences about which contexts the claims are true. This paper reports an empirical study performed to analyze the impact of Scrum adoption on customer satisfaction as an external success perspective for software development projects in a software intensive organization. The study uses data from real-life projects executed in a major software intensive organization located in a nation wide software ecosystem. The empirical method applied was a cross-sectional survey using a sample of 19 real-life software development projects involving 156 developers. The survey aimed to determine whether there is any impact on customer satisfaction caused by the Scrum adoption. However, considering that sample, our results indicate that it was not possible to establish any evidence that using Scrum may help to achieve customer satisfaction and, consequently, increase the success rates in software projects, in contrary to general claims made by Scrum's advocates.",
"title": ""
},
{
"docid": "7c5d0139d729ad6f90332a9d1cd28f70",
"text": "Cloud based ERP system architecture provides solutions to all the difficulties encountered by conventional ERP systems. It provides flexibility to the existing ERP systems and improves overall efficiency. This paper aimed at comparing the performance traditional ERP systems with cloud base ERP architectures. The challenges before the conventional ERP implementations are analyzed. All the main aspects of an ERP systems are compared with cloud based approach. The distinct advantages of cloud ERP are explained. The difficulties in cloud architecture are also mentioned.",
"title": ""
},
{
"docid": "cec6b4d1e547575a91bdd7e852ecbc3c",
"text": "The apps installed on a smartphone can reveal much information about a user, such as their medical conditions, sexual orientation, or religious beliefs. In addition, the presence or absence of particular apps on a smartphone can inform an adversary, who is intent on attacking the device. In this paper, we show that a passive eavesdropper can feasibly identify smartphone apps by fingerprinting the network traffic that they send. Although SSL/TLS hides the payload of packets, side-channel data, such as packet size and direction is still leaked from encrypted connections. We use machine learning techniques to identify smartphone apps from this side-channel data. In addition to merely fingerprinting and identifying smartphone apps, we investigate how app fingerprints change over time, across devices, and across different versions of apps. In addition, we introduce strategies that enable our app classification system to identify and mitigate the effect of ambiguous traffic, i.e., traffic in common among apps, such as advertisement traffic. We fully implemented a framework to fingerprint apps and ran a thorough set of experiments to assess its performance. We fingerprinted 110 of the most popular apps in the Google Play Store and were able to identify them six months later with up to 96% accuracy. Additionally, we show that app fingerprints persist to varying extents across devices and app versions.",
"title": ""
},
{
"docid": "382eec3778d98cb0c8445633c16f59ef",
"text": "In the face of acute global competition, supplier management is rapidly emerging as a crucial issue to any companies striving for business success and sustainable development. To optimise competitive advantages, a company should incorporate ‘suppliers’ as an essential part of its core competencies. Supplier evaluation, the first step in supplier management, is a complex multiple criteria decision making (MCDM) problem, and its complexity is further aggravated if the highly important interdependence among the selection criteria is taken into consideration. The objective of this paper is to suggest a comprehensive decision method for identifying top suppliers by considering the effects of interdependence among the selection criteria. Proposed in this study is a hybrid model, which incorporates the technique of analytic network process (ANP) in which criteria weights are determined using fuzzy extent analysis, Technique for order performance by similarity to ideal solution (TOPSIS) under fuzzy environment is adopted to rank competing suppliers in terms of their overall performances. An example is solved to illustrate the effectiveness and feasibility of the suggested model.",
"title": ""
},
{
"docid": "e444dcc97882005658aca256991e816e",
"text": "The terms superordinate, hyponym, and subordinate designate the hierarchical taxonomic relationship of words. They also represent categories and concepts. This relationship is a subject of interest for anthropology, cognitive psychology, psycholinguistics, linguistic semantics, and cognitive linguistics. Taxonomic hierarchies are essentially classificatory systems, and they are supposed to reflect the way that speakers of a language categorize the world of experience. A well-formed taxonomy offers an orderly and efficient set of categories at different levels of specificity (Cruse 2000:180). However, the terms and levels of taxonomic hierarchy used in each discipline vary. This makes it difficult to carry out cross-disciplinary readings on the hierarchical taxonomy of words or categories, which act as an interface in these cognitive-based cross-disciplinary ventures. Not only words— terms and concepts differ but often the nature of the problem is compounded as some terms refer to differing word classes, categories and concepts at the same time. Moreover, the lexical relationship of terms among these lexical hierarchies is far from clear. As a result two lines of thinking can be drawn from the literature: (1) technical terms coined for the hierarchical relationship of words are conflicting and do not reflect reality or environment, and (2) the relationship among these hierarchies of word levels and the underlying principles followed to explain them are uncertain except that of inclusion.",
"title": ""
},
{
"docid": "b6fdde5d6baeb546fd55c749af14eec1",
"text": "Action recognition is an important research problem of human motion analysis (HMA). In recent years, 3D observation-based action recognition has been receiving increasing interest in the multimedia and computer vision communities, due to the recent advent of cost-effective sensors, such as depth camera Kinect. This work takes this one step further, focusing on early recognition of ongoing 3D human actions, which is beneficial for a large variety of time-critical applications, e.g., gesture-based human machine interaction, somatosensory games, and so forth. Our goal is to infer the class label information of 3D human actions with partial observation of temporally incomplete action executions. By considering 3D action data as multivariate time series (m.t.s.) synchronized to a shared common clock (frames), we propose a stochastic process called dynamic marked point process (DMP) to model the 3D action as temporal dynamic patterns, where both timing and strength information are captured. To achieve even more early and better accuracy of recognition, we also explore the temporal dependency patterns between feature dimensions. A probabilistic suffix tree is constructed to represent sequential patterns among features in terms of the variable-order Markov model (VMM). Our approach and several baselines are evaluated on five 3D human action datasets. Extensive results show that our approach achieves superior performance for early recognition of 3D human actions.",
"title": ""
},
{
"docid": "4e23da50d4f1f0c4ecdbbf5952290c98",
"text": "[Context and motivation] User stories are an increasingly popular textual notation to capture requirements in agile software development. [Question/Problem] To date there is no scientific evidence on the effectiveness of user stories. The goal of this paper is to explore how practicioners perceive this artifact in the context of requirements engineering. [Principal ideas/results] We explore perceived effectiveness of user stories by reporting on a survey with 182 responses from practitioners and 21 follow-up semi-structured interviews. The data shows that practitioners agree that using user stories, a user story template and quality guidelines such as the INVEST mnemonic improve their productivity and the quality of their work deliverables. [Contribution] By combining the survey data with 21 semi-structured follow-up interviews, we present 12 findings on the usage and perception of user stories by practitioners that employ user stories in their everyday work environment.",
"title": ""
},
{
"docid": "d9eed063ea6399a8f33c6cbda3a55a62",
"text": "Current and future (conventional) notations used in Conceptual Modeling Techniques should have a precise (formal) semantics to provide a well-defined software development process, in order to go from specification to implementation in an automated way. To achieve this objective, the OO-Method approach to Information Systems Modeling presented in this paper attempts to overcome the conventional (informal)/formal dichotomy by selecting the best ideas from both approaches. The OO-Method makes a clear distinction between the problem space (centered on what the system is) and the solution space (centered on how it is implemented as a software product). It provides a precise, conventional graphical notation to obtain a system description at the problem space level, however this notation is strictly based on a formal OO specification language that determines the conceptual modeling constructs needed to obtain the system specification. An abstract execution model determines how to obtain the software representations corresponding to these conceptual modeling constructs. In this way, the final software product can be obtained in an automated way. r 2001 Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "35f74f11a60ad58171b74e755cd0476b",
"text": "Recent studies show that the performances of face recognition systems degrade in presence of makeup on face. In this paper, a facial makeup detector is proposed to further reduce the impact of makeup in face recognition. The performance of the proposed technique is tested using three publicly available facial makeup databases. The proposed technique extracts a feature vector that captures the shape and texture characteristics of the input face. After feature extraction, two types of classifiers (i.e. SVM and Alligator) are applied for comparison purposes. In this study, we observed that both classifiers provide significant makeup detection accuracy. There are only few studies regarding facial makeup detection in the state-of-the art. The proposed technique is novel and outperforms the state-of-the art significantly.",
"title": ""
},
{
"docid": "1301030c091eeb23d43dd3bfa6763e77",
"text": "A new system for web attack detection is presented. It follows the anomaly-based approach, therefore known and unknown attacks can be detected. The system relies on a XML file to classify the incoming requests as normal or anomalous. The XML file, which is built from only normal traffic, contains a description of the normal behavior of the target web application statistically characterized. Any request which deviates from the normal behavior is considered an attack. The system has been applied to protect a real web application. An increasing number of training requests have been used to train the system. Experiments show that when the XML file has enough information to closely characterize the normal behavior of the target web application, a very high detection rate is reached while the false alarm rate remains very low.",
"title": ""
},
{
"docid": "bb49674d0a1f36e318d27525b693e51d",
"text": "prevent attackers from gaining control of the system using well established techniques such as; perimeter-based fire walls, redundancy and replications, and encryption. However, given sufficient time and resources, all these methods can be defeated. Moving Target Defense (MTD), is a defensive strategy that aims to reduce the need to continuously fight against attacks by disrupting attackers gain-loss balance. We present Mayflies, a bio-inspired generic MTD framework for distributed systems on virtualized cloud platforms. The framework enables systems designed to defend against attacks for their entire runtime to systems that avoid attacks in time intervals. We discuss the design, algorithms and the implementation of the framework prototype. We illustrate the prototype with a quorum-based Byzantime Fault Tolerant system and report the preliminary results.",
"title": ""
},
{
"docid": "41e3ec35f9ca27eef6e70c963628281e",
"text": "An emerging problem in computer vision is the reconstruction of 3D shape and pose of an object from a single image. Hitherto, the problem has been addressed through the application of canonical deep learning methods to regress from the image directly to the 3D shape and pose labels. These approaches, however, are problematic from two perspectives. First, they are minimizing the error between 3D shapes and pose labels - with little thought about the nature of this “label error” when reprojecting the shape back onto the image. Second, they rely on the onerous and ill-posed task of hand labeling natural images with respect to 3D shape and pose. In this paper we define the new task of pose-aware shape reconstruction from a single image, and we advocate that cheaper 2D annotations of objects silhouettes in natural images can be utilized. We design architectures of pose-aware shape reconstruction which reproject the predicted shape back on to the image using the predicted pose. Our evaluation on several object categories demonstrates the superiority of our method for predicting pose-aware 3D shapes from natural images.",
"title": ""
},
{
"docid": "464f7d25cb2a845293a3eb8c427f872f",
"text": "Autism spectrum disorder is the fastest growing developmental disability in the United States. As such, there is an unprecedented need for research examining factors contributing to the health disparities in this population. This research suggests a relationship between the levels of physical activity and health outcomes. In fact, excessive sedentary behavior during early childhood is associated with a number of negative health outcomes. A total of 53 children participated in this study, including typically developing children (mean age = 42.5 ± 10.78 months, n = 19) and children with autism spectrum disorder (mean age = 47.42 ± 12.81 months, n = 34). The t-test results reveal that children with autism spectrum disorder spent significantly less time per day in sedentary behavior when compared to the typically developing group ( t(52) = 4.57, p < 0.001). Furthermore, the results from the general linear model reveal that there is no relationship between motor skills and the levels of physical activity. The ongoing need for objective measurement of physical activity in young children with autism spectrum disorder is of critical importance as it may shed light on an often overlooked need for early community-based interventions to increase physical activity early on in development.",
"title": ""
},
{
"docid": "139adbef378fa0b195477e75d4d71e12",
"text": "Alu elements are primate-specific repeats and comprise 11% of the human genome. They have wide-ranging influences on gene expression. Their contribution to genome evolution, gene regulation and disease is reviewed.",
"title": ""
},
{
"docid": "9573c50b4cd5dfdcabd09676a757d06f",
"text": "Fall detection is a major challenge in the public healthcare domain, especially for the elderly as the decline of their physical fitness, and timely and reliable surveillance is necessary to mitigate the negative effects of falls. This paper develops a novel fall detection system based on a wearable device. The system monitors the movements of human body, recognizes a fall from normal daily activities by an effective quaternion algorithm, and automatically sends request for help to the caregivers with the patient's location.",
"title": ""
},
{
"docid": "4075eb657e87ad13e0f47ab36d33df54",
"text": "MOTIVATION\nControlled vocabularies such as the Medical Subject Headings (MeSH) thesaurus and the Gene Ontology (GO) provide an efficient way of accessing and organizing biomedical information by reducing the ambiguity inherent to free-text data. Different methods of automating the assignment of MeSH concepts have been proposed to replace manual annotation, but they are either limited to a small subset of MeSH or have only been compared with a limited number of other systems.\n\n\nRESULTS\nWe compare the performance of six MeSH classification systems [MetaMap, EAGL, a language and a vector space model-based approach, a K-Nearest Neighbor (KNN) approach and MTI] in terms of reproducing and complementing manual MeSH annotations. A KNN system clearly outperforms the other published approaches and scales well with large amounts of text using the full MeSH thesaurus. Our measurements demonstrate to what extent manual MeSH annotations can be reproduced and how they can be complemented by automatic annotations. We also show that a statistically significant improvement can be obtained in information retrieval (IR) when the text of a user's query is automatically annotated with MeSH concepts, compared to using the original textual query alone.\n\n\nCONCLUSIONS\nThe annotation of biomedical texts using controlled vocabularies such as MeSH can be automated to improve text-only IR. Furthermore, the automatic MeSH annotation system we propose is highly scalable and it generates improvements in IR comparable with those observed for manual annotations.",
"title": ""
},
{
"docid": "6e4dfb4c6974543246003350b5e3e07f",
"text": "Zero-shot object detection is an emerging research topic that aims to recognize and localize previously ‘unseen’ objects. This setting gives rise to several unique challenges, e.g., highly imbalanced positive vs. negative instance ratio, ambiguity between background and unseen classes and the proper alignment between visual and semantic concepts. Here, we propose an end-to-end deep learning framework underpinned by a novel loss function that puts more emphasis on difficult examples to avoid class imbalance. We call our objective the ‘Polarity loss’ because it explicitly maximizes the gap between positive and negative predictions. Such a margin maximizing formulation is important as it improves the visual-semantic alignment while resolving the ambiguity between background and unseen. Our approach is inspired by the embodiment theories in cognitive science, that claim human semantic understanding to be grounded in past experiences (seen objects), related linguistic concepts (word dictionary) and the perception of the physical world (visual imagery). To this end, we learn to attend to a dictionary of related semantic concepts that eventually refines the noisy semantic embeddings and helps establish a better synergy between visual and semantic domains. Our extensive results on MS-COCO and Pascal VOC datasets show as high as 14× mAP improvement over state of the art.1",
"title": ""
},
{
"docid": "e33e3e46a4bcaaae32a5743672476cd9",
"text": "This paper is based on the notion of data quality. It includes correctness, completeness and minimality for which a notational framework is shown. In long living databases the maintenance of data quality is a rst order issue. This paper shows that even well designed and implemented information systems cannot guarantee correct data in any circumstances. It is shown that in any such system data quality tends to decrease and therefore some data correction procedure should be applied from time to time. One aspect of increasing data quality is the correction of data values. Characteristics of a software tool which supports this data value correction process are presented and discussed.",
"title": ""
}
] | scidocsrr |
d2948c21194cbc2254fd8603d3702a81 | RaptorX-Property: a web server for protein structure property prediction | [
{
"docid": "44bd234a8999260420bb2a07934887af",
"text": "T e purpose of this review is to assess the nature and magnitudes of the dominant forces in protein folding. Since proteins are only marginally stable at room temperature,’ no type of molecular interaction is unimportant, and even small interactions can contribute significantly (positively or negatively) to stability (Alber, 1989a,b; Matthews, 1987a,b). However, the present review aims to identify only the largest forces that lead to the structural features of globular proteins: their extraordinary compactness, their core of nonpolar residues, and their considerable amounts of internal architecture. This review explores contributions to the free energy of folding arising from electrostatics (classical charge repulsions and ion pairing), hydrogen-bonding and van der Waals interactions, intrinsic propensities, and hydrophobic interactions. An earlier review by Kauzmann (1959) introduced the importance of hydrophobic interactions. His insights were particularly remarkable considering that he did not have the benefit of known protein structures, model studies, high-resolution calorimetry, mutational methods, or force-field or statistical mechanical results. The present review aims to provide a reassessment of the factors important for folding in light of current knowledge. Also considered here are the opposing forces, conformational entropy and electrostatics. The process of protein folding has been known for about 60 years. In 1902, Emil Fischer and Franz Hofmeister independently concluded that proteins were chains of covalently linked amino acids (Haschemeyer & Haschemeyer, 1973) but deeper understanding of protein structure and conformational change was hindered because of the difficulty in finding conditions for solubilization. Chick and Martin (191 1) were the first to discover the process of denaturation and to distinguish it from the process of aggregation. By 1925, the denaturation process was considered to be either hydrolysis of the peptide bond (Wu & Wu, 1925; Anson & Mirsky, 1925) or dehydration of the protein (Robertson, 1918). The view that protein denaturation was an unfolding process was",
"title": ""
},
{
"docid": "5a1f4efc96538c1355a2742f323b7a0e",
"text": "A great challenge in the proteomics and structural genomics era is to predict protein structure and function, including identification of those proteins that are partially or wholly unstructured. Disordered regions in proteins often contain short linear peptide motifs (e.g., SH3 ligands and targeting signals) that are important for protein function. We present here DisEMBL, a computational tool for prediction of disordered/unstructured regions within a protein sequence. As no clear definition of disorder exists, we have developed parameters based on several alternative definitions and introduced a new one based on the concept of \"hot loops,\" i.e., coils with high temperature factors. Avoiding potentially disordered segments in protein expression constructs can increase expression, foldability, and stability of the expressed protein. DisEMBL is thus useful for target selection and the design of constructs as needed for many biochemical studies, particularly structural biology and structural genomics projects. The tool is freely available via a web interface (http://dis.embl.de) and can be downloaded for use in large-scale studies.",
"title": ""
}
] | [
{
"docid": "f1e5e00fe3a0610c47918de526e87dc6",
"text": "The current paper reviews research that has explored the intergenerational effects of the Indian Residential School (IRS) system in Canada, in which Aboriginal children were forced to live at schools where various forms of neglect and abuse were common. Intergenerational IRS trauma continues to undermine the well-being of today's Aboriginal population, and having a familial history of IRS attendance has also been linked with more frequent contemporary stressor experiences and relatively greater effects of stressors on well-being. It is also suggested that familial IRS attendance across several generations within a family appears to have cumulative effects. Together, these findings provide empirical support for the concept of historical trauma, which takes the perspective that the consequences of numerous and sustained attacks against a group may accumulate over generations and interact with proximal stressors to undermine collective well-being. As much as historical trauma might be linked to pathology, it is not possible to go back in time to assess how previous traumas endured by Aboriginal peoples might be related to subsequent responses to IRS trauma. Nonetheless, the currently available research demonstrating the intergenerational effects of IRSs provides support for the enduring negative consequences of these experiences and the role of historical trauma in contributing to present day disparities in well-being.",
"title": ""
},
{
"docid": "c38dc288a59e39785dfa87f46d2371e5",
"text": "Silver molybdate (Ag2MoO4) and silver tungstate (Ag2WO4) nanomaterials were prepared using two complementary methods, microwave assisted hydrothermal synthesis (MAH) (pH 7, 140 °C) and coprecipitation (pH 4, 70 °C), and were then used to prepare two core/shell composites, namely α-Ag2WO4/β-Ag2MoO4 (MAH, pH 4, 140 °C) and β-Ag2MoO4/β-Ag2WO4 (coprecipitation, pH 4, 70 °C). The shape and size of the microcrystals were observed by field emission scanning electron microscopy (FE-SEM), different morphologies such as balls and nanorods. These powders were characterized by X-ray powder diffraction and UV-vis (diffuse reflectance and photoluminescence). X-ray diffraction patterns showed that the Ag2MoO4 samples obtained by the two methods were single-phased and belonged to the β-Ag2MoO4 structure (spinel type). In contrast, the Ag2WO4 obtained in the two syntheses were structurally different: MAH exhibited the well-known tetrameric stable structure α-Ag2WO4, while coprecipitation afforded the metastable β-Ag2WO4 allotrope, coexisting with a weak amount of the α-phase. The optical gap of β-Ag2WO4 (3.3 eV) was evaluated for the first time. In contrast to β-Ag2MoO4/β-Ag2WO4, the αAg2WO4/β-Ag2MoO4 exhibited strongly-enhanced photoluminescence in the low-energy band (650 nm), tentatively explained by the creation of a large density of local defects (distortions) at the core-shell interface, due to the presence of two different types of MOx polyhedra in the two structures.",
"title": ""
},
{
"docid": "d8938884a61e7c353d719dbbb65d00d0",
"text": "Image encryption plays an important role to ensure confidential transmission and storage of image over internet. However, a real–time image encryption faces a greater challenge due to large amount of data involved. This paper presents a review on image encryption techniques of both full encryption and partial encryption schemes in spatial, frequency and hybrid domains.",
"title": ""
},
{
"docid": "ce63aad5288d118eb6ca9d99b96e9cac",
"text": "Unknown malware has increased dramatically, but the existing security software cannot identify them effectively. In this paper, we propose a new malware detection and classification method based on n-grams attribute similarity. We extract all n-grams of byte codes from training samples and select the most relevant as attributes. After calculating the average value of attributes in malware and benign separately, we determine a test sample is malware or benign by attribute similarity between attributes of the test sample and the two average attributes of malware and benign. We compare our method with a variety of machine learning methods, including Naïve Bayes, Bayesian Networks, Support Vector Machine and C4.5 Decision Tree. Experimental results on public (Open Malware Benchmark) and private (self-collected) datasets both reveal that our method outperforms the other four methods.",
"title": ""
},
{
"docid": "c00c6539b78ed195224063bcff16fb12",
"text": "Information Retrieval (IR) systems assist users in finding information from the myriad of information resources available on the Web. A traditional characteristic of IR systems is that if different users submit the same query, the system would yield the same list of results, regardless of the user. Personalised Information Retrieval (PIR) systems take a step further to better satisfy the user’s specific information needs by providing search results that are not only of relevance to the query but are also of particular relevance to the user who submitted the query. PIR has thereby attracted increasing research and commercial attention as information portals aim at achieving user loyalty by improving their performance in terms of effectiveness and user satisfaction. In order to provide a personalised service, a PIR system maintains information about the users and the history of their interactions with the system. This information is then used to adapt the users’ queries or the results so that information that is more relevant to the users is retrieved and presented. This survey paper features a critical review of PIR systems, with a focus on personalised search. The survey provides an insight into the stages involved in building and evaluating PIR systems, namely: information gathering, information representation, personalisation execution, and system evaluation. Moreover, the survey provides an analysis of PIR systems with respect to the scope of personalisation addressed. The survey proposes a classification of PIR systems into three scopes: individualised systems, community-based systems, and aggregate-level systems. Based on the conducted survey, the paper concludes by highlighting challenges and future research directions in the field of PIR.",
"title": ""
},
{
"docid": "d6707c10e68dcbb5cde0920631bdaf8b",
"text": "Game playing has been an important testbed for artificial intelligence. Board games, first-person shooters, and real-time strategy games have well-defined win conditions and rely on strong feedback from a simulated environment. Text adventures require natural language understanding to progress through the game but still have an underlying simulated environment. In this paper, we propose tabletop roleplaying games as a challenge due to an infinite action space, multiple (collaborative) players and models of the world, and no explicit reward signal. We present an approach for reinforcement learning agents that can play tabletop roleplaying games.",
"title": ""
},
{
"docid": "5411326f95abd20a141ad9e9d3ff72bf",
"text": "media files and almost universal use of email, information sharing is almost instantaneous anywhere in the world. Because many of the procedures performed in dentistry represent established protocols that should be read, learned and then practiced, it becomes clear that photography aids us in teaching or explaining to our patients what we think are common, but to them are complex and mysterious procedures. Clinical digital photography. Part 1: Equipment and basic documentation",
"title": ""
},
{
"docid": "ce174b6dce6e2dee62abca03b4a95112",
"text": "This article proposes a novel framework for representing and measuring local coherence. Central to this approach is the entity-grid representation of discourse, which captures patterns of entity distribution in a text. The algorithm introduced in the article automatically abstracts a text into a set of entity transition sequences and records distributional, syntactic, and referential information about discourse entities. We re-conceptualize coherence assessment as a learning task and show that our entity-based representation is well-suited for ranking-based generation and text classification tasks. Using the proposed representation, we achieve good performance on text ordering, summary coherence evaluation, and readability assessment.",
"title": ""
},
{
"docid": "3f33882e4bece06e7a553eb9133f8aa9",
"text": "Research on the relationship between affect and cognition in Artificial Intelligence in Education (AIEd) brings an important dimension to our understanding of how learning occurs and how it can be facilitated. Emotions are crucial to learning, but their nature, the conditions under which they occur, and their exact impact on learning for different learners in diverse contexts still needs to be mapped out. The study of affect during learning can be challenging, because emotions are subjective, fleeting phenomena that are often difficult for learners to report accurately and for observers to perceive reliably. Context forms an integral part of learners’ affect and the study thereof. This review provides a synthesis of the current knowledge elicitation methods that are used to aid the study of learners’ affect and to inform the design of intelligent technologies for learning. Advantages and disadvantages of the specific methods are discussed along with their respective potential for enhancing research in this area, and issues related to the interpretation of data that emerges as the result of their use. References to related research are also provided together with illustrative examples of where the individual methods have been used in the past. Therefore, this review is intended as a resource for methodological decision making for those who want to study emotions and their antecedents in AIEd contexts, i.e. where the aim is to inform the design and implementation of an intelligent learning environment or to evaluate its use and educational efficacy.",
"title": ""
},
{
"docid": "cd877197b06304b379d5caf9b5b89d30",
"text": "Research is now required on factors influencing adults' sedentary behaviors, and effective approaches to behavioral-change intervention must be identified. The strategies for influencing sedentary behavior will need to be informed by evidence on the most important modifiable behavioral determinants. However, much of the available evidence relevant to understanding the determinants of sedentary behaviors is from cross-sectional studies, which are limited in that they identify only behavioral \"correlates.\" As is the case for physical activity, a behavior- and context-specific approach is needed to understand the multiple determinants operating in the different settings within which these behaviors are most prevalent. To this end, an ecologic model of sedentary behaviors is described, highlighting the behavior settings construct. The behaviors and contexts of primary concern are TV viewing and other screen-focused behaviors in domestic environments, prolonged sitting in the workplace, and time spent sitting in automobiles. Research is needed to clarify the multiple levels of determinants of prolonged sitting time, which are likely to operate in distinct ways in these different contexts. Controlled trials on the feasibility and efficacy of interventions to reduce and break up sedentary behaviors among adults in domestic, workplace, and transportation environments are particularly required. It would be informative for the field to have evidence on the outcomes of \"natural experiments,\" such as the introduction of nonseated working options in occupational environments or new transportation infrastructure in communities.",
"title": ""
},
{
"docid": "0e521af53f9faf4fee38843a22ec2185",
"text": "Steering of main beam of radiation at fixed millimeter wave frequency in a Substrate Integrated Waveguide (SIW) Leaky Wave Antenna (LWA) has not been investigated so far in literature. In this paper a Half-Mode Substrate Integrated Waveguide (HMSIW) LWA is proposed which has the capability to steer its main beam at fixed millimeter wave frequency of 24GHz. Beam steering is made feasible by changing the capacitance of the capacitors, connected at the dielectric side of HMSIW. The full wave EM simulations show that the main beam scans from 36° to 57° in the first quadrant.",
"title": ""
},
{
"docid": "fb4630a6b558ac9b8d8444275e1978e3",
"text": "Relational graphs are widely used in modeling large scale networks such as biological networks and social networks. In this kind of graph, connectivity becomes critical in identifying highly associated groups and clusters. In this paper, we investigate the issues of mining closed frequent graphs with connectivity constraints in massive relational graphs where each graph has around 10K nodes and 1M edges. We adopt the concept of edge connectivity and apply the results from graph theory, to speed up the mining process. Two approaches are developed to handle different mining requests: CloseCut, a pattern-growth approach, and splat, a pattern-reduction approach. We have applied these methods in biological datasets and found the discovered patterns interesting.",
"title": ""
},
{
"docid": "12a8d007ca4dce21675ddead705c7b62",
"text": "This paper presents an ethnographic account of the implementation of Lean service redesign methodologies in one UK NHS hospital operating department. It is suggested that this popular management 'technology', with its emphasis on creating value streams and reducing waste, has the potential to transform the social organisation of healthcare work. The paper locates Lean healthcare within wider debates related to the standardisation of clinical practice, the re-configuration of occupational boundaries and the stratification of clinical communities. Drawing on the 'technologies-in-practice' perspective the study is attentive to the interaction of both the intent to transform work and the response of clinicians to this intent as an ongoing and situated social practice. In developing this analysis this article explores three dimensions of social practice to consider the way Lean is interpreted and articulated (rhetoric), enacted in social practice (ritual), and experienced in the context of prevailing lines of power (resistance). Through these interlinked analytical lenses the paper suggests the interaction of Lean and clinical practice remains contingent and open to negotiation. In particular, Lean follows in a line of service improvements that bring to the fore tensions between clinicians and service leaders around the social organisation of healthcare work. The paper concludes that Lean might not be the easy remedy for making both efficiency and effectiveness improvements in healthcare.",
"title": ""
},
{
"docid": "cb70ab2056242ca739adde4751fbca2c",
"text": "In this paper, we consider the task of learning control policies for text-based games. In these games, all interactions in the virtual world are through text and the underlying state is not observed. The resulting language barrier makes such environments challenging for automatic game players. We employ a deep reinforcement learning framework to jointly learn state representations and action policies using game rewards as feedback. This framework enables us to map text descriptions into vector representations that capture the semantics of the game states. We evaluate our approach on two game worlds, comparing against baselines using bag-ofwords and bag-of-bigrams for state representations. Our algorithm outperforms the baselines on both worlds demonstrating the importance of learning expressive representations. 1",
"title": ""
},
{
"docid": "b81b29c232fb9cb5dcb2dd7e31003d77",
"text": "Attendance and academic success are directly related in educational institutions. The continual absence of students in lecture, practical and tutorial is one of the major problems of decadence in the performance of academic. The authorized person needs to prohibit truancy for solving the problem. In existing system, the attendance is recorded by calling of the students’ name, signing on paper, using smart card and so on. These methods are easy to fake and to give proxy for the absence student. For solving inconvenience, fingerprint based attendance system with notification to guardian is proposed. The attendance is recorded using fingerprint module and stored it to the database via SD card. This system can calculate the percentage of attendance record monthly and store the attendance record in database for one year or more. In this system, attendance is recorded two times for one day and then it will also send alert message using GSM module if the attendance of students don’t have eight times for one week. By sending the alert message to the respective individuals every week, necessary actions can be done early. It can also reduce the cost of SMS charge and also have more attention for guardians. The main components of this system are Fingerprint module, Microcontroller, GSM module and SD card with SD card module. This system has been developed using Arduino IDE, Eclipse and MySQL Server.",
"title": ""
},
{
"docid": "545509f9e3aa65921a7d6faa41247ae6",
"text": "BACKGROUND\nPenicillins inhibit cell wall synthesis; therefore, Helicobacter pylori must be dividing for this class of antibiotics to be effective in eradication therapy. Identifying growth responses to varying medium pH may allow design of more effective treatment regimens.\n\n\nAIM\nTo determine the effects of acidity on bacterial growth and the bactericidal efficacy of ampicillin.\n\n\nMETHODS\nH. pylori were incubated in dialysis chambers suspended in 1.5-L of media at various pHs with 5 mM urea, with or without ampicillin, for 4, 8 or 16 h, thus mimicking unbuffered gastric juice. Changes in gene expression, viability and survival were determined.\n\n\nRESULTS\nAt pH 3.0, but not at pH 4.5 or 7.4, there was decreased expression of ~400 genes, including many cell envelope biosynthesis, cell division and penicillin-binding protein genes. Ampicillin was bactericidal at pH 4.5 and 7.4, but not at pH 3.0.\n\n\nCONCLUSIONS\nAmpicillin is bactericidal at pH 4.5 and 7.4, but not at pH 3.0, due to decreased expression of cell envelope and division genes with loss of cell division at pH 3.0. Therefore, at pH 3.0, the likely pH at the gastric surface, the bacteria are nondividing and persist with ampicillin treatment. A more effective inhibitor of acid secretion that maintains gastric pH near neutrality for 24 h/day should enhance the efficacy of amoxicillin, improving triple therapy and likely even allowing dual amoxicillin-based therapy for H. pylori eradication.",
"title": ""
},
{
"docid": "38f289b085f2c6e2d010005f096d8fd7",
"text": "We present easy-to-use TensorFlow Hub sentence embedding models having good task transfer performance. Model variants allow for trade-offs between accuracy and compute resources. We report the relationship between model complexity, resources, and transfer performance. Comparisons are made with baselines without transfer learning and to baselines that incorporate word-level transfer. Transfer learning using sentence-level embeddings is shown to outperform models without transfer learning and often those that use only word-level transfer. We show good transfer task performance with minimal training data and obtain encouraging results on word embedding association tests (WEAT) of model bias.",
"title": ""
},
{
"docid": "7d14bd767964cba3cfc152ee20c7ffbc",
"text": "Most typical statistical and machine learning approaches to time series modeling optimize a singlestep prediction error. In multiple-step simulation, the learned model is iteratively applied, feeding through the previous output as its new input. Any such predictor however, inevitably introduces errors, and these compounding errors change the input distribution for future prediction steps, breaking the train-test i.i.d assumption common in supervised learning. We present an approach that reuses training data to make a no-regret learner robust to errors made during multi-step prediction. Our insight is to formulate the problem as imitation learning; the training data serves as a “demonstrator” by providing corrections for the errors made during multi-step prediction. By this reduction of multistep time series prediction to imitation learning, we establish theoretically a strong performance guarantee on the relation between training error and the multi-step prediction error. We present experimental results of our method, DAD, and show significant improvement over the traditional approach in two notably different domains, dynamic system modeling and video texture prediction. Determining models for time series data is important in applications ranging from market prediction to the simulation of chemical processes and robotic systems. Many supervised learning approaches have been proposed for this task, such as neural networks (Narendra and Parthasarathy 1990), Expectation-Maximization (Ghahramani and Roweis 1999; Coates, Abbeel, and Ng 2008), Support Vector Regression (Müller, Smola, and Rätsch 1997), Gaussian process regression (Wang, Hertzmann, and Blei 2005; Ko et al. 2007), Nadaraya-Watson kernel regression (Basharat and Shah 2009), Gaussian mixture models (Khansari-Zadeh and Billard 2011), and Kernel PCA (Ralaivola and D’Alche-Buc 2004). Common to most of these methods is that the objective being optimized is the single-step prediction loss. However, this criterion does not guarantee accurate multiple-step simulation accuracy in which the output of a prediction step is used as input for the next inference. The prevalence of single-step modeling approaches is a result of the difficulty in directly optimizing the multipleCopyright c © 2015, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. step prediction error. As an example, consider fitting a simple linear dynamical system model for the multi-step error over the time horizon T from an initial condition x0,",
"title": ""
},
{
"docid": "dd3781fe97c7dd935948c55584313931",
"text": "The radiation of RFID antitheft gate system has been simulated in FEKO. The obtained numerical results for the electric field and magnetic field have been compared to the exposure limits proposed by the ICNIRP Guidelines. No significant violation of limits, regarding both occupational and public exposure, has been shown.",
"title": ""
},
{
"docid": "53b32cdb6c3d511180d8cb194c286ef5",
"text": "Silymarin, a C25 containing flavonoid from the plant Silybum marianum, has been the gold standard drug to treat liver disorders associated with alcohol consumption, acute and chronic viral hepatitis, and toxin-induced hepatic failures since its discovery in 1960. Apart from the hepatoprotective nature, which is mainly due to its antioxidant and tissue regenerative properties, Silymarin has recently been reported to be a putative neuroprotective agent against many neurologic diseases including Alzheimer's and Parkinson's diseases, and cerebral ischemia. Although the underlying neuroprotective mechanism of Silymarin is believed to be due to its capacity to inhibit oxidative stress in the brain, it also confers additional advantages by influencing pathways such as β-amyloid aggregation, inflammatory mechanisms, cellular apoptotic machinery, and estrogenic receptor mediation. In this review, we have elucidated the possible neuroprotective effects of Silymarin and the underlying molecular events, and suggested future courses of action for its acceptance as a CNS drug for the treatment of neurodegenerative diseases.",
"title": ""
}
] | scidocsrr |
6dc8bd3bc0c04c92fc132f2697cdf226 | Combining control-flow integrity and static analysis for efficient and validated data sandboxing | [
{
"docid": "83c81ecb870e84d4e8ab490da6caeae2",
"text": "We introduceprogram shepherding, a method for monitoring control flow transfers during program execution to enforce a security policy. Shepherding ensures that malicious code masquerading as data is never executed, thwarting a large class of security attacks. Shepherding can also enforce entry points as the only way to execute shared library code. Furthermore, shepherding guarantees that sandboxing checks around any type of program operation will never be bypassed. We have implemented these capabilities efficiently in a runtime system with minimal or no performance penalties. This system operates on unmodified native binaries, requires no special hardware or operating system support, and runs on existing IA-32 machines.",
"title": ""
}
] | [
{
"docid": "d945ae2fe20af58c2ca4812c797d361d",
"text": "Triple-negative breast cancers (TNBC) are genetically characterized by aberrations in TP53 and a low rate of activating point mutations in common oncogenes, rendering it challenging in applying targeted therapies. We performed whole-exome sequencing (WES) and RNA sequencing (RNA-seq) to identify somatic genetic alterations in mouse models of TNBCs driven by loss of Trp53 alone or in combination with Brca1 Amplifications or translocations that resulted in elevated oncoprotein expression or oncoprotein-containing fusions, respectively, as well as frameshift mutations of tumor suppressors were identified in approximately 50% of the tumors evaluated. Although the spectrum of sporadic genetic alterations was diverse, the majority had in common the ability to activate the MAPK/PI3K pathways. Importantly, we demonstrated that approved or experimental drugs efficiently induce tumor regression specifically in tumors harboring somatic aberrations of the drug target. Our study suggests that the combination of WES and RNA-seq on human TNBC will lead to the identification of actionable therapeutic targets for precision medicine-guided TNBC treatment.Significance: Using combined WES and RNA-seq analyses, we identified sporadic oncogenic events in TNBC mouse models that share the capacity to activate the MAPK and/or PI3K pathways. Our data support a treatment tailored to the genetics of individual tumors that parallels the approaches being investigated in the ongoing NCI-MATCH, My Pathway Trial, and ESMART clinical trials. Cancer Discov; 8(3); 354-69. ©2017 AACR.See related commentary by Natrajan et al., p. 272See related article by Matissek et al., p. 336This article is highlighted in the In This Issue feature, p. 253.",
"title": ""
},
{
"docid": "e2ce393fade02f0dfd20b9aca25afd0f",
"text": "This paper presents a comparative lightning performance study conducted on a 275 kV double circuit shielded transmission line using two software programs, TFlash and Sigma-Slp. The line performance was investigated by using both a single stroke and a statistical performance analysis and considering cases of shielding failure and backflashover. A sensitivity analysis was carried out to determine the relationship between the flashover rate and the parameters influencing it. To improve the lightning performance of the line, metal oxide surge arresters were introduced using different phase and line locations. Optimised arrester arrangements are proposed.",
"title": ""
},
{
"docid": "42b810b7ecd48590661cc5a538bec427",
"text": "Most algorithms that rely on deep learning-based approaches to generate 3D point sets can only produce clouds containing fixed number of points. Furthermore, they typically require large networks parameterized by many weights, which makes them hard to train. In this paper, we propose an auto-encoder architecture that can both encode and decode clouds of arbitrary size and demonstrate its effectiveness at upsampling sparse point clouds. Interestingly, we can do so using less than half as many parameters as state-of-the-art architectures while still delivering better performance. We will make our code base fully available.",
"title": ""
},
{
"docid": "ca41837dd01a66259854c03b820a46ff",
"text": "We present a supervised sequence to sequence transduction model with a hard attention mechanism which combines the more traditional statistical alignment methods with the power of recurrent neural networks. We evaluate the model on the task of morphological inflection generation and show that it provides state of the art results in various setups compared to the previous neural and non-neural approaches. Eventually we present an analysis of the learned representations for both hard and soft attention models, shedding light on the features such models extract in order to solve the task.",
"title": ""
},
{
"docid": "05d8383eb6b1c6434f75849859c35fd0",
"text": "This paper proposes a robust approach for image based floor detection and segmentation from sequence of images or video. In contrast to many previous approaches, which uses a priori knowledge of the surroundings, our method uses combination of modified sparse optical flow and planar homography for ground plane detection which is then combined with graph based segmentation for extraction of floor from images. We also propose a probabilistic framework which makes our method adaptive to the changes in the surroundings. We tested our algorithm on several common indoor environment scenarios and were able to extract floor even under challenging circumstances. We obtained extremely satisfactory results in various practical scenarios such as where the floor and non floor areas are of same color, in presence of textured flooring, and where illumination changes are steep.",
"title": ""
},
{
"docid": "f91ba4b37a2a9d80e5db5ace34e6e50a",
"text": "Bearing currents and shaft voltages of an induction motor are measured under hardand soft-switching inverter excitation. The objective is to investigate whether the soft-switching technologies can provide solutions for reducing the bearing currents and shaft voltages. Two of the prevailing soft-switching inverters, the resonant dc-link inverter and the quasi-resonant dc-link inverter, are tested. The results are compared with those obtained using the conventional hard-switching inverter. To ensure objective comparisons between the softand hard-switching inverters, all inverters were configured identically and drove the same induction motor under the same operating conditions when the test data were collected. An insightful explanation of the experimental results is also provided to help understand the mechanisms of bearing currents and shaft voltages produced in the inverter drives. Consistency between the bearing current theory and the experimental results has been demonstrated. Conclusions are then drawn regarding the effectiveness of the soft-switching technologies as a solution to the bearing current and shaft voltage problems.",
"title": ""
},
{
"docid": "3eaba817610278c4b1a82036ccfb6cc4",
"text": "We propose to use thought-provoking children's questions (TPCQs), namely Highlights BrainPlay questions, to drive artificial intelligence research. These questions are designed to stimulate thought and learning in children , and they can be used to do the same thing in AI systems. We introduce the TPCQ task, which consists of taking a TPCQ question as input and producing as output both (1) answers to the question and (2) learned generalizations. We discuss how BrainPlay questions stimulate learning. We analyze 244 BrainPlay questions, and we report statistics on question type, question class, answer cardinality, answer class, types of knowledge needed, and types of reasoning needed. We find that BrainPlay questions span many aspects of intelligence. We envision an AI system based on the society of mind (Minsky 1986; Minsky 2006) consisting of a multilevel architecture with diverse resources that run in parallel to jointly answer and learn from questions. Because the answers to BrainPlay questions and the generalizations learned from them are often highly open-ended, we suggest using human judges for evaluation.",
"title": ""
},
{
"docid": "b4b20c33b7f683cfead2fede8088f09b",
"text": "Bus protection is typically a station-wide protection function, as it uses the majority of the high voltage (HV) electrical signals available in a substation. All current measurements that define the bus zone of protection are needed. Voltages may be included in bus protection relays, as the number of voltages is relatively low, so little additional investment is not needed to integrate them into the protection system. This paper presents a new Distributed Bus Protection System that represents a step forward in the concept of a Smart Substation solution. This Distributed Bus Protection System has been conceived not only as a protection system, but as a platform that incorporates the data collection from the HV equipment in an IEC 61850 process bus scheme. This new bus protection system is still a distributed bus protection solution. As opposed to dedicated bay units, this system uses IEC 61850 process interface units (that combine both merging units and contact I/O) for data collection. The main advantage then, is that as the bus protection is deployed, it is also deploying the platform to do data collection for other protection, control, and monitoring functions needed in the substation, such as line, transformer, and feeder. By installing the data collection pieces, this provides for the simplification of engineering tasks, and substantial savings in wiring, number of components, cabinets, installation, and commissioning. In this way the new bus protection system is the gateway to process bus, as opposed to an addon to a process bus system. The paper analyzes and describes the new Bus Protection System as a new conceptual design for a Smart Substation, highlighting the advantages in a vision that comprises not only a single element, but the entire installation. Keyword: Current Transformer, Digital Fault Recorder, Fiber Optic Cable, International Electro Technical Commission, Process Interface Units",
"title": ""
},
{
"docid": "ca6001c3ed273b4f23565f4d40ddeb29",
"text": "Learning semantic representations and tree structures of bilingual phrases is beneficial for statistical machine translation. In this paper, we propose a new neural network model called Bilingual Correspondence Recursive Autoencoder (BCorrRAE) to model bilingual phrases in translation. We incorporate word alignments into BCorrRAE to allow it freely access bilingual constraints at different levels. BCorrRAE minimizes a joint objective on the combination of a recursive autoencoder reconstruction error, a structural alignment consistency error and a crosslingual reconstruction error so as to not only generate alignment-consistent phrase structures, but also capture different levels of semantic relations within bilingual phrases. In order to examine the effectiveness of BCorrRAE, we incorporate both semantic and structural similarity features built on bilingual phrase representations and tree structures learned by BCorrRAE into a state-of-the-art SMT system. Experiments on NIST Chinese-English test sets show that our model achieves a substantial improvement of up to 1.55 BLEU points over the baseline.",
"title": ""
},
{
"docid": "f698b77df48a5fac4df7ba81b4444dd5",
"text": "Discontinuous-conduction mode (DCM) operation is usually employed in DC-DC converters for small inductor on printed circuit board (PCB) and high efficiency at light load. However, it is normally difficult for synchronous converter to realize the DCM operation, especially in high frequency applications, which requires a high speed and high precision comparator to detect the zero crossing point at cost of extra power losses. In this paper, a novel zero current detector (ZCD) circuit with an adaptive delay control loop for high frequency synchronous buck converter is presented. Compared to the conventional ZCD, proposed technique is proven to offer 8.5% efficiency enhancement when performed in a buck converter at the switching frequency of 4MHz and showed less sensitivity to the transistor mismatch of the sensor circuit.",
"title": ""
},
{
"docid": "5bebef3a6ca0d595b6b3232e18f8789f",
"text": "The usability of a software product has recently become a key software quality factor. The International Organization for Standardization (ISO) has developed a variety of models to specify and measure software usability but these individual models do not support all usability aspects. Furthermore, they are not yet well integrated into current software engineering practices and lack tool support. The aim of this research is to survey the actual representation (meanings and interpretations) of usability in ISO standards, indicate some of existing limitations and address them by proposing an enhanced, normative model for the evaluation of software usability.",
"title": ""
},
{
"docid": "bac623d79d39991032fc46cc215b9fdd",
"text": "The convergence of mobile computing and cloud computing enables new mobile applications that are both resource-intensive and interactive. For these applications, end-to-end network bandwidth and latency matter greatly when cloud resources are used to augment the computational power and battery life of a mobile device. This dissertation designs and implements a new architectural element called a cloudlet, that arises from the convergence of mobile computing and cloud computing. Cloudlets represent the middle tier of a 3-tier hierarchy, mobile device — cloudlet — cloud, to achieve the right balance between cloud consolidation and network responsiveness. We first present quantitative evidence that shows cloud location can affect the performance of mobile applications and cloud consolidation. We then describe an architectural solution using cloudlets that are a seamless extension of todays cloud computing infrastructure. Finally, we define minimal functionalities that cloudlets must offer above/beyond standard cloud computing, and address corresponding technical challenges.",
"title": ""
},
{
"docid": "0b71458d700565bec9b91318023243df",
"text": "The Humor Styles Questionnaire (HSQ; Martin et al., 2003) is one of the most frequently used questionnaires in humor research and has been adapted to several languages. The HSQ measures four humor styles (affiliative, self-enhancing, aggressive, and self-defeating), which should be adaptive or potentially maladaptive to psychosocial well-being. The present study analyzes the internal consistency, factorial validity, and factorial invariance of the HSQ on the basis of several German-speaking samples combined (total N = 1,101). Separate analyses were conducted for gender (male/female), age groups (16-24, 25-35, >36 years old), and countries (Germany/Switzerland). Internal consistencies were good for the overall sample and the demographic subgroups (.80-.89), with lower values obtained for the aggressive scale (.66-.73). Principal components and confirmatory factor analyses mostly supported the four-factor structure of the HSQ. Weak factorial invariance was found across gender and age groups, while strong factorial invariance was supported across countries. Two subsamples also provided self-ratings on ten styles of humorous conduct (n = 344) and of eight comic styles (n = 285). The four HSQ scales showed small to large correlations to the styles of humorous conduct (-.54 to .65) and small to medium correlations to the comic styles (-.27 to .42). The HSQ shared on average 27.5-35.0% of the variance with the styles of humorous conduct and 13.0-15.0% of the variance with the comic styles. Thus-despite similar labels-these styles of humorous conduct and comic styles differed from the HSQ humor styles.",
"title": ""
},
{
"docid": "e677799d3bee1b25e74dc6c547c1b6c2",
"text": "Street View serves millions of Google users daily with panoramic imagery captured in hundreds of cities in 20 countries across four continents. A team of Google researchers describes the technical challenges involved in capturing, processing, and serving street-level imagery on a global scale.",
"title": ""
},
{
"docid": "fdaf0a7bc6dfa30d0c3ed3a96950d8c8",
"text": "In this article we exploit the discrete-time dynamics of a single neuron with self-connection to systematically design simple signal filters. Due to hysteresis effects and transient dynamics, this single neuron behaves as an adjustable low-pass filter for specific parameter configurations. Extending this neuro-module by two more recurrent neurons leads to versatile highand band-pass filters. The approach presented here helps to understand how the dynamical properties of recurrent neural networks can be used for filter design. Furthermore, it gives guidance to a new way of implementing sensory preprocessing for acoustic signal recognition in autonomous robots.",
"title": ""
},
{
"docid": "2af0ef7c117ace38f44a52379c639e78",
"text": "Examination of a child with genital or anal disease may give rise to suspicion of sexual abuse. Dermatologic, traumatic, infectious, and congenital disorders may be confused with sexual abuse. Seven children referred to us are representative of such confusion.",
"title": ""
},
{
"docid": "52017fa7d6cf2e6a18304b121225fc6f",
"text": "In comparison to dense matrices multiplication, sparse matrices multiplication real performance for CPU is roughly 5–100 times lower when expressed in GFLOPs. For sparse matrices, microprocessors spend most of the time on comparing matrices indices rather than performing floating-point multiply and add operations. For 16-bit integer operations, like indices comparisons, computational power of the FPGA significantly surpasses that of CPU. Consequently, this paper presents a novel theoretical study how matrices sparsity factor influences the indices comparison to floating-point operation workload ratio. As a result, a novel FPGAs architecture for sparse matrix-matrix multiplication is presented for which indices comparison and floating-point operations are separated. We also verified our idea in practice, and the initial implementations results are very promising. To further decrease hardware resources required by the floating-point multiplier, a reduced width multiplication is proposed in the case when IEEE-754 standard compliance is not required.",
"title": ""
},
{
"docid": "6341eaeb32d0e25660de6be6d3943e81",
"text": "Theorists have speculated that primary psychopathy (or Factor 1 affective-interpersonal features) is prominently heritable whereas secondary psychopathy (or Factor 2 social deviance) is more environmentally determined. We tested this differential heritability hypothesis using a large adolescent twin sample. Trait-based proxies of primary and secondary psychopathic tendencies were assessed using Multidimensional Personality Questionnaire (MPQ) estimates of Fearless Dominance and Impulsive Antisociality, respectively. The environmental contexts of family, school, peers, and stressful life events were assessed using multiple raters and methods. Consistent with prior research, MPQ Impulsive Antisociality was robustly associated with each environmental risk factor, and these associations were significantly greater than those for MPQ Fearless Dominance. However, MPQ Fearless Dominance and Impulsive Antisociality exhibited similar heritability, and genetic effects mediated the associations between MPQ Impulsive Antisociality and the environmental measures. Results were largely consistent across male and female twins. We conclude that gene-environment correlations rather than main effects of genes and environments account for the differential environmental correlates of primary and secondary psychopathy.",
"title": ""
},
{
"docid": "47ef46ef69a23e393d8503154f110a81",
"text": "Question answering (Q&A) communities have been gaining popularity in the past few years. The success of such sites depends mainly on the contribution of a small number of expert users who provide a significant portion of the helpful answers, and so identifying users that have the potential of becoming strong contributers is an important task for owners of such communities.\n We present a study of the popular Q&A website StackOverflow (SO), in which users ask and answer questions about software development, algorithms, math and other technical topics. The dataset includes information on 3.5 million questions and 6.9 million answers created by 1.3 million users in the years 2008--2012. Participation in activities on the site (such as asking and answering questions) earns users reputation, which is an indicator of the value of that user to the site.\n We describe an analysis of the SO reputation system, and the participation patterns of high and low reputation users. The contributions of very high reputation users to the site indicate that they are the primary source of answers, and especially of high quality answers. Interestingly, we find that while the majority of questions on the site are asked by low reputation users, on average a high reputation user asks more questions than a user with low reputation. We consider a number of graph analysis methods for detecting influential and anomalous users in the underlying user interaction network, and find they are effective in detecting extreme behaviors such as those of spam users. Lastly, we show an application of our analysis: by considering user contributions over first months of activity on the site, we predict who will become influential long-term contributors.",
"title": ""
},
{
"docid": "028be19d9b8baab4f5982688e41bfec8",
"text": "The activation function for neurons is a prominent element in the deep learning architecture for obtaining high performance. Inspired by neuroscience findings, we introduce and define two types of neurons with different activation functions for artificial neural networks: excitatory and inhibitory neurons, which can be adaptively selected by selflearning. Based on the definition of neurons, in the paper we not only unify the mainstream activation functions, but also discuss the complementariness among these types of neurons. In addition, through the cooperation of excitatory and inhibitory neurons, we present a compositional activation function that leads to new state-of-the-art performance comparing to rectifier linear units. Finally, we hope that our framework not only gives a basic unified framework of the existing activation neurons to provide guidance for future design, but also contributes neurobiological explanations which can be treated as a window to bridge the gap between biology and computer science.",
"title": ""
}
] | scidocsrr |
c1c84ea618835e7592aedf1fdf0bb1c2 | Improving the Reproducibility of PAN's Shared Tasks: - Plagiarism Detection, Author Identification, and Author Profiling | [
{
"docid": "c43785187ce3c4e7d1895b628f4a2df3",
"text": "In this paper we focus on the connection between age and language use, exploring age prediction of Twitter users based on their tweets. We discuss the construction of a fine-grained annotation effort to assign ages and life stages to Twitter users. Using this dataset, we explore age prediction in three different ways: classifying users into age categories, by life stages, and predicting their exact age. We find that an automatic system achieves better performance than humans on these tasks and that both humans and the automatic systems have difficulties predicting the age of older people. Moreover, we present a detailed analysis of variables that change with age. We find strong patterns of change, and that most changes occur at young ages.",
"title": ""
},
{
"docid": "515e4ae8fabe93495d8072fe984d8bb6",
"text": "Most studies in statistical or machine learning based authorship attribution focus on two or a few authors. This leads to an overestimation of the importance of the features extracted from the training data and found to be discriminating for these small sets of authors. Most studies also use sizes of training data that are unrealistic for situations in which stylometry is applied (e.g., forensics), and thereby overestimate the accuracy of their approach in these situations. A more realistic interpretation of the task is as an authorship verification problem that we approximate by pooling data from many different authors as negative examples. In this paper, we show, on the basis of a new corpus with 145 authors, what the effect is of many authors on feature selection and learning, and show robustness of a memory-based learning approach in doing authorship attribution and verification with many authors and limited training data when compared to eager learning methods such as SVMs and maximum entropy learning.",
"title": ""
}
] | [
{
"docid": "503277b20b3fd087df5c91c1a7c7a173",
"text": "Among vertebrates, only microchiropteran bats, cetaceans and some rodents are known to produce and detect ultrasounds (frequencies greater than 20 kHz) for the purpose of communication and/or echolocation, suggesting that this capacity might be restricted to mammals. Amphibians, reptiles and most birds generally have limited hearing capacity, with the ability to detect and produce sounds below ∼12 kHz. Here we report evidence of ultrasonic communication in an amphibian, the concave-eared torrent frog (Amolops tormotus) from Huangshan Hot Springs, China. Males of A. tormotus produce diverse bird-like melodic calls with pronounced frequency modulations that often contain spectral energy in the ultrasonic range. To determine whether A. tormotus communicates using ultrasound to avoid masking by the wideband background noise of local fast-flowing streams, or whether the ultrasound is simply a by-product of the sound-production mechanism, we conducted acoustic playback experiments in the frogs' natural habitat. We found that the audible as well as the ultrasonic components of an A. tormotus call can evoke male vocal responses. Electrophysiological recordings from the auditory midbrain confirmed the ultrasonic hearing capacity of these frogs and that of a sympatric species facing similar environmental constraints. This extraordinary upward extension into the ultrasonic range of both the harmonic content of the advertisement calls and the frog's hearing sensitivity is likely to have co-evolved in response to the intense, predominantly low-frequency ambient noise from local streams. Because amphibians are a distinct evolutionary lineage from microchiropterans and cetaceans (which have evolved ultrasonic hearing to minimize congestion in the frequency bands used for sound communication and to increase hunting efficacy in darkness), ultrasonic perception in these animals represents a new example of independent evolution.",
"title": ""
},
{
"docid": "7458ca6334cf5f02c6a30466cd8de2ce",
"text": "BACKGROUND\nFecal incontinence (FI) in children is frequently encountered in pediatric practice, and often occurs in combination with urinary incontinence. In most cases, FI is constipation-associated, but in 20% of children presenting with FI, no constipation or other underlying cause can be found - these children suffer from functional nonretentive fecal incontinence (FNRFI).\n\n\nOBJECTIVE\nTo summarize the evidence-based recommendations of the International Children's Continence Society for the evaluation and management of children with FNRFI.\n\n\nRECOMMENDATIONS\nFunctional nonretentive fecal incontinence is a clinical diagnosis based on medical history and physical examination. Except for determining colonic transit time, additional investigations are seldom indicated in the workup of FNRFI. Treatment should consist of education, a nonaccusatory approach, and a toileting program encompassing a daily bowel diary and a reward system. Special attention should be paid to psychosocial or behavioral problems, since these frequently occur in affected children. Functional nonretentive fecal incontinence is often difficult to treat, requiring prolonged therapies with incremental improvement on treatment and frequent relapses.",
"title": ""
},
{
"docid": "7087355045b28921ebc63296780415d9",
"text": "The Indian regional navigational satellite system (IRNSS) developed by the Indian Space Research Organization (ISRO) is an autonomous regional satellite navigation system which is under the complete control of Government of India. The requirement of indigenous regional navigational satellite system is driven by the fact that access to Global Navigation Satellite System, like GPS is not guaranteed in hostile situations. Design of IRNSS antenna at user segment is mandatory for Indian region. The IRNSS satellites will be placed at a higher geostationary orbit to have a larger signal footprint and minimum satellites for regional mapping. IRNSS signals will consist of a Special Positioning Service and a Precision Service. Both will be carried on L5 band (1176.45 MHz) and S band (2492.08 MHz). As it is be long range communication system needs high frequency signals and high gain receiving antennas. So, different antennas can be designed to enhance the gain and directivity. Based on this the rectangular Microstrip patch antenna, planar array of patch antennas and planar, wideband feed slot spiral antenna are designed by using various software simulations. Use of array of spiral antennas will increase the gain position. Spiral antennas are comparatively small size and these antennas with its windings making it an extremely small structure. The performance of the designed antennas was compared in terms of return loss, bandwidth, directivity, radiation pattern and gain. In this paper, Review results of all antennas designed for IRNSS have presented.",
"title": ""
},
{
"docid": "f6d87c501bae68fe1b788e5b01bd17cc",
"text": "The matrix completion problem consists of finding or approximating a low-rank matrix based on a few samples of this matrix. We propose a novel algorithm for matrix completion that minimizes the least square distance on the sampling set over the Riemannian manifold of fixed-rank matrices. The algorithm is an adaptation of classical non-linear conjugate gradients, developed within the framework of retraction-based optimization on manifolds. We describe all the necessary objects from differential geometry necessary to perform optimization over this lowrank matrix manifold, seen as a submanifold embedded in the space of matrices. In particular, we describe how metric projection can be used as retraction and how vector transport lets us obtain the conjugate search directions. Additionally, we derive second-order models that can be used in Newton’s method based on approximating the exponential map on this manifold to second order. Finally, we prove convergence of a regularized version of our algorithm under the assumption that the restricted isometry property holds for incoherent matrices throughout the iterations. The numerical experiments indicate that our approach scales very well for large-scale problems and compares favorable with the state-of-the-art, while outperforming most existing solvers.",
"title": ""
},
{
"docid": "f5360ff8d8cc5d0a852cebeb09a29a98",
"text": "In this paper, we propose a collaborative deep reinforcement learning (C-DRL) method for multi-object tracking. Most existing multiobject tracking methods employ the tracking-by-detection strategy which first detects objects in each frame and then associates them across different frames. However, the performance of these methods rely heavily on the detection results, which are usually unsatisfied in many real applications, especially in crowded scenes. To address this, we develop a deep prediction-decision network in our C-DRL, which simultaneously detects and predicts objects under a unified network via deep reinforcement learning. Specifically, we consider each object as an agent and track it via the prediction network, and seek the optimal tracked results by exploiting the collaborative interactions of different agents and environments via the decision network.Experimental results on the challenging MOT15 and MOT16 benchmarks are presented to show the effectiveness of our approach.",
"title": ""
},
{
"docid": "a7e3338d682278643fdd7eefa795f3f3",
"text": "State of the art models using deep neural networks have become very good in learning an accurate mapping from inputs to outputs. However, they still lack generalization capabilities in conditions that differ from the ones encountered during training. This is even more challenging in specialized, and knowledge intensive domains, where training data is limited. To address this gap, we introduce MedNLI1 – a dataset annotated by doctors, performing a natural language inference task (NLI), grounded in the medical history of patients. We present strategies to: 1) leverage transfer learning using datasets from the open domain, (e.g. SNLI) and 2) incorporate domain knowledge from external data and lexical sources (e.g. medical terminologies). Our results demonstrate performance gains using both strategies.",
"title": ""
},
{
"docid": "e584e7e0c96bc78bc2b2166d1af272a6",
"text": "In this paper we investigate the problem of inducing a distribution over three-dimensional structures given two-dimensional views of multiple objects taken from unknown viewpoints. Our approach called \"projective generative adversarial networks\" (PrGANs) trains a deep generative model of 3D shapes whose projections match the distributions of the input 2D views. The addition of a projection module allows us to infer the underlying 3D shape distribution without using any 3D, viewpoint information, or annotation during the learning phase. We show that our approach produces 3D shapes of comparable quality to GANs trained on 3D data for a number of shape categories including chairs, airplanes, and cars. Experiments also show that the disentangled representation of 2D shapes into geometry and viewpoint leads to a good generative model of 2D shapes. The key advantage is that our model allows us to predict 3D, viewpoint, and generate novel views from an input image in a completely unsupervised manner.",
"title": ""
},
{
"docid": "fff6c1ca2fde7f50c3654f1953eb97e6",
"text": "This paper concerns new techniques for making requirements specifications precise, concise, unambiguous, and easy to check for completeness and consistency. The techniques are well-suited for complex real-time software systems; they were developed to document the requirements of existing flight software for the Navy's A-7 aircraft. The paper outlines the information that belongs in a requirements document and discusses the objectives behind the techniques. Each technique is described and illustrated with examples from the A-7 document. The purpose of the paper is to introduce the A-7 document as a model of a disciplined approach to requirements specification; the document is available to anyone who wishes to see a fully worked-out example of the approach.",
"title": ""
},
{
"docid": "1bc285b8bd63e701a55cf956179abbac",
"text": "A new anode/cathode design and process concept for thin wafer based silicon devices is proposed to achieve the goal of providing improved control for activating the injecting layer and forming a good ohmic contact. The concept is based on laser annealing in a melting regime of a p-type anode layer covered with a thin titanium layer with high melting temperature and high laser light absorption. The improved activation control of a boron anode layer is demonstrated on the Soft Punch Through IGBT with a nominal breakdown voltage of 1700 V. Furthermore, the silicidation of the titanium absorbing layer, which is necessary for achieving a low VCE ON, is discussed in terms of optimization of the device electrical parameters.",
"title": ""
},
{
"docid": "8877d6753d6b7cd39ba36c074ca56b00",
"text": "Perhaps the most fundamental application of affective computing will be Human-Computer Interaction (HCI) in which the computer should have the ability to detect and track the user's affective states, and make corresponding feedback. The human multi-sensor affect system defines the expectation of multimodal affect analyzer. In this paper, we present our efforts toward audio-visual HCI-related affect recognition. With HCI applications in mind, we take into account some special affective states which indicate users' cognitive/motivational states. Facing the fact that a facial expression is influenced by both an affective state and speech content, we apply a smoothing method to extract the information of the affective state from facial features. In our fusion stage, a voting method is applied to combine audio and visual modalities so that the final affect recognition accuracy is greatly improved. We test our bimodal affect recognition approach on 38 subjects with 11 HCI-related affect states. The extensive experimental results show that the average person-dependent affect recognition accuracy is almost 90% for our bimodal fusion.",
"title": ""
},
{
"docid": "c182be9222690ffe1c94729b2b79d8ed",
"text": "A balanced level of muscle strength between the different parts of the scapular muscles is important in optimizing performance and preventing injuries in athletes. Emerging evidence suggests that many athletes lack balanced strength in the scapular muscles. Evidence-based recommendations are important for proper exercise prescription. This study determines scapular muscle activity during strengthening exercises for scapular muscles performed at low and high intensities (Borg CR10 levels 3 and 8). Surface electromyography (EMG) from selected scapular muscles was recorded during 7 strengthening exercises and expressed as a percentage of the maximal EMG. Seventeen women (aged 24-55 years) without serious disorders participated. Several of the investigated exercises-press-up, prone flexion, one-arm row, and prone abduction at Borg 3 and press-up, push-up plus, and one-arm row at Borg 8-predominantly activated the lower trapezius over the upper trapezius (activation difference [Δ] 13-30%). Likewise, several of the exercises-push-up plus, shoulder press, and press-up at Borg 3 and 8-predominantly activated the serratus anterior over the upper trapezius (Δ18-45%). The middle trapezius was activated over the upper trapezius by one-arm row and prone abduction (Δ21-30%). Although shoulder press and push-up plus activated the serratus anterior over the lower trapezius (Δ22-33%), the opposite was true for prone flexion, one-arm row, and prone abduction (Δ16-54%). Only the press-up and push-up plus activated both the lower trapezius and the serratus anterior over the upper trapezius. In conclusion, several of the investigated exercises both at low and high intensities predominantly activated the serratus anterior and lower and middle trapezius, respectively, over the upper trapezius. These findings have important practical implications for exercise prescription for optimal shoulder function. For example, both workers with neck pain and athletes at risk of shoulder impingement (e.g., overhead sports) should perform push-up plus and press-ups to specifically strengthen the serratus anterior and lower trapezius.",
"title": ""
},
{
"docid": "a01abbced99f14ae198c6abef6454126",
"text": "Coreference Resolution September 2014 Present Kevin Clark, Christopher Manning Stanford University Developed coreference systems that build up coreference chains with agglomerative clustering. These models are more accurate than the mention-pair systems commonly used in prior work. Developed neural coreference systems that do not require the large number of complex hand-engineered features commonly found in statistical coreference systems. Applied imitation and reinforcement learning to directly optimize coreference systems for evaluation metrics instead of relying on hand-tuned heuristic loss functions. Made substantial advancements to the current state-of-the-art for English and Chinese coreference. Publicly released all models through Stanford’s CoreNLP.",
"title": ""
},
{
"docid": "4ec91fd15f10c1c8616a890447c2b063",
"text": "Texture is an important visual clue for various classification and segmentation tasks in the scene understanding challenge. Today, successful deployment of deep learning algorithms for texture recognition leads to tremendous precisions on standard datasets. In this paper, we propose a new learning framework to train deep neural networks in parallel and with variable depth for texture recognition. Our framework learns scales, orientations and resolutions of texture filter banks. Due to the learning of parameters not the filters themselves, computational costs are highly reduced. It is also capable of extracting very deep features through distributed computing architectures. Our experiments on publicly available texture datasets show significant improvements in the recognition performance over other deep local descriptors in recently published benchmarks.",
"title": ""
},
{
"docid": "a79f9ad24c4f047d8ace297b681ccf0a",
"text": "BACKGROUND\nLe Fort III distraction advances the Apert midface but leaves the central concavity and vertical compression untreated. The authors propose that Le Fort II distraction and simultaneous zygomatic repositioning as a combined procedure can move the central midface and lateral orbits in independent vectors in order to improve the facial deformity. The purpose of this study was to determine whether this segmental movement results in more normal facial proportions than Le Fort III distraction.\n\n\nMETHODS\nComputed tomographic scan analyses were performed before and after distraction in patients undergoing Le Fort III distraction (n = 5) and Le Fort II distraction with simultaneous zygomatic repositioning (n = 4). The calculated axial facial ratios and vertical facial ratios relative to the skull base were compared to those of unoperated Crouzon (n = 5) and normal (n = 6) controls.\n\n\nRESULTS\nWith Le Fort III distraction, facial ratios did not change with surgery and remained lower (p < 0.01; paired t test comparison) than normal and Crouzon controls. Although the face was advanced, its shape remained abnormal. With the Le Fort II segmental movement procedure, the central face advanced and lengthened more than the lateral orbit. This differential movement changed the abnormal facial ratios that were present before surgery into ratios that were not significantly different from normal controls (p > 0.05).\n\n\nCONCLUSION\nCompared with Le Fort III distraction, Le Fort II distraction with simultaneous zygomatic repositioning normalizes the position and the shape of the Apert face.\n\n\nCLINICAL QUESTION/LEVEL OF EVIDENCE\nTherapeutic, III.",
"title": ""
},
{
"docid": "e6610d23c69a140fdf07d1ee2e58c8a1",
"text": "Purpose – The purpose of this paper is to contribute to the body of knowledge about to what extent integrated information systems, such as ERP and SEM systems, affect the ability to solve different management accounting tasks. Design/methodology/approach – The relationship between IIS and management accounting practices was investigated quantitatively. A total of 349 responses were collected using a survey, and the data were analysed using linear regression models. Findings – Analyses indicate that ERP systems support the data collection and the organisational breadth of management accounting better than SEM systems. SEM systems, on the other hand, seem to be better at supporting reporting and analysis. In addition, modern management accounting techniques involving the use of non-financial data are better supported by an SEM system. This indicates that different management accounting tasks are supported by different parts of the IIS. Research limitations/implications – The study applies the methods of quantitative research. Thus, the internal validity is threatened. Conducting in-depth studies might be able to reduce this possible shortcoming. Practical implications – On the basis of the findings, there is a need to consider the potential of closer integration of ERP and SEM systems in order to solve management accounting tasks. Originality/value – This paper adds to the limited body of knowledge about the relationship between IIS and management accounting practices.",
"title": ""
},
{
"docid": "3ce021aa52dac518e1437d397c63bf68",
"text": "Malaria is a common and sometimes fatal disease caused by infection with Plasmodium parasites. Cerebral malaria (CM) is a most severe complication of infection with Plasmodium falciparum parasites which features a complex immunopathology that includes a prominent neuroinflammation. The experimental mouse model of cerebral malaria (ECM) induced by infection with Plasmodium berghei ANKA has been used abundantly to study the role of single genes, proteins and pathways in the pathogenesis of CM, including a possible contribution to neuroinflammation. In this review, we discuss the Plasmodium berghei ANKA infection model to study human CM, and we provide a summary of all host genetic effects (mapped loci, single genes) whose role in CM pathogenesis has been assessed in this model. Taken together, the reviewed studies document the many aspects of the immune system that are required for pathological inflammation in ECM, but also identify novel avenues for potential therapeutic intervention in CM and in diseases which feature neuroinflammation.",
"title": ""
},
{
"docid": "375ab5445e81c7982802bdb8b9cbd717",
"text": "Advances in healthcare have led to longer life expectancy and an aging population. The cost of caring for the elderly is rising progressively and threatens the economic well-being of many nations around the world. Instead of professional nursing facilities, many elderly people prefer living independently in their own homes. To enable the aging to remain active, this research explores the roles of technology in improving their quality of life while reducing the cost of healthcare to the elderly population. In particular, we propose a multi-agent service framework, called Context-Aware Service Integration System (CASIS), to integrate applications and services. This paper demonstrates several context-aware service scenarios these have been developed on the proposed framework to demonstrate how context technologies and mobile web services can help enhance the quality of care for an elder’s daily",
"title": ""
},
{
"docid": "e9e620742992a6b6aa50e6e0e5894b6f",
"text": "A significant amount of information in today’s world is stored in structured and semistructured knowledge bases. Efficient and simple methods to query these databases are essential and must not be restricted to only those who have expertise in formal query languages. The field of semantic parsing deals with converting natural language utterances to logical forms that can be easily executed on a knowledge base. In this survey, we examine the various components of a semantic parsing system and discuss prominent work ranging from the initial rule based methods to the current neural approaches to program synthesis. We also discuss methods that operate using varying levels of supervision and highlight the key challenges involved in the learning of such systems.",
"title": ""
},
{
"docid": "0b973f37e2d9c3d7f427b939db233f12",
"text": "Artificial intelligence (AI) generally and machine learning (ML) specifically demonstrate impressive practical success in many different application domains, e.g. in autonomous driving, speech recognition, or recommender systems. Deep learning approaches, trained on extremely large data sets or using reinforcement learning methods have even exceeded human performance in visual tasks, particularly on playing games such as Atari, or mastering the game of Go. Even in the medical domain there are remarkable results. However, the central problem of such models is that they are regarded as black-box models and even if we understand the underlying mathematical principles of such models they lack an explicit declarative knowledge representation, hence have difficulty in generating the underlying explanatory structures. This calls for systems enabling to make decisions transparent, understandable and explainable. A huge motivation for our approach are rising legal and privacy aspects. The new European General Data Protection Regulation (GDPR and ISO/IEC 27001) entering into force on May 25th 2018, will make black-box approaches difficult to use in business. This does not imply a ban on automatic learning approaches or an obligation to explain everything all the time, however, there must be a possibility to make the results re-traceable on demand. This is beneficial, e.g. for general understanding, for teaching, for learning, for research, and it can be helpful in court. In this paper we outline some of our research topics in the context of the relatively new area of explainable-AI with a focus on the application in medicine, which is a very special domain. This is due to the fact that medical professionals are working mostly with distributed heterogeneous and complex sources of data. In this paper we concentrate on three sources: images, *omics data and text. We argue that research in explainable-AI would generally help to facilitate the implementation of AI/ML in the medical domain, and specifically help to facilitate transparency and trust.",
"title": ""
}
] | scidocsrr |
f97d72f8e43ed080e21db780ff110aa4 | Tropical rat mites (Ornithonyssus bacoti) - serious ectoparasites. | [
{
"docid": "5d7d7a49b254e08c95e40a3bed0aa10e",
"text": "Five mentally handicapped individuals living in a home for disabled persons in Southern Germany were seen in our outpatient department with pruritic, red papules predominantly located in groups on the upper extremities, neck, upper trunk and face. Over several weeks 40 inhabitants and 5 caretakers were affected by the same rash. Inspection of their home and the sheds nearby disclosed infestation with rat populations and mites. Finally the diagnosis of tropical rat mite dermatitis was made by the identification of the arthropod Ornithonyssus bacoti or so-called tropical rat mite. The patients were treated with topical corticosteroids and antihistamines. After elimination of the rats and disinfection of the rooms by a professional exterminator no new cases of rat mite dermatitis occurred. The tropical rat mite is an external parasite occurring on rats, mice, gerbils, hamsters and various other small mammals. When the principal animal host is not available, human beings can become the victim of mite infestation.",
"title": ""
}
] | [
{
"docid": "447e62529ed6b1b428e6edd78aabb637",
"text": "Dexterity robotic hands can (Cummings, 1996) greatly enhance the functionality of humanoid robots, but the making of such hands with not only human-like appearance but also the capability of performing the natural movement of social robots is a challenging problem. The first challenge is to create the hand’s articulated structure and the second challenge is to actuate it to move like a human hand. A robotic hand for humanoid robot should look and behave human like. At the same time, it also needs to be light and cheap for widely used purposes. We start with studying the biomechanical features of a human hand and propose a simplified mechanical model of robotic hands, which can achieve the important local motions of the hand. Then, we use 3D modeling techniques to create a single interlocked hand model that integrates pin and ball joints to our hand model. Compared to other robotic hands, our design saves the time required for assembling and adjusting, which makes our robotic hand ready-to-use right after the 3D printing is completed. Finally, the actuation of the hand is realized by cables and motors. Based on this approach, we have designed a cost-effective, 3D printable, compact, and lightweight robotic hand. Our robotic hand weighs 150 g, has 15 joints, which are similar to a real human hand, and 6 Degree of Freedom (DOFs). It is actuated by only six small size actuators. The wrist connecting part is also integrated into the hand model and could be customized for different robots such as Nadine robot (Magnenat Thalmann et al., 2017). The compact servo bed can be hidden inside the Nadine robot’s sleeve and the whole robotic hand platform will not cause extra load to her arm as the total weight (150 g robotic hand and 162 g artificial skin) is almost the same as her previous unarticulated robotic hand which is 348 g. The paper also shows our test results with and without silicon artificial hand skin, and on Nadine robot.",
"title": ""
},
{
"docid": "7d0dfce24bd539cb790c0c25348d075d",
"text": "When learning from positive and unlabelled data, it is a strong assumption that the positive observations are randomly sampled from the distribution of X conditional on Y = 1, where X stands for the feature and Y the label. Most existing algorithms are optimally designed under the assumption. However, for many realworld applications, the observed positive examples are dependent on the conditional probability P (Y = 1|X) and should be sampled biasedly. In this paper, we assume that a positive example with a higher P (Y = 1|X) is more likely to be labelled and propose a probabilistic-gap based PU learning algorithms. Specically, by treating the unlabelled data as noisy negative examples, we could automatically label a group positive and negative examples whose labels are identical to the ones assigned by a Bayesian optimal classier with a consistency guarantee. e relabelled examples have a biased domain, which is remedied by the kernel mean matching technique. e proposed algorithm is model-free and thus do not have any parameters to tune. Experimental results demonstrate that our method works well on both generated and real-world datasets. ∗UBTECH Sydney Articial Intelligence Centre and the School of Information Technologies, Faculty of Engineering and Information Technologies, e University of Sydney, Darlington, NSW 2008, Australia, [email protected]; [email protected]; [email protected]. †Faculty of Information Technology, Monash University, Clayton, VIC 3800, Australia, geo[email protected]. 1 ar X iv :1 80 8. 02 18 0v 1 [ cs .L G ] 7 A ug 2 01 8",
"title": ""
},
{
"docid": "af0178d0bb154c3995732e63b94842ca",
"text": "Cyborg intelligence is an emerging kind of intelligence paradigm. It aims to deeply integrate machine intelligence with biological intelligence by connecting machines and living beings via neural interfaces, enhancing strength by combining the biological cognition capability with the machine computational capability. Cyborg intelligence is considered to be a new way to augment living beings with machine intelligence. In this paper, we build rat cyborgs to demonstrate how they can expedite the maze escape task with integration of machine intelligence. We compare the performance of maze solving by computer, by individual rats, and by computer-aided rats (i.e. rat cyborgs). They were asked to find their way from a constant entrance to a constant exit in fourteen diverse mazes. Performance of maze solving was measured by steps, coverage rates, and time spent. The experimental results with six rats and their intelligence-augmented rat cyborgs show that rat cyborgs have the best performance in escaping from mazes. These results provide a proof-of-principle demonstration for cyborg intelligence. In addition, our novel cyborg intelligent system (rat cyborg) has great potential in various applications, such as search and rescue in complex terrains.",
"title": ""
},
{
"docid": "b4ac5df370c0df5fdb3150afffd9158b",
"text": "The aggregation of many independent estimates can outperform the most accurate individual judgement 1–3 . This centenarian finding 1,2 , popularly known as the 'wisdom of crowds' 3 , has been applied to problems ranging from the diagnosis of cancer 4 to financial forecasting 5 . It is widely believed that social influence undermines collective wisdom by reducing the diversity of opinions within the crowd. Here, we show that if a large crowd is structured in small independent groups, deliberation and social influence within groups improve the crowd’s collective accuracy. We asked a live crowd (N = 5,180) to respond to general-knowledge questions (for example, \"What is the height of the Eiffel Tower?\"). Participants first answered individually, then deliberated and made consensus decisions in groups of five, and finally provided revised individual estimates. We found that averaging consensus decisions was substantially more accurate than aggregating the initial independent opinions. Remarkably, combining as few as four consensus choices outperformed the wisdom of thousands of individuals. The collective wisdom of crowds often provides better answers to problems than individual judgements. Here, a large experiment that split a crowd into many small deliberative groups produced better estimates than the average of all answers in the crowd.",
"title": ""
},
{
"docid": "7fe0c40d6f62d24b4fb565d3341c1422",
"text": "Instead of a standard support vector machine (SVM) that classifies points by assigning them to one of two disjoint half-spaces, points are classified by assigning them to the closest of two parallel planes (in input or feature space) that are pushed apart as far as possible. This formulation, which can also be interpreted as regularized least squares and considered in the much more general context of regularized networks [8, 9], leads to an extremely fast and simple algorithm for generating a linear or nonlinear classifier that merely requires the solution of a single system of linear equations. In contrast, standard SVMs solve a quadratic or a linear program that require considerably longer computational time. Computational results on publicly available datasets indicate that the proposed proximal SVM classifier has comparable test set correctness to that of standard SVM classifiers, but with considerably faster computational time that can be an order of magnitude faster. The linear proximal SVM can easily handle large datasets as indicated by the classification of a 2 million point 10-attribute set in 20.8 seconds. All computational results are based on 6 lines of MATLAB code.",
"title": ""
},
{
"docid": "f01a1679095a163894660cb0748334d3",
"text": "We present a novel approach for event extraction and abstraction from movie descriptions. Our event frame consists of ‘who”, “did what” “to whom”, “where”, and “when”. We formulate our problem using a recurrent neural network, enhanced with structural features extracted from syntactic parser, and trained using curriculum learning by progressively increasing the difficulty of the sentences. Our model serves as an intermediate step towards question answering systems, visual storytelling, and story completion tasks. We evaluate our approach on MovieQA dataset.",
"title": ""
},
{
"docid": "130efef512294d14094a900693efebfd",
"text": "Metaphor comprehension involves an interaction between the meaning of the topic and the vehicle terms of the metaphor. Meaning is represented by vectors in a high-dimensional semantic space. Predication modifies the topic vector by merging it with selected features of the vehicle vector. The resulting metaphor vector can be evaluated by comparing it with known landmarks in the semantic space. Thus, metaphorical prediction is treated in the present model in exactly the same way as literal predication. Some experimental results concerning metaphor comprehension are simulated within this framework, such as the nonreversibility of metaphors, priming of metaphors with literal statements, and priming of literal statements with metaphors.",
"title": ""
},
{
"docid": "c8e23bc60783125d5bf489cddd3e8290",
"text": "An efficient probabilistic algorithm for the concurrent mapping and localization problem that arises in mobile robotics is presented. The algorithm addresses the problem in which a team of robots builds a map on-line while simultaneously accommodating errors in the robots’ odometry. At the core of the algorithm is a technique that combines fast maximum likelihood map growing with a Monte Carlo localizer that uses particle representations. The combination of both yields an on-line algorithm that can cope with large odometric errors typically found when mapping environments with cycles. The algorithm can be implemented in a distributed manner on multiple robot platforms, enabling a team of robots to cooperatively generate a single map of their environment. Finally, an extension is described for acquiring three-dimensional maps, which capture the structure and visual appearance of indoor environments in three dimensions. KEY WORDS—mobile robotics, map acquisition, localization, robotic exploration, multi-robot systems, threedimensional modeling",
"title": ""
},
{
"docid": "b69f7c0db77c3012ae5e550b23a313fb",
"text": "Speckle noise is an inherent property of medical ultrasound imaging, and it generally tends to reduce the image resolution and contrast, thereby reducing the diagnostic value of this imaging modality. As a result, speckle noise reduction is an important prerequisite, whenever ultrasound imaging is used for tissue characterization. Among the many methods that have been proposed to perform this task, there exists a class of approaches that use a multiplicative model of speckled image formation and take advantage of the logarithmical transformation in order to convert multiplicative speckle noise into additive noise. The common assumption made in a dominant number of such studies is that the samples of the additive noise are mutually uncorrelated and obey a Gaussian distribution. The present study shows conceptually and experimentally that this assumption is oversimplified and unnatural. Moreover, it may lead to inadequate performance of the speckle reduction methods. The study introduces a simple preprocessing procedure, which modifies the acquired radio-frequency images (without affecting the anatomical information they contain), so that the noise in the log-transformation domain becomes very close in its behavior to a white Gaussian noise. As a result, the preprocessing allows filtering methods based on assuming the noise to be white and Gaussian, to perform in nearly optimal conditions. The study evaluates performances of three different, nonlinear filters - wavelet denoising, total variation filtering, and anisotropic diffusion - and demonstrates that, in all these cases, the proposed preprocessing significantly improves the quality of resultant images. Our numerical tests include a series of computer-simulated and in vivo experiments.",
"title": ""
},
{
"docid": "84f2072f32d2a29d372eef0f4622ddce",
"text": "This paper presents a new methodology for synthesis of broadband equivalent circuits for multi-port high speed interconnect systems from numerically obtained and/or measured frequency-domain and time-domain response data. The equivalent circuit synthesis is based on the rational function fitting of admittance matrix, which combines the frequency-domain vector fitting process, VECTFIT with its time-domain analog, TDVF to yield a robust and versatile fitting algorithm. The generated rational fit is directly converted into a SPICE-compatible circuit after passivity enforcement. The accuracy of the resulting algorithm is demonstrated through its application to the fitting of the admittance matrix of a power/ground plane structure",
"title": ""
},
{
"docid": "e36e0c8659b8bae3acf0f178fce362c3",
"text": "Clinical data describing the phenotypes and treatment of patients represents an underused data source that has much greater research potential than is currently realized. Mining of electronic health records (EHRs) has the potential for establishing new patient-stratification principles and for revealing unknown disease correlations. Integrating EHR data with genetic data will also give a finer understanding of genotype–phenotype relationships. However, a broad range of ethical, legal and technical reasons currently hinder the systematic deposition of these data in EHRs and their mining. Here, we consider the potential for furthering medical research and clinical care using EHR data and the challenges that must be overcome before this is a reality.",
"title": ""
},
{
"docid": "56c5ec77f7b39692d8b0d5da0e14f82a",
"text": "Using tweets extracted from Twitter during the Australian 2010-2011 floods, social network analysis techniques were used to generate and analyse the online networks that emerged at that time. The aim was to develop an understanding of the online communities for the Queensland, New South Wales and Victorian floods in order to identify active players and their effectiveness in disseminating critical information. A secondary goal was to identify important online resources disseminated by these communities. Important and effective players during the Queensland floods were found to be: local authorities (mainly the Queensland Police Services), political personalities (Queensland Premier, Prime Minister, Opposition Leader, Member of Parliament), social media volunteers, traditional media reporters, and people from not-for-profit, humanitarian, and community associations. A range of important resources were identified during the Queensland flood; however, they appeared to be of a more general information nature rather than vital information and updates on the disaster. Unlike Queensland, there was no evidence of Twitter activity from the part of local authorities and the government in the New South Wales and Victorian floods. Furthermore, the level of Twitter activity during the NSW floods was almost nil. Most of the active players during the NSW and Victorian floods were volunteers who were active during the Queensland floods. Given the positive results obtained by the active involvement of the local authorities and government officials in Queensland, and the increasing adoption of Twitter in other parts of the world for emergency situations, it seems reasonable to push for greater adoption of Twitter from local and federal authorities Australia-wide during periods of mass emergencies.",
"title": ""
},
{
"docid": "9d37baf5ce33826a59cc7bd0fd7955c0",
"text": "A digital image analysis method previously used to evaluate leaf color changes due to nutritional changes was modified to measure the severity of several foliar fungal diseases. Images captured with a flatbed scanner or digital camera were analyzed with a freely available software package, Scion Image, to measure changes in leaf color caused by fungal sporulation or tissue damage. High correlations were observed between the percent diseased leaf area estimated by Scion Image analysis and the percent diseased leaf area from leaf drawings. These drawings of various foliar diseases came from a disease key previously developed to aid in visual estimation of disease severity. For leaves of Nicotiana benthamiana inoculated with different spore concentrations of the anthracnose fungus Colletotrichum destructivum, a high correlation was found between the percent diseased tissue measured by Scion Image analysis and the number of leaf spots. The method was adapted to quantify percent diseased leaf area ranging from 0 to 90% for anthracnose of lily-of-the-valley, apple scab, powdery mildew of phlox and rust of golden rod. In some cases, the brightness and contrast of the images were adjusted and other modifications were made, but these were standardized for each disease. Detached leaves were used with the flatbed scanner, but a method using attached leaves with a digital camera was also developed to make serial measurements of individual leaves to quantify symptom progression. This was successfully applied to monitor anthracnose on N. benthamiana leaves. Digital image analysis using Scion Image software is a useful tool for quantifying a wide variety of fungal interactions with plant leaves.",
"title": ""
},
{
"docid": "d46434bbbf73460bf422ebe4bd65b590",
"text": "We present an efficient block-diagonal approximation to the Gauss-Newton matrix for feedforward neural networks. Our resulting algorithm is competitive against state-of-the-art first-order optimisation methods, with sometimes significant improvement in optimisation performance. Unlike first-order methods, for which hyperparameter tuning of the optimisation parameters is often a laborious process, our approach can provide good performance even when used with default settings. A side result of our work is that for piecewise linear transfer functions, the network objective function can have no differentiable local maxima, which may partially explain why such transfer functions facilitate effective optimisation.",
"title": ""
},
{
"docid": "7830c4737197e84a247349f2e586424e",
"text": "This paper describes VPL, a Virtual Programming Lab module for Moodle, developed at the University of Las Palmas of Gran Canaria (ULPGC) and released for free uses under GNU/GPL license. For the students, it is a simple development environment with auto evaluation capabilities. For the instructors, it is a students' work management system, with features to facilitate the preparation of assignments, manage the submissions, check for plagiarism, and do assessments with the aid of powerful and flexible assessment tools based on program testing, all of that being independent of the programming language used for the assignments and taken into account critical security issues.",
"title": ""
},
{
"docid": "1241bc6b7d3522fe9e285ae843976524",
"text": "In many new high performance designs, the leakage component of power consumption is comparable to the switching component. Reports indicate that 40% or even higher percentage of the total power consumption is due to the leakage of transistors. This percentage will increase with technology scaling unless effective techniques are introduced to bring leakage under control. This article focuses on circuit optimization and design automation techniques to accomplish this goal. The first part of the article provides an overview of basic physics and process scaling trends that have resulted in a significant increase in the leakage currents in CMOS circuits. This part also distinguishes between the standby and active components of the leakage current. The second part of the article describes a number of circuit optimization techniques for controlling the standby leakage current, including power gating and body bias control. The third part of the article presents techniques for active leakage control, including use of multiple-threshold cells, long channel devices, input vector design, transistor stacking to switching noise, and sizing with simultaneous threshold and supply voltage assignment.",
"title": ""
},
{
"docid": "51cd0219f96b4ae6984df37ed439bbaa",
"text": "This paper introduces an unsupervised framework to extract semantically rich features for video representation. Inspired by how the human visual system groups objects based on motion cues, we propose a deep convolutional neural network that disentangles motion, foreground and background information. The proposed architecture consists of a 3D convolutional feature encoder for blocks of 16 frames, which is trained for reconstruction tasks over the first and last frames of the sequence. A preliminary supervised experiment was conducted to verify the feasibility of proposed method by training the model with a fraction of videos from the UCF-101 dataset taking as ground truth the bounding boxes around the activity regions. Qualitative results indicate that the network can successfully segment foreground and background in videos as well as update the foreground appearance based on disentangled motion features. The benefits of these learned features are shown in a discriminative classification task, where initializing the network with the proposed pretraining method outperforms both random initialization and autoencoder pretraining. Our model and source code are publicly available at https: //allenovo.github.io/cvprw17_webpage/ .",
"title": ""
},
{
"docid": "ad9a94a4deafceedccdd5f4164cde293",
"text": "In this paper, we investigate the application of machine learning techniques and word embeddings to the task of Recognizing Textual Entailment (RTE) in Social Media. We look at a manually labeled dataset (Lendvai et al., 2016) consisting of user generated short texts posted on Twitter (tweets) and related to four recent media events (the Charlie Hebdo shooting, the Ottawa shooting, the Sydney Siege, and the German Wings crash) and test to what extent neural techniques and embeddings are able to distinguish between tweets that entail or contradict each other or that claim unrelated things. We obtain comparable results to the state of the art in a train-test setting, but we show that, due to the noisy aspect of the data, results plummet in an evaluation strategy crafted to better simulate a real-life train-test scenario.",
"title": ""
},
{
"docid": "896fe681f79ef025a6058a51dd4f19c0",
"text": "Semantic parsing is the construction of a complete, formal, symbolic meaning representation of a sentence. While it is crucial to natural language understanding, the problem of semantic parsing has received relatively little attention from the machine learning community. Recent work on natural language understanding has mainly focused on shallow semantic analysis, such as word-sense disambiguation and semantic role labeling. Semantic parsing, on the other hand, involves deep semantic analysis in which word senses, semantic roles and other components are combined to produce useful meaning representations for a particular application domain (e.g. database query). Prior research in machine learning for semantic parsing is mainly based on inductive logic programming or deterministic parsing, which lack some of the robustness that characterizes statistical learning. Existing statistical approaches to semantic parsing, however, are mostly concerned with relatively simple application domains in which a meaning representation is no more than a single semantic frame. In this proposal, we present a novel statistical approach to semantic parsing, WASP, which can handle meaning representations with a nested structure. The WASP algorithm learns a semantic parser given a set of sentences annotated with their correct meaning representations. The parsing model is based on the synchronous context-free grammar, where each rule maps a natural-language substring to its meaning representation. The main innovation of the algorithm is its use of state-of-the-art statistical machine translation techniques. A statistical word alignment model is used for lexical acquisition, and the parsing model itself can be seen as an instance of a syntax-based translation model. In initial evaluation on several real-world data sets, we show that WASP performs favorably in terms of both accuracy and coverage compared to existing learning methods requiring similar amount of supervision, and shows better robustness to variations in task complexity and word order. In future work, we intend to pursue several directions in developing accurate semantic parsers for a variety of application domains. This will involve exploiting prior knowledge about the natural-language syntax and the application domain. We also plan to construct a syntax-aware word-based alignment model for lexical acquisition. Finally, we will generalize the learning algorithm to handle contextdependent sentences and accept noisy training data.",
"title": ""
},
{
"docid": "6a455fd9c86feb287a3c5a103bb681de",
"text": "This paper presents two approaches to semantic search by incorporating Linked Data annotations of documents into a Generalized Vector Space Model. One model exploits taxonomic relationships among entities in documents and queries, while the other model computes term weights based on semantic relationships within a document. We publish an evaluation dataset with annotated documents and queries as well as user-rated relevance assessments. The evaluation on this dataset shows significant improvements of both models over traditional keyword based search.",
"title": ""
}
] | scidocsrr |
77471bab1c814fe955730bc9b60d8fef | Efficient Storage of Multi-Sensor Object-Tracking Data | [
{
"docid": "4fa73e04ccc8620c12aaea666ea366a6",
"text": "The popularity of the Web and Internet commerce provides many extremely large datasets from which information can be gleaned by data mining. This book focuses on practical algorithms that have been used to solve key problems in data mining and can be used on even the largest datasets. It begins with a discussion of the map-reduce framework, an important tool for parallelizing algorithms automatically. The tricks of locality-sensitive hashing are explained. This body of knowledge, which deserves to be more widely known, is essential when seeking similar objects in a very large collection without having to compare each pair of objects. Stream processing algorithms for mining data that arrives too fast for exhaustive processing are also explained. The PageRank idea and related tricks for organizing the Web are covered next. Other chapters cover the problems of finding frequent itemsets and clustering, each from the point of view that the data is too large to fit in main memory, and two applications: recommendation systems and Web advertising, each vital in e-commerce. This second edition includes new and extended coverage on social networks, machine learning and dimensionality reduction. Written by leading authorities in database and web technologies, it is essential reading for students and practitioners alike",
"title": ""
}
] | [
{
"docid": "5b0e33ede34f6532a48782e423128f49",
"text": "The literature on globalisation reveals wide agreement concerning the relevance of international sourcing strategies as key competitive factors for companies seeking globalisation, considering such strategies to be a purchasing management approach focusing on supplies from vendors in the world market, rather than relying exclusively on domestic offerings (Petersen, Frayer, & Scannel, 2000; Stevens, 1995; Trent & Monczka, 1998). Thus, the notion of “international sourcing” mentioned by these authors describes the level of supply globalisation in companies’ purchasing strategy, as related to supplier source (Giunipero & Pearcy, 2000; Levy, 1995; Trent & Monczka, 2003b).",
"title": ""
},
{
"docid": "c296244ea4283a43623d3a3aabd4d672",
"text": "With growing interest in Chinese Language Processing, numerous NLP tools (e.g., word segmenters, part-of-speech taggers, and parsers) for Chinese have been developed all over the world. However, since no large-scale bracketed corpora are available to the public, these tools are trained on corpora with different segmentation criteria, part-of-speech tagsets and bracketing guidelines, and therefore, comparisons are difficult. As a first step towards addressing this issue, we have been preparing a large bracketed corpus since late 1998. The first two installments of the corpus, 250 thousand words of data, fully segmented, POS-tagged and syntactically bracketed, have been released to the public via LDC (www.ldc.upenn.edu). In this paper, we discuss several Chinese linguistic issues and their implications for our treebanking efforts and how we address these issues when developing our annotation guidelines. We also describe our engineering strategies to improve speed while ensuring annotation quality.",
"title": ""
},
{
"docid": "680b2b1c938e381b4070a4d0a44d4ec8",
"text": "The significance of aligning IT with corporate strategy is widely recognized, but the lack of appropriate methodologies prevented practitioners from integrating IT projects with competitive strategies effectively. This article addresses the issue of deploying Web services strategically using the concept of a widely accepted management tool, the balanced scorecard. A framework is developed to match potential benefits of Web services with corporate strategy in four business dimensions: innovation and learning, internal business process, customer, and financial. It is argued that the strategic benefits of implementing Web services can only be realized if the Web services initiatives are planned and implemented within the framework of an IT strategy that is designed to support the business strategy of a firm.",
"title": ""
},
{
"docid": "f266646478196476fb93ea507ea6e23e",
"text": "The aim of this paper is to develop a human tracking system that is resistant to environmental changes and covers wide area. Simply structured floor sensors are low-cost and can track people in a wide area. However, the sensor reading is discrete and missing; therefore, footsteps do not represent the precise location of a person. A Markov chain Monte Carlo method (MCMC) is a promising tracking algorithm for these kinds of signals. We applied two prediction models to the MCMC: a linear Gaussian model and a highly nonlinear bipedal model. The Gaussian model was efficient in terms of computational cost while the bipedal model discriminated people more accurate than the Gaussian model. The Gaussian model can be used to track a number of people, and the bipedal model can be used in situations where more accurate tracking is required.",
"title": ""
},
{
"docid": "9c9c031767526777ee680f184de4b092",
"text": "The study of interleukin-23 (IL-23) over the past 8 years has led to the realization that cellular immunity is far more complex than previously appreciated, because it is controlled by additional newly identified players. From the analysis of seemingly straightforward cytokine regulation of autoimmune diseases, many limitations of the established paradigms emerged that required reevaluation of the 'rules' that govern the initiation and maintenance of immune responses. This information led to a major revision of the T-helper 1 (Th1)/Th2 hypothesis and discovery of an unexpected link between transforming growth factor-beta-dependent Th17 and inducible regulatory T cells. The aim of this review is to explore the multiple characteristics of IL-23 with respect to its 'id' in autoimmunity, 'ego' in T-cell help, and 'superego' in defense against mucosal pathogens.",
"title": ""
},
{
"docid": "e1d0c07f9886d3258f0c5de9dd372e17",
"text": "strategies and tools must be based on some theory of learning and cognition. Of course, crafting well-articulated views that clearly answer the major epistemological questions of human learning has exercised psychologists and educators for centuries. What is a mind? What does it mean to know something? How is our knowledge represented and manifested? Many educators prefer an eclectic approach, selecting “principles and techniques from the many theoretical perspectives in much the same way we might select international dishes from a smorgasbord, choosing those we like best and ending up with a meal which represents no nationality exclusively and a design technology based on no single theoretical base” (Bednar et al., 1995, p. 100). It is certainly the case that research within collaborative educational learning tools has drawn upon behavioral, cognitive information processing, humanistic, and sociocultural theory, among others, for inspiration and justification. Problems arise, however, when tools developed in the service of one epistemology, say cognitive information processing, are integrated within instructional systems designed to promote learning goals inconsistent with it. When concepts, strategies, and tools are abstracted from the theoretical viewpoint that spawned them, they are too often stripped of meaning and utility. In this chapter, we embed our discussion in learner-centered, constructivist, and sociocultural perspectives on collaborative technology, with a bias toward the third. The principles of these perspectives, in fact, provide the theoretical rationale for much of the research and ideas presented in this book. 2",
"title": ""
},
{
"docid": "bcc00e5db8f484a37528aae2740314f4",
"text": "Multi-Instance Multi-Label (MIML) is a learning framework where an example is associated with multiple labels and represented by a set of feature vectors (multiple instances). In the formalization of MIML learning, instances come from a single source (single view). To leverage multiple information sources (multi-view), we develop a multi-view MIML framework based on hierarchical Bayesian Network, and derive an effective learning algorithm based on variational inference. The model can naturally deal with examples in which some views could be absent (partial examples). On multi-view datasets, it is shown that our method is better than other multi-view and single-view approaches particularly in the presence of partial examples. On single-view benchmarks, extensive evaluation shows that our method is highly competitive or better than other MIML approaches on labeling examples and instances. Moreover, our method can effectively handle datasets with a large number of labels.",
"title": ""
},
{
"docid": "ffaa8edb1fccf68e6b7c066fb994510a",
"text": "A fast and precise determination of the DOA (direction of arrival) for immediate object classification becomes increasingly important for future automotive radar generations. Hereby, the elevation angle of an object is considered as a key parameter especially in complex urban environments. An antenna concept allowing the determination of object angles in azimuth and elevation is proposed and discussed in this contribution. This antenna concept consisting of a linear patch array and a cylindrical dielectric lens is implemented into a radar sensor and characterized in terms of angular accuracy and ambiguities using correlation algorithms and the CRLB (Cramer Rao Lower Bound).",
"title": ""
},
{
"docid": "aa65dc18169238ef973ef24efb03f918",
"text": "A number of national studies point to a trend in which highly selective and elite private and public universities are becoming less accessible to lower-income students. At the same time there have been surprisingly few studies of the actual characteristics and academic experiences of low-income students or comparisons of their undergraduate experience with those of more wealthy students. This paper explores the divide between poor and rich students, first comparing a group of selective US institutions and their number and percentage of Pell Grant recipients and then, using institutional data and results from the University of California Undergraduate Experience Survey (UCUES), presenting an analysis of the high percentage of low-income undergraduate students within the University of California system — who they are, their academic performance and quality of their undergraduate experience. Among our conclusions: The University of California has a strikingly higher number of lowincome students when compared to a sample group of twenty-four other selective public and private universities and colleges, including the Ivy Leagues and a sub-group of other California institutions such as Stanford and the University of Southern California. Indeed, the UC campuses of Berkeley, Davis, and UCLA each have more Pell Grant students than all of the eight Ivy League institutions combined. However, one out of three Pell Grant recipients at UC have at least one parent with a four-year college degree, calling into question the assumption that “low-income” and “first-generation” are interchangeable groups of students. Low-income students, and in particular Pell Grant recipients, at UC have only slightly lower GPAs than their more wealthy counterparts in both math, science and engineering, and in humanities and social science fields. Contrary to some previous research, we find that low-income students have generally the same academic and social satisfaction levels; and are similar in their sense of belonging within their campus communities. However, there are some intriguing results across UC campuses, with low-income students somewhat less satisfied at those campuses where there are more affluent student bodies and where lower-income students have a smaller presence. An imbalance between rich and poor is the oldest and most fatal ailment of all republics — Plutarch There has been a growing and renewed concern among scholars of higher education and policymakers about increasing socioeconomic disparities in American society. Not surprisingly, these disparities are increasingly reflected * The SERU Project is a collaborative study based at the Center for Studies in Higher Education at UC Berkeley and focused on developing new types of data and innovative policy relevant scholarly analyses on the academic and civic experience of students at major research universities. For further information on the project, see http://cshe.berkeley.edu/research/seru/ ** John Aubrey Douglass is Senior Research Fellow – Public Policy and Higher Education at the Center for Studies in Higher Education at UC Berkeley and coPI of the SERU Project; Gregg Thomson is Director of the Office of Student Research at UC Berkeley and a co-PI of the SERU Project. We would like to thank David Radwin at OSR and a SERU Project Research Associate for his collaboration with data analysis. 
Douglass and Thomson: Poor and Rich 2 CSHE Research & Occasional Paper Series in the enrollment of students in the nation’s cadre of highly selective, elite private universities, and increasingly among public universities. Particularly over the past three decades, “brand name” prestige private universities and colleges have moved to a high tuition fee and high financial aid model, with the concept that a significant portion of generated tuition revenue can be redirected toward financial aid for either low-income or merit-based scholarships. With rising costs, declining subsidization by state governments, and the shift of federal financial aid toward loans versus grants in aid, public universities are moving a low fee model toward what is best called a moderate fee and high financial aid model – a model that is essentially evolving. There is increasing evidence, however, that neither the private nor the evolving public tuition and financial aid model is working. Students from wealthy families congregate at the most prestigious private and public institutions, with significant variance depending on the state and region of the nation, reflecting the quality and composition of state systems of higher education. A 2004 study by Sandy Astin and Leticia Oseguera looked at a number of selective private and public universities and concluded that the number and percentage of low-income and middle-income families had declined while the number from wealthy families increased. “American higher education, in other words, is more socioeconomically stratified today than at any other time during the past three decades,” they note. One reason, they speculated, may be “the increasing competitiveness among prospective college students for admission to the country’s most selective colleges and universities” (Astin and Oseguera 2004). A more recent study by Danette Gerald and Kati Haycock (2006) looked at the socioeconomic status (SES) of undergraduate students at a selective group of fifty “premier” public universities and had a similar conclusion – but one more alarming because of the important historical mission of public universities to provide broad access, a formal mandate or social contract. Though more open to students from low-income families than their private counterparts, the premier publics had declined in the percentage of students with federally funded Pell Grants (federal grants to students generally with family incomes below $40,000 annually) when compared to other four-year public institutions in the nation. Ranging from $431 to a maximum of $4,731, Pell Grants, and the criteria for selection of recipients, has long served as a benchmark on SES access. Pell Grant students have, on average, a family income of only $19,300. On average, note Gerald and Haycock, the selected premier publics have some 22% of their enrolled undergraduates with Pell Grants; all public four-year institutions have some 31% with Pell Grants; private institutions have an average of around 14% (Gerald and Haycock 2006). But it is important to note that there are a great many dimensions in understanding equity and access among private and public higher education institutions (HEIs). For one, there is a need to disaggregate types of institutions, for example, private versus public, university versus community college. 
Public and private institutions, and particularly highly selective universities and colleges, tend to draw from different demographic pools, with public universities largely linked to the socioeconomic stratification of their home state. Second, there are the factors related to rising tuition and increasingly complicated and, one might argue, inadequate approaches to financial aid in the U.S. With the slow down in the US economy, the US Department of Education recently estimated that demand for Pell Grants was exceeded projected demand by some 800,000 students; total applications for the grant program are up 16 percent over the previous year. This will require an additional $6 billion to the Pell Grant’s current budget of $14 billion next year.1 Economic downturns tend to push demand up for access to higher education among the middle and lower class, although most profoundly at the community college level. This phenomenon plus continued growth in the nation’s population, and in particularly in states such as California, Texas and Florida, means an inadequate financial aid system, where the maximum Pell Grant award has remained largely the same for the last decade when adjusted for inflation, will be further eroded. But in light of the uncertainty in the economy and the lack of resolve at the federal level to support higher education, it is not clear the US government will fund the increased demand – it may cut the maximum award. And third, there are larger social trends, such as increased disparities in income and the erosion of public services, declines in the quality of many public schools, the stagnation and real declines for some socioeconomic groups in high school graduation rates; and the large increase in the number of part-time students, most of whom must work to stay financially solvent. Douglass and Thomson: Poor and Rich 3 CSHE Research & Occasional Paper Series This paper examines low-income, and upper income, student access to the University of California and how lowincome access compares with a group of elite privates (specifically Ivy League institutions) and selective publics. Using data from the University of California’s Undergraduate Experience Survey (UCUES) and institutional data, we discuss what makes UC similar and different in the SES and demographic mix of students. Because the maximum Pell Grant is under $5,000, the cost of tuition alone is higher in the publics, and much higher in our group of selective privates, the percentage and number of Pell Grant students at an institution provides evidence of its resolve, creativity, and financial commitment to admit and enroll working and middle-class students. We then analyze the undergraduate experience of our designation of poor students (defined for this analysis as Pell Grant recipients) and rich students (from high-income families, defined as those with household incomes above $125,000 and no need-based aid).2 While including other income groups, we use these contrasting categories of wealth to observe differences in the background of students, their choice of major, general levels of satisfaction, academic performance, and sense of belonging at the university. There is very little analytical work on the characteristics and percepti",
"title": ""
},
{
"docid": "5506207c5d11a464b1bca39d6092089e",
"text": "Scalp recorded event-related potentials were used to investigate the neural activity elicited by emotionally negative and emotionally neutral words during the performance of a recognition memory task. Behaviourally, the principal difference between the two word classes was that the false alarm rate for negative items was approximately double that for the neutral words. Correct recognition of neutral words was associated with three topographically distinct ERP memory 'old/new' effects: an early, bilateral, frontal effect which is hypothesised to reflect familiarity-driven recognition memory; a subsequent left parietally distributed effect thought to reflect recollection of the prior study episode; and a late onsetting, right-frontally distributed effect held to be a reflection of post-retrieval monitoring. The old/new effects elicited by negative words were qualitatively indistinguishable from those elicited by neutral items and, in the case of the early frontal effect, of equivalent magnitude also. However, the left parietal effect for negative words was smaller in magnitude and shorter in duration than that elicited by neutral words, whereas the right frontal effect was not evident in the ERPs to negative items. These differences between neutral and negative words in the magnitude of the left parietal and right frontal effects were largely attributable to the increased positivity of the ERPs elicited by new negative items relative to the new neutral items. Together, the behavioural and ERP findings add weight to the view that emotionally valenced words influence recognition memory primarily by virtue of their high levels of 'semantic cohesion', which leads to a tendency for 'false recollection' of unstudied items.",
"title": ""
},
{
"docid": "980dc3d4b01caac3bf56df039d5ca513",
"text": "In this paper, we study object detection using a large pool of unlabeled images and only a few labeled images per category, named \"few-example object detection\". The key challenge consists in generating trustworthy training samples as many as possible from the pool. Using few training examples as seeds, our method iterates between model training and high-confidence sample selection. In training, easy samples are generated first and, then the poorly initialized model undergoes improvement. As the model becomes more discriminative, challenging but reliable samples are selected. After that, another round of model improvement takes place. To further improve the precision and recall of the generated training samples, we embed multiple detection models in our framework, which has proven to outperform the single model baseline and the model ensemble method. Experiments on PASCAL VOC'07, MS COCO'14, and ILSVRC'13 indicate that by using as few as three or four samples selected for each category, our method produces very competitive results when compared to the state-of-the-art weakly-supervised approaches using a large number of image-level labels.",
"title": ""
},
{
"docid": "7346e00ebadc27c1656e381dbbe39dd0",
"text": "This paper introduces a probabilistic framework for k-shot image classification. The goal is to generalise from an initial large-scale classification task to a separate task comprising new classes and small numbers of examples. The new approach not only leverages the feature-based representation learned by a neural network from the initial task (representational transfer), but also information about the classes (concept transfer). The concept information is encapsulated in a probabilistic model for the final layer weights of the neural network which acts as a prior for probabilistic k-shot learning. We show that even a simple probabilistic model achieves state-ofthe-art on a standard k-shot learning dataset by a large margin. Moreover, it is able to accurately model uncertainty, leading to well calibrated classifiers, and is easily extensible and flexible, unlike many recent approaches to k-shot learning.",
"title": ""
},
{
"docid": "eaca5794d84a96f8c8e7807cf83c3f00",
"text": "Background Women represent 15% of practicing general surgeons. Gender-based discrimination has been implicated as discouraging women from surgery. We sought to determine women's perceptions of gender-based discrimination in the surgical training and working environment. Methods Following IRB approval, we fielded a pilot survey measuring perceptions and impact of gender-based discrimination in medical school, residency training, and surgical practice. It was sent electronically to 1,065 individual members of the Association of Women Surgeons. Results We received 334 responses from medical students, residents, and practicing physicians with a response rate of 31%. Eighty-seven percent experienced gender-based discrimination in medical school, 88% in residency, and 91% in practice. Perceived sources of gender-based discrimination included superiors, physician peers, clinical support staff, and patients, with 40% emanating from women and 60% from men. Conclusions The majority of responses indicated perceived gender-based discrimination during medical school, residency, and practice. Gender-based discrimination comes from both sexes and has a significant impact on women surgeons.",
"title": ""
},
{
"docid": "1682c1be8397a4d8e859e76cdc849740",
"text": "With the advent of RFLPs, genetic linkage maps are now being assembled for a number of organisms including both inbred experimental populations such as maize and outbred natural populations such as humans. Accurate construction of such genetic maps requires multipoint linkage analysis of particular types of pedigrees. We describe here a computer package, called MAPMAKER, designed specifically for this purpose. The program uses an efficient algorithm that allows simultaneous multipoint analysis of any number of loci. MAPMAKER also includes an interactive command language that makes it easy for a geneticist to explore linkage data. MAPMAKER has been applied to the construction of linkage maps in a number of organisms, including the human and several plants, and we outline the mapping strategies that have been used.",
"title": ""
},
{
"docid": "7cd655bbea3b088618a196382b33ed1e",
"text": "Story generation is a well-recognized task in computational creativity research, but one that can be difficult to evaluate empirically. It is often inefficient and costly to rely solely on human feedback for judging the quality of generated stories. We address this by examining the use of linguistic analyses for automated evaluation, using metrics from existing work on predicting writing quality. We apply these metrics specifically to story continuation, where a model is given the beginning of a story and generates the next sentence, which is useful for systems that interactively support authors’ creativity in writing. We compare sentences generated by different existing models to human-authored ones according to the analyses. The results show some meaningful differences between the models, suggesting that this evaluation approach may be advantageous for future research.",
"title": ""
},
{
"docid": "e50ba614fc997f058f8d495b59c18af5",
"text": "We propose a model of natural language inference which identifies valid inferences by their lexical and syntactic features, without full semantic interpretation. We extend past work in natural logic, which has focused on semantic containment and monotonicity, by incorporating both semantic exclusion and implicativity. Our model decomposes an inference problem into a sequence of atomic edits linking premise to hypothesis; predicts a lexical semantic relation for each edit; propagates these relations upward through a semantic composition tree according to properties of intermediate nodes; and joins the resulting semantic relations across the edit sequence. A computational implementation of the model achieves 70% accuracy and 89% precision on the FraCaS test suite. Moreover, including this model as a component in an existing system yields significant performance gains on the Recognizing Textual Entailment challenge.",
"title": ""
},
{
"docid": "8fd762096225ed2474ed740835f5268d",
"text": "In recent years, we have witnessed a huge diffusion of building information modeling (BIM) approaches in the field of architectural design, although very little research has been undertaken to explore the value, criticalities, and advantages attributable to the application of these methodologies in the cultural heritage domain. Furthermore, the last developments in digital photogrammetry lead to the easy generation of reliable low-cost three-dimensional textured models that could be used in BIM platforms to create semanticaware objects that could compose a specific library of historical architectural elements. In this case, the transfer between the point cloud and its corresponding parametric model is not so trivial and the level of geometrical abstraction could not be suitable with the scope of the BIM. The aim of this paper is to explore and retrace the milestone works on this crucial topic in order to identify the unsolved issues and to propose and test a unique and simple workflow practitioner centered and based on the use of the latest available solutions for point cloud managing into commercial BIM platforms. © 2016 SPIE and IS&T [DOI: 10.1117/1.JEI.26.1.011007]",
"title": ""
},
{
"docid": "293cdb11d0701f9bd2ccfe82bc457ab8",
"text": "Modern neural network models have achieved the state-ofthe-art performance on relation extraction (RE) tasks. Although distant supervision (DS) can automatically generate training labels for RE, the effectiveness of DS highly depends on datasets and relation types, and sometimes it may introduce large labeling noises. In this paper, we propose a deep pattern diagnosis framework, DIAG-NRE, that aims to diagnose and improve neural relation extraction (NRE) models trained on DS-generated data. DIAG-NRE includes three stages: (1) The deep pattern extraction stage employs reinforcement learning to extract regular-expression-style patterns from NRE models. (2) The pattern refinement stage builds a pattern hierarchy to find the most representative patterns and lets human reviewers evaluate them quantitatively by annotating a certain number of pattern-matched examples. In this way, we minimize both the number of labels to annotate and the difficulty of writing heuristic patterns. (3) The weak label fusion stage fuses multiple weak label sources, including DS and refined patterns, to produce noise-reduced labels that can train a better NRE model. To demonstrate the broad applicability of DIAG-NRE, we use it to diagnose 14 relation types of two public datasets with one simple hyperparameter configuration. We observe different noise behaviors and obtain significant F1 improvements on all relation types suffering from large labeling noises.",
"title": ""
},
{
"docid": "357e09114978fc0ac1fb5838b700e6ca",
"text": "Instance level video object segmentation is an important technique for video editing and compression. To capture the temporal coherence, in this paper, we develop MaskRNN, a recurrent neural net approach which fuses in each frame the output of two deep nets for each object instance — a binary segmentation net providing a mask and a localization net providing a bounding box. Due to the recurrent component and the localization component, our method is able to take advantage of long-term temporal structures of the video data as well as rejecting outliers. We validate the proposed algorithm on three challenging benchmark datasets, the DAVIS-2016 dataset, the DAVIS-2017 dataset, and the Segtrack v2 dataset, achieving state-of-the-art performance on all of them.",
"title": ""
},
{
"docid": "a6a98d0599c1339c1f2c6a6c7525b843",
"text": "We consider a generalized version of the Steiner problem in graphs, motivated by the wire routing phase in physical VLSI design: given a connected, undirected distance graph with required classes of vertices and Steiner vertices, find a shortest connected subgraph containing at least one vertex of each required class. We show that this problem is NP-hard, even if there are no Steiner vertices and the graph is a tree. Moreover, the same complexity result holds if the input class Steiner graph additionally is embedded in a unit grid, if each vertex has degree at most three, and each class consists of no more than three vertices. For similar restricted versions, we prove MAX SNP-hardness and we show that there exists no polynomial-time approximation algorithm with a constant bound on the relative error, unless P = NP. We propose two efficient heuristics computing different approximate solutions in time 0(/E] + /VI log IV]) and in time O(c(lEl + IV1 log (VI)), respectively, where E is the set of edges in the given graph, V is the set of vertices, and c is the number of classes. We present some promising implementation results.",
"title": ""
}
] | scidocsrr |
d44ed5c436ff5cec861c3e49d122fab2 | Design space exploration of FPGA accelerators for convolutional neural networks | [
{
"docid": "5c8c391a10f32069849d743abc5e8210",
"text": "We present a massively parallel coprocessor for accelerating Convolutional Neural Networks (CNNs), a class of important machine learning algorithms. The coprocessor functional units, consisting of parallel 2D convolution primitives and programmable units performing sub-sampling and non-linear functions specific to CNNs, implement a “meta-operator” to which a CNN may be compiled to. The coprocessor is serviced by distributed off-chip memory banks with large data bandwidth. As a key feature, we use low precision data and further increase the effective memory bandwidth by packing multiple words in every memory operation, and leverage the algorithm’s simple data access patterns to use off-chip memory as a scratchpad for intermediate data, critical for CNNs. A CNN is mapped to the coprocessor hardware primitives with instructions to transfer data between the memory and coprocessor. We have implemented a prototype of the CNN coprocessor on an off-the-shelf PCI FPGA card with a single Xilinx Virtex5 LX330T FPGA and 4 DDR2 memory banks totaling 1GB. The coprocessor prototype can process at the rate of 3.4 billion multiply accumulates per second (GMACs) for CNN forward propagation, a speed that is 31x faster than a software implementation on a 2.2 GHz AMD Opteron processor. For a complete face recognition application with the CNN on the coprocessor and the rest of the image processing tasks on the host, the prototype is 6-10x faster, depending on the host-coprocessor bandwidth.",
"title": ""
}
] | [
{
"docid": "0939a703cb2eeb9396c4e681f95e1e4d",
"text": "Learning-based methods for visual segmentation have made progress on particular types of segmentation tasks, but are limited by the necessary supervision, the narrow definitions of fixed tasks, and the lack of control during inference for correcting errors. To remedy the rigidity and annotation burden of standard approaches, we address the problem of few-shot segmentation: given few image and few pixel supervision, segment any images accordingly. We propose guided networks, which extract a latent task representation from any amount of supervision, and optimize our architecture end-to-end for fast, accurate few-shot segmentation. Our method can switch tasks without further optimization and quickly update when given more guidance. We report the first results for segmentation from one pixel per concept and show real-time interactive video segmentation. Our unified approach propagates pixel annotations across space for interactive segmentation, across time for video segmentation, and across scenes for semantic segmentation. Our guided segmentor is state-of-the-art in accuracy for the amount of annotation and time. See http://github.com/shelhamer/revolver for code, models, and more details.",
"title": ""
},
{
"docid": "8f29a231b801a018a6d18befc0d06d0b",
"text": "The paper introduces a deep learningbased Twitter hate-speech text classification system. The classifier assigns each tweet to one of four predefined categories: racism, sexism, both (racism and sexism) and non-hate-speech. Four Convolutional Neural Network models were trained on resp. character 4-grams, word vectors based on semantic information built using word2vec, randomly generated word vectors, and word vectors combined with character n-grams. The feature set was down-sized in the networks by maxpooling, and a softmax function used to classify tweets. Tested by 10-fold crossvalidation, the model based on word2vec embeddings performed best, with higher precision than recall, and a 78.3% F-score.",
"title": ""
},
{
"docid": "9b60816097ccdff7b1eec177aac0b9b8",
"text": "We introduce a neural network that represents sentences by composing their words according to induced binary parse trees. We use Tree-LSTM as our composition function, applied along a tree structure found by a fully differentiable natural language chart parser. Our model simultaneously optimises both the composition function and the parser, thus eliminating the need for externally-provided parse trees which are normally required for Tree-LSTM. It can therefore be seen as a tree-based RNN that is unsupervised with respect to the parse trees. As it is fully differentiable, our model is easily trained with an off-the-shelf gradient descent method and backpropagation. We demonstrate that it achieves better performance compared to various supervised Tree-LSTM architectures on a textual entailment task and a reverse dictionary task.",
"title": ""
},
{
"docid": "2e812c0a44832721fcbd7272f9f6a465",
"text": "Previous research has shown that people differ in their implicit theories about the essential characteristics of intelligence and emotions. Some people believe these characteristics to be predetermined and immutable (entity theorists), whereas others believe that these characteristics can be changed through learning and behavior training (incremental theorists). The present study provides evidence that in healthy adults (N = 688), implicit beliefs about emotions and emotional intelligence (EI) may influence performance on the ability-based Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT). Adults in our sample with incremental theories about emotions and EI scored higher on the MSCEIT than entity theorists, with implicit theories about EI showing a stronger relationship to scores than theories about emotions. Although our participants perceived both emotion and EI as malleable, they viewed emotions as more malleable than EI. Women and young adults in general were more likely to be incremental theorists than men and older adults. Furthermore, we found that emotion and EI theories mediated the relationship of gender and age with ability EI. Our findings suggest that people's implicit theories about EI may influence their emotional abilities, which may have important consequences for personal and professional EI training.",
"title": ""
},
{
"docid": "5ea42460dc2bdd2ebc2037e35e01dca9",
"text": "Mobile edge clouds (MECs) are small cloud-like infrastructures deployed in close proximity to users, allowing users to have seamless and low-latency access to cloud services. When users move across different locations, their service applications often need to be migrated to follow the user so that the benefit of MEC is maintained. In this paper, we propose a layered framework for migrating running applications that are encapsulated either in virtual machines (VMs) or containers. We evaluate the migration performance of various real applications under the proposed framework.",
"title": ""
},
{
"docid": "a9052b10f9750d58eb33b9e5d564ee6e",
"text": "Cyber Physical Systems (CPS) play significant role in shaping smart manufacturing systems. CPS integrate computation with physical processes where behaviors are represented in both cyber and physical parts of the system. In order to understand CPS in the context of smart manufacturing, an overview of CPS technologies, components, and relevant standards is presented. A detailed technical review of the existing engineering tools and practices from major control vendors has been conducted. Furthermore, potential research areas have been identified in order to enhance the tools functionalities and capabilities in supporting CPS development process.",
"title": ""
},
{
"docid": "a8f27679e13572d00d5eae3496cec014",
"text": "Today, we are forward to meeting an older people society in the world. The elderly people have become a high risk of dementia or depression. In recent years, with the rapid development of internet of things (IoT) techniques, it has become a feasible solution to build a system that combines IoT and cloud techniques for detecting and preventing the elderly dementia or depression. This paper proposes an IoT-based elderly behavioral difference warning system for early depression and dementia warning. The proposed system is composed of wearable smart glasses, a BLE-based indoor trilateration position, and a cloud-based service platform. As a result, the proposed system can not only reduce human and medical costs, but also improve the cure rate of depression or delay the deterioration of dementia.",
"title": ""
},
{
"docid": "2e4ac47cdc063d76089c17f30a379765",
"text": "Determination of the type and origin of the body fluids found at a crime scene can give important insights into crime scene reconstruction by supporting a link between sample donors and actual criminal acts. For more than a century, numerous types of body fluid identification methods have been developed, such as chemical tests, immunological tests, protein catalytic activity tests, spectroscopic methods and microscopy. However, these conventional body fluid identification methods are mostly presumptive, and are carried out for only one body fluid at a time. Therefore, the use of a molecular genetics-based approach using RNA profiling or DNA methylation detection has been recently proposed to supplant conventional body fluid identification methods. Several RNA markers and tDMRs (tissue-specific differentially methylated regions) which are specific to forensically relevant body fluids have been identified, and their specificities and sensitivities have been tested using various samples. In this review, we provide an overview of the present knowledge and the most recent developments in forensic body fluid identification and discuss its possible practical application to forensic casework.",
"title": ""
},
{
"docid": "05b4df16c35a89ee2a5b9ac482e0a297",
"text": "Intensity-based classification of MR images has proven problematic, even when advanced techniques are used. Intrascan and interscan intensity inhomogeneities are a common source of difficulty. While reported methods have had some success in correcting intrascan inhomogeneities, such methods require supervision for the individual scan. This paper describes a new method called adaptive segmentation that uses knowledge of tissue intensity properties and intensity inhomogeneities to correct and segment MR images. Use of the expectation-maximization (EM) algorithm leads to a method that allows for more accurate segmentation of tissue types as well as better visualization of magnetic resonance imaging (MRI) data, that has proven to be effective in a study that includes more than 1000 brain scans. Implementation and results are described for segmenting the brain in the following types of images: axial (dual-echo spin-echo), coronal [three dimensional Fourier transform (3-DFT) gradient-echo T1-weighted] all using a conventional head coil, and a sagittal section acquired using a surface coil. The accuracy of adaptive segmentation was found to be comparable with manual segmentation, and closer to manual segmentation than supervised multivariant classification while segmenting gray and white matter.",
"title": ""
},
{
"docid": "e2c9c7c26436f0f7ef0067660b5f10b8",
"text": "The naive Bayesian classifier (NBC) is a simple yet very efficient classification technique in machine learning. But the unpractical condition independence assumption of NBC greatly degrades its performance. There are two primary ways to improve NBC's performance. One is to relax the condition independence assumption in NBC. This method improves NBC's accuracy by searching additional condition dependencies among attributes of the samples in a scope. It usually involves in very complex search algorithms. Another is to change the representation of the samples by creating new attributes from the original attributes, and construct NBC from these new attributes while keeping the condition independence assumption. Key problem of this method is to guarantee strong condition independencies among the new attributes. In the paper, a new means of making attribute set, which maps the original attributes to new attributes according to the information geometry and Fisher score, is presented, and then the FS-NBC on the new attributes is constructed. The condition dependence relation among the new attributes theoretically is discussed. We prove that these new attributes are condition independent of each other under certain conditions. The experimental results show that our method improves performance of NBC excellently",
"title": ""
},
{
"docid": "4816f221d67922009a308058139aa56b",
"text": "In this paper we study quantum computation from a complexity theoretic viewpoint. Our first result is the existence of an efficient universal quantum Turing machine in Deutsch’s model of a quantum Turing machine (QTM) [Proc. Roy. Soc. London Ser. A, 400 (1985), pp. 97–117]. This construction is substantially more complicated than the corresponding construction for classical Turing machines (TMs); in fact, even simple primitives such as looping, branching, and composition are not straightforward in the context of quantum Turing machines. We establish how these familiar primitives can be implemented and introduce some new, purely quantum mechanical primitives, such as changing the computational basis and carrying out an arbitrary unitary transformation of polynomially bounded dimension. We also consider the precision to which the transition amplitudes of a quantum Turing machine need to be specified. We prove that O(log T ) bits of precision suffice to support a T step computation. This justifies the claim that the quantum Turing machine model should be regarded as a discrete model of computation and not an analog one. We give the first formal evidence that quantum Turing machines violate the modern (complexity theoretic) formulation of the Church–Turing thesis. We show the existence of a problem, relative to an oracle, that can be solved in polynomial time on a quantum Turing machine, but requires superpolynomial time on a bounded-error probabilistic Turing machine, and thus not in the class BPP. The class BQP of languages that are efficiently decidable (with small error-probability) on a quantum Turing machine satisfies BPP ⊆ BQP ⊆ P. Therefore, there is no possibility of giving a mathematical proof that quantum Turing machines are more powerful than classical probabilistic Turing machines (in the unrelativized setting) unless there is a major breakthrough in complexity theory.",
"title": ""
},
{
"docid": "a0d34b1c003b7e88c2871deaaba761ed",
"text": "Sentence simplification aims to make sentences easier to read and understand. Most recent approaches draw on insights from machine translation to learn simplification rewrites from monolingual corpora of complex and simple sentences. We address the simplification problem with an encoder-decoder model coupled with a deep reinforcement learning framework. Our model, which we call DRESS (as shorthand for Deep REinforcement Sentence Simplification), explores the space of possible simplifications while learning to optimize a reward function that encourages outputs which are simple, fluent, and preserve the meaning of the input. Experiments on three datasets demonstrate that our model outperforms competitive simplification systems.1",
"title": ""
},
{
"docid": "df1ea45a4b20042abd99418ff6d1f44e",
"text": "This paper combines wavelet transforms with basic detection theory to develop a new unsupervised method for robustly detecting and localizing spikes in noisy neural recordings. The method does not require the construction of templates, or the supervised setting of thresholds. We present extensive Monte Carlo simulations, based on actual extracellular recordings, to show that this technique surpasses other commonly used methods in a wide variety of recording conditions. We further demonstrate that falsely detected spikes corresponding to our method resemble actual spikes more than the false positives of other techniques such as amplitude thresholding. Moreover, the simplicity of the method allows for nearly real-time execution.",
"title": ""
},
{
"docid": "da816b4a0aea96feceefe22a67c45be4",
"text": "Representation and learning of commonsense knowledge is one of the foundational problems in the quest to enable deep language understanding. This issue is particularly challenging for understanding casual and correlational relationships between events. While this topic has received a lot of interest in the NLP community, research has been hindered by the lack of a proper evaluation framework. This paper attempts to address this problem with a new framework for evaluating story understanding and script learning: the ‘Story Cloze Test’. This test requires a system to choose the correct ending to a four-sentence story. We created a new corpus of 50k five-sentence commonsense stories, ROCStories, to enable this evaluation. This corpus is unique in two ways: (1) it captures a rich set of causal and temporal commonsense relations between daily events, and (2) it is a high quality collection of everyday life stories that can also be used for story generation. Experimental evaluation shows that a host of baselines and state-of-the-art models based on shallow language understanding struggle to achieve a high score on the Story Cloze Test. We discuss these implications for script and story learning, and offer suggestions for deeper language understanding.",
"title": ""
},
{
"docid": "3e727d70f141f52fb9c432afa3747ceb",
"text": "In this paper, we propose an improvement of Adversarial Transformation Networks(ATN) [1]to generate adversarial examples, which can fool white-box models and blackbox models with a state of the art performance and won the SECOND place in the non-target task in CAAD 2018. In this section, we first introduce the whole architecture about our method, then we present our improvement on loss functions to generate adversarial examples satisfying the L∞ norm restriction in the non-targeted attack problem. Then we illustrate how to use a robust-enhance module to make our adversarial examples more robust and have better transfer-ability. At last we will show our method on how to attack an ensemble of models.",
"title": ""
},
{
"docid": "a0d1d59fc987d90e500b3963ac11b2ad",
"text": "The purpose of this paper is to present the applicability of THOMAS, an architecture specially designed to model agent-based virtual organizations, in the development of a multiagent system for managing and planning routes for clients in a mall. In order to build virtual organizations, THOMAS offers mechanisms to take into account their structure, behaviour, dynamic, norms and environment. Moreover, one of the primary characteristics of the THOMAS architecture is the use of agents with reasoning and planning capabilities. These agents can perform a dynamic reorganization when they detect changes in the environment. The proposed architecture is composed of a set of related modules that are appropriate for developing systems in highly volatile environments similar to the one presented in this study. This paper presents THOMAS as well as the results obtained after having applied the system to a case study. & 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "fd171b73ea88d9b862149e1c1d72aea8",
"text": "Localization of people and devices is one of the main building blocks of context aware systems since the user position represents the core information for detecting user's activities, devices activations, proximity to points of interest, etc. While for outdoor scenarios Global Positioning System (GPS) constitutes a reliable and easily available technology, for indoor scenarios GPS is largely unavailable. In this paper we present a range-based indoor localization system that exploits the Received Signal Strength (RSS) of Bluetooth Low Energy (BLE) beacon packets broadcast by anchor nodes and received by a BLE-enabled device. The method used to infer the user's position is based on stigmergy. We exploit the stigmergic marking process to create an on-line probability map identifying the user's position in the indoor environment.",
"title": ""
},
{
"docid": "b959bce5ea9db71d677586eb1b6f023e",
"text": "We consider autonomous racing of two cars and present an approach to formulate the decision making as a non-cooperative non-zero-sum game. The game is formulated by restricting both players to fulfill static track constraints as well as collision constraints which depend on the combined actions of the two players. At the same time the players try to maximize their own progress. In the case where the action space of the players is finite, the racing game can be reformulated as a bimatrix game. For this bimatrix game, we show that the actions obtained by a sequential maximization approach where only the follower considers the action of the leader are identical to a Stackelberg and a Nash equilibrium in pure strategies. Furthermore, we propose a game promoting blocking, by additionally rewarding the leading car for staying ahead at the end of the horizon. We show that this changes the Stackelberg equilibrium, but has a minor influence on the Nash equilibria. For an online implementation, we propose to play the games in a moving horizon fashion, and we present two methods for guaranteeing feasibility of the resulting coupled repeated games. Finally, we study the performance of the proposed approaches in simulation for a set-up that replicates the miniature race car tested at the Automatic Control Laboratory of ETH Zürich. The simulation study shows that the presented games can successfully model different racing behaviors and generate interesting racing situations.",
"title": ""
},
{
"docid": "516ef94fad7f7e5801bf1ef637ffb136",
"text": "With parallelizable attention networks, the neural Transformer is very fast to train. However, due to the auto-regressive architecture and self-attention in the decoder, the decoding procedure becomes slow. To alleviate this issue, we propose an average attention network as an alternative to the self-attention network in the decoder of the neural Transformer. The average attention network consists of two layers, with an average layer that models dependencies on previous positions and a gating layer that is stacked over the average layer to enhance the expressiveness of the proposed attention network. We apply this network on the decoder part of the neural Transformer to replace the original target-side self-attention model. With masking tricks and dynamic programming, our model enables the neural Transformer to decode sentences over four times faster than its original version with almost no loss in training time and translation performance. We conduct a series of experiments on WMT17 translation tasks, where on 6 different language pairs, we obtain robust and consistent speed-ups in decoding.1",
"title": ""
},
{
"docid": "bed29a89354c1dfcebbdde38d1addd1d",
"text": "Eosinophilic skin diseases, commonly termed as eosinophilic dermatoses, refer to a broad spectrum of skin diseases characterized by eosinophil infiltration and/or degranulation in skin lesions, with or without blood eosinophilia. The majority of eosinophilic dermatoses lie in the allergy-related group, including allergic drug eruption, urticaria, allergic contact dermatitis, atopic dermatitis, and eczema. Parasitic infestations, arthropod bites, and autoimmune blistering skin diseases such as bullous pemphigoid, are also common. Besides these, there are several rare types of eosinophilic dermatoses with unknown origin, in which eosinophil infiltration is a central component and affects specific tissue layers or adnexal structures of the skin, such as the dermis, subcutaneous fat, fascia, follicles, and cutaneous vessels. Some typical examples are eosinophilic cellulitis, granuloma faciale, eosinophilic pustular folliculitis, recurrent cutaneous eosinophilic vasculitis, and eosinophilic fasciitis. Although tissue eosinophilia is a common feature shared by these disorders, their clinical and pathological properties differ dramatically. Among these rare entities, eosinophilic pustular folliculitis may be associated with human immunodeficiency virus (HIV) infection or malignancies, and some other diseases, like eosinophilic fasciitis and eosinophilic cellulitis, may be associated with an underlying hematological disorder, while others are considered idiopathic. However, for most of these rare eosinophilic dermatoses, the causes and the pathogenic mechanisms remain largely unknown, and systemic, high-quality clinical investigations are needed for advances in better strategies for clinical diagnosis and treatment. Here, we present a comprehensive review on the etiology, pathogenesis, clinical features, and management of these rare entities, with an emphasis on recent advances and current consensus.",
"title": ""
}
] | scidocsrr |
4e97169528430631823341734e2375ec | Rich Image Captioning in the Wild | [
{
"docid": "6a1e614288a7977b72c8037d9d7725fb",
"text": "We introduce the dense captioning task, which requires a computer vision system to both localize and describe salient regions in images in natural language. The dense captioning task generalizes object detection when the descriptions consist of a single word, and Image Captioning when one predicted region covers the full image. To address the localization and description task jointly we propose a Fully Convolutional Localization Network (FCLN) architecture that processes an image with a single, efficient forward pass, requires no external regions proposals, and can be trained end-to-end with a single round of optimization. The architecture is composed of a Convolutional Network, a novel dense localization layer, and Recurrent Neural Network language model that generates the label sequences. We evaluate our network on the Visual Genome dataset, which comprises 94,000 images and 4,100,000 region-grounded captions. We observe both speed and accuracy improvements over baselines based on current state of the art approaches in both generation and retrieval settings.",
"title": ""
},
{
"docid": "30260d1a4a936c79e6911e1e91c3a84a",
"text": "Two recent approaches have achieved state-of-the-art results in image captioning. The first uses a pipelined process where a set of candidate words is generated by a convolutional neural network (CNN) trained on images, and then a maximum entropy (ME) language model is used to arrange these words into a coherent sentence. The second uses the penultimate activation layer of the CNN as input to a recurrent neural network (RNN) that then generates the caption sequence. In this paper, we compare the merits of these different language modeling approaches for the first time by using the same state-ofthe-art CNN as input. We examine issues in the different approaches, including linguistic irregularities, caption repetition, and data set overlap. By combining key aspects of the ME and RNN methods, we achieve a new record performance over previously published results on the benchmark COCO dataset. However, the gains we see in BLEU do not translate to human judgments.",
"title": ""
}
] | [
{
"docid": "3a7a7fa5e41a6195ca16f172b72f89a1",
"text": "To integrate unpredictable human behavior in the assessment of active and passive pedestrian safety systems, we introduce a virtual reality (VR)-based pedestrian simulation system. The device uses the Xsens Motion Capture platform and can be used without additional infrastructure. To show the systems applicability for pedestrian behavior studies, we conducted a pilot study evaluating the degree of realism such a system can achieve in a typical unregulated pedestrian crossing scenario. Six participants had to estimate vehicle speeds and distances in four scenarios with varying gaps between vehicles. First results indicate an acceptable level of realism so that the device can be used for further user studies addressing pedestrian behavior, pedestrian interaction with (automated) vehicles, risk assessment and investigation of the pre-crash phase without the risk of injuries.",
"title": ""
},
{
"docid": "88cf953ba92b54f89cdecebd4153bee3",
"text": "In this paper, we propose a novel object detection framework named \"Deep Regionlets\" by establishing a bridge between deep neural networks and conventional detection schema for accurate generic object detection. Motivated by the abilities of regionlets for modeling object deformation and multiple aspect ratios, we incorporate regionlets into an end-to-end trainable deep learning framework. The deep regionlets framework consists of a region selection network and a deep regionlet learning module. Specifically, given a detection bounding box proposal, the region selection network provides guidance on where to select regions to learn the features from. The regionlet learning module focuses on local feature selection and transformation to alleviate local variations. To this end, we first realize non-rectangular region selection within the detection framework to accommodate variations in object appearance. Moreover, we design a “gating network\" within the regionlet leaning module to enable soft regionlet selection and pooling. The Deep Regionlets framework is trained end-to-end without additional efforts. We perform ablation studies and conduct extensive experiments on the PASCAL VOC and Microsoft COCO datasets. The proposed framework outperforms state-of-theart algorithms, such as RetinaNet and Mask R-CNN, even without additional segmentation labels.",
"title": ""
},
{
"docid": "b82c7c8f36ea16c29dfc5fa00a58b229",
"text": "Green cloud computing has become a major concern in both industry and academia, and efficient scheduling approaches show promising ways to reduce the energy consumption of cloud computing platforms while guaranteeing QoS requirements of tasks. Existing scheduling approaches are inadequate for realtime tasks running in uncertain cloud environments, because those approaches assume that cloud computing environments are deterministic and pre-computed schedule decisions will be statically followed during schedule execution. In this paper, we address this issue. We introduce an interval number theory to describe the uncertainty of the computing environment and a scheduling architecture to mitigate the impact of uncertainty on the task scheduling quality for a cloud data center. Based on this architecture, we present a novel scheduling algorithm (PRS) that dynamically exploits proactive and reactive scheduling methods, for scheduling real-time, aperiodic, independent tasks. To improve energy efficiency, we propose three strategies to scale up and down the system’s computing resources according to workload to improve resource utilization and to reduce energy consumption for the cloud data center. We conduct extensive experiments to compare PRS with four typical baseline scheduling algorithms. The experimental results show that PRS performs better than those algorithms, and can effectively improve the performance of a cloud data center.",
"title": ""
},
{
"docid": "215bb5273dbf5c301ae4170b5da39a34",
"text": "We describe a simple but effective method for cross-lingual syntactic transfer of dependency parsers, in the scenario where a large amount of translation data is not available. This method makes use of three steps: 1) a method for deriving cross-lingual word clusters, which can then be used in a multilingual parser; 2) a method for transferring lexical information from a target language to source language treebanks; 3) a method for integrating these steps with the density-driven annotation projection method of Rasooli and Collins (2015). Experiments show improvements over the state-of-the-art in several languages used in previous work, in a setting where the only source of translation data is the Bible, a considerably smaller corpus than the Europarl corpus used in previous work. Results using the Europarl corpus as a source of translation data show additional improvements over the results of Rasooli and Collins (2015). We conclude with results on 38 datasets from the Universal Dependencies corpora.",
"title": ""
},
{
"docid": "e2606242fcc89bfcf5c9c4cd71dd2c18",
"text": "This letter introduces the class of generalized punctured convolutional codes (GPCCs), which is broader than and encompasses the class of the standard punctured convolutional codes (PCCs). A code in this class can be represented by a trellis module, the GPCC trellis module, whose topology resembles that of the minimal trellis module. he GPCC trellis module for a PCC is isomorphic to the minimal trellis module. A list containing GPCCs with better distance spectrum than the best known PCCs with same code rate and trellis complexity is presented.",
"title": ""
},
{
"docid": "316e4fa32d0b000e6f833d146a9e0d80",
"text": "Magnetic equivalent circuits (MECs) are becoming an accepted alternative to electrical-equivalent lumped-parameter models and finite-element analysis (FEA) for simulating electromechanical devices. Their key advantages are moderate computational effort, reasonable accuracy, and flexibility in model size. MECs are easily extended into three dimensions. But despite the successful use of MEC as a modeling tool, a generalized 3-D formulation useable for a comprehensive computer-aided design tool has not yet emerged (unlike FEA, where general modeling tools are readily available). This paper discusses the framework of a 3-D MEC modeling approach, and presents the implementation of a variable-sized reluctance network distribution based on 3-D elements. Force calculation and modeling of moving objects are considered. Two experimental case studies, a soft-ferrite inductor and an induction machine, show promising results when compared to measurements and simulations of lumped parameter and FEA models.",
"title": ""
},
{
"docid": "b058bbc1485f99f37c0d72b960dd668b",
"text": "In two experiments short-term forgetting was investigated in a short-term cued recall task designed to examine proactive interference effects. Mixed modality study lists were tested at varying retention intervals using verbal and non-verbal distractor activities. When an interfering foil was read aloud and a target item read silently, strong PI effects were observed for both types of distractor activity. When the target was read aloud and followed by a verbal distractor activity, weak PI effects emerged. However, when a target item was read aloud and non-verbal distractor activity filled the retention interval, performance was immune to the effects of PI for at least eight seconds. The results indicate that phonological representations of items read aloud still influence performance after 15 seconds of distractor activity. Short-term Forgetting 3 Determinants of Short-term Forgetting: Decay, Retroactive Interference or Proactive Interference? Most current models of short-term memory assert that to-be-remembered items are represented in terms of easily degraded phonological representations. However, there is disagreement on how the traces become degraded. Some propose that trace degradation is due to decay brought about by the prevention of rehearsal (Baddeley, 1986; Burgess & Hitch, 1992; 1996), or a switch in attention (Cowan, 1993); others attribute degradation to retroactive interference (RI) from other list items (Nairne, 1990; Tehan & Fallon; in press; Tehan & Humphreys, 1998). We want to add proactive interference (PI) to the possible causes of short-term forgetting, and by showing how PI effects change as a function of the type of distractor task employed during a filled retention interval, we hope to evaluate the causes of trace degradation. By manipulating the type of distractor activity in a brief retention interval it is possible to test some of the assumptions about decay versus interference explanations of short-term forgetting. The decay position is quite straightforward. If rehearsal is prevented, then the trace should decay; the type of distractor activity should be immaterial as long as rehearsal is prevented. From the interference perspective both the Feature Model (Nairne, 1990) and the Tehan and Humphreys (1995,1998) connectionist model predict that there should be occasions where very little forgetting occurs. In the Feature Model items are represented as sets of modality dependent and modality independent features. Forgetting occurs when adjacent list items have common features. Some of the shared features of the first item are overwritten by the latter item, thereby producing a trace that bears only partial resemblance to the Short-term Forgetting 4 original item. One occasion in which interference would be minimized is when an auditory list is followed by a non-auditory distractor task. The modality dependent features of the list items would not be overwritten or degraded by the distractor activity because the modality dependent features of the list and distractor items are different to each other. By the same logic, a visually presented list should not be affected by an auditory distractor task, since modality specific features are again different in each case. In the Tehan and Humphreys (1995) approach, presentation modality is related to the strength of phonological representations that support recall. They assume that auditory activity produces stronger representations than does visual activity. 
Thus this model also predicts that when a list is presented auditorially, it will not be much affected by subsequent non-auditory distractor activity. However, in the case of a visual list with auditory distraction, the assumption would be that interference would be maximised. The phonological codes for the list items would be relatively weak in the first instance and a strong source of auditory retroactive interference follows. This prediction is the opposite of that derived from the Feature Model. Since PI effects appear to be sensitive to retention interval effects (Tehan & Humphreys, 1995; Wickens, Moody & Dow, 1981), we have chosen to employ a PI task to explore these differential predictions. We have recently developed a short-term cued recall task in which PI can easily be manipulated (Tehan & Humphreys, 1995; 1996; 1998). In this task, participants study a series of trials in which items are presented in blocks of four items with each trial consisting of either one or two blocks. Each trial has a target item that is an instance of either a taxonomic or rhyme category, and the category label is presented at test as a retrieval cue. The two-block trials are the important trials because it is in these trials that PI is manipulated. In these trials the two blocks are presented under directed forgetting instructions. That is, once participants find out that it is a two-block trial they are to forget the first block and remember the second block because the second block contains the target item. On control trials, all nontarget items in both blocks are unrelated to the target. On interference trials, a foil that is related to the target is embedded among three other to-be-forgotten fillers in the first block and the target is embedded among three unrelated filler items in the second block. Following the presentation of the second block the category cue is presented and subjects are asked to recall the word from the second block that is an instance of that category. Using this task we have been able to show that when taxonomic categories are used on an immediate test (e.g., dog is the foil, cat is the target and ANIMAL is the cue), performance is immune to PI. However, when recall is tested after a 2-second filled retention interval, PI effects are observed; target recall is depressed and the foil is often recalled instead of the target. In explaining these results, Tehan and Humphreys (1995) assumed that items were represented in terms of sets of features. The representation of an item was seen to involve both semantic and phonological features, with the phonological features playing a dominant role in item recall. They assumed that the cue would elicit the representations of the two items in the list, and that while the semantic features of both target and foil would be available, only the target would have active phonological features. Thus on an immediate test, knowing that the target ended in -at would make the task of discriminating between cat and dog relatively easy. On a delayed test they assumed that all phonological features were inactive and the absence of phonological information would make discrimination more difficult. A corollary of the Tehan and Humphreys (1995) assumption is that if phonological codes could be provided for a non-rhyming foil, then discrimination should again be problematic. 
Presentation modality is one variable that appears to produce differences in strength of phonological codes with reading aloud producing stronger representations than reading silently. Tehan and Humphreys (Experiment 5) varied the modality of the two blocks such that participants either read the first block silently and then read the second block aloud or vice versa. In the silent-aloud condition performance was immune to PI. The assumption was that the phonological representation of the target item in the second block was very strong with the result that there were no problems in discrimination. However, PI effects were present in the aloud-silent condition. The phonological representation of the read-aloud foil appeared to serve as a strong source of competition to the read-silently target item. All the above research has been based on the premise that phonological representations for visually presented items are weak and rapidly lose their ability to support recall. This assumption seems tenable given that phonological similarity effects and phonological intrusion effects in serial recall are attenuated rapidly with brief periods of distractor activity (Conrad, 1967; Estes, 1973; Tehan & Humphreys, 1995). The cued recall experiments that have used a filled retention interval have always employed silent visual presentation of the study list and required spoken shadowing of the distractor items. That is, the phonological representations of both target and foil are assumed to be quite weak and the shadowing task would provide a strong source of interference. These are likely to be the conditions that produce maximum levels of PI. The patterns of PI may change with mixed modality study lists and alternative forms of distractor activity. For example, given a strong phonological representation of the target, weak representations of the foil and a weak source of retroactive interference, it might be possible to observe immunity to PI on a delayed test. The following experiments explore the relationship between presentation modality, distractor modality and PI. Experiment 1. The Tehan and Humphreys (1995) mixed modality experiment indicated that PI effects were sensitive to the modalities of the first and second block of items. In the current study we use mixed modality study lists but this time include a two-second retention interval, the same as that used by Tehan and Humphreys. However, the modality of the distractor activity was varied as well. Participants either had to respond aloud verbally or make a manual response that did not involve any verbal output. From the Tehan and Humphreys perspective the assumption made is that the verbal distractor activity will produce more disruption to the phonological representation of the target item than will a non-verbal distractor activity and the PI will be observed. However, it is quite possible that with silent-aloud presentation and a non-verbal distractor activity immunity to PI might be maintained across a two-second retention interval. From the Nairne perspective, interfe",
"title": ""
},
{
"docid": "b1239f2e9bfec604ac2c9851c8785c09",
"text": "BACKGROUND\nDecoding neural activities associated with limb movements is the key of motor prosthesis control. So far, most of these studies have been based on invasive approaches. Nevertheless, a few researchers have decoded kinematic parameters of single hand in non-invasive ways such as magnetoencephalogram (MEG) and electroencephalogram (EEG). Regarding these EEG studies, center-out reaching tasks have been employed. Yet whether hand velocity can be decoded using EEG recorded during a self-routed drawing task is unclear.\n\n\nMETHODS\nHere we collected whole-scalp EEG data of five subjects during a sequential 4-directional drawing task, and employed spatial filtering algorithms to extract the amplitude and power features of EEG in multiple frequency bands. From these features, we reconstructed hand movement velocity by Kalman filtering and a smoothing algorithm.\n\n\nRESULTS\nThe average Pearson correlation coefficients between the measured and the decoded velocities are 0.37 for the horizontal dimension and 0.24 for the vertical dimension. The channels on motor, posterior parietal and occipital areas are most involved for the decoding of hand velocity. By comparing the decoding performance of the features from different frequency bands, we found that not only slow potentials in 0.1-4 Hz band but also oscillatory rhythms in 24-28 Hz band may carry the information of hand velocity.\n\n\nCONCLUSIONS\nThese results provide another support to neural control of motor prosthesis based on EEG signals and proper decoding methods.",
"title": ""
},
{
"docid": "1fb87bc370023dc3fdfd9c9097288e71",
"text": "Protein is essential for living organisms, but digestibility of crude protein is poorly understood and difficult to predict. Nitrogen is used to estimate protein content because nitrogen is a component of the amino acids that comprise protein, but a substantial portion of the nitrogen in plants may be bound to fiber in an indigestible form. To estimate the amount of crude protein that is unavailable in the diets of mountain gorillas (Gorilla beringei) in Bwindi Impenetrable National Park, Uganda, foods routinely eaten were analyzed to determine the amount of nitrogen bound to the acid-detergent fiber residue. The amount of fiber-bound nitrogen varied among plant parts: herbaceous leaves 14.5+/-8.9% (reported as a percentage of crude protein on a dry matter (DM) basis), tree leaves (16.1+/-6.7% DM), pith/herbaceous peel (26.2+/-8.9% DM), fruit (34.7+/-17.8% DM), bark (43.8+/-15.6% DM), and decaying wood (85.2+/-14.6% DM). When crude protein and available protein intake of adult gorillas was estimated over a year, 15.1% of the dietary crude protein was indigestible. These results indicate that the proportion of fiber-bound protein in primate diets should be considered when estimating protein intake, food selection, and food/habitat quality.",
"title": ""
},
{
"docid": "60e56a59ecbdee87005407ed6a117240",
"text": "The visionary Steve Jobs said, “A lot of times, people don’t know what they want until you show it to them.” A powerful recommender system not only shows people similar items, but also helps them discover what they might like, and items that complement what they already purchased. In this paper, we attempt to instill a sense of “intention” and “style” into our recommender system, i.e., we aim to recommend items that are visually complementary with those already consumed. By identifying items that are visually coherent with a query item/image, our method facilitates exploration of the long tail items, whose existence users may be even unaware of. This task is formulated only recently by Julian et al. [1], with the input being millions of item pairs that are frequently viewed/bought together, entailing noisy style coherence. In the same work, the authors proposed a Mahalanobisbased transform to discriminate a given pair to be sharing a same style or not. Despite its success, we experimentally found that it’s only able to recommend items on the margin of different clusters, which leads to limited coverage of the items to be recommended. Another limitation is it totally ignores the existence of taxonomy information that is ubiquitous in many datasets like Amazon the authors experimented with. In this report, we propose two novel methods that make use of the hierarchical category metadata to overcome the limitations identified above. The main contributions are listed as following.",
"title": ""
},
{
"docid": "0c420c064519e15e071660c750c0b7e3",
"text": "In this paper, we consider the feature ranking problem, where, given a set of training instances, the task is to associate a score with the features in order to assess their relevance. Feature ranking is a very important tool for decision support systems, and may be used as an auxiliary step of feature selection to reduce the high dimensionality of real-world data. We focus on regression problems by assuming that the process underlying the generated data can be approximated by a continuous function (for instance, a feedforward neural network). We formally state the notion of relevance of a feature by introducing a minimum zero-norm inversion problem of a neural network, which is a nonsmooth, constrained optimization problem. We employ a concave approximation of the zero-norm function, and we define a smooth, global optimization problem to be solved in order to assess the relevance of the features. We present the new feature ranking method based on the solution of instances of the global optimization problem depending on the available training data. Computational experiments on both artificial and real data sets are performed, and point out that the proposed feature ranking method is a valid alternative to existing methods in terms of effectiveness. The obtained results also show that the method is costly in terms of CPU time, and this may be a limitation in the solution of large-dimensional problems.",
"title": ""
},
{
"docid": "4ca7e1893c0ab71d46af4954f7daf58e",
"text": "Identifying coordinate transformations that make strongly nonlinear dynamics approximately linear has the potential to enable nonlinear prediction, estimation, and control using linear theory. The Koopman operator is a leading data-driven embedding, and its eigenfunctions provide intrinsic coordinates that globally linearize the dynamics. However, identifying and representing these eigenfunctions has proven challenging. This work leverages deep learning to discover representations of Koopman eigenfunctions from data. Our network is parsimonious and interpretable by construction, embedding the dynamics on a low-dimensional manifold. We identify nonlinear coordinates on which the dynamics are globally linear using a modified auto-encoder. We also generalize Koopman representations to include a ubiquitous class of systems with continuous spectra. Our framework parametrizes the continuous frequency using an auxiliary network, enabling a compact and efficient embedding, while connecting our models to decades of asymptotics. Thus, we benefit from the power of deep learning, while retaining the physical interpretability of Koopman embeddings. It is often advantageous to transform a strongly nonlinear system into a linear one in order to simplify its analysis for prediction and control. Here the authors combine dynamical systems with deep learning to identify these hard-to-find transformations.",
"title": ""
},
{
"docid": "eeff1f2e12e5fc5403be8c2d7ca4d10c",
"text": "Optical Character Recognition (OCR) systems have been effectively developed for the recognition of printed script. The accuracy of OCR system mainly depends on the text preprocessing and segmentation algorithm being used. When the document is scanned it can be placed in any arbitrary angle which would appear on the computer monitor at the same angle. This paper addresses the algorithm for correction of skew angle generated in scanning of the text document and a novel profile based method for segmentation of printed text which separates the text in document image into lines, words and characters. Keywords—Skew correction, Segmentation, Text preprocessing, Horizontal Profile, Vertical Profile.",
"title": ""
},
{
"docid": "ce8914e02eeed8fb228b5b2950cf87de",
"text": "Different alternatives to detect and diagnose faults in induction machines have been proposed and implemented in the last years. The technology of artificial neural networks has been successfully used to solve the motor incipient fault detection problem. The characteristics, obtained by this technique, distinguish them from the traditional ones, which, in most cases, need that the machine which is being analyzed is not working to do the diagnosis. This paper reviews an artificial neural network (ANN) based technique to identify rotor faults in a three-phase induction motor. The main types of faults considered are broken bar and dynamic eccentricity. At light load, it is difficult to distinguish between healthy and faulty rotors because the characteristic broken rotor bar fault frequencies are very close to the fundamental component and their amplitudes are small in comparison. As a result, detection of the fault and classification of the fault severity under light load is almost impossible. In order to overcome this problem, the detection of rotor faults in induction machines is done by analysing the starting current using a newly developed quantification technique based on artificial neural networks.",
"title": ""
},
{
"docid": "33b4ba89053ed849d23758f6e3b06b09",
"text": "We develop a deep architecture to learn to find good correspondences for wide-baseline stereo. Given a set of putative sparse matches and the camera intrinsics, we train our network in an end-to-end fashion to label the correspondences as inliers or outliers, while simultaneously using them to recover the relative pose, as encoded by the essential matrix. Our architecture is based on a multi-layer perceptron operating on pixel coordinates rather than directly on the image, and is thus simple and small. We introduce a novel normalization technique, called Context Normalization, which allows us to process each data point separately while embedding global information in it, and also makes the network invariant to the order of the correspondences. Our experiments on multiple challenging datasets demonstrate that our method is able to drastically improve the state of the art with little training data.",
"title": ""
},
{
"docid": "2aae53713324b297f0e145ef8d808ce9",
"text": "In this paper some theoretical and (potentially) practical aspects of quantum computing are considered. Using the tools of transcendental number theory it is demonstrated that quantum Turing machines (QTM) with rational amplitudes are sufficient to define the class of bounded error quantum polynomial time (BQP) introduced by Bernstein and Vazirani [Proc. 25th ACM Symposium on Theory of Computation, 1993, pp. 11–20, SIAM J. Comput., 26 (1997), pp. 1411–1473]. On the other hand, if quantum Turing machines are allowed unrestricted amplitudes (i.e., arbitrary complex amplitudes), then the corresponding BQP class has uncountable cardinality and contains sets of all Turing degrees. In contrast, allowing unrestricted amplitudes does not increase the power of computation for error-free quantum polynomial time (EQP). Moreover, with unrestricted amplitudes, BQP is not equal to EQP. The relationship between quantum complexity classes and classical complexity classes is also investigated. It is shown that when quantum Turing machines are restricted to have transition amplitudes which are algebraic numbers, BQP, EQP, and nondeterministic quantum polynomial time (NQP) are all contained in PP, hence in P#P and PSPACE. A potentially practical issue of designing “machine independent” quantum programs is also addressed. A single (“almost universal”) quantum algorithm based on Shor’s method for factoring integers is developed which would run correctly on almost all quantum computers, even if the underlying unitary transformations are unknown to the programmer and the device builder.",
"title": ""
},
{
"docid": "f617b8b5c2c5fc7829cbcd0b2e64ed2d",
"text": "This paper proposes a novel lifelong learning (LL) approach to sentiment classification. LL mimics the human continuous learning process, i.e., retaining the knowledge learned from past tasks and use it to help future learning. In this paper, we first discuss LL in general and then LL for sentiment classification in particular. The proposed LL approach adopts a Bayesian optimization framework based on stochastic gradient descent. Our experimental results show that the proposed method outperforms baseline methods significantly, which demonstrates that lifelong learning is a promising research direction.",
"title": ""
},
{
"docid": "925709dfe0d0946ca06d05b290f2b9bd",
"text": "Mentalization, operationalized as reflective functioning (RF), can play a crucial role in the psychological mechanisms underlying personality functioning. This study aimed to: (a) study the association between RF, personality disorders (cluster level) and functioning; (b) investigate whether RF and personality functioning are influenced by (secure vs. insecure) attachment; and (c) explore the potential mediating effect of RF on the relationship between attachment and personality functioning. The Shedler-Westen Assessment Procedure (SWAP-200) was used to assess personality disorders and levels of psychological functioning in a clinical sample (N = 88). Attachment and RF were evaluated with the Adult Attachment Interview (AAI) and Reflective Functioning Scale (RFS). Findings showed that RF had significant negative associations with cluster A and B personality disorders, and a significant positive association with psychological functioning. Moreover, levels of RF and personality functioning were influenced by attachment patterns. Finally, RF completely mediated the relationship between (secure/insecure) attachment and adaptive psychological features, and thus accounted for differences in overall personality functioning. Lack of mentalization seemed strongly associated with vulnerabilities in personality functioning, especially in patients with cluster A and B personality disorders. These findings provide support for the development of therapeutic interventions to improve patients' RF.",
"title": ""
},
{
"docid": "9a1d6be6fbce508e887ee4e06a932cd2",
"text": "For ranked search in encrypted cloud data, order preserving encryption (OPE) is an efficient tool to encrypt relevance scores of the inverted index. When using deterministic OPE, the ciphertexts will reveal the distribution of relevance scores. Therefore, Wang et al. proposed a probabilistic OPE, called one-to-many OPE, for applications of searchable encryption, which can flatten the distribution of the plaintexts. In this paper, we proposed a differential attack on one-to-many OPE by exploiting the differences of the ordered ciphertexts. The experimental results show that the cloud server can get a good estimate of the distribution of relevance scores by a differential attack. Furthermore, when having some background information on the outsourced documents, the cloud server can accurately infer the encrypted keywords using the estimated distributions.",
"title": ""
},
{
"docid": "460e8daf5dfc9e45c3ade5860aa9cc57",
"text": "Combining deep model-free reinforcement learning with on-line planning is a promising approach to building on the successes of deep RL. On-line planning with look-ahead trees has proven successful in environments where transition models are known a priori. However, in complex environments where transition models need to be learned from data, the deficiencies of learned models have limited their utility for planning. To address these challenges, we propose TreeQN, a differentiable, recursive, tree-structured model that serves as a drop-in replacement for any value function network in deep RL with discrete actions. TreeQN dynamically constructs a tree by recursively applying a transition model in a learned abstract state space and then aggregating predicted rewards and state-values using a tree backup to estimate Q-values. We also propose ATreeC, an actor-critic variant that augments TreeQN with a softmax layer to form a stochastic policy network. Both approaches are trained end-to-end, such that the learned model is optimised for its actual use in the planner. We show that TreeQN and ATreeC outperform n-step DQN and A2C on a box-pushing task, as well as n-step DQN and value prediction networks (Oh et al., 2017) on multiple Atari games, with deeper trees often outperforming shallower ones. We also present a qualitative analysis that sheds light on the trees learned by TreeQN.",
"title": ""
}
] | scidocsrr |
a7ce59adc981813107323821e694c2f8 | A Bistatic SAR Raw Data Simulator Based on Inverse $ \omega{-}k$ Algorithm | [
{
"docid": "b3e1bdd7cfca17782bde698297e191ab",
"text": "Synthetic aperture radar (SAR) raw signal simulation is a powerful tool for designing new sensors, testing processing algorithms, planning missions, and devising inversion algorithms. In this paper, a spotlight SAR raw signal simulator for distributed targets is presented. The proposed procedure is based on a Fourier domain analysis: a proper analytical reformulation of the spotlight SAR raw signal expression is presented. It is shown that this reformulation allows us to design a very efficient simulation scheme that employs fast Fourier transform codes. Accordingly, the computational load is dramatically reduced with respect to a time-domain simulation and this, for the first time, makes spotlight simulation of extended scenes feasible.",
"title": ""
}
] | [
{
"docid": "8bc095fca33d850db89ffd15a84335dc",
"text": "There is, at present, considerable interest in the storage and dispatchability of photovoltaic (PV) energy, together with the need to manage power flows in real-time. This paper presents a new system, PV-on time, which has been developed to supervise the operating mode of a Grid-Connected Utility-Scale PV Power Plant in order to ensure the reliability and continuity of its supply. This system presents an architecture of acquisition devices, including wireless sensors distributed around the plant, which measure the required information. It is also equipped with a high-precision protocol for synchronizing all data acquisition equipment, something that is necessary for correctly establishing relationships among events in the plant. Moreover, a system for monitoring and supervising all of the distributed devices, as well as for the real-time treatment of all the registered information, is presented. Performances were analyzed in a 400 kW transformation center belonging to a 6.1 MW Utility-Scale PV Power Plant. In addition to monitoring the performance of all of the PV plant's components and detecting any failures or deviations in production, this system enables users to control the power quality of the signal injected and the influence of the installation on the distribution grid.",
"title": ""
},
{
"docid": "b77d297feeff92a2e7b03bf89b5f20db",
"text": "Dependability evaluation main objective is to assess the ability of a system to correctly function over time. There are many possible approaches to the evaluation of dependability: in these notes we are mainly concerned with dependability evaluation based on probabilistic models. Starting from simple probabilistic models with very efficient solution methods we shall then come to the main topic of the paper: how Petri nets can be used to evaluate the dependability of complex systems.",
"title": ""
},
{
"docid": "3182542aa5b500780bb8847178b8ec8d",
"text": "The United States is a diverse country with constantly changing demographics. The noticeable shift in demographics is even more phenomenal among the school-aged population. The increase of ethnic-minority student presence is largely credited to the national growth of the Hispanic population, which exceeded the growth of all other ethnic minority group students in public schools. Scholars have pondered over strategies to assist teachers in teaching about diversity (multiculturalism, racism, etc.) as well as interacting with the diversity found within their classrooms in order to ameliorate the effects of cultural discontinuity. One area that has developed in multicultural education literature is culturally relevant pedagogy (CRP). CRP maintains that teachers need to be non-judgmental and inclusive of the cultural backgrounds of their students in order to be effective facilitators of learning in the classroom. The plethora of literature on CRP, however, has not been presented as a testable theoretical model nor has it been systematically viewed through the lens of critical race theory (CRT). By examining the evolution of CRP among some of the leading scholars, the authors broaden this work through a CRT infusion which includes race and indeed racism as normal parts of American society that have been integrated into the educational system and the systematic aspects of school relationships. Their purpose is to infuse the tenets of CRT into an overview of the literature that supports a conceptual framework for understanding and studying culturally relevant pedagogy. They present a conceptual framework of culturally relevant pedagogy that is grounded in over a quarter of a century of research scholarship. By synthesizing the literature into the five areas and infusing it with the tenets of CRT, the authors have developed a collection of principles that represents culturally relevant pedagogy. (Contains 1 figure and 1 note.) culturally relevant pedagogy | teacher education | student-teacher relationships |",
"title": ""
},
{
"docid": "a0306096725c0d4b6bdd648bfa396f13",
"text": "Graph coloring—also known as vertex coloring—considers the problem of assigning colors to the nodes of a graph such that adjacent nodes do not share the same color. The optimization version of the problem concerns the minimization of the number of colors used. In this paper we deal with the problem of finding valid graphs colorings in a distributed way, that is, by means of an algorithm that only uses local information for deciding the color of the nodes. The algorithm proposed in this paper is inspired by the calling behavior of Japanese tree frogs. Male frogs use their calls to attract females. Interestingly, groups of males that are located near each other desynchronize their calls. This is because female frogs are only able to correctly localize male frogs when their calls are not too close in time. The proposed algorithm makes use of this desynchronization behavior for the assignment of different colors to neighboring nodes. We experimentally show that our algorithm is very competitive with the current state of the art, using different sets of problem instances and comparing to one of the most competitive algorithms from the literature.",
"title": ""
},
{
"docid": "164fd7be21190314a27bacb4dec522c5",
"text": "The relative ineffectiveness of information retrieval systems is largely caused by the inaccuracy with which a query formed by a few keywords models the actual user information need. One well known method to overcome this limitation is automatic query expansion (AQE), whereby the user’s original query is augmented by new features with a similar meaning. AQE has a long history in the information retrieval community but it is only in the last years that it has reached a level of scientific and experimental maturity, especially in laboratory settings such as TREC. This survey presents a unified view of a large number of recent approaches to AQE that leverage various data sources and employ very different principles and techniques. The following questions are addressed. Why is query expansion so important to improve search effectiveness? What are the main steps involved in the design and implementation of an AQE component? What approaches to AQE are available and how do they compare? Which issues must still be resolved before AQE becomes a standard component of large operational information retrieval systems (e.g., search engines)?",
"title": ""
},
{
"docid": "28439c317c1b7f94527db6c2e0edcbd0",
"text": "AnswerBus1 is an open-domain question answering system based on sentence level Web information retrieval. It accepts users’ natural-language questions in English, German, French, Spanish, Italian and Portuguese and provides answers in English. Five search engines and directories are used to retrieve Web pages that are relevant to user questions. From the Web pages, AnswerBus extracts sentences that are determined to contain answers. Its current rate of correct answers to TREC-8’s 200 questions is 70.5% with the average response time to the questions being seven seconds. The performance of AnswerBus in terms of accuracy and response time is better than other similar systems.",
"title": ""
},
{
"docid": "933073c108baa0229c8bcd423ceade47",
"text": "Federated Learning is a distributed machine learning approach which enables model training on a large corpus of decentralized data. We have built a scalable production system for Federated Learning in the domain of mobile devices, based on TensorFlow. In this paper, we describe the resulting high-level design, sketch some of the challenges and their solutions, and touch upon the open problems and future directions.",
"title": ""
},
{
"docid": "b7673dbe46a1118511d811241940e328",
"text": "A 100-MHz–2-GHz closed-loop analog in-phase/ quadrature correction circuit for digital clocks is presented. The proposed circuit consists of a phase-locked loop- type architecture for quadrature error correction. The circuit corrects the phase error to within a 1.5° up to 1 GHz and to within 3° at 2 GHz. It consumes 5.4 mA from a 1.2 V supply at 2 GHz. The circuit was designed in UMC 0.13-<inline-formula> <tex-math notation=\"LaTeX\">$\\mu \\text{m}$ </tex-math></inline-formula> mixed-mode CMOS with an active area of <inline-formula> <tex-math notation=\"LaTeX\">$102\\,\\,\\mu {\\mathrm{ m}} \\times 95\\,\\,\\mu {\\mathrm{ m}}$ </tex-math></inline-formula>. The impact of duty cycle distortion has been analyzed. High-frequency quadrature measurement related issues have been discussed. The proposed circuit was used in two different applications for which the functionality has been verified.",
"title": ""
},
{
"docid": "216c1f8d96e8392fe05e51f556caf2ef",
"text": "The Hypogonadism in Males study estimated the prevalence of hypogonadism [total testosterone (TT) < 300 ng/dl] in men aged > or = 45 years visiting primary care practices in the United States. A blood sample was obtained between 8 am and noon and assayed for TT, free testosterone (FT) and bioavailable testosterone (BAT). Common symptoms of hypogonadism, comorbid conditions, demographics and reason for visit were recorded. Of 2162 patients, 836 were hypogonadal, with 80 receiving testosterone. Crude prevalence rate of hypogonadism was 38.7%. Similar trends were observed for FT and BAT. Among men not receiving testosterone, 756 (36.3%) were hypogonadal; odds ratios for having hypogonadism were significantly higher in men with hypertension (1.84), hyperlipidaemia (1.47), diabetes (2.09), obesity (2.38), prostate disease (1.29) and asthma or chronic obstructive pulmonary disease (1.40) than in men without these conditions. The prevalence of hypogonadism was 38.7% in men aged > or = 45 years presenting to primary care offices.",
"title": ""
},
{
"docid": "ac76a4fe36e95d87f844c6876735b82f",
"text": "Theoretical estimates indicate that graphene thin films can be used as transparent electrodes for thin-film devices such as solar cells and organic light-emitting diodes, with an unmatched combination of sheet resistance and transparency. We demonstrate organic light-emitting diodes with solution-processed graphene thin film transparent conductive anodes. The graphene electrodes were deposited on quartz substrates by spin-coating of an aqueous dispersion of functionalized graphene, followed by a vacuum anneal step to reduce the sheet resistance. Small molecular weight organic materials and a metal cathode were directly deposited on the graphene anodes, resulting in devices with a performance comparable to control devices on indium-tin-oxide transparent anodes. The outcoupling efficiency of devices on graphene and indium-tin-oxide is nearly identical, in agreement with model predictions.",
"title": ""
},
{
"docid": "1ccc1b904fa58b1e31f4f3f4e2d76707",
"text": "When children and adolescents are the target population in dietary surveys many different respondent and observer considerations surface. The cognitive abilities required to self-report food intake include an adequately developed concept of time, a good memory and attention span, and a knowledge of the names of foods. From the age of 8 years there is a rapid increase in the ability of children to self-report food intake. However, while cognitive abilities should be fully developed by adolescence, issues of motivation and body image may hinder willingness to report. Ten validation studies of energy intake data have demonstrated that mis-reporting, usually in the direction of under-reporting, is likely. Patterns of under-reporting vary with age, and are influenced by weight status and the dietary survey method used. Furthermore, evidence for the existence of subject-specific responding in dietary assessment challenges the assumption that repeated measurements of dietary intake will eventually obtain valid data. Unfortunately, the ability to detect mis-reporters, by comparison with presumed energy requirements, is limited unless detailed activity information is available to allow the energy intake of each subject to be evaluated individually. In addition, high variability in nutrient intakes implies that, if intakes are valid, prolonged dietary recording will be required to rank children correctly for distribution analysis. Future research should focus on refining dietary survey methods to make them more sensitive to different ages and cognitive abilities. The development of improved techniques for identification of mis-reporters and investigation of the issue of differential reporting of foods should also be given priority.",
"title": ""
},
{
"docid": "14aefcc95313cecbce5f575fd78a9ae5",
"text": "The Penn Treebank does not annotate within base noun phrases (NPs), committing only to flat structures that ignore the complexity of English NPs. This means that tools trained on Treebank data cannot learn the correct internal structure of NPs. This paper details the process of adding gold-standard bracketing within each noun phrase in the Penn Treebank. We then examine the consistency and reliability of our annotations. Finally, we use this resource to determine NP structure using several statistical approaches, thus demonstrating the utility of the corpus. This adds detail to the Penn Treebank that is necessary for many NLP applications.",
"title": ""
},
{
"docid": "2c63b16ba725f8941f2f9880530911ef",
"text": "To facilitate wireless transmission of multimedia content to mobile users, we propose a content caching and distribution framework for smart grid enabled OFDM networks, where each popular multimedia file is coded and distributively stored in multiple energy harvesting enabled serving nodes (SNs), and the green energy distributively harvested by SNs can be shared with each other through the smart grid. The distributive caching, green energy sharing, and the on-grid energy backup have improved the reliability and performance of the wireless multimedia downloading process. To minimize the total on-grid power consumption of the whole network, while guaranteeing that each user can retrieve the whole content, the user association scheme is jointly designed with consideration of resource allocation, including subchannel assignment, power allocation, and power flow among nodes. Simulation results demonstrate that bringing content, green energy, and SN closer to the end user can notably reduce the on-grid energy consumption.",
"title": ""
},
{
"docid": "f4c1a8b19248e0cb8e2791210715e7b7",
"text": "The translation of proper names is one of the most challenging activities every translator faces. While working on children’s literature, the translation is especially complicated since proper names usually have various allusions indicating sex, age, geographical belonging, history, specific meaning, playfulness of language and cultural connotations. The goal of this article is to draw attention to strategic choices for the translation of proper names in children’s literature. First, the article presents the theoretical considerations that deal with different aspects of proper names in literary works and the issue of their translation. Second, the translation strategies provided by the translation theorist Eirlys E. Davies used for this research are explained. In addition, the principles of adaptation of proper names provided the State Commission of the Lithuanian Language are presented. Then, the discussion proceeds to the quantitative analysis of the translated proper names with an emphasis on providing and explaining numerous examples. The research has been carried out on four popular fantasy books translated from English and German by three Lithuanian translators. After analyzing the strategies of preservation, localization, transformation and creation, the strategy of localization has proved to be the most frequent one in all translations.",
"title": ""
},
{
"docid": "0170bcdc662628fb46142e62bc8e011d",
"text": "Agriculture is the sole provider of human food. Most farm machines are driven by fossil fuels, which contribute to greenhouse gas emissions and, in turn, accelerate climate change. Such environmental damage can be mitigated by the promotion of renewable resources such as solar, wind, biomass, tidal, geo-thermal, small-scale hydro, biofuels and wave-generated power. These renewable resources have a huge potential for the agriculture industry. The farmers should be encouraged by subsidies to use renewable energy technology. The concept of sustainable agriculture lies on a delicate balance of maximizing crop productivity and maintaining economic stability, while minimizing the utilization of finite natural resources and detrimental environmental impacts. Sustainable agriculture also depends on replenishing the soil while minimizing the use of non-renewable resources, such as natural gas, which is used in converting atmospheric nitrogen into synthetic fertilizer, and mineral ores, e.g. phosphate or fossil fuel used in diesel generators for water pumping for irrigation. Hence, there is a need for promoting use of renewable energy systems for sustainable agriculture, e.g. solar photovoltaic water pumps and electricity, greenhouse technologies, solar dryers for post-harvest processing, and solar hot water heaters. In remote agricultural lands, the underground submersible solar photovoltaic water pump is economically viable and also an environmentally-friendly option as compared with a diesel generator set. If there are adverse climatic conditions for the growth of particular plants in cold climatic zones then there is need for renewable energy technology such as greenhouses for maintaining the optimum plant ambient temperature conditions for the growth of plants and vegetables. The economics of using greenhouses for plants and vegetables, and solar photovoltaic water pumps for sustainable agriculture and the environment are presented in this article. Clean development provides industrialized countries with an incentive to invest in emission reduction projects in developing countries to achieve a reduction in CO2 emissions at the lowest cost. The mechanism of clean development is discussed in brief for the use of renewable systems for sustainable agricultural development specific to solar photovoltaic water pumps in India and the world. This article explains in detail the role of renewable energy in farming by connecting all aspects of agronomy with ecology, the environment, economics and societal change.",
"title": ""
},
{
"docid": "afcfe379acfd727b6044c70478b3c2a3",
"text": "We present SfSNet, an end-to-end learning framework for producing an accurate decomposition of an unconstrained human face image into shape, reflectance and illuminance. SfSNet is designed to reflect a physical lambertian rendering model. SfSNet learns from a mixture of labeled synthetic and unlabeled real world images. This allows the network to capture low frequency variations from synthetic and high frequency details from real images through the photometric reconstruction loss. SfSNet consists of a new decomposition architecture with residual blocks that learns a complete separation of albedo and normal. This is used along with the original image to predict lighting. SfSNet produces significantly better quantitative and qualitative results than state-of-the-art methods for inverse rendering and independent normal and illumination estimation.",
"title": ""
},
{
"docid": "0d1f9b3fa3d03b37438024ba354ca68a",
"text": "Our goal is to learn a semantic parser that maps natural language utterances into executable programs when only indirect supervision is available: examples are labeled with the correct execution result, but not the program itself. Consequently, we must search the space of programs for those that output the correct result, while not being misled by spurious programs: incorrect programs that coincidentally output the correct result. We connect two common learning paradigms, reinforcement learning (RL) and maximum marginal likelihood (MML), and then present a new learning algorithm that combines the strengths of both. The new algorithm guards against spurious programs by combining the systematic search traditionally employed in MML with the randomized exploration of RL, and by updating parameters such that probability is spread more evenly across consistent programs. We apply our learning algorithm to a new neural semantic parser and show significant gains over existing state-of-theart results on a recent context-dependent semantic parsing task.",
"title": ""
},
{
"docid": "c85e5745141e64e224a5c4c61f1b1866",
"text": "Crowd-sourcing has become a popular means of acquiring labeled data for many tasks where humans are more accurate than computers, such as image tagging, entity resolution, or sentiment analysis. However, due to the time and cost of human labor, solutions that solely rely on crowd-sourcing are often limited to small datasets (i.e., a few thousand items). This paper proposes algorithms for integrating machine learning into crowd-sourced databases in order to combine the accuracy of human labeling with the speed and cost-effectiveness of machine learning classifiers. By using active learning as our optimization strategy for labeling tasks in crowdsourced databases, we can minimize the number of questions asked to the crowd, allowing crowd-sourced applications to scale (i.e, label much larger datasets at lower costs). Designing active learning algorithms for a crowd-sourced database poses many practical challenges: such algorithms need to be generic, scalable, and easy-to-use for a broad range of practitioners, even those who are not machine learning experts. We draw on the theory of nonparametric bootstrap to design, to the best of our knowledge, the first active learning algorithms that meet all these requirements. Our results, on 3 real-world datasets collected with Amazon’s Mechanical Turk, and on 15 UCI datasets, show that our methods on average ask 1–2 orders of magnitude fewer questions than the baseline, and 4.5–44× fewer than existing active learning algorithms.",
"title": ""
},
{
"docid": "c4e6176193677f62f6b33dc02580c7f2",
"text": "E-learning has become an essential factor in the modern educational system. In today's diverse student population, E-learning must recognize the differences in student personalities to make the learning process more personalized. The objective of this study is to create a data model to identify both the student personality type and the dominant preference based on the Myers-Briggs Type Indicator (MBTI) theory. The proposed model utilizes data from student engagement with the learning management system (Moodle) and the social network, Facebook. The model helps students become aware of their personality, which in turn makes them more efficient in their study habits. The model also provides vital information for educators, equipping them with a better understanding of each student's personality. With this knowledge, educators will be more capable of matching students with their respective learning styles. The proposed model was applied on a sample data collected from the Business College at the German university in Cairo, Egypt (240 students). The model was tested using 10 data mining classification algorithms which were NaiveBayes, BayesNet, Kstar, Random forest, J48, OneR, JRIP, KNN /IBK, RandomTree, Decision Table. The results showed that OneR had the best accuracy percentage of 97.40%, followed by Random forest 93.23% and J48 92.19%.",
"title": ""
}
] | scidocsrr |
381a180ecd74e87262ec5c5be0ccbe97 | Facial Action Coding System | [
{
"docid": "6b6285cd8512a2376ae331fda3fedf20",
"text": "The Facial Action Coding System (FACS) (Ekman & Friesen, 1978) is a comprehensive and widely used method of objectively describing facial activity. Little is known, however, about inter-observer reliability in coding the occurrence, intensity, and timing of individual FACS action units. The present study evaluated the reliability of these measures. Observational data came from three independent laboratory studies designed to elicit a wide range of spontaneous expressions of emotion. Emotion challenges included olfactory stimulation, social stress, and cues related to nicotine craving. Facial behavior was video-recorded and independently scored by two FACS-certified coders. Overall, we found good to excellent reliability for the occurrence, intensity, and timing of individual action units and for corresponding measures of more global emotion-specified combinations.",
"title": ""
}
] | [
{
"docid": "a65d1881f5869f35844064d38b684ac8",
"text": "Skilled artists, using traditional media or modern computer painting tools, can create a variety of expressive styles that are very appealing in still images, but have been unsuitable for animation. The key difficulty is that existing techniques lack adequate temporal coherence to animate these styles effectively. Here we augment the range of practical animation styles by extending the guided texture synthesis method of Image Analogies [Hertzmann et al. 2001] to create temporally coherent animation sequences. To make the method art directable, we allow artists to paint portions of keyframes that are used as constraints. The in-betweens calculated by our method maintain stylistic continuity and yet change no more than necessary over time.",
"title": ""
},
{
"docid": "8fc758632346ce45e8f984018cde5ece",
"text": "Today Recommendation systems [3] have become indispensible because of the sheer bulk of information made available to a user from web-services(Netflix, IMDB, Amazon and many others) and the need for personalized suggestions. Recommendation systems are a well studied research area. In the following work, we present our study on the Netflix Challenge [1]. The Neflix Challenge can be summarized in the following way: ”Given a movie, predict the rating of a particular user based on the user’s prior ratings”. The performance of all such approaches is measured using the RMSE (root mean-squared error) of the submitted ratings from the actual ratings. Currently, the best system has an RMSE of 0.8616 [2]. We obtained ratings from the following approaches:",
"title": ""
},
{
"docid": "c197198ca45acec2575d5be26fc61f36",
"text": "General systems theory has been proposed as a basis for the unification of science. The open systems model has stimulated many new conceptualizations in organization theory and management practice. However, experience in utilizing these concepts suggests many unresolved dilemmas. Contingency views represent a step toward less abstraction, more explicit patterns of relationships, and more applicable theory. Sophistication will come when we have a more complete understanding of organizations as total systems (configurations of subsystems) so that we can prescribe more appropriate organizational designs and managerial systems. Ultimately, organization theory should serve as the foundation for more effective management practice.",
"title": ""
},
{
"docid": "12eff845ccb6e5cc2b2fbe74935aff46",
"text": "The study of this paper presents a new technique to use automatic number plate detection and recognition. This system plays a significant role throughout this busy world, owing to rise in use of vehicles day-by-day. Some of the applications of this software are automatic toll tax collection, unmanned parking slots, safety, and security. The current scenario happening in India is, people, break the rules of the toll and move away which can cause many serious issues like accidents. This system uses efficient algorithms to detect the vehicle number from real-time images. The system detects the license plate on the vehicle first and then captures the image of it. Vehicle number plate is localized and characters are segmented and further recognized with help of neural network. The system is designed for grayscale images so it detects the number plate regardless of its color. The resulting vehicle number plate is then compared with the available database of all vehicles which have been already registered by the users so as to come up with information about vehicle type and charge accordingly. The vehicle information such as date, toll amount is stored in the database to maintain the record.",
"title": ""
},
{
"docid": "5f20ed750fc260f40d01e8ac5ddb633d",
"text": ". . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii CHAPTER",
"title": ""
},
{
"docid": "f1cfd3980bb7dc78309074012be3cf03",
"text": "A chatbot is a conversational agent that interacts with users using natural language. Multi chatbots are available to serve in different domains. However, the knowledge base of chatbots is hand coded in its brain. This paper presents an overview of ALICE chatbot, its AIML format, and our experiments to generate different prototypes of ALICE automatically based on a corpus approach. A description of developed software which converts readable text (corpus) into AIML format is presented alongside with describing the different corpora we used. Our trials revealed the possibility of generating useful prototypes without the need for sophisticated natural language processing or complex machine learning techniques. These prototypes were used as tools to practice different languages, to visualize corpus, and to provide answers for questions.",
"title": ""
},
{
"docid": "22ad4568fbf424592c24783fb3037f62",
"text": "We propose an unsupervised learning technique for extracting information about authors and topics from large text collections. We model documents as if they were generated by a two-stage stochastic process. An author is represented by a probability distribution over topics, and each topic is represented as a probability distribution over words. The probability distribution over topics in a multi-author paper is a mixture of the distributions associated with the authors. The topic-word and author-topic distributions are learned from data in an unsupervised manner using a Markov chain Monte Carlo algorithm. We apply the methodology to three large text corpora: 150,000 abstracts from the CiteSeer digital library, 1740 papers from the Neural Information Processing Systems (NIPS) Conferences, and 121,000 emails from the Enron corporation. We discuss in detail the interpretation of the results discovered by the system including specific topic and author models, ranking of authors by topic and topics by author, parsing of abstracts by topics and authors, and detection of unusual papers by specific authors. Experiments based on perplexity scores for test documents and precision-recall for document retrieval are used to illustrate systematic differences between the proposed author-topic model and a number of alternatives. Extensions to the model, allowing for example, generalizations of the notion of an author, are also briefly discussed.",
"title": ""
},
{
"docid": "34bfec0f1f7eb748b3632bbf288be3bd",
"text": "An omnidirectional mobile robot is able, kinematically, to move in any direction regardless of current pose. To date, nearly all designs and analyses of omnidirectional mobile robots have considered the case of motion on flat, smooth terrain. In this paper, an investigation of the design and control of an omnidirectional mobile robot for use in rough terrain is presented. Kinematic and geometric properties of the active split offset caster drive mechanism are investigated along with system and subsystem design guidelines. An optimization method is implemented to explore the design space. The use of this method results in a robot that has higher mobility than a robot designed using engineering judgment. A simple kinematic controller that considers the effects of terrain unevenness via an estimate of the wheel-terrain contact angles is also presented. It is shown in simulation that under the proposed control method, near-omnidirectional tracking performance is possible even in rough, uneven terrain. DOI: 10.1115/1.4000214",
"title": ""
},
{
"docid": "e364db9141c85b1f260eb3a9c1d42c5b",
"text": "Ten US presidential elections ago in Chapel Hill, North Carolina, the agenda of issues that a small group of undecided voters regarded as the most important ones of the day was compared with the news coverage of public issues in the news media these voters used to follow the campaign (McCombs and Shaw, 1972). Since that election, the principal finding in Chapel Hill*/those aspects of public affairs that are prominent in the news become prominent among the public*/has been replicated in hundreds of studies worldwide. These replications include both election and non-election settings for a broad range of public issues and other aspects of political communication and extend beyond the United States to Europe, Asia, Latin America and Australia. Recently, as the news media have expanded to include online newspapers available on the Web, agenda-setting effects have been documented for these new media. All in all, this research has grown far beyond its original domain*/the transfer of salience from the media agenda to the public agenda*/and now encompasses five distinct stages of theoretical attention. Until very recently, the ideas and findings that detail these five stages of agenda-setting theory have been scattered in a wide variety of research journals, book chapters and books published in many different countries. As a result, knowledge of agenda setting has been very unevenly distributed. Scholars designing new studies often had incomplete knowledge of previous research, and graduate students entering the field of mass communication had difficulty learning in detail what we know about the agenda-setting role of the mass media. This situation was my incentive to write Setting the Agenda: the mass media and public opinion, which was published in England in late 2004 and in the United States early in 2005. My primary goal was to gather the principal ideas and empirical findings about agenda setting in one place. John Pavlik has described this integrated presentation as the Gray’s Anatomy of agenda setting (McCombs, 2004, p. xii). Shortly after the US publication of Setting the Agenda , I received an invitation from Journalism Studies to prepare an overview of agenda setting. The timing was wonderfully fortuitous because a book-length presentation of what we have learned in the years since Chapel Hill could be coupled with a detailed discussion in a major journal of current trends and future likely directions in agenda-setting research. Journals are the best venue for advancing the stepby-step accretion of knowledge because they typically reach larger audiences than books, generate more widespread discussion and offer more space for the focused presentation of a particular aspect of a research area. Books can then periodically distill this knowledge. Given the availability of a detailed overview in Setting the Agenda , the presentation here of the five stages of agenda-setting theory emphasizes current and near-future research questions in these areas. Moving beyond these specific Journalism Studies, Volume 6, Number 4, 2005, pp. 543 557",
"title": ""
},
{
"docid": "abdffec5ea2b05b61006cc7b6b295976",
"text": "Making recommendation requires predicting what is of interest to a user at a specific time. Even the same user may have different desires at different times. It is important to extract the aggregate interest of a user from his or her navigational path through the site in a session. This paper concentrates on the discovery and modelling of the user’s aggregate interest in a session. This approach relies on the premise that the visiting time of a page is an indicator of the user’s interest in that page. The proportion of times spent in a set of pages requested by the user within a single session forms the aggregate interest of that user in that session. We first partition user sessions into clusters such that only sessions which represent similar aggregate interest of users are placed in the same cluster. We employ a model-based clustering approach and partition user sessions according to similar amount of time in similar pages. In particular, we cluster sessions by learning a mixture of Poisson models using Expectation Maximization algorithm. The resulting clusters are then used to recommend pages to a user that are most likely contain the information which is of interest to that user at that time. Although the approach does not use the sequential patterns of transactions, experimental evaluation shows that the approach is quite effective in capturing a Web user’s access pattern. The model has an advantage over previous proposals in terms of speed and memory usage.",
"title": ""
},
{
"docid": "53b48550158b06dfbdb8c44a4f7241c6",
"text": "The primary aim of the study was to examine the relationship between media exposure and body image in adolescent girls, with a particular focus on the ‘new’ and as yet unstudied medium of the Internet. A sample of 156 Australian female high school students (mean age= 14.9 years) completed questionnaire measures of media consumption and body image. Internet appearance exposure and magazine reading, but not television exposure, were found to be correlated with greater internalization of thin ideals, appearance comparison, weight dissatisfaction, and drive for thinness. Regression analyses indicated that the effects of magazines and Internet exposure were mediated by internalization and appearance comparison. It was concluded that the Internet represents a powerful sociocultural influence on young women’s lives.",
"title": ""
},
{
"docid": "f3b0bace6028b3d607618e2e53294704",
"text": "State-of-the art spoken language understanding models that automatically capture user intents in human to machine dialogs are trained with manually annotated data, which is cumbersome and time-consuming to prepare. For bootstrapping the learning algorithm that detects relations in natural language queries to a conversational system, one can rely on publicly available knowledge graphs, such as Freebase, and mine corresponding data from the web. In this paper, we present an unsupervised approach to discover new user intents using a novel Bayesian hierarchical graphical model. Our model employs search query click logs to enrich the information extracted from bootstrapped models. We use the clicked URLs as implicit supervision and extend the knowledge graph based on the relational information discovered from this model. The posteriors from the graphical model relate the newly discovered intents with the search queries. These queries are then used as additional training examples to complement the bootstrapped relation detection models. The experimental results demonstrate the effectiveness of this approach, showing extended coverage to new intents without impacting the known intents.",
"title": ""
},
{
"docid": "6efdf43a454ce7da51927c07f1449695",
"text": "We investigate efficient representations of functions that can be written as outputs of so-called sum-product networks, that alternate layers of product and sum operations (see Fig 1 for a simple sum-product network). We find that there exist families of such functions that can be represented much more efficiently by deep sum-product networks (i.e. allowing multiple hidden layers), compared to shallow sum-product networks (constrained to using a single hidden layer). For instance, there is a family of functions fn where n is the number of input variables, such that fn can be computed with a deep sum-product network of log 2 n layers and n−1 units, while a shallow sum-product network (two layers) requires 2 √ n−1 units. These mathematical results are in the same spirit as those by H̊astad and Goldmann (1991) on the limitations of small depth computational circuits. They motivate using deep networks to be able to model complex functions more efficiently than with shallow networks. Exponential gains in terms of the number of parameters are quite significant in the context of statistical machine learning. Indeed, the number of training samples required to optimize a model’s parameters without suffering from overfitting typically increases with the number of parameters. Deep networks thus offer a promising way to learn complex functions from limited data, even though parameter optimization may still be challenging.",
"title": ""
},
{
"docid": "296025d4851569031f0ebe36d792fadc",
"text": "In this paper we present the first, to the best of our knowledge, discourse parser that is able to predict non-tree DAG structures. We use Integer Linear Programming (ILP) to encode both the objective function and the constraints as global decoding over local scores. Our underlying data come from multi-party chat dialogues, which require the prediction of DAGs. We use the dependency parsing paradigm, as has been done in the past (Muller et al., 2012; Li et al., 2014; Afantenos et al., 2015), but we use the underlying formal framework of SDRT and exploit SDRT’s notions of left and right distributive relations. We achieve an Fmeasure of 0.531 for fully labeled structures which beats the previous state of the art.",
"title": ""
},
{
"docid": "496ba5ee48281afe48b5afce02cc4dbf",
"text": "OBJECTIVE\nThis study examined the relationship between reported exposure to child abuse and a history of parental substance abuse (alcohol and drugs) in a community sample in Ontario, Canada.\n\n\nMETHOD\nThe sample consisted of 8472 respondents to the Ontario Mental Health Supplement (OHSUP), a comprehensive population survey of mental health. The association of self-reported retrospective childhood physical and sexual abuse and parental histories of drug or alcohol abuse was examined.\n\n\nRESULTS\nRates of physical and sexual abuse were significantly higher, with a more than twofold increased risk among those reporting parental substance abuse histories. The rates were not significantly different between type or severity of abuse. Successively increasing rates of abuse were found for those respondents who reported that their fathers, mothers or both parents had substance abuse problems; this risk was significantly elevated for both parents compared to father only with substance abuse problem.\n\n\nCONCLUSIONS\nParental substance abuse is associated with a more than twofold increase in the risk of exposure to both childhood physical and sexual abuse. While the mechanism for this association remains unclear, agencies involved in child protection or in treatment of parents with substance abuse problems must be cognizant of this relationship and focus on the development of interventions to serve these families.",
"title": ""
},
{
"docid": "461ec14463eb20962ef168de781ac2a2",
"text": "Local descriptors based on the image noise residual have proven extremely effective for a number of forensic applications, like forgery detection and localization. Nonetheless, motivated by promising results in computer vision, the focus of the research community is now shifting on deep learning. In this paper we show that a class of residual-based descriptors can be actually regarded as a simple constrained convolutional neural network (CNN). Then, by relaxing the constraints, and fine-tuning the net on a relatively small training set, we obtain a significant performance improvement with respect to the conventional detector.",
"title": ""
},
{
"docid": "eae289c213d5b67d91bb0f461edae7af",
"text": "China has made remarkable progress in its war against poverty since the launching of economic reform in the late 1970s. This paper examines some of the major driving forces of poverty reduction in China. Based on time series and cross-sectional provincial data, the determinants of rural poverty incidence are estimated. The results show that economic growth is an essential and necessary condition for nationwide poverty reduction. It is not, however, a sufficient condition. While economic growth played a dominant role in reducing poverty through the mid-1990s, its impacts has diminished since that time. Beyond general economic growth, growth in specific sectors of the economy is also found to reduce poverty. For example, the growth the agricultural sector and other pro-rural (vs urban-biased) development efforts can also have significant impacts on rural poverty. Notwithstanding the record of the past, our paper is consistent with the idea that poverty reduction in the future will need to rely on more than broad-based growth and instead be dependent on pro-poor policy interventions (such as national poverty alleviation programs) that can be targeted at the poor, trying to directly help the poor to increase their human capital and incomes. Determinants of Rural Poverty Reduction and Pro-poor Economic Growth in China",
"title": ""
},
{
"docid": "0562b3b1692f07060cf4eeb500ea6cca",
"text": "As the volume of medicinal information stored electronically increase, so do the need to enhance how it is secured. The inaccessibility to patient record at the ideal time can prompt death toll and also well degrade the level of health care services rendered by the medicinal professionals. Criminal assaults in social insurance have expanded by 125% since 2010 and are now the leading cause of medical data breaches. This study therefore presents the combination of 3DES and LSB to improve security measure applied on medical data. Java programming language was used to develop a simulation program for the experiment. The result shows medical data can be stored, shared, and managed in a reliable and secure manner using the combined model. Keyword: Information Security; Health Care; 3DES; LSB; Cryptography; Steganography 1.0 INTRODUCTION In health industries, storing, sharing and management of patient information have been influenced by the current technology. That is, medical centres employ electronical means to support their mode of service in order to deliver quality health services. The importance of the patient record cannot be over emphasised as it contributes to when, where, how, and how lives can be saved. About 91% of health care organizations have encountered no less than one data breach, costing more than $2 million on average per organization [1-3]. Report also shows that, medical records attract high degree of importance to hoodlums compare to Mastercard information because they infer more cash base on the fact that bank",
"title": ""
},
{
"docid": "fcdde2f5b55b6d8133e6dea63d61b2c8",
"text": "It has been observed by many people that a striking number of quite diverse mathematical problems can be formulated as problems in integer programming, that is, linear programming problems in which some or all of the variables are required to assume integral values. This fact is rendered quite interesting by recent research on such problems, notably by R. E. Gomory [2, 3], which gives promise of yielding efficient computational techniques for their solution. The present paper provides yet another example of the versatility of integer programming as a mathematical modeling device by representing a generalization of the well-known “Travelling Salesman Problem” in integer programming terms. The authors have developed several such models, of which the one presented here is the most efficient in terms of generality, number of variables, and number of constraints. This model is due to the second author [4] and was presented briefly at the Symposium on Combinatorial Problems held at Princeton University, April 1960, sponsored by SIAM and IBM. The problem treated is: (1) A salesman is required to visit each of <italic>n</italic> cities, indexed by 1, ··· , <italic>n</italic>. He leaves from a “base city” indexed by 0, visits each of the <italic>n</italic> other cities exactly once, and returns to city 0. During his travels he must return to 0 exactly <italic>t</italic> times, including his final return (here <italic>t</italic> may be allowed to vary), and he must visit no more than <italic>p</italic> cities in one tour. (By a tour we mean a succession of visits to cities without stopping at city 0.) It is required to find such an itinerary which minimizes the total distance traveled by the salesman.\n Note that if <italic>t</italic> is fixed, then for the problem to have a solution we must have <italic>tp</italic> ≧ <italic>n</italic>. For <italic>t</italic> = 1, <italic>p</italic> ≧ <italic>n</italic>, we have the standard traveling salesman problem.\nLet <italic>d<subscrpt>ij</subscrpt></italic> (<italic>i</italic> ≠ <italic>j</italic> = 0, 1, ··· , <italic>n</italic>) be the distance covered in traveling from city <italic>i</italic> to city <italic>j</italic>. The following integer programming problem will be shown to be equivalent to (1): (2) Minimize the linear form ∑<subscrpt>0≦<italic>i</italic>≠<italic>j</italic>≦<italic>n</italic></subscrpt>∑ <italic>d<subscrpt>ij</subscrpt>x<subscrpt>ij</subscrpt></italic> over the set determined by the relations ∑<supscrpt><italic>n</italic></supscrpt><subscrpt><italic>i</italic>=0<italic>i</italic>≠<italic>j</italic></subscrpt> <italic>x<subscrpt>ij</subscrpt></italic> = 1 (<italic>j</italic> = 1, ··· , <italic>n</italic>) ∑<supscrpt><italic>n</italic></supscrpt><subscrpt><italic>j</italic>=0<italic>j</italic>≠<italic>i</italic></subscrpt> <italic>x<subscrpt>ij</subscrpt></italic> = 1 (<italic>i</italic> = 1, ··· , <italic>n</italic>) <italic>u<subscrpt>i</subscrpt></italic> - <italic>u<subscrpt>j</subscrpt></italic> + <italic>px<subscrpt>ij</subscrpt></italic> ≦ <italic>p</italic> - 1 (1 ≦ <italic>i</italic> ≠ <italic>j</italic> ≦ <italic>n</italic>) where the <italic>x<subscrpt>ij</subscrpt></italic> are non-negative integers and the <italic>u<subscrpt>i</subscrpt></italic> (<italic>i</italic> = 1, …, <italic>n</italic>) are arbitrary real numbers. 
(We shall see that it is permissible to restrict the <italic>u<subscrpt>i</subscrpt></italic> to be non-negative integers as well.)\n If <italic>t</italic> is fixed it is necessary to add the additional relation: ∑<supscrpt><italic>n</italic></supscrpt><subscrpt><italic>u</italic>=1</subscrpt> <italic>x</italic><subscrpt><italic>i</italic>0</subscrpt> = <italic>t</italic> Note that the constraints require that <italic>x<subscrpt>ij</subscrpt></italic> = 0 or 1, so that a natural correspondence between these two problems exists if the <italic>x<subscrpt>ij</subscrpt></italic> are interpreted as follows: The salesman proceeds from city <italic>i</italic> to city <italic>j</italic> if and only if <italic>x<subscrpt>ij</subscrpt></italic> = 1. Under this correspondence the form to be minimized in (2) is the total distance to be traveled by the salesman in (1), so the burden of proof is to show that the two feasible sets correspond; i.e., a feasible solution to (2) has <italic>x<subscrpt>ij</subscrpt></italic> which do define a legitimate itinerary in (1), and, conversely a legitimate itinerary in (1) defines <italic>x<subscrpt>ij</subscrpt></italic>, which, together with appropriate <italic>u<subscrpt>i</subscrpt></italic>, satisfy the constraints of (2).\nConsider a feasible solution to (2).\n The number of returns to city 0 is given by ∑<supscrpt><italic>n</italic></supscrpt><subscrpt><italic>i</italic>=1</subscrpt> <italic>x</italic><subscrpt><italic>i</italic>0</subscrpt>. The constraints of the form ∑ <italic>x<subscrpt>ij</subscrpt></italic> = 1, all <italic>x<subscrpt>ij</subscrpt></italic> non-negative integers, represent the conditions that each city (other than zero) is visited exactly once. The <italic>u<subscrpt>i</subscrpt></italic> play a role similar to node potentials in a network and the inequalities involving them serve to eliminate tours that do not begin and end at city 0 and tours that visit more than <italic>p</italic> cities. Consider any <italic>x</italic><subscrpt><italic>r</italic><subscrpt>0</subscrpt><italic>r</italic><subscrpt>1</subscrpt></subscrpt> = 1 (<italic>r</italic><subscrpt>1</subscrpt> ≠ 0). There exists a unique <italic>r</italic><subscrpt>2</subscrpt> such that <italic>x</italic><subscrpt><italic>r</italic><subscrpt>1</subscrpt><italic>r</italic><subscrpt>2</subscrpt></subscrpt> = 1. Unless <italic>r</italic><subscrpt>2</subscrpt> = 0, there is a unique <italic>r</italic><subscrpt>3</subscrpt> with <italic>x</italic><subscrpt><italic>r</italic><subscrpt>2</subscrpt><italic>r</italic><subscrpt>3</subscrpt></subscrpt> = 1. We proceed in this fashion until some <italic>r<subscrpt>j</subscrpt></italic> = 0. This must happen since the alternative is that at some point we reach an <italic>r<subscrpt>k</subscrpt></italic> = <italic>r<subscrpt>j</subscrpt></italic>, <italic>j</italic> + 1 < <italic>k</italic>. \n Since none of the <italic>r</italic>'s are zero we have <italic>u<subscrpt>r<subscrpt>i</subscrpt></subscrpt></italic> - <italic>u</italic><subscrpt><italic>r</italic><subscrpt><italic>i</italic> + 1</subscrpt></subscrpt> + <italic>px</italic><subscrpt><italic>r<subscrpt>i</subscrpt></italic><italic>r</italic><subscrpt><italic>i</italic> + 1</subscrpt></subscrpt> ≦ <italic>p</italic> - 1 or <italic>u<subscrpt>r<subscrpt>i</subscrpt></subscrpt></italic> - <italic>u</italic><subscrpt><italic>r</italic><subscrpt><italic>i</italic> + 1</subscrpt></subscrpt> ≦ - 1. 
Summing from <italic>i</italic> = <italic>j</italic> to <italic>k</italic> - 1, we have <italic>u<subscrpt>r<subscrpt>j</subscrpt></subscrpt></italic> - <italic>u<subscrpt>r<subscrpt>k</subscrpt></subscrpt></italic> = 0 ≦ <italic>j</italic> + 1 - <italic>k</italic>, which is a contradiction. Thus all tours include city 0. It remains to observe that no tours is of length greater than <italic>p</italic>. Suppose such a tour exists, <italic>x</italic><subscrpt>0<italic>r</italic><subscrpt>1</subscrpt></subscrpt> , <italic>x</italic><subscrpt><italic>r</italic><subscrpt>1</subscrpt><italic>r</italic><subscrpt>2</subscrpt></subscrpt> , ···· , <italic>x</italic><subscrpt><italic>r<subscrpt>p</subscrpt>r</italic><subscrpt><italic>p</italic>+1</subscrpt></subscrpt> = 1 with all <italic>r<subscrpt>i</subscrpt></italic> ≠ 0. Then, as before, <italic>u</italic><subscrpt><italic>r</italic>1</subscrpt> - <italic>u</italic><subscrpt><italic>r</italic><subscrpt><italic>p</italic>+1</subscrpt></subscrpt> ≦ - <italic>p</italic> or <italic>u</italic><subscrpt><italic>r</italic><subscrpt><italic>p</italic>+1</subscrpt></subscrpt> - <italic>u</italic><subscrpt><italic>r</italic><subscrpt>1</subscrpt></subscrpt> ≧ <italic>p</italic>.\n But we have <italic>u</italic><subscrpt><italic>r</italic><subscrpt><italic>p</italic>+1</subscrpt></subscrpt> - <italic>u</italic><subscrpt><italic>r</italic><subscrpt>1</subscrpt></subscrpt> + <italic>px</italic><subscrpt><italic>r</italic><subscrpt><italic>p</italic>+1</subscrpt><italic>r</italic><subscrpt>1</subscrpt></subscrpt> ≦ <italic>p</italic> - 1 or <italic>u</italic><subscrpt><italic>r</italic><subscrpt><italic>p</italic>+1</subscrpt></subscrpt> - <italic>u</italic><subscrpt><italic>r</italic><subscrpt>1</subscrpt></subscrpt> ≦ <italic>p</italic> (1 - <italic>x</italic><subscrpt><italic>r</italic><subscrpt><italic>p</italic>+1</subscrpt><italic>r</italic><subscrpt>1</subscrpt></subscrpt>) - 1 ≦ <italic>p</italic> - 1, which is a contradiction.\nConversely, if the <italic>x<subscrpt>ij</subscrpt></italic> correspond to a legitimate itinerary, it is clear that the <italic>u<subscrpt>i</subscrpt></italic> can be adjusted so that <italic>u<subscrpt>i</subscrpt></italic> = <italic>j</italic> if city <italic>i</italic> is the <italic>j</italic>th city visited in the tour which includes city <italic>i</italic>, for we then have <italic>u<subscrpt>i</subscrpt></italic> - <italic>u<subscrpt>j</subscrpt></italic> = - 1 if <italic>x<subscrpt>ij</subscrpt></italic> = 1, and always <italic>u<subscrpt>i</subscrpt></italic> - <italic>u<subscrpt>j</subscrpt></italic> ≦ <italic>p</italic> - 1.\n The above integer program involves <italic>n</italic><supscrpt>2</supscrpt> + <italic>n</italic> constraints (if <italic>t</italic> is not fixed) in <italic>n</italic><supscrpt>2</supscrpt> + 2<italic>n</italic> variables. Since the inequality form of constraint is fundamental for integer programming calculations, one may eliminate 2<italic>n</italic> variables, say the <italic>x</italic><subscrpt><italic>i</italic>0</subscrpt> and <italic>x</italic><subscrpt>0<italic>j</italic></subscrpt>, by means of the equation constraints and produce",
"title": ""
},
{
"docid": "05cea038adce7f5ae2a09a7fd5e024a7",
"text": "The paper describes the use TMS320C5402 DSP for single channel active noise cancellation (ANC) in duct system. The canceller uses a feedback control topology and is designed to cancel narrowband periodic tones. The signal is processed with well-known filtered-X least mean square (filtered-X LMS) Algorithm in the digital signal processing. The paper describes the hardware and use chip support libraries for data streaming. The FXLMS algorithm is written in assembly language callable from C main program. The results obtained are compatible to the expected result in the literature available. The paper highlights the features of cancellation and analyzes its performance at different gain and frequency.",
"title": ""
}
] | scidocsrr |
0122b9fb5f10ff47ba9f9a6d8b634b3b | Hierarchical Reinforcement Learning for Adaptive Text Generation | [
{
"docid": "8640cd629e07f8fa6764c387d9fa7c29",
"text": "We describe an evaluation of spoken dialogue strategies designed using hierarchical reinforcement learning agents. The dialogue strategies were learnt in a simulated environment and tested in a laboratory setting with 32 users. These dialogues were used to evaluate three types of machine dialogue behaviour: hand-coded, fully-learnt and semi-learnt. These experiments also served to evaluate the realism of simulated dialogues using two proposed metrics contrasted with ‘PrecisionRecall’. The learnt dialogue behaviours used the Semi-Markov Decision Process (SMDP) model, and we report the first evaluation of this model in a realistic conversational environment. Experimental results in the travel planning domain provide evidence to support the following claims: (a) hierarchical semi-learnt dialogue agents are a better alternative (with higher overall performance) than deterministic or fully-learnt behaviour; (b) spoken dialogue strategies learnt with highly coherent user behaviour and conservative recognition error rates (keyword error rate of 20%) can outperform a reasonable hand-coded strategy; and (c) hierarchical reinforcement learning dialogue agents are feasible and promising for the (semi) automatic design of optimized dialogue behaviours in larger-scale systems. 2009 Elsevier Ltd. All rights reserved.",
"title": ""
}
] | [
{
"docid": "85da95f8d04a8c394c320d2cce25a606",
"text": "Improved numerical weather prediction simulations have led weather services to examine how and where human forecasters add value to forecast production. The Forecast Production Assistant (FPA) was developed with that in mind. The authors discuss the Forecast Generator (FOG), the first application developed on the FPA. FOG is a bilingual report generator that produces routine and special purpose forecast directly from the FPA's graphical weather predictions. Using rules and a natural-language generator, FOG converts weather maps into forecast text. The natural-language issues involved are relevant to anyone designing a similar system.<<ETX>>",
"title": ""
},
{
"docid": "5b08a93afae9cf64b5300c586bfb3fdc",
"text": "Social interactions are characterized by distinct forms of interdependence, each of which has unique effects on how behavior unfolds within the interaction. Despite this, little is known about the psychological mechanisms that allow people to detect and respond to the nature of interdependence in any given interaction. We propose that interdependence theory provides clues regarding the structure of interdependence in the human ancestral past. In turn, evolutionary psychology offers a framework for understanding the types of information processing mechanisms that could have been shaped under these recurring conditions. We synthesize and extend these two perspectives to introduce a new theory: functional interdependence theory (FIT). FIT can generate testable hypotheses about the function and structure of the psychological mechanisms for inferring interdependence. This new perspective offers insight into how people initiate and maintain cooperative relationships, select social partners and allies, and identify opportunities to signal social motives.",
"title": ""
},
{
"docid": "01b2c742693e24e431b1bb231ae8a135",
"text": "Over the years, software development failures is really a burning issue, might be ascribed to quite a number of attributes, of which, no-compliance of users requirements and using the non suitable technique to elicit user requirements are considered foremost. In order to address this issue and to facilitate system designers, this study had filtered and compared user requirements elicitation technique, based on principles of requirements engineering. This comparative study facilitates developers to build systems based on success stories, making use of a optimistic perspective for achieving a foreseeable future. This paper is aimed at enhancing processes of choosing a suitable technique to elicit user requirements; this is crucial to determine the requirements of the user, as it enables much better software development and does not waste resources unnecessarily. Basically, this study will complement the present approaches, by representing a optimistic and potential factor for every single method in requirements engineering, which results in much better user needs, and identifies novel and distinctive specifications. Keywords— Requirements Engineering, Requirements Elicitation Techniques, Conversational methods, Observational methods, Analytic methods, Synthetic methods.",
"title": ""
},
{
"docid": "c495fadfd4c3e17948e71591e84c3398",
"text": "A real-time, digital algorithm for pulse width modulation (PWM) with distortion-free baseband is developed in this paper. The algorithm not only eliminates the intrinsic baseband distortion of digital PWM but also avoids the appearance of side-band components of the carrier in the baseband even for low switching frequencies. Previous attempts to implement digital PWM with these spectral properties required several processors due to their complexity; the proposed algorithm uses only several FIR filters and a few multiplications and additions and therefore is implemented in real time on a standard DSP. The performance of the algorithm is compared with that of uniform, double-edge PWM modulator via experimental measurements for several bandlimited modulating signals.",
"title": ""
},
{
"docid": "aec5c475caa7f2e0490c871882e94363",
"text": "The use of prognostic methods in maintenance in order to predict remaining useful life is receiving more attention over the past years. The use of these techniques in maintenance decision making and optimization in multi-component systems is however a still underexplored area. The objective of this paper is to optimally plan maintenance for a multi-component system based on prognostic/predictive information while considering different component dependencies (i.e. economic, structural and stochastic dependence). Consequently, this paper presents a dynamic predictive maintenance policy for multi-component systems that minimizes the long-term mean maintenance cost per unit time. The proposed maintenance policy is a dynamic method as the maintenance schedule is updated when new information on the degradation and remaining useful life of components becomes available. The performance, regarding the objective of minimal long-term mean cost per unit time, of the developed dynamic predictive maintenance policy is compared to five other conventional maintenance policies, these are: block-based maintenance, age-based maintenance, age-based maintenance with grouping, inspection condition-based maintenance and continuous condition-based maintenance. The ability of the predictive maintenance policy to react to changing component deterioration and dependencies within a multi-component system is quantified and the results show significant cost",
"title": ""
},
{
"docid": "4e71be70e5c8c081c5ff60f8b6cb5449",
"text": "Spin-transfer torque magnetic random access memory (STT-MRAM) is considered as one of the most promising candidates to build up a true universal memory thanks to its fast write/read speed, infinite endurance, and nonvolatility. However, the conventional access architecture based on 1 transistor + 1 memory cell limits its storage density as the selection transistor should be large enough to ensure the write current higher than the critical current for the STT operation. This paper describes a design of cross-point architecture for STT-MRAM. The mean area per word corresponds to only two transistors, which are shared by a number of bits (e.g., 64). This leads to significant improvement of data density (e.g., 1.75 F2/bit). Special techniques are also presented to address the sneak currents and low-speed issues of conventional cross-point architecture, which are difficult to surmount and few efficient design solutions have been reported in the literature. By using an STT-MRAM SPICE model including precise experimental parameters and STMicroelectronics 65 nm technology, some chip characteristic results such as cell area, data access speed, and power have been calculated or simulated to demonstrate the expected performances of this new memory architecture.",
"title": ""
},
{
"docid": "2b109799a55bcb1c0592c02b60478975",
"text": "Zero-shot learning (ZSL) is to construct recognition models for unseen target classes that have no labeled samples for training. It utilizes the class attributes or semantic vectors as side information and transfers supervision information from related source classes with abundant labeled samples. Existing ZSL approaches adopt an intermediary embedding space to measure the similarity between a sample and the attributes of a target class to perform zero-shot classification. However, this way may suffer from the information loss caused by the embedding process and the similarity measure cannot fully make use of the data distribution. In this paper, we propose a novel approach which turns the ZSL problem into a conventional supervised learning problem by synthesizing samples for the unseen classes. Firstly, the probability distribution of an unseen class is estimated by using the knowledge from seen classes and the class attributes. Secondly, the samples are synthesized based on the distribution for the unseen class. Finally, we can train any supervised classifiers based on the synthesized samples. Extensive experiments on benchmarks demonstrate the superiority of the proposed approach to the state-of-the-art ZSL approaches.",
"title": ""
},
{
"docid": "bc43482b0299fc339cf13df6e9288410",
"text": "Acute auricular hematoma is common after blunt trauma to the side of the head. A network of vessels provides a rich blood supply to the ear, and the ear cartilage receives its nutrients from the overlying perichondrium. Prompt management of hematoma includes drainage and prevention of reaccumulation. If left untreated, an auricular hematoma can result in complications such as perichondritis, infection, and necrosis. Cauliflower ear may result from long-standing loss of blood supply to the ear cartilage and formation of neocartilage from disrupted perichondrium. Management of cauliflower ear involves excision of deformed cartilage and reshaping of the auricle.",
"title": ""
},
{
"docid": "e1c04d30c7b8f71d9c9b19cb2bb36a33",
"text": "This Guide has been written to provide guidance for individuals involved in curriculum design who wish to develop research skills and foster the attributes in medical undergraduates that help develop research. The Guide will provoke debate on an important subject, and although written specifically with undergraduate medical education in mind, we hope that it will be of interest to all those involved with other health professionals' education. Initially, the Guide describes why research skills and its related attributes are important to those pursuing a medical career. It also explores the reasons why research skills and an ethos of research should be instilled into professionals of the future. The Guide also tries to define what these skills and attributes should be for medical students and lays out the case for providing opportunities to develop research expertise in the undergraduate curriculum. Potential methods to encourage the development of research-related attributes are explored as are some suggestions as to how research skills could be taught and assessed within already busy curricula. This publication also discusses the real and potential barriers to developing research skills in undergraduate students, and suggests strategies to overcome or circumvent these. Whilst we anticipate that this Guide will appeal to all levels of expertise in terms of student research, we hope that, through the use of case studies, we will provide practical advice to those currently developing this area within their curriculum.",
"title": ""
},
{
"docid": "66d5101d55595754add37e9e50952056",
"text": "The cognitive neural prosthetic (CNP) is a very versatile method for assisting paralyzed patients and patients with amputations. The CNP records the cognitive state of the subject, rather than signals strictly related to motor execution or sensation. We review a number of high-level cortical signals and their application for CNPs, including intention, motor imagery, decision making, forward estimation, executive function, attention, learning, and multi-effector movement planning. CNPs are defined by the cognitive function they extract, not the cortical region from which the signals are recorded. However, some cortical areas may be better than others for particular applications. Signals can also be extracted in parallel from multiple cortical areas using multiple implants, which in many circumstances can increase the range of applications of CNPs. The CNP approach relies on scientific understanding of the neural processes involved in cognition, and many of the decoding algorithms it uses also have parallels to underlying neural circuit functions. 169 A nn u. R ev . P sy ch ol . 2 01 0. 61 :1 69 -1 90 . D ow nl oa de d fr om a rj ou rn al s. an nu al re vi ew s. or g by C al if or ni a In st itu te o f T ec hn ol og y on 0 1/ 03 /1 0. F or p er so na l u se o nl y. ANRV398-PS61-07 ARI 17 November 2009 19:51 Cognitive neural prosthetics (CNPs): instruments that consist of an array of electrodes, a decoding algorithm, and an external device controlled by the processed cognitive signal Decoding algorithms: computer algorithms that interpret neural signals for the purposes of understanding their function or for providing control signals to machines",
"title": ""
},
{
"docid": "a8e72235f2ec230a1be162fa6129db5e",
"text": "Lateral inhibition in top-down feedback is widely existing in visual neurobiology, but such an important mechanism has not be well explored yet in computer vision. In our recent research, we find that modeling lateral inhibition in convolutional neural network (LICNN) is very useful for visual attention and saliency detection. In this paper, we propose to formulate lateral inhibition inspired by the related studies from neurobiology, and embed it into the top-down gradient computation of a general CNN for classification, i.e. only category-level information is used. After this operation (only conducted once), the network has the ability to generate accurate category-specific attention maps. Further, we apply LICNN for weakly-supervised salient object detection. Extensive experimental studies on a set of databases, e.g., ECSSD, HKU-IS, PASCAL-S and DUT-OMRON, demonstrate the great advantage of LICNN which achieves the state-ofthe-art performance. It is especially impressive that LICNN with only category-level supervised information even outperforms some recent methods with segmentation-level super-",
"title": ""
},
{
"docid": "5c394c460f01c451e2ede526f73426ee",
"text": "Renal transplant recipients are at increased risk of bladder carcinoma. The aetiology is unknown but a polyoma virus (PV), BK virus (BKV), may play a role; urinary reactivation of this virus is common post-renal transplantation and PV large T-antigen (T-Ag) has transforming activity. In this study, we investigate the potential role of BKV in post-transplant urothelial carcinoma by immunostaining tumour tissue for PV T-Ag. There was no positivity for PV T-Ag in urothelial carcinomas from 20 non-transplant patients. Since 1990, 10 transplant recipients in our unit have developed urothelial carcinoma, and tumour tissue was available in eight recipients. Two patients were transplanted since the first case of PV nephropathy (PVN) was diagnosed in our unit in 2000 and both showed PV reactivation post-transplantation. In one of these patients, there was strong nuclear staining for PV T-Ag in tumour cells, with no staining of non-neoplastic urothelium. We conclude that PV infection is not associated with urothelial carcinoma in non-transplant patients, and is uncommon in transplant-associated tumours. Its presence in all tumour cells in one patient transplanted in the PVN era might suggest a possible role in tumorigenesis in that case.",
"title": ""
},
{
"docid": "186f2950bd4ce621eb0696c2fd09a468",
"text": "In this paper, I investigate the use of a disentangled VAE for downstream image classification tasks. I train a disentangled VAE in an unsupervised manner, and use the learned encoder as a feature extractor on top of which a linear classifier is learned. The models are trained and evaluated on the MNIST handwritten digits dataset. Experiments compared the disentangled VAE with both a standard (entangled) VAE and a vanilla supervised model. Results show that the disentangled VAE significantly outperforms the other two models when the proportion of labelled data is artificially reduced, while it loses this advantage when the amount of labelled data increases, and instead matches the performance of the other models. These results suggest that the disentangled VAE may be useful in situations where labelled data is scarce but unlabelled data is abundant.",
"title": ""
},
{
"docid": "538047fc099d0062ab100343b26f5cb7",
"text": "AIM\nTo examine the evidence on the association between cannabis and depression and evaluate competing explanations of the association.\n\n\nMETHODS\nA search of Medline, Psychinfo and EMBASE databases was conducted. All references in which the terms 'cannabis', 'marijuana' or 'cannabinoid', and in which the words 'depression/depressive disorder/depressed', 'mood', 'mood disorder' or 'dysthymia' were collected. Only research studies were reviewed. Case reports are not discussed.\n\n\nRESULTS\nThere was a modest association between heavy or problematic cannabis use and depression in cohort studies and well-designed cross-sectional studies in the general population. Little evidence was found for an association between depression and infrequent cannabis use. A number of studies found a modest association between early-onset, regular cannabis use and later depression, which persisted after controlling for potential confounding variables. There was little evidence of an increased risk of later cannabis use among people with depression and hence little support for the self-medication hypothesis. There have been a limited number of studies that have controlled for potential confounding variables in the association between heavy cannabis use and depression. These have found that the risk is much reduced by statistical control but a modest relationship remains.\n\n\nCONCLUSIONS\nHeavy cannabis use and depression are associated and evidence from longitudinal studies suggests that heavy cannabis use may increase depressive symptoms among some users. It is still too early, however, to rule out the hypothesis that the association is due to common social, family and contextual factors that increase risks of both heavy cannabis use and depression. Longitudinal studies and studies of twins discordant for heavy cannabis use and depression are needed to rule out common causes. If the relationship is causal, then on current patterns of cannabis use in the most developed societies cannabis use makes, at most, a modest contribution to the population prevalence of depression.",
"title": ""
},
{
"docid": "3b78223f5d11a56dc89a472daf23ca49",
"text": "Shadow maps provide a fast and convenient method of identifying shadows in scenes but can introduce aliasing. This paper introduces the Adaptive Shadow Map (ASM) as a solution to this problem. An ASM removes aliasing by resolving pixel size mismatches between the eye view and the light source view. It achieves this goal by storing the light source view (i.e., the shadow map for the light source) as a hierarchical grid structure as opposed to the conventional flat structure. As pixels are transformed from the eye view to the light source view, the ASM is refined to create higher-resolution pieces of the shadow map when needed. This is done by evaluating the contributions of shadow map pixels to the overall image quality. The improvement process is view-driven, progressive, and confined to a user-specifiable memory footprint. We show that ASMs enable dramatic improvements in shadow quality while maintaining interactive rates.",
"title": ""
},
{
"docid": "0e5a11ef4daeb969702e40ea0c50d7f3",
"text": "OBJECTIVES\nThe aim of this study was to assess the long-term safety and efficacy of the CYPHER (Cordis, Johnson and Johnson, Bridgewater, New Jersey) sirolimus-eluting coronary stent (SES) in percutaneous coronary intervention (PCI) for ST-segment elevation myocardial infarction (STEMI).\n\n\nBACKGROUND\nConcern over the safety of drug-eluting stents implanted during PCI for STEMI remains, and long-term follow-up from randomized trials are necessary. TYPHOON (Trial to assess the use of the cYPHer sirolimus-eluting stent in acute myocardial infarction treated with ballOON angioplasty) randomized 712 patients with STEMI treated by primary PCI to receive either SES (n = 355) or bare-metal stents (BMS) (n = 357). The primary end point, target vessel failure at 1 year, was significantly lower in the SES group than in the BMS group (7.3% vs. 14.3%, p = 0.004) with no increase in adverse events.\n\n\nMETHODS\nA 4-year follow-up was performed. Complete data were available in 501 patients (70%), and the survival status is known in 580 patients (81%).\n\n\nRESULTS\nFreedom from target lesion revascularization (TLR) at 4 years was significantly better in the SES group (92.4% vs. 85.1%; p = 0.002); there were no significant differences in freedom from cardiac death (97.6% and 95.9%; p = 0.37) or freedom from repeat myocardial infarction (94.8% and 95.6%; p = 0.85) between the SES and BMS groups. No difference in definite/probable stent thrombosis was noted at 4 years (SES: 4.4%, BMS: 4.8%, p = 0.83). In the 580 patients with known survival status at 4 years, the all-cause death rate was 5.8% in the SES and 7.0% in the BMS group (p = 0.61).\n\n\nCONCLUSIONS\nIn the 70% of patients with complete follow-up at 4 years, SES demonstrated sustained efficacy to reduce TLR with no difference in death, repeat myocardial infarction or stent thrombosis. (The Study to Assess AMI Treated With Balloon Angioplasty [TYPHOON]; NCT00232830).",
"title": ""
},
{
"docid": "77bbd6d3e1f1ae64bda32cd057cf0580",
"text": "Although great progress has been made in automatic speech recognition, significant performance degradation still exists in noisy environments. Recently, very deep convolutional neural networks CNNs have been successfully applied to computer vision and speech recognition tasks. Based on our previous work on very deep CNNs, in this paper this architecture is further developed to improve recognition accuracy for noise robust speech recognition. In the proposed very deep CNN architecture, we study the best configuration for the sizes of filters, pooling, and input feature maps: the sizes of filters and poolings are reduced and dimensions of input features are extended to allow for adding more convolutional layers. Then the appropriate pooling, padding, and input feature map selection strategies are investigated and applied to the very deep CNN to make it more robust for speech recognition. In addition, an in-depth analysis of the architecture reveals key characteristics, such as compact model scale, fast convergence speed, and noise robustness. The proposed new model is evaluated on two tasks: Aurora4 task with multiple additive noise types and channel mismatch, and the AMI meeting transcription task with significant reverberation. Experiments on both tasks show that the proposed very deep CNNs can significantly reduce word error rate WER for noise robust speech recognition. The best architecture obtains a 10.0% relative reduction over the traditional CNN on AMI, competitive with the long short-term memory recurrent neural networks LSTM-RNN acoustic model. On Aurora4, even without feature enhancement, model adaptation, and sequence training, it achieves a WER of 8.81%, a 17.0% relative improvement over the LSTM-RNN. To our knowledge, this is the best published result on Aurora4.",
"title": ""
},
{
"docid": "8a9680ae0d35a1c53773ccf7dcef4df7",
"text": "Support Vector Machines SVMs have proven to be highly e ective for learning many real world datasets but have failed to establish them selves as common machine learning tools This is partly due to the fact that they are not easy to implement and their standard imple mentation requires the use of optimization packages In this paper we present simple iterative algorithms for training support vector ma chines which are easy to implement and guaranteed to converge to the optimal solution Furthermore we provide a technique for automati cally nding the kernel parameter and best learning rate Extensive experiments with real datasets are provided showing that these al gorithms compare well with standard implementations of SVMs in terms of generalisation accuracy and computational cost while being signi cantly simpler to implement",
"title": ""
},
{
"docid": "2d82220d88794093209aa4b8151e70d9",
"text": "Iterative Hard Thresholding (IHT) is a class of projected gradient descent methods for optimizing sparsity-constrained minimization models, with the best known efficiency and scalability in practice. As far as we know, the existing IHT-style methods are designed for sparse minimization in primal form. It remains open to explore duality theory and algorithms in such a non-convex and NP-hard problem setting. In this paper, we bridge this gap by establishing a duality theory for sparsity-constrained minimization with `2-regularized loss function and proposing an IHT-style algorithm for dual maximization. Our sparse duality theory provides a set of sufficient and necessary conditions under which the original NP-hard/non-convex problem can be equivalently solved in a dual formulation. The proposed dual IHT algorithm is a super-gradient method for maximizing the non-smooth dual objective. An interesting finding is that the sparse recovery performance of dual IHT is invariant to the Restricted Isometry Property (RIP), which is required by virtually all the existing primal IHT algorithms without sparsity relaxation. Moreover, a stochastic variant of dual IHT is proposed for large-scale stochastic optimization. Numerical results demonstrate the superiority of dual IHT algorithms to the state-of-the-art primal IHT-style algorithms in model estimation accuracy and computational efficiency.",
"title": ""
},
{
"docid": "225ac2816e26f156b16ad65401fcbaf6",
"text": "This paper investigates how internet users’ perception of control over their personal information affects how likely they are to click on online advertising on a social networking website. The paper uses data from a randomized field experiment that examined the effectiveness of personalizing ad text with user-posted personal information relative to generic text. The website gave users more control over their personally identifiable information in the middle of the field test. However, the website did not change how advertisers used data to target and personalize ads. Before the policy change, personalized ads did not perform particularly well. However, after this enhancement of perceived control over privacy, users were nearly twice as likely to click on personalized ads. Ads that targeted but did not use personalized text remained unchanged in effectiveness. The increase in effectiveness was larger for ads that used more unique private information to personalize their message and for target groups who were more likely to use opt-out privacy settings.",
"title": ""
}
] | scidocsrr |
b35e238b5c76fec76d33eb3e0dae3c06 | Using trust for collaborative filtering in eCommerce | [
{
"docid": "6c3f320eda59626bedb2aad4e527c196",
"text": "Though research on the Semantic Web has progressed at a steady pace, its promise has yet to be realized. One major difficulty is that, by its very nature, the Semantic Web is a large, uncensored system to which anyone may contribute. This raises the question of how much credence to give each source. We cannot expect each user to know the trustworthiness of each source, nor would we want to assign top-down or global credibility values due to the subjective nature of trust. We tackle this problem by employing a web of trust, in which each user provides personal trust values for a small number of other users. We compose these trusts to compute the trust a user should place in any other user in the network. A user is not assigned a single trust rank. Instead, different users may have different trust values for the same user. We define properties for combination functions which merge such trusts, and define a class of functions for which merging may be done locally while maintaining these properties. We give examples of specific functions and apply them to data from Epinions and our BibServ bibliography server. Experiments confirm that the methods are robust to noise, and do not put unreasonable expectations on users. We hope that these methods will help move the Semantic Web closer to fulfilling its promise.",
"title": ""
},
{
"docid": "da63c4d9cc2f3278126490de54c34ce5",
"text": "The growth of Web-based social networking and the properties of those networks have created great potential for producing intelligent software that integrates a user's social network and preferences. Our research looks particularly at assigning trust in Web-based social networks and investigates how trust information can be mined and integrated into applications. This article introduces a definition of trust suitable for use in Web-based social networks with a discussion of the properties that will influence its use in computation. We then present two algorithms for inferring trust relationships between individuals that are not directly connected in the network. Both algorithms are shown theoretically and through simulation to produce calculated trust values that are highly accurate.. We then present TrustMail, a prototype email client that uses variations on these algorithms to score email messages in the user's inbox based on the user's participation and ratings in a trust network.",
"title": ""
}
] | [
{
"docid": "c077231164a8a58f339f80b83e5b4025",
"text": "It is widely believed that refactoring improves software quality and developer productivity. However, few empirical studies quantitatively assess refactoring benefits or investigate developers' perception towards these benefits. This paper presents a field study of refactoring benefits and challenges at Microsoft through three complementary study methods: a survey, semi-structured interviews with professional software engineers, and quantitative analysis of version history data. Our survey finds that the refactoring definition in practice is not confined to a rigorous definition of semantics-preserving code transformations and that developers perceive that refactoring involves substantial cost and risks. We also report on interviews with a designated refactoring team that has led a multi-year, centralized effort on refactoring Windows. The quantitative analysis of Windows 7 version history finds that the binary modules refactored by this team experienced significant reduction in the number of inter-module dependencies and post-release defects, indicating a visible benefit of refactoring.",
"title": ""
},
{
"docid": "6a5abcabca3d4bb0696a9f19dd5e358f",
"text": "Distributional models of meaning (see Turney and Pantel (2010) for an overview) are based on the pragmatic hypothesis that meanings of words are deducible from the contexts in which they are often used. This hypothesis is formalized using vector spaces, wherein a word is represented as a vector of cooccurrence statistics with a set of context dimensions. With the increasing availability of large corpora of text, these models constitute a well-established NLP technique for evaluating semantic similarities. Their methods however do not scale up to larger text constituents (i.e. phrases and sentences), since the uniqueness of multi-word expressions would inevitably lead to data sparsity problems, hence to unreliable vectorial representations. The problem is usually addressed by the provision of a compositional function, the purpose of which is to prepare a vector for a phrase or sentence by combining the vectors of the words therein. This line of research has led to the field of compositional distributional models of meaning (CDMs), where reliable semantic representations are provided for phrases, sentences, and discourse units such as dialogue utterances and even paragraphs or documents. As a result, these models have found applications in various NLP tasks, for example paraphrase detection; sentiment analysis; dialogue act tagging; machine translation; textual entailment; and so on, in many cases presenting stateof-the-art performance. Being the natural evolution of the traditional and well-studied distributional models at the word level, CDMs are steadily evolving to a popular and active area of NLP. The topic has inspired a number of workshops and tutorials in top CL conferences such as ACL and EMNLP, special issues at high-profile journals, and it attracts a substantial amount of submissions in annual NLP conferences. The approaches employed by CDMs are as much as diverse as statistical machine leaning (Baroni and Zamparelli, 2010), linear algebra (Mitchell and Lapata, 2010), simple category theory (Coecke et al., 2010), or complex deep learning architectures based on neural networks and borrowing ideas from image processing (Socher et al., 2012; Kalchbrenner et al., 2014; Cheng and Kartsaklis, 2015). Furthermore, they create opportunities for interesting novel research, related for example to efficient methods for creating tensors for relational words such as verbs and adjectives (Grefenstette and Sadrzadeh, 2011), the treatment of logical and functional words in a distributional setting (Sadrzadeh et al., 2013; Sadrzadeh et al., 2014), or the role of polysemy and the way it affects composition (Kartsaklis and Sadrzadeh, 2013; Cheng and Kartsaklis, 2015). The purpose of this tutorial is to provide a concise introduction to this emerging field, presenting the different classes of CDMs and the various issues related to them in sufficient detail. The goal is to allow the student to understand the general philosophy of each approach, as well as its advantages and limitations with regard to the other alternatives.",
"title": ""
},
{
"docid": "6ae4be7a85f7702ae76649d052d7c37d",
"text": "information technologies as “the ability to reformulate knowledge, to express oneself creatively and appropriately, and to produce and generate information (rather than simply to comprehend it).” Fluency, according to the report, “goes beyond traditional notions of computer literacy...[It] requires a deeper, more essential understanding and mastery of information technology for information processing, communication, and problem solving than does computer literacy as traditionally defined.” Scratch is a networked, media-rich programming environment designed to enhance the development of technological fluency at after-school centers in economically-disadvantaged communities. Just as the LEGO MindStorms robotics kit added programmability to an activity deeply rooted in youth culture (building with LEGO bricks), Scratch adds programmability to the media-rich and network-based activities that are most popular among youth at afterschool computer centers. Taking advantage of the extraordinary processing power of current computers, Scratch supports new programming paradigms and activities that were previously infeasible, making it better positioned to succeed than previous attempts to introduce programming to youth. In the past, most initiatives to improve technological fluency have focused on school classrooms. But there is a growing recognition that after-school centers and other informal learning settings can play an important role, especially in economicallydisadvantaged communities, where schools typically have few technological resources and many young people are alienated from the formal education system. Our working hypothesis is that, as kids work on personally meaningful Scratch projects such as animated stories, games, and interactive art, they will develop technological fluency, mathematical and problem solving skills, and a justifiable selfconfidence that will serve them well in the wider spheres of their lives. During the past decade, more than 2000 community technology centers (CTCs) opened in the United States, specifically to provide better access to technology in economically-disadvantaged communities. But most CTCs support only the most basic computer activities such as word processing, email, and Web browsing, so participants do not gain the type of fluency described in the NRC report. Similarly, many after-school centers (which, unlike CTCs, focus exclusively on youth) have begun to introduce computers, but they too tend to offer only introductory computer activities, sometimes augmented by educational games.",
"title": ""
},
{
"docid": "6c018b35bf2172f239b2620abab8fd2f",
"text": "Cloud computing is quickly becoming the platform of choice for many web services. Virtualization is the key underlying technology enabling cloud providers to host services for a large number of customers. Unfortunately, virtualization software is large, complex, and has a considerable attack surface. As such, it is prone to bugs and vulnerabilities that a malicious virtual machine (VM) can exploit to attack or obstruct other VMs -- a major concern for organizations wishing to move to the cloud. In contrast to previous work on hardening or minimizing the virtualization software, we eliminate the hypervisor attack surface by enabling the guest VMs to run natively on the underlying hardware while maintaining the ability to run multiple VMs concurrently. Our NoHype system embodies four key ideas: (i) pre-allocation of processor cores and memory resources, (ii) use of virtualized I/O devices, (iii) minor modifications to the guest OS to perform all system discovery during bootup, and (iv) avoiding indirection by bringing the guest virtual machine in more direct contact with the underlying hardware. Hence, no hypervisor is needed to allocate resources dynamically, emulate I/O devices, support system discovery after bootup, or map interrupts and other identifiers. NoHype capitalizes on the unique use model in cloud computing, where customers specify resource requirements ahead of time and providers offer a suite of guest OS kernels. Our system supports multiple tenants and capabilities commonly found in hosted cloud infrastructures. Our prototype utilizes Xen 4.0 to prepare the environment for guest VMs, and a slightly modified version of Linux 2.6 for the guest OS. Our evaluation with both SPEC and Apache benchmarks shows a roughly 1% performance gain when running applications on NoHype compared to running them on top of Xen 4.0. Our security analysis shows that, while there are some minor limitations with cur- rent commodity hardware, NoHype is a significant advance in the security of cloud computing.",
"title": ""
},
{
"docid": "1ebb333d5a72c649cd7d7986f5bf6975",
"text": "\"Of what a strange nature is knowledge! It clings to the mind, when it has once seized on it, like a lichen on the rock,\" Abstract We describe a theoretical system intended to facilitate the use of knowledge In an understand ing system. The notion of script is introduced to account for knowledge about mundane situations. A program, SAM, is capable of using scripts to under stand. The notion of plans is introduced to ac count for general knowledge about novel situa tions. I. Preface In an attempt to provide theory where there have been mostly unrelated systems, Minsky (1974) recently described the as fitting into the notion of \"frames.\" Minsky at tempted to relate this work, in what is essentially language processing, to areas of vision research that conform to the same notion. Mlnsky's frames paper has created quite a stir in AI and some immediate spinoff research along the lines of developing frames manipulators (e.g. Bobrow, 1975; Winograd, 1975). We find that we agree with much of what Minsky said about frames and with his characterization of our own work. The frames idea is so general, however, that It does not lend itself to applications without further specialization. This paper is an attempt to devel op further the lines of thought set out in Schank (1975a) and Abelson (1973; 1975a). The ideas pre sented here can be viewed as a specialization of the frame idea. We shall refer to our central constructs as \"scripts.\" II. The Problem Researchers in natural language understanding have felt for some time that the eventual limit on the solution of our problem will be our ability to characterize world knowledge. Various researchers have approached world knowledge in various ways. Winograd (1972) dealt with the problem by severely restricting the world. This approach had the po sitive effect of producing a working system and the negative effect of producing one that was only minimally extendable. Charniak (1972) approached the problem from the other end entirely and has made some interesting first steps, but because his work is not grounded in any representational sys tem or any working computational system the res triction of world knowledge need not critically concern him. Our feeling is that an effective characteri zation of knowledge can result in a real under standing system in the not too distant future. We expect that programs based on the theory we out …",
"title": ""
},
{
"docid": "8a5bbfcb8084c0b331e18dcf64cdf915",
"text": "This paper describes wildcards, a new language construct designed to increase the flexibility of object-oriented type systems with parameterized classes. Based on the notion of use-site variance, wildcards provide a type safe abstraction over different instantiations of parameterized classes, by using '?' to denote unspecified type arguments. Thus they essentially unify the distinct families of classes often introduced by parametric polymorphism. Wildcards are implemented as part of the upcoming addition of generics to the Java™ programming language, and will thus be deployed world-wide as part of the reference implementation of the Java compiler javac available from Sun Microsystems, Inc. By providing a richer type system, wildcards allow for an improved type inference scheme for polymorphic method calls. Moreover, by means of a novel notion of wildcard capture, polymorphic methods can be used to give symbolic names to unspecified types, in a manner similar to the \"open\" construct known from existential types. Wildcards show up in numerous places in the Java Platform APIs of the upcoming release, and some of the examples in this paper are taken from these APIs.",
"title": ""
},
{
"docid": "1912f9ad509e446d3e34e3c6dccd4c78",
"text": "Lumbar disc herniation is a common male disease. In the past, More academic attention was directed to its relationship with lumbago and leg pain than to its association with andrological diseases. Studies show that central lumber intervertebral disc herniation may cause cauda equina injury and result in premature ejaculation, erectile dysfunction, chronic pelvic pain syndrome, priapism, and emission. This article presents an overview on the correlation between central lumbar intervertebral disc herniation and andrological diseases, focusing on the aspects of etiology, pathology, and clinical progress, hoping to invite more attention from andrological and osteological clinicians.",
"title": ""
},
{
"docid": "55b88b38dbde4d57fddb18d487099fc6",
"text": "The evaluation of algorithms and techniques to implement intrusion detection systems heavily rely on the existence of well designed datasets. In the last years, a lot of efforts have been done toward building these datasets. Yet, there is still room to improve. In this paper, a comprehensive review of existing datasets is first done, making emphasis on their main shortcomings. Then, we present a new dataset that is built with real traffic and up-to-date attacks. The main advantage of this dataset over previous ones is its usefulness for evaluating IDSs that consider long-term evolution and traffic periodicity. Models that consider differences in daytime/nighttime or weekdays/weekends can also be trained and evaluated with it. We discuss all the requirements for a modern IDS evaluation dataset and analyze how the one presented here meets the different needs. © 2017 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "f82a57baca9a0381c9b2af0368a5531e",
"text": "We tested the hypothesis derived from eye blink literature that when liars experience cognitive demand, their lies would be associated with a decrease in eye blinks, directly followed by an increase in eye blinks when the demand has ceased after the lie is told. A total of 13 liars and 13 truth tellers lied or told the truth in a target period; liars and truth tellers both told the truth in two baseline periods. Their eye blinks during the target and baseline periods and directly after the target period (target offset period) were recorded. The predicted pattern (compared to the baseline periods, a decrease in eye blinks during the target period and an increase in eye blinks during the target offset period) was found in liars and was strikingly different from the pattern obtained in truth tellers. They showed an increase in eye blinks during the target period compared to the baseline periods, whereas their pattern of eye blinks in the target offset period did not differ from baseline periods. The implications for lie detection are discussed.",
"title": ""
},
{
"docid": "e4a74019c34413f8ace000512ab26da0",
"text": "Scaling the transaction throughput of decentralized blockchain ledgers such as Bitcoin and Ethereum has been an ongoing challenge. Two-party duplex payment channels have been designed and used as building blocks to construct linked payment networks, which allow atomic and trust-free payments between parties without exhausting the resources of the blockchain.\n Once a payment channel, however, is depleted (e.g., because transactions were mostly unidirectional) the channel would need to be closed and re-funded to allow for new transactions. Users are envisioned to entertain multiple payment channels with different entities, and as such, instead of refunding a channel (which incurs costly on-chain transactions), a user should be able to leverage his existing channels to rebalance a poorly funded channel.\n To the best of our knowledge, we present the first solution that allows an arbitrary set of users in a payment channel network to securely rebalance their channels, according to the preferences of the channel owners. Except in the case of disputes (similar to conventional payment channels), our solution does not require on-chain transactions and therefore increases the scalability of existing blockchains. In our security analysis, we show that an honest participant cannot lose any of its funds while rebalancing. We finally provide a proof of concept implementation and evaluation for the Ethereum network.",
"title": ""
},
{
"docid": "fc3283b1d81de45772ec730c1f5185f1",
"text": "In this paper, three different techniques which can be used for control of three phase PWM Rectifier are discussed. Those three control techniques are Direct Power Control, Indirect Power Control or Voltage Oriented Control and Hysteresis Control. The main aim of this paper is to compare and establish the merits and demerits of each technique in various aspects mainly regarding switching frequency hence switching loss, computation and transient state behavior. Each control method is studied in detail and simulated using Matlab/Simulink in order to make the comparison.",
"title": ""
},
{
"docid": "ee045772d55000b6f2d3f7469a4161b1",
"text": "Although prior research has addressed the influence of corporate social responsibility (CSR) on perceived customer responses, it is not clear whether CSR affects market value of the firm. This study develops and tests a conceptual framework, which predicts that (1) customer satisfaction partially mediates the relationship between CSR and firm market value (i.e., Tobin’s q and stock return), (2) corporate abilities (innovativeness capability and product quality) moderate the financial returns to CSR, and (3) these moderated relationships are mediated by customer satisfaction. Based on a large-scale secondary dataset, the results show support for this framework. Interestingly, it is found that in firms with low innovativeness capability, CSR actually reduces customer satisfaction levels and, through the lowered satisfaction, harms market value. The uncovered mediated and asymmetrically moderated results offer important implications for marketing theory and practice. In today’s competitive market environment, corporate social responsibility (CSR) represents a high-profile notion that has strategic importance to many companies. As many as 90% of the Fortune 500 companies now have explicit CSR initiatives (Kotler and Lee 2004; Lichtenstein et al. 2004). According to a recent special report by BusinessWeek (2005a, p.72), large companies disclosed substantial investments in CSR initiatives (i.e., Target’s donation of $107.8 million in CSR represents 3.6% of its pretax profits, with GM $51.2 million at 2.7%, General Mills $60.3 million at 3.2%, Merck $921million at 11.3%, HCA $926 million at 43.3%). By dedicating everincreasing amounts to cash donations, in-kind contributions, cause marketing, and employee volunteerism programs, companies are acting on the premise that CSR is not merely the “right thing to do,” but also “the smart thing to do” (Smith 2003). Importantly, along with increasing media coverage of CSR issues, companies themselves are also taking direct and visible steps to communicate their CSR initiatives to various stakeholders including consumers. A decade ago, Drumwright (1996) observed that advertising with a social dimension was on the rise. The trend seems to continue. Many companies, including the likes of Target and Walmart, have funded large national ad campaigns promoting their good works. The October 2005 issue of In Style magazine alone carried more than 25 “cause” ads. Indeed, consumers seem to be taking notice: whereas in 1993 only 26% of individuals surveyed by Cone Communications could name a company as a strong corporate citizen, by 2004, the percentage surged to as high as 80% (BusinessWeek 2005a). Motivated, in part, by this mounting importance of CSR in practice, several marketing studies have found that social responsibility programs have a significant influence on a number of customer-related outcomes (Bhattacharya and Sen 2004). More specifically, based on lab experiments, CSR is reported to directly or indirectly impact consumer product responses",
"title": ""
},
{
"docid": "f9c938a98621f901c404d69a402647c7",
"text": "The growing popularity of virtual machines is pushing the demand for high performance communication between them. Past solutions have seen the use of hardware assistance, in the form of \"PCI passthrough\" (dedicating parts of physical NICs to each virtual machine) and even bouncing traffic through physical switches to handle data forwarding and replication.\n In this paper we show that, with a proper design, very high speed communication between virtual machines can be achieved completely in software. Our architecture, called VALE, implements a Virtual Local Ethernet that can be used by virtual machines, such as QEMU, KVM and others, as well as by regular processes. VALE achieves a throughput of over 17 million packets per second (Mpps) between host processes, and over 2 Mpps between QEMU instances, without any hardware assistance.\n VALE is available for both FreeBSD and Linux hosts, and is implemented as a kernel module that extends our recently proposed netmap framework, and uses similar techniques to achieve high packet rates.",
"title": ""
},
{
"docid": "16d2e0605d45c69302c71b8434b7a23a",
"text": "Emotions play an important role in human cognition, perception, decision making, and interaction. This paper presents a six-layer biologically inspired feedforward neural network to discriminate human emotions from EEG. The neural network comprises a shift register memory after spectral filtering for the input layer, and the estimation of coherence between each pair of input signals for the hidden layer. EEG data are collected from 57 healthy participants from eight locations while subjected to audio-visual stimuli. Discrimination of emotions from EEG is investigated based on valence and arousal levels. The accuracy of the proposed neural network is compared with various feature extraction methods and feedforward learning algorithms. The results showed that the highest accuracy is achieved when using the proposed neural network with a type of radial basis function.",
"title": ""
},
{
"docid": "a18da0c7d655fee44eebdf61c7371022",
"text": "This paper describes and compares a set of no-reference quality assessment algorithms for H.264/AVC encoded video sequences. These algorithms have in common a module that estimates the error due to lossy encoding of the video signals, using only information available on the compressed bitstream. In order to obtain perceived quality scores from the estimated error, three methods are presented: i) to weight the error estimates according to a perceptual model; ii) to linearly combine the mean squared error (MSE) estimates with additional video features; iii) to use MSE estimates as the input of a logistic function. The performances of the algorithms are evaluated using cross-validation procedures and the one showing the best performance is also in a preliminary study of quality assessment in the presence of transmission losses.",
"title": ""
},
{
"docid": "550e19033cb00938aed89eb3cce50a76",
"text": "This paper presents a high gain wide band 2×2 microstrip array antenna. The microstrip array antenna (MSA) is fabricated on inexpensive FR4 substrate and placed 1mm above ground plane to improve the bandwidth and efficiency of the antenna. A reactive impedance surface (RIS) consisting of 13×13 array of 4 mm square patches with inter-element spacing of 1 mm is fabricated on the bottom side of FR4 substrate. RIS reduces the coupling between the ground plane and MSA array and therefore increases the efficiency of antenna. It enhances the bandwidth and gain of the antenna. RIS also helps in reduction of SLL and cross polarization. This MSA array with RIS is place in a Fabry Perot cavity (FPC) resonator to enhance the gain of the antenna. 2×2 and 4×4 array of square parasitic patches are fed by MSA array fabricated on a FR4 superstrate which forms the partially reflecting surface of FPC. The FR4 superstrate layer is supported with help of dielectric rods at the edges with air at about λ0/2 from ground plane. A microstrip feed line network is designed and the printed MSA array is fed by a 50 Ω coaxial probe. The VSWR is <; 2 is obtained over 5.725-6.4 GHz, which covers 5.725-5.875 GHz ISM WLAN frequency band and 5.9-6.4 GHz satellite uplink C band. The antenna gain increases from 12 dB to 15.8 dB as 4×4 square parasitic patches are fabricated on superstrate layer. The gain variation is less than 2 dB over the entire band. The antenna structure provides SLL and cross polarization less than -2ο dB, front to back lobe ratio higher than 20 dB and more than 70 % antenna efficiency. A prototype structure is realized and tested. The measured results satisfy with the simulation results. The antenna can be a suitable candidate for access point, satellite communication, mobile base station antenna and terrestrial communication system.",
"title": ""
},
{
"docid": "1615e93f027c6f6f400ce1cc7a1bb8aa",
"text": "In the recent years, we have witnessed the rapid adoption of social media platforms, such as Twitter, Facebook and YouTube, and their use as part of the everyday life of billions of people worldwide. Given the habit of people to use these platforms to share thoughts, daily activities and experiences it is not surprising that the amount of user generated content has reached unprecedented levels, with a substantial part of that content being related to real-world events, i.e. actions or occurrences taking place at a certain time and location. Figure 1 illustrates three main categories of events along with characteristic photos from Flickr for each of them: a) news-related events, e.g. demonstrations, riots, public speeches, natural disasters, terrorist attacks, b) entertainment events, e.g. sports, music, live shows, exhibitions, festivals, and c) personal events, e.g. wedding, birthday, graduation ceremonies, vacations, and going out. Depending on the event, different types of multimedia and social media platform are more popular. For instance, news-related events are extensively published in the form of text updates, images and videos on Twitter and YouTube, entertainment and social events are often captured in the form of images and videos and shared on Flickr and YouTube, while personal events are mostly represented by images that are shared on Facebook and Instagram. Given the key role of events in our life, the task of annotating and organizing social media content around them is of crucial importance for ensuring real-time and future access to multimedia content about an event of interest. However, the vast amount of noisy and non-informative social media posts, in conjunction with their large scale, makes that task very challenging. For instance, in the case of popular events that are covered live on Twitter, there are often millions of posts referring to a single event, as in the case of the World Cup Final 2014 between Brazil and Germany, which produced approximately 32.1 million tweets with a rate of 618,725 tweets per minute. Processing, aggregating and selecting the most informative, entertaining and representative tweets among such a large dataset is a very challenging multimedia retrieval problem. In other",
"title": ""
},
{
"docid": "82fdd14f7766e8afe9b11a255073b3ce",
"text": "We develop a stochastic model of a simple protocol for the self-configuration of IP network interfaces. We describe the mean cost that incurs during a selfconfiguration phase and describe a trade-off between reliability and speed. We derive a cost function which we use to derive optimal parameters. We show that optimal cost and optimal reliability are qualities that cannot be achieved at the same time. Keywords—Embedded control software; IP; zeroconf protocol; cost optimisation",
"title": ""
},
{
"docid": "7a62e5a78eabbcbc567d5538a2f35434",
"text": "This paper presents a system for a design and implementation of Optical Arabic Braille Recognition(OBR) with voice and text conversion. The implemented algorithm based on a comparison of Braille dot position extraction in each cell with the database generated for each Braille cell. Many digital image processing have been performed on the Braille scanned document like binary conversion, edge detection, holes filling and finally image filtering before dot extraction. The work in this paper also involved a unique decimal code generation for each Braille cell used as a base for word reconstruction with the corresponding voice and text conversion database. The implemented algorithm achieve expected result through letter and words recognition and transcription accuracy over 99% and average processing time around 32.6 sec per page. using matlab environmemt",
"title": ""
}
] | scidocsrr |
20c49ce8a94be9f93d4a86ed7e1f84b6 | Context-Aware Correlation Filter Tracking | [
{
"docid": "d349cf385434027b4532080819d5745f",
"text": "Although not commonly used, correlation filters can track complex objects through rotations, occlusions and other distractions at over 20 times the rate of current state-of-the-art techniques. The oldest and simplest correlation filters use simple templates and generally fail when applied to tracking. More modern approaches such as ASEF and UMACE perform better, but their training needs are poorly suited to tracking. Visual tracking requires robust filters to be trained from a single frame and dynamically adapted as the appearance of the target object changes. This paper presents a new type of correlation filter, a Minimum Output Sum of Squared Error (MOSSE) filter, which produces stable correlation filters when initialized using a single frame. A tracker based upon MOSSE filters is robust to variations in lighting, scale, pose, and nonrigid deformations while operating at 669 frames per second. Occlusion is detected based upon the peak-to-sidelobe ratio, which enables the tracker to pause and resume where it left off when the object reappears.",
"title": ""
},
{
"docid": "aee250663a05106c4c0fad9d0f72828c",
"text": "Robust and accurate visual tracking is one of the most challenging computer vision problems. Due to the inherent lack of training data, a robust approach for constructing a target appearance model is crucial. Recently, discriminatively learned correlation filters (DCF) have been successfully applied to address this problem for tracking. These methods utilize a periodic assumption of the training samples to efficiently learn a classifier on all patches in the target neighborhood. However, the periodic assumption also introduces unwanted boundary effects, which severely degrade the quality of the tracking model. We propose Spatially Regularized Discriminative Correlation Filters (SRDCF) for tracking. A spatial regularization component is introduced in the learning to penalize correlation filter coefficients depending on their spatial location. Our SRDCF formulation allows the correlation filters to be learned on a significantly larger set of negative training samples, without corrupting the positive samples. We further propose an optimization strategy, based on the iterative Gauss-Seidel method, for efficient online learning of our SRDCF. Experiments are performed on four benchmark datasets: OTB-2013, ALOV++, OTB-2015, and VOT2014. Our approach achieves state-of-the-art results on all four datasets. On OTB-2013 and OTB-2015, we obtain an absolute gain of 8.0% and 8.2% respectively, in mean overlap precision, compared to the best existing trackers.",
"title": ""
}
] | [
{
"docid": "49736d49ee7b777523064efcd99c5cbb",
"text": "Immune checkpoint antagonists (CTLA-4 and PD-1/PD-L1) and CAR T-cell therapies generate unparalleled durable responses in several cancers and have firmly established immunotherapy as a new pillar of cancer therapy. To extend the impact of immunotherapy to more patients and a broader range of cancers, targeting additional mechanisms of tumor immune evasion will be critical. Adenosine signaling has emerged as a key metabolic pathway that regulates tumor immunity. Adenosine is an immunosuppressive metabolite produced at high levels within the tumor microenvironment. Hypoxia, high cell turnover, and expression of CD39 and CD73 are important factors in adenosine production. Adenosine signaling through the A2a receptor expressed on immune cells potently dampens immune responses in inflamed tissues. In this article, we will describe the role of adenosine signaling in regulating tumor immunity, highlighting potential therapeutic targets in the pathway. We will also review preclinical data for each target and provide an update of current clinical activity within the field. Together, current data suggest that rational combination immunotherapy strategies that incorporate inhibitors of the hypoxia-CD39-CD73-A2aR pathway have great promise for further improving clinical outcomes in cancer patients.",
"title": ""
},
{
"docid": "721ff703dfafad6b1b330226c36ed641",
"text": "In the Narrowband Internet-of-Things (NB-IoT) LTE systems, the device shall be able to blindly lock to a cell within 200-KHz bandwidth and with only one receive antenna. In addition, the device is required to setup a call at a signal-to-noise ratio (SNR) of −12.6 dB in the extended coverage mode. A new set of synchronization signals have been introduced to provide data-aided synchronization and cell search. In this letter, we present a procedure for NB-IoT cell search and initial synchronization subject to the new challenges given the new specifications. Simulation results show that this method not only provides the required performance at very low SNRs, but also can be quickly camped on a cell, if any.",
"title": ""
},
{
"docid": "6420f394cb02e9415b574720a9c64e7f",
"text": "Interleaved power converter topologies have received increasing attention in recent years for high power and high performance applications. The advantages of interleaved boost converters include increased efficiency, reduced size, reduced electromagnetic emission, faster transient response, and improved reliability. The front end inductors in an interleaved boost converter are magnetically coupled to improve electrical performance and reduce size and weight. Compared to a direct coupled configuration, inverse coupling provides the advantages of lower inductor ripple current and negligible dc flux levels in the core. In this paper, we explore the possible advantages of core geometry on core losses and converter efficiency. Analysis of FEA simulation and empirical characterization data indicates a potential superiority of a square core, with symmetric 45deg energy storage corner gaps, for providing both ac flux balance and maximum dc flux cancellation when wound in an inverse coupled configuration.",
"title": ""
},
{
"docid": "9a2d79d9df9e596e26f8481697833041",
"text": "Novelty search is a recent artificial evolution technique that challenges traditional evolutionary approaches. In novelty search, solutions are rewarded based on their novelty, rather than their quality with respect to a predefined objective. The lack of a predefined objective precludes premature convergence caused by a deceptive fitness function. In this paper, we apply novelty search combined with NEAT to the evolution of neural controllers for homogeneous swarms of robots. Our empirical study is conducted in simulation, and we use a common swarm robotics task—aggregation, and a more challenging task—sharing of an energy recharging station. Our results show that novelty search is unaffected by deception, is notably effective in bootstrapping evolution, can find solutions with lower complexity than fitness-based evolution, and can find a broad diversity of solutions for the same task. Even in non-deceptive setups, novelty search achieves solution qualities similar to those obtained in traditional fitness-based evolution. Our study also encompasses variants of novelty search that work in concert with fitness-based evolution to combine the exploratory character of novelty search with the exploitatory character of objective-based evolution. We show that these variants can further improve the performance of novelty search. Overall, our study shows that novelty search is a promising alternative for the evolution of controllers for robotic swarms.",
"title": ""
},
{
"docid": "9ed5fdb991edd5de57ffa7f13121f047",
"text": "We analyze the increasing threats against IoT devices. We show that Telnet-based attacks that target IoT devices have rocketed since 2014. Based on this observation, we propose an IoT honeypot and sandbox, which attracts and analyzes Telnet-based attacks against various IoT devices running on different CPU architectures such as ARM, MIPS, and PPC. By analyzing the observation results of our honeypot and captured malware samples, we show that there are currently at least 5 distinct DDoS malware families targeting Telnet-enabled IoT devices and one of the families has quickly evolved to target more devices with as many as 9 different CPU architectures.",
"title": ""
},
{
"docid": "8c0588538b1b04193e80ef5ce5ad55a7",
"text": "Unlike traditional bipolar constrained liners, the Osteonics Omnifit constrained acetabular insert is a tripolar device, consisting of an inner bipolar bearing articulating within an outer, true liner. Every reported failure of the Omnifit tripolar implant has been by failure at the shell-bone interface (Type I failure), failure at the shell-liner interface (Type II failure), or failure of the locking mechanism resulting in dislocation of the bipolar-liner interface (Type III failure). In this report we present two cases of failure of the Omnifit tripolar at the bipolar-femoral head interface. To our knowledge, these are the first reported cases of failure at the bipolar-femoral head interface (Type IV failure). In addition, we described the first successful closed reduction of a Type IV failure.",
"title": ""
},
{
"docid": "536c739e6f0690580568a242e1d65ef3",
"text": "Intrusion Detection Systems (IDS) are key components for securing critical infrastructures, capable of detecting malicious activities on networks or hosts. However, the efficiency of an IDS depends primarily on both its configuration and its precision. The large amount of network traffic that needs to be analyzed, in addition to the increase in attacks’ sophistication, renders the optimization of intrusion detection an important requirement for infrastructure security, and a very active research subject. In the state of the art, a number of approaches have been proposed to improve the efficiency of intrusion detection and response systems. In this article, we review the works relying on decision-making techniques focused on game theory and Markov decision processes to analyze the interactions between the attacker and the defender, and classify them according to the type of the optimization problem they address. While these works provide valuable insights for decision-making, we discuss the limitations of these solutions as a whole, in particular regarding the hypotheses in the models and the validation methods. We also propose future research directions to improve the integration of game-theoretic approaches into IDS optimization techniques.",
"title": ""
},
{
"docid": "048cc782baeec3a7f46ef5ee7abf0219",
"text": "Autoerotic asphyxiation is an unusual but increasingly more frequently occurring phenomenon, with >1000 fatalities in the United States per year. Understanding of this manner of death is likewise increasing, as noted by the growing number of cases reported in the literature. However, this form of accidental death is much less frequently seen in females (male:female ratio >50:1), and there is correspondingly less literature on female victims of autoerotic asphyxiation. The authors present the case of a 31-year-old woman who died of an autoerotic ligature strangulation and review the current literature on the subject. The forensic examiner must be able to discern this syndrome from similar forms of accidental and suicidal death, and from homicidal hanging/strangulation.",
"title": ""
},
{
"docid": "a2f36e0f8abaa07124d446f6aa870491",
"text": "We explore the capabilities of Auto-Encoders to fuse the information available from cameras and depth sensors, and to reconstruct missing data, for scene understanding tasks. In particular we consider three input modalities: RGB images; depth images; and semantic label information. We seek to generate complete scene segmentations and depth maps, given images and partial and/or noisy depth and semantic data. We formulate this objective of reconstructing one or more types of scene data using a Multi-modal stacked Auto-Encoder. We show that suitably designed Multi-modal Auto-Encoders can solve the depth estimation and the semantic segmentation problems simultaneously, in the partial or even complete absence of some of the input modalities. We demonstrate our method using the outdoor dataset KITTI that includes LIDAR and stereo cameras. Our results show that as a means to estimate depth from a single image, our method is comparable to the state-of-the-art, and can run in real time (i.e., less than 40ms per frame). But we also show that our method has a significant advantage over other methods in that it can seamlessly use additional data that may be available, such as a sparse point-cloud and/or incomplete coarse semantic labels.",
"title": ""
},
{
"docid": "aa30fc0f921509b1f978aeda1140ffc0",
"text": "Arithmetic coding provides an e ective mechanism for removing redundancy in the encoding of data. We show how arithmetic coding works and describe an e cient implementation that uses table lookup as a fast alternative to arithmetic operations. The reduced-precision arithmetic has a provably negligible e ect on the amount of compression achieved. We can speed up the implementation further by use of parallel processing. We discuss the role of probability models and how they provide probability information to the arithmetic coder. We conclude with perspectives on the comparative advantages and disadvantages of arithmetic coding.",
"title": ""
},
{
"docid": "d7eb92756c8c3fb0ab49d7b101d96343",
"text": "Pretraining with language modeling and related unsupervised tasks has recently been shown to be a very effective enabling technology for the development of neural network models for language understanding tasks. In this work, we show that although language model-style pretraining is extremely effective at teaching models about language, it does not yield an ideal starting point for efficient transfer learning. By supplementing language model-style pretraining with further training on data-rich supervised tasks, we are able to achieve substantial additional performance improvements across the nine target tasks in the GLUE benchmark. We obtain an overall score of 76.9 on GLUE—a 2.3 point improvement over our baseline system adapted from Radford et al. (2018) and a 4.1 point improvement over Radford et al.’s reported score. We further use training data downsampling to show that the benefits of this supplementary training are even more pronounced in data-constrained regimes.",
"title": ""
},
{
"docid": "74ff09a1d3ca87a0934a1b9095c282c4",
"text": "The cancer metastasis suppressor protein KAI1/CD82 is a member of the tetraspanin superfamily. Recent studies have demonstrated that tetraspanins are palmitoylated and that palmitoylation contributes to the organization of tetraspanin webs or tetraspanin-enriched microdomains. However, the effect of palmitoylation on tetraspanin-mediated cellular functions remains obscure. In this study, we found that tetraspanin KAI1/CD82 was palmitoylated when expressed in PC3 metastatic prostate cancer cells and that palmitoylation involved all of the cytoplasmic cysteine residues proximal to the plasma membrane. Notably, the palmitoylation-deficient KAI1/CD82 mutant largely reversed the wild-type KAI1/CD82's inhibitory effects on migration and invasion of PC3 cells. Also, palmitoylation regulates the subcellular distribution of KAI1/CD82 and its association with other tetraspanins, suggesting that the localized interaction of KAI1/CD82 with tetraspanin webs or tetraspanin-enriched microdomains is important for KAI1/CD82's motility-inhibitory activity. Moreover, we found that KAI1/CD82 palmitoylation affected motility-related subcellular events such as lamellipodia formation and actin cytoskeleton organization and that the alteration of these processes likely contributes to KAI1/CD82's inhibition of motility. Finally, the reversal of cell motility seen in the palmitoylation-deficient KAI1/CD82 mutant correlates with regaining of p130(CAS)-CrkII coupling, a signaling step important for KAI1/CD82's activity. Taken together, our results indicate that palmitoylation is crucial for the functional integrity of tetraspanin KAI1/CD82 during the suppression of cancer cell migration and invasion.",
"title": ""
},
{
"docid": "136a2f401b3af00f0f79b991ab65658f",
"text": "Usage of online social business networks like LinkedIn and XING have become commonplace in today’s workplace. This research addresses the question of what factors drive the intention to use online social business networks. Theoretical frame of the study is the Technology Acceptance Model (TAM) and its extensions, most importantly the TAM2 model. Data has been collected via a Web Survey among users of LinkedIn and XING from January to April 2010. Of 541 initial responders 321 finished the questionnaire. Operationalization was tested using confirmatory factor analyses and causal hypotheses were evaluated by means of structural equation modeling. Core result is that the TAM2 model generally holds in the case of online social business network usage behavior, explaining 73% of the observed usage intention. This intention is most importantly driven by perceived usefulness, attitude towards usage and social norm, with the latter effecting both directly and indirectly over perceived usefulness. However, perceived ease of use has—contrary to hypothesis—no direct effect on the attitude towards usage of online social business networks. Social norm has a strong indirect influence via perceived usefulness on attitude and intention, creating a network effect for peer users. The results of this research provide implications for online social business network design and marketing. Customers seem to evaluate ease of use as an integral part of the usefulness of such a service which leads to a situation where it cannot be dealt with separately by a service provider. Furthermore, the strong direct impact of social norm implies application of viral and peerto-peer marketing techniques while it’s also strong indirect effect implies the presence of a network effect which stabilizes the ecosystem of online social business service vendors.",
"title": ""
},
{
"docid": "10423f367850761fd17cf1b146361f34",
"text": "OBJECTIVE\nDetection and characterization of microcalcification clusters in mammograms is vital in daily clinical practice. The scope of this work is to present a novel computer-based automated method for the characterization of microcalcification clusters in digitized mammograms.\n\n\nMETHODS AND MATERIAL\nThe proposed method has been implemented in three stages: (a) the cluster detection stage to identify clusters of microcalcifications, (b) the feature extraction stage to compute the important features of each cluster and (c) the classification stage, which provides with the final characterization. In the classification stage, a rule-based system, an artificial neural network (ANN) and a support vector machine (SVM) have been implemented and evaluated using receiver operating characteristic (ROC) analysis. The proposed method was evaluated using the Nijmegen and Mammographic Image Analysis Society (MIAS) mammographic databases. The original feature set was enhanced by the addition of four rule-based features.\n\n\nRESULTS AND CONCLUSIONS\nIn the case of Nijmegen dataset, the performance of the SVM was Az=0.79 and 0.77 for the original and enhanced feature set, respectively, while for the MIAS dataset the corresponding characterization scores were Az=0.81 and 0.80. Utilizing neural network classification methodology, the corresponding performance for the Nijmegen dataset was Az=0.70 and 0.76 while for the MIAS dataset it was Az=0.73 and 0.78. Although the obtained high classification performance can be successfully applied to microcalcification clusters characterization, further studies must be carried out for the clinical evaluation of the system using larger datasets. The use of additional features originating either from the image itself (such as cluster location and orientation) or from the patient data may further improve the diagnostic value of the system.",
"title": ""
},
{
"docid": "813a0d47405d133263deba0da6da27a8",
"text": "The demands on dielectric material measurements have increased over the years as electrical components have been miniaturized and device frequency bands have increased. Well-characterized dielectric measurements on thin materials are needed for circuit design, minimization of crosstalk, and characterization of signal-propagation speed. Bulk material applications have also increased. For accurate dielectric measurements, each measurement band and material geometry requires specific fixtures. Engineers and researchers must carefully match their material system and uncertainty requirements to the best available measurement system. Broadband measurements require transmission-line methods, and accurate measurements on low-loss materials are performed in resonators. The development of the most accurate methods for each application requires accurate fixture selection in terms of field geometry, accurate field models, and precise measurement apparatus.",
"title": ""
},
{
"docid": "e59b203f3b104553a84603240ea467eb",
"text": "Experimental art deployed in the Augmented Reality (AR) medium is contributing to a reconfiguration of traditional perceptions of interface, audience participation, and perceptual experience. Artists, critical engineers, and programmers, have developed AR in an experimental topology that diverges from both industrial and commercial uses of the medium. In a general technical sense, AR is considered as primarily an information overlay, a datafied window that situates virtual information in the physical world. In contradistinction, AR as experimental art practice activates critical inquiry, collective participation, and multimodal perception. As an emergent hybrid form that challenges and extends already established 'fine art' categories, augmented reality art deployed on Portable Media Devices (PMD’s) such as tablets & smartphones fundamentally eschews models found in the conventional 'art world.' It should not, however, be considered as inscribing a new 'model:' rather, this paper posits that the unique hybrids advanced by mobile augmented reality art–– also known as AR(t)–– are closely related to the notion of the 'machinic assemblage' ( Deleuze & Guattari 1987), where a deep capacity to re-assemble marks each new artevent. This paper develops a new formulation, the 'software assemblage,’ to explore some of the unique mixed reality situations that AR(t) has set in motion.",
"title": ""
},
{
"docid": "06c3f32f07418575c700e2f0925f4398",
"text": "The spacing of a fixed amount of study time across multiple sessions usually increases subsequent test performance*a finding known as the spacing effect. In the spacing experiment reported here, subjects completed multiple learning trials, and each included a study phase and a test. Once a subject achieved a perfect test, the remaining learning trials within that session comprised what is known as overlearning. The number of these overlearning trials was reduced when learning trials were spaced across multiple sessions rather than massed in a single session. In addition, the degree to which spacing reduced overlearning predicted the size of the spacing effect, which is consistent with the possibility that spacing increases subsequent recall by reducing the occurrence of overlearning. By this account, overlearning is an inefficient use of study time, and the efficacy of spacing depends at least partly on the degree to which it reduces the occurrence of overlearning.",
"title": ""
},
{
"docid": "a636f977eb29b870cefe040f3089de44",
"text": "We consider the network implications of virtual reality (VR) and augmented reality (AR). While there are intrinsic challenges for AR/VR applications to deliver on their promise, their impact on the underlying infrastructure will be undeniable. We look at augmented and virtual reality and consider a few use cases where they could be deployed. These use cases define a set of requirements for the underlying network. We take a brief look at potential network architectures. We then make the case for Information-centric networks as a potential architecture to assist the deployment of AR/VR and draw a list of challenges and future research directions for next generation networks to better support AR/VR.",
"title": ""
},
{
"docid": "3550dbe913466a675b621d476baba219",
"text": "Successful implementing and managing of change is urgently necessary for each adult educational organization. During the process, leading of the staff is becoming a key condition and the most significant factor. Beside certain personal traits of the leader, change management demands also certain leadership knowledges, skills, versatilities and behaviour which may even border on changing the organizational culture. The paper finds the significance of certain values and of organizational climate and above all the significance of leadership style which a leader will adjust to the staff and to the circumstances. The author presents a multiple qualitative case study of managing change in three adult educational organizations. The paper finds that factors of successful leading of change exist which represent an adequate approach to leading the staff during the introduction of changes in educational organizations. Its originality/value is in providing information on the important relationship between culture, leadership styles and leader’s behaviour as preconditions for successful implementing and managing of strategic change.",
"title": ""
},
{
"docid": "be079999e630df22254e7aa8a9ecdcae",
"text": "Strokes are one of the leading causes of death and disability in the UK. There are two main types of stroke: ischemic and hemorrhagic, with the majority of stroke patients suffering from the former. During an ischemic stroke, parts of the brain lose blood supply, and if not treated immediately, can lead to irreversible tissue damage and even death. Ischemic lesions can be detected by diffusion weighted magnetic resonance imaging (DWI), but localising and quantifying these lesions can be a time consuming task for clinicians. Work has already been done in training neural networks to segment these lesions, but these frameworks require a large amount of manually segmented 3D images, which are very time consuming to create. We instead propose to use past examinations of stroke patients which consist of DWIs, corresponding radiological reports and diagnoses in order to develop a learning framework capable of localising lesions. This is motivated by the fact that the reports summarise the presence, type and location of the ischemic lesion for each patient, and thereby provide more context than a single diagnostic label. Acute lesions prediction is aided by an attention mechanism which implicitly learns which regions within the DWI are most relevant to the classification.",
"title": ""
}
] | scidocsrr |
56bff8526270ff83758c75bc68eb1666 | Development of a cloud-based RTAB-map service for robots | [
{
"docid": "82835828a7f8c073d3520cdb4b6c47be",
"text": "Simultaneous Localization and Mapping (SLAM) for mobile robots is a computationally expensive task. A robot capable of SLAM needs a powerful onboard computer, but this can limit the robot's mobility because of weight and power demands. We consider moving this task to a remote compute cloud, by proposing a general cloud-based architecture for real-time robotics computation, and then implementing a Rao-Blackwellized Particle Filtering-based SLAM algorithm in a multi-node cluster in the cloud. In our implementation, expensive computations are executed in parallel, yielding significant improvements in computation time. This allows the algorithm to increase the complexity and frequency of calculations, enhancing the accuracy of the resulting map while freeing the robot's onboard computer for other tasks. Our method for implementing particle filtering in the cloud is not specific to SLAM and can be applied to other computationally-intensive tasks.",
"title": ""
}
] | [
{
"docid": "3eaa3a1a3829345aaa597cf843f720d6",
"text": "Relationship science is a theory-rich discipline, but there have been no attempts to articulate the broader themes or principles that cut across the theories themselves. We have sought to fill that void by reviewing the psychological literature on close relationships, particularly romantic relationships, to extract its core principles. This review reveals 14 principles, which collectively address four central questions: (a) What is a relationship? (b) How do relationships operate? (c) What tendencies do people bring to their relationships? (d) How does the context affect relationships? The 14 principles paint a cohesive and unified picture of romantic relationships that reflects a strong and maturing discipline. However, the principles afford few of the sorts of conflicting predictions that can be especially helpful in fostering novel theory development. We conclude that relationship science is likely to benefit from simultaneous pushes toward both greater integration across theories (to reduce redundancy) and greater emphasis on the circumstances under which existing (or not-yet-developed) principles conflict with one another.",
"title": ""
},
{
"docid": "5de11e0cbfce77414d1c552007d63892",
"text": "© 2012 Cassisi et al., licensee InTech. This is an open access chapter distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Similarity Measures and Dimensionality Reduction Techniques for Time Series Data Mining",
"title": ""
},
{
"docid": "0d5ba680571a9051e70ababf0c685546",
"text": "• Current deep RL techniques require large amounts of data to find a good policy • Once found, the policy remains a black box to practitioners • Practitioners cannot verify that the policy is making decisions based on reasonable information • MOREL (Motion-Oriented REinforcement Learning) automatically detects moving objects and uses the relevant information for action selection • We gather a dataset using a uniform random policy • Train a network without supervision to capture a structured representation of motion between frames • Network predicts object masks, object motion, and camera motion to warp one frame into the next Introduction Learning to Segment Moving Objects Experiments Visualization",
"title": ""
},
{
"docid": "6e675e8a57574daf83ab78cea25688f5",
"text": "Collecting quality data from software projects can be time-consuming and expensive. Hence, some researchers explore âunsupervisedâ approaches to quality prediction that does not require labelled data. An alternate technique is to use âsupervisedâ approaches that learn models from project data labelled with, say, âdefectiveâ or ânot-defectiveâ. Most researchers use these supervised models since, it is argued, they can exploit more knowledge of the projects. \nAt FSEâ16, Yang et al. reported startling results where unsupervised defect predictors outperformed supervised predictors for effort-aware just-in-time defect prediction. If confirmed, these results would lead to a dramatic simplification of a seemingly complex task (data mining) that is widely explored in the software engineering literature. \nThis paper repeats and refutes those results as follows. (1) There is much variability in the efficacy of the Yang et al. predictors so even with their approach, some supervised data is required to prune weaker predictors away. (2) Their findings were grouped across N projects. When we repeat their analysis on a project-by-project basis, supervised predictors are seen to work better. \nEven though this paper rejects the specific conclusions of Yang et al., we still endorse their general goal. In our our experiments, supervised predictors did not perform outstandingly better than unsupervised ones for effort-aware just-in-time defect prediction. Hence, they may indeed be some combination of unsupervised learners to achieve comparable performance to supervised ones. We therefore encourage others to work in this promising area.",
"title": ""
},
{
"docid": "264c63f249f13bf3eb4fd5faac8f4fa0",
"text": "This paper presents the study to investigate the possibility of the stand-alone micro hydro for low-cost electricity production which can satisfy the energy load requirements of a typical remote and isolated rural area. In this framework, the feasibility study in term of the technical and economical performances of the micro hydro system are determined according to the rural electrification concept. The proposed axial flux permanent magnet (AFPM) generator will be designed for micro hydro under sustainable development to optimize between cost and efficiency by using the local materials and basic engineering knowledge. First of all, the simple simulation of micro hydro model for lighting system is developed by considering the optimal size of AFPM generator. The simulation results show that the optimal micro hydro power plant with 70 W can supply the 9 W compact fluorescent up to 20 set for 8 hours by using pressure of water with 6 meters and 0.141 m3/min of flow rate. Lastly, a proposed micro hydro power plant can supply lighting system for rural electrification up to 525.6 kWh/year or 1,839.60 Baht/year and reduce 0.33 ton/year of CO2 emission.",
"title": ""
},
{
"docid": "bf57a5fcf6db7a9b26090bd9a4b65784",
"text": "Plate osteosynthesis is still recognized as the treatment of choice for most articular fractures, many metaphyseal fractures, and certain diaphyseal fractures such as in the forearm. Since the 1960s, both the techniques and implants used for internal fixation with plates have evolved to provide for improved healing. Most recently, plating methods have focused on the principles of 'biological fixation'. These methods attempt to preserve the blood supply to improve the rate of fracture healing, decrease the need for bone grafting, and decrease the incidence of infection and re-fracture. The purpose of this article is to provide a brief overview of the history of plate osteosynthesis as it relates to the development of the latest minimally invasive surgical techniques.",
"title": ""
},
{
"docid": "fe95e139aab1453750224bd856059fcf",
"text": "IMPORTANCE\nChronic sinusitis is a common inflammatory condition defined by persistent symptomatic inflammation of the sinonasal cavities lasting longer than 3 months. It accounts for 1% to 2% of total physician encounters and is associated with large health care expenditures. Appropriate use of medical therapies for chronic sinusitis is necessary to optimize patient quality of life (QOL) and daily functioning and minimize the risk of acute inflammatory exacerbations.\n\n\nOBJECTIVE\nTo summarize the highest-quality evidence on medical therapies for adult chronic sinusitis and provide an evidence-based approach to assist in optimizing patient care.\n\n\nEVIDENCE REVIEW\nA systematic review searched Ovid MEDLINE (1947-January 30, 2015), EMBASE, and Cochrane Databases. The search was limited to randomized clinical trials (RCTs), systematic reviews, and meta-analyses. Evidence was categorized into maintenance and intermittent or rescue therapies and reported based on the presence or absence of nasal polyps.\n\n\nFINDINGS\nTwenty-nine studies met inclusion criteria: 12 meta-analyses (>60 RCTs), 13 systematic reviews, and 4 RCTs that were not included in any of the meta-analyses. Saline irrigation improved symptom scores compared with no treatment (standardized mean difference [SMD], 1.42 [95% CI, 1.01 to 1.84]; a positive SMD indicates improvement). Topical corticosteroid therapy improved overall symptom scores (SMD, -0.46 [95% CI, -0.65 to -0.27]; a negative SMD indicates improvement), improved polyp scores (SMD, -0.73 [95% CI, -1.0 to -0.46]; a negative SMD indicates improvement), and reduced polyp recurrence after surgery (relative risk, 0.59 [95% CI, 0.45 to 0.79]). Systemic corticosteroids and oral doxycycline (both for 3 weeks) reduced polyp size compared with placebo for 3 months after treatment (P < .001). Leukotriene antagonists improved nasal symptoms compared with placebo in patients with nasal polyps (P < .01). Macrolide antibiotic for 3 months was associated with improved QOL at a single time point (24 weeks after therapy) compared with placebo for patients without polyps (SMD, -0.43 [95% CI, -0.82 to -0.05]).\n\n\nCONCLUSIONS AND RELEVANCE\nEvidence supports daily high-volume saline irrigation with topical corticosteroid therapy as a first-line therapy for chronic sinusitis. A short course of systemic corticosteroids (1-3 weeks), short course of doxycycline (3 weeks), or a leukotriene antagonist may be considered in patients with nasal polyps. A prolonged course (3 months) of macrolide antibiotic may be considered for patients without polyps.",
"title": ""
},
{
"docid": "983ec9cdd75d0860c96f89f3c9b2f752",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at . http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "db822d9deda1a707b6e6385c79aa93e2",
"text": "We propose simple tangible language elements for very young children to use when constructing programmes. The equivalent Turtle Talk instructions are given for comparison. Two examples of the tangible language code are shown to illustrate alternative methods of solving a given challenge.",
"title": ""
},
{
"docid": "980dc3d4b01caac3bf56df039d5ca513",
"text": "In this paper, we study object detection using a large pool of unlabeled images and only a few labeled images per category, named \"few-example object detection\". The key challenge consists in generating trustworthy training samples as many as possible from the pool. Using few training examples as seeds, our method iterates between model training and high-confidence sample selection. In training, easy samples are generated first and, then the poorly initialized model undergoes improvement. As the model becomes more discriminative, challenging but reliable samples are selected. After that, another round of model improvement takes place. To further improve the precision and recall of the generated training samples, we embed multiple detection models in our framework, which has proven to outperform the single model baseline and the model ensemble method. Experiments on PASCAL VOC'07, MS COCO'14, and ILSVRC'13 indicate that by using as few as three or four samples selected for each category, our method produces very competitive results when compared to the state-of-the-art weakly-supervised approaches using a large number of image-level labels.",
"title": ""
},
{
"docid": "c62bc7391e55d66c9e27befe81446ebe",
"text": "Opaque predicates have been widely used to insert superfluous branches for control flow obfuscation. Opaque predicates can be seamlessly applied together with other obfuscation methods such as junk code to turn reverse engineering attempts into arduous work. Previous efforts in detecting opaque predicates are far from mature. They are either ad hoc, designed for a specific problem, or have a considerably high error rate. This paper introduces LOOP, a Logic Oriented Opaque Predicate detection tool for obfuscated binary code. Being different from previous work, we do not rely on any heuristics; instead we construct general logical formulas, which represent the intrinsic characteristics of opaque predicates, by symbolic execution along a trace. We then solve these formulas with a constraint solver. The result accurately answers whether the predicate under examination is opaque or not. In addition, LOOP is obfuscation resilient and able to detect previously unknown opaque predicates. We have developed a prototype of LOOP and evaluated it with a range of common utilities and obfuscated malicious programs. Our experimental results demonstrate the efficacy and generality of LOOP. By integrating LOOP with code normalization for matching metamorphic malware variants, we show that LOOP is an appealing complement to existing malware defenses.",
"title": ""
},
{
"docid": "0b08e657d012d26310c88e2129c17396",
"text": "In order to accurately determine the growth of greenhouse crops, the system based on AVR Single Chip microcontroller and wireless sensor networks is developed, it transfers data through the wireless transceiver devices without setting up electric wiring, the system structure is simple. The monitoring and management center can control the temperature and humidity of the greenhouse, measure the carbon dioxide content, and collect the information about intensity of illumination, and so on. In addition, the system adopts multilevel energy memory. It combines energy management with energy transfer, which makes the energy collected by solar energy batteries be used reasonably. Therefore, the self-managing energy supply system is established. The system has advantages of low power consumption, low cost, good robustness, extended flexible. An effective tool is provided for monitoring and analysis decision-making of the greenhouse environment.",
"title": ""
},
{
"docid": "7c8d1b0c77acb4fd6db6e7f887e66133",
"text": "Subdural hematomas (SDH) in infants often result from nonaccidental head injury (NAHI), which is diagnosed based on the absence of history of trauma and the presence of associated lesions. When these are lacking, the possibility of spontaneous SDH in infant (SSDHI) is raised, but this entity is hotly debated; in particular, the lack of positive diagnostic criteria has hampered its recognition. The role of arachnoidomegaly, idiopathic macrocephaly, and dehydration in the pathogenesis of SSDHI is also much discussed. We decided to analyze apparent cases of SSDHI from our prospective databank. We selected cases of SDH in infants without systemic disease, history of trauma, and suspicion of NAHI. All cases had fundoscopy and were evaluated for possible NAHI. Head growth curves were reconstructed in order to differentiate idiopathic from symptomatic macrocrania. Sixteen patients, 14 males and two females, were diagnosed with SSDHI. Twelve patients had idiopathic macrocrania, seven of these being previously diagnosed with arachnoidomegaly on imaging. Five had risk factors for dehydration, including two with severe enteritis. Two patients had mild or moderate retinal hemorrhage, considered not indicative of NAHI. Thirteen patients underwent cerebrospinal fluid drainage. The outcome was favorable in almost all cases; one child has sequels, which were attributable to obstetrical difficulties. SSDHI exists but is rare and cannot be diagnosed unless NAHI has been questioned thoroughly. The absence of traumatic features is not sufficient, and positive elements like macrocrania, arachnoidomegaly, or severe dehydration are necessary for the diagnosis of SSDHI.",
"title": ""
},
{
"docid": "0ad4432a79ea6b3eefbe940adf55ff7b",
"text": "This study reviews the long-term outcome of prostheses and fixtures (implants) in 759 totally edentulous jaws of 700 patients. A total of 4,636 standard fixtures were placed and followed according to the osseointegration method for a maximum of 24 years by the original team at the University of Göteborg. Standardized annual clinical and radiographic examinations were conducted as far as possible. A lifetable approach was applied for statistical analysis. Sufficient numbers of fixtures and prostheses for a detailed statistical analysis were present for observation times up to 15 years. More than 95% of maxillae had continuous prosthesis stability at 5 and 10 years, and at least 92% at 15 years. The figure for mandibles was 99% at all time intervals. Calculated from the time of fixture placement, the estimated survival rates for individual fixtures in the maxilla were 84%, 89%, and 92% at 5 years; 81% and 82% at 10 years; and 78% at 15 years. In the mandible they were 91%, 98%, and 99% at 5 years; 89% and 98% at 10 years; and 86% at 15 years. (The different percentages at 5 and 10 years refer to results for different routine groups of fixtures with 5 to 10, 10 to 15, and 1 to 5 years of observation time, respectively.) The results of this study concur with multicenter and earlier results for the osseointegration method.",
"title": ""
},
{
"docid": "2b688f9ca05c2a79f896e3fee927cc0d",
"text": "This paper presents a new synchronous-reference frame (SRF)-based control method to compensate power-quality (PQ) problems through a three-phase four-wire unified PQ conditioner (UPQC) under unbalanced and distorted load conditions. The proposed UPQC system can improve the power quality at the point of common coupling on power distribution systems under unbalanced and distorted load conditions. The simulation results based on Matlab/Simulink are discussed in detail to support the SRF-based control method presented in this paper. The proposed approach is also validated through experimental study with the UPQC hardware prototype.",
"title": ""
},
{
"docid": "2a7983e91cd674d95524622e82c4ded7",
"text": "• FC (fully-connected) layer takes the pooling results, produces features FROI, Fcontext, Fframe, and feeds them into two streams, inspired by [BV16]. • Classification stream produces a matrix of classification scores S = [FCcls(FROI1); . . . ;FCcls(FROIK)] ∈ RK×C • Localization stream implements the proposed context-aware guidance that uses FROIk, Fcontextk, Fframek to produce a localization score matrix L ∈ RK×C.",
"title": ""
},
{
"docid": "9e208a394475931aafdcdfbad1408489",
"text": "Ocular complications following cosmetic filler injections are serious situations. This study provided scientific evidence that filler in the facial and the superficial temporal arteries could enter into the orbits and the globes on both sides. We demonstrated the existence of an embolic channel connecting the arterial system of the face to the ophthalmic artery. After the removal of the ocular contents from both eyes, liquid dye was injected into the cannulated channel of the superficial temporal artery in six soft embalmed cadavers and different color dye was injected into the facial artery on both sides successively. The interior sclera was monitored for dye oozing from retrograde ophthalmic perfusion. Among all 12 globes, dye injections from the 12 superficial temporal arteries entered ipsilateral globes in three and the contralateral globe in two arteries. Dye from the facial artery was infused into five ipsilateral globes and in three contralateral globes. Dye injections of two facial arteries in the same cadaver resulted in bilateral globe staining but those of the superficial temporal arteries did not. Direct communications between the same and different arteries of the four cannulated arteries were evidenced by dye dripping from the cannulating needle hubs in 14 of 24 injected arteries. Compression of the orbital rim at the superior nasal corner retarded ocular infusion in 11 of 14 arterial injections. Under some specific conditions favoring embolism, persistent interarterial anastomoses between the face and the eye allowed filler emboli to flow into the globe causing ocular complications. This journal requires that authors assign a level of evidence to each submission to which Evidence-Based Medicine rankings are applicable. This excludes Review Articles, Book Reviews, and manuscripts that concern Basic Science, Animal Studies, Cadaver Studies, and Experimental Studies. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors http://www.springer.com/00266 .",
"title": ""
},
{
"docid": "db3c5c93daf97619ad927532266b3347",
"text": "Car9, a dodecapeptide identified by cell surface display for its ability to bind to the edge of carbonaceous materials, also binds to silica with high affinity. The interaction can be disrupted with l-lysine or l-arginine, enabling a broad range of technological applications. Previously, we reported that C-terminal Car9 extensions support efficient protein purification on underivatized silica. Here, we show that the Car9 tag is functional and TEV protease-excisable when fused to the N-termini of target proteins, and that it supports affinity purification under denaturing conditions, albeit with reduced yields. We further demonstrate that capture of Car9-tagged proteins is enhanced on small particle size silica gels with large pores, that the concomitant problem of nonspecific protein adsorption can be solved by lysing cells in the presence of 0.3% Tween 20, and that efficient elution is achieved at reduced l-lysine concentrations under alkaline conditions. An optimized small-scale purification kit incorporating the above features allows Car9-tagged proteins to be inexpensively recovered in minutes with better than 90% purity. The Car9 affinity purification technology should prove valuable for laboratory-scale applications requiring rapid access to milligram-quantities of proteins, and for preparative scale purification schemes where cost and productivity are important factors.",
"title": ""
},
{
"docid": "3f207c3c622d1854a7ad6c5365354db1",
"text": "The field of Music Information Retrieval has always acknowledged the need for rigorous scientific evaluations, and several efforts have set out to develop and provide the infrastructure, technology and methodologies needed to carry out these evaluations. The community has enormously gained from these evaluation forums, but we have reached a point where we are stuck with evaluation frameworks that do not allow us to improve as much and as well as we want. The community recently acknowledged this problem and showed interest in addressing it, though it is not clear what to do to improve the situation. We argue that a good place to start is again the Text IR field. Based on a formalization of the evaluation process, this paper presents a survey of past evaluation work in the context of Text IR, from the point of view of validity, reliability and efficiency of the experiments. We show the problems that our community currently has in terms of evaluation, point to several lines of research to improve it and make various proposals in that line.",
"title": ""
},
{
"docid": "84b018fa45e06755746309014854bb9a",
"text": "For years, ontologies have been known in computer science as consensual models of domains of discourse, usually implemented as formal definitions of the relevant conceptual entities. Researchers have written much about the potential benefits of using them, and most of us regard ontologies as central building blocks of the semantic Web and other semantic systems. Unfortunately, the number and quality of actual, \"non-toy\" ontologies available on the Web today is remarkably low. This implies that the semantic Web community has yet to build practically useful ontologies for a lot of relevant domains in order to make the semantic Web a reality. Theoretically minded advocates often assume that the lack of ontologies is because the \"stupid business people haven't realized ontologies' enormous benefits.\" As a liberal market economist, the author assumes that humans can generally figure out what's best for their well-being, at least in the long run, and that they act accordingly. In other words, the fact that people haven't yet created as many useful ontologies as the ontology research community would like might indicate either unresolved technical limitations or the existence of sound rationales for why individuals refrain from building them - or both. Indeed, several social and technical difficulties exist that put a brake on developing and eventually constrain the space of possible ontologies",
"title": ""
}
] | scidocsrr |
0b590d5f3bc41286db3de0ab3bf48308 | Neural Models for Key Phrase Extraction and Question Generation | [
{
"docid": "8f916f7be3048ae2a367096f4f82207d",
"text": "Existing methods for single document keyphrase extraction usually make use of only the information contained in the specified document. This paper proposes to use a small number of nearest neighbor documents to provide more knowledge to improve single document keyphrase extraction. A specified document is expanded to a small document set by adding a few neighbor documents close to the document, and the graph-based ranking algorithm is then applied on the expanded document set to make use of both the local information in the specified document and the global information in the neighbor documents. Experimental results demonstrate the good effectiveness and robustness of our proposed approach.",
"title": ""
},
{
"docid": "86d58f4196ceb48e29cb143e6a157c22",
"text": "In this paper, we challenge a form of paragraph-to-question generation task. We propose a question generation system which can generate a set of comprehensive questions from a body of text. Besides the tree kernel functions to assess the grammatically of the generated questions, our goal is to rank them by using community-based question answering systems to calculate the importance of the generated questions. The main assumption behind our work is that each body of text is related to a topic of interest and it has a comprehensive information about the topic.",
"title": ""
}
] | [
{
"docid": "cdb937def5a92e3843a761f57278783e",
"text": "We design a novel, communication-efficient, failure-robust protocol for secure aggregation of high-dimensional data. Our protocol allows a server to compute the sum of large, user-held data vectors from mobile devices in a secure manner (i.e. without learning each user's individual contribution), and can be used, for example, in a federated learning setting, to aggregate user-provided model updates for a deep neural network. We prove the security of our protocol in the honest-but-curious and active adversary settings, and show that security is maintained even if an arbitrarily chosen subset of users drop out at any time. We evaluate the efficiency of our protocol and show, by complexity analysis and a concrete implementation, that its runtime and communication overhead remain low even on large data sets and client pools. For 16-bit input values, our protocol offers $1.73 x communication expansion for 210 users and 220-dimensional vectors, and 1.98 x expansion for 214 users and 224-dimensional vectors over sending data in the clear.",
"title": ""
},
{
"docid": "5cd3abebf4d990bb9196b7019b29c568",
"text": "Wearing comfort of clothing is dependent on air permeability, moisture absorbency and wicking properties of fabric, which are related to the porosity of fabric. In this work, a plug-in is developed using Python script and incorporated in Abaqus/CAE for the prediction of porosity of plain weft knitted fabrics. The Plug-in is able to automatically generate 3D solid and multifilament weft knitted fabric models and accurately determine the porosity of fabrics in two steps. In this work, plain weft knitted fabrics made of monofilament, multifilament and spun yarn made of staple fibers were used to evaluate the effectiveness of the developed plug-in. In the case of staple fiber yarn, intra yarn porosity was considered in the calculation of porosity. The first step is to develop a 3D geometrical model of plain weft knitted fabric and the second step is to calculate the porosity of the fabric by using the geometrical parameter of 3D weft knitted fabric model generated in step one. The predicted porosity of plain weft knitted fabric is extracted in the second step and is displayed in the message area. The predicted results obtained from the plug-in have been compared with the experimental results obtained from previously developed models; they agreed well.",
"title": ""
},
{
"docid": "3f96a3cd2e3f795072567a3f3c8ccc46",
"text": "Good corporate reputations are critical because of their potential for value creation, but also because their intangible character makes replication by competing firms considerably more difficult. Existing empirical research confirms that there is a positive relationship between reputation and financial performance. This paper complements these findings by showing that firms with relatively good reputations are better able to sustain superior profit outcomes over time. In particular, we undertake an analysis of the relationship between corporate reputation and the dynamics of financial performance using two complementary dynamic models. We also decompose overall reputation into a component that is predicted by previous financial performance, and that which is ‘left over’, and find that each (orthogonal) element supports the persistence of above-average profits over time. Copyright 2002 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "e9b5dc63f981cc101521d8bbda1847d5",
"text": "The unsupervised image-to-image translation aims at finding a mapping between the source (A) and target (B) image domains, where in many applications aligned image pairs are not available at training. This is an ill-posed learning problem since it requires inferring the joint probability distribution from marginals. Joint learning of coupled mappings FAB : A → B and FBA : B → A is commonly used by the state-of-the-art methods, like CycleGAN (Zhu et al., 2017), to learn this translation by introducing cycle consistency requirement to the learning problem, i.e. FAB(FBA(B)) ≈ B and FBA(FAB(A)) ≈ A. Cycle consistency enforces the preservation of the mutual information between input and translated images. However, it does not explicitly enforce FBA to be an inverse operation to FAB. We propose a new deep architecture that we call invertible autoencoder (InvAuto) to explicitly enforce this relation. This is done by forcing an encoder to be an inverted version of the decoder, where corresponding layers perform opposite mappings and share parameters. The mappings are constrained to be orthonormal. The resulting architecture leads to the reduction of the number of trainable parameters (up to 2 times). We present image translation results on benchmark data sets and demonstrate state-of-the art performance of our approach. Finally, we test the proposed domain adaptation method on the task of road video conversion. We demonstrate that the videos converted with InvAuto have high quality and show that the NVIDIA neural-network-based end-toend learning system for autonomous driving, known as PilotNet, trained on real road videos performs well when tested on the converted ones.",
"title": ""
},
{
"docid": "288845120cdf96a20850b3806be3d89a",
"text": "DNA replicases are multicomponent machines that have evolved clever strategies to perform their function. Although the structure of DNA is elegant in its simplicity, the job of duplicating it is far from simple. At the heart of the replicase machinery is a heteropentameric AAA+ clamp-loading machine that couples ATP hydrolysis to load circular clamp proteins onto DNA. The clamps encircle DNA and hold polymerases to the template for processive action. Clamp-loader and sliding clamp structures have been solved in both prokaryotic and eukaryotic systems. The heteropentameric clamp loaders are circular oligomers, reflecting the circular shape of their respective clamp substrates. Clamps and clamp loaders also function in other DNA metabolic processes, including repair, checkpoint mechanisms, and cell cycle progression. Twin polymerases and clamps coordinate their actions with a clamp loader and yet other proteins to form a replisome machine that advances the replication fork.",
"title": ""
},
{
"docid": "46ac5e994ca0bf0c3ea5dd110810b682",
"text": "The Geosciences and Geography are not just yet another application area for semantic technologies. The vast heterogeneity of the involved disciplines ranging from the natural sciences to the social sciences introduces new challenges in terms of interoperability. Moreover, the inherent spatial and temporal information components also require distinct semantic approaches. For these reasons, geospatial semantics, geo-ontologies, and semantic interoperability have been active research areas over the last 20 years. The geospatial semantics community has been among the early adopters of the Semantic Web, contributing methods, ontologies, use cases, and datasets. Today, geographic information is a crucial part of many central hubs on the Linked Data Web. In this editorial, we outline the research field of geospatial semantics, highlight major research directions and trends, and glance at future challenges. We hope that this text will be valuable for geoscientists interested in semantics research as well as knowledge engineers interested in spatiotemporal data. Introduction and Motivation While the Web has changed with the advent of the Social Web from mostly authoritative content towards increasing amounts of user generated information, it is essentially still about linked documents. These documents provide structure and context for the described data and easy their interpretation. In contrast, the evolving Data Web is about linking data, not documents. Such datasets are not bound to a specific document but can be easily combined and used outside of their original creation context. With a growth rate of millions of new facts encoded as RDF-triples per month, the Linked Data cloud allows users to answer complex queries spanning multiple, heterogeneous data sources from different scientific domains. However, this uncoupling of data from its creation context makes the interpretation of data challenging. Thus, research on semantic interoperability and ontologies is crucial to ensure consistency and meaningful results. Space and time are fundamental ordering principles to structure such data and provide an implicit context for their interpretation. Hence, it is not surprising that many linked datasets either contain spatiotemporal identifiers themselves or link out to such datasets, making them central hubs of the Linked Data cloud. Prominent examples include Geonames.org as well as the Linked Geo Data project, which provides a RDF serialization of Points Of Interest from Open Street Map [103]. Besides such Voluntary Geographic Information (VGI), governments 1570-0844/12/$27.50 c © 2012 – IOS Press and the authors. All rights reserved",
"title": ""
},
{
"docid": "2aefddf5e19601c8338f852811cebdee",
"text": "This paper presents a system that allows online building of 3D wireframe models through a combination of user interaction and automated methods from a handheld camera-mouse. Crucially, the model being built is used to concurrently compute camera pose, permitting extendable tracking while enabling the user to edit the model interactively. In contrast to other model building methods that are either off-line and/or automated but computationally intensive, the aim here is to have a system that has low computational requirements and that enables the user to define what is relevant (and what is not) at the time the model is being built. OutlinAR hardware is also developed which simply consists of the combination of a camera with a wide field of view lens and a wheeled computer mouse.",
"title": ""
},
{
"docid": "37572963400c8a78cef3cd4a565b328e",
"text": "The impressive performance of utilizing deep learning or neural network has attracted much attention in both the industry and research communities, especially towards computer vision aspect related applications. Despite its superior capability of learning, generalization and interpretation on various form of input, micro-expression analysis field is yet remains new in applying this kind of computing system in automated expression recognition system. A new feature extractor, BiVACNN is presented in this paper, where it first estimates the optical flow fields from the apex frame, then encode the flow fields features using CNN. Concretely, the proposed method consists of three stages: apex frame acquisition, multivariate features formation and feature learning using CNN. In the multivariate features formation stage, we attempt to derive six distinct features from the apex details, which include: the apex itself, difference between the apex and onset frames, horizontal optical flow, vertical optical flow, magnitude and orientation. It is demonstrated that utilizing the horizontal and vertical optical flow capable to achieve 80% recognition accuracy in CASME II and SMIC-HS databases.",
"title": ""
},
{
"docid": "9d37260c493c40523c268f6e54c8b4ea",
"text": "Social collaborative filtering recommender systems extend the traditional user-to-item interaction with explicit user-to-user relationships, thereby allowing for a wider exploration of correlations among users and items, that potentially lead to better recommendations. A number of methods have been proposed in the direction of exploring the social network, either locally (i.e. the vicinity of each user) or globally. In this paper, we propose a novel methodology for collaborative filtering social recommendation that tries to combine the merits of both the aforementioned approaches, based on the soft-clustering of the Friend-of-a-Friend (FoaF) network of each user. This task is accomplished by the non-negative factorization of the adjacency matrix of the FoaF graph, while the edge-centric logic of the factorization algorithm is ameliorated by incorporating more general structural properties of the graph, such as the number of edges and stars, through the introduction of the exponential random graph models. The preliminary results obtained reveal the potential of this idea.",
"title": ""
},
{
"docid": "6604a90f21796895300d37cefed5b6fa",
"text": "Distributed power system network is going to be complex, and it will require high-speed, reliable and secure communication systems for managing intermittent generation with coordination of centralised power generation, including load control. Cognitive Radio (CR) is highly favourable for providing communications in Smart Grid by using spectrum resources opportunistically. The IEEE 802.22 Wireless Regional Area Network (WRAN) having the capabilities of CR use vacant channels opportunistically in the frequency range of 54 MHz to 862 MHz occupied by TV band. A comprehensive review of using IEEE 802.22 for Field Area Network in power system network using spectrum sensing (CR based communication) is provided in this paper. The spectrum sensing technique(s) at Base Station (BS) and Customer Premises Equipment (CPE) for detecting the presence of incumbent in order to mitigate interferences is also studied. The availability of backup and candidate channels are updated during “Quite Period” for further use (spectrum switching and management) with geolocation capabilities. The use of IEEE 802.22 for (a) radio-scene analysis, (b) channel identification, and (c) dynamic spectrum management are examined for applications in power management.",
"title": ""
},
{
"docid": "e8403145a3d4a8a75348075410683e28",
"text": "This paper presents a current-reuse complementary-input (CRCI) telescopic-cascode chopper stabilized amplifier with low-noise low-power operation. The current-reuse complementary-input strategy doubles the amplifier's effective transconductance by full current-reuse between complementary inputs, which significantly improves the noise-power efficiency. A pseudo-resistor based integrator is used in the DC servo loop to generate a high-pass cutoff below 1 Hz. The proposed amplifier features a mid-band gain of 39.25 dB, bandwidth from 0.12 Hz to 7.6 kHz, and draws 2.57 μA from a 1.2-V supply and exhibits an input-referred noise of 3.57 μVrms integrated from 100 mHz to 100 kHz, corresponding to a noise efficiency factor (NEF) of 2.5. The amplifier is designed in 0.13 μm 8-metal CMOS process.",
"title": ""
},
{
"docid": "6c92652aa5bab1b25910d16cca697d48",
"text": "Intrusion detection has attracted a considerable interest from researchers and industries. The community, after many years of research, still faces the problem of building reliable and efficient IDS that are capable of handling large quantities of data, with changing patterns in real time situations. The work presented in this manuscript classifies intrusion detection systems (IDS). Moreover, a taxonomy and survey of shallow and deep networks intrusion detection systems is presented based on previous and current works. This taxonomy and survey reviews machine learning techniques and their performance in detecting anomalies. Feature selection which influences the effectiveness of machine learning (ML) IDS is discussed to explain the role of feature selection in the classification and training phase of ML IDS. Finally, a discussion of the false and true positive alarm rates is presented to help researchers model reliable and efficient machine learning based intrusion detection systems. Keywords— Shallow network, Deep networks, Intrusion detection, False positive alarm rates and True positive alarm rates 1.0 INTRODUCTION Computer networks have developed rapidly over the years contributing significantly to social and economic development. International trade, healthcare systems and military capabilities are examples of human activity that increasingly rely on networks. This has led to an increasing interest in the security of networks by industry and researchers. The importance of Intrusion Detection Systems (IDS) is critical as networks can become vulnerable to attacks from both internal and external intruders [1], [2]. An IDS is a detection system put in place to monitor computer networks. These have been in use since the 1980’s [3]. By analysing patterns of captured data from a network, IDS help to detect threats [4]. These threats can be devastating, for example, Denial of service (DoS) denies or prevents legitimate users resource on a network by introducing unwanted traffic [5]. Malware is another example, where attackers use malicious software to disrupt systems [6].",
"title": ""
},
{
"docid": "27401a6fe6a1edb5ba116db4bbdc7bcc",
"text": "Robot warehouse automation has attracted significant interest in recent years, perhaps most visibly in the Amazon Picking Challenge (APC) [1]. A fully autonomous warehouse pick-and-place system requires robust vision that reliably recognizes and locates objects amid cluttered environments, self-occlusions, sensor noise, and a large variety of objects. In this paper we present an approach that leverages multiview RGB-D data and self-supervised, data-driven learning to overcome those difficulties. The approach was part of the MIT-Princeton Team system that took 3rd- and 4th-place in the stowing and picking tasks, respectively at APC 2016. In the proposed approach, we segment and label multiple views of a scene with a fully convolutional neural network, and then fit pre-scanned 3D object models to the resulting segmentation to get the 6D object pose. Training a deep neural network for segmentation typically requires a large amount of training data. We propose a self-supervised method to generate a large labeled dataset without tedious manual segmentation. We demonstrate that our system can reliably estimate the 6D pose of objects under a variety of scenarios. All code, data, and benchmarks are available at http://apc.cs.princeton.edu/",
"title": ""
},
{
"docid": "8e64738b0d21db1ec5ef0220507f3130",
"text": "Automatic clothes search in consumer photos is not a trivial problem as photos are usually taken under completely uncontrolled realistic imaging conditions. In this paper, a novel framework is presented to tackle this issue by leveraging low-level features (e.g., color) and high-level features (attributes) of clothes. First, a content-based image retrieval(CBIR) approach based on the bag-of-visual-words (BOW) model is developed as our baseline system, in which a codebook is constructed from extracted dominant color patches. A reranking approach is then proposed to improve search quality by exploiting clothes attributes, including the type of clothes, sleeves, patterns, etc. The experiments on photo collections show that our approach is robust to large variations of images taken in unconstrained environment, and the reranking algorithm based on attribute learning significantly improves retrieval performance in combination with the proposed baseline.",
"title": ""
},
{
"docid": "e82e44e851486b557948a63366486fef",
"text": "v Combinatorial and algorithmic aspects of identifying codes in graphs Abstract: An identifying code is a set of vertices of a graph such that, on the one hand, each vertex out of the code has a neighbour in the code (the domination property), and, on the other hand, all vertices have a distinct neighbourhood within the code (the separation property). In this thesis, we investigate combinatorial and algorithmic aspects of identifying codes. For the combinatorial part, we rst study extremal questions by giving a complete characterization of all nite undirected graphs having their order minus one as the minimum size of an identifying code. We also characterize nite directed graphs, in nite undirected graphs and in nite oriented graphs having their whole vertex set as the unique identifying code. These results answer open questions that were previously studied in the literature. We then study the relationship between the minimum size of an identifying code and the maximum degree of a graph. In particular, we give several upper bounds for this parameter as a function of the order and the maximum degree. These bounds are obtained using two techniques. The rst one consists in the construction of independent sets satisfying certain properties, and the second one is the combination of two tools from the probabilistic method: the Lovász local lemma and a Cherno bound. We also provide constructions of graph families related to this type of upper bounds, and we conjecture that they are optimal up to an additive constant. We also present new lower and upper bounds for the minimum cardinality of an identifying code in speci c graph classes. We study graphs of girth at least 5 and of given minimum degree by showing that the combination of these two parameters has a strong in uence on the minimum size of an identifying code. We apply these results to random regular graphs. Then, we give lower bounds on the size of a minimum identifying code of interval and unit interval graphs. Finally, we prove several lower and upper bounds for this parameter when considering line graphs. The latter question is tackled using the new notion of an edge-identifying code. For the algorithmic part, it is known that the decision problem associated with the notion of an identifying code is NP-complete, even for restricted graph classes. We extend the known results to other classes such as split graphs, co-bipartite graphs, line graphs or interval graphs. To this end, we propose polynomial-time reductions from several classical hard algorithmic problems. These results show that in many graph classes, the identifying code problem is computationally more di cult than related problems (such as the dominating set problem). Furthermore, we extend the knowledge of the approximability of the optimization problem associated to identifying codes. We extend the known result of NP-hardness of approximating this problem within a sub-logarithmic factor (as a function of the instance graph) to bipartite, split and co-bipartite graphs, respectively. We also extend the known result of its APX-hardness for graphs of given maximum degree to a subclass of split graphs, bipartite graphs of maximum degree 4 and line graphs. Finally, we show the existence of a PTAS algorithm for unit interval graphs. 
An identifying code is a set of vertices of a graph such that, on the one hand, each vertex out of the code has a neighbour in the code (the domination property), and, on the other hand, all vertices have a distinct neighbourhood within the code (the separation property). In this thesis, we investigate combinatorial and algorithmic aspects of identifying codes. For the combinatorial part, we rst study extremal questions by giving a complete characterization of all nite undirected graphs having their order minus one as the minimum size of an identifying code. We also characterize nite directed graphs, in nite undirected graphs and in nite oriented graphs having their whole vertex set as the unique identifying code. These results answer open questions that were previously studied in the literature. We then study the relationship between the minimum size of an identifying code and the maximum degree of a graph. In particular, we give several upper bounds for this parameter as a function of the order and the maximum degree. These bounds are obtained using two techniques. The rst one consists in the construction of independent sets satisfying certain properties, and the second one is the combination of two tools from the probabilistic method: the Lovász local lemma and a Cherno bound. We also provide constructions of graph families related to this type of upper bounds, and we conjecture that they are optimal up to an additive constant. We also present new lower and upper bounds for the minimum cardinality of an identifying code in speci c graph classes. We study graphs of girth at least 5 and of given minimum degree by showing that the combination of these two parameters has a strong in uence on the minimum size of an identifying code. We apply these results to random regular graphs. Then, we give lower bounds on the size of a minimum identifying code of interval and unit interval graphs. Finally, we prove several lower and upper bounds for this parameter when considering line graphs. The latter question is tackled using the new notion of an edge-identifying code. For the algorithmic part, it is known that the decision problem associated with the notion of an identifying code is NP-complete, even for restricted graph classes. We extend the known results to other classes such as split graphs, co-bipartite graphs, line graphs or interval graphs. To this end, we propose polynomial-time reductions from several classical hard algorithmic problems. These results show that in many graph classes, the identifying code problem is computationally more di cult than related problems (such as the dominating set problem). Furthermore, we extend the knowledge of the approximability of the optimization problem associated to identifying codes. We extend the known result of NP-hardness of approximating this problem within a sub-logarithmic factor (as a function of the instance graph) to bipartite, split and co-bipartite graphs, respectively. We also extend the known result of its APX-hardness for graphs of given maximum degree to a subclass of split graphs, bipartite graphs of maximum degree 4 and line graphs. Finally, we show the existence of a PTAS algorithm for unit interval graphs.",
"title": ""
},
{
"docid": "bef317c450503a7f2c2147168b3dd51e",
"text": "With the development of the Internet of Things (IoT) and the usage of low-powered devices (sensors and effectors), a large number of people are using IoT systems in their homes and businesses to have more control over their technology. However, a key challenge of IoT systems is data protection in case the IoT device is lost, stolen, or used by one of the owner's friends or family members. The problem studied here is how to protect the access to data of an IoT system. To solve the problem, an attribute-based access control (ABAC) mechanism is applied to give the system the ability to apply policies to detect any unauthorized entry. Finally, a prototype was built to test the proposed solution. The evaluation plan was applied on the proposed solution to test the performance of the system.",
"title": ""
},
{
"docid": "3d2e82a0353d0b2803a579c413403338",
"text": "In 1994, nutritional facts panels became mandatory for processed foods to improve consumer access to nutritional information and to promote healthy food choices. Recent applied work is reviewed here in terms of how consumers value and respond to nutritional labels. We first summarize the health and nutritional links found in the literature and frame this discussion in terms of the obesity policy debate. Second, we discuss several approaches that have been used to empirically investigate consumer responses to nutritional labels: (a) surveys, (b) nonexperimental approaches utilizing revealed preferences, and (c) experimentbased approaches. We conclude with a discussion and suggest avenues of future research. INTRODUCTION How the provision of nutritional information affects consumers’ food choices and whether consumers value nutritional information are particularly pertinent questions in a country where obesity is pervasive. Firms typically have more information about the quality of their products than do consumers, creating a situation of asymmetric information. It is prohibitively costly for most consumers to acquire nutritional information independently of firms. Firms can use this Publisher: ANNUALREVIEWS; Journal: ARRE: Annual Review of Resource Economics; Copyright: Volume: 3; Issue: 0; Manuscript: 3_McCluskey; Month: ; Year: 2011 DOI: ; TOC Head: ; Section Head: ; Article Type: REVIEW ARTICLE Page 1 of 30 information to signal their quality and to receive quality premiums. However, firms that sell less nutritious products prefer to omit nutritional information. In this market setting, firms may not have an incentive to fully reveal their product quality, may try to highlight certain attributes in their advertising claims while shrouding others (Gabaix & Laibson 2006), or may provide information in a less salient fashion (Chetty et al. 2007). Mandatory nutritional labeling can fill this void of information provision by correcting asymmetric information and transforming an experience-good or a credence-good characteristic into search-good characteristics (Caswell & Mojduszka 1996). Golan et al. (2000) argue that the effectiveness of food labeling depends on firms’ incentives for information provision, government information requirements, and the role of third-party entities in standardizing and certifying the accuracy of the information. Yet nutritional information is valuable only if consumers use it in some fashion. Early advances in consumer choice theory, such as market goods possessing desirable characteristics (Lancaster 1966) or market goods used in conjunction with time to produce desirable commodities (Becker 1965), set the theoretical foundation for studying how market prices, household characteristics, incomes, nutrient content, and taste considerations interact with and influence consumer choice. LaFrance (1983) develops a theoretical framework and estimates the marginal value of nutrient versus taste parameters in an analytical approach that imposes a sufficient degree of restrictions to generality to be empirically feasible. Real or perceived tradeoffs between nutritional and taste or pleasure considerations imply that consumers will not necessarily make healthier choices. Reduced search costs mean that consumers can more easily make choices that maximize their utility. Foster & Just (1989) provide a framework in which to analyze the effect of information on consumer choice and welfare in this context. 
They argue that Publisher: ANNUALREVIEWS; Journal: ARRE: Annual Review of Resource Economics; Copyright: Volume: 3; Issue: 0; Manuscript: 3_McCluskey; Month: ; Year: 2011 DOI: ; TOC Head: ; Section Head: ; Article Type: REVIEW ARTICLE Page 2 of 30 when consumers are uncertain about product quality, the provision of information can help to better align choices with consumer preferences. However, consumers may not use nutritional labels because consumers still require time and effort to process the information. Reading a nutritional facts panel (NFP), for instance, necessitates that the consumer remove the product from the shelf and turn the product to read the nutritional information on the back or side. In addition, consumers often have difficulty evaluating the information provided on the NFP or how to relate it to a healthy diet. Berning et al. (2008) present a simple model of demand for nutritional information. The consumer chooses to consume goods and information to maximize utility subject to budget and time constraints, which include time to acquire and to process nutritional information. Consumers who have strong preferences for nutritional content will acquire more nutritional information. Alternatively, other consumers may derive more utility from appearance or taste. Following Becker & Murphy (1993), Berning et al. show that nutritional information may act as a complement to the consumption of products with unknown nutritional quality, similar to the way advertisements complement advertised goods. From a policy perspective, the rise in the U.S. obesity rate coupled with the asymmetry of information have resulted in changes in the regulatory environment. The U.S. Food and Drug Administration (FDA) is currently considering a change to the format and content of nutritional labels, originally implemented in 1994 to promote increased label use. Consumers’ general understanding of the link between food consumption and health, and widespread interest in the provision of nutritional information on food labels, is documented in the existing literature (e.g., Williams 2005, Grunert & Wills 2007). Yet only approximately half Publisher: ANNUALREVIEWS; Journal: ARRE: Annual Review of Resource Economics; Copyright: Volume: 3; Issue: 0; Manuscript: 3_McCluskey; Month: ; Year: 2011 DOI: ; TOC Head: ; Section Head: ; Article Type: REVIEW ARTICLE Page 3 of 30 of consumers claim to use NFPs when making food purchasing decisions (Blitstein & Evans 2006). Moreover, self-reported consumer use of nutritional labels has declined from 1995 to 2006, with the largest decline for younger age groups (20–29 years) and less educated consumers (Todd & Variyam 2008). This decline supports research findings that consumers prefer for short front label claims over the NFP’s lengthy back label explanations (e.g., Levy & Fein 1998, Wansink et al. 2004, Williams 2005, Grunert & Wills 2007). Furthermore, regulatory rules and enforcement policies may have induced firms to move away from reinforcing nutritional claims through advertising (e.g., Ippolito & Pappalardo 2002). Finally, critical media coverage of regulatory challenges (e.g., Nestle 2000) may have contributed to decreased labeling usage over time. Excellent review papers on this topic preceded and inspired this present review (e.g., Baltas 2001, Williams 2005, Drichoutis et al. 2006). In particular, Drichoutis et al. 
(2006) reviews the nutritional labeling literature and addresses specific issues regarding the determinants of label use, the debate on mandatory labeling, label formats preferred by consumers, and the effect of nutritional label use on purchase and dietary behavior. The current review article updates and complements these earlier reviews by focusing on recent work and highlighting major contributions in applied analyses on how consumers value, utilize, and respond to nutritional labels. We first cover the health and nutritional aspects of consumer food choices found in the literature to frame the discussion on nutritional labels in the context of the recent debate on obesity prevention policies. Second, we discuss the different empirical approaches that are utilized to investigate consumers’ response to and valuation of nutritional labels, classifying existing work into three categories according to the empirical strategy and data sources. First, we present findings based on consumer surveys and stated consumer responses to Publisher: ANNUALREVIEWS; Journal: ARRE: Annual Review of Resource Economics; Copyright: Volume: 3; Issue: 0; Manuscript: 3_McCluskey; Month: ; Year: 2011 DOI: ; TOC Head: ; Section Head: ; Article Type: REVIEW ARTICLE Page 4 of 30 labels. The second set of articles reviewed utilizes nonexperimental data and focuses on estimating consumer valuation of labels on the basis of revealed preferences. Here, the empirical strategy is structural, using hedonic methods, structural demand analyses, or discrete choice models and allowing for estimation of consumers’ willingness to pay (WTP) for nutritional information. The last set of empirical contributions discussed is based on experimental data, differentiating market-level and natural experiments from laboratory evidence. These studies employ mainly reduced-form approaches. Finally, we conclude with a discussion of avenues for future research. CONSUMER FOOD DEMAND, NUTRITIONAL LABELS, AND OBESITY PREVENTION The U.S. Department of Health and Public Services declared the reduction of obesity rates to less than 15% to be one of the national health objectives for 2010, yet in 2009 no state met these targets, with only two states reporting obesity rates less than 20% (CDC 2010). Researchers have studied and identified many contributing factors, such as the decreasing relative price of caloriedense food (Chou et al. 2004) and marketing practices that took advantage of behavioral reactions to food (Smith 2004). Other researchers argue that an increased prevalence of fast food (Cutler et al. 2003) and increased portion sizes in restaurants and at home (Wansink & van Ittersum 2007) may be the driving factors of increased food consumption. In addition, food psychologists have focused on changes in the eating environment, pointing to distractions such as television, books, conversation with others, or preoccupation with work as leading to increased food intake (Wansink 2004). Although each of these factors potentially contributes to the obesity epidemic, they do not necessarily mean that consumers wi",
"title": ""
},
{
"docid": "c3e2ceebd3868dd9fff2a87fdd339dce",
"text": "Augmented Reality (AR) holds unique and promising potential to bridge between real-world activities and digital experiences, allowing users to engage their imagination and boost their creativity. We propose the concept of Augmented Creativity as employing ar on modern mobile devices to enhance real-world creative activities, support education, and open new interaction possibilities. We present six prototype applications that explore and develop Augmented Creativity in different ways, cultivating creativity through ar interactivity. Our coloring book app bridges coloring and computer-generated animation by allowing children to create their own character design in an ar setting. Our music apps provide a tangible way for children to explore different music styles and instruments in order to arrange their own version of popular songs. In the gaming domain, we show how to transform passive game interaction into active real-world movement that requires coordination and cooperation between players, and how ar can be applied to city-wide gaming concepts. We employ the concept of Augmented Creativity to authoring interactive narratives with an interactive storytelling framework. Finally, we examine how Augmented Creativity can provide a more compelling way to understand complex concepts, such as computer programming.",
"title": ""
},
{
"docid": "583d2f754a399e8446855b165407f6ee",
"text": "In this work, classification of cellular structures in the high resolutional histopathological images and the discrimination of cellular and non-cellular structures have been investigated. The cell classification is a very exhaustive and time-consuming process for pathologists in medicine. The development of digital imaging in histopathology has enabled the generation of reasonable and effective solutions to this problem. Morever, the classification of digital data provides easier analysis of cell structures in histopathological data. Convolutional neural network (CNN), constituting the main theme of this study, has been proposed with different spatial window sizes in RGB color spaces. Hence, to improve the accuracies of classification results obtained by supervised learning methods, spatial information must also be considered. So, spatial dependencies of cell and non-cell pixels can be evaluated within different pixel neighborhoods in this study. In the experiments, the CNN performs superior than other pixel classification methods including SVM and k-Nearest Neighbour (k-NN). At the end of this paper, several possible directions for future research are also proposed.",
"title": ""
},
{
"docid": "20e5855c2bab00b7f91cca5d7bd07245",
"text": "The increase in the number and complexity of biological databases has raised the need for modern and powerful data analysis tools and techniques. In order to fulfill these requirements, the machine learning discipline has become an everyday tool in bio-laboratories. The use of machine learning techniques has been extended to a wide spectrum of bioinformatics applications. It is broadly used to investigate the underlying mechanisms and interactions between biological molecules in many diseases, and it is an essential tool in any biomarker discovery process. In this chapter, we provide a basic taxonomy of machine learning algorithms, and the characteristics of main data preprocessing, supervised classification, and clustering techniques are shown. Feature selection, classifier evaluation, and two supervised classification topics that have a deep impact on current bioinformatics are presented. We make the interested reader aware of a set of popular web resources, open source software tools, and benchmarking data repositories that are frequently used by the machine",
"title": ""
}
] | scidocsrr |
d211f8d25ed48575a3f39ca00c42ea4c | Managing Non-Volatile Memory in Database Systems | [
{
"docid": "149b1f7861d55e90b1f423ff98e765ca",
"text": "The advent of Storage Class Memory (SCM) is driving a rethink of storage systems towards a single-level architecture where memory and storage are merged. In this context, several works have investigated how to design persistent trees in SCM as a fundamental building block for these novel systems. However, these trees are significantly slower than DRAM-based counterparts since trees are latency-sensitive and SCM exhibits higher latencies than DRAM. In this paper we propose a novel hybrid SCM-DRAM persistent and concurrent B-Tree, named Fingerprinting Persistent Tree (FPTree) that achieves similar performance to DRAM-based counterparts. In this novel design, leaf nodes are persisted in SCM while inner nodes are placed in DRAM and rebuilt upon recovery. The FPTree uses Fingerprinting, a technique that limits the expected number of in-leaf probed keys to one. In addition, we propose a hybrid concurrency scheme for the FPTree that is partially based on Hardware Transactional Memory. We conduct a thorough performance evaluation and show that the FPTree outperforms state-of-the-art persistent trees with different SCM latencies by up to a factor of 8.2. Moreover, we show that the FPTree scales very well on a machine with 88 logical cores. Finally, we integrate the evaluated trees in memcached and a prototype database. We show that the FPTree incurs an almost negligible performance overhead over using fully transient data structures, while significantly outperforming other persistent trees.",
"title": ""
}
] | [
{
"docid": "20436a21b4105700d7e95a477a22d830",
"text": "We introduce a new type of Augmented Reality games: By using a simple webcam and Computer Vision techniques, we turn a standard real game board pawns into an AR game. We use these objects as a tangible interface, and augment them with visual effects. The game logic can be performed automatically by the computer. This results in a better immersion compared to the original board game alone and provides a different experience than a video game. We demonstrate our approach on Monopoly− [1], but it is very generic and could easily be adapted to any other board game.",
"title": ""
},
{
"docid": "467bb4ffb877b4e21ad4f7fc7adbd4a6",
"text": "In this paper, a 6 × 6 planar slot array based on a hollow substrate integrated waveguide (HSIW) is presented. To eliminate the tilting of the main beam, the slot array is fed from the centre at the back of the HSIW, which results in a blockage area. To reduce the impact on sidelobe levels, a slot extrusion technique is introduced. A simplified multiway power divider is demonstrated to feed the array elements and the optimisation procedure is described. To verify the antenna design, a 6 × 6 planar array is fabricated and measured in a low temperature co-fired ceramic (LTCC) technology. The HSIW has lower loss, comparable to standard WR28, and a high gain of 17.1 dBi at 35.5 GHz has been achieved in the HSIW slot array.",
"title": ""
},
{
"docid": "572453e5febc5d45be984d7adb5436c5",
"text": "An analysis of several role playing games indicates that player quests share common elements, and that these quests may be abstractly represented using a small expressive language. One benefit of this representation is that it can guide procedural content generation by allowing quests to be generated using this abstraction, and then later converting them into a concrete form within a game’s domain.",
"title": ""
},
{
"docid": "539fb99a52838d6ce6f980b9b9703a2b",
"text": "The Blinder-Oaxaca decomposition technique is widely used to identify and quantify the separate contributions of differences in measurable characteristics to group differences in an outcome of interest. The use of a linear probability model and the standard BlinderOaxaca decomposition, however, can provide misleading estimates when the dependent variable is binary, especially when group differences are very large for an influential explanatory variable. A simulation method of performing a nonlinear decomposition that uses estimates from a logit, probit or other nonlinear model was first developed in a Journal of Labor Economics article (Fairlie 1999). This nonlinear decomposition technique has been used in nearly a thousand subsequent studies published in a wide range of fields and disciplines. In this paper, I address concerns over path dependence in using the nonlinear decomposition technique. I also present a straightforward method of incorporating sample weights in the technique. I thank Eric Aldrich and Ben Jann for comments and suggestions, and Brandon Heck for research assistance.",
"title": ""
},
{
"docid": "590e0965ca61223d5fefb82e89f24fd0",
"text": "Large software projects contain significant code duplication, mainly due to copying and pasting code. Many techniques have been developed to identify duplicated code to enable applications such as refactoring, detecting bugs, and protecting intellectual property. Because source code is often unavailable, especially for third-party software, finding duplicated code in binaries becomes particularly important. However, existing techniques operate primarily on source code, and no effective tool exists for binaries.\n In this paper, we describe the first practical clone detection algorithm for binary executables. Our algorithm extends an existing tree similarity framework based on clustering of characteristic vectors of labeled trees with novel techniques to normalize assembly instructions and to accurately and compactly model their structural information. We have implemented our technique and evaluated it on Windows XP system binaries totaling over 50 million assembly instructions. Results show that it is both scalable and precise: it analyzed Windows XP system binaries in a few hours and produced few false positives. We believe our technique is a practical, enabling technology for many applications dealing with binary code.",
"title": ""
},
{
"docid": "a4a15096e116a6afc2730d1693b1c34f",
"text": "The present study reports on the construction of a dimensional measure of gender identity (gender dysphoria) for adolescents and adults. The 27-item gender identity/gender dysphoria questionnaire for adolescents and adults (GIDYQ-AA) was administered to 389 university students (heterosexual and nonheterosexual) and 73 clinic-referred patients with gender identity disorder. Principal axis factor analysis indicated that a one-factor solution, accounting for 61.3% of the total variance, best fits the data. Factor loadings were all >or= .30 (median, .82; range, .34-.96). A mean total score (Cronbach's alpha, .97) was computed, which showed strong evidence for discriminant validity in that the gender identity patients had significantly more gender dysphoria than both the heterosexual and nonheterosexual university students. Using a cut-point of 3.00, we found the sensitivity was 90.4% for the gender identity patients and specificity was 99.7% for the controls. The utility of the GIDYQ-AA is discussed.",
"title": ""
},
{
"docid": "82234158dc94216222efa5f80eee0360",
"text": "We investigate the possibility to prove security of the well-known blind signature schemes by Chaum, and by Pointcheval and Stern in the standard model, i.e., without random oracles. We subsume these schemes under a more general class of blind signature schemes and show that finding security proofs for these schemes via black-box reductions in the standard model is hard. Technically, our result deploys meta-reduction techniques showing that black-box reductions for such schemes could be turned into efficient solvers for hard non-interactive cryptographic problems like RSA or discrete-log. Our technique yields significantly stronger impossibility results than previous meta-reductions in other settings by playing off the two security requirements of the blind signatures (unforgeability and blindness).",
"title": ""
},
{
"docid": "d0985c38f3441ca0d69af8afaf67c998",
"text": "In this paper we discuss the importance of ambiguity, uncertainty and limited information on individuals’ decision making in situations that have an impact on their privacy. We present experimental evidence from a survey study that demonstrates the impact of framing a marketing offer on participants’ willingness to accept when the consequences of the offer are uncertain and highly ambiguous.",
"title": ""
},
{
"docid": "96c1f90ff04e7fd37d8b8a16bc4b9c54",
"text": "Graph triangulation, which finds all triangles in a graph, has been actively studied due to its wide range of applications in the network analysis and data mining. With the rapid growth of graph data size, disk-based triangulation methods are in demand but little researched. To handle a large-scale graph which does not fit in memory, we must iteratively load small parts of the graph. In the existing literature, achieving the ideal cost has been considered to be impossible for billion-scale graphs due to the memory size constraint. In this paper, we propose an overlapped and parallel disk-based triangulation framework for billion-scale graphs, OPT, which achieves the ideal cost by (1) full overlap of the CPU and I/O operations and (2) full parallelism of multi-core CPU and FlashSSD I/O. In OPT, triangles in memory are called the internal triangles while triangles constituting vertices in memory and vertices in external memory are called the external triangles. At the macro level, OPT overlaps the internal triangulation and the external triangulation, while it overlaps the CPU and I/O operations at the micro level. Thereby, the cost of OPT is close to the ideal cost. Moreover, OPT instantiates both vertex-iterator and edge-iterator models and benefits from multi-thread parallelism on both types of triangulation. Extensive experiments conducted on large-scale datasets showed that (1) OPT achieved the elapsed time close to that of the ideal method with less than 7% of overhead under the limited memory budget, (2) OPT achieved linear speed-up with an increasing number of CPU cores, (3) OPT outperforms the state-of-the-art parallel method by up to an order of magnitude with 6 CPU cores, and (4) for the first time in the literature, the triangulation results are reported for a billion-vertex scale real-world graph.",
"title": ""
},
{
"docid": "6a33013c19dc59d8871e217461d479e9",
"text": "Cancer tissues in histopathology images exhibit abnormal patterns; it is of great clinical importance to label a histopathology image as having cancerous regions or not and perform the corresponding image segmentation. However, the detailed annotation of cancer cells is often an ambiguous and challenging task. In this paper, we propose a new learning method, multiple clustered instance learning (MCIL), to classify, segment and cluster cancer cells in colon histopathology images. The proposed MCIL method simultaneously performs image-level classification (cancer vs. non-cancer image), pixel-level segmentation (cancer vs. non-cancer tissue), and patch-level clustering (cancer subclasses). We embed the clustering concept into the multiple instance learning (MIL) setting and derive a principled solution to perform the above three tasks in an integrated framework. Experimental results demonstrate the efficiency and effectiveness of MCIL in analyzing colon cancers.",
"title": ""
},
{
"docid": "b32286014bb7105e62fba85a9aab9019",
"text": "PURPOSE\nSystemic thrombolysis for the treatment of acute pulmonary embolism (PE) carries an estimated 20% risk of major hemorrhage, including a 3%-5% risk of hemorrhagic stroke. The authors used evidence-based methods to evaluate the safety and effectiveness of modern catheter-directed therapy (CDT) as an alternative treatment for massive PE.\n\n\nMATERIALS AND METHODS\nThe systematic review was initiated by electronic literature searches (MEDLINE, EMBASE) for studies published from January 1990 through September 2008. Inclusion criteria were applied to select patients with acute massive PE treated with modern CDT. Modern techniques were defined as the use of low-profile devices (< or =10 F), mechanical fragmentation and/or aspiration of emboli including rheolytic thrombectomy, and intraclot thrombolytic injection if a local drug was infused. Relevant non-English language articles were translated into English. Paired reviewers assessed study quality and abstracted data. Meta-analysis was performed by using random effects models to calculate pooled estimates for complications and clinical success rates across studies. Clinical success was defined as stabilization of hemodynamics, resolution of hypoxia, and survival to hospital discharge.\n\n\nRESULTS\nFive hundred ninety-four patients from 35 studies (six prospective, 29 retrospective) met the criteria for inclusion. The pooled clinical success rate from CDT was 86.5% (95% confidence interval [CI]: 82.1%, 90.2%). Pooled risks of minor and major procedural complications were 7.9% (95% CI: 5.0%, 11.3%) and 2.4% (95% CI: 1.9%, 4.3%), respectively. Data on the use of systemic thrombolysis before CDT were available in 571 patients; 546 of those patients (95%) were treated with CDT as the first adjunct to heparin without previous intravenous thrombolysis.\n\n\nCONCLUSIONS\nModern CDT is a relatively safe and effective treatment for acute massive PE. At experienced centers, CDT should be considered as a first-line treatment for patients with massive PE.",
"title": ""
},
{
"docid": "1ee444fda98b312b0462786f5420f359",
"text": "After years of banning consumer devices (e.g., iPads and iPhone) and applications (e.g., DropBox, Evernote, iTunes) organizations are allowing employees to use their consumer tools in the workplace. This IT consumerization phenomenon will have serious consequences on IT departments which have historically valued control, security, standardization and support (Harris et al. 2012). Based on case studies of three organizations in different stages of embracing IT consumerization, this study identifies the conflicts IT consumerization creates for IT departments. All three organizations experienced similar goal and behavior conflicts, while identity conflict varied depending upon the organizations’ stage implementing consumer tools (e.g., embryonic, initiating or institutionalized). Theoretically, this study advances IT consumerization research by applying a role conflict perspective to understand consumerization’s impact on the IT department.",
"title": ""
},
{
"docid": "da9432171ceba5ae76fa76a8416b1a8f",
"text": "Social tagging on online portals has become a trend now. It has emerged as one of the best ways of associating metadata with web objects. With the increase in the kinds of web objects becoming available, collaborative tagging of such objects is also developing along new dimensions. This popularity has led to a vast literature on social tagging. In this survey paper, we would like to summarize different techniques employed to study various aspects of tagging. Broadly, we would discuss about properties of tag streams, tagging models, tag semantics, generating recommendations using tags, visualizations of tags, applications of tags and problems associated with tagging usage. We would discuss topics like why people tag, what influences the choice of tags, how to model the tagging process, kinds of tags, different power laws observed in tagging domain, how tags are created, how to choose the right tags for recommendation, etc. We conclude with thoughts on future work in the area.",
"title": ""
},
{
"docid": "318aa0dab44cca5919100033aa692cd9",
"text": "Text classification is one of the important research issues in the field of text mining, where the documents are classified with supervised knowledge. In literature we can find many text representation schemes and classifiers/learning algorithms used to classify text documents to the predefined categories. In this paper, we present various text representation schemes and compare different classifiers used to classify text documents to the predefined classes. The existing methods are compared and contrasted based on qualitative parameters viz., criteria used for classification, algorithms adopted and classification time complexities.",
"title": ""
},
{
"docid": "709853992cae8d5b5fa4c3cc86d898a7",
"text": "The rise of big data age in the Internet has led to the explosive growth of data size. However, trust issue has become the biggest problem of big data, leading to the difficulty in data safe circulation and industry development. The blockchain technology provides a new solution to this problem by combining non-tampering, traceable features with smart contracts that automatically execute default instructions. In this paper, we present a credible big data sharing model based on blockchain technology and smart contract to ensure the safe circulation of data resources.",
"title": ""
},
{
"docid": "c5f521d5e5e089261914f6784e2d77da",
"text": "Generating structured query language (SQL) from natural language is an emerging research topic. This paper presents a new learning paradigm from indirect supervision of the answers to natural language questions, instead of SQL queries. This paradigm facilitates the acquisition of training data due to the abundant resources of question-answer pairs for various domains in the Internet, and expels the difficult SQL annotation job. An endto-end neural model integrating with reinforcement learning is proposed to learn SQL generation policy within the answerdriven learning paradigm. The model is evaluated on datasets of different domains, including movie and academic publication. Experimental results show that our model outperforms the baseline models.",
"title": ""
},
{
"docid": "0ccfbd8f2b8979ec049d94fa6dddf614",
"text": "Using mobile games in education combines situated and active learning with fun in a potentially excellent manner. The effects of a mobile city game called Frequency 1550, which was developed by The Waag Society to help pupils in their first year of secondary education playfully acquire historical knowledge of medieval Amsterdam, were investigated in terms of pupil engagement in the game, historical knowledge, and motivation for History in general and the topic of the Middle Ages in particular. A quasi-experimental design was used with 458 pupils from 20 classes from five schools. The pupils in 10 of the classes played the mobile history game whereas the pupils in the other 10 classes received a regular, project-based lesson series. The results showed those pupils who played the game to be engaged and to gain significantly more knowledge about medieval Amsterdam than those pupils who received regular projectbased instruction. No significant differences were found between the two groups with respect to motivation for History or the MiddleAges. The impact of location-based technology and gamebased learning on pupil knowledge and motivation are discussed along with suggestions for future research.",
"title": ""
},
{
"docid": "9415adaa3ec2f7873a23cc2017a2f1ee",
"text": "In this paper we introduce a new unsupervised reinforcement learning method for discovering the set of intrinsic options available to an agent. This set is learned by maximizing the number of different states an agent can reliably reach, as measured by the mutual information between the set of options and option termination states. To this end, we instantiate two policy gradient based algorithms, one that creates an explicit embedding space of options and one that represents options implicitly. The algorithms also provide an explicit measure of empowerment in a given state that can be used by an empowerment maximizing agent. The algorithm scales well with function approximation and we demonstrate the applicability of the algorithm on a range of tasks.",
"title": ""
},
{
"docid": "50875a63d0f3e1796148d809b5673081",
"text": "Coreference resolution seeks to find the mentions in text that refer to the same real-world entity. This task has been well-studied in NLP, but until recent years, empirical results have been disappointing. Recent research has greatly improved the state-of-the-art. In this review, we focus on five papers that represent the current state-ofthe-art and discuss how they relate to each other and how these advances will influence future work in this area.",
"title": ""
}
] | scidocsrr |
2100642ab81be76885180790c4aaaa95 | Interactive Dimensionality Reduction Through User-defined Combinations of Quality Metrics | [
{
"docid": "ed7a114d02244b7278c8872c567f1ba6",
"text": "We present a new visualization, called the Table Lens, for visualizing and making sense of large tables. The visualization uses a focus+context (fisheye) technique that works effectively on tabular information because it allows display of crucial label information and multiple distal focal areas. In addition, a graphical mapping scheme for depicting table contents has been developed for the most widespread kind of tables, the cases-by-variables table. The Table Lens fuses symbolic and graphical representations into a single coherent view that can be fluidly adjusted by the user. This fusion and interactivity enables an extremely rich and natural style of direct manipulation exploratory data analysis.",
"title": ""
}
] | [
{
"docid": "8f2b9981d15b8839547f56f5f1152882",
"text": "In this paper we study how to discover the evolution of topics over time in a time-stamped document collection. Our approach is uniquely designed to capture the rich topology of topic evolution inherent in the corpus. Instead of characterizing the evolving topics at fixed time points, we conceptually define a topic as a quantized unit of evolutionary change in content and discover topics with the time of their appearance in the corpus. Discovered topics are then connected to form a topic evolution graph using a measure derived from the underlying document network. Our approach allows inhomogeneous distribution of topics over time and does not impose any topological restriction in topic evolution graphs. We evaluate our algorithm on the ACM corpus.\n The topic evolution graphs obtained from the ACM corpus provide an effective and concrete summary of the corpus with remarkably rich topology that are congruent to our background knowledge. In a finer resolution, the graphs reveal concrete information about the corpus that were previously unknown to us, suggesting the utility of our approach as a navigational tool for the corpus.",
"title": ""
},
{
"docid": "673ce42f089d555d8457f35bf7dcb733",
"text": "Visual relationship detection aims to capture interactions between pairs of objects in images. Relationships between objects and humans represent a particularly important subset of this problem, with implications for challenges such as understanding human behaviour, and identifying affordances, amongst others. In addressing this problem we first construct a large-scale human-centric visual relationship detection dataset (HCVRD), which provides many more types of relationship annotation (nearly 10K categories) than the previous released datasets. This large label space better reflects the reality of human-object interactions, but gives rise to a long-tail distribution problem, which in turn demands a zero-shot approach to labels appearing only in the test set. This is the first time this issue has been addressed. We propose a webly-supervised approach to these problems and demonstrate that the proposed model provides a strong baseline on our HCVRD dataset.",
"title": ""
},
{
"docid": "f7a6cc4ebc1d2657175301dc05c86a7b",
"text": "Recent deep learning models have demonstrated strong capabilities for classifying text and non-text components in natural images. They extract a high-level feature globally computed from a whole image component (patch), where the cluttered background information may dominate true text features in the deep representation. This leads to less discriminative power and poorer robustness. In this paper, we present a new system for scene text detection by proposing a novel text-attentional convolutional neural network (Text-CNN) that particularly focuses on extracting text-related regions and features from the image components. We develop a new learning mechanism to train the Text-CNN with multi-level and rich supervised information, including text region mask, character label, and binary text/non-text information. The rich supervision information enables the Text-CNN with a strong capability for discriminating ambiguous texts, and also increases its robustness against complicated background components. The training process is formulated as a multi-task learning problem, where low-level supervised information greatly facilitates the main task of text/non-text classification. In addition, a powerful low-level detector called contrast-enhancement maximally stable extremal regions (MSERs) is developed, which extends the widely used MSERs by enhancing intensity contrast between text patterns and background. This allows it to detect highly challenging text patterns, resulting in a higher recall. Our approach achieved promising results on the ICDAR 2013 data set, with an F-measure of 0.82, substantially improving the state-of-the-art results.",
"title": ""
},
{
"docid": "8bcb5b946b9f5e07807ec9a44884cf4e",
"text": "Using data from two waves of a panel study of families who currently or recently received cash welfare benefits, we test hypotheses about the relationship between food hardships and behavior problems among two different age groups (458 children ages 3–5-and 747 children ages 6–12). Results show that food hardships are positively associated with externalizing behavior problems for older children, even after controlling for potential mediators such as parental stress, warmth, and depression. Food hardships are positively associated with internalizing behavior problems for older children, and with both externalizing and internalizing behavior problems for younger children, but these effects are mediated by parental characteristics. The implications of these findings for child and family interventions and food assistance programs are discussed. Food Hardships and Child Behavior Problems among Low-Income Children INTRODUCTION In the wake of the 1996 federal welfare reforms, several large-scale, longitudinal studies of welfare recipients and low-income families were launched with the intent of assessing direct benchmarks, such as work and welfare activity, over time, as well as indirect and unintended outcomes related to material hardship and mental health. One area of special concern to many researchers and policymakers alike is child well-being in the context of welfare reforms. As family welfare use and parental work activities change under new welfare policies, family income and material resources may also fluctuate. To the extent that family resources are compromised by changes in welfare assistance and earnings, children may experience direct hardships, such as instability in food consumption, which in turn may affect other areas of functioning. It is also possible that changes in parental work and family welfare receipt influence children indirectly through their caregivers. As parents themselves experience hardships or new stresses, their mental health and interactions with their children may change, which in turn could affect their children’s functioning. This research assesses whether one particular form of hardship, food hardship, is associated with adverse behaviors among low-income children. Specifically, analyses assess whether food hardships have relationships with externalizing (e.g., aggressive or hyperactive) and internalizing (e.g., anxietyand depression-related) child behavior problems, and whether associations between food hardships and behavior problems are mediated by parental stress, warmth, and depression. The study involves a panel survey of individuals in one state who were receiving Temporary Assistance for Needy Families (TANF) in 1998 and were caring for minor-aged children. Externalizing and internalizing behavior problems associated with a randomly selected child from each household are assessed in relation to key predictors, taking advantage of the prospective study design. 2 BACKGROUND Food hardships have been conceptualized by researchers in various ways. For example, food insecurity is defined by the U.S. Department of Agriculture (USDA) as the “limited or uncertain availability of nutritionally adequate and safe foods or limited or uncertain ability to acquire acceptable foods in socially acceptable ways” (Bickel, Nord, Price, Hamilton, and Cook, 2000, p. 6). 
An 18-item scale was developed by the USDA to assess household food insecurity with and without hunger, where hunger represents a potential result of more severe forms of food insecurity, but not a necessary condition for food insecurity to exist (Price, Hamilton, and Cook, 1997). Other researchers have used selected items from the USDA Food Security Module to assess food hardships (Nelson, 2004; Bickel et al., 2000) The USDA also developed the following single-item question to identify food insufficiency: “Which of the following describes the amount of food your household has to eat....enough to eat, sometimes not enough to eat, or often not enough to eat?” This measure addresses the amount of food available to a household, not assessments about the quality of the food consumed or worries about food (Alaimo, Olson and Frongillo, 1999; Dunifon and Kowaleski-Jones, 2003). The Community Childhood Hunger Identification Project (CCHIP) assesses food hardships using an 8-item measure to determine whether the household as a whole, adults as individuals, or children are affected by food shortages, perceived food insufficiency, or altered food intake due to resource constraints (Wehler, Scott, and Anderson, 1992). Depending on the number of affirmative answers, respondents are categorized as either “hungry,” “at-risk for hunger,” or “not hungry” (Wehler et al., 1992; Kleinman et al., 1998). Other measures, such as the Radimer/Cornell measures of hunger and food insecurity, have also been created to measure food hardships (Kendall, Olson, and Frongillo, 1996). In recent years, food hardships in the United States have been on the rise. After declining from 1995 to 1999, the prevalence of household food insecurity in households with children rose from 14.8 percent in 1999 to 16.5 percent in 2002, and the prevalence of household food insecurity with hunger in households with children rose from 0.6 percent in 1999 to 0.7 percent in 2002 (Nord, Andrews, and 3 Carlson, 2003). A similar trend was also observed using a subset of questions from the USDA Food Security Module (Nelson, 2004). Although children are more likely than adults to be buffered from household food insecurity (Hamilton et al., 1997) and inadequate nutrition (McIntyre et al., 2003), a concerning number of children are reported to skip meals or have reduced food intake due to insufficient household resources. Nationally, children in 219,000 U.S. households were hungry at times during the 12 months preceding May 1999 (Nord and Bickel, 2002). Food Hardships and Child Behavior Problems Very little research has been conducted on the effects of food hardship on children’s behaviors, although the existing research suggests that it is associated with adverse behavioral and mental health outcomes for children. Using data from the National Health and Nutrition Examination Survey (NHANES), Alaimo and colleagues (2001a) found that family food insufficiency is positively associated with visits to a psychologist among 6to 11year-olds. Using the USDA Food Security Module, Reid (2002) found that greater severity and longer periods of children’s food insecurity were associated with greater levels of child behavior problems. Dunifon and Kowaleski-Jones (2003) found, using the same measure, that food insecurity is associated with fewer positive behaviors among school-age children. 
Children from households with incomes at or below 185 percent of the poverty level who are identified as hungry are also more likely to have a past or current history of mental health counseling and to have more psychosocial dysfunctions than children who are not identified as hungry (Kleinman et al., 1998; Murphy et al., 1998). Additionally, severe child hunger in both pre-school-age and school-age children is associated with internalizing behavior problems (Weinreb et al., 2002), although Reid (2002) found a stronger association between food insecurity and externalizing behaviors than between food insecurity and internalizing behaviors among children 12 and younger. Other research on hunger has identified several adverse behavioral consequences for children (See Wachs, 1995 for a review; Martorell, 1996; Pollitt, 1994), including poor play behaviors, poor preschool achievement, and poor scores on 4 developmental indices (e.g., Bayley Scores). These studies have largely taken place in developing countries, where the prevalence of hunger and malnutrition is much greater than in the U.S. population (Reid, 2002), so it is not known whether similar associations would emerge for children in the United States. Furthermore, while existing studies point to a relationship between food hardships and adverse child behavioral outcomes, limitations in design stemming from cross-sectional data, reliance on singleitem measures of food difficulties, or failure to adequately control for factors that may confound the observed relationships make it difficult to assess the robustness of the findings. For current and recent recipients of welfare and their families, increased food hardships are a potential problem, given the fluctuations in benefits and resources that families are likely to experience as a result of legislative reforms. To the extent that food hardships are tied to economic factors, we may expect levels of food hardships to increase for families who experience periods of insufficient material resources, and to decrease for families whose economic situations improve. If levels of food hardship are associated with the availability of parents and other caregivers, we may find that the provision of food to children changes as parents work more hours, or as children spend more time in alternative caregiving arrangements. Poverty and Child Behavior Problems When exploring the relationship between food hardships and child well-being, it is crucial to ensure that factors associated with economic hardship and poverty are adequately controlled, particularly since poverty has been linked to some of the same outcomes as food hardships. Extensive research has shown a higher prevalence of behavior problems among children from families of lower socioeconomic status (McLoyd, 1998; Duncan, Brooks-Gunn, and Klebanov, 1994), and from families receiving welfare (Hofferth, Smith, McLoyd, and Finkelstein, 2000). This relationship has been shown to be stronger among children in single-parent households than among those in two-parent households (Hanson, McLanahan, and Thompson, 1996), and among younger children (Bradley and Corwyn, 2002; McLoyd, 5 1998), with less consistent findings for adolescents (Conger, Conger, and Elder, 1997; Elder, N",
"title": ""
},
{
"docid": "df02dafb455e2b68035cf8c150e28a0a",
"text": "Blueberry, raspberry and strawberry may have evolved strategies for survival due to the different soil conditions available in their natural environment. Since this might be reflected in their response to rhizosphere pH and N form supplied, investigations were carried out in order to compare effects of nitrate and ammonium nutrition (the latter at two different pH regimes) on growth, CO2 gas exchange, and on the activity of key enzymes of the nitrogen metabolism of these plant species. Highbush blueberry (Vaccinium corymbosum L. cv. 13–16–A), raspberry (Rubus idaeus L. cv. Zeva II) and strawberry (Fragaria × ananassa Duch. cv. Senga Sengana) were grown in 10 L black polyethylene pots in quartz sand with and without 1% CaCO3 (w: v), respectively. Nutrient solutions supplied contained nitrate (6 mM) or ammonium (6 mM) as the sole nitrogen source. Compared with strawberries fed with nitrate nitrogen, supply of ammonium nitrogen caused a decrease in net photosynthesis and dry matter production when plants were grown in quartz sand without added CaCO3. In contrast, net photosynthesis and dry matter production increased in blueberries fed with ammonium nitrogen, while dry matter production of raspberries was not affected by the N form supplied. In quartz sand with CaCO3, ammonium nutrition caused less deleterious effects on strawberries, and net photosynthesis in raspberries increased as compared to plants grown in quartz sand without CaCO3 addition. Activity of nitrate reductase (NR) was low in blueberries and could only be detected in the roots of plants supplied with nitrate nitrogen. In contrast, NR activity was high in leaves, but low in roots of raspberry and strawberry plants. Ammonium nutrition caused a decrease in NR level in leaves. Activity of glutamine synthetase (GS) was high in leaves but lower in roots of blueberry, raspberry and strawberry plants. The GS level was not significantly affected by the nitrogen source supplied. The effects of nitrate or ammonium nitrogen on net photosynthesis, growth, and activity of enzymes in blueberry, raspberry and strawberry cultivars appear to reflect their different adaptability to soil pH and N form due to the conditions of their natural environment.",
"title": ""
},
{
"docid": "cdda683f089f630176b88c1b91c1cff2",
"text": "Article history: Received 15 March 2011 Received in revised form 28 November 2011 Accepted 23 December 2011 Available online 29 December 2011",
"title": ""
},
{
"docid": "3f1ab17fb722d5a2612675673b200a82",
"text": "In this paper, we show that the recent integration of statistical models with deep recurrent neural networks provides a new way of formulating volatility (the degree of variation of time series) models that have been widely used in time series analysis and prediction in finance. The model comprises a pair of complementary stochastic recurrent neural networks: the generative network models the joint distribution of the stochastic volatility process; the inference network approximates the conditional distribution of the latent variables given the observables. Our focus here is on the formulation of temporal dynamics of volatility over time under a stochastic recurrent neural network framework. Experiments on real-world stock price datasets demonstrate that the proposed model generates a better volatility estimation and prediction that outperforms mainstream methods, e.g., deterministic models such as GARCH and its variants, and stochastic models namely the MCMC-based model stochvol as well as the Gaussian process volatility model GPVol, on average negative log-likelihood.",
"title": ""
},
{
"docid": "c3f81c5e4b162564b15be399b2d24750",
"text": "Although memory performance benefits from the spacing of information at encoding, judgments of learning (JOLs) are often not sensitive to the benefits of spacing. The present research examines how practice, feedback, and instruction influence JOLs for spaced and massed items. In Experiment 1, in which JOLs were made after the presentation of each item and participants were given multiple study-test cycles, JOLs were strongly influenced by the repetition of the items, but there was little difference in JOLs for massed versus spaced items. A similar effect was shown in Experiments 2 and 3, in which participants scored their own recall performance and were given feedback, although participants did learn to assign higher JOLs to spaced items with task experience. In Experiment 4, after participants were given direct instruction about the benefits of spacing, they showed a greater difference for JOLs of spaced vs massed items, but their JOLs still underestimated their recall for spaced items. Although spacing effects are very robust and have important implications for memory and education, people often underestimate the benefits of spaced repetition when learning, possibly due to the reliance on processing fluency during study and attending to repetition, and not taking into account the beneficial aspects of study schedule.",
"title": ""
},
{
"docid": "7490d342ffb59bd396421e198b243775",
"text": "Antioxidant activities of defatted sesame meal extract increased as the roasting temperature of sesame seed increased, but the maximum antioxidant activity was achieved when the seeds were roasted at 200 °C for 60 min. Roasting sesame seeds at 200 °C for 60 min significantly increased the total phenolic content, radical scavenging activity (RSA), reducing powers, and antioxidant activity of sesame meal extract; and several low-molecularweight phenolic compounds such as 2-methoxyphenol, 4-methoxy-3-methylthio-phenol, 5-amino-3-oxo-4hexenoic acid, 3,4-methylenedioxyphenol (sesamol), 3-hydroxy benzoic acid, 4-hydroxy benzoic acid, vanillic acid, filicinic acid, and 3,4-dimethoxy phenol were newly formed in the sesame meal after roasting sesame seeds at 200 °C for 60 min. These results indicate that antioxidant activity of defatted sesame meal extracts was significantly affected by roasting temperature and time of sesame seeds.",
"title": ""
},
{
"docid": "44d8cb42bd4c2184dc226cac3adfa901",
"text": "Several descriptions of redundancy are presented in the literature , often from widely dif ferent perspectives . Therefore , a discussion of these various definitions and the salient points would be appropriate . In particular , any definition and redundancy needs to cover the following issues ; the dif ference between multiple solutions and an infinite number of solutions ; degenerate solutions to inverse kinematics ; task redundancy ; and the distinction between non-redundant , redundant and highly redundant manipulators .",
"title": ""
},
{
"docid": "dcf7214c15c13f13d33c9a7b2c216588",
"text": "Many machine learning tasks such as multiple instance learning, 3D shape recognition and few-shot image classification are defined on sets of instances. Since solutions to such problems do not depend on the permutation of elements of the set, models used to address them should be permutation invariant. We present an attention-based neural network module, the Set Transformer, specifically designed to model interactions among elements in the input set. The model consists of an encoder and a decoder, both of which rely on attention mechanisms. In an effort to reduce computational complexity, we introduce an attention scheme inspired by inducing point methods from sparse Gaussian process literature. It reduces computation time of self-attention from quadratic to linear in the number of elements in the set. We show that our model is theoretically attractive and we evaluate it on a range of tasks, demonstrating increased performance compared to recent methods for set-structured data.",
"title": ""
},
{
"docid": "74af567f4b0257dc12c3346146c0f46c",
"text": "This paper presents the experimental data of human mechanical impedance properties (HMIPs) of the arms measured in steering operations according to the angle of a steering wheel (limbs posture) and the steering torque (muscle cocontraction). The HMIP data show that human stiffness/viscosity has the minimum/maximum value at the neutral angle of the steering wheel in relax (standard condition) and increases/decreases for the amplitude of the steering angle and the torque, and that the stability of the arms' motion in handling the steering wheel becomes high around the standard condition. Next, a novel methodology for designing an adaptive steering control system based on the HMIPs of the arms is proposed, and the effectiveness was then demonstrated via a set of double-lane-change tests, with several subjects using the originally developed stationary driving simulator and the 4-DOF driving simulator with a movable cockpit.",
"title": ""
},
{
"docid": "f5648e3bd38e876b53ee748021e165f2",
"text": "The existing image captioning approaches typically train a one-stage sentence decoder, which is difficult to generate rich fine-grained descriptions. On the other hand, multi-stage image caption model is hard to train due to the vanishing gradient problem. In this paper, we propose a coarse-to-fine multi-stage prediction framework for image captioning, composed of multiple decoders each of which operates on the output of the previous stage, producing increasingly refined image descriptions. Our proposed learning approach addresses the difficulty of vanishing gradients during training by providing a learning objective function that enforces intermediate supervisions. Particularly, we optimize our model with a reinforcement learning approach which utilizes the output of each intermediate decoder’s test-time inference algorithm as well as the output of its preceding decoder to normalize the rewards, which simultaneously solves the well-known exposure bias problem and the loss-evaluation mismatch problem. We extensively evaluate the proposed approach on MSCOCO and show that our approach can achieve the state-of-the-art performance.",
"title": ""
},
{
"docid": "c3566171b68e4025931a72064e74e4ae",
"text": "Training a Fully Convolutional Network (FCN) for semantic segmentation requires a large number of pixel-level masks, which involves a large amount of human labour and time for annotation. In contrast, image-level labels are much easier to obtain. In this work, we propose a novel method for weakly supervised semantic segmentation with only image-level labels. The method relies on a large scale co-segmentation framework that can produce object masks for a group of images containing objects belonging to the same semantic class. We first retrieve images from search engines, e.g. Flickr and Google, using semantic class names as queries, e.g. class names in PASCAL VOC 2012. We then use high quality masks produced by co-segmentation on the retrieved images as well as the target dataset images with image level labels to train segmentation networks. We obtain IoU 56.9 on test set of PASCAL VOC 2012, which reaches state of the art performance.",
"title": ""
},
{
"docid": "363872994876ab6c68584d4f31913b43",
"text": "The Internet is quickly becoming the world’s largest public electronic marketplace. It is estimated to reach 50 million people worldwide, with growth estimates averaging approximately 10% per month. Innovative business professionals have discovered that the Internet can A BUYER’S-EYE VIEW OF ONLINE PURCHASING WORRIES. • H U A I Q I N G W A N G , M A T T H E W K . O . L E E , A N D C H E N W A N G •",
"title": ""
},
{
"docid": "0d9420b97012ce445fdf39fb009e32c4",
"text": "Greater numbers of young children with complicated, serious physical health, mental health, or developmental problems are entering foster care during the early years when brain growth is most active. Every effort should be made to make foster care a positive experience and a healing process for the child. Threats to a child’s development from abuse and neglect should be understood by all participants in the child welfare system. Pediatricians have an important role in assessing the child’s needs, providing comprehensive services, and advocating on the child’s behalf. The developmental issues important for young children in foster care are reviewed, including: 1) the implications and consequences of abuse, neglect, and placement in foster care on early brain development; 2) the importance and challenges of establishing a child’s attachment to caregivers; 3) the importance of considering a child’s changing sense of time in all aspects of the foster care experience; and 4) the child’s response to stress. Additional topics addressed relate to parental roles and kinship care, parent-child contact, permanency decision-making, and the components of comprehensive assessment and treatment of a child’s development and mental health needs. More than 500 000 children are in foster care in the United States.1,2 Most of these children have been the victims of repeated abuse and prolonged neglect and have not experienced a nurturing, stable environment during the early years of life. Such experiences are critical in the shortand long-term development of a child’s brain and the ability to subsequently participate fully in society.3–8 Children in foster care have disproportionately high rates of physical, developmental, and mental health problems1,9 and often have many unmet medical and mental health care needs.10 Pediatricians, as advocates for children and their families, have a special responsibility to evaluate and help address these needs. Legal responsibility for establishing where foster children live and which adults have custody rests jointly with the child welfare and judiciary systems. Decisions about assessment, care, and planning should be made with sufficient information about the particular strengths and challenges of each child. Pediatricians have an important role in helping to develop an accurate, comprehensive profile of the child. To create a useful assessment, it is imperative that complete health and developmental histories are available to the pediatrician at the time of these evaluations. Pediatricians and other professionals with expertise in child development should be proactive advisors to child protection workers and judges regarding the child’s needs and best interests, particularly regarding issues of placement, permanency planning, and medical, developmental, and mental health treatment plans. For example, maintaining contact between children and their birth families is generally in the best interest of the child, and such efforts require adequate support services to improve the integrity of distressed families. However, when keeping a family together may not be in the best interest of the child, alternative placement should be based on social, medical, psychological, and developmental assessments of each child and the capabilities of the caregivers to meet those needs. Health care systems, social services systems, and judicial systems are frequently overwhelmed by their responsibilities and caseloads. 
Pediatricians can serve as advocates to ensure each child's conditions and needs are evaluated and treated properly and to improve the overall operation of these systems. Availability and full utilization of resources ensure comprehensive assessment, planning, and provision of health care. Adequate knowledge about each child's development supports better placement, custody, and treatment decisions. Improved programs for all children enhance the therapeutic effects of government-sponsored protective services (eg, foster care, family maintenance). The following issues should be considered when social agencies intervene and when physicians participate in caring for children in protective services. EARLY BRAIN AND CHILD DEVELOPMENT More children are entering foster care in the early years of life when brain growth and development are most active.11–14 During the first 3 to 4 years of life, the anatomic brain structures that govern personality traits, learning processes, and coping with stress and emotions are established, strengthened, and made permanent.15,16 If unused, these structures atrophy.17 The nerve connections and neurotransmitter networks that are forming during these critical years are influenced by negative environmental conditions, including lack of stimulation, child abuse, or violence within the family.18 It is known that emotional and cognitive disruptions in the early lives of children have the potential to impair brain development.18 Paramount in the lives of these children is their need for continuity with their primary attachment figures and a sense of permanence that is enhanced",
"title": ""
},
{
"docid": "5d98548bc4f65d66a8ece7e70cb61bc4",
"text": "0140-3664/$ see front matter 2011 Elsevier B.V. A doi:10.1016/j.comcom.2011.09.003 ⇑ Corresponding author. Tel.: +86 10 62283240. E-mail address: [email protected] (W. Li). Value-added applications in vehicular ad hoc network (VANET) come with the emergence of electronic trading. The restricted connectivity scenario in VANET, where the vehicle cannot communicate directly with the bank for authentication due to the lack of internet access, opens up new security challenges. Hence a secure payment protocol, which meets the additional requirements associated with VANET, is a must. In this paper, we propose an efficient and secure payment protocol that aims at the restricted connectivity scenario in VANET. The protocol applies self-certified key agreement to establish symmetric keys, which can be integrated with the payment phase. Thus both the computational cost and communication cost can be reduced. Moreover, the protocol can achieve fair exchange, user anonymity and payment security. 2011 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "64de7935c22f74069721ff6e66a8fe8c",
"text": "In the setting of secure multiparty computation, a set of n parties with private inputs wish to jointly compute some functionality of their inputs. One of the most fundamental results of secure computation was presented by Ben-Or, Goldwasser, and Wigderson (BGW) in 1988. They demonstrated that any n-party functionality can be computed with perfect security, in the private channels model. When the adversary is semi-honest, this holds as long as $$t<n/2$$ t < n / 2 parties are corrupted, and when the adversary is malicious, this holds as long as $$t<n/3$$ t < n / 3 parties are corrupted. Unfortunately, a full proof of these results was never published. In this paper, we remedy this situation and provide a full proof of security of the BGW protocol. This includes a full description of the protocol for the malicious setting, including the construction of a new subprotocol for the perfect multiplication protocol that seems necessary for the case of $$n/4\\le t<n/3$$ n / 4 ≤ t < n / 3 .",
"title": ""
},
{
"docid": "9b19f343a879430283881a69e3f9cb78",
"text": "Effective analysis of applications (shortly apps) is essential to understanding apps' behavior. Two analysis approaches, i.e., static and dynamic, are widely used; although, both have well known limitations. Static analysis suffers from obfuscation and dynamic code updates. Whereas, it is extremely hard for dynamic analysis to guarantee the execution of all the code paths in an app and thereby, suffers from the code coverage problem. However, from a security point of view, executing all paths in an app might be less interesting than executing certain potentially malicious paths in the app. In this work, we use a hybrid approach that combines static and dynamic analysis in an iterative manner to cover their shortcomings. We use targeted execution of interesting code paths to solve the issues of obfuscation and dynamic code updates. Our targeted execution leverages a slicing-based analysis for the generation of data-dependent slices for arbitrary methods of interest (MOI) and on execution of the extracted slices for capturing their dynamic behavior. Motivated by the fact that malicious apps use Inter Component Communications (ICC) to exchange data [19], our main contribution is the automatic targeted triggering of MOI that use ICC for passing data between components. We implement a proof of concept, TelCC, and report the results of our evaluation.",
"title": ""
},
{
"docid": "04d8cd068da3aa0a7ede285de372a139",
"text": "Testing is a major cost factor in software development. Test automation has been proposed as one solution to reduce these costs. Test automation tools promise to increase the number of tests they run and the frequency at which they run them. So why not automate every test? In this paper we discuss the question \"When should a test be automated?\" and the trade-off between automated and manual testing. We reveal problems in the overly simplistic cost models commonly used to make decisions about automating testing. We introduce an alternative model based on opportunity cost and present influencing factors on the decision of whether or not to invest in test automation. Our aim is to stimulate discussion about these factors as well as their influence on the benefits and costs of automated testing in order to support researchers and practitioners reflecting on proposed automation approaches.",
"title": ""
}
] | scidocsrr |
cc5c0ab4f614ed9d050a47dfa842d177 | Supervised topic models for multi-label classification | [
{
"docid": "c44f060f18e55ccb1b31846e618f3282",
"text": "In multi-label classification, each sample can be associated with a set of class labels. When the number of labels grows to the hundreds or even thousands, existing multi-label classification methods often become computationally inefficient. In recent years, a number of remedies have been proposed. However, they are based either on simple dimension reduction techniques or involve expensive optimization problems. In this paper, we address this problem by selecting a small subset of class labels that can approximately span the original label space. This is performed by an efficient randomized sampling procedure where the sampling probability of each class label reflects its importance among all the labels. Experiments on a number of realworld multi-label data sets with many labels demonstrate the appealing performance and efficiency of the proposed algorithm.",
"title": ""
}
] | [
{
"docid": "d437e700df5c3a4d824b177c95def4ac",
"text": "In this paper, we introduce a system called GamePad that can be used to explore the application of machine learning methods to theorem proving in the Coq proof assistant. Interactive theorem provers such as Coq enable users to construct machine-checkable proofs in a step-by-step manner. Hence, they provide an opportunity to explore theorem proving at a human level of abstraction. We use GamePad to synthesize proofs for a simple algebraic rewrite problem and train baseline models for a formalization of the Feit-Thompson theorem. We address position evaluation (i.e., predict the number of proof steps left) and tactic prediction (i.e., predict the next proof step) tasks, which arise naturally in human-level theorem proving.",
"title": ""
},
{
"docid": "d7acbf20753e2c9c50b2ab0683d7f03a",
"text": "In this paper, we propose a very deep fully convolutional encoding-decoding framework for image restoration such as denoising and super-resolution. The network is composed of multiple layers of convolution and de-convolution operators, learning end-to-end mappings from corrupted images to the original ones. The convolutional layers act as the feature extractor, which capture the abstraction of image contents while eliminating noises/corruptions. De-convolutional layers are then used to recover the image details. We propose to symmetrically link convolutional and de-convolutional layers with skip-layer connections, with which the training converges much faster and attains a higher-quality local optimum. First, The skip connections allow the signal to be back-propagated to bottom layers directly, and thus tackles the problem of gradient vanishing, making training deep networks easier and achieving restoration performance gains consequently. Second, these skip connections pass image details from convolutional layers to de-convolutional layers, which is beneficial in recovering the original image. Significantly, with the large capacity, we can handle different levels of noises using a single model. Experimental results show that our network achieves better performance than all previously reported state-of-the-art methods.",
"title": ""
},
{
"docid": "3e7a9fa9f575270a5cdf8f869d4a75dd",
"text": "The recently proposed semi-supervised learning methods exploit consistency loss between different predictions under random perturbations. Typically, a student model is trained to predict consistently with the targets generated by a noisy teacher. However, they ignore the fact that not all training data provide meaningful and reliable information in terms of consistency. For misclassified data, blindly minimizing the consistency loss around them can hinder learning. In this paper, we propose a novel certaintydriven consistency loss (CCL) to dynamically select data samples that have relatively low uncertainty. Specifically, we measure the variance or entropy of multiple predictions under random augmentations and dropout as an estimation of uncertainty. Then, we introduce two approaches, i.e. Filtering CCL and Temperature CCL to guide the student learn more meaningful and certain/reliable targets, and hence improve the quality of the gradients backpropagated to the student. Experiments demonstrate the advantages of the proposed method over the state-of-the-art semi-supervised deep learning methods on three benchmark datasets: SVHN, CIFAR10, and CIFAR100. Our method also shows robustness to noisy labels.",
"title": ""
},
{
"docid": "4f6ce186679f9ab4f0aaada92ccf5a84",
"text": "Sensor networks have a significant potential in diverse applications some of which are already beginning to be deployed in areas such as environmental monitoring. As the application logic becomes more complex, programming difficulties are becoming a barrier to adoption of these networks. The difficulty in programming sensor networks is not only due to their inherently distributed nature but also the need for mechanisms to address their harsh operating conditions such as unreliable communications, faulty nodes, and extremely constrained resources. Researchers have proposed different programming models to overcome these difficulties with the ultimate goal of making programming easy while making full use of available resources. In this article, we first explore the requirements for programming models for sensor networks. Then we present a taxonomy of the programming models, classified according to the level of abstractions they provide. We present an evaluation of various programming models for their responsiveness to the requirements. Our results point to promising efforts in the area and a discussion of the future directions of research in this area.",
"title": ""
},
{
"docid": "993d7ee2498f7b19ae70850026c0a0c4",
"text": "We present ALL-IN-1, a simple model for multilingual text classification that does not require any parallel data. It is based on a traditional Support Vector Machine classifier exploiting multilingual word embeddings and character n-grams. Our model is simple, easily extendable yet very effective, overall ranking 1st (out of 12 teams) in the IJCNLP 2017 shared task on customer feedback analysis in four languages: English, French, Japanese and Spanish.",
"title": ""
},
{
"docid": "65bf805e87a02c4e733c7e6cefbf8c7d",
"text": "We describe a nonlinear observer-based design for control of vehicle traction that is important in providing safety and obtaining desired longitudinal vehicle motion. First, a robust sliding mode controller is designed to maintain the wheel slip at any given value. Simulations show that longitudinal traction controller is capable of controlling the vehicle with parameter deviations and disturbances. The direct state feedback is then replaced with nonlinear observers to estimate the vehicle velocity from the output of the system (i.e., wheel velocity). The nonlinear model of the system is shown locally observable. The effects and drawbacks of the extended Kalman filters and sliding observers are shown via simulations. The sliding observer is found promising while the extended Kalman filter is unsatisfactory due to unpredictable changes in the road conditions.",
"title": ""
},
{
"docid": "3d3bc851a71f7caf96343004f1d584fe",
"text": "Next generation sequencing (NGS) has been leading the genetic study of human disease into an era of unprecedented productivity. Many bioinformatics pipelines have been developed to call variants from NGS data. The performance of these pipelines depends crucially on the variant caller used and on the calling strategies implemented. We studied the performance of four prevailing callers, SAMtools, GATK, glftools and Atlas2, using single-sample and multiple-sample variant-calling strategies. Using the same aligner, BWA, we built four single-sample and three multiple-sample calling pipelines and applied the pipelines to whole exome sequencing data taken from 20 individuals. We obtained genotypes generated by Illumina Infinium HumanExome v1.1 Beadchip for validation analysis and then used Sanger sequencing as a \"gold-standard\" method to resolve discrepancies for selected regions of high discordance. Finally, we compared the sensitivity of three of the single-sample calling pipelines using known simulated whole genome sequence data as a gold standard. Overall, for single-sample calling, the called variants were highly consistent across callers and the pairwise overlapping rate was about 0.9. Compared with other callers, GATK had the highest rediscovery rate (0.9969) and specificity (0.99996), and the Ti/Tv ratio out of GATK was closest to the expected value of 3.02. Multiple-sample calling increased the sensitivity. Results from the simulated data suggested that GATK outperformed SAMtools and glfSingle in sensitivity, especially for low coverage data. Further, for the selected discrepant regions evaluated by Sanger sequencing, variant genotypes called by exome sequencing versus the exome array were more accurate, although the average variant sensitivity and overall genotype consistency rate were as high as 95.87% and 99.82%, respectively. In conclusion, GATK showed several advantages over other variant callers for general purpose NGS analyses. The GATK pipelines we developed perform very well.",
"title": ""
},
{
"docid": "ce6041954779f1f5141cee0548ea8491",
"text": "In vivo exposure is the recommended treatment of choice for specific phobias; however, it demonstrates a high attrition rate and is not effective in all instances. The use of virtual reality (VR) has improved the acceptance of exposure treatments to some individuals. Augmented reality (AR) is a variation of VR wherein the user sees the real world augmented by virtual elements. The present study tests an AR system in the short (posttreatment) and long term (3, 6, and 12 months) for the treatment of cockroach phobia using a multiple baseline design across individuals (with 6 participants). The AR exposure therapy was applied using the \"one-session treatment\" guidelines developed by Ost, Salkovskis, and Hellström (1991). Results showed that AR was effective at treating cockroach phobia. All participants improved significantly in all outcome measures after treatment; furthermore, the treatment gains were maintained at 3, 6, and 12-month follow-up periods. This study discusses the advantages of AR as well as its potential applications.",
"title": ""
},
{
"docid": "4029bbbff0c115c8bf8c787cafc72ae0",
"text": "In recent times, data is growing rapidly in every domain such as news, social media, banking, education, etc. Due to the excessiveness of data, there is a need of automatic summarizer which will be capable to summarize the data especially textual data in original document without losing any critical purposes. Text summarization is emerged as an important research area in recent past. In this regard, review of existing work on text summarization process is useful for carrying out further research. In this paper, recent literature on automatic keyword extraction and text summarization are presented since text summarization process is highly depend on keyword extraction. This literature includes the discussion about different methodology used for keyword extraction and text summarization. It also discusses about different databases used for text summarization in several domains along with evaluation matrices. Finally, it discusses briefly about issues and research challenges faced by researchers along with future direction.",
"title": ""
},
{
"docid": "688ff3348e2d5af9b0f388fd9a99f1bf",
"text": "The core issue in this article is the empirical tracing of the connection between a variety of value orientations and the life course choices concerning living arrangements and family formation. The existence of such a connection is a crucial element in the socalled theory of the Second Demographic Transition (SDT). The underlying model is of a recursive nature and based on two effects: firstly, values-based self-selection of individuals into alternative living arrangement or household types, and secondly, event-based adaptation of values to the newly chosen household situation. Any testing of such a recursive model requires the use of panel data. Failing these, only “footprints” of the two effects can be derived and traced in cross-sectional data. Here, use is made of the latest round of the European Values Surveys of 1999-2000, mainly because no other source has such a large selection of value items. The comparison involves two Iberian countries, three western European ones, and two Scandinavian samples. The profiles of the value orientations are based on 80 items which cover a variety of dimensions (e.g. religiosity, ethics, civil morality, family values, social cohesion, expressive values, gender role orientations, trust in institutions, protest proneness and post-materialism, tolerance for minorities etc.). These are analysed according to eight different household positions based on the transitions to independent living, cohabitation and marriage, parenthood and union dissolution. Multiple Classification Analysis (MCA) is used to control for confounding effects of other relevant covariates (age, gender, education, economic activity and stratification, urbanity). Subsequently, 1 Interface Demography, Vrije Universiteit Brussel. E-mail: [email protected] 2 Interface Demography, Vrije Universiteit Brussel. E-mail: [email protected] Demographic Research – Special Collection 3: Article 3 -Contemporary Research on European Fertility: Perspectives and Developments -46 http://www.demographic-research.org Correspondence Analysis is used to picture the proximities between the 80 value items and the eight household positions. Very similar value profiles according to household position are found for the three sets of countries, despite the fact that the onset of the SDT in Scandinavia precedes that in the Iberian countries by roughly twenty years. Moreover, the profile similarity remains intact when the comparison is extended to an extra group of seven formerly communist countries in central and Eastern Europe. Such pattern robustness is supportive of the contention that the ideational or “cultural” factor is indeed a nonredundant and necessary (but not a sufficient) element in the explanation of the demographic changes of the SDT. Moreover, the profile similarity also points in the direction of the operation of comparable mechanisms of selection and adaptation in the contrasting European settings. Demographic Research – Special Collection 3: Article 3 -Contemporary Research on European Fertility: Perspectives and Developments -http://www.demographic-research.org 47",
"title": ""
},
{
"docid": "bb49674d0a1f36e318d27525b693e51d",
"text": "prevent attackers from gaining control of the system using well established techniques such as; perimeter-based fire walls, redundancy and replications, and encryption. However, given sufficient time and resources, all these methods can be defeated. Moving Target Defense (MTD), is a defensive strategy that aims to reduce the need to continuously fight against attacks by disrupting attackers gain-loss balance. We present Mayflies, a bio-inspired generic MTD framework for distributed systems on virtualized cloud platforms. The framework enables systems designed to defend against attacks for their entire runtime to systems that avoid attacks in time intervals. We discuss the design, algorithms and the implementation of the framework prototype. We illustrate the prototype with a quorum-based Byzantime Fault Tolerant system and report the preliminary results.",
"title": ""
},
{
"docid": "6e05f588374b57f95524b04fe5600917",
"text": "Matrix factorization (MF) models and their extensions are standard in modern recommender systems. MF models decompose the observed user-item interaction matrix into user and item latent factors. In this paper, we propose a co-factorization model, CoFactor, which jointly decomposes the user-item interaction matrix and the item-item co-occurrence matrix with shared item latent factors. For each pair of items, the co-occurrence matrix encodes the number of users that have consumed both items. CoFactor is inspired by the recent success of word embedding models (e.g., word2vec) which can be interpreted as factorizing the word co-occurrence matrix. We show that this model significantly improves the performance over MF models on several datasets with little additional computational overhead. We provide qualitative results that explain how CoFactor improves the quality of the inferred factors and characterize the circumstances where it provides the most significant improvements.",
"title": ""
},
{
"docid": "058db5e1a8c58a9dc4b68f6f16847abc",
"text": "Insurance companies must manage millions of claims per year. While most of these claims are non-fraudulent, fraud detection is core for insurance companies. The ultimate goal is a predictive model to single out the fraudulent claims and pay out the non-fraudulent ones immediately. Modern machine learning methods are well suited for this kind of problem. Health care claims often have a data structure that is hierarchical and of variable length. We propose one model based on piecewise feed forward neural networks (deep learning) and another model based on self-attention neural networks for the task of claim management. We show that the proposed methods outperform bagof-words based models, hand designed features, and models based on convolutional neural networks, on a data set of two million health care claims. The proposed self-attention method performs the best.",
"title": ""
},
{
"docid": "f33134ec67d1237a39e91c0fd5bfb25a",
"text": "This research is driven by the assumption made in several user resistance studies that employees are generally resistant to change. It investigates the extent to which employees’ resistance to IT-induced change is caused by individuals’ predisposition to resist change. We develop a model of user resistance that assumes the influence of dispositional resistance to change on perceptual resistance to change, perceived ease of use, and usefulness, which in turn influence user resistance behavior. Using an empirical study of 106 HR employees forced to use a new human resources information system, the analysis reveals that 17.0–22.1 percent of the variance in perceived ease of use, usefulness, and perceptual resistance to change can be explained by the dispositional inclination to change initiatives. The four dimensions of dispositional resistance to change – routine seeking, emotional reaction, short-term focus and cognitive rigidity – have an even stronger effect than other common individual variables, such as age, gender, or working experiences. We conclude that dispositional resistance to change is an example of an individual difference that is instrumental in explaining a large proportion of the variance in beliefs about and user resistance to mandatory IS in organizations, which has implications for theory, practice, and future research. Journal of Information Technology advance online publication, 16 June 2015; doi:10.1057/jit.2015.17",
"title": ""
},
{
"docid": "e7e1fd16be5186474dc9e1690347716a",
"text": "One-stage object detectors such as SSD or YOLO already have shown promising accuracy with small memory footprint and fast speed. However, it is widely recognized that one-stage detectors have difficulty in detecting small objects while they are competitive with two-stage methods on large objects. In this paper, we investigate how to alleviate this problem starting from the SSD framework. Due to their pyramidal design, the lower layer that is responsible for small objects lacks strong semantics(e.g contextual information). We address this problem by introducing a feature combining module that spreads out the strong semantics in a top-down manner. Our final model StairNet detector unifies the multi-scale representations and semantic distribution effectively. Experiments on PASCAL VOC 2007 and PASCAL VOC 2012 datasets demonstrate that Stair-Net significantly improves the weakness of SSD and outperforms the other state-of-the-art one-stage detectors.",
"title": ""
},
{
"docid": "4d2bfda62140962af079817fc7dbd43e",
"text": "Online health communities and support groups are a valuable source of information for users suffering from a physical or mental illness. Users turn to these forums for moral support or advice on specific conditions, symptoms, or side effects of medications. This paper describes and studies the linguistic patterns of a community of support forum users over time focused on the used of anxious related words. We introduce a methodology to identify groups of individuals exhibiting linguistic patterns associated with anxiety and the correlations between this linguistic pattern and other word usage. We find some evidence that participation in these groups does yield positive effects on their users by reducing the frequency of anxious related word used over time.",
"title": ""
},
{
"docid": "0b01870332dd93897fbcecb9254c40b9",
"text": "Computer-aided detection or decision support systems aim to improve breast cancer screening programs by helping radiologists to evaluate digital mammography (DM) exams. Commonly such methods proceed in two steps: selection of candidate regions for malignancy, and later classification as either malignant or not. In this study, we present a candidate detection method based on deep learning to automatically detect and additionally segment soft tissue lesions in DM. A database of DM exams (mostly bilateral and two views) was collected from our institutional archive. In total, 7196 DM exams (28294 DM images) acquired with systems from three different vendors (General Electric, Siemens, Hologic) were collected, of which 2883 contained malignant lesions verified with histopathology. Data was randomly split on an exam level into training (50%), validation (10%) and testing (40%) of deep neural network with u-net architecture. The u-net classifies the image but also provides lesion segmentation. Free receiver operating characteristic (FROC) analysis was used to evaluate the model, on an image and on an exam level. On an image level, a maximum sensitivity of 0.94 at 7.93 false positives (FP) per image was achieved. Similarly, per exam a maximum sensitivity of 0.98 at 7.81 FP per image was achieved. In conclusion, the method could be used as a candidate selection model with high accuracy and with the additional information of lesion segmentation.",
"title": ""
},
{
"docid": "bf239cb017be0b2137b0b4fd1f1d4247",
"text": "Network function virtualization was recently proposed to improve the flexibility of network service provisioning and reduce the time to market of new services. By leveraging virtualization technologies and commercial off-the-shelf programmable hardware, such as general-purpose servers, storage, and switches, NFV decouples the software implementation of network functions from the underlying hardware. As an emerging technology, NFV brings several challenges to network operators, such as the guarantee of network performance for virtual appliances, their dynamic instantiation and migration, and their efficient placement. In this article, we provide a brief overview of NFV, explain its requirements and architectural framework, present several use cases, and discuss the challenges and future directions in this burgeoning research area.",
"title": ""
},
{
"docid": "3e7adbc4ea0bb5183792efd19d3c23a5",
"text": "a Faculty of Science and Information Technology, Al-Zaytoona University of Jordan, Amman, Jordan b School of Informatics, University of Bradford, Bradford BD7 1DP, United Kingdom c Information & Computer Science Department, King Fahd University of Petroleum & Minerals, Dhahran 31261, Saudi Arabia d Centre for excellence in Signal and Image Processing, Department of Electronic and Electrical Engineering, University of Strathclyde, Glasgow, G1 1XW, United Kingdom",
"title": ""
},
{
"docid": "532f3aee6b67f1e521ccda7f77116f7a",
"text": "Status of this Memo By submitting this Internet-Draft, each author represents that any applicable patent or other IPR claims of which he or she is aware have been or will be disclosed, and any of which he or she becomes aware will be disclosed, in accordance with Section 6 of BCP 79. Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups. Note that other groups may also distribute working documents as Internet-Drafts. Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as \"work in progress.\" The list of current Internet-Drafts can be accessed at http://www.ietf.org/ietf/1idabstracts.txt. The list of Internet-Draft Shadow Directories can be accessed at http://www.ietf.org/shadow.html. This Internet-Draft will expire on May 2008.",
"title": ""
}
] | scidocsrr |
5b64d5546765f7ad18ec9b4bda17a71f | Investigation of friction characteristics of a tendon driven wearable robotic hand | [
{
"docid": "030b25a7c93ca38dec71b301843c7366",
"text": "Simple grippers with one or two degrees of freedom are commercially available prosthetic hands; these pinch type devices cannot grasp small cylinders and spheres because of their small degree of freedom. This paper presents the design and prototyping of underactuated five-finger prosthetic hand for grasping various objects in daily life. Underactuated mechanism enables the prosthetic hand to move fifteen compliant joints only by one ultrasonic motor. The innovative design of this prosthetic hand is the underactuated mechanism optimized to distribute grasping force like those of humans who can grasp various objects robustly. Thanks to human like force distribution, the prototype of prosthetic hand could grasp various objects in daily life and heavy objects with the maximum ejection force of 50 N that is greater than other underactuated prosthetic hands.",
"title": ""
},
{
"docid": "720eccb945faa357bc44c5aa33fe60a9",
"text": "The evolution of an arm exoskeleton design for treating shoulder pathology is examined. Tradeoffs between various kinematics configurations are explored, and a device with five active degrees of freedom is proposed. Two rapid-prototype designs were built and fitted to several subjects to verify the kinematic design and determine passive link adjustments. Control modes are developed for exercise therapy and functional rehabilitation, and a distributed software architecture that incorporates computer safety monitoring is described. Although intended primarily for therapy, the exoskeleton is also used to monitor progress in strength, range of motion, and functional task performance",
"title": ""
}
] | [
{
"docid": "fb2ff96dbfe584f450dd19f8d3cea980",
"text": "[1] Nondestructive imaging methods such as X-ray computed tomography (CT) yield high-resolution, three-dimensional representations of pore space and fluid distribution within porous materials. Steadily increasing computational capabilities and easier access to X-ray CT facilities have contributed to a recent surge in microporous media research with objectives ranging from theoretical aspects of fluid and interfacial dynamics at the pore scale to practical applications such as dense nonaqueous phase liquid transport and dissolution. In recent years, significant efforts and resources have been devoted to improve CT technology, microscale analysis, and fluid dynamics simulations. However, the development of adequate image segmentation methods for conversion of gray scale CT volumes into a discrete form that permits quantitative characterization of pore space features and subsequent modeling of liquid distribution and flow processes seems to lag. In this paper we investigated the applicability of various thresholding and locally adaptive segmentation techniques for industrial and synchrotron X-ray CT images of natural and artificial porous media. A comparison between directly measured and image-derived porosities clearly demonstrates that the application of different segmentation methods as well as associated operator biases yield vastly differing results. This illustrates the importance of the segmentation step for quantitative pore space analysis and fluid dynamics modeling. Only a few of the tested methods showed promise for both industrial and synchrotron tomography. Utilization of local image information such as spatial correlation as well as the application of locally adaptive techniques yielded significantly better results.",
"title": ""
},
{
"docid": "79a2cc561cd449d8abb51c162eb8933d",
"text": "We introduce a new test of how well language models capture meaning in children’s books. Unlike standard language modelling benchmarks, it distinguishes the task of predicting syntactic function words from that of predicting lowerfrequency words, which carry greater semantic content. We compare a range of state-of-the-art models, each with a different way of encoding what has been previously read. We show that models which store explicit representations of long-term contexts outperform state-of-the-art neural language models at predicting semantic content words, although this advantage is not observed for syntactic function words. Interestingly, we find that the amount of text encoded in a single memory representation is highly influential to the performance: there is a sweet-spot, not too big and not too small, between single words and full sentences that allows the most meaningful information in a text to be effectively retained and recalled. Further, the attention over such window-based memories can be trained effectively through self-supervision. We then assess the generality of this principle by applying it to the CNN QA benchmark, which involves identifying named entities in paraphrased summaries of news articles, and achieve state-of-the-art performance.",
"title": ""
},
{
"docid": "7f1eb105b7a435993767e4a4b40f7ed9",
"text": "In the last two decades, organizations have recognized, indeed fixated upon, the impOrtance of quality and quality management One manifestation of this is the emergence of the total quality management (TQM) movement, which has been proclaimed as the latest and optimal way of managing organizations. Likewise, in the domain of human resource management, the concept of quality of work life (QWL) has also received much attention of late from theoreticians, researchers, and practitioners. However, little has been done to build a bridge between these two increasingly important concepts, QWL and TQM. The purpose of this research is to empirically examine the relationship between quality of work life (the internalized attitudes employees' have about their jobs) and an indicatorofTQM, customer service attitudes, CSA (the externalized signals employees' send to customers about their jobs). In addition, this study examines how job involvement and organizational commitment mediate the relationship between QWL and CSA. OWL and <:sA HlU.3 doc JJ a9t94 page 3 INTRODUCTION Quality and quality management have become increasingly important topics for both practitioners and researchers (Anderson, Rungtusanatham, & Schroeder, 1994). Among the many quality related activities that have arisen, the principle of total quality mana~ement (TQM) has been advanced as the optimal approach for managing people and processes. Indeed, it is considered by some to be the key to ensuring the long-term viability of organizations (Feigenbaum, 1982). Ofcourse, niany companies have invested heavily in total quality efforts in the form of capital expenditures on plant and equipment, and through various human resource management programs designed to spread the quality gospel. However, many still argue that there is insufficient theoretical development and empirical eviden~e for the determinants and consequences of quality management initiatives (Dean & Bowen, 1994). Mter reviewing the relevant research literatures, we find that three problems persist in the research on TQM. First, a definition of quality has not been agreed upon. Even more problematic is the fact that many of the definitions that do exist are continuously evolving. Not smprisingly, these variable definitions often lead to inconsistent and even conflicting conclusions, Second, very few studies have systematically examined these factors that influence: the quality of goods and services, the implementation of quality activities, or the performance of organizations subsequent to undertaking quality initiatives (Spencer, 1994). Certainly this has been true for quality-related human resource management interventions. Last, TQM has suffered from an \"implementation problem\" (Reger, Gustafson, Demarie, & Mullane, 1994, p. 565) which has prevented it from transitioning from the theoretical to the applied. In the domain of human resource management, quality of working life (QWL) has also received a fair amount of attention of late from theorists, researchers, and practitioners. The underlying, and mostimportant, principles of QWL capture an employee's satisfaction with and feelings about their: work, work environment, and organization. Most who study QWL, and TQM for that matter, tend to focus on the importance of employee systems and organizational performance, whereas researchers in the field ofHRM OWLmdCSA HlU.3doc 1J1l2f}4 pBgc4 usually emphasize individual attitudes and individual performance (Walden, 1994). 
Fmthennore, as Walden (1994) alludes to, there are significantly different managerial prescriptions and applied levels for routine human resource management processes, such as selection, performance appraisal, and compensation, than there are for TQM-driven processes, like teamwork, participative management, and shared decision-making (Deming, 1986, 1993; Juran, 1989; M. Walton, 1986; Dean & Bowen, 1994). To reiterate, these variations are attributable to the difference between a mico focus on employees as opposed to a more macrofocus on employee systems. These specific differences are but a few of the instances where the views of TQM and the views of traditional HRM are not aligned (Cardy & Dobbins, 1993). In summary, although TQM is a ubiquitous organizational phenomenon; it has been given little research attention, especially in the form ofempirical studies. Therefore, the goal of this study is to provide an empirical assessment of how one, internalized, indicator ofHRM effectiveness, QWL, is associated with one, externalized, indicator of TQM, customer service attitudes, CSA. In doing so, it bridges the gap between \"employee-focused\" H.RM outcoines and \"customer-focused\" TQM consequences. In addition, it examines the mediating effects of organizational commitment and job involvement on this relationship. QUALITY OF WORK LIFE AND CUSTOMER SERVICE AITITUDES In this section, we introduce and review the main principles of customer service attitudes, CSA, and discuss its measurement Thereafter, our extended conceptualization and measurement of QWL will be presented. Fmally, two variables hypothesized to function as mediators of the relationship between CSA and QWL, organization commitment and job involvement, will be· explored. Customer Service Attitudes (CSA) Despite all the ruminations about it in the business and trade press, TQM still remains an ambiguous notion, one that often gives rise to as many different definitions as there are observers. Some focus on the presence of organizational systems. Others, the importance of leadership. ., Many stress the need to reduce variation in organizational processes (Deming, 1986). A number · OWL and CSA mn.3 doc 11 fl9tlJ4 page 5 emphasize reducing costs through q~ty improvement (p.B. Crosby, 1979). Still others focus on quality planing, control, and improvement (Juran, 1989). Regardless of these differences, however, the most important, generally agreed upon principle is to be \"customer focused\" (Feigenbaum, 1982). The cornerstone for this principle is the belief that customer satisfaction and customer judgments about the organization and itsproducts are the most important determinants of long-term organizational viability (Oliva, Oliver & MacMillan, 1992). Not surprisingly, this belief is a prominent tenet in both the manufacturing and service sectors alike. Conventional wisdom holds that quality can best be evaluated from the customers' perspective. Certainly, customers can easily articulate how well a product or service meets their expectations. Therefore, managers and researchers must take into account subjective and cognitive factors that influence customers' judgments when trying to identify influential customer cues, rather than just relying on organizational presumptions. Recently, for example, Hannon & Sano (1994) described how customer-driven HR strategies and practices are pervasive in Japan. 
An example they cited was the practice of making the tOp graduates from the best schools work in low level, customer service jobs for their first 1-2 years so that they might better underst3nd customers and their needs. To be sure, defining quality in terms of whether a product or service meets the expectations ofcustomers is all-encompassing. As a result of the breadth of this issue, and the limited research on this topic, many importantquestions about the service relationship, particularly those penaining to exchanges between employees and customers, linger. Some include, \"What are the key dimensions of service quality?\" and \"What are the actions service employees might direct their efforts to in order to foster good relationships with customers?\" Arguably, the most readily obvious manifestations of quality for any customer are the service attitudes ofemployees. In fact, dming the employee-customer interaction, conventional wisdom holds that employees' customer service attitudes influence customer satisfaction, customer evaluations, and decisions to buy. . OWL and <:SA HJU.3,doc J J129m page 6 According to Rosander (1980), there are five dimensions of service quality: quality of employee performance, facility, data, decision, and outcome. Undoubtedly, the performance of the employee influences customer satisfaction. This phenomenon has been referred to as interactive quality (Lehtinen & Lehtinen, 1982). Parasuraman, Zeithaml, & Berry (1985) go so far as to suggest that service quality is ultimately a function of the relationship between the employee and the customer, not the product or the price. Sasser, Olsen, & Wyckoff (1987) echo the assertion that personnel performance is a critical factor in the satisfaction of customers. If all of them are right, the relationship between satisfaction with quality of work life and customer service attitudes cannot be understated. Measuring Customer Service Attitudes The challenge of measuring service quality has increasingly captured the attention of researchers (Teas, 1994; Cronin & Taylor, 1992). While the substance and determinants of quality may remain undefined, its importance to organizations is unquestionable. Nevertheless, numerous problems inherent in the measurement of customer service attitudes still exist (Reeves & Bednar, 1994). Perhaps the complexities involved in measuring this construct have deterred many researchers from attempting to define and model service quality. Maybe this is also the reason why many of the efforts to define and measure service quality have emanated primarily from manufacturing, rather than service, settings. When it has been measured, quality has sometimes been defined as a \"zero defect\" policy, a perspective the Japanese have embraced. Alternatively, P.B. Crosby (1979) quantifies quality as \"conformance to requirements.\" Garvin (1983; 1988), on the other hand, measures quality in terms ofcounting the incidence of \"internal failures\" and \"external failures.\" Other definitions include \"value\" (Abbot, 1955; Feigenbaum, 1982), \"concordance to specification'\" (Gilmo",
"title": ""
},
{
"docid": "83187228617d62fb37f99cf107c7602a",
"text": "A very important class of spatial queries consists of nearestneighbor (NN) query and its variations. Many studies in the past decade utilize R-trees as their underlying index structures to address NN queries efficiently. The general approach is to use R-tree in two phases. First, R-tree’s hierarchical structure is used to quickly arrive to the neighborhood of the result set. Second, the R-tree nodes intersecting with the local neighborhood (Search Region) of an initial answer are investigated to find all the members of the result set. While R-trees are very efficient for the first phase, they usually result in the unnecessary investigation of many nodes that none or only a small subset of their including points belongs to the actual result set. On the other hand, several recent studies showed that the Voronoi diagrams are extremely efficient in exploring an NN search region, while due to lack of an efficient access method, their arrival to this region is slow. In this paper, we propose a new index structure, termed VoR-Tree that incorporates Voronoi diagrams into R-tree, benefiting from the best of both worlds. The coarse granule rectangle nodes of R-tree enable us to get to the search region in logarithmic time while the fine granule polygons of Voronoi diagram allow us to efficiently tile or cover the region and find the result. Utilizing VoR-Tree, we propose efficient algorithms for various Nearest Neighbor queries, and show that our algorithms have better I/O complexity than their best competitors.",
"title": ""
},
{
"docid": "90e6a1fa70ddec11248ba658623d2d6e",
"text": "This paper proposes a new technique for grid synchronization under unbalanced and distorted conditions, i.e., the dual second order generalised integrator - frequency-locked loop (DSOGI-FLL). This grid synchronization system results from the application of the instantaneous symmetrical components method on the stationary and orthogonal alphabeta reference frame. The second order generalized integrator concept (SOGI) is exploited to generate in-quadrature signals used on the alphabeta reference frame. The frequency-adaptive characteristic is achieved by a simple control loop, without using either phase-angles or trigonometric functions. In this paper, the development of the DSOGI-FLL is plainly exposed and hypothesis and conclusions are verified by simulation and experimental results",
"title": ""
},
{
"docid": "026408a6ad888ea0bcf298a23ef77177",
"text": "The microwave power transmission is an approach for wireless power transmission. As an important component of a microwave wireless power transmission systems, microwave rectennas are widely studied. A rectenna based on a microstrip dipole antenna and a microwave rectifier with high conversion efficiency were designed at 2.45 GHz. The dipole antenna achieved a gain of 5.2 dBi, a return loss greater than 10 dB, and a bandwidth of 20%. The microwave to DC (MW-DC) conversion efficiency of the rectifier was measured as 83% with 20 dBm input power and 600 Ω load. There are 72 rectennas to form an array with an area of 50 cm by 50 cm. The measured results show that the arrangement of the rectenna connection is an effective way to improve the total conversion efficiency, when the microwave power distribution is not uniform on rectenna array. The experimental results show that the highest microwave power transmission efficiency reaches 67.6%.",
"title": ""
},
{
"docid": "a0f20c2481aefc3b431f708ade0cc1aa",
"text": "Objective Video game violence has become a highly politicized issue for scientists and the general public. There is continuing concern that playing violent video games may increase the risk of aggression in players. Less often discussed is the possibility that playing violent video games may promote certain positive developments, particularly related to visuospatial cognition. The objective of the current article was to conduct a meta-analytic review of studies that examine the impact of violent video games on both aggressive behavior and visuospatial cognition in order to understand the full impact of such games. Methods A detailed literature search was used to identify peer-reviewed articles addressing violent video game effects. Effect sizes r (a common measure of effect size based on the correlational coefficient) were calculated for all included studies. Effect sizes were adjusted for observed publication bias. Results Results indicated that publication bias was a problem for studies of both aggressive behavior and visuospatial cognition. Once corrected for publication bias, studies of video game violence provided no support for the hypothesis that violent video game playing is associated with higher aggression. However playing violent video games remained related to higher visuospatial cognition (r x = 0.36). Conclusions Results from the current analysis did not support the conclusion that violent video game playing leads to aggressive behavior. However, violent video game playing was associated with higher visuospatial cognition. It may be advisable to reframe the violent video game debate in reference to potential costs and benefits of this medium.",
"title": ""
},
{
"docid": "bf8f46e4c85f7e45879cee4282444f78",
"text": "Influence of culture conditions such as light, temperature and C/N ratio was studied on growth of Haematococcus pluvialis and astaxanthin production. Light had significant effect on astaxanthin production and it varied with its intensity and direction of illumination and effective culture ratio (ECR, volume of culture medium/volume of flask). A 6-fold increase in astaxanthin production (37 mg/L) was achieved with 5.1468·107 erg·m−2·s−1 light intensity (high light, HL) at effective culture ratio of 0.13 compared to that at 0.52 ECR, while the difference in the astaxanthin production was less than 2 — fold between the effective culture ratios at 1.6175·107 erg·m−2·s−1 light intensity (low light, LL). Multidirectional (three-directional) light illumination considerably enhanced the astaxanthin production (4-fold) compared to unidirectional illumination. Cell count was high at low temperature (25 °C) while astaxanthin content was high at 35 °C in both autotrophic and heterotrophic media. In a heterotrophic medium at low C/N ratio H. pluvialis growth was higher with prolonged vegetative phase, while high C/N ratio favoured early encystment and higher astaxanthin formation.",
"title": ""
},
{
"docid": "5a85c72c5b9898b010f047ee99dba133",
"text": "A method to design arbitrary three-way power dividers with ultra-wideband performance is presented. The proposed devices utilize a broadside-coupled structure, which has three coupled layers. The method assumes general asymmetric coupled layers. The design approach exploits the three fundamental modes of propagation: even-even, odd-odd, and odd-even, and the conformal mapping technique to find the coupling factors between the different layers. The method is used to design 1 : 1 : 1, 2 : 1 : 1, and 4 : 2 : 1 three-way power dividers. The designed devices feature a multilayer broadside-coupled microstrip-slot-microstrip configuration using elliptical-shaped structures. The developed power dividers have a compact size with an overall dimension of 20 mm 30 mm. The simulated and measured results of the manufactured devices show an insertion loss equal to the nominated value 1 dB. The return loss for the input/output ports of the devices is better than 17, 18, and 13 dB, whereas the isolation between the output ports is better than 17, 14, and 15 dB for the 1 : 1 : 1, 2 : 1 : 1, and 4 : 2 : 1 dividers, respectively, across the 3.1-10.6-GHz band.",
"title": ""
},
{
"docid": "4645d0d7b1dfae80657f75d3751ef72a",
"text": "Machine learning approaches are increasingly successful in image-based diagnosis, disease prognosis, and risk assessment. This paper highlights new research directions and discusses three main challenges related to machine learning in medical imaging: coping with variation in imaging protocols, learning from weak labels, and interpretation and evaluation of results.",
"title": ""
},
{
"docid": "6e05c3e76e87317db05c43a1f564724a",
"text": "Data science or \"data-driven research\" is a research approach that uses real-life data to gain insight about the behavior of systems. It enables the analysis of small, simple as well as large and more complex systems in order to assess whether they function according to the intended design and as seen in simulation. Data science approaches have been successfully applied to analyze networked interactions in several research areas such as large-scale social networks, advanced business and healthcare processes. Wireless networks can exhibit unpredictable interactions between algorithms from multiple protocol layers, interactions between multiple devices, and hardware specific influences. These interactions can lead to a difference between real-world functioning and design time functioning. Data science methods can help to detect the actual behavior and possibly help to correct it. Data science is increasingly used in wireless research. To support data-driven research in wireless networks, this paper illustrates the step-by-step methodology that has to be applied to extract knowledge from raw data traces. To this end, the paper (i) clarifies when, why and how to use data science in wireless network research; (ii) provides a generic framework for applying data science in wireless networks; (iii) gives an overview of existing research papers that utilized data science approaches in wireless networks; (iv) illustrates the overall knowledge discovery process through an extensive example in which device types are identified based on their traffic patterns; (v) provides the reader the necessary datasets and scripts to go through the tutorial steps themselves.",
"title": ""
},
{
"docid": "9db779a5a77ac483bb1991060dca7c28",
"text": "An Ambient Intelligence (AmI) environment is primary developed using intelligent agents and wireless sensor networks. The intelligent agents could automatically obtain contextual information in real time using Near Field Communication (NFC) technique and wireless ad-hoc networks. In this research, we propose a stock trading and recommendation system with mobile devices (Android platform) interface in the over-the-counter market (OTC) environments. The proposed system could obtain the real-time financial information of stock price through a multi-agent architecture with plenty of useful features. In addition, NFC is used to achieve a context-aware environment allowing for automatic acquisition and transmission of useful trading recommendations and relevant stock information for investors. Finally, AmI techniques are applied to successfully create smart investment spaces, providing investors with useful monitoring tools and investment recommendation.",
"title": ""
},
{
"docid": "cbfdea54abb1e4c1234ca44ca6913220",
"text": "Seeds of chickpea (Cicer arietinum L.) were exposed in batches to static magnetic fields of strength from 0 to 250 mT in steps of 50 mT for 1-4 h in steps of 1 h for all fields. Results showed that magnetic field application enhanced seed performance in terms of laboratory germination, speed of germination, seedling length and seedling dry weight significantly compared to unexposed control. However, the response varied with field strength and duration of exposure without any particular trend. Among the various combinations of field strength and duration, 50 mT for 2 h, 100 mT for 1 h and 150 mT for 2 h exposures gave best results. Exposure of seeds to these three magnetic fields improved seed coat membrane integrity as it reduced the electrical conductivity of seed leachate. In soil, seeds exposed to these three treatments produced significantly increased seedling dry weights of 1-month-old plants. The root characteristics of the plants showed dramatic increase in root length, root surface area and root volume. The improved functional root parameters suggest that magnetically treated chickpea seeds may perform better under rainfed (un-irrigated) conditions where there is a restrictive soil moisture regime.",
"title": ""
},
{
"docid": "55b95e06bdf28ebd0b6a1e39875635e2",
"text": "As the security landscape evolves over time, where thousands of species of malicious codes are seen every day, antivirus vendors strive to detect and classify malware families for efficient and effective responses against malware campaigns. To enrich this effort and by capitalizing on ideas from the social network analysis domain, we build a tool that can help classify malware families using features driven from the graph structure of their system calls. To achieve that, we first construct a system call graph that consists of system calls found in the execution of the individual malware families. To explore distinguishing features of various malware species, we study social network properties as applied to the call graph, including the degree distribution, degree centrality, average distance, clustering coefficient, network density, and component ratio. We utilize features driven from those properties to build a classifier for malware families. Our experimental results show that “influence-based” graph metrics such as the degree centrality are effective for classifying malware, whereas the general structural metrics of malware are less effective for classifying malware. Our experiments demonstrate that the proposed system performs well in detecting and classifying malware families within each malware class with accuracy greater than 96%.",
"title": ""
},
{
"docid": "26f2b200bf22006ab54051c9288420e8",
"text": "Emotion keyword spotting approach can detect emotion well for explicit emotional contents while it obviously cannot compare to supervised learning approaches for detecting emotional contents of particular events. In this paper, we target earthquake situations in Japan as the particular events for emotion analysis because the affected people often show their states and emotions towards the situations via social networking sites. Additionally, tracking crowd emotions in the Internet during the earthquakes can help authorities to quickly decide appropriate assistance policies without paying the cost as the traditional public surveys. Our three main contributions in this paper are: a) the appropriate choice of emotions; b) the novel proposal of two classification methods for determining the earthquake related tweets and automatically identifying the emotions in Twitter; c) tracking crowd emotions during different earthquake situations, a completely new application of emotion analysis research. Our main analysis results show that Twitter users show their Fear and Anxiety right after the earthquakes occurred while Calm and Unpleasantness are not showed clearly during the small earthquakes but in the large tremor.",
"title": ""
},
{
"docid": "417eff5fd6251c70790d69e2b8dae255",
"text": "This paper is a report on the initial trial for its kind in the development of the performance index of the autonomous mobile cleaning robot. The unique characteristic features of the cleaning robot have been identified as autonomous mobility, dust collection, and operation noise. Along with the identification of the performance indices the standardized performance-evaluation methods including the corresponding performance evaluation platform for each indices have been developed as well. The validity of the proposed performance evaluation methods has been demonstrated by applying the proposed evaluation methods on two commercial cleaning robots available in market. The proposed performance evaluation methods can be applied to general-purpose autonomous service robots which will be introduced in the consumer market in near future.",
"title": ""
},
{
"docid": "0f9d6fcd53560c0c0433d64014f2aeb2",
"text": "The task of plagiarism detection entails two main steps, suspicious candidate retrieval and pairwise document similarity analysis also called detailed analysis. In this paper we focus on the second subtask. We will report our monolingual plagiarism detection system which is used to process the Persian plagiarism corpus for the task of pairwise document similarity. To retrieve plagiarised passages a plagiarism detection method based on vector space model, insensitive to context reordering, is presented. We evaluate the performance in terms of precision, recall, granularity and plagdet metrics.",
"title": ""
},
{
"docid": "fa851a3828bf6ebf371c49917bab3b4e",
"text": "Recent research has documented large di!erences among countries in ownership concentration in publicly traded \"rms, in the breadth and depth of capital markets, in dividend policies, and in the access of \"rms to external \"nance. A common element to the explanations of these di!erences is how well investors, both shareholders and creditors, are protected by law from expropriation by the managers and controlling shareholders of \"rms. We describe the di!erences in laws and the e!ectiveness of their enforcement across countries, discuss the possible origins of these di!erences, summarize their consequences, and assess potential strategies of corporate governance reform. We argue that the legal approach is a more fruitful way to understand corporate governance and its reform than the conventional distinction between bank-centered and market-centered \"nancial systems. ( 2000 Elsevier Science S.A. All rights reserved. JEL classixcation: G21; G28; G32",
"title": ""
},
{
"docid": "9655259173f749134723f98585a254c1",
"text": "With the rapid growth of streaming media applications, there has been a strong demand of Quality-of-Experience (QoE) measurement and QoE-driven video delivery technologies. While the new worldwide standard dynamic adaptive streaming over hypertext transfer protocol (DASH) provides an inter-operable solution to overcome the volatile network conditions, its complex characteristic brings new challenges to the objective video QoE measurement models. How streaming activities such as stalling and bitrate switching events affect QoE is still an open question, and is hardly taken into consideration in the traditionally QoE models. More importantly, with an increasing number of objective QoE models proposed, it is important to evaluate the performance of these algorithms in a comparative setting and analyze the strengths and weaknesses of these methods. In this study, we build two subject-rated streaming video databases. The progressive streaming video database is dedicated to investigate the human responses to the combined effect of video compression, initial buffering, and stalling. The adaptive streaming video database is designed to evaluate the performance of adaptive bitrate streaming algorithms and objective QoE models. We also provide useful insights on the improvement of adaptive bitrate streaming algorithms. Furthermore, we propose a novel QoE prediction approach to account for the instantaneous quality degradation due to perceptual video presentation impairment, the playback stalling events, and the instantaneous interactions between them. Twelve QoE algorithms from four categories including signal fidelity-based, network QoS-based, application QoSbased, and hybrid QoE models are assessed in terms of correlation with human perception",
"title": ""
}
] | scidocsrr |
9751bcc37c86fa0f0834e3c7a3ce1381 | Robust Capped Norm Nonnegative Matrix Factorization: Capped Norm NMF | [
{
"docid": "ed9e22167d3e9e695f67e208b891b698",
"text": "ÐIn k-means clustering, we are given a set of n data points in d-dimensional space R and an integer k and the problem is to determine a set of k points in R, called centers, so as to minimize the mean squared distance from each data point to its nearest center. A popular heuristic for k-means clustering is Lloyd's algorithm. In this paper, we present a simple and efficient implementation of Lloyd's k-means clustering algorithm, which we call the filtering algorithm. This algorithm is easy to implement, requiring a kd-tree as the only major data structure. We establish the practical efficiency of the filtering algorithm in two ways. First, we present a data-sensitive analysis of the algorithm's running time, which shows that the algorithm runs faster as the separation between clusters increases. Second, we present a number of empirical studies both on synthetically generated data and on real data sets from applications in color quantization, data compression, and image segmentation. Index TermsÐPattern recognition, machine learning, data mining, k-means clustering, nearest-neighbor searching, k-d tree, computational geometry, knowledge discovery.",
"title": ""
}
] | [
{
"docid": "0dd78cb46f6d2ddc475fd887a0dc687c",
"text": "Predicting items a user would like on the basis of other users’ ratings for these items has become a well-established strategy adopted by many recommendation services on the Internet. Although this can be seen as a classification problem, algorithms proposed thus far do not draw on results from the machine learning literature. We propose a representation for collaborative filtering tasks that allows the application of virtually any machine learning algorithm. We identify the shortcomings of current collaborative filtering techniques and propose the use of learning algorithms paired with feature extraction techniques that specifically address the limitations of previous approaches. Our best-performing algorithm is based on the singular value decomposition of an initial matrix of user ratings, exploiting latent structure that essentially eliminates the need for users to rate common items in order to become predictors for one another's preferences. We evaluate the proposed algorithm on a large database of user ratings for motion pictures and find that our approach significantly outperforms current collaborative filtering algorithms.",
"title": ""
},
{
"docid": "299deaffdd1a494fc754b9e940ad7f81",
"text": "In this work, we study an important problem: learning programs from input-output examples. We propose a novel method to learn a neural program operating a domain-specific non-differentiable machine, and demonstrate that this method can be applied to learn programs that are significantly more complex than the ones synthesized before: programming language parsers from input-output pairs without knowing the underlying grammar. The main challenge is to train the neural program without supervision on execution traces. To tackle it, we propose: (1) LL machines and neural programs operating them to effectively regularize the space of the learned programs; and (2) a two-phase reinforcement learning-based search technique to train the model. Our evaluation demonstrates that our approach can successfully learn to parse programs in both an imperative language and a functional language, and achieve 100% test accuracy, while existing approaches’ accuracies are almost 0%. This is the first successful demonstration of applying reinforcement learning to train a neural program operating a non-differentiable machine that can fully generalize to test sets on a non-trivial task.",
"title": ""
},
{
"docid": "f58d69de4b5bcc4100a3bfe3426fa81f",
"text": "This study evaluated the factor structure of the Rosenberg Self-Esteem Scale (RSES) with a diverse sample of 1,248 European American, Latino, Armenian, and Iranian adolescents. Adolescents completed the 10-item RSES during school as part of a larger study on parental influences and academic outcomes. Findings suggested that method effects in the RSES are more strongly associated with negatively worded items across three diverse groups but also more pronounced among ethnic minority adolescents. Findings also suggested that accounting for method effects is necessary to avoid biased conclusions regarding cultural differences in selfesteem and how predictors are related to the RSES. Moreover, the two RSES factors (positive self-esteem and negative self-esteem) were differentially predicted by parenting behaviors and academic motivation. Substantive and methodological implications of these findings for crosscultural research on adolescent self-esteem are discussed.",
"title": ""
},
{
"docid": "f2a9d15d9b38738d563f9d9f9fa5d245",
"text": "Mobile cellular networks have become both the generators and carriers of massive data. Big data analytics can improve the performance of mobile cellular networks and maximize the revenue of operators. In this paper, we introduce a unified data model based on the random matrix theory and machine learning. Then, we present an architectural framework for applying the big data analytics in the mobile cellular networks. Moreover, we describe several illustrative examples, including big signaling data, big traffic data, big location data, big radio waveforms data, and big heterogeneous data, in mobile cellular networks. Finally, we discuss a number of open research challenges of the big data analytics in the mobile cellular networks.",
"title": ""
},
{
"docid": "232eabfb63f0b908ef3a64d0731ba358",
"text": "This paper reviews the potential of spin-transfer torque devices as an alternative to complementary metal-oxide-semiconductor for non-von Neumann and non-Boolean computing. Recent experiments on spin-transfer torque devices have demonstrated high-speed magnetization switching of nanoscale magnets with small current densities. Coupled with other properties, such as nonvolatility, zero leakage current, high integration density, we discuss that the spin-transfer torque devices can be inherently suitable for some unconventional computing models for information processing. We review several spintronic devices in which magnetization can be manipulated by current induced spin transfer torque and explore their applications in neuromorphic computing and reconfigurable memory-based computing.",
"title": ""
},
{
"docid": "dc6aafe2325dfdea5e758a30c90d8940",
"text": "When a query is submitted to a search engine, the search engine returns a dynamically generated result page containing the result records, each of which usually consists of a link to and/or snippet of a retrieved Web page. In addition, such a result page often also contains information irrelevant to the query, such as information related to the hosting site of the search engine and advertisements. In this paper, we present a technique for automatically producing wrappers that can be used to extract search result records from dynamically generated result pages returned by search engines. Automatic search result record extraction is very important for many applications that need to interact with search engines such as automatic construction and maintenance of metasearch engines and deep Web crawling. The novel aspect of the proposed technique is that it utilizes both the visual content features on the result page as displayed on a browser and the HTML tag structures of the HTML source file of the result page. Experimental results indicate that this technique can achieve very high extraction accuracy.",
"title": ""
},
{
"docid": "7b1dad9f2e8a2a454fe01bab4cca47a3",
"text": "We describe a method to train spiking deep networks that can be run using leaky integrate-and-fire (LIF) neurons, achieving state-of-the-art results for spiking LIF networks on five datasets, including the large ImageNet ILSVRC-2012 benchmark. Our method for transforming deep artificial neural networks into spiking networks is scalable and works with a wide range of neural nonlinearities. We achieve these results by softening the neural response function, such that its derivative remains bounded, and by training the network with noise to provide robustness against the variability introduced by spikes. Our analysis shows that implementations of these networks on neuromorphic hardware will be many times more power-efficient than the equivalent non-spiking networks on traditional hardware.",
"title": ""
},
{
"docid": "ecd541de66690a9f2aa5341646a63742",
"text": "The purpose is to determine whether use of perioperative antibiotics for more than 24 h decreases the incidence of SSI in neonates and infants. We studied neonates and infants who had clean–contaminated or contaminated gastrointestinal operations from 1996 to 2006. Patient- and operation-related variables, duration of perioperative antibiotics, and SSI within 30 days were ascertained by retrospective chart review. In assessing the effects of antibiotic duration, we controlled for confounding by indication using standard covariate adjustment and propensity score matching. Among 732 operations, the incidence of SSI was 13 %. Using propensity score matching, the odds of SSI were similar (OR 1.1, 95 % CI 0.6–1.9) in patients who received ≤24 h of postoperative antibiotics compared to >24 h. No difference was also found in standard covariate adjustment. This multivariate model identified three independent predictors of SSI: preoperative infection (OR 3.9, 95 % CI 1.4–10.9) and re-operation through the same incision, both within 30 days (OR 3.5, 95 % CI 1.7–7.4) and later (OR 2.3, 95 % CI 1.4–3.8). In clean–contaminated and contaminated gastrointestinal operations, giving >24 h of postoperative antibiotics offered no protection against SSI. An adequately powered randomized clinical trial is needed to conclusively evaluate longer duration antibiotic prophylaxis.",
"title": ""
},
{
"docid": "382eec3778d98cb0c8445633c16f59ef",
"text": "In the face of acute global competition, supplier management is rapidly emerging as a crucial issue to any companies striving for business success and sustainable development. To optimise competitive advantages, a company should incorporate ‘suppliers’ as an essential part of its core competencies. Supplier evaluation, the first step in supplier management, is a complex multiple criteria decision making (MCDM) problem, and its complexity is further aggravated if the highly important interdependence among the selection criteria is taken into consideration. The objective of this paper is to suggest a comprehensive decision method for identifying top suppliers by considering the effects of interdependence among the selection criteria. Proposed in this study is a hybrid model, which incorporates the technique of analytic network process (ANP) in which criteria weights are determined using fuzzy extent analysis, Technique for order performance by similarity to ideal solution (TOPSIS) under fuzzy environment is adopted to rank competing suppliers in terms of their overall performances. An example is solved to illustrate the effectiveness and feasibility of the suggested model.",
"title": ""
},
{
"docid": "bf8f46e4c85f7e45879cee4282444f78",
"text": "Influence of culture conditions such as light, temperature and C/N ratio was studied on growth of Haematococcus pluvialis and astaxanthin production. Light had significant effect on astaxanthin production and it varied with its intensity and direction of illumination and effective culture ratio (ECR, volume of culture medium/volume of flask). A 6-fold increase in astaxanthin production (37 mg/L) was achieved with 5.1468·107 erg·m−2·s−1 light intensity (high light, HL) at effective culture ratio of 0.13 compared to that at 0.52 ECR, while the difference in the astaxanthin production was less than 2 — fold between the effective culture ratios at 1.6175·107 erg·m−2·s−1 light intensity (low light, LL). Multidirectional (three-directional) light illumination considerably enhanced the astaxanthin production (4-fold) compared to unidirectional illumination. Cell count was high at low temperature (25 °C) while astaxanthin content was high at 35 °C in both autotrophic and heterotrophic media. In a heterotrophic medium at low C/N ratio H. pluvialis growth was higher with prolonged vegetative phase, while high C/N ratio favoured early encystment and higher astaxanthin formation.",
"title": ""
},
{
"docid": "1b5a28c875cf49eadac7032d3dd6398f",
"text": "This paper proposes a new approach to style, arising from our work on computational media using structural blending, which enriches the conceptual blending of cognitive linguistics with structure building operations in order to encompass syntax and narrative as well as metaphor. We have implemented both conceptual and structural blending, and conducted initial experiments with poetry, although the approach generalizes to other media. The central idea is to analyze style in terms of principles for blending, based on our £nding that very different principles from those of common sense blending are needed for some creative works.",
"title": ""
},
{
"docid": "77796f30d8d1604c459fb3f3fe841515",
"text": "The overall focus of this research is to demonstrate the savings potential generated by the integration of the design of strategic global supply chain networks with the determination of tactical production–distribution allocations and transfer prices. The logistics systems design problem is defined as follows: given a set of potential suppliers, potential manufacturing facilities, and distribution centers with multiple possible configurations, and customers with deterministic demands, determine the configuration of the production–distribution system and the transfer prices between various subsidiaries of the corporation such that seasonal customer demands and service requirements are met and the after tax profit of the corporation is maximized. The after tax profit is the difference between the sales revenue minus the total system cost and taxes. The total cost is defined as the sum of supply, production, transportation, inventory, and facility costs. Two models and their associated solution algorithms will be introduced. The savings opportunities created by designing the system with a methodology that integrates strategic and tactical decisions rather than in a hierarchical fashion are demonstrated with two case studies. The first model focuses on the setting of transfer prices in a global supply chain with the objective of maximizing the after tax profit of an international corporation. The constraints mandated by the national taxing authorities create a bilinear programming formulation. We will describe a very efficient heuristic iterative solution algorithm, which alternates between the optimization of the transfer prices and the material flows. Performance and bounds for the heuristic algorithms will be discussed. The second model focuses on the production and distribution allocation in a single country system, when the customers have seasonal demands. This model also needs to be solved as a subproblem in the heuristic solution of the global transfer price model. The research develops an integrated design methodology based on primal decomposition methods for the mixed integer programming formulation. The primal decomposition allows a natural split of the production and transportation decisions and the research identifies the necessary information flows between the subsystems. The primal decomposition method also allows a very efficient solution algorithm for this general class of large mixed integer programming models. Data requirements and solution times will be discussed for a real life case study in the packaging industry. 2002 Elsevier Science B.V. All rights reserved. European Journal of Operational Research 143 (2002) 1–18 www.elsevier.com/locate/dsw * Corresponding author. Tel.: +1-404-894-2317; fax: +1-404-894-2301. E-mail address: [email protected] (M. Goetschalckx). 0377-2217/02/$ see front matter 2002 Elsevier Science B.V. All rights reserved. PII: S0377-2217 (02 )00142-X",
"title": ""
},
{
"docid": "294d29b68d67d5be0d9fb88dd6329e34",
"text": "A semi-recurrent hybrid VAE-GAN model for generating sequential data is introduced. In order to consider the spatial correlation of the data in each frame of the generated sequence, CNNs are utilized in the encoder, generator, and discriminator. The subsequent frames are sampled from the latent distributions obtained by encoding the previous frames. As a result, the dependencies between the frames are maintained. Two testing frameworks for synthesizing a sequence with any number of frames are also proposed. The promising experimental results on piano music generation indicates the potential of the proposed framework in modelling other sequential data such as video.",
"title": ""
},
{
"docid": "b12049aac966497b17e075c2467151dd",
"text": "IV HLA-G and HLA-E alleles and RPL HLA-G and HLA-E gene polymorphism in patients with Idiopathic Recurrent Pregnancy Loss in Gaza strip",
"title": ""
},
{
"docid": "70a534183750abab91aa74710027a092",
"text": "We consider whether sentiment affects the profitability of momentum strategies. We hypothesize that news that contradicts investors’ sentiment causes cognitive dissonance, slowing the diffusion of such news. Thus, losers (winners) become underpriced under optimism (pessimism). Shortselling constraints may impede arbitraging of losers and thus strengthen momentum during optimistic periods. Supporting this notion, we empirically show that momentum profits arise only under optimism. An analysis of net order flows from small and large trades indicates that small investors are slow to sell losers during optimistic periods. Momentum-based hedge portfolios formed during optimistic periods experience long-run reversals. JFQ_481_2013Feb_Antoniou-Doukas-Subrahmanyam_ms11219_SH_FB_0122_DraftToAuthors.pdf",
"title": ""
},
{
"docid": "fb1c4605eb6663fdd04e9ac1579e7ff0",
"text": "We present an enhanced autonomous indoor navigation system for a stock quadcopter drone where all navigation commands are derived off-board on a base station. The base station processes the video stream transmitted from a forward-facing camera on the drone to determine the drone's physical disposition and trajectory in building hallways to derive steering commands that are sent to the drone. Off-board processing and the lack of on-board sensors for localizing the drone permits standard mid-range quadcopters to be used and conserves the limited power source on the quadcopter. We introduce improved and new techniques, compared to our prototype system [1], to maintain stable flights, estimate distance to hallway intersections and describe algorithms to stop the drone ahead of time and turn correctly at intersections.",
"title": ""
},
{
"docid": "a18da0c7d655fee44eebdf61c7371022",
"text": "This paper describes and compares a set of no-reference quality assessment algorithms for H.264/AVC encoded video sequences. These algorithms have in common a module that estimates the error due to lossy encoding of the video signals, using only information available on the compressed bitstream. In order to obtain perceived quality scores from the estimated error, three methods are presented: i) to weight the error estimates according to a perceptual model; ii) to linearly combine the mean squared error (MSE) estimates with additional video features; iii) to use MSE estimates as the input of a logistic function. The performances of the algorithms are evaluated using cross-validation procedures and the one showing the best performance is also in a preliminary study of quality assessment in the presence of transmission losses.",
"title": ""
},
{
"docid": "8734436dbd821d7a1bb0d2de97ba44d3",
"text": "What makes a face attractive and why do we have the preferences we do? Emergence of preferences early in development and cross-cultural agreement on attractiveness challenge a long-held view that our preferences reflect arbitrary standards of beauty set by cultures. Averageness, symmetry, and sexual dimorphism are good candidates for biologically based standards of beauty. A critical review and meta-analyses indicate that all three are attractive in both male and female faces and across cultures. Theorists have proposed that face preferences may be adaptations for mate choice because attractive traits signal important aspects of mate quality, such as health. Others have argued that they may simply be by-products of the way brains process information. Although often presented as alternatives, I argue that both kinds of selection pressures may have shaped our perceptions of facial beauty.",
"title": ""
},
{
"docid": "b02ebfa85f0948295b401152c0190d74",
"text": "SAGE has had a remarkable impact at Microsoft.",
"title": ""
}
] | scidocsrr |
ef9f48caaba38c29329650121b2ef6c8 | Predictive role of prenasal thickness and nasal bone for Down syndrome in the second trimester. | [
{
"docid": "e7315716a56ffa7ef2461c7c99879efb",
"text": "OBJECTIVE\nTo investigate the potential value of ultrasound examination of the fetal profile for present/hypoplastic fetal nasal bone at 15-22 weeks' gestation as a marker for trisomy 21.\n\n\nMETHODS\nThis was an observational ultrasound study in 1046 singleton pregnancies undergoing amniocentesis for fetal karyotyping at 15-22 (median, 17) weeks' gestation. Immediately before amniocentesis the fetal profile was examined to determine if the nasal bone was present or hypoplastic (absent or shorter than 2.5 mm). The incidence of nasal hypoplasia in the trisomy 21 and the chromosomally normal fetuses was determined and the likelihood ratio for trisomy 21 for nasal hypoplasia was calculated.\n\n\nRESULTS\nAll fetuses were successfully examined for the presence of the nasal bone. The nasal bone was hypoplastic in 21/34 (61.8%) fetuses with trisomy 21, in 12/982 (1.2%) chromosomally normal fetuses and in 1/30 (3.3%) fetuses with other chromosomal defects. In 3/21 (14.3%) trisomy 21 fetuses with nasal hypoplasia there were no other abnormal ultrasound findings. In the chromosomally normal group hypoplastic nasal bone was found in 0.5% of Caucasians and in 8.8% of Afro-Caribbeans. The likelihood ratio for trisomy 21 for hypoplastic nasal bone was 50.5 (95% CI 27.1-92.7) and for present nasal bone it was 0.38 (95% CI 0.24-0.56).\n\n\nCONCLUSION\nNasal bone hypoplasia at the 15-22-week scan is associated with a high risk for trisomy 21 and it is a highly sensitive and specific marker for this chromosomal abnormality.",
"title": ""
}
] | [
{
"docid": "2adf5e06cfc7e6d8cf580bdada485a23",
"text": "This paper describes the comprehensive Terrorism Knowledge Base TM (TKB TM) which will ultimately contain all relevant knowledge about terrorist groups, their members, leaders, affiliations , etc., and full descriptions of specific terrorist events. Led by world-class experts in terrorism , knowledge enterers have, with simple tools, been building the TKB at the rate of up to 100 assertions per person-hour. The knowledge is stored in a manner suitable for computer understanding and reasoning. The TKB also utilizes its reasoning modules to integrate data and correlate observations, generate scenarios, answer questions and compose explanations.",
"title": ""
},
{
"docid": "87133250a9e04fd42f5da5ecacd39d70",
"text": "Performance is a critical challenge in mobile image processing. Given a reference imaging pipeline, or even human-adjusted pairs of images, we seek to reproduce the enhancements and enable real-time evaluation. For this, we introduce a new neural network architecture inspired by bilateral grid processing and local affine color transforms. Using pairs of input/output images, we train a convolutional neural network to predict the coefficients of a locally-affine model in bilateral space. Our architecture learns to make local, global, and content-dependent decisions to approximate the desired image transformation. At runtime, the neural network consumes a low-resolution version of the input image, produces a set of affine transformations in bilateral space, upsamples those transformations in an edge-preserving fashion using a new slicing node, and then applies those upsampled transformations to the full-resolution image. Our algorithm processes high-resolution images on a smartphone in milliseconds, provides a real-time viewfinder at 1080p resolution, and matches the quality of state-of-the-art approximation techniques on a large class of image operators. Unlike previous work, our model is trained off-line from data and therefore does not require access to the original operator at runtime. This allows our model to learn complex, scene-dependent transformations for which no reference implementation is available, such as the photographic edits of a human retoucher.",
"title": ""
},
{
"docid": "cd0c1507c1187e686c7641388413d3b5",
"text": "Inference of three-dimensional motion from the fusion of inertial and visual sensory data has to contend with the preponderance of outliers in the latter. Robust filtering deals with the joint inference and classification task of selecting which data fits the model, and estimating its state. We derive the optimal discriminant and propose several approximations, some used in the literature, others new. We compare them analytically, by pointing to the assumptions underlying their approximations, and empirically. We show that the best performing method improves the performance of state-of-the-art visual-inertial sensor fusion systems, while retaining the same computational complexity.",
"title": ""
},
{
"docid": "7e683f15580e77b1e207731bb73b8107",
"text": "The skeleton is essential for general shape representation. The commonly required properties of a skeletonization algorithm are that the extracted skeleton should be accurate; robust to noise, position and rotation; able to reconstruct the original object; and able to produce a connected skeleton in order to preserve its topological and hierarchical properties. However, the use of a discrete image presents a lot of problems that may in9uence the extraction of the skeleton. Moreover, most of the methods are memory-intensive and computationally intensive, and require a complex data structure. In this paper, we propose a fast, e;cient and accurate skeletonization method for the extraction of a well-connected Euclidean skeleton based on a signed sequential Euclidean distance map. A connectivity criterion is proposed, which can be used to determine whether a given pixel is a skeleton point independently. The criterion is based on a set of point pairs along the object boundary, which are the nearest contour points to the pixel under consideration and its 8 neighbors. Our proposed method generates a connected Euclidean skeleton with a single pixel width without requiring a linking algorithm or iteration process. Experiments show that the runtime of our algorithm is faster than the distance transformation and is linearly proportional to the number of pixels of an image. ? 2002 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "f2b6afabd67354280d091d11e8265b96",
"text": "This paper aims to present three new methods for color detection and segmentation of road signs. The images are taken by a digital camera mounted in a car. The RGB images are converted into IHLS color space, and new methods are applied to extract the colors of the road signs under consideration. The methods are tested on hundreds of outdoor images in different light conditions, and they show high robustness. This project is part of the research taking place in Dalarna University/Sweden in the field of the ITS",
"title": ""
},
{
"docid": "8f289714182c490b726b8edbbb672cfd",
"text": "Design and implementation of a 15kV sub-nanosecond pulse generator using Trigatron type spark gap as a switch. Straightforward and compact trigger generator using pulse shaping network which produces a trigger pulse of sub-nanosecond rise time. A pulse power system requires delivering a high voltage, high coulomb in short rise time. This is achieved by using pulse shaping network comprises of parallel combinations of capacitors and inductor. Spark gap switches are used to switch the energy from capacitive source to inductive load. The pulse hence generated can be used for synchronization of two or more spark gap. Because of the fast rise time and the high output voltage, the reliability of the synchronization is increased. The analytical calculations, simulation, have been carried out to select the circuit parameters. Simulation results using MATLAB/SIMULINK have been implemented in the experimental setup and sub-nanoseconds output waveforms have been obtained.",
"title": ""
},
{
"docid": "874b14b3c3e15b43de3310327affebaf",
"text": "We present the Accelerated Quadratic Proxy (AQP) - a simple first-order algorithm for the optimization of geometric energies defined over triangular and tetrahedral meshes.\n The main stumbling block of current optimization techniques used to minimize geometric energies over meshes is slow convergence due to ill-conditioning of the energies at their minima. We observe that this ill-conditioning is in large part due to a Laplacian-like term existing in these energies. Consequently, we suggest to locally use a quadratic polynomial proxy, whose Hessian is taken to be the Laplacian, in order to achieve a preconditioning effect. This already improves stability and convergence, but more importantly allows incorporating acceleration in an almost universal way, that is independent of mesh size and of the specific energy considered.\n Experiments with AQP show it is rather insensitive to mesh resolution and requires a nearly constant number of iterations to converge; this is in strong contrast to other popular optimization techniques used today such as Accelerated Gradient Descent and Quasi-Newton methods, e.g., L-BFGS. We have tested AQP for mesh deformation in 2D and 3D as well as for surface parameterization, and found it to provide a considerable speedup over common baseline techniques.",
"title": ""
},
{
"docid": "c7ea816f2bb838b8c5aac3cdbbd82360",
"text": "Semantic annotated parallel corpora, though rare, play an increasingly important role in natural language processing. These corpora provide valuable data for computational tasks like sense-based machine translation and word sense disambiguation, but also to contrastive linguistics and translation studies. In this paper we present the ongoing development of a web-based corpus semantic annotation environment that uses the Open Multilingual Wordnet (Bond and Foster, 2013) as a sense inventory. The system includes interfaces to help coordinating the annotation project and a corpus browsing interface designed specifically to meet the needs of a semantically annotated corpus. The tool was designed to build the NTU-Multilingual Corpus (Tan and Bond, 2012). For the past six years, our tools have been tested and developed in parallel with the semantic annotation of a portion of this corpus in Chinese, English, Japanese and Indonesian. The annotation system is released under an open source license (MIT).",
"title": ""
},
{
"docid": "933312292c64c916e69357c5aec42189",
"text": "Augmented reality annotations and virtual scene navigation add new dimensions to remote collaboration. In this paper, we present a touchscreen interface for creating freehand drawings as world-stabilized annotations and for virtually navigating a scene reconstructed live in 3D, all in the context of live remote collaboration. Two main focuses of this work are (1) automatically inferring depth for 2D drawings in 3D space, for which we evaluate four possible alternatives, and (2) gesture-based virtual navigation designed specifically to incorporate constraints arising from partially modeled remote scenes. We evaluate these elements via qualitative user studies, which in addition provide insights regarding the design of individual visual feedback elements and the need to visualize the direction of drawings.",
"title": ""
},
{
"docid": "4a043a02f3fad07797245b0a2c4ea4c5",
"text": "The worldwide population of people over the age of 65 has been predicted to more than double from 1990 to 2025. Therefore, ubiquitous health-care systems have become an important topic of research in recent years. In this paper, an integrated system for portable electrocardiography (ECG) monitoring, with an on-board processor for time–frequency analysis of heart rate variability (HRV), is presented. The main function of proposed system comprises three parts, namely, an analog-to-digital converter (ADC) controller, an HRV processor, and a lossless compression engine. At the beginning, ECG data acquired from front-end circuits through the ADC controller is passed through the HRV processor for analysis. Next, the HRV processor performs real-time analysis of time–frequency HRV using the Lomb periodogram and a sliding window configuration. The Lomb periodogram is suited for spectral analysis of unevenly sampled data and has been applied to time–frequency analysis of HRV in the proposed system. Finally, the ECG data are compressed by 2.5 times using the lossless compression engine before output using universal asynchronous receiver/transmitter (UART). Bluetooth is employed to transmit analyzed HRV data and raw ECG data to a remote station for display or further analysis. The integrated ECG health-care system design proposed has been implemented using UMC 90 nm CMOS technology. 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "6eb229b17a4634183818ff4a15f981b6",
"text": "Fine-grained image classification is a challenging task due to the large intra-class variance and small inter-class variance, aiming at recognizing hundreds of sub-categories belonging to the same basic-level category. Most existing fine-grained image classification methods generally learn part detection models to obtain the semantic parts for better classification accuracy. Despite achieving promising results, these methods mainly have two limitations: (1) not all the parts which obtained through the part detection models are beneficial and indispensable for classification, and (2) fine-grained image classification requires more detailed visual descriptions which could not be provided by the part locations or attribute annotations. For addressing the above two limitations, this paper proposes the two-stream model combing vision and language (CVL) for learning latent semantic representations. The vision stream learns deep representations from the original visual information via deep convolutional neural network. The language stream utilizes the natural language descriptions which could point out the discriminative parts or characteristics for each image, and provides a flexible and compact way of encoding the salient visual aspects for distinguishing sub-categories. Since the two streams are complementary, combing the two streams can further achieves better classification accuracy. Comparing with 12 state-of-the-art methods on the widely used CUB-200-2011 dataset for fine-grained image classification, the experimental results demonstrate our CVL approach achieves the best performance.",
"title": ""
},
{
"docid": "06675c4b42683181cecce7558964c6b6",
"text": "We present in this work an economic analysis of ransomware, with relevant data from Cryptolocker, CryptoWall, TeslaCrypt and other major strands. We include a detailed study of the impact that different price discrimination strategies can have on the success of a ransomware family, examining uniform pricing, optimal price discrimination and bargaining strategies and analysing their advantages and limitations. In addition, we present results of a preliminary survey that can helps in estimating an optimal ransom value. We discuss at each stage whether the different schemes we analyse have been encountered already in existing malware, and the likelihood of them being implemented and becoming successful. We hope this work will help to gain some useful insights for predicting how ransomware may evolve in the future and be better prepared to counter its current and future threat.",
"title": ""
},
{
"docid": "0d9057d8a40eb8faa7e67128a7d24565",
"text": "We develop efficient solution methods for a robust empirical risk minimization problem designed to give calibrated confidence intervals on performance and provide optimal tradeoffs between bias and variance. Our methods apply to distributionally robust optimization problems proposed by Ben-Tal et al., which put more weight on observations inducing high loss via a worst-case approach over a non-parametric uncertainty set on the underlying data distribution. Our algorithm solves the resulting minimax problems with nearly the same computational cost of stochastic gradient descent through the use of several carefully designed data structures. For a sample of size n, the per-iteration cost of our method scales as O(log n), which allows us to give optimality certificates that distributionally robust optimization provides at little extra cost compared to empirical risk minimization and stochastic gradient methods.",
"title": ""
},
{
"docid": "c0b30475f78acefae1c15f9f5d6dc57b",
"text": "Traditionally, autonomous cars make predictions about other drivers’ future trajectories, and plan to stay out of their way. This tends to result in defensive and opaque behaviors. Our key insight is that an autonomous car’s actions will actually affect what other cars will do in response, whether the car is aware of it or not. Our thesis is that we can leverage these responses to plan more efficient and communicative behaviors. We model the interaction between an autonomous car and a human driver as a dynamical system, in which the robot’s actions have immediate consequences on the state of the car, but also on human actions. We model these consequences by approximating the human as an optimal planner, with a reward function that we acquire through Inverse Reinforcement Learning. When the robot plans with this reward function in this dynamical system, it comes up with actions that purposefully change human state: it merges in front of a human to get them to slow down or to reach its own goal faster; it blocks two lanes to get them to switch to a third lane; or it backs up slightly at an intersection to get them to proceed first. Such behaviors arise from the optimization, without relying on hand-coded signaling strategies and without ever explicitly modeling communication. Our user study results suggest that the robot is indeed capable of eliciting desired changes in human state by planning using this dynamical system.",
"title": ""
},
{
"docid": "898ff77dbfaf00efa3b08779a781aa0b",
"text": "The monumental cost of health care, especially for chronic disease treatment, is quickly becoming unmanageable. This crisis has motivated the drive towards preventative medicine, where the primary concern is recognizing disease risk and taking action at the earliest signs. However, universal testing is neither time nor cost efficient. We propose CARE, a Collaborative Assessment and Recommendation Engine, which relies only on a patient's medical history using ICD-9-CM codes in order to predict future diseases risks. CARE uses collaborative filtering to predict each patient's greatest disease risks based on their own medical history and that of similar patients. We also describe an Iterative version, ICARE, which incorporates ensemble concepts for improved performance. These novel systems require no specialized information and provide predictions for medical conditions of all kinds in a single run. We present experimental results on a Medicare dataset, demonstrating that CARE and ICARE perform well at capturing future disease risks.",
"title": ""
},
{
"docid": "bf4b6cd15c0b3ddb5892f1baea9dec68",
"text": "The purpose of this study was to examine the distribution, abundance and characteristics of plastic particles in plankton samples collected routinely in Northeast Pacific ecosystems, and to contribute to the development of ideas for future research into the occurrence and impact of small plastic debris in marine pelagic ecosystems. Plastic debris particles were assessed from zooplankton samples collected as part of the National Oceanic and Atmospheric Administration's (NOAA) ongoing ecosystem surveys during two research cruises in the Southeast Bering Sea in the spring and fall of 2006 and four research cruises off the U.S. west coast (primarily off southern California) in spring, summer and fall of 2006, and in January of 2007. Nets with 0.505 mm mesh were used to collect surface samples during all cruises, and sub-surface samples during the four cruises off the west coast. The 595 plankton samples processed indicate that plastic particles are widely distributed in surface waters. The proportion of surface samples from each cruise that contained particles of plastic ranged from 8.75 to 84.0%, whereas particles were recorded in sub-surface samples from only one cruise (in 28.2% of the January 2007 samples). Spatial and temporal variability was apparent in the abundance and distribution of the plastic particles and mean standardized quantities varied among cruises with ranges of 0.004-0.19 particles/m³, and 0.014-0.209 mg dry mass/m³. Off southern California, quantities for the winter cruise were significantly higher, and for the spring cruise significantly lower than for the summer and fall surveys (surface data). Differences between surface particle concentrations and mass for the Bering Sea and California coast surveys were significant for pair-wise comparisons of the spring but not the fall cruises. The particles were assigned to three plastic product types: product fragments, fishing net and line fibers, and industrial pellets; and five size categories: <1 mm, 1-2.5 mm, >2.5-5 mm, >5-10 mm, and >10 mm. Product fragments accounted for the majority of the particles, and most were less than 2.5 mm in size. The ubiquity of such particles in the survey areas and predominance of sizes <2.5 mm implies persistence in these pelagic ecosystems as a result of continuous breakdown from larger plastic debris fragments, and widespread distribution by ocean currents. Detailed investigations of the trophic ecology of individual zooplankton species, and their encounter rates with various size ranges of plastic particles in the marine pelagic environment, are required in order to understand the potential for ingestion of such debris particles by these organisms. Ongoing plankton sampling programs by marine research institutes in large marine ecosystems are good potential sources of data for continued assessment of the abundance, distribution and potential impact of small plastic debris in productive coastal pelagic zones.",
"title": ""
},
{
"docid": "0fe02fcc6f68ba1563d3f5d96a8da330",
"text": "We present a novel technique for jointly predicting semantic arguments for lexical predicates. The task is to find the best matching between semantic roles and sentential spans, subject to structural constraints that come from expert linguistic knowledge (e.g., in the FrameNet lexicon). We formulate this task as an integer linear program (ILP); instead of using an off-the-shelf tool to solve the ILP, we employ a dual decomposition algorithm, which we adapt for exact decoding via a branch-and-bound technique. Compared to a baseline that makes local predictions, we achieve better argument identification scores and avoid all structural violations. Runtime is nine times faster than a proprietary ILP solver.",
"title": ""
},
{
"docid": "e1b6cc1dbd518760c414cd2ddbe88dd5",
"text": "HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. L’archive ouverte pluridisciplinaire HAL, est destinée au dépôt et à la diffusion de documents scientifiques de niveau recherche, publiés ou non, émanant des établissements d’enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Mind the Traps! Design Guidelines for Rigorous BCI Experiments Camille Jeunet, Stefan Debener, Fabien Lotte, Jeremie Mattout, Reinhold Scherer, Catharina Zich",
"title": ""
},
{
"docid": "8cbe0ff905a58e575f2d84e4e663a857",
"text": "Mixed reality (MR) technology development is now gaining momentum due to advances in computer vision, sensor fusion, and realistic display technologies. With most of the research and development focused on delivering the promise of MR, there is only barely a few working on the privacy and security implications of this technology. is survey paper aims to put in to light these risks, and to look into the latest security and privacy work on MR. Specically, we list and review the dierent protection approaches that have been proposed to ensure user and data security and privacy in MR. We extend the scope to include work on related technologies such as augmented reality (AR), virtual reality (VR), and human-computer interaction (HCI) as crucial components, if not the origins, of MR, as well as numerous related work from the larger area of mobile devices, wearables, and Internet-of-ings (IoT). We highlight the lack of investigation, implementation, and evaluation of data protection approaches in MR. Further challenges and directions on MR security and privacy are also discussed.",
"title": ""
}
] | scidocsrr |
df7ea4f56972e28521968146f39b8ee3 | Machine Learning-based Software Testing: Towards a Classification Framework | [
{
"docid": "112ecbb8547619577962298fbe65eae1",
"text": "In the context of open source development or software evolution, developers often face test suites which have been developed with no apparent rationale and which may need to be augmented or refined to ensure sufficient dependability, or even reduced to meet tight deadlines. We refer to this process as the re-engineering of test suites. It is important to provide both methodological and tool support to help people understand the limitations of test suites and their possible redundancies, so as to be able to refine them in a cost effective manner. To address this problem in the case of black-box, Category-Partition testing, we propose a methodology and a tool based on machine learning that has shown promising results on a case study involving students as testers. 2009 Elsevier B.V. All rights reserved.",
"title": ""
}
] | [
{
"docid": "b886b54f77168eab82e449b7e5cd3aac",
"text": "BACKGROUND\nLow desire is the most common sexual problem in women at midlife. Prevalence data are limited by lack of validated instruments or exclusion of un-partnered or sexually inactive women.\n\n\nAIM\nTo document the prevalence of and factors associated with low desire, sexually related personal distress, and hypoactive sexual desire dysfunction (HSDD) using validated instruments.\n\n\nMETHODS\nCross-sectional, nationally representative, community-based sample of 2,020 Australian women 40 to 65 years old.\n\n\nOUTCOMES\nLow desire was defined as a score no higher than 5.0 on the desire domain of the Female Sexual Function Index (FSFI); sexually related personal distress was defined as a score of at least 11.0 on the Female Sexual Distress Scale-Revised; and HSDD was defined as a combination of these scores. The Menopause Specific Quality of Life Questionnaire was used to document menopausal vasomotor symptoms. The Beck Depression Inventory-II was used to identify moderate to severe depressive symptoms (score ≥ 20).\n\n\nRESULTS\nThe prevalence of low desire was 69.3% (95% CI = 67.3-71.3), that of sexually related personal distress was 40.5% (95% CI = 38.4-42.6), and that of HSDD was 32.2% (95% CI = 30.1-34.2). Of women who were not partnered or sexually active, 32.4% (95% CI = 24.4-40.2) reported sexually related personal distress. Factors associated with HSDD in an adjusted logistic regression model included being partnered (odds ratio [OR] = 3.30, 95% CI = 2.46-4.41), consuming alcohol (OR = 1.48, 95% CI = 1.16-1.89), vaginal dryness (OR = 2.08, 95% CI = 1.66-2.61), pain during or after intercourse (OR = 1.63, 95% CI = 1.27-2.09), moderate to severe depressive symptoms (OR = 2.69, 95% CI 1.99-3.64), and use of psychotropic medication (OR = 1.42, 95% CI = 1.10-1.83). Vasomotor symptoms were not associated with low desire, sexually related personal distress, or HSDD.\n\n\nCLINICAL IMPLICATIONS\nGiven the high prevalence, clinicians should screen midlife women for HSDD.\n\n\nSTRENGTHS AND LIMITATIONS\nStrengths include the large size and representative nature of the sample and the use of validated tools. Limitations include the requirement to complete a written questionnaire in English. Questions within the FSFI limit the applicability of FSFI total scores, but not desire domain scores, in recently sexually inactive women, women without a partner, and women who do not engage in penetrative intercourse.\n\n\nCONCLUSIONS\nLow desire, sexually related personal distress, and HSDD are common in women at midlife, including women who are un-partnered or sexually inactive. Some factors associated with HSDD, such as psychotropic medication use and vaginal dryness, are modifiable or can be treated with safe and effective therapies. Worsley R, Bell RJ, Gartoulla P, Davis SR. Prevalence and Predictors of Low Sexual Desire, Sexually Related Personal Distress, and Hypoactive Sexual Desire Dysfunction in a Community-Based Sample of Midlife Women. J Sex Med 2017;14:675-686.",
"title": ""
},
{
"docid": "88cf953ba92b54f89cdecebd4153bee3",
"text": "In this paper, we propose a novel object detection framework named \"Deep Regionlets\" by establishing a bridge between deep neural networks and conventional detection schema for accurate generic object detection. Motivated by the abilities of regionlets for modeling object deformation and multiple aspect ratios, we incorporate regionlets into an end-to-end trainable deep learning framework. The deep regionlets framework consists of a region selection network and a deep regionlet learning module. Specifically, given a detection bounding box proposal, the region selection network provides guidance on where to select regions to learn the features from. The regionlet learning module focuses on local feature selection and transformation to alleviate local variations. To this end, we first realize non-rectangular region selection within the detection framework to accommodate variations in object appearance. Moreover, we design a “gating network\" within the regionlet leaning module to enable soft regionlet selection and pooling. The Deep Regionlets framework is trained end-to-end without additional efforts. We perform ablation studies and conduct extensive experiments on the PASCAL VOC and Microsoft COCO datasets. The proposed framework outperforms state-of-theart algorithms, such as RetinaNet and Mask R-CNN, even without additional segmentation labels.",
"title": ""
},
{
"docid": "8d4bf1b8b45bae6c506db5339e6d9025",
"text": "Sparse Matrix-Matrix multiplication is a key kernel that has applications in several domains such as scientific computing and graph analysis. Several algorithms have been studied in the past for this foundational kernel. In this paper, we develop parallel algorithms for sparse matrixmatrix multiplication with a focus on performance portability across different high performance computing architectures. The performance of these algorithms depend on the data structures used in them. We compare different types of accumulators in these algorithms and demonstrate the performance difference between these data structures. Furthermore, we develop a meta-algorithm, kkSpGEMM, to choose the right algorithm and data structure based on the characteristics of the problem. We show performance comparisons on three architectures and demonstrate the need for the community to develop two phase sparse matrix-matrix multiplication implementations for efficient reuse of the data structures involved.",
"title": ""
},
{
"docid": "cb26bb277afc6d521c4c5960b35ed77d",
"text": "We propose a novel algorithm for the segmentation and prerecognition of offline handwritten Arabic text. Our character segmentation method over-segments each word, and then removes extra breakpoints using knowledge of letter shapes. On a test set of 200 images, 92.3% of the segmentation points were detected correctly, with 5.1% instances of over-segmentation. The prerecognition component annotates each detected letter with shape information, to be used for recognition in future work.",
"title": ""
},
{
"docid": "6131fdbfe28aaa303b1ee4c29a65f766",
"text": "Destination prediction is an essential task for many emerging location based applications such as recommending sightseeing places and targeted advertising based on destination. A common approach to destination prediction is to derive the probability of a location being the destination based on historical trajectories. However, existing techniques using this approach suffer from the “data sparsity problem”, i.e., the available historical trajectories is far from being able to cover all possible trajectories. This problem considerably limits the number of query trajectories that can obtain predicted destinations. We propose a novel method named Sub-Trajectory Synthesis (SubSyn) algorithm to address the data sparsity problem. SubSyn algorithm first decomposes historical trajectories into sub-trajectories comprising two neighbouring locations, and then connects the sub-trajectories into “synthesised” trajectories. The number of query trajectories that can have predicted destinations is exponentially increased by this means. Experiments based on real datasets show that SubSyn algorithm can predict destinations for up to ten times more query trajectories than a baseline algorithm while the SubSyn prediction algorithm runs over two orders of magnitude faster than the baseline algorithm. In this paper, we also consider the privacy protection issue in case an adversary uses SubSyn algorithm to derive sensitive location information of users. We propose an efficient algorithm to select a minimum number of locations a user has to hide on her trajectory in order to avoid privacy leak. Experiments also validate the high efficiency of the privacy protection algorithm.",
"title": ""
},
{
"docid": "aefade278a0af130e0c7923b704e2ee1",
"text": "Prediction of the risk in patients with upper gastrointestinal bleeding has been the subject of different studies for several decades. This study showed the significance of Forrest classification, used in initial endoscopic investigation for evaluation of bleeding lesion, for the prediction of rebleeding. Rockall and Blatchford risk score systems evaluate certain clinical, biochemical and endoscopic variables significant for the prediction of rebleeding as well as the final outcome of disease. The percentage of rebleeding in the group of studied patients in accordance with Forrest classification showed that the largest number of patients belonged to the FIIb group. The predictive evaluation of initial and definitive Rockall score was significantly associated with percentage of rebleeding, while Blatchfor score had boundary significance. Acta Medica Medianae 2007;46(4):38-43.",
"title": ""
},
{
"docid": "865cfae2da5ad3d1d10d21b1defdc448",
"text": "During the last decade, novel immunotherapeutic strategies, in particular antibodies directed against immune checkpoint inhibitors, have revolutionized the treatment of different malignancies leading to an improved survival of patients. Identification of immune-related biomarkers for diagnosis, prognosis, monitoring of immune responses and selection of patients for specific cancer immunotherapies is urgently required and therefore areas of intensive research. Easily accessible samples in particular liquid biopsies (body fluids), such as blood, saliva or urine, are preferred for serial tumor biopsies.Although monitoring of immune and tumor responses prior, during and post immunotherapy has led to significant advances of patients' outcome, valid and stable prognostic biomarkers are still missing. This might be due to the limited capacity of the technologies employed, reproducibility of results as well as assay stability and validation of results. Therefore solid approaches to assess immune regulation and modulation as well as to follow up the nature of the tumor in liquid biopsies are urgently required to discover valuable and relevant biomarkers including sample preparation, timing of the collection and the type of liquid samples. This article summarizes our knowledge of the well-known liquid material in a new context as liquid biopsy and focuses on collection and assay requirements for the analysis and the technical developments that allow the implementation of different high-throughput assays to detect alterations at the genetic and immunologic level, which could be used for monitoring treatment efficiency, acquired therapy resistance mechanisms and the prognostic value of the liquid biopsies.",
"title": ""
},
{
"docid": "0525d981721fc8a85bb4daef78b6cbe9",
"text": "Cloud computing environments provide on-demand resource provisioning, allowing applications to elastically scale. However, application benchmarks currently being used to test cloud management systems are not designed for this purpose. This results in resource underprovisioning and quality-of-service (QoS) violations when systems tested using these benchmarks are deployed in production environments. We present C-MART, a benchmark designed to emulate a modern web application running in a cloud computing environment. It is designed using the cloud computing paradigm of elastic scalability at every application tier and utilizes modern web-based technologies such as HTML5, AJAX, jQuery, and SQLite. C-MART consists of a web application, client emulator, deployment server, and scaling API. The deployment server automatically deploys and configures the test environment in orders of magnitude less time than current benchmarks. The scaling API allows users to define and provision their own customized datacenter. The client emulator generates the web workload for the application by emulating complex and varied client behaviors, including decisions based on page content and prior history. We show that C-MART can detect problems in management systems that previous benchmarks fail to identify, such as an increase from 4.4 to 50 percent error in predicting server CPU utilization and resource underprovisioning in 22 percent of QoS measurements.",
"title": ""
},
{
"docid": "c988dc0e9be171a5fcb555aedcdf67e3",
"text": "Online social networks, such as Facebook, are increasingly utilized by many people. These networks allow users to publish details about themselves and to connect to their friends. Some of the information revealed inside these networks is meant to be private. Yet it is possible to use learning algorithms on released data to predict private information. In this paper, we explore how to launch inference attacks using released social networking data to predict private information. We then devise three possible sanitization techniques that could be used in various situations. Then, we explore the effectiveness of these techniques and attempt to use methods of collective inference to discover sensitive attributes of the data set. We show that we can decrease the effectiveness of both local and relational classification algorithms by using the sanitization methods we described.",
"title": ""
},
{
"docid": "f262aba2003f986012bbec1a9c2fcb83",
"text": "Hemiplegic migraine is a rare form of migraine with aura that involves motor aura (weakness). This type of migraine can occur as a sporadic or a familial disorder. Familial forms of hemiplegic migraine are dominantly inherited. Data from genetic studies have implicated mutations in genes that encode proteins involved in ion transportation. However, at least a quarter of the large families affected and most sporadic cases do not have a mutation in the three genes known to be implicated in this disorder, suggesting that other genes are still to be identified. Results from functional studies indicate that neuronal hyperexcitability has a pivotal role in the pathogenesis of hemiplegic migraine. The clinical manifestations of hemiplegic migraine range from attacks with short-duration hemiparesis to severe forms with recurrent coma and prolonged hemiparesis, permanent cerebellar ataxia, epilepsy, transient blindness, or mental retardation. Diagnosis relies on a careful patient history and exclusion of potential causes of symptomatic attacks. The principles of management are similar to those for common varieties of migraine, except that vasoconstrictors, including triptans, are historically contraindicated but are often used off-label to stop the headache, and prophylactic treatment can include lamotrigine and acetazolamide.",
"title": ""
},
{
"docid": "1b60ded506c85edd798fe0759cce57fa",
"text": "The studies of plant trait/disease refer to the studies of visually observable patterns of a particular plant. Nowadays crops face many traits/diseases. Damage of the insect is one of the major trait/disease. Insecticides are not always proved efficient because insecticides may be toxic to some kind of birds. It also damages natural animal food chains. A common practice for plant scientists is to estimate the damage of plant (leaf, stem) because of disease by an eye on a scale based on percentage of affected area. It results in subjectivity and low throughput. This paper provides a advances in various methods used to study plant diseases/traits using image processing. The methods studied are for increasing throughput & reducing subjectiveness arising from human experts in detecting the plant diseases.",
"title": ""
},
{
"docid": "67d41a84050f3bf9bc004e7c1787a2bc",
"text": "Facial aging is a complex process individualized by interaction with exogenous and endogenous factors. The upper lip is one of the facial components by which facial attractiveness is defined. Upper lip aging is significantly influenced by maxillary bone and teeth. Aging of the cutaneous part can be aggravated by solar radiation and smoking. We provide a review about minimally invasive techniques for correction of aging signs of the upper lip with a tailored approach to patient’s characteristics. The treatment is based upon use of fillers, laser, and minor surgery. Die Alterung des Gesichts ist ein komplexer Prozess, welcher durch die Wechselwirkung exogener und endogener Faktoren individuell geprägt wird. Die Oberlippe zählt zu den fazialen Komponenten, welche die Attraktivität des Gesichts definieren. Die Alterung der Oberlippe wird durch den Oberkieferknochen und die Zähne beeinflusst. Alterungsprozesse des kutanen Anteils können durch Sonnenbestrahlung und Rauchen aggraviert werden. Die Autoren stellen eine Übersicht zur den minimalinvasiven Verfahren der Korrektur altersbedingter Veränderungen der Oberlippe mit Individualisierung je nach Patientenmerkmalen vor. Die Technik basiert auf der Nutzung von Fillern, Lasern und kleineren chirurgischen Eingriffen.",
"title": ""
},
{
"docid": "572be2eb18bd929c2b4e482f7d3e0754",
"text": "• Supervised learning --where the algorithm generates a function that maps inputs to desired outputs. One standard formulation of the supervised learning task is the classification problem: the learner is required to learn (to approximate the behavior of) a function which maps a vector into one of several classes by looking at several input-output examples of the function. • Unsupervised learning --which models a set of inputs: labeled examples are not available. • Semi-supervised learning --which combines both labeled and unlabeled examples to generate an appropriate function or classifier. • Reinforcement learning --where the algorithm learns a policy of how to act given an observation of the world. Every action has some impact in the environment, and the environment provides feedback that guides the learning algorithm. • Transduction --similar to supervised learning, but does not explicitly construct a function: instead, tries to predict new outputs based on training inputs, training outputs, and new inputs. • Learning to learn --where the algorithm learns its own inductive bias based on previous experience.",
"title": ""
},
{
"docid": "47251c2ce233226b015a2482847dc48d",
"text": "Recent advances in computer graphics have made it possible to visualize mathematical models of biological structures and processes with unprecedented realism. The resulting images, animations, and interactive systems are useful as research and educational tools in developmental biology and ecology. Prospective applications also include computer-assisted landscape architecture, design of new varieties of plants, and crop yield prediction. In this paper we revisit foundations of the applications of L-systems to the modeling of plants, and we illustrate them using recently developed sample models.",
"title": ""
},
{
"docid": "9e5c123b6f744037436e0d5c917e8640",
"text": "Relational databases have limited support for data collaboration, where teams collaboratively curate and analyze large datasets. Inspired by software version control systems like git, we propose (a) a dataset version control system, giving users the ability to create, branch, merge, difference and search large, divergent collections of datasets, and (b) a platform, DATAHUB, that gives users the ability to perform collaborative data analysis building on this version control system. We outline the challenges in providing dataset version control at scale.",
"title": ""
},
{
"docid": "ea5697d417fe154be77d941c19d8a86e",
"text": "The foundations of functional programming languages are examined from both historical and technical perspectives. Their evolution is traced through several critical periods: early work on lambda calculus and combinatory calculus, Lisp, Iswim, FP, ML, and modern functional languages such as Miranda1 and Haskell. The fundamental premises on which the functional programming methodology stands are critically analyzed with respect to philosophical, theoretical, and pragmatic concerns. Particular attention is paid to the main features that characterize modern functional languages: higher-order functions, lazy evaluation, equations and pattern matching, strong static typing and type inference, and data abstraction. In addition, current research areas—such as parallelism, nondeterminism, input/output, and state-oriented computations—are examined with the goal of predicting the future development and application of functional languages.",
"title": ""
},
{
"docid": "68a5192778ae203ea1e31ba4e29b4330",
"text": "Mobile crowdsensing is becoming a vital technique for environment monitoring, infrastructure management, and social computing. However, deploying mobile crowdsensing applications in large-scale environments is not a trivial task. It creates a tremendous burden on application developers as well as mobile users. In this paper we try to reveal the barriers hampering the scale-up of mobile crowdsensing applications, and to offer our initial thoughts on the potential solutions to lowering the barriers.",
"title": ""
},
{
"docid": "19d8b6ff70581307e0a00c03b059964f",
"text": "We propose a novel approach for analysing time series using complex network theory. We identify the recurrence matrix (calculated from time series) with the adjacency matrix of a complex network and apply measures for the characterisation of complex networks to this recurrence matrix. By using the logistic map, we illustrate the potential of these complex network measures for the detection of dynamical transitions. Finally, we apply the proposed approach to a marine palaeo-climate record and identify the subtle changes to the climate regime.",
"title": ""
},
{
"docid": "038064c2998a5da8664be1ba493a0326",
"text": "The bandit problem is revisited and considered under the PAC model. Our main contribution in this part is to show that given n arms, it suffices to pull the arms O( n 2 log 1 δ ) times to find an -optimal arm with probability of at least 1 − δ. This is in contrast to the naive bound of O( n 2 log n δ ). We derive another algorithm whose complexity depends on the specific setting of the rewards, rather than the worst case setting. We also provide a matching lower bound. We show how given an algorithm for the PAC model Multi-Armed Bandit problem, one can derive a batch learning algorithm for Markov Decision Processes. This is done essentially by simulating Value Iteration, and in each iteration invoking the multi-armed bandit algorithm. Using our PAC algorithm for the multi-armed bandit problem we improve the dependence on the number of actions.",
"title": ""
}
] | scidocsrr |
3cb19df8a8927abec692de0d2f258b47 | IoT Security Techniques Based on Machine Learning: How Do IoT Devices Use AI to Enhance Security? | [
{
"docid": "c2571afd6f2b9e9856c8f8c4eeb60b81",
"text": "In the Internet of Things, services can be provisioned using centralized architectures, where central entities acquire, process, and provide information. Alternatively, distributed architectures, where entities at the edge of the network exchange information and collaborate with each other in a dynamic way, can also be used. In order to understand the applicability and viability of this distributed approach, it is necessary to know its advantages and disadvantages – not only in terms of features but also in terms of security and privacy challenges. The purpose of this paper is to show that the distributed approach has various challenges that need to be solved, but also various interesting properties and strengths.",
"title": ""
},
{
"docid": "efe74721de3eda130957ce26435375a3",
"text": "Internet of Things (IoT) has been given a lot of emphasis since the 90s when it was first proposed as an idea of interconnecting different electronic devices through a variety of technologies. However, during the past decade IoT has rapidly been developed without appropriate consideration of the profound security goals and challenges involved. This study explores the security aims and goals of IoT and then provides a new classification of different types of attacks and countermeasures on security and privacy. It then discusses future security directions and challenges that need to be addressed to improve security concerns over such networks and aid in the wider adoption of IoT by masses.",
"title": ""
},
{
"docid": "9bafd07082066235a6b99f00e360b0d2",
"text": "Mobile devices have become a significant part of people’s lives, leading to an increasing number of users involved with such technology. The rising number of users invites hackers to generate malicious applications. Besides, the security of sensitive data available on mobile devices is taken lightly. Relying on currently developed approaches is not sufficient, given that intelligent malware keeps modifying rapidly and as a result becomes more difficult to detect. In this paper, we propose an alternative solution to evaluating malware detection using the anomaly-based approach with machine learning classifiers. Among the various network traffic features, the four categories selected are basic information, content based, time based and connection based. The evaluation utilizes two datasets: public (i.e. MalGenome) and private (i.e. self-collected). Based on the evaluation results, both the Bayes network and random forest classifiers produced more accurate readings, with a 99.97 % true-positive rate (TPR) as opposed to the multi-layer perceptron with only 93.03 % on the MalGenome dataset. However, this experiment revealed that the k-nearest neighbor classifier efficiently detected the latest Android malware with an 84.57 % truepositive rate higher than other classifiers. Communicated by V. Loia. F. A. Narudin · A. Gani Mobile Cloud Computing (MCC), University of Malaya, 50603 Kuala Lumpur, Malaysia A. Feizollah (B) · N. B. Anuar Security Research Group (SECReg), Faculty of Computer Science and Information Technology, University of Malaya, 50603 Kuala Lumpur, Malaysia e-mail: [email protected]",
"title": ""
}
] | [
{
"docid": "048975c29cd23b08f414861d9804e900",
"text": "Diversity is a defining characteristic of global collectives facilitated by the Internet. Though substantial evidence suggests that diversity has profound implications for a variety of outcomes including performance, member engagement, and withdrawal behavior, the effects of diversity have been predominantly investigated in the context of organizational workgroups or virtual teams. We use a diversity lens to study the success of non-traditional virtual work groups exemplified by open source software (OSS) projects. Building on the diversity literature, we propose that three types of diversity (separation, variety and disparity) influence two critical outcomes for OSS projects: community engagement and market success. We draw on the OSS literature to further suggest that the effects of diversity on market success are moderated by the application development stage. We instantiate the operational definitions of three forms of diversity to the unique context of open source projects. Using archival data from 357 projects hosted on SourceForge, we find that disparity diversity, reflecting variation in participants' contribution-based reputation, is positively associated with success. The impact of separation diversity, conceptualized as culture and measured as diversity in the spoken language and country of participants, has a negative impact on community engagement but an unexpected positive effect on market success. Variety diversity, reflected in dispersion in project participant roles, positively influences community engagement and market success. The impact of diversity on market success is conditional on the development stage of the project. We discuss how the study's findings advance the literature on antecedents of OSS success, expand our theoretical understanding of diversity, and present the practical implications of the results for managers of distributed collectives.",
"title": ""
},
{
"docid": "02936143b0da0a789fc1c645e30c7e50",
"text": "We describe a robust accurate domain-independent approach t statistical parsing incorporated into the new release of the ANLT toolkit, and publicly available as a research tool. The system has bee n used to parse many well known corpora in order to produce dat a for lexical acquisition efforts; it has also been used as a component in a open-domain question answering project. The performance of the system is competitive with that of statistical parsers using highl y lexicalised parse selection models. However, we plan to ex end the system to improve parse coverage, depth and accuracy.",
"title": ""
},
{
"docid": "653ca5c9478b1b1487fc24eeea8c1677",
"text": "A fundamental question in information theory and in computer science is how to measure similarity or the amount of shared information between two sequences. We have proposed a metric, based on Kolmogorov complexity, to answer this question and have proven it to be universal. We apply this metric in measuring the amount of shared information between two computer programs, to enable plagiarism detection. We have designed and implemented a practical system SID (Software Integrity Diagnosis system) that approximates this metric by a heuristic compression algorithm. Experimental results demonstrate that SID has clear advantages over other plagiarism detection systems. SID system server is online at http://software.bioinformatics.uwaterloo.ca/SID/.",
"title": ""
},
{
"docid": "874cff80953c4a1e929134ce59cb1fee",
"text": "Automatically detecting controversy on the Web is a useful capability for a search engine to help users review web content with a more balanced and critical view. The current state-of-the art approach is to find K-Nearest-Neighbors in Wikipedia to the document query, and to aggregate their controversy scores that are automatically computed from the Wikipedia edit-history features. In this paper, we discover two major weakness in the prior work and propose modifications. First, the generated single query from document to find KNN Wikipages easily becomes ambiguous. Thus, we propose to generate multiple queries from smaller but more topically coherent paragraph of the document. Second, the automatically computed controversy scores of Wikipedia articles that depend on \"edit war\" features have a drawback that without an edit history, there can be no edit wars. To infer more reliable controversy scores for articles with little edit history, we smooth the original score from the scores of the neighbors with more established edit history. We show that the modified framework is improved by up to 5% for binary controversy classification in a publicly available dataset.",
"title": ""
},
{
"docid": "ab1c7ede012bd20f30bab66fcaec49fa",
"text": "Visual-inertial navigation systems (VINS) have prevailed in various applications, in part because of the complementary sensing capabilities and decreasing costs as well as sizes. While many of the current VINS algorithms undergo inconsistent estimation, in this paper we introduce a new extended Kalman filter (EKF)-based approach towards consistent estimates. To this end, we impose both state-transition and obervability constraints in computing EKF Jacobians so that the resulting linearized system can best approximate the underlying nonlinear system. Specifically, we enforce the propagation Jacobian to obey the semigroup property, thus being an appropriate state-transition matrix. This is achieved by parametrizing the orientation error state in the global, instead of local, frame of reference, and then evaluating the Jacobian at the propagated, instead of the updated, state estimates. Moreover, the EKF linearized system ensures correct observability by projecting the most-accurate measurement Jacobian onto the observable subspace so that no spurious information is gained. The proposed algorithm is validated by both Monte-Carlo simulation and real-world experimental tests.",
"title": ""
},
{
"docid": "b98c34a4be7f86fb9506a6b1620b5d3e",
"text": "A portable civilian GPS spoofer is implemented on a digital signal processor and used to characterize spoofing effects and develop defenses against civilian spoofing. This work is intended to equip GNSS users and receiver manufacturers with authentication methods that are effective against unsophisticated spoofing attacks. The work also serves to refine the civilian spoofing threat assessment by demonstrating the challenges involved in mounting a spoofing attack.",
"title": ""
},
{
"docid": "1fe202e68aa2196f8e739173fa94b657",
"text": "Efficient formulations for the dynamics of continuum robots are necessary to enable accurate modeling of the robot's shape during operation. Previous work in continuum robotics has focused on low-fidelity lumped parameter models, in which actuated segments are modeled as circular arcs, or computationally intensive high-fidelity distributed parameter models, in which continuum robots are modeled as a parameterized spatial curve. In this paper, a novel dynamic modeling methodology is studied that captures curvature variations along a segment using a finite set of kinematic variables. This dynamic model is implemented using the principle of virtual power (also called Kane's method) for a continuum robot. The model is derived to account for inertial, actuation, friction, elastic, and gravitational effects. The model is inherently adaptable for including any type of external force or moment, including dissipative effects and external loading. Three case studies are simulated on a cable-driven continuum robot structure to study the dynamic properties of the numerical model. Cross validation is performed in comparison to both experimental results and finite-element analysis.",
"title": ""
},
{
"docid": "22ef70869ce47993bbdf24b18b6988f5",
"text": "Recent results suggest that it is possible to grasp a variety of singulated objects with high precision using Convolutional Neural Networks (CNNs) trained on synthetic data. This paper considers the task of bin picking, where multiple objects are randomly arranged in a heap and the objective is to sequentially grasp and transport each into a packing box. We model bin picking with a discrete-time Partially Observable Markov Decision Process that specifies states of the heap, point cloud observations, and rewards. We collect synthetic demonstrations of bin picking from an algorithmic supervisor uses full state information to optimize for the most robust collision-free grasp in a forward simulator based on pybullet to model dynamic object-object interactions and robust wrench space analysis from the Dexterity Network (Dex-Net) to model quasi-static contact between the gripper and object. We learn a policy by fine-tuning a Grasp Quality CNN on Dex-Net 2.1 to classify the supervisor’s actions from a dataset of 10,000 rollouts of the supervisor in the simulator with noise injection. In 2,192 physical trials of bin picking with an ABB YuMi on a dataset of 50 novel objects, we find that the resulting policies can achieve 94% success rate and 96% average precision (very few false positives) on heaps of 5-10 objects and can clear heaps of 10 objects in under three minutes. Datasets, experiments, and supplemental material are available at http://berkeleyautomation.github.io/dex-net.",
"title": ""
},
{
"docid": "3c667426c8dcea8e7813e9eef23a1e15",
"text": "Radio spectrum has become a precious resource, and it has long been the dream of wireless communication engineers to maximize the utilization of the radio spectrum. Dynamic Spectrum Access (DSA) and Cognitive Radio (CR) have been considered promising to enhance the efficiency and utilization of the spectrum. In current overlay cognitive radio, spectrum sensing is first performed to detect the spectrum holes for the secondary user to harness. However, in a more sophisticated cognitive radio, the secondary user needs to detect more than just the existence of primary users and spectrum holes. For example, in a hybrid overlay/underlay cognitive radio, the secondary use needs to detect the transmission power and localization of the primary users as well. In this paper, we combine the spectrum sensing and primary user power/localization detection together, and propose to jointly detect not only the existence of primary users but the power and localization of them via compressed sensing. Simulation results including the miss detection probability (MDP), false alarm probability (FAP) and reconstruction probability (RP) confirm the effectiveness and robustness of the proposed method.",
"title": ""
},
{
"docid": "03f98b18392bd178ea68ce19b13589fa",
"text": "Neural network techniques are widely used in network embedding, boosting the result of node classification, link prediction, visualization and other tasks in both aspects of efficiency and quality. All the state of art algorithms put effort on the neighborhood information and try to make full use of it. However, it is hard to recognize core periphery structures simply based on neighborhood. In this paper, we first discuss the influence brought by random-walk based sampling strategies to the embedding results. Theoretical and experimental evidences show that random-walk based sampling strategies fail to fully capture structural equivalence. We present a new method, SNS, that performs network embeddings using structural information (namely graphlets) to enhance its quality. SNS effectively utilizes both neighbor information and local-subgraphs similarity to learn node embeddings. This is the first framework that combines these two aspects as far as we know, positively merging two important areas in graph mining and machine learning. Moreover, we investigate what kinds of local-subgraph features matter the most on the node classification task, which enables us to further improve the embedding quality. Experiments show that our algorithm outperforms other unsupervised and semi-supervised neural network embedding algorithms on several real-world datasets.",
"title": ""
},
{
"docid": "7fa1ebea0989f7a6b8c0396bce54a54d",
"text": "Linear Discriminant Analysis (LDA) is a very common technique for dimensionality reduction problems as a preprocessing step for machine learning and pattern classification applications. At the same time, it is usually used as a black box, but (sometimes) not well understood. The aim of this paper is to build a solid intuition for what is LDA, and how LDA works, thus enabling readers of all levels be able to get a better understanding of the LDA and to know how to apply this technique in different applications. The paper first gave the basic definitions and steps of how LDA technique works supported with visual explanations of these steps. Moreover, the two methods of computing the LDA space, i.e. class-dependent and class-independent methods, were explained in details. Then, in a step-by-step approach, two numerical examples are demonstrated to show how the LDA space can be calculated in case of the class-dependent and class-independent methods. Furthermore, two of the most common LDA problems (i.e. Small Sample Size (SSS) and non-linearity problems) were highlighted and illustrated, and stateof-the-art solutions to these problems were investigated and explained. Finally, a number of experiments was conducted with different datasets to (1) investigate the effect of the eigenvectors that used in the LDA space on the robustness of the extracted feature for the classification accuracy, and (2) to show when the SSS problem occurs and how it can be addressed.",
"title": ""
},
{
"docid": "021243b584395d190e191e0713fe4a5c",
"text": "Convolutional neural networks (CNNs) have achieved remarkable performance in a wide range of computer vision tasks, typically at the cost of massive computational complexity. The low speed of these networks may hinder real-time applications especially when computational resources are limited. In this paper, an efficient and effective approach is proposed to accelerate the test-phase computation of CNNs based on low-rank and group sparse tensor decomposition. Specifically, for each convolutional layer, the kernel tensor is decomposed into the sum of a small number of low multilinear rank tensors. Then we replace the original kernel tensors in all layers with the approximate tensors and fine-tune the whole net with respect to the final classification task using standard backpropagation. \\\\ Comprehensive experiments on ILSVRC-12 demonstrate significant reduction in computational complexity, at the cost of negligible loss in accuracy. For the widely used VGG-16 model, our approach obtains a 6.6$\\times$ speed-up on PC and 5.91$\\times$ speed-up on mobile device of the whole network with less than 1\\% increase on top-5 error.",
"title": ""
},
{
"docid": "74e40c5cb4e980149906495da850d376",
"text": "Universal schema predicts the types of entities and relations in a knowledge base (KB) by jointly embedding the union of all available schema types—not only types from multiple structured databases (such as Freebase or Wikipedia infoboxes), but also types expressed as textual patterns from raw text. This prediction is typically modeled as a matrix completion problem, with one type per column, and either one or two entities per row (in the case of entity types or binary relation types, respectively). Factorizing this sparsely observed matrix yields a learned vector embedding for each row and each column. In this paper we explore the problem of making predictions for entities or entity-pairs unseen at training time (and hence without a pre-learned row embedding). We propose an approach having no per-row parameters at all; rather we produce a row vector on the fly using a learned aggregation function of the vectors of the observed columns for that row. We experiment with various aggregation functions, including neural network attention models. Our approach can be understood as a natural language database, in that questions about KB entities are answered by attending to textual or database evidence. In experiments predicting both relations and entity types, we demonstrate that despite having an order of magnitude fewer parameters than traditional universal schema, we can match the accuracy of the traditional model, and more importantly, we can now make predictions about unseen rows with nearly the same accuracy as rows available at training time.",
"title": ""
},
{
"docid": "ab1b9e358d10fc091e8c7eedf4674a8a",
"text": "An effective and efficient defect inspection system for TFT-LCD polarised films using adaptive thresholds and shape-based image analyses Chung-Ho Noha; Seok-Lyong Leea; Deok-Hwan Kimb; Chin-Wan Chungc; Sang-Hee Kimd a School of Industrial and Management Engineering, Hankuk University of Foreign Studies, Yonginshi, Korea b School of Electronics Engineering, Inha University, Yonghyun-dong, Incheon-shi, Korea c Division of Computer Science, KAIST, Daejeon-shi, Korea d Key Technology Research Center, Agency for Defense Development, Daejeon-shi, Korea",
"title": ""
},
{
"docid": "6cdab4de3682ef027c9daf22a05438e1",
"text": "This paper proposes a new method that combines the intensity and motion information to detect scene changes such as abrupt scene changes and gradual scene changes. Two major features are chosen as the basic dissimilarity measures, and selfand cross-validation mechanisms are employed via a static scene test. We also develop a novel intensity statistics model for detecting gradualscenechanges.Experimental resultsshowthat theproposed algorithms are effective and outperform the previous approaches.",
"title": ""
},
{
"docid": "e5bad6942b0afa06f3a87e3c9347bf13",
"text": "We present a monocular 3D reconstruction algorithm for inextensible deformable surfaces. It uses point correspondences between a single image of the deformed surface taken by a camera with known intrinsic parameters and a template. The main assumption we make is that the surface shape as seen in the template is known. Since the surface is inextensible, its deformations are isometric to the template. We exploit the distance preservation constraints to recover the 3D surface shape as seen in the image. Though the distance preservation constraints have already been investigated in the literature, we propose a new way to handle them. Spatial smoothness priors are easily incorporated, as well as temporal smoothness priors in the case of reconstruction from a video. The reconstruction can be used for 3D augmented reality purposes thanks to a fast implementation. We report results on synthetic and real data. Some of them are compared to stereo-based 3D reconstructions to demonstrate the efficiency of our method.",
"title": ""
},
{
"docid": "ae508747717b9e8e149b5f91bb454c96",
"text": "Social robots are robots that help people as capable partners rather than as tools, are believed to be of greatest use for applications in entertainment, education, and healthcare because of their potential to be perceived as trusting, helpful, reliable, and engaging. This paper explores how the robot's physical presence influences a person's perception of these characteristics. The first study reported here demonstrates the differences between a robot and an animated character in terms a person's engagement and perceptions of the robot and character. The second study shows that this difference is a result of the physical presence of the robot and that a person's reactions would be similar even if the robot is not physically collocated. Implications to the design of socially communicative and interactive robots are discussed.",
"title": ""
},
{
"docid": "e40eb32613ed3077177d61ac14e82413",
"text": "Preamble. Billions of people are using cell phone devices on the planet, essentially in poor posture. The purpose of this study is to assess the forces incrementally seen by the cervical spine as the head is tilted forward, into worsening posture. This data is also necessary for cervical spine surgeons to understand in the reconstruction of the neck.",
"title": ""
},
{
"docid": "7a8a98b91680cbc63594cd898c3052c8",
"text": "Policy-based access control is a technology that achieves separation of concerns through evaluating an externalized policy at each access attempt. While this approach has been well-established for request-response applications, it is not supported for database queries of data-driven applications, especially for attribute-based policies. In particular, search operations for such applications involve poor scalability with regard to the data set size for this approach, because they are influenced by dynamic runtime conditions. This paper proposes a scalable application-level middleware solution that performs runtime injection of the appropriate rules into the original search query, so that the result set of the search includes only items to which the subject is entitled. Our evaluation shows that our method scales far better than current state of practice approach that supports policy-based access control.",
"title": ""
},
{
"docid": "7332ba6aff8c966d76b1c8f451a02ccf",
"text": "A light-emitting diode (LED) driver compatible with fluorescent lamp (FL) ballasts is presented for a lamp-only replacement without rewiring the existing lamp fixture. Ballasts have a common function to regulate the lamp current, despite widely different circuit topologies. In this paper, magnetic and electronic ballasts are modeled as nonideal current sources and a current-sourced boost converter, which is derived from the duality, is adopted for the power conversion from ballasts. A rectifier circuit with capacitor filaments is proposed to interface the converter with the four-wire output of the ballast. A digital controller emulates the high-voltage discharge of the FL and operates adaptively with various ballasts. A prototype 20-W LED driver for retrofitting T8 36-W FL is evaluated with both magnetic and electronic ballasts. In addition to wide compatibility, accurate regulation of the LED current within 0.6% error and high driver efficiency over 89.7% are obtained.",
"title": ""
}
] | scidocsrr |
01c3e01d851d2eea8a3d24dcf1cc9afa | New prototype of hybrid 3D-biometric facial recognition system | [
{
"docid": "573f12acd3193045104c7d95bbc89f78",
"text": "Automatic Face Recognition is one of the most emphasizing dilemmas in diverse of potential relevance like in different surveillance systems, security systems, authentication or verification of individual like criminals etc. Adjoining of dynamic expression in face causes a broad range of discrepancies in recognition systems. Facial Expression not only exposes the sensation or passion of any person but can also be used to judge his/her mental views and psychosomatic aspects. This paper is based on a complete survey of face recognition conducted under varying facial expressions. In order to analyze different techniques, motion-based, model-based and muscles-based approaches have been used in order to handle the facial expression and recognition catastrophe. The analysis has been completed by evaluating various existing algorithms while comparing their results in general. It also expands the scope for other researchers for answering the question of effectively dealing with such problems.",
"title": ""
}
] | [
{
"docid": "ac29d60761976a263629a93167516fde",
"text": "Abstruct1-V power supply high-speed low-power digital circuit technology with 0.5-pm multithreshold-voltage CMOS (MTCMOS) is proposed. This technology features both lowthreshold voltage and high-threshold voltage MOSFET’s in a single LSI. The low-threshold voltage MOSFET’s enhance speed Performance at a low supply voltage of 1 V or less, while the high-threshold voltage MOSFET’s suppress the stand-by leakage current during the sleep period. This technology has brought about logic gate characteristics of a 1.7-11s propagation delay time and 0.3-pW/MHz/gate power dissipation with a standard load. In addition, an MTCMOS standard cell library has been developed so that conventional CAD tools can be used to lay out low-voltage LSI’s. To demonstrate MTCMOS’s effectiveness, a PLL LSI based on standard cells was designed as a carrying vehicle. 18-MHz operation at 1 V was achieved using a 0.5-pm CMOS process.",
"title": ""
},
{
"docid": "d63591706309cf602404c34de547184f",
"text": "This paper presents an overview of the inaugural Amazon Picking Challenge along with a summary of a survey conducted among the 26 participating teams. The challenge goal was to design an autonomous robot to pick items from a warehouse shelf. This task is currently performed by human workers, and there is hope that robots can someday help increase efficiency and throughput while lowering cost. We report on a 28-question survey posed to the teams to learn about each team’s background, mechanism design, perception apparatus, planning, and control approach. We identify trends in this data, correlate it with each team’s success in the competition, and discuss observations and lessons learned based on survey results and the authors’ personal experiences during the challenge.Note to Practitioners—Perception, motion planning, grasping, and robotic system engineering have reached a level of maturity that makes it possible to explore automating simple warehouse tasks in semistructured environments that involve high-mix, low-volume picking applications. This survey summarizes lessons learned from the first Amazon Picking Challenge, highlighting mechanism design, perception, and motion planning algorithms, as well as software engineering practices that were most successful in solving a simplified order fulfillment task. While the choice of mechanism mostly affects execution speed, the competition demonstrated the systems challenges of robotics and illustrated the importance of combining reactive control with deliberative planning.",
"title": ""
},
{
"docid": "3ea6de664a7ac43a1602b03b46790f0a",
"text": "After reviewing the design of a class of lowpass recursive digital filters having integer multiplier and linear phase characteristics, the possibilities for extending the class to include high pass, bandpass, and bandstop (‘notch’) filters are described. Experience with a PDP 11 computer has shown that these filters may be programmed simply using machine code, and that online operation at sampling rates up to about 8 kHz is possible. The practical application of such filters is illustrated by using a notch desgin to remove mains-frequency interference from an e.c.g. waveform. Après avoir passé en revue la conception d'un type de filtres digitaux récurrents passe-bas à multiplicateurs incorporés et à caractéristiques de phase linéaires, cet article décrit les possibilités d'extension de ce type aux filtres, passe-haut, passe-bande et à élimination de bande. Une expérience menée avec un ordinateur PDP 11 a indiqué que ces filtres peuvent être programmés de manière simple avec un code machine, et qu'il est possible d'effectuer des opérations en ligne avec des taux d'échantillonnage jusqu'à environ 8 kHz. L'application pratique de tels filtres est illustrée par un exemple dans lequel un filtre à élimination de bande est utilisé pour éliminer les interférences due à la fréquence du courant d'alimentation dans un tracé d'e.c.g. Nach einer Untersuchung der Konstruktion einer Gruppe von Rekursivdigitalfiltern mit niedrigem Durchlässigkeitsbereich und mit ganzzahligen Multipliziereinrichtungen und Linearphaseneigenschaften werden die Möglichkeiten beschrieben, die Gruppe so zu erweitern, daß sie Hochfilter, Bandpaßfilter und Bandstopfilter (“Kerbfilter”) einschließt. Erfahrungen mit einem PDP 11-Computer haben gezeigt, daß diese Filter auf einfache Weise unter Verwendung von Maschinenkode programmiert werden können und daß On-Line-Betrieb bei Entnahmegeschwindigkeiten von bis zu 8 kHz möglich ist. Die praktische Anwendung solcher Filter wird durch Verwendung einer Kerbkonstruktion zur Ausscheidung von Netzfrequenzstörungen von einer ECG-Wellenform illustriert.",
"title": ""
},
{
"docid": "5d21df36697616719bcc3e0ee22a08bd",
"text": "In spite of the significant recent progress, the incorporation of haptics into virtual environments is still in its infancy due to limitations in the hardware, the cost of development, as well as the level of reality they provide. Nonetheless, we believe that the field will one day be one of the groundbreaking media of the future. It has its current holdups but the promise of the future is worth the wait. The technology is becoming cheaper and applications are becoming more forthcoming and apparent. If we can survive this infancy, it will promise to be an amazing revolution in the way we interact with computers and the virtual world. The researchers organize the rapidly increasing multidisciplinary research of haptics into four subareas: human haptics, machine haptics, computer haptics, and multimedia haptics",
"title": ""
},
{
"docid": "4c12d10fd9c2a12e56b56f62f99333f3",
"text": "The science of large-scale brain networks offers a powerful paradigm for investigating cognitive and affective dysfunction in psychiatric and neurological disorders. This review examines recent conceptual and methodological developments which are contributing to a paradigm shift in the study of psychopathology. I summarize methods for characterizing aberrant brain networks and demonstrate how network analysis provides novel insights into dysfunctional brain architecture. Deficits in access, engagement and disengagement of large-scale neurocognitive networks are shown to play a prominent role in several disorders including schizophrenia, depression, anxiety, dementia and autism. Synthesizing recent research, I propose a triple network model of aberrant saliency mapping and cognitive dysfunction in psychopathology, emphasizing the surprising parallels that are beginning to emerge across psychiatric and neurological disorders.",
"title": ""
},
{
"docid": "705b2a837b51ac5e354e1ec0df64a52a",
"text": "BACKGROUND\nGeneralized anxiety disorder (GAD) is a psychiatric disorder characterized by a constant and unspecific anxiety that interferes with daily-life activities. Its high prevalence in general population and the severe limitations it causes, point out the necessity to find new efficient strategies to treat it. Together with the cognitive-behavioural treatments, relaxation represents a useful approach for the treatment of GAD, but it has the limitation that it is hard to be learned. To overcome this limitation we propose the use of virtual reality (VR) to facilitate the relaxation process by visually presenting key relaxing images to the subjects. The visual presentation of a virtual calm scenario can facilitate patients' practice and mastery of relaxation, making the experience more vivid and real than the one that most subjects can create using their own imagination and memory, and triggering a broad empowerment process within the experience induced by a high sense of presence. According to these premises, the aim of the present study is to investigate the advantages of using a VR-based relaxation protocol in reducing anxiety in patients affected by GAD.\n\n\nMETHODS/DESIGN\nThe trial is based on a randomized controlled study, including three groups of 25 patients each (for a total of 75 patients): (1) the VR group, (2) the non-VR group and (3) the waiting list (WL) group. Patients in the VR group will be taught to relax using a VR relaxing environment and audio-visual mobile narratives; patients in the non-VR group will be taught to relax using the same relaxing narratives proposed to the VR group, but without the VR support, and patients in the WL group will not receive any kind of relaxation training. Psychometric and psychophysiological outcomes will serve as quantitative dependent variables, while subjective reports of participants will be used as qualitative dependent variables.\n\n\nCONCLUSION\nWe argue that the use of VR for relaxation represents a promising approach in the treatment of GAD since it enhances the quality of the relaxing experience through the elicitation of the sense of presence. This controlled trial will be able to evaluate the effects of the use of VR in relaxation while preserving the benefits of randomization to reduce bias.\n\n\nTRIAL REGISTRATION\nNCT00602212 (ClinicalTrials.gov).",
"title": ""
},
{
"docid": "2549177f9367d5641a7fc4dfcfaf5c0a",
"text": "Educational data mining is an emerging trend, concerned with developing methods for exploring the huge data that come from the educational system. This data is used to derive the knowledge which is useful in decision making. EDM methods are useful to measure the performance of students, assessment of students and study students’ behavior etc. In recent years, Educational data mining has proven to be more successful at many of the educational statistics problems due to enormous computing power and data mining algorithms. This paper surveys the history and applications of data mining techniques in the educational field. The objective is to introduce data mining to traditional educational system, web-based educational system, intelligent tutoring system, and e-learning. This paper describes how to apply the main data mining methods such as prediction, classification, relationship mining, clustering, and",
"title": ""
},
{
"docid": "9b7ca6e8b7bf87ef61e70ab4c720ec40",
"text": "The support vector machine (SVM) is a widely used tool in classification problems. The SVM trains a classifier by solving an optimization problem to decide which instances of the training data set are support vectors, which are the necessarily informative instances to form the SVM classifier. Since support vectors are intact tuples taken from the training data set, releasing the SVM classifier for public use or shipping the SVM classifier to clients will disclose the private content of support vectors. This violates the privacy-preserving requirements for some legal or commercial reasons. The problem is that the classifier learned by the SVM inherently violates the privacy. This privacy violation problem will restrict the applicability of the SVM. To the best of our knowledge, there has not been work extending the notion of privacy preservation to tackle this inherent privacy violation problem of the SVM classifier. In this paper, we exploit this privacy violation problem, and propose an approach to postprocess the SVM classifier to transform it to a privacy-preserving classifier which does not disclose the private content of support vectors. The postprocessed SVM classifier without exposing the private content of training data is called Privacy-Preserving SVM Classifier (abbreviated as PPSVC). The PPSVC is designed for the commonly used Gaussian kernel function. It precisely approximates the decision function of the Gaussian kernel SVM classifier without exposing the sensitive attribute values possessed by support vectors. By applying the PPSVC, the SVM classifier is able to be publicly released while preserving privacy. We prove that the PPSVC is robust against adversarial attacks. The experiments on real data sets show that the classification accuracy of the PPSVC is comparable to the original SVM classifier.",
"title": ""
},
{
"docid": "e6c32d3fd1bdbfb2cc8742c9b670ce97",
"text": "A framework for skill acquisition is proposed that includes two major stages in the development of a cognitive skill: a declarative stage in which facts about the skill domain are interpreted and a procedural stage in which the domain knowledge is directly embodied in procedures for performing the skill. This general framework has been instantiated in the ACT system in which facts are encoded in a propositional network and procedures are encoded as productions. Knowledge compilation is the process by which the skill transits from the declarative stage to the procedural stage. It consists of the subprocesses of composition, which collapses sequences of productions into single productions, and proceduralization, which embeds factual knowledge into productions. Once proceduralized, further learning processes operate on the skill to make the productions more selective in their range of applications. These processes include generalization, discrimination, and strengthening of productions. Comparisons are made to similar concepts from past learning theories. How these learning mechanisms apply to produce the power law speedup in processing time with practice is discussed.",
"title": ""
},
{
"docid": "641811eac0e8a078cf54130c35fd6511",
"text": "Multi-label text classification (MLTC) aims to assign multiple labels to each sample in the dataset. The labels usually have internal correlations. However, traditional methods tend to ignore the correlations between labels. In order to capture the correlations between labels, the sequence-tosequence (Seq2Seq) model views the MLTC task as a sequence generation problem, which achieves excellent performance on this task. However, the Seq2Seq model is not suitable for the MLTC task in essence. The reason is that it requires humans to predefine the order of the output labels, while some of the output labels in the MLTC task are essentially an unordered set rather than an ordered sequence. This conflicts with the strict requirement of the Seq2Seq model for the label order. In this paper, we propose a novel sequence-toset framework utilizing deep reinforcement learning, which not only captures the correlations between labels, but also reduces the dependence on the label order. Extensive experimental results show that our proposed method outperforms the competitive baselines by a large margin.",
"title": ""
},
{
"docid": "23bf81699add38814461d5ac3e6e33db",
"text": "This paper examined a steering behavior based fatigue monitoring system. The advantages of using steering behavior for detecting fatigue are that these systems measure continuously, cheaply, non-intrusively, and robustly even under extremely demanding environmental conditions. The expected fatigue induced changes in steering behavior are a pattern of slow drifting and fast corrective counter steering. Using advanced signal processing procedures for feature extraction, we computed 3 feature set in the time, frequency and state space domain (a total number of 1251 features) to capture fatigue impaired steering patterns. Each feature set was separately fed into 5 machine learning methods (e.g. Support Vector Machine, K-Nearest Neighbor). The outputs of each single classifier were combined to an ensemble classification value. Finally we combined the ensemble values of 3 feature subsets to a of meta-ensemble classification value. To validate the steering behavior analysis, driving samples are taken from a driving simulator during a sleep deprivation study (N=12). We yielded a recognition rate of 86.1% in classifying slight from strong fatigue.",
"title": ""
},
{
"docid": "f6dd10d4b400234a28b221d0527e71c0",
"text": "Existing approaches to neural machine translation condition each output word on previously generated outputs. We introduce a model that avoids this autoregressive property and produces its outputs in parallel, allowing an order of magnitude lower latency during inference. Through knowledge distillation, the use of input token fertilities as a latent variable, and policy gradient fine-tuning, we achieve this at a cost of as little as 2.0 BLEU points relative to the autoregressive Transformer network used as a teacher. We demonstrate substantial cumulative improvements associated with each of the three aspects of our training strategy, and validate our approach on IWSLT 2016 English–German and two WMT language pairs. By sampling fertilities in parallel at inference time, our non-autoregressive model achieves near-state-of-the-art performance of 29.8 BLEU on WMT 2016 English– Romanian.",
"title": ""
},
{
"docid": "6fad371eecbb734c1e54b8fb9ae218c4",
"text": "Quantitative Susceptibility Mapping (QSM) is a novel MRI based technique that relies on estimates of the magnetic field distribution in the tissue under examination. Several sophisticated data processing steps are required to extract the magnetic field distribution from raw MRI phase measurements. The objective of this review article is to provide a general overview and to discuss several underlying assumptions and limitations of the pre-processing steps that need to be applied to MRI phase data before the final field-to-source inversion can be performed. Beginning with the fundamental relation between MRI signal and tissue magnetic susceptibility this review covers the reconstruction of magnetic field maps from multi-channel phase images, background field correction, and provides an overview of state of the art QSM solution strategies.",
"title": ""
},
{
"docid": "13bd6515467934ba7855f981fd4f1efd",
"text": "The flourishing synergy arising between organized crimes and the Internet has increased the insecurity of the digital world. How hackers frame their actions? What factors encourage and energize their behavior? These are very important but highly underresearched questions. We draw upon literatures on psychology, economics, international relation and warfare to propose a framework that addresses these questions. We found that countries across the world differ in terms of regulative, normative and cognitive legitimacy to different types of web attacks. Cyber wars and crimes are also functions of the stocks of hacking skills relative to the availability of economic opportunities. An attacking unit’s selection criteria for the target network include symbolic significance and criticalness, degree of digitization of values and weakness in defense mechanisms. Managerial and policy implications are discussed and directions for future research are suggested.",
"title": ""
},
{
"docid": "f28170dcc3c4949c27ee609604c53bc2",
"text": "Debates over Cannabis sativa L. and C. indica Lam. center on their taxonomic circumscription and rank. This perennial puzzle has been compounded by the viral spread of a vernacular nomenclature, “Sativa” and “Indica,” which does not correlate with C. sativa and C. indica. Ambiguities also envelop the epithets of wild-type Cannabis: the spontanea versus ruderalis debate (i.e., vernacular “Ruderalis”), as well as another pair of Cannabis epithets, afghanica and kafirstanica. To trace the rise of vernacular nomenclature, we begin with the protologues (original descriptions, synonymies, type specimens) of C. sativa and C. indica. Biogeographical evidence (obtained from the literature and herbarium specimens) suggests 18th–19th century botanists were biased in their assignment of these taxa to field specimens. This skewed the perception of Cannabis biodiversity and distribution. The development of vernacular “Sativa,” “Indica,” and “Ruderalis” was abetted by twentieth century botanists, who ignored original protologues and harbored their own cultural biases. Predominant taxonomic models by Vavilov, Small, Schultes, de Meijer, and Hillig are compared and critiqued. Small’s model adheres closest to protologue data (with C. indica treated as a subspecies). “Sativa” and “Indica” are subpopulations of C. sativa subsp. indica; “Ruderalis” represents a protean assortment of plants, including C. sativa subsp. sativa and recent hybrids.",
"title": ""
},
{
"docid": "c0a75bf3a2d594fb87deb7b9f58a8080",
"text": "For WikiText-103 we swept over LSTM hidden sizes {1024, 2048, 4096}, no. LSTM layers {1, 2}, embedding dropout {0, 0.1, 0.2, 0.3}, use of layer norm (Ba et al., 2016b) {True,False}, and whether to share the input/output embedding parameters {True,False} totalling 96 parameters. A single-layer LSTM with 2048 hidden units with tied embedding parameters and an input dropout rate of 0.3 was selected, and we used this same model configuration for the other language corpora. We trained the models on 8 P100 Nvidia GPUs by splitting the batch size into 8 sub-batches, sending them to each GPU and summing the resulting gradients. The total batch size used was 512 and a sequence length of 100 was chosen. Gradients were clipped to a maximum norm value of 0.1. We did not pass the state of the LSTM between sequences during training, however the state is passed during evaluation.",
"title": ""
},
{
"docid": "bd9f584e7dbc715327b791e20cd20aa9",
"text": "We discuss learning a profile of user interests for recommending information sources such as Web pages or news articles. We describe the types of information available to determine whether to recommend a particular page to a particular user. This information includes the content of the page, the ratings of the user on other pages and the contents of these pages, the ratings given to that page by other users and the ratings of these other users on other pages and demographic information about users. We describe how each type of information may be used individually and then discuss an approach to combining recommendations from multiple sources. We illustrate each approach and the combined approach in the context of recommending restaurants.",
"title": ""
},
{
"docid": "ab97caed9c596430c3d76ebda55d5e6e",
"text": "A 1.5 GHz low noise amplifier for a Global Positioning System (GPS) receiver has been implemented in a 0.6 /spl mu/m CMOS process. This amplifier provides a forward gain of 22 dB with a noise figure of only 3.5 dB while drawing 30 mW from a 1.5 V supply. To the authors' knowledge, this represents the lowest noise figure reported to date for a CMOS amplifier operating above 1 GHz.",
"title": ""
},
{
"docid": "9f9719336bf6497d7c71590ac61a433b",
"text": "College and universities are increasingly using part-time, adjunct instructors on their faculties to facilitate greater fiscal flexibility. However, critics argue that the use of adjuncts is causing the quality of higher education to deteriorate. This paper addresses questions about the impact of adjuncts on student outcomes. Using a unique dataset of public four-year colleges in Ohio, we quantify how having adjunct instructors affects student persistence after the first year. Because students taking courses from adjuncts differ systematically from other students, we use an instrumental variable strategy to address concerns about biases. The findings suggest that, in general, students taking an \"adjunct-heavy\" course schedule in their first semester are adversely affected. They are less likely to persist into their second year. We reconcile these findings with previous research that shows that adjuncts may encourage greater student interest in terms of major choice and subsequent enrollments in some disciplines, most notably fields tied closely to specific professions. The authors are grateful for helpful suggestions from Ronald Ehrenberg and seminar participants at the NBER Labor Studies Meetings. The authors also thank the Ohio Board of Regents for their support during this research project. Rod Chu, Darrell Glenn, Robert Sheehan, and Andy Lechler provided invaluable access and help with the data. Amanda Starc, James Carlson, Erin Riley, and Suzan Akin provided excellent research assistance. All opinions and mistakes are our own. The authors worked equally on the project and are listed alphabetically.",
"title": ""
},
{
"docid": "115fb4dcd7d5a1240691e430cd107dce",
"text": "Human motion capture data, which are used to animate animation characters, have been widely used in many areas. To satisfy the high-precision requirement, human motion data are captured with a high frequency (120 frames/s) by a high-precision capture system. However, the high frequency and nonlinear structure make the storage, retrieval, and browsing of motion data challenging problems, which can be solved by keyframe extraction. Current keyframe extraction methods do not properly model two important characteristics of motion data, i.e., sparseness and Riemannian manifold structure. Therefore, we propose a new model called joint kernel sparse representation (SR), which is in marked contrast to all current keyframe extraction methods for motion data and can simultaneously model the sparseness and the Riemannian manifold structure. The proposed model completes the SR in a kernel-induced space with a geodesic exponential kernel, whereas the traditional SR cannot model the nonlinear structure of motion data in the Euclidean space. Meanwhile, because of several important modifications to traditional SR, our model can also exploit the relations between joints and solve two problems, i.e., the unreasonable distribution and redundancy of extracted keyframes, which current methods do not solve. Extensive experiments demonstrate the effectiveness of the proposed method.",
"title": ""
}
] | scidocsrr |
9252e7671f138a58239660a78a3fa033 | Agile Enterprise Architecture: a Case of a Cloud Technology-Enabled Government Enterprise Transformation | [
{
"docid": "de276ac8417b92ed155f5a9dcb5e680d",
"text": "With the development of parallel computing, distributed computing, grid computing, a new computing model appeared. The concept of computing comes from grid, public computing and SaaS. It is a new method that shares basic framework. The basic principles of cloud computing is to make the computing be assigned in a great number of distributed computers, rather then local computer or remoter server. The running of the enterprise’s data center is just like Internet. This makes the enterprise use the resource in the application that is needed, and access computer and storage system according to the requirement. This article introduces the background and principle of cloud computing, the character, style and actuality. This article also introduces the application field the merit of cloud computing, such as, it do not need user’s high level equipment, so it reduces the user’s cost. It provides secure and dependable data storage center, so user needn’t do the awful things such storing data and killing virus, this kind of task can be done by professionals. It can realize data share through different equipments. It analyses some questions and hidden troubles, and puts forward some solutions, and discusses the future of cloud computing. Cloud computing is a computing style that provide power referenced with IT as a service. Users can enjoy the service even he knows nothing about the technology of cloud computing and the professional knowledge in this field and the power to control it.",
"title": ""
},
{
"docid": "27214c91a4aa61da99084ba2a17a9a2b",
"text": "Emergency agencies (EA) rely on inter-agency approaches to information management during disasters. EA have shown a significant interest in the use of cloud-based social media such as Twitter and Facebook for crowd-sourcing and distribution of disaster information. While the intentions are clear, the question of what are its major challenges are not. EA have a need to recognise the challenges in the use of social media under their local circumstances. This paper analysed the recent literature, 2010 Haiti earthquake and 2010-11 Queensland flood cases and developed a crowd sourcing challenges assessment index construct specific to EA areas of interest. We argue that, this assessment index, as a part of our large conceptual framework of context aware cloud adaptation (CACA), can be useful for the facilitation of citizens, NGOs and government agencies in a strategy for use of social media for crowd sourcing, in preventing, preparing for, responding to and recovering from disasters.",
"title": ""
}
] | [
{
"docid": "dfe4e689e150fc9c8face64bd9628d1e",
"text": "We present and discuss a fully-automated collaboration system, CoCo, that allows multiple participants to video chat and receive feedback through custom video conferencing software. After a conferencing session, a virtual feedback assistant provides insights on the conversation to participants. CoCo automatically pulls audial and visual data during conversations and analyzes the extracted streams for affective features, including smiles, engagement, attention, as well as speech overlap and turn-taking. We validated CoCo with 39 participants split into 10 groups. Participants played two back-to-back team-building games, Lost at Sea and Survival on the Moon, with the system providing feedback between the two. With feedback, we found a statistically significant change in balanced participation---that is, everyone spoke for an equal amount of time. There was also statistically significant improvement in participants' self-evaluations of conversational skills awareness, including how often they let others speak, as well as of teammates' conversational skills. The entire framework is available at https://github.com/ROC-HCI/CollaborationCoach_PostFeedback.",
"title": ""
},
{
"docid": "3a7a7fa5e41a6195ca16f172b72f89a1",
"text": "To integrate unpredictable human behavior in the assessment of active and passive pedestrian safety systems, we introduce a virtual reality (VR)-based pedestrian simulation system. The device uses the Xsens Motion Capture platform and can be used without additional infrastructure. To show the systems applicability for pedestrian behavior studies, we conducted a pilot study evaluating the degree of realism such a system can achieve in a typical unregulated pedestrian crossing scenario. Six participants had to estimate vehicle speeds and distances in four scenarios with varying gaps between vehicles. First results indicate an acceptable level of realism so that the device can be used for further user studies addressing pedestrian behavior, pedestrian interaction with (automated) vehicles, risk assessment and investigation of the pre-crash phase without the risk of injuries.",
"title": ""
},
{
"docid": "f09bc6f1b4f37fc4d822ccc4cdc1497f",
"text": "It is generally believed that a metaphor tends to have a stronger emotional impact than a literal statement; however, there is no quantitative study establishing the extent to which this is true. Further, the mechanisms through which metaphors convey emotions are not well understood. We present the first data-driven study comparing the emotionality of metaphorical expressions with that of their literal counterparts. Our results indicate that metaphorical usages are, on average, significantly more emotional than literal usages. We also show that this emotional content is not simply transferred from the source domain into the target, but rather is a result of meaning composition and interaction of the two domains in the metaphor.",
"title": ""
},
{
"docid": "799f9ca9ea641c1893e4900fdc29c8d4",
"text": "This paper presents a large scale general purpose image database with human annotated ground truth. Firstly, an all-in-all labeling framework is proposed to group visual knowledge of three levels: scene level (global geometric description), object level (segmentation, sketch representation, hierarchical decomposition), and low-mid level (2.1D layered representation, object boundary attributes, curve completion, etc.). Much of this data has not appeared in previous databases. In addition, And-Or Graph is used to organize visual elements to facilitate top-down labeling. An annotation tool is developed to realize and integrate all tasks. With this tool, we’ve been able to create a database consisting of more than 636,748 annotated images and video frames. Lastly, the data is organized into 13 common subsets to serve as benchmarks for diverse evaluation endeavors.",
"title": ""
},
{
"docid": "eddd98b55171f658ddde1e03ea4c04df",
"text": "Over last fifteen years, robot technology has become popular in classrooms across our whole educational system. Both engineering and AI educators have developed ways to integrate robots into their teaching. Engineering educators are primarily concerned with engineering science (e.g., feedback control) and process (e.g., design skills). AI educators have different goals—namely, AI educators want students to learn AI concepts. Both agree that students are enthusiastic about working with robots, and in both cases, the pedagogical challenge is to develop robotics technology and provide classroom assignments that highlight key ideas in the respective field. Mobile robots are particularly intriguing because of their dual nature as both deterministic machines and unpredictable entities. This paper explores challenges for both engineering and AI educators as robot toolkits",
"title": ""
},
{
"docid": "59b12e15badee587c3de8657663315d1",
"text": "Thanks to their excellent performances on typical artificial intelligence problems, deep neural networks have drawn a lot of interest lately. However, this comes at the cost of large computational needs and high power consumption. Benefiting from high precision at acceptable hardware cost on these difficult problems is a challenge. To address it, we advocate the use of ternary neural networks (TNN) that, when properly trained, can reach results close to the state of the art using floatingpoint arithmetic. We present a highly versatile FPGA friendly architecture for TNN in which we can vary both the number of bits of the input data and the level of parallelism at synthesis time, allowing to trade throughput for hardware resources and power consumption. To demonstrate the efficiency of our proposal, we implement high-complexity convolutional neural networks on the Xilinx Virtex-7 VC709 FPGA board. While reaching a better accuracy than comparable designs, we can target either high throughput or low power. We measure a throughput up to 27 000 fps at ≈7W or up to 8.36 TMAC/s at ≈13 W.",
"title": ""
},
{
"docid": "9f37aaf96b8c56f0397b63a7b53776ec",
"text": "The Histogram of Oriented Gradient (HOG) descriptor has led to many advances in computer vision over the last decade and is still part of many state of the art approaches. We realize that the associated feature computation is piecewise differentiable and therefore many pipelines which build on HOG can be made differentiable. This lends to advanced introspection as well as opportunities for end-to-end optimization. We present our implementation of ΔHOG based on the auto-differentiation toolbox Chumpy [18] and show applications to pre-image visualization and pose estimation which extends the existing differentiable renderer OpenDR [19] pipeline. Both applications improve on the respective state-of-the-art HOG approaches.",
"title": ""
},
{
"docid": "ff75699519c0df47220624db263b483a",
"text": "We present BeThere, a proof-of-concept system designed to explore 3D input for mobile collaborative interactions. With BeThere, we explore 3D gestures and spatial input which allow remote users to perform a variety of virtual interactions in a local user's physical environment. Our system is completely self-contained and uses depth sensors to track the location of a user's fingers as well as to capture the 3D shape of objects in front of the sensor. We illustrate the unique capabilities of our system through a series of interactions that allow users to control and manipulate 3D virtual content. We also provide qualitative feedback from a preliminary user study which confirmed that users can complete a shared collaborative task using our system.",
"title": ""
},
{
"docid": "821b1e60e936b3f56031fae450f22dc8",
"text": "Conventional methods for seismic retrofitting of concrete columns include reinforcement with steel plates or steel frame braces, as well as cross-sectional increments and in-filled walls. However, these methods have some disadvantages, such as the increase in mass and the need for precise construction. Fiber-reinforced polymer (FRP) sheets for seismic strengthening of concrete columns using new light-weight composite materials, such as carbon fiber or glass fiber, have been developed, have excellent durability and performance, and are being widely applied to overcome the shortcomings of conventional seismic strengthening methods. Nonetheless, the FRP-sheet reinforcement method also has some drawbacks, such as the need for prior surface treatment, problems at joints, and relatively expensive material costs. In the current research, the structural and material properties associated with a new method for seismic strengthening of concrete columns using FRP were investigated. The new technique is a sprayed FRP system, achieved by mixing chopped glass and carbon fibers with epoxy and vinyl ester resin in the open air and randomly spraying the resulting mixture onto the uneven surface of the concrete columns. This paper reports on the seismic resistance of reinforced concrete columns controlled by shear strengthening using the sprayed FRP system. Five shear column specimens were designed, and then strengthened with sprayed FRP by using different combinations of short carbon or glass fibers and epoxy or vinyl ester resins. There was also a non-strengthened control specimen. Cyclic loading tests were carried out, and the ultimate load carrying capacity and deformation were investigated, as well as hysteresis in the lateral load-drift relationship. The results showed that shear strengths and deformation capacities of shear columns strengthened using sprayed FRP improved markedly, compared with those of the control column. The spraying FRP technique developed in this study can be practically and effectively used for the seismic strengthening of existing concrete columns.",
"title": ""
},
{
"docid": "4b04a4892ef7c614b3bf270f308e6984",
"text": "One reason for the universal appeal of music lies in the emotional rewards that music offers to its listeners. But what makes these rewards so special? The authors addressed this question by progressively characterizing music-induced emotions in 4 interrelated studies. Studies 1 and 2 (n=354) were conducted to compile a list of music-relevant emotion terms and to study the frequency of both felt and perceived emotions across 5 groups of listeners with distinct music preferences. Emotional responses varied greatly according to musical genre and type of response (felt vs. perceived). Study 3 (n=801)--a field study carried out during a music festival--examined the structure of music-induced emotions via confirmatory factor analysis of emotion ratings, resulting in a 9-factorial model of music-induced emotions. Study 4 (n=238) replicated this model and found that it accounted for music-elicited emotions better than the basic emotion and dimensional emotion models. A domain-specific device to measure musically induced emotions is introduced--the Geneva Emotional Music Scale.",
"title": ""
},
{
"docid": "a5c67537b72e3cd184b43c0a0e7c96b2",
"text": "These notes give a short introduction to Gaussian mixture models (GMMs) and the Expectation-Maximization (EM) algorithm, first for the specific case of GMMs, and then more generally. These notes assume you’re familiar with basic probability and basic calculus. If you’re interested in the full derivation (Section 3), some familiarity with entropy and KL divergence is useful but not strictly required. The notation here is borrowed from Introduction to Probability by Bertsekas & Tsitsiklis: random variables are represented with capital letters, values they take are represented with lowercase letters, pX represents a probability distribution for random variable X, and pX(x) represents the probability of value x (according to pX). We’ll also use the shorthand notation X 1 to represent the sequence X1, X2, . . . , Xn, and similarly x n 1 to represent x1, x2, . . . , xn. These notes follow a development somewhat similar to the one in Pattern Recognition and Machine Learning by Bishop.",
"title": ""
},
{
"docid": "ddc0b599dc2cb3672e9a2a1f5a9a9163",
"text": "Head and modifier detection is an important problem for applications that handle short texts such as search queries, ads keywords, titles, captions, etc. In many cases, short texts such as search queries do not follow grammar rules, and existing approaches for head and modifier detection are coarse-grained, domain specific, and/or require labeling of large amounts of training data. In this paper, we introduce a semantic approach for head and modifier detection. We first obtain a large number of instance level head-modifier pairs from search log. Then, we develop a conceptualization mechanism to generalize the instance level pairs to concept level. Finally, we derive weighted concept patterns that are concise, accurate, and have strong generalization power in head and modifier detection. Furthermore, we identify a subset of modifiers that we call constraints. Constraints are usually specific and not negligible as far as the intent of the short text is concerned, while non-constraint modifiers are more subjective. The mechanism we developed has been used in production for search relevance and ads matching. We use extensive experiment results to demonstrate the effectiveness of our approach.",
"title": ""
},
{
"docid": "a81b999f495637ba3e12799d727d872d",
"text": "The inversion of remote sensing images is crucial for soil moisture mapping in precision agriculture. However, the large size of remote sensing images complicates their management. Therefore, this study proposes a remote sensing observation sharing method based on cloud computing (ROSCC) to enhance remote sensing observation storage, processing, and service capability. The ROSCC framework consists of a cloud computing-enabled sensor observation service, web processing service tier, and a distributed database tier. Using MongoDB as the distributed database and Apache Hadoop as the cloud computing service, this study achieves a high-throughput method for remote sensing observation storage and distribution. The map, reduced algorithms and the table structure design in distributed databases are then explained. Along the Yangtze River, the longest river in China, Hubei Province was selected as the study area to test the proposed framework. Using GF-1 as a data source, an experiment was performed to enhance earth observation data (EOD) storage and achieve large-scale soil moisture mapping. The proposed ROSCC can be applied to enhance EOD sharing in cloud computing context, so as to achieve soil moisture mapping via the modified perpendicular drought index in an efficient way to better serve precision agriculture.",
"title": ""
},
{
"docid": "722bb59033ea5722b762ccac5d032235",
"text": "In this paper, we provide a real nursing data set for mobile activity recognition that can be used for supervised machine learning, and big data combined the patient medical records and sensors attempted for 2 years, and also propose a method for recognizing activities for a whole day utilizing prior knowledge about the activity segments in a day. Furthermore, we demonstrate data mining by applying our method to the bigger data with additional hospital data. In the proposed method, we 1) convert a set of segment timestamps into a prior probability of the activity segment by exploiting the concept of importance sampling, 2) obtain the likelihood of traditional recognition methods for each local time window within the segment range, and, 3) apply Bayesian estimation by marginalizing the conditional probability of estimating the activities for the segment samples. By evaluating with the dataset, the proposed method outperformed the traditional method without using the prior knowledge by 25.81% at maximum by balanced classification rate. Moreover, the proposed method significantly reduces duration errors of activity segments from 324.2 seconds of the traditional method to 74.6 seconds at maximum. We also demonstrate the data mining by applying our method to bigger data in a hospital.",
"title": ""
},
{
"docid": "37f5fcde86e30359e678ff3f957e3c7e",
"text": "A Phase I dose-proportionality study is an essential tool to understand drug pharmacokinetic dose-response relationship in early clinical development. There are a number of different approaches to the assessment of dose proportionality. The confidence interval (CI) criteria approach, a staitistically sound and clinically relevant approach, has been proposed to detect dose-proportionality (Smith, et al. 2000), by which the proportionality is declared if the 90% CI for slope is completely contained within the pre-determined critical interval. This method, enhancing the information from a clinical dose-proportionality study, has gradually drawn attention. However, exact power calculation of dose proportinality studies based on CI criteria poses difficulity for practioners since the methodology was essentailly from two one-sided tests (TOST) procedure for the slope, which should be unit under proportionality. It requires sophisticated numerical integration, and it is not available in statistical software packages. This paper presents a SAS Macro to compute the empirical power for the CI-based dose proportinality studies. The resulting sample sizes and corresponding empirical powers suggest that this approach is powerful in detecting dose-proportionality under commonly used sample sizes for phase I studies.",
"title": ""
},
{
"docid": "e9b5f3d734b364ebd9ed144719a6ac6b",
"text": "This work presents a literature review of multiple classifier systems based on the dynamic selection of classifiers. First, it briefly reviews some basic concepts and definitions related to such a classification approach and then it presents the state of the art organized according to a proposed taxonomy. In addition, a two-step analysis is applied to the results of the main methods reported in the literature, considering different classification problems. The first step is based on statistical analyses of the significance of these results. The idea is to figure out the problems for which a significant contribution can be observed in terms of classification performance by using a dynamic selection approach. The second step, based on data complexity measures, is used to investigate whether or not a relation exists between the possible performance contribution and the complexity of the classification problem. From this comprehensive study, we observed that, for some classification problems, the performance contribution of the dynamic selection approach is statistically significant when compared to that of a single-based classifier. In addition, we found evidence of a relation between the observed performance contribution and the complexity of the classification problem. These observations allow us to suggest, from the classification problem complexity, that further work should be done to predict whether or not to use a dynamic selection approach. & 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "c7e584bca061335c8cd085511f4abb3b",
"text": "The application of boosting technique to regression problems has received relatively little attention in contrast to research aimed at classification problems. This letter describes a new boosting algorithm, AdaBoost.RT, for regression problems. Its idea is in filtering out the examples with the relative estimation error that is higher than the preset threshold value, and then following the AdaBoost procedure. Thus, it requires selecting the suboptimal value of the error threshold to demarcate examples as poorly or well predicted. Some experimental results using the M5 model tree as a weak learning machine for several benchmark data sets are reported. The results are compared to other boosting methods, bagging, artificial neural networks, and a single M5 model tree. The preliminary empirical comparisons show higher performance of AdaBoost.RT for most of the considered data sets.",
"title": ""
},
{
"docid": "c252f063dfaf75619855a51c975169d1",
"text": "Bitcoin owes its success to the fact that transactions are transparently recorded in the blockchain, a global public ledger that removes the need for trusted parties. Unfortunately, recording every transaction in the blockchain causes privacy, latency, and scalability issues. Building on recent proposals for \"micropayment channels\" --- two party associations that use the ledger only for dispute resolution --- we introduce techniques for constructing anonymous payment channels. Our proposals allow for secure, instantaneous and private payments that substantially reduce the storage burden on the payment network. Specifically, we introduce three channel proposals, including a technique that allows payments via untrusted intermediaries. We build a concrete implementation of our scheme and show that it can be deployed via a soft fork to existing anonymous currencies such as ZCash.",
"title": ""
},
{
"docid": "2702017be1794708ccec26c569a0a5ad",
"text": "Although a common pain response, whether swearing alters individuals' experience of pain has not been investigated. This study investigated whether swearing affects cold-pressor pain tolerance (the ability to withstand immersing the hand in icy water), pain perception and heart rate. In a repeated measures design, pain outcomes were assessed in participants asked to repeat a swear word versus a neutral word. In addition, sex differences and the roles of pain catastrophising, fear of pain and trait anxiety were explored. Swearing increased pain tolerance, increased heart rate and decreased perceived pain compared with not swearing. However, swearing did not increase pain tolerance in males with a tendency to catastrophise. The observed pain-lessening (hypoalgesic) effect may occur because swearing induces a fight-or-flight response and nullifies the link between fear of pain and pain perception.",
"title": ""
},
{
"docid": "69bfc5edab903692887371464d6eecb0",
"text": "In recent days text summarization had tremendous growth in all languages, especially in India regional languages. Yet the performance of such system needs improvement. This paper proposes an extractive Malayalam summarizer which reduces redundancy in summarized content and meaning of sentences are considered for summary generation. A semantic graph is created for entire document and summary generated by reducing graph using minimal spanning tree algorithm.",
"title": ""
}
] | scidocsrr |
b4ba615cd6e6c6f74f54b6d0cb2a5a50 | A wearable device for physical and emotional health monitoring | [
{
"docid": "6fc013e132bdd347f355c0b187cb5ca9",
"text": "Current wireless technologies, such as wireless body area networks and wireless personal area networks, provide promising applications in medical monitoring systems to measure specified physiological data and also provide location-based information, if required. With the increasing sophistication of wearable and implantable medical devices and their integration with wireless sensors, an ever-expanding range of therapeutic and diagnostic applications is being pursued by research and commercial organizations. This paper aims to provide a comprehensive review of recent developments in wireless sensor technology for monitoring behaviour related to human physiological responses. It presents background information on the use of wireless technology and sensors to develop a wireless physiological measurement system. A generic miniature platform and other available technologies for wireless sensors have been studied in terms of hardware and software structural requirements for a low-cost, low-power, non-invasive and unobtrusive system.",
"title": ""
}
] | [
{
"docid": "57eb8d5adbf8374710a3c40074fb38f8",
"text": "Information security and privacy in the healthcare sector is an issue of growing importance. The adoption of digital patient records, increased regulation, provider consolidation and the increasing need for information exchange between patients, providers and payers, all point towards the need for better information security. We critically survey the literature on information security and privacy in healthcare, published in information systems journals as well as many other related disciplines including health informatics, public health, law, medicine, the trade press and industry reports. In this paper, we provide a holistic view of the recent research and suggest new areas of interest to the information systems community.",
"title": ""
},
{
"docid": "ef7b6c2b0254535e9dbf85a4af596080",
"text": "African swine fever virus (ASFV) is a highly virulent swine pathogen that has spread across Eastern Europe since 2007 and for which there is no effective vaccine or treatment available. The dynamics of shedding and excretion is not well known for this currently circulating ASFV strain. Therefore, susceptible pigs were exposed to pigs intramuscularly infected with the Georgia 2007/1 ASFV strain to measure those dynamics through within- and between-pen transmission scenarios. Blood, oral, nasal and rectal fluid samples were tested for the presence of ASFV by virus titration (VT) and quantitative real-time polymerase chain reaction (qPCR). Serum was tested for the presence of ASFV-specific antibodies. Both intramuscular inoculation and contact transmission resulted in development of acute disease in all pigs although the experiments indicated that the pathogenesis of the disease might be different, depending on the route of infection. Infectious ASFV was first isolated in blood among the inoculated pigs by day 3, and then chronologically among the direct and indirect contact pigs, by day 10 and 13, respectively. Close to the onset of clinical signs, higher ASFV titres were found in blood compared with nasal and rectal fluid samples among all pigs. No infectious ASFV was isolated in oral fluid samples although ASFV genome copies were detected. Only one animal developed antibodies starting after 12 days post-inoculation. The results provide quantitative data on shedding and excretion of the Georgia 2007/1 ASFV strain among domestic pigs and suggest a limited potential of this isolate to cause persistent infection.",
"title": ""
},
{
"docid": "28823f624c037a8b54e9906c3b443f38",
"text": "Aging is associated with progressive losses in function across multiple systems, including sensation, cognition, memory, motor control, and affect. The traditional view has been that functional decline in aging is unavoidable because it is a direct consequence of brain machinery wearing down over time. In recent years, an alternative perspective has emerged, which elaborates on this traditional view of age-related functional decline. This new viewpoint--based upon decades of research in neuroscience, experimental psychology, and other related fields--argues that as people age, brain plasticity processes with negative consequences begin to dominate brain functioning. Four core factors--reduced schedules of brain activity, noisy processing, weakened neuromodulatory control, and negative learning--interact to create a self-reinforcing downward spiral of degraded brain function in older adults. This downward spiral might begin from reduced brain activity due to behavioral change, from a loss in brain function driven by aging brain machinery, or more likely from both. In aggregate, these interrelated factors promote plastic changes in the brain that result in age-related functional decline. This new viewpoint on the root causes of functional decline immediately suggests a remedial approach. Studies of adult brain plasticity have shown that substantial improvement in function and/or recovery from losses in sensation, cognition, memory, motor control, and affect should be possible, using appropriately designed behavioral training paradigms. Driving brain plasticity with positive outcomes requires engaging older adults in demanding sensory, cognitive, and motor activities on an intensive basis, in a behavioral context designed to re-engage and strengthen the neuromodulatory systems that control learning in adults, with the goal of increasing the fidelity, reliability, and power of cortical representations. Such a training program would serve a substantial unmet need in aging adults. Current treatments directed at age-related functional losses are limited in important ways. Pharmacological therapies can target only a limited number of the many changes believed to underlie functional decline. Behavioral approaches focus on teaching specific strategies to aid higher order cognitive functions, and do not usually aspire to fundamentally change brain function. A brain-plasticity-based training program would potentially be applicable to all aging adults with the promise of improving their operational capabilities. We have constructed such a brain-plasticity-based training program and conducted an initial randomized controlled pilot study to evaluate the feasibility of its use by older adults. A main objective of this initial study was to estimate the effect size on standardized neuropsychological measures of memory. We found that older adults could learn the training program quickly, and could use it entirely unsupervised for the majority of the time required. Pre- and posttesting documented a significant improvement in memory within the training group (effect size 0.41, p<0.0005), with no significant within-group changes in a time-matched computer using active control group, or in a no-contact control group. Thus, a brain-plasticity-based intervention targeting normal age-related cognitive decline may potentially offer benefit to a broad population of older adults.",
"title": ""
},
{
"docid": "503ccd79172e5b8b3cc3a26cf0d1b485",
"text": "The field-of-view of standard cameras is very small, which is one of the main reasons that contextual information is not as useful as it should be for object detection. To overcome this limitation, we advocate the use of 360◦ full-view panoramas in scene understanding, and propose a whole-room context model in 3D. For an input panorama, our method outputs 3D bounding boxes of the room and all major objects inside, together with their semantic categories. Our method generates 3D hypotheses based on contextual constraints and ranks the hypotheses holistically, combining both bottom-up and top-down context information. To train our model, we construct an annotated panorama dataset and reconstruct the 3D model from single-view using manual annotation. Experiments show that solely based on 3D context without any image-based object detector, we can achieve a comparable performance with the state-of-the-art object detector. This demonstrates that when the FOV is large, context is as powerful as object appearance. All data and source code are available online.",
"title": ""
},
{
"docid": "9f1d881193369f1b7417d71a9a62bc19",
"text": "Neurofeedback (NFB) is a potential alternative treatment for children with ADHD that aims to optimize brain activity. Whereas most studies into NFB have investigated behavioral effects, less attention has been paid to the effects on neurocognitive functioning. The present randomized controlled trial (RCT) compared neurocognitive effects of NFB to (1) optimally titrated methylphenidate (MPH) and (2) a semi-active control intervention, physical activity (PA), to control for non-specific effects. Using a multicentre three-way parallel group RCT design, children with ADHD, aged 7–13, were randomly allocated to NFB (n = 39), MPH (n = 36) or PA (n = 37) over a period of 10–12 weeks. NFB comprised theta/beta training at CZ. The PA intervention was matched in frequency and duration to NFB. MPH was titrated using a double-blind placebo controlled procedure to determine the optimal dose. Neurocognitive functioning was assessed using parameters derived from the auditory oddball-, stop-signal- and visual spatial working memory task. Data collection took place between September 2010 and March 2014. Intention-to-treat analyses showed improved attention for MPH compared to NFB and PA, as reflected by decreased response speed during the oddball task [η p 2 = 0.21, p < 0.001], as well as improved inhibition, impulsivity and attention, as reflected by faster stop signal reaction times, lower commission and omission error rates during the stop-signal task (range η p 2 = 0.09–0.18, p values <0.008). Working memory improved over time, irrespective of received treatment (η p 2 = 0.17, p < 0.001). Overall, stimulant medication showed superior effects over NFB to improve neurocognitive functioning. Hence, the findings do not support theta/beta training applied as a stand-alone treatment in children with ADHD.",
"title": ""
},
{
"docid": "faec1a6b42cfdd303309c69c4185c9fe",
"text": "The currency which is imitated with illegal sanction of state and government is counterfeit currency. Every country incorporates a number of security features for its currency security. Currency counterfeiting is always been a challenging term for financial system of any country. The problem of counterfeiting majorly affects the economical as well as financial growth of a country. In view of the problem various studies about counterfeit detection has been conducted using various techniques and variety of tools. This paper focuses on the researches and studies that have been conducted by various researchers. The paper highlighted the methodologies used and the particular characteristics features considered for counterfeit money detection.",
"title": ""
},
{
"docid": "b907741ee0918dcbc2c2e42d106e35a4",
"text": "This paper investigates decoding of low-density parity-check (LDPC) codes over the binary erasure channel (BEC). We study the iterative and maximum-likelihood (ML) decoding of LDPC codes on this channel. We derive bounds on the ML decoding of LDPC codes on the BEC. We then present an improved decoding algorithm. The proposed algorithm has almost the same complexity as the standard iterative decoding. However, it has better performance. Simulations show that we can decrease the error rate by several orders of magnitude using the proposed algorithm. We also provide some graph-theoretic properties of different decoding algorithms of LDPC codes over the BEC which we think are useful to better understand the LDPC decoding methods, in particular, for finite-length codes.",
"title": ""
},
{
"docid": "0c8947cbaa2226a024bf3c93541dcae1",
"text": "As storage systems grow in size and complexity, they are increasingly confronted with concurrent disk failures together with multiple unrecoverable sector errors. To ensure high data reliability and availability, erasure codes with high fault tolerance are required. In this article, we present a new family of erasure codes with high fault tolerance, named GRID codes. They are called such because they are a family of strip-based codes whose strips are arranged into multi-dimensional grids. In the construction of GRID codes, we first introduce a concept of matched codes and then discuss how to use matched codes to construct GRID codes. In addition, we propose an iterative reconstruction algorithm for GRID codes. We also discuss some important features of GRID codes. Finally, we compare GRID codes with several categories of existing codes. Our comparisons show that for large-scale storage systems, our GRID codes have attractive advantages over many existing erasure codes: (a) They are completely XOR-based and have very regular structures, ensuring easy implementation; (b) they can provide up to 15 and even higher fault tolerance; and (c) their storage efficiency can reach up to 80% and even higher. All the advantages make GRID codes more suitable for large-scale storage systems.",
"title": ""
},
{
"docid": "e3c41b4fc2bcb71872d1d18339e1498c",
"text": "Visual Question Answering (VQA) has received a lot of attention over the past couple of years. A number of deep learning models have been proposed for this task. However, it has been shown [1–4] that these models are heavily driven by superficial correlations in the training data and lack compositionality – the ability to answer questions about unseen compositions of seen concepts. This compositionality is desirable and central to intelligence. In this paper, we propose a new setting for Visual Question Answering where the test question-answer pairs are compositionally novel compared to training question-answer pairs. To facilitate developing models under this setting, we present a new compositional split of the VQA v1.0 [5] dataset, which we call Compositional VQA (C-VQA). We analyze the distribution of questions and answers in the C-VQA splits. Finally, we evaluate several existing VQA models under this new setting and show that the performances of these models degrade by a significant amount compared to the original VQA setting.",
"title": ""
},
{
"docid": "3f88c453eab8b2fbfffbf98fee34d086",
"text": "Face recognition become one of the most important and fastest growing area during the last several years and become the most successful application of image analysis and broadly used in security system. It has been a challenging, interesting, and fast growing area in real time applications. The propose method is tested using a benchmark ORL database that contains 400 images of 40 persons. Pre-Processing technique are applied on the ORL database to increase the recognition rate. The best recognition rate is 97.5% when tested using 9 training images and 1 testing image. Increasing image database brightness is efficient and will increase the recognition rate. Resizing images using 0.3 scale is also efficient and will increase the recognition rate. PCA is used for feature extraction and dimension reduction. Euclidean distance is used for matching process.",
"title": ""
},
{
"docid": "0a3d649baf7483245167979fbbb008d2",
"text": "Students participate more in a classroom and also report a better understanding of course concepts when steps are taken to actively engage them. The Student Engagement (SE) Survey was developed and used in this study for measuring student engagement at the class level and consisted of 14 questions adapted from the original National Survey of Student Engagement (NSSE) survey. The adapted survey examined levels of student engagement in 56 classes at an upper mid-western university in the USA. Campus-wide faculty members participated in a program for training them in innovative teaching methods including problem-based learning (PBL). Results of this study typically showed a higher engagement in higher-level classes and also those classes with fewer students. In addition, the level of engagement was typically higher in those classrooms with more PBL.",
"title": ""
},
{
"docid": "87e2d691570403ae36e0a9a87099ad71",
"text": "Audiovisual translation is one of several overlapping umbrella terms that include ‘media translation’, ‘multimedia translation’, ‘multimodal translation’ and ‘screen translation’. These different terms all set out to cover the interlingual transfer of verbal language when it is transmitted and accessed both visually and acoustically, usually, but not necessarily, through some kind of electronic device. Theatrical plays and opera, for example, are clearly audiovisual yet, until recently, audiences required no technological devices to access their translations; actors and singers simply acted and sang the translated versions. Nowadays, however, opera is frequently performed in the original language with surtitles in the target language projected on to the stage. Furthermore, electronic librettos placed on the back of each seat containing translations are now becoming widely available. However, to date most research in audiovisual translation has been dedicated to the field of screen translation, which, while being both audiovisual and multimedial in nature, is specifically understood to refer to the translation of films and other products for cinema, TV, video and DVD. After the introduction of the first talking pictures in the 1920s a solution needed to be found to allow films to circulate despite language barriers. How to translate film dialogues and make movie-going accessible to speakers of all languages was to become a major concern for both North American and European film directors. Today, of course, screens are no longer restricted to cinema theatres alone. Television screens, computer screens and a series of devices such as DVD players, video game consoles, GPS navigation devices and mobile phones are also able to send out audiovisual products to be translated into scores of languages. Hence, strictly speaking, screen translation includes translations for any electronic appliance with a screen; however, for the purposes of this chapter, the term will be used mainly to refer to translations for the most popular products, namely for cinema, TV, video and DVD, and videogames. The two most widespread modalities adopted for translating products for the screen are dubbing and subtitling.1 Dubbing is a process which uses the acoustic channel for translational purposes, while subtitling is visual and involves a written translation that is superimposed on to the",
"title": ""
},
{
"docid": "346fe809a65e28ccdf717752144843d6",
"text": "The continuous increase in quantity and depth of regulation following the financial crisis has left the financial industry in dire need of making its compliance assessment activities more effective. The field of AI & Law provides models that, despite being fit for the representation of semantics of requirements, do not share the approach favoured by the industry which relies on business vocabularies such as SBVR. This paper presents Mercury, a solution for representing the requirements and vocabulary contained in a regulatory text (or business policy) in a SME-friendly way, for the purpose of determining compliance. Mercury includes a structured language based on SBVR, with a rulebook, containing the regulative and constitutive rules, and a vocabulary, containing the actions and factors that determine a rule’s applicability and its legal effect. Mercury includes an XML persistence model and is mapped to an OWL ontology called FIRO, enabling semantic applications.",
"title": ""
},
{
"docid": "1ffe0a1612214af88315a5a751d3bb4f",
"text": "In recent years, it is getting attention for renewable energy sources such as solar energy, fuel cells, batteries or ultracapacitors for distributed power generation systems. This paper proposes a general mathematical model of solar cells and Matlab/Simulink software based simulation of this model has been visually programmed. Proposed model can be used with other hybrid systems to develop solar cell simulations. Also, all equations are performed by using Matlab/Simulink programming.",
"title": ""
},
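To make the kind of model referred to in the abstract above concrete, here is a minimal sketch of a single-diode solar-cell equation in plain Python/NumPy rather than Matlab/Simulink. The parameter values (short-circuit current, saturation current, ideality factor, series and shunt resistance) are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def pv_cell_current(v, irradiance=1000.0, temp_c=25.0,
                    i_sc=8.21, i_0=1e-9, n=1.3, r_s=0.005, r_sh=200.0):
    """Single-diode cell model solved by fixed-point iteration:
    I = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh."""
    k = 1.380649e-23      # Boltzmann constant [J/K]
    q = 1.602176634e-19   # elementary charge [C]
    v_t = n * k * (temp_c + 273.15) / q          # ideality-scaled thermal voltage
    i_ph = i_sc * irradiance / 1000.0            # photocurrent scales with irradiance
    i = i_ph                                     # initial guess
    for _ in range(200):                         # iterate the implicit equation
        i = i_ph - i_0 * (np.exp((v + i * r_s) / v_t) - 1.0) - (v + i * r_s) / r_sh
    return i

# Sweep the I-V curve and report the approximate maximum power point.
voltages = np.linspace(0.0, 0.7, 141)
currents = np.array([pv_cell_current(v) for v in voltages])
powers = voltages * currents
best = powers.argmax()
print("approx. MPP: %.3f V, %.2f A, %.2f W" % (voltages[best], currents[best], powers[best]))
```

In Simulink the same implicit I-V relation would typically be wired up as an algebraic loop or a controlled current source; the fixed-point loop above is only one simple way to solve it numerically.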
{
"docid": "e5f3a4d3e1fd591b81da2c08b228ce47",
"text": "This article is a tutorial for researchers who are designing software to perform a creative task and want to evaluate their system using interdisciplinary theories of creativity. Researchers who study human creativity have a great deal to offer computational creativity. We summarize perspectives from psychology, philosophy, cognitive science, and computer science as to how creativity can be measured both in humans and in computers. We survey how these perspectives have been used in computational creativity research and make recommendations for how they should be used.",
"title": ""
},
{
"docid": "3df57ba5139950ec58785ed669094d26",
"text": "In this paper we present the model used by the team Rivercorners for the 2017 RepEval shared task. First, our model separately encodes a pair of sentences into variable-length representations by using a bidirectional LSTM. Later, it creates fixed-length raw representations by means of simple aggregation functions, which are then refined using an attention mechanism. Finally it combines the refined representations of both sentences into a single vector to be used for classification. With this model we obtained test accuracies of 72.057% and 72.055% in the matched and mismatched evaluation tracks respectively, outperforming the LSTM baseline, and obtaining performances similar to a model that relies on shared information between sentences (ESIM). When using an ensemble both accuracies increased to 72.247% and 72.827% respectively.",
"title": ""
},
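As a rough illustration of the architecture described in the abstract above, the PyTorch sketch below shows a shared BiLSTM sentence-pair encoder with simple mean/max aggregation and a combination layer. The attention-based refinement step is omitted, and all layer sizes are assumed values rather than the authors' settings.

```python
import torch
import torch.nn as nn

class SentencePairClassifier(nn.Module):
    """Rough sketch of the model family described above: a shared BiLSTM produces
    variable-length states, mean/max pooling aggregates them into fixed-length
    vectors, and a small feed-forward layer classifies the combined pair vector.
    The attention-based refinement step is omitted; sizes are assumed values."""
    def __init__(self, vocab_size, embed_dim=300, hidden_dim=300, num_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        self.classifier = nn.Sequential(
            nn.Linear(16 * hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, num_classes))

    def encode(self, tokens):
        states, _ = self.encoder(self.embed(tokens))      # (batch, time, 2*hidden)
        return torch.cat([states.mean(dim=1),             # simple aggregation
                          states.max(dim=1).values], dim=-1)

    def forward(self, premise, hypothesis):
        p, h = self.encode(premise), self.encode(hypothesis)
        pair = torch.cat([p, h, (p - h).abs(), p * h], dim=-1)
        return self.classifier(pair)

model = SentencePairClassifier(vocab_size=10000)
logits = model(torch.randint(1, 10000, (4, 20)), torch.randint(1, 10000, (4, 18)))
print(logits.shape)   # torch.Size([4, 3])
```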
{
"docid": "29eebb40973bdfac9d1f1941d4c7c889",
"text": "This paper explains a procedure for getting models of robot kinematics and dynamics that are appropriate for robot control design. The procedure consists of the following steps: 1) derivation of robot kinematic and dynamic models and establishing correctness of their structures; 2) experimental estimation of the model parameters; 3) model validation; and 4) identification of the remaining robot dynamics, not covered with the derived model. We give particular attention to the design of identification experiments and to online reconstruction of state coordinates, as these strongly influence the quality of the estimation process. The importance of correct friction modeling and the estimation of friction parameters are illuminated. The models of robot kinematics and dynamics can be used in model-based nonlinear control. The remaining dynamics cannot be ignored if high-performance robot operation with adequate robustness is required. The complete procedure is demonstrated for a direct-drive robotic arm with three rotational joints.",
"title": ""
},
{
"docid": "830a585529981bd5b61ac5af3055d933",
"text": "Automatic retinal image analysis is emerging as an important screening tool for early detection of eye diseases. Glaucoma is one of the most common causes of blindness. The manual examination of optic disk (OD) is a standard procedure used for detecting glaucoma. In this paper, we present an automatic OD parameterization technique based on segmented OD and cup regions obtained from monocular retinal images. A novel OD segmentation method is proposed which integrates the local image information around each point of interest in multidimensional feature space to provide robustness against variations found in and around the OD region. We also propose a novel cup segmentation method which is based on anatomical evidence such as vessel bends at the cup boundary, considered relevant by glaucoma experts. Bends in a vessel are robustly detected using a region of support concept, which automatically selects the right scale for analysis. A multi-stage strategy is employed to derive a reliable subset of vessel bends called r-bends followed by a local spline fitting to derive the desired cup boundary. The method has been evaluated on 138 images comprising 33 normal and 105 glaucomatous images against three glaucoma experts. The obtained segmentation results show consistency in handling various geometric and photometric variations found across the dataset. The estimation error of the method for vertical cup-to-disk diameter ratio is 0.09/0.08 (mean/standard deviation) while for cup-to-disk area ratio it is 0.12/0.10. Overall, the obtained qualitative and quantitative results show effectiveness in both segmentation and subsequent OD parameterization for glaucoma assessment.",
"title": ""
},
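The two glaucoma indicators reported at the end of the abstract above (vertical cup-to-disk diameter ratio and cup-to-disk area ratio) reduce to simple mask arithmetic once segmentations are available. The NumPy sketch below shows one plausible way to compute them from binary masks; it is not the authors' code.

```python
import numpy as np

def cup_to_disc_ratios(disc_mask, cup_mask):
    """Given binary segmentation masks (2-D arrays of 0/1) for the optic disc and
    cup, compute the vertical cup-to-disk diameter ratio and the area ratio."""
    disc_rows = np.where(disc_mask.any(axis=1))[0]
    cup_rows = np.where(cup_mask.any(axis=1))[0]
    vertical_cdr = (cup_rows.max() - cup_rows.min() + 1) / (disc_rows.max() - disc_rows.min() + 1)
    area_cdr = cup_mask.sum() / disc_mask.sum()
    return vertical_cdr, area_cdr

# Toy example: a 15-pixel-tall disc containing a 6-pixel-tall cup.
disc = np.zeros((50, 50), dtype=int); disc[10:25, 10:30] = 1
cup = np.zeros((50, 50), dtype=int); cup[14:20, 14:24] = 1
print(cup_to_disc_ratios(disc, cup))   # -> (0.4, 0.2)
```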
{
"docid": "c0a51f27931d8314b73a7de969bdfb08",
"text": "Organizations need practical security benchmarking tools in order to plan effective security strategies. This paper explores a number of techniques that can be used to measure security within an organization. It proposes a benchmarking methodology that produces results that are of strategic importance to both decision makers and technology implementers.",
"title": ""
},
{
"docid": "e96eaf2bde8bf50605b67fb1184b760b",
"text": "In response to your recent publication comparing subjective effects of D9-tetrahydrocannabinol and herbal cannabis (Wachtel et al. 2002), a number of comments are necessary. The first concerns the suitability of the chosen “marijuana” to assay the issues at hand. NIDA cannabis has been previously characterized in a number of studies (Chait and Pierri 1989; Russo et al. 2002), as a crude lowgrade product (2–4% THC) containing leaves, stems and seeds, often 3 or more years old after processing, with a stale odor lacking in terpenoids. This contrasts with the more customary clinical cannabis employed by patients in Europe and North America, composed solely of unseeded flowering tops with a potency of up to 20% THC. Cannabis-based medicine extracts (CBME) (Whittle et al. 2001), employed in clinical trials in the UK (Notcutt 2002; Robson et al. 2002), are extracted from flowering tops with abundant glandular trichomes, and retain full terpenoid and flavonoid components. In the study at issue (Wachtel et al. 2002), we are informed that marijuana contained 2.11% THC, 0.30% cannabinol (CBN), and 0.05% (CBD). The concentration of the latter two cannabinoids is virtually inconsequential. Thus, we are not surprised that no differences were seen between NIDA marijuana with essentially only one cannabinoid, and pure, synthetic THC. In comparison, clinical grade cannabis and CBME customarily contain high quantities of CBD, frequently equaling the percentage of THC (Whittle et al. 2001). Carlini et al. (1974) determined that cannabis extracts produced effects “two or four times greater than that expected from their THC content, based on animal and human studies”. Similarly, Fairbairn and Pickens (1981) detected the presence of unidentified “powerful synergists” in cannabis extracts, causing 330% greater activity in mice than THC alone. The clinical contribution of other CBD and other cannabinoids, terpenoids and flavonoids to clinical cannabis effects has been espoused as an “entourage effect” (Mechoulam and Ben-Shabat 1999), and is reviewed in detail by McPartland and Russo (2001). Briefly summarized, CBD has anti-anxiety effects (Zuardi et al. 1982), anti-psychotic benefits (Zuardi et al. 1995), modulates metabolism of THC by blocking its conversion to the more psychoactive 11-hydroxy-THC (Bornheim and Grillo 1998), prevents glutamate excitotoxicity, serves as a powerful anti-oxidant (Hampson et al. 2000), and has notable anti-inflammatory and immunomodulatory effects (Malfait et al. 2000). Terpenoid cannabis components probably also contribute significantly to clinical effects of cannabis and boil at comparable temperatures to THC (McPartland and Russo 2001). Cannabis essential oil demonstrates serotonin receptor binding (Russo et al. 2000). Its terpenoids include myrcene, a potent analgesic (Rao et al. 1990) and anti-inflammatory (Lorenzetti et al. 1991), betacaryophyllene, another anti-inflammatory (Basile et al. 1988) and gastric cytoprotective (Tambe et al. 1996), limonene, a potent inhalation antidepressant and immune stimulator (Komori et al. 1995) and anti-carcinogenic (Crowell 1999), and alpha-pinene, an anti-inflammatory (Gil et al. 1989) and bronchodilator (Falk et al. 1990). Are these terpenoid effects significant? A dried sample of drug-strain cannabis buds was measured as displaying an essential oil yield of 0.8% (Ross and ElSohly 1996), or a putative 8 mg per 1000 mg cigarette. Buchbauer et al. 
(1993) demonstrated that 20–50 mg of essential oil in the ambient air in mouse cages produced measurable changes in behavior, serum levels, and bound to cortical cells. Similarly, Komori et al. (1995) employed a gel of citrus fragrance with limonene to produce a significant antidepressant benefit in humans, obviating the need for continued standard medication in some patients, and also improving CD4/8 immunologic ratios.",
"title": ""
}
] | scidocsrr |
11e33996f932f4f0c48c24112e1866f5 | Extraction of Web News from Web Pages Using a Ternary Tree Approach | [
{
"docid": "40e9a5fcc3eaf85840a45dff8a09aec1",
"text": "Web data extractors are used to extract data from web documents in order to feed automated processes. In this article, we propose a technique that works on two or more web documents generated by the same server-side template and learns a regular expression that models it and can later be used to extract data from similar documents. The technique builds on the hypothesis that the template introduces some shared patterns that do not provide any relevant data and can thus be ignored. We have evaluated and compared our technique to others in the literature on a large collection of web documents; our results demonstrate that our proposal performs better than the others and that input errors do not have a negative impact on its effectiveness; furthermore, its efficiency can be easily boosted by means of a couple of parameters, without sacrificing its effectiveness.",
"title": ""
},
{
"docid": "060a024416dd983e226d5318789337a7",
"text": "Extracting information from web documents has become a research area in which new proposals sprout out year after year. This has motivated several researchers to work on surveys that attempt to provide an overall picture of the many existing proposals. Unfortunately, none of these surveys provide a complete picture, because they do not take region extractors into account. These tools are kind of preprocessors, because they help information extractors focus on the regions of a web document that contain relevant information. With the increasing complexity of web documents, region extractors are becoming a must to extract information from many websites. Beyond information extraction, region extractors have also found their way into information retrieval, focused web crawling, topic distillation, adaptive content delivery, mashups, and metasearch engines. In this paper, we survey the existing proposals regarding region extractors and compare them side by side.",
"title": ""
},
{
"docid": "351969655fca37f1d3256481ab037e87",
"text": "Many Web news sites have similar structures and layout styles. Our extensive case studies have indicated that there exists potential relevance between Web content layouts and path patterns. Compared with the delimiting features of Web content, path patterns have many advantages, such as a high positioning accuracy, ease of use and a strong pervasive performance. Consequently, a Web information extraction model with path patterns constructed from a path pattern mining algorithm is proposed in this paper. Our experimental data set is obtained by randomly selecting news Web pages from the CNN website. With a reasonable tolerance threshold, the experimental results show that the average precision is above 99% and the average recall is 100% when we integrate Web information extraction with our path pattern mining algorithm. The performance of path patterns from the pattern mining algorithm is much better than that of priori extraction rules configured by domain knowledge.",
"title": ""
}
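A hypothetical illustration of applying a mined path pattern: if the mining step produces XPath-like paths for a site template, extraction amounts to evaluating those paths on each page generated by that template. The snippet below uses lxml with hand-written paths purely for illustration; the paper's own pattern language and mining algorithm are not reproduced here.

```python
from lxml import html

# A toy page standing in for one instance of a shared news template.
PAGE = """
<html><body>
  <div id="content">
    <h1 class="headline">Example headline</h1>
    <div class="story"><p>First paragraph.</p><p>Second paragraph.</p></div>
  </div>
</body></html>
"""

# Paths like these would normally come from the mining step, not be hand-written.
HEADLINE_PATH = "//div[@id='content']/h1[@class='headline']/text()"
BODY_PATH = "//div[@id='content']/div[@class='story']//p/text()"

tree = html.fromstring(PAGE)
headline = tree.xpath(HEADLINE_PATH)[0].strip()
body = " ".join(p.strip() for p in tree.xpath(BODY_PATH))
print(headline)
print(body)
```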
] | [
{
"docid": "35981768a2a46c2dd9d52ebbd5b63750",
"text": "A vehicle detection and classification system has been developed based on a low-cost triaxial anisotropic magnetoresistive sensor. Considering the characteristics of vehicle magnetic detection signals, especially the signals for low-speed congested traffic in large cities, a novel fixed threshold state machine algorithm based on signal variance is proposed to detect vehicles within a single lane and segment the vehicle signals effectively according to the time information of vehicles entering and leaving the sensor monitoring area. In our experiments, five signal features are extracted, including the signal duration, signal energy, average energy of the signal, ratio of positive and negative energy of x-axis signal, and ratio of positive and negative energy of y-axis signal. Furthermore, the detected vehicles are classified into motorcycles, two-box cars, saloon cars, buses, and Sport Utility Vehicle commercial vehicles based on a classification tree model. The experimental results have shown that the detection accuracy of the proposed algorithm can reach up to 99.05% and the average classification accuracy is 93.66%, which verify the effectiveness of our algorithm for low-speed congested traffic.",
"title": ""
},
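As a toy illustration of the variance-based state machine described above, the sketch below flags a vehicle passage whenever the sliding-window variance of a (synthetic) magnetic signal crosses fixed thresholds. Window size and thresholds are invented values, and the feature-extraction and classification-tree stages of the paper are not shown.

```python
import numpy as np

def detect_vehicles(signal, window=20, on_threshold=4.0, off_threshold=1.5, min_len=10):
    """Fixed-threshold state machine on sliding-window variance: enter the
    'vehicle present' state when variance exceeds on_threshold, leave it when
    variance drops below off_threshold, and discard very short bursts."""
    detections, start, in_vehicle = [], None, False
    for i in range(len(signal) - window):
        var = np.var(signal[i:i + window])
        if not in_vehicle and var > on_threshold:
            in_vehicle, start = True, i
        elif in_vehicle and var < off_threshold:
            if i - start >= min_len:                 # ignore noise spikes
                detections.append((start, i + window))
            in_vehicle = False
    return detections   # list of (enter_index, leave_index) segments

# Synthetic test: quiet background with one noisy 'vehicle' burst in the middle.
rng = np.random.default_rng(0)
sig = rng.normal(0, 0.5, 500)
sig[200:280] += rng.normal(0, 5.0, 80)
print(detect_vehicles(sig))
```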
{
"docid": "aa2b1a8d0cf511d5862f56b47d19bc6a",
"text": "DBMSs have long suffered from SQL’s lack of power and extensibility. We have implemented ATLaS [1], a powerful database language and system that enables users to develop complete data-intensive applications in SQL—by writing new aggregates and table functions in SQL, rather than in procedural languages as in current Object-Relational systems. As a result, ATLaS’ SQL is Turing-complete [7], and is very suitable for advanced data-intensive applications, such as data mining and stream queries. The ATLaS system is now available for download along with a suite of applications [1] including various data mining functions, that have been coded in ATLaS’ SQL, and execute with a modest (20–40%) performance overhead with respect to the same applications written in C/C++. Our proposed demo will illustrate the key features and applications of ATLaS. In particular, we will demonstrate:",
"title": ""
},
{
"docid": "ea1a56c7bcf4871d1c6f2f9806405827",
"text": "—Prior to the successful use of non-contact photoplethysmography, several engineering issues regarding this monitoring technique must be considered. These issues include ambient light and motion artefacts, the wide dynamic signal range and the effect of direct light source coupling. The latter issue was investigated and preliminary results show that direct coupling can cause attenuation of the detected PPG signal. It is shown that a physical offset can be introduced between the light source and the detector in order to reduce this effect.",
"title": ""
},
{
"docid": "7c287295e022480314d8a2627cd12cef",
"text": "The causal role of human papillomavirus infections in cervical cancer has been documented beyond reasonable doubt. The association is present in virtually all cervical cancer cases worldwide. It is the right time for medical societies and public health regulators to consider this evidence and to define its preventive and clinical implications. A comprehensive review of key studies and results is presented.",
"title": ""
},
{
"docid": "dd975fded3a24052a31bb20587ff8566",
"text": "This paper presents a design methodology for a high power density converter, which emphasizes weight minimization. The design methodology considers various inverter topologies and semiconductor devices with application of cold plate cooling and LCL filter. Design for a high-power inverter is evaluated with demonstration of a 50 kVA 2-level 3-phase SiC inverter operating at 60 kHz switching frequency. The prototype achieves high gravimetric power density of 6.49 kW/kg.",
"title": ""
},
{
"docid": "1f9bf4526e7e58494242ddce17f6c756",
"text": "Consider the following generalization of the classical job-shop scheduling problem in which a set of machines is associated with each operation of a job. The operation can be processed on any of the machines in this set. For each assignment μ of operations to machines letP(μ) be the corresponding job-shop problem andf(μ) be the minimum makespan ofP(μ). How to find an assignment which minimizesf(μ)? For problems with two jobs a polynomial algorithm is derived. Folgende Verallgemeinerung des klassischen Job-Shop Scheduling Problems wird untersucht. Jeder Operation eines Jobs sei eine Menge von Maschinen zugeordnet. Wählt man für jede Operation genau eine Maschine aus dieser Menge aus, so erhält man ein klassisches Job-Shop Problem, dessen minimale Gesamtbearbeitungszeitf(μ) von dieser Zuordnung μ abhängt. Gesucht ist eine Zuordnung μ, dief(μ) minimiert. Für zwei Jobs wird ein polynomialer Algorithmus entwickelt, der dieses Problem löst.",
"title": ""
},
{
"docid": "2c0a4b5c819a8fcfd5a9ab92f59c311e",
"text": "Line starting capability of Synchronous Reluctance Motors (SynRM) is a crucial challenge in their design that if solved, could lead to a valuable category of motors. In this paper, the so-called crawling effect as a potential problem in Line-Start Synchronous Reluctance Motors (LS-SynRM) is analyzed. Two interfering scenarios on LS-SynRM start-up are introduced and one of them is treated in detail by constructing the asynchronous model of the motor. In the third section, a definition of this phenomenon is given utilizing a sample cage configuration. The LS-SynRM model and characteristics are compared with that of a reference induction motor (IM) in all sections of this work to convey a better perception of successful and unsuccessful synchronization consequences to the reader. Several important post effects of crawling on motor performance are discussed in the rest of the paper to evaluate how it would influence the motor operation. All simulations have been performed using Finite Element Analysis (FEA).",
"title": ""
},
{
"docid": "7487f889eae6a32fc1afab23e54de9b8",
"text": "Although many researchers have investigated the use of different powertrain topologies, component sizes, and control strategies in fuel-cell vehicles, a detailed parametric study of the vehicle types must be conducted before a fair comparison of fuel-cell vehicle types can be performed. This paper compares the near-optimal configurations for three topologies of vehicles: fuel-cell-battery, fuel-cell-ultracapacitor, and fuel-cell-battery-ultracapacitor. The objective function includes performance, fuel economy, and powertrain cost. The vehicle models, including detailed dc/dc converter models, are programmed in Matlab/Simulink for the customized parametric study. A controller variable for each vehicle type is varied in the optimization.",
"title": ""
},
{
"docid": "f3cb6de57ba293be0b0833a04086b2ce",
"text": "Due to increasing globalization, urban societies are becoming more multicultural. The availability of large-scale digital mobility traces e.g. from tweets or checkins provides an opportunity to explore multiculturalism that until recently could only be addressed using survey-based methods. In this paper we examine a basic facet of multiculturalism through the lens of language use across multiple cities in Switzerland. Using data obtained from Foursquare over 330 days, we present a descriptive analysis of linguistic differences and similarities across five urban agglomerations in a multicultural, western European country.",
"title": ""
},
{
"docid": "659eea2d34037b6c72728c9149247218",
"text": "Deep learning approaches to breast cancer detection in mammograms have recently shown promising results. However, such models are constrained by the limited size of publicly available mammography datasets, in large part due to privacy concerns and the high cost of generating expert annotations. Limited dataset size is further exacerbated by substantial class imbalance since “normal” images dramatically outnumber those with findings. Given the rapid progress of generative models in synthesizing realistic images, and the known effectiveness of simple data augmentation techniques (e.g. horizontal flipping), we ask if it is possible to synthetically augment mammogram datasets using generative adversarial networks (GANs). We train a class-conditional GAN to perform contextual in-filling, which we then use to synthesize lesions onto healthy screening mammograms. First, we show that GANs are capable of generating high-resolution synthetic mammogram patches. Next, we experimentally evaluate using the augmented dataset to improve breast cancer classification performance. We observe that a ResNet-50 classifier trained with GAN-augmented training data produces a higher AUROC compared to the same model trained only on traditionally augmented data, demonstrating the potential of our approach.",
"title": ""
},
{
"docid": "b7d61816af1dd409e8474cf97fa15b4f",
"text": "This paper presents the detailed circuit operation, mathematical analysis, and design example of the active clamp flyback converter. The auxiliary switch and clamp capacitor are used in the flyback converter to recycle the energy stored in the transformer leakage in order to minimize the spike voltage at the transformer primary side. Therefore the voltage stress of main switch can be reduced. The active clamped circuit can also help the main switch to turn on at ZVS using the switch output capacitor and transformer leakage inductance. First the circuit operation and mathematical analysis are provided. The design example of active clamp flyback converter is also presented. Finally the experimental results based on a 120 W prototype circuit are provided to verify the system performance",
"title": ""
},
{
"docid": "f1910095f08fc72f81c39cc01890c474",
"text": "In today’s competitive business environment, there is a strong need for businesses to collect, monitor, and analyze user-generated data on their own and on their competitors’ social media sites, such as Facebook, Twitter, and blogs. To achieve a competitive advantage, it is often necessary to listen to and understand what customers are saying about competitors’ products and services. Current social media analytics frameworks do not provide benchmarks that allow businesses to compare customer sentiment on social media to easily understand where businesses are doing well and where they need to improve. In this paper, we present a social media competitive analytics framework with sentiment benchmarks that can be used to glean industry-specific marketing intelligence. Based on the idea of the proposed framework, new social media competitive analytics with sentiment benchmarks can be developed to enhance marketing intelligence and to identify specific actionable areas in which businesses are leading and lagging to further improve their customers’ experience using customer opinions gleaned from social media. Guided by the proposed framework, an innovative business-driven social media competitive analytics tool named VOZIQ is developed. We use VOZIQ to analyze tweets associated with five large retail sector companies and to generate meaningful business insight reports. 2015 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "1d88a06a34beff2c3e926a6d24f70036",
"text": "Graph-based clustering methods perform clustering on a fixed input data graph. If this initial construction is of low quality then the resulting clustering may also be of low quality. Moreover, existing graph-based clustering methods require post-processing on the data graph to extract the clustering indicators. We address both of these drawbacks by allowing the data graph itself to be adjusted as part of the clustering procedure. In particular, our Constrained Laplacian Rank (CLR) method learns a graph with exactly k connected components (where k is the number of clusters). We develop two versions of this method, based upon the L1-norm and the L2-norm, which yield two new graph-based clustering objectives. We derive optimization algorithms to solve these objectives. Experimental results on synthetic datasets and real-world benchmark datasets exhibit the effectiveness of this new graph-based clustering method. Introduction State-of-the art clustering methods are often based on graphical representations of the relationships among data points. For example, spectral clustering (Ng, Jordan, and Weiss 2001), normalized cut (Shi and Malik 2000) and ratio cut (Hagen and Kahng 1992) all transform the data into a weighted, undirected graph based on pairwise similarities. Clustering is then accomplished by spectral or graphtheoretic optimization procedures. See (Ding and He 2005; Li and Ding 2006) for a discussion of the relations among these graph-based methods, and also the connections to nonnegative matrix factorization. All of these methods involve a two-stage process in which an data graph is formed from the data, and then various optimization procedures are invoked on this fixed input data graph. A disadvantage of this two-stage process is that the final clustering structures are not represented explicitly in the data graph (e.g., graph-cut methods often use K-means algorithm to post-process the ∗To whom all correspondence should be addressed. This work was partially supported by US NSF-IIS 1117965, NSFIIS 1302675, NSF-IIS 1344152, NSF-DBI 1356628, NIH R01 AG049371. Copyright c © 2016, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. results to get the clustering indicators); also, the clustering results are dependent on the quality of the input data graph (i.e., they are sensitive to the particular graph construction methods). It seems plausible that a strategy in which the optimization phase is allowed to change the data graph could have advantages relative to the two-phase strategy. In this paper we propose a novel graph-based clustering model that learns a graph with exactly k connected components (where k is the number of clusters). In our new model, instead of fixing the input data graph associated to the affinity matrix, we learn a new data similarity matrix that is a block diagonal matrix and has exactly k connected components—the k clusters. Thus, our new data similarity matrix is directly useful for the clustering task; the clustering results can be immediately obtained without requiring any post-processing to extract the clustering indicators. To achieve such ideal clustering structures, we impose a rank constraint on the Laplacian graph of the new data similarity matrix, thereby guaranteeing the existence of exactly k connected components. Considering both L2-norm and L1norm objectives, we propose two new clustering objectives and derive optimization algorithms to solve them. 
We also introduce a novel graph-construction method to initialize the graph associated with the affinity matrix. We conduct empirical studies on simulated datasets and seven real-world benchmark datasets to validate our proposed methods. The experimental results are promising: we find that our new graph-based clustering method consistently outperforms other related methods in most cases. Notation: Throughout the paper, all matrices are written as uppercase. For a matrix M, the i-th row and the ij-th element of M are denoted by m_i and m_ij, respectively. The trace of matrix M is denoted by Tr(M). The L2-norm of vector v is denoted by ‖v‖_2, and the Frobenius norm and the L1-norm of matrix M are denoted by ‖M‖_F and ‖M‖_1, respectively. New Clustering Formulations: Graph-based clustering approaches typically optimize their objectives based on a given data graph associated with an affinity matrix A ∈ R^{n×n} (which can be symmetric or nonsymmetric), where n is the number of nodes (data points) in the graph. There are two drawbacks with these approaches: (1) the clustering performance is sensitive to the quality of the data graph construction; (2) the cluster structures are not explicit in the clustering results and a post-processing step is needed to uncover the clustering indicators. To address these two challenges, we aim to learn a new data graph S based on the given data graph A such that the new data graph is more suitable for the clustering task. In our strategy, we propose to learn a new data graph S that has exactly k connected components, where k is the number of clusters. In order to formulate a clustering objective based on this strategy, we start from the following theorem. If the affinity matrix A is nonnegative, then the Laplacian matrix L_A = D_A − (A + A^T)/2, where the degree matrix D_A ∈ R^{n×n} is defined as a diagonal matrix whose i-th diagonal element is ∑_j (a_ij + a_ji)/2, has the following important property (Mohar 1991; Chung 1997). Theorem 1: The multiplicity k of the eigenvalue zero of the Laplacian matrix L_A is equal to the number of connected components in the graph associated with A. Given a graph with affinity matrix A, Theorem 1 indicates that if rank(L_A) = n − k, then the graph is an ideal graph based on which we can already partition the data points into k clusters, without the need of performing K-means or other discretization procedures as is necessary with traditional graph-based clustering methods such as spectral clustering. Motivated by Theorem 1, given an initial affinity matrix A ∈ R^{n×n}, we learn a similarity matrix S ∈ R^{n×n} such that the corresponding Laplacian matrix L_S = D_S − (S + S^T)/2 is constrained to satisfy rank(L_S) = n − k. Under this constraint, the learned S is block diagonal under a proper permutation, and thus we can directly partition the data points into k clusters based on S (Nie, Wang, and Huang 2014). To avoid the case that some rows of S are all zeros, we further constrain S such that the sum of each row of S is one. Under these constraints, we learn the S that best approximates the initial affinity matrix A. Considering two different distances, the L2-norm and the L1-norm, between the given affinity matrix A and the learned similarity matrix S, we define the Constrained Laplacian Rank (CLR) for graph-based clustering as the solution to the following optimization problems: J_CLR-L2 = min_{∑_j s_ij = 1, s_ij ≥ 0, rank(L_S) = n − k} ‖S − A‖_F^2 (1) and J_CLR-L1 = min_{∑_j s_ij = 1, s_ij ≥ 0, rank(L_S) = n − k} ‖S − A‖_1 (2). These problems seem very difficult to solve, since L_S = D_S − (S + S^T)/2, D_S also depends on S, and the constraint rank(L_S) = n − k is a complex nonlinear constraint. In the next section, we propose novel and efficient algorithms to solve these problems. Optimization Algorithm for Solving J_CLR-L2 in Eq. (1): Let σ_i(L_S) denote the i-th smallest eigenvalue of L_S. Note that σ_i(L_S) ≥ 0 because L_S is positive semidefinite. Problem (1) is equivalent to the following problem for a large enough value of λ: min_{∑_j s_ij = 1, s_ij ≥ 0} ‖S − A‖_F^2 + 2λ ∑_{i=1}^{k} σ_i(L_S).",
"title": ""
},
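A hedged NumPy/SciPy sketch of the L2-norm CLR idea, following the relaxation at the end of the excerpt above: alternate between taking the k smallest eigenvectors of the current Laplacian and re-solving each row of S on the probability simplex. The fixed λ and the simple stopping rule are simplifications of my own; the authors' algorithm adapts λ until S has exactly k components and treats the L1 case separately.

```python
import numpy as np
from scipy.sparse.csgraph import connected_components

def project_to_simplex(y):
    """Euclidean projection of a vector onto {x : x >= 0, sum(x) = 1}."""
    u = np.sort(y)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / np.arange(1, len(y) + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1.0)
    return np.maximum(y + theta, 0.0)

def clr_l2(A, k, lam=1.0, n_iter=30):
    """Alternate between the k smallest eigenvectors F of the current Laplacian
    and simplex-constrained row updates of S; lam is kept fixed here, whereas a
    full implementation would adapt it until S has exactly k components."""
    A = (A + A.T) / 2.0
    n = A.shape[0]
    S = A / np.maximum(A.sum(axis=1, keepdims=True), 1e-12)
    for _ in range(n_iter):
        S_sym = (S + S.T) / 2.0
        L = np.diag(S_sym.sum(axis=1)) - S_sym
        _, vecs = np.linalg.eigh(L)
        F = vecs[:, :k]                                            # k smallest eigenvectors
        V = np.square(F[:, None, :] - F[None, :, :]).sum(axis=2)   # V[i, j] = ||f_i - f_j||^2
        for i in range(n):
            S[i] = project_to_simplex(A[i] - 0.5 * lam * V[i])
    n_comp, labels = connected_components((S + S.T) > 1e-8, directed=False)
    return S, labels

# Tiny demo on two well-separated point clouds.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)), rng.normal(3.0, 0.3, (20, 2))])
A = np.exp(-np.square(X[:, None, :] - X[None, :, :]).sum(-1))   # naive Gaussian affinities
S, labels = clr_l2(A, k=2)
print(labels)
```

Reading the cluster labels directly off the connected components of the learned graph is what removes the usual K-means post-processing step.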
{
"docid": "292981db9a4f16e4ba7e02303cbee6c1",
"text": "The millimeter wave frequency spectrum offers unprecedented bandwidths for future broadband cellular networks. This paper presents the world's first empirical measurements for 28 GHz outdoor cellular propagation in New York City. Measurements were made in Manhattan for three different base station locations and 75 receiver locations over distances up to 500 meters. A 400 megachip-per-second channel sounder and directional horn antennas were used to measure propagation characteristics for future mm-wave cellular systems in urban environments. This paper presents measured path loss as a function of the transmitter - receiver separation distance, the angular distribution of received power using directional 24.5 dBi antennas, and power delay profiles observed in New York City. The measured data show that a large number of resolvable multipath components exist in both non line of sight and line of sight environments, with observed multipath excess delay spreads (20 dB) as great as 1388.4 ns and 753.5 ns, respectively. The widely diverse spatial channels observed at any particular location suggest that millimeter wave mobile communication systems with electrically steerable antennas could exploit resolvable multipath components to create viable links for cell sizes on the order of 200 m.",
"title": ""
},
{
"docid": "9544b2cc301e2e3f170f050de659dda4",
"text": "In SDN, the underlying infrastructure is usually abstracted for applications that can treat the network as a logical or virtual entity. Commonly, the ``mappings\" between virtual abstractions and their actual physical implementations are not one-to-one, e.g., a single \"big switch\" abstract object might be implemented using a distributed set of physical devices. A key question is, what abstractions could be mapped to multiple physical elements while faithfully preserving their native semantics? E.g., can an application developer always expect her abstract \"big switch\" to act exactly as a physical big switch, despite being implemented using multiple physical switches in reality?\n We show that the answer to that question is \"no\" for existing virtual-to-physical mapping techniques: behavior can differ between the virtual \"big switch\" and the physical network, providing incorrect application-level behavior. We also show that that those incorrect behaviors occur despite the fact that the most pervasive and commonly-used correctness invariants, such as per-packet consistency, are preserved throughout. These examples demonstrate that for practical notions of correctness, new systems and a new analytical framework are needed. We take the first steps by defining end-to-end correctness, a correctness condition that focuses on applications only, and outline a research vision to obtain virtualization systems with correct virtual to physical mappings.",
"title": ""
},
{
"docid": "667837818361e277cee0995308e69d6d",
"text": "We aim to obtain an interpretable, expressive, and disentangled scene representation that contains comprehensive structural and textural information for each object. Previous scene representations learned by neural networks are often uninterpretable, limited to a single object, or lacking 3D knowledge. In this work, we propose 3D scene de-rendering networks (3D-SDN) to address the above issues by integrating disentangled representations for semantics, geometry, and appearance into a deep generative model. Our scene encoder performs inverse graphics, translating a scene into a structured object-wise representation. Our decoder has two components: a differentiable shape renderer and a neural texture generator. The disentanglement of semantics, geometry, and appearance supports 3D-aware scene manipulation, e.g., rotating and moving objects freely while keeping the consistent shape and texture, and changing the object appearance without affecting its shape. Experiments demonstrate that our editing scheme based on 3D-SDN is superior to its 2D counterpart.",
"title": ""
},
{
"docid": "2e99cd85bb172d545648f18a76a0ff14",
"text": "In this work, the use of type-2 fuzzy logic systems as a novel approach for predicting permeability from well logs has been investigated and implemented. Type-2 fuzzy logic system is good in handling uncertainties, including uncertainties in measurements and data used to calibrate the parameters. In the formulation used, the value of a membership function corresponding to a particular permeability value is no longer a crisp value; rather, it is associated with a range of values that can be characterized by a function that reflects the level of uncertainty. In this way, the model will be able to adequately account for all forms of uncertainties associated with predicting permeability from well log data, where uncertainties are very high and the need for stable results are highly desirable. Comparative studies have been carried out to compare the performance of the proposed type-2 fuzzy logic system framework with those earlier used methods, using five different industrial reservoir data. Empirical results from simulation show that type-2 fuzzy logic approach outperformed others in general and particularly in the area of stability and ability to handle data in uncertain situations, which are common characteristics of well logs data. Another unique advantage of the newly proposed model is its ability to generate, in addition to the normal target forecast, prediction intervals as its by-products without extra computational cost. 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "bab246f8b15931501049862066fde77f",
"text": "The upcoming Internet of Things will introduce large sensor networks including devices with very different propagation characteristics and power consumption demands. 5G aims to fulfill these requirements by demanding a battery lifetime of at least 10 years. To integrate smart devices that are located in challenging propagation conditions, IoT communication technologies furthermore have to support very deep coverage. NB-IoT and eMTC are designed to meet these requirements and thus paving the way to 5G. With the power saving options extended Discontinuous Reception and Power Saving Mode as well as the usage of large numbers of repetitions, NB-IoT and eMTC introduce new techniques to meet the 5G IoT requirements. In this paper, the performance of NB-IoT and eMTC is evaluated. Therefore, data rate, power consumption, latency and spectral efficiency are examined in different coverage conditions. Although both technologies use the same power saving techniques as well as repetitions to extend the communication range, the analysis reveals a different performance in the context of data size, rate and coupling loss. While eMTC comes with a 4% better battery lifetime than NB-IoT when considering 144 dB coupling loss, NB-IoT battery lifetime raises to 18% better performance in 164 dB coupling loss scenarios. The overall analysis shows that in coverage areas with a coupling loss of 155 dB or less, eMTC performs better, but requires much more bandwidth. Taking the spectral efficiency into account, NB-IoT is in all evaluated scenarios the better choice and more suitable for future networks with massive numbers of devices.",
"title": ""
},
{
"docid": "4859e7f8bfc31401e19e360386867ae2",
"text": "Health data is important as it provides an individual with knowledge of the factors needed to be improved for oneself. The development of fitness trackers and their associated software aid consumers to understand the manner in which they may improve their physical wellness. These devices are capable of collecting health data for a consumer such sleeping patterns, heart rate readings or the number of steps taken by an individual. Although, this information is very beneficial to guide a consumer to a better healthier state, it has been identified that they have privacy and security concerns. Privacy and Security are of great concern for fitness trackers and their associated applications as protecting health data is of critical importance. This is so, as health data is one of the highly sort after information by cyber criminals. Fitness trackers and their associated applications have been identified to contain privacy and security concerns that places the health data of consumers at risk to intruders. As the study of Consumer Health continues to grow it is vital to understand the elements that are needed to better protect the health information of a consumer. This research paper therefore provides a conceptual threat assessment framework that can be used to identify the elements needed to better secure Consumer Health Wearables. These elements consist of six core elements from the CIA triad and Microsoft STRIDE framework. Fourteen vulnerabilities were further discovered that were classified within these six core elements. Through this, better guidance can be achieved to improve the privacy and security of Consumer Health Wearables.",
"title": ""
},
{
"docid": "f70cea53fb4bb6d9cc98bd6dd7a96c88",
"text": "During maintenance, it is common to run the new version of a program against its existing test suite to check whether the modifications in the program introduced unforeseen side effects. Although this kind of regression testing can be effective in identifying some change-related faults, it is limited by the quality of the existing test suite. Because generating tests for real programs is expensive, developers build test suites by finding acceptable tradeoffs between cost and thoroughness of the tests. Such test suites necessarily target only a small subset of the program's functionality and may miss many regression faults. To address this issue, we introduce the concept of behavioral regression testing, whose goal is to identify behavioral differences between two versions of a program through dynamic analysis. Intuitively, given a set of changes in the code, behavioral regression testing works by (1) generating a large number of test cases that focus on the changed parts of the code, (2) running the generated test cases on the old and new versions of the code and identifying differences in the tests' outcome, and (3) analyzing the identified differences and presenting them to the developers. By focusing on a subset of the code and leveraging differential behavior, our approach can provide developers with more (and more focused) information than traditional regression testing techniques. This paper presents our approach and performs a preliminary assessment of its feasibility.",
"title": ""
}
] | scidocsrr |
66288ac8ed76e5a13886c97d89aba672 | Diversity, Serendipity, Novelty, and Coverage: A Survey and Empirical Analysis of Beyond-Accuracy Objectives in Recommender Systems | [
{
"docid": "7d4fa882673f142c4faa8a4ff3c2a205",
"text": "This paper presents a different perspective on diversity in search results: diversity by proportionality. We consider a result list most diverse, with respect to some set of topics related to the query, when the number of documents it provides on each topic is proportional to the topic's popularity. Consequently, we propose a framework for optimizing proportionality for search result diversification, which is motivated by the problem of assigning seats to members of competing political parties. Our technique iteratively determines, for each position in the result ranked list, the topic that best maintains the overall proportionality. It then selects the best document on this topic for this position. We demonstrate empirically that our method significantly outperforms the top performing approach in the literature not only on our proposed metric for proportionality, but also on several standard diversity measures. This result indicates that promoting proportionality naturally leads to minimal redundancy, which is a goal of the current diversity approaches.",
"title": ""
},
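To give the seat-assignment analogy above a concrete shape, the toy function below allocates ranking positions to topics with a Sainte-Laguë-style quotient and then emits the best remaining document for the chosen topic. It captures the spirit of the approach but is not the authors' exact formulation; topic popularities and per-topic document rankings are assumed to come from an upstream component.

```python
def proportional_ranking(topic_popularity, docs_by_topic, length=10):
    """At each rank position, pick the topic with the largest quotient
    popularity / (2 * seats + 1), then emit its best remaining document."""
    seats = {t: 0 for t in topic_popularity}
    ranking = []
    for _ in range(length):
        candidates = [t for t in topic_popularity if docs_by_topic[t]]
        if not candidates:
            break
        best = max(candidates, key=lambda t: topic_popularity[t] / (2 * seats[t] + 1))
        ranking.append(docs_by_topic[best].pop(0))   # best remaining doc for that topic
        seats[best] += 1
    return ranking

# Illustrative inputs: three query aspects with popularities 0.5 / 0.3 / 0.2.
popularity = {"battery life": 0.5, "camera": 0.3, "price": 0.2}
docs = {"battery life": ["b1", "b2", "b3", "b4"],
        "camera": ["c1", "c2", "c3"],
        "price": ["p1", "p2"]}
print(proportional_ranking(popularity, docs, length=8))
# -> ['b1', 'c1', 'p1', 'b2', 'b3', 'c2', 'b4', 'p2'], roughly proportional to popularity
```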
{
"docid": "b796a957545aa046bad14d44c4578700",
"text": "Image annotation datasets are becoming larger and larger, with tens of millions of images and tens of thousands of possible annotations. We propose a strongly performing method that scales to such datasets by simultaneously learning to optimize precision at k of the ranked list of annotations for a given image and learning a low-dimensional joint embedding space for both images and annotations. Our method both outperforms several baseline methods and, in comparison to them, is faster and consumes less memory. We also demonstrate how our method learns an interpretable model, where annotations with alternate spellings or even languages are close in the embedding space. Hence, even when our model does not predict the exact annotation given by a human labeler, it often predicts similar annotations, a fact that we try to quantify by measuring the newly introduced “sibling” precision metric, where our method also obtains excellent results.",
"title": ""
},
{
"docid": "1a968e8cf7c35cc6ed36de0a8cccd9f0",
"text": "Random walks have been successfully used to measure user or object similarities in collaborative filtering (CF) recommender systems, which is of high accuracy but low diversity. A key challenge of a CF system is that the reliably accurate results are obtained with the help of peers' recommendation, but the most useful individual recommendations are hard to be found among diverse niche objects. In this paper we investigate the direction effect of the random walk on user similarity measurements and find that the user similarity, calculated by directed random walks, is reverse to the initial node's degree. Since the ratio of small-degree users to large-degree users is very large in real data sets, the large-degree users' selections are recommended extensively by traditional CF algorithms. By tuning the user similarity direction from neighbors to the target user, we introduce a new algorithm specifically to address the challenge of diversity of CF and show how it can be used to solve the accuracy-diversity dilemma. Without relying on any context-specific information, we are able to obtain accurate and diverse recommendations, which outperforms the state-of-the-art CF methods. This work suggests that the random-walk direction is an important factor to improve the personalized recommendation performance.",
"title": ""
}
] | [
{
"docid": "79c7bf1036877ca867da7595e8cef6e2",
"text": "A two-process theory of human information processing is proposed and applied to detection, search, and attention phenomena. Automatic processing is activation of a learned sequence of elements in long-term memory that is initiated by appropriate inputs and then proceeds automatically—without subject control, without stressing the capacity limitations of the system, and without necessarily demanding attention. Controlled processing is a temporary activation of a sequence of elements that can be set up quickly and easily but requires attention, is capacity-limited (usually serial in nature), and is controlled by the subject. A series of studies using both reaction time and accuracy measures is presented, which traces these concepts in the form of automatic detection and controlled, search through the areas of detection, search, and attention. Results in these areas are shown to arise from common mechanisms. Automatic detection is shown to develop following consistent mapping of stimuli to responses over trials. Controlled search is utilized in varied-mapping paradigms, and in our studies, it takes the form of serial, terminating search. The approach resolves a number of apparent conflicts in the literature.",
"title": ""
},
{
"docid": "ebc107147884d89da4ef04eba2d53a73",
"text": "Twitter sentiment analysis (TSA) has become a hot research topic in recent years. The goal of this task is to discover the attitude or opinion of the tweets, which is typically formulated as a machine learning based text classification problem. Some methods use manually labeled data to train fully supervised models, while others use some noisy labels, such as emoticons and hashtags, for model training. In general, we can only get a limited number of training data for the fully supervised models because it is very labor-intensive and time-consuming to manually label the tweets. As for the models with noisy labels, it is hard for them to achieve satisfactory performance due to the noise in the labels although it is easy to get a large amount of data for training. Hence, the best strategy is to utilize both manually labeled data and noisy labeled data for training. However, how to seamlessly integrate these two different kinds of data into the same learning framework is still a challenge. In this paper, we present a novel model, called emoticon smoothed language model (ESLAM), to handle this challenge. The basic idea is to train a language model based on the manually labeled data, and then use the noisy emoticon data for smoothing. Experiments on real data sets demonstrate that ESLAM can effectively integrate both kinds of data to outperform those methods using only one of them.",
"title": ""
},
{
"docid": "cd71e990546785bd9ba0c89620beb8d2",
"text": "Crime is one of the most predominant and alarming aspects in our society and its prevention is a vital task. Crime analysis is a systematic way of detecting and investigating patterns and trends in crime. In this work, we use various clustering approaches of data mining to analyse the crime data of Tamilnadu. The crime data is extracted from National Crime Records Bureau (NCRB) of India. It consists of crime information about six cities namely Chennai, Coimbatore, Salem, Madurai, Thirunelvelli and Thiruchirapalli from the year 2000–2014 with 1760 instances and 9 attributes to represent the instances. K-Means clustering, Agglomerative clustering and Density Based Spatial Clustering with Noise (DBSCAN) algorithms are used to cluster crime activities based on some predefined cases and the results of these clustering are compared to find the best suitable clustering algorithm for crime detection. The result of K-Means clustering algorithm is visualized using Google Map for interactive and easy understanding. The K-Nearest Neighbor (KNN) classification is used for crime prediction. The performance of each clustering algorithms are evaluated using the metrics such as precision, recall and F-measure, and the results are compared. This work helps the law enforcement agencies to predict and detect crimes in Tamilnadu with improved accuracy and thus reduces the crime rate.",
"title": ""
},
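The pipeline described above maps naturally onto scikit-learn. The sketch below runs the three clustering algorithms and a KNN classifier on synthetic stand-in data, since the NCRB records themselves are not included here; the column names and parameter values are illustrative assumptions, not the study's settings.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans, AgglomerativeClustering, DBSCAN
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the crime records (the real study used ~1760 instances,
# 9 attributes and six Tamil Nadu cities); the feature columns are invented.
rng = np.random.default_rng(0)
cities = ["Chennai", "Coimbatore", "Salem", "Madurai", "Tirunelveli", "Tiruchirappalli"]
df = pd.DataFrame({
    "city": rng.choice(cities, 600),
    "murder": rng.poisson(20, 600),
    "theft": rng.poisson(300, 600),
    "robbery": rng.poisson(40, 600),
    "riots": rng.poisson(15, 600),
})
X = StandardScaler().fit_transform(df[["murder", "theft", "robbery", "riots"]])

# Compare the three clustering approaches used in the paper.
for model in (KMeans(n_clusters=6, n_init=10, random_state=0),
              AgglomerativeClustering(n_clusters=6),
              DBSCAN(eps=0.8, min_samples=5)):
    labels = model.fit_predict(X)
    print(type(model).__name__, "-> clusters found:", len(set(labels) - {-1}))

# KNN classifier as the prediction step.
X_tr, X_te, y_tr, y_te = train_test_split(X, df["city"], test_size=0.3, random_state=0)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print("KNN accuracy on the synthetic data:", round(knn.score(X_te, y_te), 3))
```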
{
"docid": "a531694dba7fc479b43d0725bc68de15",
"text": "This paper gives an introduction to the essential challenges of software engineering and requirements that software has to fulfill in the domain of automation. Besides, the functional characteristics, specific constraints and circumstances are considered for deriving requirements concerning usability, the technical process, the automation functions, used platform and the well-established models, which are described in detail. On the other hand, challenges result from the circumstances at different points in the single phases of the life cycle of the automated system. The requirements for life-cycle-management, tools and the changeability during runtime are described in detail.",
"title": ""
},
{
"docid": "27745116e5c05802bda2bc6dc548cce6",
"text": "Recently, many researchers have attempted to classify Facial Attributes (FAs) by representing characteristics of FAs such as attractiveness, age, smiling and so on. In this context, recent studies have demonstrated that visual FAs are a strong background for many applications such as face verification, face search and so on. However, Facial Attribute Classification (FAC) in a wide range of attributes based on the regression representation -predicting of FAs as real-valued labelsis still a significant challenge in computer vision and psychology. In this paper, a regression model formulation is proposed for FAC in a wide range of FAs (e.g. 73 FAs). The proposed method accommodates real-valued scores to the probability of what percentage of the given FAs is present in the input image. To this end, two simultaneous dictionary learning methods are proposed to learn the regression and identity feature dictionaries simultaneously. Accordingly, a multi-level feature extraction is proposed for FAC. Then, four regression classification methods are proposed using a regression model formulated based on dictionary learning, SRC and CRC. Convincing results are",
"title": ""
},
{
"docid": "370054a58b8f50719106508b138bd095",
"text": "In-network aggregation has been proposed as one method for reducing energy consumption in sensor networks. In this paper, we explore two ideas related to further reducing energy consumption in the context of in-network aggregation. The first is by influencing the construction of the routing trees for sensor networks with the goal of reducing the size of transmitted data. To this end, we propose a group-aware network configuration method that “clusters” along the same path sensor nodes that belong to the same group. The second idea involves imposing a hierarchy of output filters on the sensor network with the goal of both reducing the size of transmitted data and minimizing the number of transmitted messages. More specifically, we propose a framework to use temporal coherency tolerances in conjunction with in-network aggregation to save energy at the sensor nodes while maintaining specified quality of data. These tolerances are based on user preferences or can be dictated by the network in cases where the network cannot support the current tolerance level. Our framework, called TiNA, works on top of existing in-network aggregation schemes. We evaluate experimentally our proposed schemes in the context of existing in-network aggregation schemes. We present experimental results measuring energy consumption, response time, and quality of data for Group-By queries. Overall, our schemes provide significant energy savings with respect to communication and a negligible drop in quality of data.",
"title": ""
},
{
"docid": "6be2ecf9323b04c5e93276c9a4ca4b96",
"text": "A printed wide-slot antenna for wideband applications is proposed and experimentally investigated in this communication. A modified L-shaped microstrip line is used to excite the square slot. It consists of a horizontal line, a square patch, and a vertical line. For comparison, a simple L-shaped feed structure with the same line width is used as a reference geometry. The reference antenna exhibits dual resonance (lower resonant frequency <i>f</i><sub>1</sub>, upper resonant frequency <i>f</i><sub>2</sub>). When the square patch is embedded in the middle of the L-shaped line, <i>f</i><sub>1</sub> decreases, <i>f</i><sub>2</sub> remains unchanged, and a new resonance mode is formed between <i>f</i><sub>1</sub> and <i>f</i><sub>2</sub> . Moreover, if the size of the square patch is increased, an additional (fourth) resonance mode is formed above <i>f</i><sub>2</sub>. Thus, the bandwidth of a slot antenna is easily enhanced. The measured results indicate that this structure possesses a wide impedance bandwidth of 118.4%, which is nearly three times that of the reference antenna. Also, a stable radiation pattern is observed inside the operating bandwidth. The gain variation is found to be less than 1.7 dB.",
"title": ""
},
{
"docid": "0a97c254e5218637235a7e23597f572b",
"text": "We investigate the design of a reputation system for decentralized unstructured P2P networks like Gnutella. Having reliable reputation information about peers can form the basis of an incentive system and can guide peers in their decision making (e.g., who to download a file from). The reputation system uses objective criteria to track each peer's contribution in the system and allows peers to store their reputations locally. Reputation are computed using either of the two schemes, debit-credit reputation computation (DCRC) and credit-only reputation computation (CORC). Using a reputation computation agent (RCA), we design a public key based mechanism that periodically updates the peer reputations in a secure, light-weight, and partially distributed manner. We evaluate using simulations the performance tradeoffs inherent in the design of our system.",
"title": ""
},
{
"docid": "328d2b9a5786729245f18195f36ca75c",
"text": "As CMOS technology is scaled down and adopted for many RF and millimeter-wave radio systems, design of T/R switches in CMOS has received considerable attention. Many T/R switches designed in 0.5 ¿m 65 nm CMOS processes have been reported. Table 4 summarizes these T/R switches. Some of them have become great candidates for WLAN and UWB radios. However, none of them met the requirements of mobile cellular and WPAN 60-GHz radios. CMOS device innovations and novel ideas such as artificial dielectric strips and bandgap structures may provide a comprehensive solution to the challenges of design of T/R switches for mobile cellular and 60-GHz radios.",
"title": ""
},
{
"docid": "896dc1862adba0ad504116ba5a0de0b9",
"text": "We present the SnapNet, a system that provides accurate real-time map matching for cellular-based trajectories. Such coarse-grained trajectories introduce new challenges to map matching including (1) input locations that are far from the actual road segment (errors in the orders of kilometers), (2) back-and-forth transitions, and (3) highly sparse input data. SnapNet addresses these challenges by applying extensive preprocessing steps to remove the noisy locations and to handle the data sparseness. At the core of SnapNet is a novel incremental HMM algorithm that combines digital map hints and a number of heuristics to reduce the noise and provide real-time estimation. Evaluation of SnapNet in different cities covering more than 100km distance shows that it can achieve more than 90% accuracy under noisy coarse-grained input location estimates. This maps to over 97% and 34% enhancement in precision and recall respectively when compared to traditional HMM map matching algorithms. Moreover, SnapNet has a low latency of 1.2ms per location estimate.",
"title": ""
},
{
"docid": "30938389f71443136d036a95e465f0ac",
"text": "With the development of autonomous driving, offline testing remains an important process allowing low-cost and efficient validation of vehicle performance and vehicle control algorithms in multiple virtual scenarios. This paper aims to propose a novel simulation platform with hardware in the loop (HIL). This platform comprises of four layers: the vehicle simulation layer, the virtual sensors layer, the virtual environment layer and the Electronic Control Unit (ECU) layer for hardware control. Our platform has attained multiple capabilities: (1) it enables the construction and simulation of kinematic car models, various sensors and virtual testing fields; (2) it performs a closed-loop evaluation of scene perception, path planning, decision-making and vehicle control algorithms, whilst also having multi-agent interaction system; (3) it further enables rapid migrations of control and decision-making algorithms from the virtual environment to real self-driving cars. In order to verify the effectiveness of our simulation platform, several experiments have been performed with self-defined car models in virtual scenarios of a public road and an open parking lot and the results are substantial.",
"title": ""
},
{
"docid": "1ff4d4588826459f1d8d200d658b9907",
"text": "BACKGROUND\nHealth promotion organizations are increasingly embracing social media technologies to engage end users in a more interactive way and to widely disseminate their messages with the aim of improving health outcomes. However, such technologies are still in their early stages of development and, thus, evidence of their efficacy is limited.\n\n\nOBJECTIVE\nThe study aimed to provide a current overview of the evidence surrounding consumer-use social media and mobile software apps for health promotion interventions, with a particular focus on the Australian context and on health promotion targeted toward an Indigenous audience. Specifically, our research questions were: (1) What is the peer-reviewed evidence of benefit for social media and mobile technologies used in health promotion, intervention, self-management, and health service delivery, with regard to smoking cessation, sexual health, and otitis media? and (2) What social media and mobile software have been used in Indigenous-focused health promotion interventions in Australia with respect to smoking cessation, sexual health, or otitis media, and what is the evidence of their effectiveness and benefit?\n\n\nMETHODS\nWe conducted a scoping study of peer-reviewed evidence for the effectiveness of social media and mobile technologies in health promotion (globally) with respect to smoking cessation, sexual health, and otitis media. A scoping review was also conducted for Australian uses of social media to reach Indigenous Australians and mobile apps produced by Australian health bodies, again with respect to these three areas.\n\n\nRESULTS\nThe review identified 17 intervention studies and seven systematic reviews that met inclusion criteria, which showed limited evidence of benefit from these interventions. We also found five Australian projects with significant social media health components targeting the Indigenous Australian population for health promotion purposes, and four mobile software apps that met inclusion criteria. No evidence of benefit was found for these projects.\n\n\nCONCLUSIONS\nAlthough social media technologies have the unique capacity to reach Indigenous Australians as well as other underserved populations because of their wide and instant disseminability, evidence of their capacity to do so is limited. Current interventions are neither evidence-based nor widely adopted. Health promotion organizations need to gain a more thorough understanding of their technologies, who engages with them, why they engage with them, and how, in order to be able to create successful social media projects.",
"title": ""
},
{
"docid": "40cf1e5ecb0e79f466c65f8eaff77cb2",
"text": "Spiral patterns on the surface of a sphere have been seen in laboratory experiments and in numerical simulations of reaction–diffusion equations and convection. We classify the possible symmetries of spirals on spheres, which are quite different from the planar case since spirals typically have tips at opposite points on the sphere. We concentrate on the case where the system has an additional sign-change symmetry, in which case the resulting spiral patterns do not rotate. Spiral patterns arise through a mode interaction between spherical harmonics degree l and l+1. Using the methods of equivariant bifurcation theory, possible symmetry types are determined for each l. For small values of l, the centre manifold equations are constructed and spiral solutions are found explicitly. Bifurcation diagrams are obtained showing how spiral states can appear at secondary bifurcations from primary solutions, or tertiary bifurcations. The results are consistent with numerical simulations of a model pattern-forming system.",
"title": ""
},
{
"docid": "59ba2709e4f3653dcbd3a4c0126ceae1",
"text": "Processing-in-memory (PIM) is a promising solution to address the \"memory wall\" challenges for future computer systems. Prior proposed PIM architectures put additional computation logic in or near memory. The emerging metal-oxide resistive random access memory (ReRAM) has showed its potential to be used for main memory. Moreover, with its crossbar array structure, ReRAM can perform matrix-vector multiplication efficiently, and has been widely studied to accelerate neural network (NN) applications. In this work, we propose a novel PIM architecture, called PRIME, to accelerate NN applications in ReRAM based main memory. In PRIME, a portion of ReRAM crossbar arrays can be configured as accelerators for NN applications or as normal memory for a larger memory space. We provide microarchitecture and circuit designs to enable the morphable functions with an insignificant area overhead. We also design a software/hardware interface for software developers to implement various NNs on PRIME. Benefiting from both the PIM architecture and the efficiency of using ReRAM for NN computation, PRIME distinguishes itself from prior work on NN acceleration, with significant performance improvement and energy saving. Our experimental results show that, compared with a state-of-the-art neural processing unit design, PRIME improves the performance by ~2360× and the energy consumption by ~895×, across the evaluated machine learning benchmarks.",
"title": ""
},
{
"docid": "3ef36b8675faf131da6cbc4d94f0067e",
"text": "The staggering amount of streaming time series coming from the real world calls for more efficient and effective online modeling solution. For time series modeling, most existing works make some unrealistic assumptions such as the input data is of fixed length or well aligned, which requires extra effort on segmentation or normalization of the raw streaming data. Although some literature claim their approaches to be invariant to data length and misalignment, they are too time-consuming to model a streaming time series in an online manner. We propose a novel and more practical online modeling and classification scheme, DDE-MGM, which does not make any assumptions on the time series while maintaining high efficiency and state-of-the-art performance. The derivative delay embedding (DDE) is developed to incrementally transform time series to the embedding space, where the intrinsic characteristics of data is preserved as recursive patterns regardless of the stream length and misalignment. Then, a non-parametric Markov geographic model (MGM) is proposed to both model and classify the pattern in an online manner. Experimental results demonstrate the effectiveness and superior classification accuracy of the proposed DDE-MGM in an online setting as compared to the state-of-the-art.",
"title": ""
},
{
"docid": "2b101f1d43f2e2c657b50054b7188e99",
"text": "Programs that use animations or visualizations attract student interest and offer feedback that can enhance different learning styles as students work to master programming and problem solving. In this paper we report on several CS 1 assignments we have used successfully at Duke University to introduce or reinforce control constructs, elementary data structures, and object-based programming. All the assignments involve either animations by which we mean graphical displays that evolve over time, or visualizations which include static display of graphical images. The animations do not require extensive programming by students since students use classes and code that we provide to hide much of the complexity that drives the animations. In addition to generating enthusiasm, we believe the animations assist with mastering the debugging process.",
"title": ""
},
{
"docid": "61c73842d25b54f24ff974b439d55c64",
"text": "Many electrical vehicles have been developed recently, and one of them is the vehicle type with the self-balancing capability. Portability also one of issue related to the development of electric vehicles. This paper presents one wheeled self-balancing electric vehicle namely PENS-Wheel. Since it only consists of one motor as its actuator, it becomes more portable than any other self-balancing vehicle types. This paper discusses on the implementation of Kalman filter for filtering the tilt sensor used by the self-balancing controller, mechanical design, and fabrication of the vehicle. The vehicle is designed based on the principle of the inverted pendulum by utilizing motor's torque on the wheel to maintain its upright position. The sensor system uses IMU which combine accelerometer and gyroscope data to get the accurate pitch angle of the vehicle. The paper presents the effects of Kalman filter parameters including noise variance of the accelerometer, noise variance of the gyroscope, and the measurement noise to the response of the sensor output. Finally, we present the result of the proposed filter and compare it with proprietary filter algorithm from InvenSense, Inc. running on Digital Motion Processor (DMP) inside the MPU6050 chip. The result of the filter algorithm implemented in the vehicle shows that it is capable in delivering comparable performance with the proprietary one.",
"title": ""
},
{
"docid": "3888dd754c9f7607d7a4cc2f4a436aac",
"text": "We propose a distributed algorithm to estimate the 3D trajectories of multiple cooperative robots from relative pose measurements. Our approach leverages recent results [1] which show that the maximum likelihood trajectory is well approximated by a sequence of two quadratic subproblems. The main contribution of the present work is to show that these subproblems can be solved in a distributed manner, using the distributed Gauss-Seidel (DGS) algorithm. Our approach has several advantages. It requires minimal information exchange, which is beneficial in presence of communication and privacy constraints. It has an anytime flavor: after few iterations the trajectory estimates are already accurate, and they asymptotically convergence to the centralized estimate. The DGS approach scales well to large teams, and it has a straightforward implementation. We test the approach in simulations and field tests, demonstrating its advantages over related techniques.",
"title": ""
},
{
"docid": "d9160f2cc337de729af34562d77a042e",
"text": "Ontologies proliferate with the progress of the Semantic Web. Ontology matching is an important way of establishing interoperability between (Semantic) Web applications that use different but related ontologies. Due to their sizes and monolithic nature, large ontologies regarding real world domains bring a new challenge to the state of the art ontology matching technology. In this paper, we propose a divide-and-conquer approach to matching large ontologies. We develop a structure-based partitioning algorithm, which partitions entities of each ontology into a set of small clusters and constructs blocks by assigning RDF Sentences to those clusters. Then, the blocks from different ontologies are matched based on precalculated anchors, and the block mappings holding high similarities are selected. Finally, two powerful matchers, V-DOC and GMO, are employed to discover alignments in the block mappings. Comprehensive evaluation on both synthetic and real world data sets demonstrates that our approach both solves the scalability problem and achieves good precision and recall with significant reduction of execution time. 2008 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "05a93bfe8e245edbe2438a0dc7025301",
"text": "Statistical machine translation (SMT) treats the translation of natural language as a machine learning problem. By examining many samples of human-produced translation, SMT algorithms automatically learn how to translate. SMT has made tremendous strides in less than two decades, and many popular techniques have only emerged within the last few years. This survey presents a tutorial overview of state-of-the-art SMT at the beginning of 2007. We begin with the context of the current research, and then move to a formal problem description and an overview of the four main subproblems: translational equivalence modeling, mathematical modeling, parameter estimation, and decoding. Along the way, we present a taxonomy of some different approaches within these areas. We conclude with an overview of evaluation and notes on future directions. This is a revised draft of a paper currently under review. The contents may change in later drafts. Please send any comments, questions, or corrections to [email protected]. Feel free to cite as University of Maryland technical report UMIACS-TR-2006-47. The support of this research by the GALE program of the Defense Advanced Research Projects Agency, Contract No. HR0011-06-2-0001, ONR MURI Contract FCPO.810548265, and Department of Defense contract RD-02-5700 is acknowledged.",
"title": ""
}
] | scidocsrr |
ca2584c9be2200d80892a7708347c83b | An Investigation of the Role of Dependency in Predicting Continuance Intention to Use Ubiquitous Media Systems: Combining a Media System Perspective with Expectation-Confirmation Theories | [
{
"docid": "e83e6284d3c9cf8fddf972a25d869a1b",
"text": "Internet-based learning systems are being used in many universities and firms but their adoption requires a solid understanding of the user acceptance processes. Our effort used an extended version of the technology acceptance model (TAM), including cognitive absorption, in a formal empirical study to explain the acceptance of such systems. It was intended to provide insight for improving the assessment of on-line learning systems and for enhancing the underlying system itself. The work involved the examination of the proposed model variables for Internet-based learning systems acceptance. Using an on-line learning system as the target technology, assessment of the psychometric properties of the scales proved acceptable and confirmatory factor analysis supported the proposed model structure. A partial-least-squares structural modeling approach was used to evaluate the explanatory power and causal links of the model. Overall, the results provided support for the model as explaining acceptance of an on-line learning system and for cognitive absorption as a variable that influences TAM variables. # 2004 Elsevier B.V. All rights reserved.",
"title": ""
}
] | [
{
"docid": "1b82e1fa8619480ba194c83c5370da5d",
"text": "This study presents an extended technology acceptance model (TAM) that integrates innovation diffusion theory, perceived risk and cost into the TAM to investigate what determines user mobile commerce (MC) acceptance. The proposed model was empirically tested using data collected from a survey of MC consumers. The structural equation modeling technique was used to evaluate the causal model and confirmatory factor analysis was performed to examine the reliability and validity of the measurement model. Our findings indicated that all variables except perceived ease of use significantly affected users’ behavioral intent. Among them, the compatibility had the most significant influence. Furthermore, a striking, and somewhat puzzling finding was the positive influence of perceived risk on behavioral intention to use. The implication of this work to both researchers and practitioners is discussed. # 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "ca8aa3e930fd36a16ac36546a25a1fde",
"text": "Accurate State-of-Charge (SOC) estimation of Li-ion batteries is essential for effective battery control and energy management of electric and hybrid electric vehicles. To this end, first, the battery is modelled by an OCV-R-RC equivalent circuit. Then, a dual Bayesian estimation scheme is developed-The battery model parameters are identified online and fed to the SOC estimator, the output of which is then fed back to the parameter identifier. Both parameter identification and SOC estimation are treated in a Bayesian framework. The square-root recursive least-squares estimator and the extended Kalman-Bucy filter are systematically paired up for the first time in the battery management literature to tackle the SOC estimation problem. The proposed method is finally compared with the convectional Coulomb counting method. The results indicate that the proposed method significantly outperforms the Coulomb counting method in terms of accuracy and robustness.",
"title": ""
},
{
"docid": "e3de7dc210e780e1c460a505628ea4ed",
"text": "We present a machine learning technique for driving 3D facial animation by audio input in real time and with low latency. Our deep neural network learns a mapping from input waveforms to the 3D vertex coordinates of a face model, and simultaneously discovers a compact, latent code that disambiguates the variations in facial expression that cannot be explained by the audio alone. During inference, the latent code can be used as an intuitive control for the emotional state of the face puppet.\n We train our network with 3--5 minutes of high-quality animation data obtained using traditional, vision-based performance capture methods. Even though our primary goal is to model the speaking style of a single actor, our model yields reasonable results even when driven with audio from other speakers with different gender, accent, or language, as we demonstrate with a user study. The results are applicable to in-game dialogue, low-cost localization, virtual reality avatars, and telepresence.",
"title": ""
},
{
"docid": "1262ce9e36e4208a1d8e641e5078e083",
"text": "D its fundamental role in legitimizing the modern state system, nationalism has rarely been linked to the outbreak of political violence in the recent literature on ethnic conflict and civil war. to a large extent, this is because the state is absent from many conventional theories of ethnic conflict. indeed, some studies analyze conflict between ethnic groups under conditions of state failure, thus making the absence of the state the very core of the causal argument. others assume that the state is ethnically neutral and try to relate ethnodemographic measures, such as fractionalization and polarization, to civil war. in contrast to these approaches, we analyze the state as an institution that is captured to different degrees by representatives of particular ethnic communities, and thus we conceive of ethnic wars as the result of competing ethnonationalist claims to state power. While our work relates to a rich research tradition that links the causes of such conflicts to the mobilization of ethnic minorities, it also goes beyond this tradition by introducing a new data set that addresses some of the shortcomings of this tradition. our analysis is based on the Ethnic power relations data set (epr), which covers all politically relevant ethnic groups and their access to power around the world from 1946 through 2005. this data set improves significantly on the widely used minorities at risk data set, which restricts its sample to mobilized",
"title": ""
},
{
"docid": "2dd42cce112c61950b96754bb7b4df10",
"text": "Hierarchical methods have been widely explored for object recognition, which is a critical component of scene understanding. However, few existing works are able to model the contextual information (e.g., objects co-occurrence) explicitly within a single coherent framework for scene understanding. Towards this goal, in this paper we propose a novel three-level (superpixel level, object level and scene level) hierarchical model to address the scene categorization problem. Our proposed model is a coherent probabilistic graphical model that captures the object co-occurrence information for scene understanding with a probabilistic chain structure. The efficacy of the proposed model is demonstrated by conducting experiments on the LabelMe dataset.",
"title": ""
},
{
"docid": "385c7c16af40ae13b965938ac3bce34c",
"text": "The information age has brought a deluge of data. Much of this is in text form, insurmountable in scope for humans and incomprehensible in structure for computers. Text mining is an expanding field of research that seeks to utilize the information contained in vast document collections. General data mining methods based on machine learning face challenges with the scale of text data, posing a need for scalable text mining methods. This thesis proposes a solution to scalable text mining: generative models combined with sparse computation. A unifying formalization for generative text models is defined, bringing together research traditions that have used formally equivalent models, but ignored parallel developments. This framework allows the use of methods developed in different processing tasks such as retrieval and classification, yielding effective solutions across different text mining tasks. Sparse computation using inverted indices is proposed for inference on probabilistic models. This reduces the computational complexity of the common text mining operations according to sparsity, yielding probabilistic models with the scalability of modern search engines. The proposed combination provides sparse generative models: a solution for text mining that is general, effective, and scalable. Extensive experimentation on text classification and ranked retrieval datasets are conducted, showing that the proposed solution matches or outperforms the leading task-specific methods in effectiveness, with a order of magnitude decrease in classification times for Wikipedia article categorization with a million classes. The developed methods were further applied in two 2014 Kaggle data mining prize competitions with over a hundred competing teams, earning first and second places.",
"title": ""
},
{
"docid": "a1cd4a4ce70c9c8672eee5ffc085bf63",
"text": "Ternary logic is a promising alternative to conventional binary logic, since it is possible to achieve simplicity and energy efficiency due to the reduced circuit overhead. In this paper, a ternary magnitude comparator design based on Carbon Nanotube Field Effect Transistors (CNFETs) is presented. This design eliminates the usage of complex ternary decoder which is a part of existing designs. Elimination of decoder results in reduction of delay and power. Simulations of proposed and existing designs are done on HSPICE and results proves that the proposed 1-bit comparator consumes 81% less power and shows delay advantage of 41.6% compared to existing design. Further a methodology to extend the 1-bit comparator design to n-bit comparator design is also presented.",
"title": ""
},
{
"docid": "0c4f02b3b361d60da1aec0f0c100dcf9",
"text": "Architecture Compliance Checking (ACC) is an approach to verify the conformance of implemented program code to high-level models of architectural design. ACC is used to prevent architectural erosion during the development and evolution of a software system. Static ACC, based on static software analysis techniques, focuses on the modular architecture and especially on rules constraining the modular elements. A semantically rich modular architecture (SRMA) is expressive and may contain modules with different semantics, like layers and subsystems, constrained by rules of different types. To check the conformance to an SRMA, ACC-tools should support the module and rule types used by the architect. This paper presents requirements regarding SRMA support and an inventory of common module and rule types, on which basis eight commercial and non-commercial tools were tested. The test results show large differences between the tools, but all could improve their support of SRMA, what might contribute to the adoption of ACC in practice.",
"title": ""
},
{
"docid": "e1d3708e826499d7f2e656b66303734f",
"text": "Entity Resolution constitutes a core task for data integration that, due to its quadratic complexity, typically scales to large datasets through blocking methods. These can be configured in two ways. The schema-based configuration relies on schema information in order to select signatures of high distinctiveness and low noise, while the schema-agnostic one treats every token from all attribute values as a signature. The latter approach has significant potential, as it requires no fine-tuning by human experts and it applies to heterogeneous data. Yet, there is no systematic study on its relative performance with respect to the schema-based configuration. This work covers this gap by comparing analytically the two configurations in terms of effectiveness, time efficiency and scalability. We apply them to 9 established blocking methods and to 11 benchmarks of structured data. We provide valuable insights into the internal functionality of the blocking methods with the help of a novel taxonomy. Our studies reveal that the schema-agnostic configuration offers unsupervised and robust definition of blocking keys under versatile settings, trading a higher computational cost for a consistently higher recall than the schema-based one. It also enables the use of state-of-the-art blocking methods without schema knowledge.",
"title": ""
},
{
"docid": "81d4baaf6a22a7a480e4568ae05de1db",
"text": "Procedural textures are normally generated from mathematical models with parameters carefully selected by experienced users. However, for naive users, the intuitive way to obtain a desired texture is to provide semantic descriptions such as ”regular,” ”lacelike,” and ”repetitive” and then a procedural model with proper parameters will be automatically suggested to generate the corresponding textures. By contrast, it is less practical for users to learn mathematical models and tune parameters based on multiple examinations of large numbers of generated textures. In this study, we propose a novel framework that generates procedural textures according to user-defined semantic descriptions, and we establish a mapping between procedural models and semantic texture descriptions. First, based on a vocabulary of semantic attributes collected from psychophysical experiments, a multi-label learning method is employed to annotate a large number of textures with semantic attributes to form a semantic procedural texture dataset. Then, we derive a low dimensional semantic space in which the semantic descriptions can be separated from one other. Finally, given a set of semantic descriptions, the diverse properties of the samples in the semantic space can lead the framework to find an appropriate generation model that uses appropriate parameters to produce a desired texture. The experimental results show that the proposed framework is effective and that the generated textures closely correlate with the input semantic descriptions.",
"title": ""
},
{
"docid": "b4a8541c2870ea3d91819c0c0de68ad3",
"text": "The paper will describe various types of security issues which include confidentality, integrity and availability of data. There exists various threats to security issues traffic analysis, snooping, spoofing, denial of service attack etc. The asymmetric key encryption techniques may provide a higher level of security but compared to the symmetric key encryption Although we have existing techniques symmetric and assymetric key cryptography methods but there exists security concerns. A brief description of proposed framework is defined which uses the random combination of public and private keys. The mechanisms includes: Integrity, Availability, Authentication, Nonrepudiation, Confidentiality and Access control which is achieved by private-private key model as the user is restricted both at sender and reciever end which is restricted in other models. A review of all these systems is described in this paper.",
"title": ""
},
{
"docid": "9edf40bfd6875591543ff46e5e211c74",
"text": "The brain is thought to sense gut stimuli only via the passive release of hormones. This is because no connection has been described between the vagus and the putative gut epithelial sensor cell—the enteroendocrine cell. However, these electrically excitable cells contain several features of epithelial transducers. Using a mouse model, we found that enteroendocrine cells synapse with vagal neurons to transduce gut luminal signals in milliseconds by using glutamate as a neurotransmitter. These synaptically connected enteroendocrine cells are referred to henceforth as neuropod cells. The neuroepithelial circuit they form connects the intestinal lumen to the brainstem in one synapse, opening a physical conduit for the brain to sense gut stimuli with the temporal precision and topographical resolution of a synapse.",
"title": ""
},
{
"docid": "ede1f31a32e59d29ee08c64c1a6ed5f7",
"text": "There are different approaches to the problem of assigning each word of a text with a parts-of-speech tag, which is known as Part-Of-Speech (POS) tagging. In this paper we compare the performance of a few POS tagging techniques for Bangla language, e.g. statistical approach (n-gram, HMM) and transformation based approach (Brill’s tagger). A supervised POS tagging approach requires a large amount of annotated training corpus to tag properly. At this initial stage of POS-tagging for Bangla, we have very limited resource of annotated corpus. We tried to see which technique maximizes the performance with this limited resource. We also checked the performance for English and tried to conclude how these techniques might perform if we can manage a substantial amount of annotated corpus.",
"title": ""
},
{
"docid": "6fa6a26b351c45ac5f33f565bc9c01e8",
"text": "Transfer learning, or inductive transfer, refers to the transfer of knowledge from a source task to a target task. In the context of convolutional neural networks (CNNs), transfer learning can be implemented by transplanting the learned feature layers from one CNN (derived from the source task) to initialize another (for the target task). Previous research has shown that the choice of the source CNN impacts the performance of the target task. In the current literature, there is no principled way for selecting a source CNN for a given target task despite the increasing availability of pre-trained source CNNs. In this paper we investigate the possibility of automatically ranking source CNNs prior to utilizing them for a target task. In particular, we present an information theoretic framework to understand the source-target relationship and use this as a basis to derive an approach to automatically rank source CNNs in an efficient, zero-shot manner. The practical utility of the approach is thoroughly evaluated using the PlacesMIT dataset, MNIST dataset and a real-world MRI database. Experimental results demonstrate the efficacy of the proposed ranking method for transfer learning.",
"title": ""
},
{
"docid": "7bce92a72a19aef0079651c805883eb5",
"text": "Highly realistic virtual human models are rapidly becoming commonplace in computer graphics. These models, often represented by complex shape and requiring labor-intensive process, challenge the problem of automatic modeling. This paper studies the problem and solutions to automatic modeling of animatable virtual humans. Methods for capturing the shape of real people, parameterization techniques for modeling static shape (the variety of human body shapes) and dynamic shape (how the body shape changes as it moves) of virtual humans are classified, summarized and compared. Finally, methods for clothed virtual humans are reviewed.",
"title": ""
},
{
"docid": "9a82f33d84cd622ccd66a731fc9755de",
"text": "To discover relationships and associations between pairs of variables in large data sets have become one of the most significant challenges for bioinformatics scientists. To tackle this problem, maximal information coefficient (MIC) is widely applied as a measure of the linear or non-linear association between two variables. To improve the performance of MIC calculation, in this work we present MIC++, a parallel approach based on the heterogeneous accelerators including Graphic Processing Unit (GPU) and Field Programmable Gate Array (FPGA) engines, focusing on both coarse-grained and fine-grained parallelism. As the evaluation of MIC++, we have demonstrated the performance on the state-of-the-art GPU accelerators and the FPGA-based accelerators. Preliminary estimated results show that the proposed parallel implementation can significantly achieve more than 6X-14X speedup using GPU, and 4X-13X using FPGA-based accelerators.",
"title": ""
},
{
"docid": "b0d9c5716052e9cfe9d61d20e5647c8c",
"text": "We propose Efficient Neural Architecture Search (ENAS), a faster and less expensive approach to automated model design than previous methods. In ENAS, a controller learns to discover neural network architectures by searching for an optimal path within a larger model. The controller is trained with policy gradient to select a path that maximizes the expected reward on the validation set. Meanwhile the model corresponding to the selected path is trained to minimize the cross entropy loss. On the Penn Treebank dataset, ENAS can discover a novel architecture thats achieves a test perplexity of 57.8, which is state-of-the-art among automatic model design methods on Penn Treebank. On the CIFAR-10 dataset, ENAS can design novel architectures that achieve a test error of 2.89%, close to the 2.65% achieved by standard NAS (Zoph et al., 2017). Most importantly, our experiments show that ENAS is more than 10x faster and 100x less resource-demanding than NAS.",
"title": ""
},
{
"docid": "ce53bf5131c125fdca2086e28ccca9d7",
"text": "When a firm practices conservative accounting, changes in the amount of its investments can affect the quality of its earnings. Growth in investment reduces reported earnings and creates reserves. Reducing investment releases those reserves, increasing earnings. If the change in investment is temporary, then current earnings is temporarily depressed or inflated, and thus is not a good indicator of future earnings. This study develops diagnostic measures of this joint effect of investment and conservative accounting. We find that these measures forecast differences in future return on net operating assets relative to current return on net operating assets. Moreover, these measures also forecast stock returns-indicating that investors do not appreciate how conservatism and changes in investment combine to raise questions about the quality of reported earnings.",
"title": ""
},
{
"docid": "6e4f71c411a57e3f705dbd0979c118b1",
"text": "BACKGROUND\nStress perception is highly subjective, and so the complexity of nursing practice may result in variation between nurses in their identification of sources of stress, especially when the workplace and roles of nurses are changing, as is currently occurring in the United Kingdom health service. This could have implications for measures being introduced to address problems of stress in nursing.\n\n\nAIMS\nTo identify nurses' perceptions of workplace stress, consider the potential effectiveness of initiatives to reduce distress, and identify directions for future research.\n\n\nMETHOD\nA literature search from January 1985 to April 2003 was conducted using the key words nursing, stress, distress, stress management, job satisfaction, staff turnover and coping to identify research on sources of stress in adult and child care nursing. Recent (post-1997) United Kingdom Department of Health documents and literature about the views of practitioners was also consulted.\n\n\nFINDINGS\nWorkload, leadership/management style, professional conflict and emotional cost of caring have been the main sources of distress for nurses for many years, but there is disagreement as to the magnitude of their impact. Lack of reward and shiftworking may also now be displacing some of the other issues in order of ranking. Organizational interventions are targeted at most but not all of these sources, and their effectiveness is likely to be limited, at least in the short to medium term. Individuals must be supported better, but this is hindered by lack of understanding of how sources of stress vary between different practice areas, lack of predictive power of assessment tools, and a lack of understanding of how personal and workplace factors interact.\n\n\nCONCLUSIONS\nStress intervention measures should focus on stress prevention for individuals as well as tackling organizational issues. Achieving this will require further comparative studies, and new tools to evaluate the intensity of individual distress.",
"title": ""
},
{
"docid": "517a7833e209403cb3db6f3e58c5f3e4",
"text": "Nowadays ontologies present a growing interest in Data Fusion applications. As a matter of fact, the ontologies are seen as a semantic tool for describing and reasoning about sensor data, objects, relations and general domain theories. In addition, uncertainty is perhaps one of the most important characteristics of the data and information handled by Data Fusion. However, the fundamental nature of ontologies implies that ontologies describe only asserted and veracious facts of the world. Different probabilistic, fuzzy and evidential approaches already exist to fill this gap; this paper recaps the most popular tools. However none of the tools meets exactly our purposes. Therefore, we constructed a Dempster-Shafer ontology that can be imported into any specific domain ontology and that enables us to instantiate it in an uncertain manner. We also developed a Java application that enables reasoning about these uncertain ontological instances.",
"title": ""
}
] | scidocsrr |